Columns: abstract (string, lengths 0–11.1k), authors (string, lengths 9–1.96k), title (string, lengths 4–353), __index_level_0__ (int64, 3–1,000k).
In this paper, we present an opportunistic packet scheduling algorithm based on buffer management (OSBM) over the downlink of a packet cellular network. OSBM is a channel-dependent scheduling algorithm with a provable delay bound. It is able to provide differentiated services to both real-time and non-real-time applications. In particular, these features of OSBM are achieved through a novel buffer management scheme. Since this buffer management scheme does not involve any complex online computation, OSBM is very efficient and easy to implement in an operational environment. We also present a new analytical approach for performance analysis of opportunistic scheduling in wireless networks based on the proposed concept of effective downlink capacity (EDC). This approach adapts the service curve tool for deterministic quality-of-service analysis to the wireless environment, and the concept of EDC serves to bridge the deterministic method and the stochastic nature of the wireless link. Using this approach, an explicit expression of the delay bound of the OSBM algorithm is obtained. Simulation results are presented to demonstrate the effectiveness of the proposed opportunistic scheduling algorithm.
['Junhua Tang', 'Gang Feng', 'Chee Kheong Siew', 'Liren Zhang']
Providing Differentiated Services Over Shared Wireless Downlink Through Buffer Management
542,944
Bio-inspired dynamical processes are able to identify nonlinear features in data. We present a dynamical process model of particle competition in complex networks applied to transductive semi-supervised learning. Particles carry labels and compete for the domination of edges. The process results in sets of edges arranged by label dominance. These sets are analyzed as subnetworks for data classification. Computer simulations show that this model can identify nonlinear data forms in both real and artificial data, including overlapping structures in the data.
['Paulo Roberto Urio', 'Filipe Alves Neto Verri', 'Liang Zhao']
Semi-supervised learning by edge domination in complex networks
605,943
The ever-increasing number of videos on the internet yields data for novel and useful multimedia service applications, but finding the videos that best satisfy a user's needs is becoming challenging. At the same time, new video collection platforms allow users to upload videos enriched with positional metadata when recorded with a GPS-enabled device such as a smartphone. These platforms can thus go beyond the prevalent keyword search and instead take advantage of the positional metadata of videos, e.g., to find videos recorded in a certain area. This information, however, allows for much more interesting queries. In this paper we present Video Route, which allows a user to specify a target route (query) and obtain an approximation of the target route that is piecewise composed of subtrajectories derived from a set of given trajectories. Our approach aims at high approximation accuracy while keeping the number of composed subtrajectories low.
['Tobias Emrich', 'Olivia Hofer', 'Andreas Kolb', 'Johannes Niedermayer', 'Nepumuk Seiler', 'Michael Weiler']
Video route
708,907
['Dimitris Spiliotopoulos', 'Despoina Antonakaki', 'Sotiris Ioannidis', 'Paraskevi Fragopoulou']
Usability Evaluation of Accessible Complex Graphs
859,682
Bronchoscopic interventions are widely performed for the diagnosis and treatment of lung diseases. However, for most endobronchial devices, the lack of a bendable tip restricts their access ability to get into distal bronchi with complex bifurcations. This paper presents the design of a new wire-driven continuum manipulator to help guide these devices. The proposed manipulator is built by assembling miniaturized blocks that are featured with interlocking circular joints. It has the capability of maintaining its integrity when the lengths of actuation wires change due to the shaft flex. It allows the existence of a relatively large central cavity to pass through other instruments and enables two rotational degrees of freedom. All these features make it suitable for procedures where tubular anatomies are involved and the flexible shafts have to be considerably bent in usage, just like bronchoscopic interventions. A kinematic model is built to estimate the relationship between the translations of actuation wires and the manipulator tip position. A scale-up model is produced for evaluation experiments and the results validate the performance of the proposed mechanism.
['Ning Liu', 'Christos Bergeles', 'Guang-Zhong Yang']
Design and analysis of a wire-driven flexible manipulator for bronchoscopic interventions
812,733
The Movement of Things project is an exploration into the qualities and properties of movement. Through a range of exercises, these movements are captured and translated by custom-built software and an autonomous, tiny and wireless motion sensor. A series of Motion Sensing Extensions suggests different approaches to using a motion sensor within various physical environments to capture movement, in order to better understand the materialization of movement and new forms of interaction through movement.
['Andreas Schlegel', 'Cedric Honnet']
Movement of Things Exploring Inertial Motion Sensing When Autonomous, Tiny and Wireless
827,684
We present Program Demultiplexing (PD), an execution paradigm that creates concurrency in sequential programs by "demultiplexing" methods (functions or subroutines). Call sites of a demultiplexed method in the program are associated with handlers that allow the method to be separated from the sequential program and executed on an auxiliary processor. The demultiplexed execution of a method (and its handler) is speculative and occurs when the inputs of the method are (speculatively) available, which is typically far in advance of when the method is actually called in the sequential execution. A trigger, composed of predicates that are based on program counters and memory write addresses, launches the speculative execution of the method on another processor. Our implementation of PD is based on a full-system execution-based chip multiprocessor simulator with software to generate triggers and handlers from an x86 program binary. We evaluate eight integer benchmarks from the SPEC2000 suite (programs written in C with no explicit concurrency or motivation to create concurrency) and achieve a harmonic mean speedup of 1.8x with our implementation of PD.
['Saisanthosh Balakrishnan', 'Gurindar S. Sohi']
Program Demultiplexing: Data-flow based Speculative Parallelization of Methods in Sequential Programs
246,626
Recent Multiple-Input Multiple-Output (MIMO) research demonstrates that real wireless environments exhibit spatially correlated fading: the independent fading assumption does not hold in all cases. Accordingly, spatially correlated MIMO channel models have attracted much attention. The Kronecker product form (KPF) is currently the most widely used MIMO channel model describing spatial correlations. However, its validity in various environments has not been well examined. In this work, first the correlation structure of the KPF is discussed. It is shown that the KPF exhibits a separable correlation structure. Second, it is claimed that this separable correlation structure corresponds to separable scatterers in the physical environment. Using this property, the KPF is examined with two popular scatterer distributions: two-ring models and elliptic models. It is shown that the KPF is feasible for describing two-ring models, but not capable of describing elliptic models. Numerical ray-tracing simulations verify the theoretical results.
['Hui Tong', 'Seyed Alireza Zekavat']
On the Suitable Environments of the Kronecker Product Form in MIMO Channel Modeling
296,900
This study reports on the performance of an on-line evolutionary automatic programming methodology for uncovering technical trading rules for the S&P 500 and Nikkei 225 indices. The system adopts a variable-sized investment strategy based on the strength of the signals produced by the trading rules. Two approaches are explored, one using a single population of rules which is adapted over the lifetime of the data, and another whereby a new population is created for each step across the time series. The results show profitable performance for the trading periods explored, with clear advantages for an adaptive population of rules.
['Ian Dempsey', "Michael O'Neill", 'Anthony Brabazon']
Adaptive Trading With Grammatical Evolution
404,723
Disease and Gene Annotations database (DGA, http://dga.nubic.northwestern.edu) is a collaborative effort aiming to provide a comprehensive and integrative annotation of the human genes in disease network context by integrating computable controlled vocabulary of the Disease Ontology (DO version 3 revision 2510, which has 8043 inherited, developmental and acquired human diseases), NCBI Gene Reference Into Function (GeneRIF) and molecular interaction network (MIN). DGA integrates these resources together using semantic mappings to build an integrative set of disease-to-gene and gene-to-gene relationships with excellent coverage based on current knowledge. DGA is kept current by periodically reparsing DO, GeneRIF, and MINs. DGA provides a user-friendly and interactive web interface system enabling users to efficiently query, download and visualize the DO tree structure and annotations as a tree, a network graph or a tabular list. To facilitate integrative analysis, DGA provides a web service Application Programming Interface for integration with external analytic tools.
['Kai Peng', 'Wei Xu', 'Jianyong Zheng', 'Kegui Huang', 'Huisong Wang', 'Jiansong Tong', 'Zhifeng Lin', 'Jun Liu', 'Wenqing Cheng', 'Dong Fu', 'Pan Du', 'Warren A. Kibbe', 'Simon M. Lin', 'Tian Xia']
The disease and gene annotations (DGA): an annotation resource for human disease
289,711
['Nicolas Lopez', 'Yves Grenier', 'Ivan Bourmeyster']
Low Variance Blind Estimation of the Reverberation Time
770,859
In project management, an important means to reduce risk is to provide adequate contingency reserves of capital (funds), personnel, and time (person-hours) in the estimate of required project resources. Experienced project managers usually provide more accurate estimates of these quantities. Traditional project management methodologies, such as the System Development Life Cycle (SDLC), were developed to handle software and information engineering projects of longer duration and with larger numbers of project members. Many modern projects have short durations and smaller numbers of project members. They tend to follow newer methodologies, such as Agile Development, to adapt to the faster changes of modern technologies and user requirements. For a project manager, gathering information to create an effective project plan is like identifying a collection of problems and solving them. Solving problems needs supporting resources, just as projects need adequate supporting resources. Chang [1] proposed three elements of methods of problem solving with supporting resource capabilities. This paper proposes an approach that uses these resource capabilities to estimate and manage contingency reserves of a modern-day project in software or information engineering.
['Peter H. Chang']
Applying resource capability for planning and managing contingency reserves for software and information engineering projects
502,413
['Byron Georgantopoulos', 'Stelios Piperidis']
Term-based Identification of Sentences for Text Summarisation.
800,473
University hiring, promotion and tenure decisions make researchers’ publication productivity an important issue. This study reports on data about publication productivity of information systems (IS) researchers from 1999 to 2003. We collected information about IS papers published in twelve IS journals during this period. After classification, the most productive individuals and institutions for this sample are identified. We also compared our findings with past research to demonstrate the changes in publication productivity over time. Publication productivity changes somewhat among researchers and institutions.
['Hsieh-Hong Huang', 'Jack Shih-Chieh Hsu']
AN EVALUATION OF PUBLICATION PRODUCTIVITY IN INFORMATION SYSTEMS: 1999 TO 2003
277,056
['Jean-Philippe Draye', 'Guy Chéron', 'Marc Bourgeois', 'Davor Pavisic', 'Gaetan Libert']
Identification of the human arm kinetics using dynamic recurrent neural networks.
807,629
['Sergiy Butenko']
Journal of Global Optimization Best Paper Award for 2015
914,975
In the context of change detection and due to the multitude of change scenarios, the objective is to build a generic change detection system. For many technical and operational reasons the Support Vector Machines (SVM) algorithm is used. One of the crucial steps when using the SVM algorithm is the choice of the kernel function. With the lack of a priori information the choice of the kernel function may be difficult for the user. In this paper several techniques for constructing a suitable kernel function obtained from the data are proposed.
['Tarek Habib', 'Jordi Inglada', 'Grégoire Mercier', 'Jocelyn Chanussot']
On the Use of a New Additive Kernel for Change Detection using SVM
424,758
Wireless sensor networking remains one of the most exciting and challenging research domains of our time. As technology progresses, so do the capabilities of sensor networks. Limited only by what can be technologically sensed, it is envisaged that wireless sensor networks will play an important part in our daily lives in the foreseeable future. Privy to many types of sensitive information, both sensed and disseminated, there is a critical need for security in a number of applications related to this technology. Motivated by the continuous debate over the most effective means of securing wireless sensor networks, this paper considers a number of the security architectures employed, and proposed, to date, with this goal in sight. They are presented such that the various characteristics of each protocol are easily identifiable to potential network designers, allowing a more informed decision to be made when implementing a security protocol for their intended application. Authentication is the primary focus, as the most malicious attacks on a network, such as DoS attacks and packet insertion, are the work of impostors. Authentication can be defined as a security mechanism whereby the identity of a node can be verified as a valid member of the network. Subsequently, data authenticity can be achieved once the integrity of the message sender/receiver has been established.
['David Boyle', 'Thomas Newe']
Securing Wireless Sensor Networks: Security Architectures
455,092
Of increasing importance in the civilian and military population is the recognition of major depressive disorder at its earliest stages and intervention before the onset of severe symptoms. Toward the goal of more effective monitoring of depression severity, we introduce vocal biomarkers that are derived automatically from phonologically-based measures of speech rate. To assess our measures, we use a 35-speaker free-response speech database of subjects treated for depression over a 6-week duration. We find that dissecting average measures of speech rate into phone-specific characteristics and, in particular, combined phone-duration measures uncovers stronger relationships between speech rate and depression severity than global measures previously reported for a speech-rate biomarker. Results of this study are supported by correlation of our measures with depression severity and classification of depression state with these vocal measures. Our approach provides a general framework for analyzing individual symptom categories through phonological units, and supports the premise that speaking rate can be an indicator of psychomotor retardation severity.
['Andrea Carolina Trevino', 'Thomas F. Quatieri', 'Nicolas Malyska']
Phonologically-Based Biomarkers for Major Depressive Disorder
316,522
Exceptions in knowledge-intensive processes are often the result of resource failures. Following a knowledge-based approach we demonstrate how semantic information about the resources involved and a set of generic event-condition-action rules are used to handle resource-related exceptions within a multi-agent enactment environment. The scenario of an IT-helpdesk is used to illustrate the application of the concepts described within a practical environment.
['Holger Brocks', 'Henning Meyer', 'Thomas Kamps', 'Christian Begger']
Flexible exception handling in a multi-agent enactment model for knowledge-intensive processes
347,279
This article gives an overview of the theoretical basis of the norm-optimal approach to iterative learning control followed by results that describe more recent work which has experimentally benchmarked the performance that can be achieved. The remainder of the article then describes its actual application to a physical process and a very novel application in stroke rehabilitation.
['Eric Rogers', 'D.H. Owens', 'Herbert Werner', 'Christopher Freeman', 'P.L. Lewin', 'S. Kichhoff', 'Christian Schmidt', 'Gerwald Lichtenberg']
Norm Optimal Iterative Learning Control with Application to Problems in Accelerator based Free Electron Lasers and Rehabilitation Robotics
318,549
This article gives an interpretation and justification of extensional and intensional conjunction in the relevant logic R. The interpretive frameworks are Anderson and Belnap's natural deduction system and the theory of situated inference from Mares, Relevant Logic.
['Edwin D. Mares']
Relevance and Conjunction
304,794
The high switching activity of wide fan-in dynamic domino gates introduces significant power overhead that poses a limitation on using these compact high-speed circuits. This paper presents a new limited-switching clock-delayed dynamic circuit technique, called SP-Domino, which achieves static-like switching behavior while maintaining the low-area and high-performance characteristics of wide fan-in dynamic gates. SP-Domino is a single-phase footless domino that can be freely mixed with static gates and can provide inverting and non-inverting functions. Simulations on 8- and 16-input OR gates show that SP-Domino reduces dynamic power by up to 63% compared to same-UNG and same-delay standard footless domino, and by up to 56.9% compared to low-contention high-speed standard footless domino.
['Charbel J. Akl', 'Magdy A. Bayoumi']
Single-Phase SP-Domino: A Limited-Switching Dynamic Circuit Technique for Low-Power Wide Fan-in Logic Gates
504,493
['Michael Granitzer', 'Stefanie N. Lindstaedt']
Knowledge Work: Knowledge Worker Productivity, Collaboration and User Support.
789,839
In this paper, we propose a self-organizing communication mechanism for a wireless sensor network where a large number of sensor nodes are deployed. To accomplish application-oriented periodic communication without any centralized control, we adopt traveling wave phenomena of a pulse-coupled oscillator model by regarding sensor nodes as oscillators and the emission of radio signals as firing. We first investigate conditions on the phase-response curve to attain wave-formed firing patterns regardless of the initial phases of the oscillators. We then apply the derived phase-response curve to accomplish the desired form of message propagation through local and mutual interactions among neighboring sensor nodes. Through simulation experiments, we confirm that our mechanism can gather or diffuse information effectively in accordance with the application's requirements.
['Yoshiaki Taniguchi', 'Naoki Wakamiya', 'Masayuki Murata']
A Self-Organizing Communication Mechanism using Traveling Wave Phenomena for Wireless Sensor Networks
46,348
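The pulse-coupled oscillator dynamics sketched in the abstract above can be illustrated in a few lines. The following toy simulation (a hedged sketch: the line-graph topology, parameter values, and the sinusoidal phase-response curve are illustrative assumptions, not the PRC derived in the paper) advances each node's phase linearly, fires it on reaching 1, and nudges the neighbors' phases through the PRC:

```python
import numpy as np

def simulate_pco(n=10, steps=20000, dt=1e-3, eps=0.05, seed=0):
    """Pulse-coupled oscillators on a line graph.  Each phase grows
    linearly; on reaching 1 the node 'fires' (emits a radio signal)
    and resets, and its neighbours shift phase via the PRC."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 1.0, n)
    fires = []                            # (step, node) firing log

    def prc(phi):
        # illustrative sinusoidal phase-response curve (an assumption,
        # NOT the curve derived in the paper)
        return eps * np.sin(2.0 * np.pi * phi)

    for t in range(steps):
        phase += dt
        for i in np.flatnonzero(phase >= 1.0):
            phase[i] -= 1.0               # fire and reset
            fires.append((t, i))
            for j in (i - 1, i + 1):      # neighbours on the line graph
                if 0 <= j < n:
                    phase[j] = np.clip(phase[j] + prc(phase[j]), 0.0, 0.999)
    return phase, fires
```

Inspecting the firing log over time shows how local phase adjustments shape the global firing pattern; the paper's contribution is choosing the PRC so that this pattern becomes a traveling wave.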
Cyber-Physical Production Systems (CPPS) and Smart Products are considered key features in the development of the fourth industrial revolution. In order to create a connected environment in manufacturing based on CPPS, components must be able to store and exchange data with machines, and with other components and assemblies along the entire production system. At the same time, Smart Product features require that products and their components be able to store and exchange data throughout their entire lifecycle. Therefore, the aim of this paper is to present a preliminary integrated component data model based on Unified Modeling Language (UML) for the implementation of CPPS and Smart Product features. The development of the data model is based on requirements gathered both from literature review and from corporate interviews with potential users. The results are still preliminary, since the research results are part of a bigger research effort under an international collaboration network.
['Luiz Fernando C. S. Durão', 'Helge Eichhorn', 'Reiner Anderl', 'Klaus Schützer', 'Eduardo de Senzi Zancul']
Integrated Component Data Model Based on UML for Smart Components Lifecycle Management: A Conceptual Approach
571,557
A low-voltage low-dropout (LDO) regulator that converts an input of 1 V to an output of 0.85–0.5 V, with 90-nm CMOS technology is proposed. A simple symmetric operational transconductance amplifier is used as the error amplifier (EA), with a current splitting technique adopted to boost the gain. This also enhances the closed-loop bandwidth of the LDO regulator. In the rail-to-rail output stage of the EA, a power noise cancellation mechanism is formed, minimizing the size of the power MOS transistor. Furthermore, a fast responding transient accelerator is designed through the reuse of parts of the EA. These advantages allow the proposed LDO regulator to operate over a wide range of operating conditions while achieving 99.94% current efficiency, a 28-mV output variation for a 0–100 mA load transient, and a power supply rejection of roughly 50 dB over 0–100 kHz. The area of the proposed LDO regulator is only 0.0041 mm², because of the compact architecture.
['Chung-Hsun Huang', 'Ying-Ting Ma', 'Wei-Chen Liao']
Design of a Low-Voltage Low-Dropout Regulator
269,265
['Matthias Eck']
Developing Deployable Spoken Language Translation Systems given Limited Resources
735,711
The authors show that the optimum nonlinear scale operation upon the elements of the observation vector in the LMS algorithm is exactly x/(1 + μx²) for any independent stochastic data input and any noise density. Moreover, use of such a nonlinearity can yield a significant performance improvement in fast adaptation situations.
['Scott C. Douglas', 'Teresa H. Meng']
The optimum scalar data nonlinearity in LMS adaptation for arbitrary IID inputs
542,621
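The closed-form nonlinearity above is easy to try out. The sketch below (a minimal system-identification setup; the 4-tap FIR plant, step size, and noise level are illustrative assumptions, not from the paper) applies f(x) = x/(1 + μx²) elementwise to the regressor in the LMS update:

```python
import numpy as np

def lms_identify(x, d, mu, order=4, nonlinearity=None):
    """LMS adaptive filter; optionally pass the regressor through a
    scalar nonlinearity before the stochastic-gradient update."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u                   # a priori error
        g = nonlinearity(u) if nonlinearity else u
        w = w + mu * e * g                 # (possibly nonlinear) update
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown FIR plant (assumed)
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))

mu = 0.05
f = lambda u: u / (1.0 + mu * u**2)        # the optimum scalar nonlinearity
w_lin = lms_identify(x, d, mu)             # standard LMS baseline
w_nl = lms_identify(x, d, mu, nonlinearity=f)
```

Both variants recover the plant here; per the abstract, the benefit of the nonlinearity shows up in fast adaptation regimes (large μ) and for heavier-tailed inputs than this Gaussian toy.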
Applications of digital imaging with extreme zoom are traditionally found in astronomy and wildlife monitoring. More recently, the need for such capabilities has extended to long range surveillance and wide area monitoring such as forest fires, harbors, and waterways. In this paper, we present a number of sensor arrangements for such applications, focusing on optical setups, auto-focusing mechanisms, and image deblurring techniques. Considering both the speed of convergence and robustness to image degradations induced by high system magnifications and long observation distances, we introduce an auto-focusing algorithm based on sequential search with a variable step size. We derive the transition criteria following maximum likelihood (ML) estimation for the selection of suitable step sizes. The efficiency of the proposed algorithm is illustrated in real-time auto-focusing and tracking of faces from distances of 50 m to 300 m. We also develop an image restoration algorithm for high magnification imaging systems, where an adaptive sharpness measure is employed as a cost function to guide the fine search for an optimal point spread function (PSF) for image deblurring. Experimental results demonstrate considerably enhanced robustness to image noise and artifacts and the ability to select the optimum PSF, producing superior restored images.
['Yi Yao', 'Besma R. Abidi', 'Mongi A. Abidi']
Extreme Zoom Surveillance: System Design and Image Restoration
311,037
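As an illustration of the sequential-search idea above, here is a minimal variable-step focus search (a hedged sketch: the paper's ML-derived transition criteria are replaced by a simple "sharpness stopped improving" rule, and `sharpness` stands in for any unimodal focus metric over lens position):

```python
import numpy as np

def autofocus(sharpness, lo, hi, coarse=8.0, fine=0.5):
    """Two-stage sequential search with a variable step size:
    march in coarse steps while the focus metric keeps improving,
    then scan the neighbourhood of the best coarse position finely."""
    pos, best = lo, sharpness(lo)
    p = lo + coarse
    while p <= hi:
        s = sharpness(p)
        if s < best:                      # metric dropped: passed the peak
            break
        pos, best = p, s
        p += coarse
    lo_f, hi_f = max(lo, pos - coarse), min(hi, pos + coarse)
    candidates = np.arange(lo_f, hi_f + 1e-9, fine)
    vals = [sharpness(c) for c in candidates]
    return float(candidates[int(np.argmax(vals))])

# toy unimodal sharpness curve peaking at an (assumed) lens position 37.3
focus = autofocus(lambda p: -(p - 37.3) ** 2, 0.0, 100.0)
```

The coarse/fine split is what keeps the number of metric evaluations low; the paper's contribution is choosing the step-size transitions via ML estimation rather than the fixed thresholds used here.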
The Knowledge Grid built on top of the peer-to-peer (P2P) network has been studied to implement scalable, available and semantic-based querying. In order to improve the efficiency and scalability of querying, this paper studies the problem of multicasting queries in the Knowledge Grid. An m-dimensional irregular mesh is a popular overlay topology of P2P networks. We present a set of novel distributed algorithms on top of an m-dimensional irregular mesh overlay for short-delay and low network resource consumption end-host multicast services. Our end-host multicast fully utilizes the advantages of an m-dimensional mesh to construct a two-layer architecture. Compared to previous approaches, the novelty and contribution here are: (1) cluster formation that partitions the group members into clusters in the lower layer, where each cluster consists of a small number of members; (2) cluster core selection that searches for a core with the minimum sum of overlay hops to all other cluster members for each cluster; (3) weighted shortest path tree construction that guarantees the minimum number of shortest paths to be occupied by the multicast traffic; (4) distributed multicast routing that directs the multicast messages to be efficiently distributed along the two-layer multicast architecture in parallel, without a global control; the routing scheme enables the packets to be transmitted to the remote end hosts within short delays through some common shortest paths; and (5) multicast path maintenance that restores normal communication once a membership alteration appears. Simulation results show that our end-host multicast can achieve multicast services with shorter delay and lower network resource consumption in a distributed manner, as compared with some well-known end-host multicast systems. Copyright © 2006 John Wiley & Sons, Ltd.
['Wanqing Tu', 'Jogesh K. Muppala', 'Hai Zhuge']
Distributed end‐host multicast algorithms for the Knowledge Grid
420,054
The MIX mediator system incorporates a novel framework for navigation-driven evaluation of virtual mediated views. Its architecture allows the on-demand computation of views and query results as the user navigates them. The evaluation scheme minimizes superfluous source access through the use of lazy mediators that translate incoming client navigations on virtual XML views into navigations on lower level mediators or wrapped sources. The proposed demand-driven approach is inevitable for handling up-to-date mediated views of large Web sources or query results. The non-materialization of the query answer is transparent to the client application since clients can navigate the query answer using a subset of the standard DOM API for XML documents. We elaborate on query evaluation in such a framework and show how algebraic plans can be implemented as trees of lazy mediators. Finally, we present a new buffering technique that can mediate between the fine granularity of DOM navigations and the coarse granularity of real world sources. This drastically reduces communication overhead and also simplifies wrapper development. An implementation of the system is available on the Web.
['Bertram Ludäscher', 'Yannis Papakonstantinou', 'Pavel Velikhov']
Navigation-Driven Evaluation of Virtual Mediated Views
15,025
['Sravan Mantha', 'Luc Mongeau', 'Thomas Siegmund']
Dynamic digital image correlation of a dynamic physical model of the vocal folds.
736,936
This paper presents a multi-core processor with a globally asynchronous locally synchronous (GALS) clocking style designed to achieve soft error tolerance for stream DSP applications, and to maintain system throughput energy efficiently. Each processor in the chip can be combined with one of its neighbor processors to run the same programs, and their results are equivalence-checked to detect soft error occurrences. When an error occurs in some processor, the program in that processor (not the whole chip) is re-executed from the saved state to recover from the error. Due to the programming model of stream DSP applications, each processor can be isolated by FIFOs in the proposed multi-core processor, and fault detection and recovery can be done with low overhead. Furthermore, the GALS clocking style allows adjusting the frequency of only the processors hit by a soft error, rather than the whole chip, to maintain the system throughput, which results in high energy efficiency.
['Zhiyi Yu', 'Z. Shi', 'Xiaoyang Zeng']
Fault tolerant computing for stream DSP applications using GALS multi-core processors
492,220
A new kernel adaptive filtering (KAF) algorithm, namely the sparse kernel recursive least squares (SKRLS), is derived by adding an l1-norm penalty on the center coefficients to the least squares (LS) cost (i.e., the sum of the squared errors). In each iteration, the center coefficients are updated by a fixed-point sub-iteration. Compared with the original KRLS algorithm, the proposed algorithm can produce a much sparser network, in which many coefficients are negligibly small. A much more compact structure can thus be achieved by pruning these negligible centers. Simulation results show that the SKRLS performs very well, yielding a very sparse network while preserving a desirable performance.
['Chen B', 'Nanning Zheng', 'Jose C. Principe']
SPARSE KERNEL RECURSIVE LEAST SQUARES USING L1 REGULARIZATION AND A FIXED-POINT SUB-ITERATION
375,132
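The l1-penalized kernel least-squares objective can be illustrated in batch form. The sketch below is a hedged stand-in, not the paper's algorithm: the recursive fixed-point sub-iteration is replaced by a standard ISTA soft-thresholding loop, and the kernel width, penalty, and pruning threshold are illustrative assumptions. It fits a noisy sine and then prunes the negligible centers to obtain the sparser network:

```python
import numpy as np

def gaussian_gram(X, Z, sigma):
    """Gaussian kernel Gram matrix between point sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_kernel_ls(X, y, lam, sigma, iters=2000):
    """Batch l1-penalized kernel least squares solved by ISTA;
    a stand-in for the recursive fixed-point scheme of the paper."""
    K = gaussian_gram(X, X, sigma)
    L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(len(y))
    for _ in range(iters):
        grad = K.T @ (K @ a - y)         # gradient of 0.5 * ||K a - y||^2
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)
a = sparse_kernel_ls(X, y, lam=0.2, sigma=0.8)
kept = np.flatnonzero(np.abs(a) > 1e-3)  # prune negligible centers
```

The soft-thresholding step drives most center coefficients to (near) zero, which is exactly the sparsification effect the abstract attributes to the l1 penalty; the pruned centers in `kept` form the compact network.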
['Kadri Vider', 'Krista Liin', 'Neeme Kahusk']
Strategic Importance of Language Technology in Estonia.
783,636
['Daniel Oppenheim', 'Lav R. Varshney', 'Yi-Min Chee']
Work as a Service.
760,185
Web services are conveniently advertised and published based on (stateless) functional descriptions, while they are usually realized as (stateful) processes. Therefore, the automated enactment of complex Web services on the basis of pre-existing ones requires the ability to handle services described at very different abstraction levels. This is the main reason behind the current lack of approaches capable of performing automated end-to-end composition, starting from semantic requirements to obtain executable orchestrations of stateful processes. In this paper we achieve such a challenging goal, by modularly integrating a range of incrementally more complex techniques that cover the necessary discovery and composition phases. By gradually bridging the gap between the high-level requirements and the concrete realization of services, our architecture sensibly manages the complexity of the problem: incrementally more complex techniques are provided with incrementally more focused input. The tests of our architecture on a deployed scenario demonstrate the functionality of the platform and its integrability with standard service engines.
['Piergiorgio Bertoli', 'Joerg Hoffmann', 'Freddy Lécué', 'Marco Pistore']
Integrating Discovery and Automated Composition: from Semantic Requirements to Executable Code
398,164
We contrast the performance of two methods of imposing constraints during the tracking of articulated objects, the first method preimposing the kinematic constraints during tracking and, thus, using the minimum degrees of freedom, and the second imposing constraints after tracking and, hence, using the maximum. Despite their very different formulations, the methods recover the same pose change. Further comparisons are drawn in terms of computational speed and algorithmic simplicity and robustness, and it is the last area which is the most telling. The results suggest that using built-in constraints is well-suited to tracking individual articulated objects, whereas applying constraints afterward is most suited to problems involving contact and breakage between articulated (or rigid) objects, where the ability to test tracking performance quickly with constraints turned on or off is desirable.
['T.E. de Campos', 'Ben Tordoff', 'David W. Murray']
Recovering articulated pose: a comparison of two pre and postimposed constraint methods
163,078
When observations are curves over some natural time interval, the field of functional data analysis comes into play. Functional linear processes account for temporal dependence in the data. The prediction problem for functional linear processes has been solved theoretically, but the focus for applications has been on functional autoregressive processes. We propose a new computationally tractable linear predictor for functional linear processes. It is based on an application of the Multivariate Innovations Algorithm to finite-dimensional subprocesses of increasing dimension of the infinite-dimensional functional linear process. We investigate the behavior of the predictor for increasing sample size. We show that, depending on the decay rate of the eigenvalues of the covariance and the spectral density operator, the resulting predictor converges with a certain rate to the theoretically best linear predictor.
['Johannes Klepsch', 'Claudia Klüppelberg']
An innovations algorithm for the prediction of functional linear processes
846,071
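As background for the record above, the classical scalar Innovations Algorithm that the paper's multivariate predictor builds on can be sketched as follows; this is a minimal one-dimensional illustration with an assumed covariance function `K`, not the authors' functional, multivariate version:

```python
import numpy as np

def innovations_algorithm(K, n):
    """Scalar innovations algorithm for one-step linear prediction.

    K(i, j) is the covariance E[X_i X_j] of a zero-mean process
    (1-based indices).  Returns the prediction coefficients theta and
    the mean-squared errors v[0..n], where the one-step predictor is
    Xhat_{m+1} = sum_j theta[m][j] * (X_{m+1-j} - Xhat_{m+1-j}).
    """
    v = np.zeros(n + 1)
    theta = [np.zeros(m + 1) for m in range(n + 1)]
    v[0] = K(1, 1)
    for m in range(1, n + 1):
        for k in range(m):
            # recursion for theta_{m, m-k}, using earlier coefficients
            s = sum(theta[k][k - j] * theta[m][m - j] * v[j] for j in range(k))
            theta[m][m - k] = (K(m + 1, k + 1) - s) / v[k]
        # mean-squared prediction error at step m
        v[m] = K(m + 1, m + 1) - sum(theta[m][m - j] ** 2 * v[j] for j in range(m))
    return theta, v
```

For an invertible MA(1) process with coefficient 0.5 and unit noise variance, theta[m][1] converges to 0.5 and v[m] to 1, as the theory predicts.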
A three-dimensional (3D) model of a multiple-valued network, based on a hypercube-like topology, is proposed. A graph-embedding technique is used to design hypercube-based structures. It is shown that the hypercube-like topology is a single-electron transistor (SET) technology-oriented solution for the implementation of multiple-valued networks.
['Svetlana N. Yanushkevich', 'Vlad Shmerko', 'L. Guy', 'D. C. Lu']
Three dimensional multiple valued circuits design based on single-electron logic
445,791
Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible, or picked it up and removed it from the surface. In this demo we present PERCs, persistent capacitive tangibles that "know" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible to detect the touch screen's own weak electromagnetic touch-detection probing signal. We demonstrate two applications that make use of PERC tangibles: an air-hockey-like game for two players and a single-person arcade game.
['Christian Cherek', 'Simon Voelker', 'Jan Thar', 'Rene Linden', 'Florian Busch', 'Jan O. Borchers']
PERCs Demo: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays
648,993
Several approaches have been proposed for evaluating information in expected utility theory. Among the most popular approaches are the expected utility increase, the selling price and the buying price. While the expected utility increase and the selling price always agree in ranking information alternatives, Hazen and Sounderpandian [11] have demonstrated that the buying price may not always agree with the other two. That is, in some cases, where the expected utility increase would value information A more highly than information B, the buying price may reverse these preferences. In this paper, we discuss the conditions under which all these approaches agree in a generic decision environment where the decision maker may choose to acquire arbitrary information bundles.
['Niyazi Onur Bakir', 'Georgia-Ann Klutke']
Information and preference reversals in lotteries
197,346
We consider the Gaussian interference channel with an intermediate relay. The relay is assumed to have abundant power and is named potent for that reason. A main reason to consider this model is to find good outerbounds for the Gaussian interference relay channel (GIFRC) with finite relay power. By setting the relay power constraint to infinity, we show that the capacity region is asymptotically equivalent to the case where the relay-destination links are noiseless and orthogonal to the other links. The capacity region of the latter provides an outerbound for the GIFRC with finite relay power. We then show the capacity region of the former can be upper bounded by that of a single-input-multiple-output interference channel with an antenna common to both receivers. To establish the sum capacity of this channel, we study the strong and the weak interference regimes. For both regimes, we show that the upperbounds we find are achievable, thus establishing the sum capacity of the GIFRC with a potent relay. Both results, in turn, serve as upperbounds for the sum capacity of the GIFRC with finite relay power. Numerical results show that the upperbounds are close to the known achievable rates for many scenarios of interest.
['Ye Tian', 'Aylin Yener']
The Gaussian Interference Relay Channel with a Potent Relay
258,011
The paper discusses a processing technique for LDV data, based on the use of two Kalman filters, enabling the presence of particles to be detected and their velocity to be inferred. This method turns out to be suitable for the design of real-time integrated velocimeters. A first estimator, based on the use of a Kalman filter, deals with the amplitude of the Doppler signal. A second one, using an extended Kalman filter, allows particle velocity estimation, which is assumed to be a constant. Finally, the estimator is studied by means of Monte Carlo trials obtained from synthesized signals, and its performance is then compared to the Cramer-Rao bound.
['A. Le Duff', 'Guy Plantier', 'Anthony Sourice']
Particle detection and velocity measurement in laser Doppler velocimetry using Kalman filters
20,779
Web search queries can offer a unique population-scale window onto streams of evidence that are useful for detecting the emergence of health conditions. We explore the promise of harnessing behavioral signals in search logs to provide advance warning about the presence of devastating diseases such as pancreatic cancer. Pancreatic cancer is often diagnosed too late to be treated effectively as the cancer has usually metastasized by the time of diagnosis. Symptoms of the early stages of the illness are often subtle and nonspecific. We identify searchers who issue credible, first-person diagnostic queries for pancreatic cancer and we learn models from prior search histories that predict which searchers will later input such queries. We show that we can infer the likelihood of seeing the rise of diagnostic queries months before they appear and characterize the tradeoff between predictivity and false positive rate. The findings highlight the potential of harnessing search logs for the early detection of pancreatic cancer and more generally for harnessing search systems to reduce health risks for individuals.
['John Paparrizos', 'Ryen W. White', 'Eric Horvitz']
Detecting Devastating Diseases in Search Logs
836,169
The integration of complex systems out of existing systems is an active area of research and development. There are many practical situations in which the interfaces of the component systems, for example belonging to separate organisations, are changed dynamically and without notification. In this paper we propose an approach to handling such upgrades in a structured and disciplined fashion. All interface changes are viewed as abnormal events and general fault tolerance mechanisms (exception handling, in particular) are applied to dealing with them. The paper outlines general ways of detecting such interface upgrades and recovering after them. An Internet Travel Agency is used as a case study.
['Cliff B. Jones', 'Alexander Romanovsky', 'Ian Welch']
A structured approach to handling on-line interface upgrades
71,542
This tutorial provides a starting point for curve-based modeling. It introduces three rough categories of curve-based-modeling methods: extruding 2D shapes, inflating 2D shapes, and drawing 3D curves. This tutorial introduces representative methods that yield positive results while also exposing several issues related to curve-based modeling.
['Pushkar Joshi']
Curve-Based Shape Modeling a Tutorial
536,570
['Stanisław Marciniak']
Technology Evaluation Using Modified Integrated Method of Technical Project Assessment
730,025
Sensor networks are distributed networks made up of small sensing devices equipped with processors, memory, and short-range wireless communication. They differ from traditional computer networks in that they have resource constraints, unbalanced mixture traffic, data redundancy, network dynamics, and energy-balance concerns. Work on quality of service (QoS) within wireless sensor networks (WSNs) has been isolated and specific either to certain functional layers or to particular application scenarios, and the area of sensor-network QoS remains largely open. In this paper we define QoS requirements within a WSN application and then analyze issues for QoS monitoring.
['Yuanli Wang', 'Xianghui Liu', 'Jianping Yin']
Requirements of Quality of Service in Wireless Sensor Network
287,217
The applicability of Delay Tolerant Network (DTN) routing protocols is generally restricted to extremely mobile environments, where all nodes, including terminal hosts and routers, are mobile and no infrastructure is available. This paper presents the utilization of a scheme that combines multiple DTN regions for infrastructure-based DTN communication. This is achieved through stationary devices with store-and-forward capabilities located at the borders between various areas, covering both infrastructure-less and infrastructure-based environments. In this scenario, the nodes' movements are restricted to their region; sources and destinations might be located in different regions. The fixed nodes collect all the messages created by sources and transfer them to the destination nodes in other regions, if within communication range. The existence of fixed nodes therefore resolves the problem of mapping routes across different regions. In addition, we analyze the influence of diverse network conditions as the mobile nodes move around the network area.
['Yasser Mawad', 'Stefan Fischer']
Infrastructure-based delay tolerant network communication
671,478
The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to sub-optimal bounds as the partial coloring step must be iterated a logarithmic number of times, and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnady's problem, we give an improved $O(\log^2 n)$ bound for the discrepancy of axis-parallel rectangles and more generally an $O_d(\log^d n)$ bound for $d$-dimensional boxes in $\mathbb{R}^d$. Previously, even non-constructively, the best bounds were $O(\log^{2.5} n)$ and $O_d(\log^{d+0.5} n)$ respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known non-constructive bounds due to Banaszczyk \cite{Bana12} in the $\ell_\infty$ case, and improves the previous algorithmic bounds substantially in the $\ell_2$ case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Koml\'{o}s discrepancy problem [BDG16].
['Nikhil Bansal', 'Shashwat Garg']
Algorithmic Discrepancy Beyond Partial Coloring
937,298
In this paper we consider the Robust Connected Facility Location (ConFL) problem within the robust discrete optimization framework introduced by Bertsimas and Sim (2003). We propose an Approximate Robust Optimization (ARO) method that uses a heuristic and a lower bounding mechanism to rapidly find high-quality solutions. The use of a heuristic and a lower bounding mechanism within this ARO approach, as opposed to solving the robust optimization (RO) problem exactly, significantly decreases its computational time. This enables one to obtain high-quality solutions to large-scale robust optimization problems and thus broadens the scope and applicability of robust optimization (from a computational perspective) to other NP-hard problems. Our computational results attest to the efficacy of the approach.
['M. Gisela Bardossy', 'S. Raghavan']
Approximate robust optimization for the Connected Facility Location problem
582,216
['Shadi Abras', 'Thomas Calmant', 'Stéphane Ploix', 'Didier Donsez', 'Frédéric Wurtz', 'Olivier Gattaz', 'Benoit Delinchant']
Developing Dynamic Heterogeneous Environments in Smart Building Using iPOPO.
766,462
In a recent paper, we introduced a trust-region method with variable norms for unconstrained minimization, we proved standard asymptotic convergence results, and we discussed the impact of this method in global optimization. Here we will show that, with a simple modification with respect to the sufficient descent condition and replacing the trust-region approach with a suitable cubic regularization, the complexity of this method for finding approximate first-order stationary points is \(O(\varepsilon ^{-3/2})\). We also prove a complexity result with respect to second-order stationarity. Some numerical experiments are also presented to illustrate the effect of the modification on practical performance.
['José Mario Martínez', 'Marcos Raydan']
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
905,157
One of the most important reasoning tasks on queries is checking containment, i.e., verifying whether one query yields necessarily a subset of the result of another one. Query containment is crucial in several contexts, such as query optimization, query reformulation, knowledge-base verification, information integration, integrity checking, and cooperative answering. Containment is undecidable in general for Datalog, the fundamental language for expressing recursive queries. On the other hand, it is known that containment between monadic Datalog queries and between Datalog queries and unions of conjunctive queries are decidable. It is also known that containment between unions of conjunctive two-way regular path queries, which are queries used in the context of semistructured data models containing a limited form of recursion in the form of transitive closure, is decidable. In this paper, we combine the automata-theoretic techniques at the base of these two decidability results to show that containment of Datalog in union of conjunctive two-way regular path queries is decidable in 2EXPTIME. By sharpening a known lower bound result for containment of Datalog in union of conjunctive queries we show also a matching lower bound.
['Diego Calvanese', 'Giuseppe De Giacomo', 'Moshe Y. Vardi']
Decidable containment of recursive queries
269,524
Biophysical modeling studies have previously shown that cortical pyramidal cells driven by strong NMDA-type synaptic currents and/or containing dendritic voltage-dependent Ca++ or Na+ channels respond more strongly when synapses are activated in several spatially clustered groups of optimal size, in comparison to the same number of synapses activated diffusely about the dendritic arbor [8]. The nonlinear intradendritic interactions giving rise to this "cluster sensitivity" property are akin to a layer of virtual nonlinear "hidden units" in the dendrites, with implications for the cellular basis of learning and memory [7, 6], and for certain classes of nonlinear sensory processing [8]. In the present study, we show that a single neuron, with access only to excitatory inputs from unoriented ON- and OFF-center cells in the LGN, exhibits the principal nonlinear response properties of a "complex" cell in primary visual cortex, namely orientation tuning coupled with translation invariance and contrast insensitivity. We conjecture that this type of intradendritic processing could explain how complex cell responses can persist in the absence of oriented simple cell input [13].
['Bartlett W. Mel', 'Daniel L. Ruderman', 'Kevin A. Archie']
Complex-Cell Responses Derived from Center-Surround Inputs: The Surprising Power of Intradendritic Computation
133,941
In this paper, the problem of H∞ adaptive smoother design is addressed for a class of Lipschitz nonlinear discrete-time systems with l2 bounded disturbance input. By comprehensively analyzing the H∞ performance, Lipschitz conditions and unknown parameter's bounded condition, a positive minimum problem for an indefinite quadratic form is introduced such that the H∞ adaptive smoothing problem is achieved. A Krein space stochastic system with multiple fictitious outputs is constructed by associating with the minimum problem of the introduced indefinite quadratic form. The minimum of indefinite quadratic form is derived in the form of innovations through utilizing Krein space orthogonal projection and innovation analysis approach. Via choosing the suitable fictitious outputs to guarantee the minimum of indefinite quadratic form is positive, the existence condition of the adaptive smoother and its analytical solutions are obtained in virtue of nonstandard Riccati difference equations. The quality of the estimator is checked on an example.
['Chenghui Zhang', 'Huihong Zhao', 'Tongxing Li']
Krein space-based H∞ adaptive smoother design for a class of Lipschitz nonlinear discrete-time systems
812,036
['Saif Khairat', 'Carolyn M. Garcia']
Developing an mHealth Framework to Improve Diabetes Self-Management
739,879
A widely used machine vision pipeline based on the Speeded-Up Robust Features feature detector was applied to the problem of identifying a runway from a universe of known runways, which was constructed using video records of 19 straight-in glidepath approaches to nine runways. The recordings studied included visible, short-wave infrared, and long-wave infrared videos in clear conditions, rain, and fog. Both daytime and nighttime runway approaches were used. High detection specificity (identification of the runway approached and rejection of the other runways in the universe) was observed in all conditions (greater than 90% Bayesian posterior probability). In the visible band, repeatability (identification of a given runway across multiple videos of it) was observed only if illumination (day versus night) was the same and approach visibility was good. Some repeatability was found across visible and shortwave sensor bands. Camera-based geolocation during aircraft landing was compared to the standard Charted...
['Andrew J. Moore', 'Matthew Schubert', 'Chester Dolph', 'Glenn A. Woodell']
Machine Vision Identification of Airport Runways with Visible and Infrared Videos
818,857
mHealth technologies are a promising resource to help people maintain better health while controlling rising healthcare expenditures. As many individuals find it difficult to meet their personal wellness goals, mHealth apps aimed at increasing compliance may be a critical component in empowering people to control their health. The research sought to identify the feature set most desired by users and determine the likelihood of user acceptance. These issues are addressed in a twofold manner: a literature review and a user survey of 519 respondents (18–54 years of age). The results of this research will inform the design of a future wellness mHealth app, the description of which is presented in this work.
['Alana Platt', 'Christina N. Outlay', 'Poornima Sarkar', 'Sasha L. Karnes']
Evaluating User Needs in Wellness Apps
622,589
It is often difficult to find a well-principled approach for the selection of a spatial indexing mechanism for medical image databases. Spatial information concerning lesions in medical images is critically important in disease diagnosis and plays an important role in image retrieval. Unfortunately, images are rarely indexed properly for clinically useful retrieval. One example is the well-known R-tree and its variants which index image objects based on their physical locations in an "absolute" way. However, such information is not meaningful in medical content-based image retrieval systems, and the approaches suffer from problems caused by variations in object size and shape, imprecise image centering, etc. A more appropriate approach, which does not require object registration, is to model the spatial relationships between lesions and anatomical landmarks. To convey diagnostic information, lesions must exist in certain locations with regard to landmarks. In this paper, we show that the histogram of forces (which represents the relative position between two objects) provides an efficient spatial indexing mechanism in the medical domain.
['Chi-Ren Shyu', 'Pascal Matsakis']
Spatial lesion indexing for medical image databases using force histograms
187,061
In an era of rapid development in all spheres of technology, storing huge amounts of raw data has become a necessity. With the increasing availability of storage space, storage itself is no longer a major problem; hence, instead of tackling issues related to reducing the amount of data to be stored, there is a greater need to create a knowledge base and, from it, semantic data visualizations. The work discussed presents an approach to integrating expert knowledge throughout the data mining process in a coherent and uniform manner. A collective intelligence system plays a central role in this approach. The work primarily aims at converting large amounts of raw source data into a knowledge base and collective framework. Heterogeneous Web sources, which act as containers for the raw data, are taken into consideration. Data visualization is the second major functionality: given the patterns mined, the onus is on the developer to present them in a format that is most appealing to the end user. Creating such visualizations requires the various dimensions of a pattern to be understood. The end product creates visualizations from the knowledge base built from heterogeneous Web sources.
['M. R. Sumalatha', 'A. Ravi', 'M.L. Aravind', 'N.K. Prasanna']
Collective Intelligence in Distributed Systems and Semantic Data Visualization
273,941
Customer self-service systems, which belong to the key applications of customer relationship management (CRM), not only empower customers and reduce errors, but also enable organizations to acquire the knowledge and insight needed to transform the CRM process. This paper focuses on the knowledge-and-insight approach, specifically applying the double-loop knowledge management model to customer self-service systems for Taiwan's e-government. The knowledge of the interpersonal network that lies within the double-loop knowledge management model can be viewed as a disproportionate condition in the interaction among people. The double-loop knowledge management model provides a powerful filter to sieve out the beneficial knowledge produced, shared, or integrated by one's own side, by other persons, or through joint collaboration for solving problems. Moreover, the double-loop knowledge management model can supplement the roles and implications of customer self-service systems in public e-services and link public organizations with their target audience, toward a holistic approach to understanding CRM practices.
['Shan-yan Huang', 'Han-yuh Liu']
Applying Double-Loop Knowledge Management Model on Customer Self-Service Systems for Taiwan's E-government
77,577
Webpages vary drastically in their look and feel: the presence of images is a major discriminating factor. Some webpages contain mainly text; others exploit flashy ads and a variety of eye-catching pictures. In this paper, we investigate the impact of graphics on webpage aesthetics perception and computation. We split webpages into three categories -- small, moderate and high amount of graphics -- and analyzed how different visual features predicted aesthetics for the different categories. Significant between-category differences were found, e.g., the amount of white space decreased aesthetics for the high-graphic webpages, but not for other webpages; more on-page main colors increased aesthetics for the low-graphic webpages, but decreased it for the high-graphic webpages. We suggest that future research investigate separately webpages with low and high amounts of graphics. No single improvement recipe may exist for all webpages; a more fruitful strategy would be to suggest different improvements for different types of webpages.
['Aliaksei Miniukovich', 'Antonella De Angeli']
Webpage Aesthetics: One Size Doesn't Fit All
909,549
Most LPC-based audio coders employ simplistic noise-shaping operations to perform psychoacoustic control of quantization noise. In this paper, we report on new approaches to exploiting perceptual masking in the design of adaptive quantization of LPC excitation parameters. Due to its localized spectral sensitivity, sinusoidal excitation representation is preferred to spectrally flat signals for use in excitation modeling. Simulation results indicate that the proposed multisinusoid excited coder can deliver high quality audio reproduction at the rate of 72 kb/s.
['Wen-Whei Chang', 'De-Yu Wang', 'Li-Wei Wang']
Audio coding using sinusoidal excitation representation
427,434
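The sinusoidal excitation representation in the record above can be illustrated with a small synthesis routine; this is a generic multi-sinusoid generator under assumed parameters, not the coder described in the paper:

```python
import math

def sinusoidal_excitation(amps, freqs, phases, n_samples, fs):
    """Synthesize an excitation frame as a sum of K sinusoids:
    e[t] = sum_k a_k * sin(2*pi*f_k*t/fs + phi_k)."""
    return [
        sum(a * math.sin(2.0 * math.pi * f * t / fs + ph)
            for a, f, ph in zip(amps, freqs, phases))
        for t in range(n_samples)
    ]
```

A single sinusoid at a quarter of the sampling rate, for example, yields the familiar 0, 1, 0, -1 sample pattern.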
This paper deals with the transduction of strain accompanying elastic waves in solids by firmly attached optical fibers. Stretching sections of optical fibers changes the time required by guided light to pass such sections. Exploiting interferometric techniques, highly sensitive fiber-optic strain transducers are feasible based on this fiber-intrinsic effect. The impact on the actual strain conversion of the fiber segment’s shape and size, as well as its inclination to the elastic wavefront is studied. FEM analyses show that severe distortions of the interferometric response occur when the attached fiber length spans a noticeable fraction of the elastic wavelength. Analytical models of strain transduction are presented for typical transducer shapes. They are used to compute input-output relationships for the transduction of narrow-band strain pulses as a function of the mechanical wavelength. The described approach applies to many transducers depending on the distributed interaction with the investigated object.
['Just Agbodjan Prince', 'F. Kohl', 'Thilo Sauter']
Modeling of Distributed Sensing of Elastic Waves by Fiber-Optic Interferometry
866,993
Hackers leverage software vulnerabilities to disclose, tamper with, or destroy sensitive data. To protect sensitive data, programmers can adhere to the principle of least privilege, which entails giving software the minimal privilege it needs to operate, ensuring that sensitive data is only available to software components on a strictly need-to-know basis. Unfortunately, applying this principle in practice is difficult, as current operating systems tend to provide coarse-grained mechanisms for limiting privilege. Thus, most applications today run with greater-than-necessary privileges. We propose sthreads, a set of operating system primitives that allows fine-grained isolation of software to approximate the least-privilege ideal. sthreads enforce a default-deny model, where software components have no privileges by default, so all privileges must be explicitly granted by the programmer. Experience introducing sthreads into previously monolithic applications (thus, partitioning them) reveals that enumerating privileges for sthreads is difficult in practice. To ease the introduction of sthreads into existing code, we include Crowbar, a tool that can be used to learn the privileges required by a compartment. We show that only a few changes are necessary to existing code in order to partition applications with sthreads, and that Crowbar can guide the programmer through these changes. We show that applying sthreads to applications successfully narrows the attack surface by reducing the amount of code that can access sensitive data. Finally, we show that applications using sthreads pay only a small performance overhead. We applied sthreads to a range of applications; most notably, an SSL web server, where we show that sthreads are powerful enough to protect sensitive data even against a strong adversary that can act as a man-in-the-middle in the network and also exploit most code in the web server; a threat model not addressed to date.
['Andrea Bittau']
Toward least-privilege isolation for software
143,573
We propose a quality-of-service (QoS) driven power and rate adaptation scheme for multichannel communications systems over wireless links. In particular, we use multichannel communications to model the conceptual architectures for either diversity or multiplexing systems, which play a fundamental role in physical-layer evolutions of mobile wireless networks. By integrating information theory with the concept of effective capacity, our proposed scheme aims at maximizing the multichannel-system throughput subject to a given delay-QoS constraint. Under the framework of convex optimization, we develop the optimal adaptation algorithms. Our analyses show that when the QoS constraint becomes loose, the optimal power-control policy converges to the well-known water-filling scheme, where the Shannon (or ergodic) capacity can be achieved. On the other hand, when the QoS constraint gets stringent, the optimal policy converges to the scheme operating at a constant rate (i.e., the zero-outage capacity), which, by using only a limited number of subchannels, approaches the Shannon capacity. This observation implies that the optimal effective capacity function decreases from the ergodic capacity to the zero-outage capacity as the QoS constraint becomes more stringent. Furthermore, unlike single-channel communications, which have to trade off throughput for QoS provisioning, multichannel communications can achieve both high throughput and stringent QoS at the same time.
['Jia Tang', 'Xi Zhang']
Quality-of-service driven power and rate adaptation for multichannel communications over wireless links
216,577
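The water-filling policy that the optimal power control converges to in the loose-QoS regime of the record above can be sketched numerically; this is the standard bisection on the water level for parallel Gaussian subchannels, with assumed noise-to-gain levels, not the paper's QoS-constrained policy:

```python
def water_filling(noise, total_power, tol=1e-9):
    """Water-filling power allocation over parallel Gaussian subchannels.

    noise[i] is the effective noise-to-gain level of subchannel i.  The
    allocation pours power up to a common water level mu:
        p_i = max(0, mu - noise[i]),  sum(p_i) = total_power.
    """
    lo, hi = min(noise), max(noise) + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:  # water level too high, lower it
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - n) for n in noise]
```

Bisection works because the total allocated power is monotonically increasing in the water level mu.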
The concept of C-space entropy for sensor-based exploration and view planning for general robot-sensor systems has been introduced in [?], [?], [?], [?]. The robot plans the next sensing action (also called the next best view) to maximize the expected C-space entropy reduction, (known as Maximal expected Entropy Reduction, or MER). It gives priority to those areas that increase the maneuverable space around the robot, taking into account its physical size and shape, thereby facilitating reachability for further views. However, previous work had assumed a Poisson point process model for obstacle distribution in the physical space, a simplifying assumption. In this paper we derive an expression for MER criterion assuming an occupancy grid map, a commonly used representation for workspace representation in much of the mobile robot community. This model is easily obtained from typical range sensors such as laser range finders, stereo vision, etc., and furthermore, we can incorporate occlusion constraints and their effect in the MER formulation, making it more realistic. Simulations show that even for holonomic mobile robots with relatively simple geometric shapes (such as a rectangle), the MER criterion yields improvement in exploration efficiency (number of views needed to explore the C-space) over physical space based criteria.
['Lila Torabi', 'Moslem Kazemi', 'Kamal K. Gupta']
Configuration space based efficient view planning and exploration with occupancy grids
341,147
In this paper, we provide an overview of two Computer Science for High School teacher training workshops, offered at Fairfield University in 2012 and 2013. These professional development programs offered the skills necessary to integrate Google Apps education and interactive, metaphor-based computer game tools into middle and high school curricula, to help students learn computer science and engineering concepts. The first-year workshop was primarily focused on the implementation of computer science and gaming concepts within the STEM curriculum. The focus of the second-year workshop was two-fold: first, to continue the implementation of computer science and engineering concepts through STEM education, and second, to create connections and extensions for high school teachers who had already been introduced to STEM curricula and teaching models. Multiple urban and suburban school districts were included in a collaborative program with our university, designed to teach educators how to use computer science as a means to make connections between different curriculum areas and teach higher-order problem-solving skills. Various learning activities during the workshops are presented, and outcomes and teachers' feedback after attending these workshops are discussed.
['Amalia Rusu']
Introducing gaming tools for computing education in STEM related curricula
562,800
In this paper, we combine ring signatures and blind signatures to present a ring blind signature scheme that has the properties of both. We show how it plays an important role in applications such as transferable e-cash with multiple banks.
['Chengyu Hu', 'Daxing Li']
Ring Blind Signature Scheme
156,274
For the non-symmetric algebraic Riccati equations, we establish a class of alternately linearized implicit (ALI) iteration methods for computing their minimal non-negative solutions by a technical combination of alternate splitting and successive approximation of the algebraic Riccati operators. These methods include one iteration parameter, and suitable choices of this parameter may result in fast convergent iteration methods. Under suitable conditions, we prove the monotone convergence and estimate the asymptotic convergence factor of the ALI iteration matrix sequences. Numerical experiments show that the ALI iteration methods are feasible and effective, and can outperform the Newton iteration method and the fixed-point iteration methods. Besides, we further generalize the known fixed-point iterations, obtaining an extensive class of relaxed splitting iteration methods for solving the non-symmetric algebraic Riccati equations. Copyright © 2006 John Wiley & Sons, Ltd.
['Zhong-Zhi Bai', 'Xiao-Xia Guo', 'Shufang Xu']
Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations
30,220
We present a scheme that improves the accuracy of 2.4 GHz RF-tag-based indoor positioning. The accuracy of indoor positioning using 2.4 GHz RF tags is degraded by propagation loss caused by human-body shielding, especially in crowded situations. This paper proposes an RSSI compensation scheme that estimates the crowd density level from the detected 2.4 GHz RSSI. We developed a scanner system to sense BLE (Bluetooth Low Energy) tags for 2.4 GHz RSSI compensation. We deployed 40 BLE scanners and over 100 BLE tags, and collected data for 100 participants at an actual event over a period of four days. Using the collected data, we estimated the crowd density level, which can be applied in the proposed compensation scheme based on theoretical models of propagation loss. Our scheme achieved 59.8% higher accuracy than a simple positioning method without the compensation.
['Kei Hiroi', 'Yoichi Shinoda', 'Nobuo Kawaguchi']
A better positioning with BLE tag by RSSI compensation through crowd density estimation
887,268
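The compensation idea in the abstract above can be illustrated with a standard log-distance path-loss model. This is only a minimal sketch, not the paper's actual scheme: the reference RSSI, path-loss exponent, `crowd_density` units, and `loss_per_person_db` are all illustrative assumptions.

```python
import math

def expected_rssi(d, ref_rssi_dbm=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI expected at distance d (meters).
    ref_rssi_dbm is the RSSI at 1 m; n is the path-loss exponent."""
    return ref_rssi_dbm - 10.0 * n * math.log10(d)

def compensate_rssi(measured_rssi, crowd_density, loss_per_person_db=1.5):
    """Add back the extra attenuation attributed to body shielding.
    crowd_density and loss_per_person_db are illustrative assumptions."""
    return measured_rssi + crowd_density * loss_per_person_db

def estimate_distance(rssi, ref_rssi_dbm=-59.0, n=2.0):
    """Invert the path-loss model to recover distance from RSSI."""
    return 10.0 ** ((ref_rssi_dbm - rssi) / (10.0 * n))

# A tag 4 m away, with 3 "crowd units" of body shielding in between:
true_rssi = expected_rssi(4.0)             # about -71 dBm
measured = true_rssi - 3 * 1.5             # shielding makes it look farther
naive_d = estimate_distance(measured)      # overestimates the distance
fixed_d = estimate_distance(compensate_rssi(measured, 3))  # recovers ~4 m
```

The compensation simply removes the crowd-induced loss before the distance inversion; the paper's contribution is estimating that loss from the sensed crowd density rather than assuming it.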
Learning from instructions or demonstrations is a fundamental property of our brain necessary to acquire new knowledge and develop novel skills or behavioral patterns. This type of learning is thought to be involved in most of our daily routines. Although the concept of instruction-based learning has been studied for several decades, the exact neural mechanisms implementing this process remain unrevealed. One of the central questions in this regard is, How do neurons learn to reproduce template signals (instructions) encoded in precisely timed sequences of spikes? Here we present a model of supervised learning for biologically plausible neurons that addresses this question. In a set of experiments, we demonstrate that our approach enables us to train spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli even in the presence of various sources of noise. We show that the learning rule can also be used for decision-making tasks. Neurons can be trained to classify categories of input signals based on only a temporal configuration of spikes. The decision is communicated by emitting precisely timed spike trains associated with given input categories. Trained neurons can perform the classification task correctly even if stimuli and corresponding decision times are temporally separated and the relevant information is consequently highly overlapped by the ongoing neural activity. Finally, we demonstrate that neurons can be trained to reproduce sequences of spikes with a controllable time shift with respect to target templates. A reproduced signal can follow or even precede the targets. This surprising result points out that spiking neurons can potentially be applied to forecast the behavior (firing times) of other reference neurons or networks.
['Filip Ponulak', 'Andrzej J. Kasinski']
Supervised learning in spiking neural networks with resume: Sequence learning, classification, and spike shifting
478,612
The type system in the Dart programming language is deliberately designed to be unsound: for a number of reasons, it may happen that a program encounters type errors at runtime although the static type checker reports no warnings. According to the language designers, this ensures a pragmatic balance between the ability to catch bugs statically and allowing a flexible programming style without burdening the programmer with a lot of spurious type warnings. In this work, we attempt to experimentally validate these design choices. Through an empirical evaluation based on open source programs written in Dart totaling 2.4 M LOC, we explore how alternative, more sound choices affect the type warnings being produced. Our results show that some, but not all, sources of unsoundness can be justified. In particular, we find that unsoundness caused by bivariant function subtyping and method overriding does not seem to help programmers. Such information may be useful when designing future versions of the language or entirely new languages.
['Gianluca Mezzetti', 'Anders Møller', 'Fabio Strocco']
Type unsoundness in practice: an empirical study of Dart
916,872
['Hiroshi Tenmoto', 'Mineichi Kudo', 'Masaru Shimbo']
PIECEWISE LINEAR CLASSIFIERS PRESERVING HIGH LOCAL RECOGNITION RATES
808,066
['Carlo Giovannella', 'Andrea Camusi']
Participatory grading in a blended course on Multimodal Interface and Systems
687,092
This study presents two prediction-based watermarking schemes, namely Ahead AC-Predicted Watermarking (AAPW) and Post AC-Predicted Watermarking (PAPW), which embed information into the low-frequency AC coefficients of the Discrete Cosine Transform (DCT). The proposed methods utilize the DC values of the neighboring blocks to predict the AC coefficients of the center block. The low-frequency AC coefficients are modified to carry watermark information. Least Mean Squares (LMS) is employed to yield the intermediate filters that cooperate with the neighboring DC coefficients to predict the original AC coefficients. During LMS filter training, the training blocks are classified into different categories according to their texture angles and variances. The classified trained filter sets are then used to predict the AC coefficients even more precisely. As documented in the experimental results, the image quality and the embedding capacity of the proposed schemes are superior to former methods in the literature. Moreover, various attacks are applied to show the robustness of the proposed methods.
['Jing-Ming Guo', 'Chia-Hao Chang']
Prediction-Based Watermarking Schemes for DCT-Based Image Coding
390,757
Context-aware applications should support context adaptation, where a change of context is reflected in the application, and content extension, where new context content is added without rebuilding the whole application. This paper defines the Context Driven Component, which implements the behaviors required by a context. An application is developed by composing context driven components. This supports context adaptation through replacing components, and content extension through adding components that implement behaviors relevant to the extended contents. Development using context driven components is analyzed in the following respects: the scale of context, the vertical decomposition compared to the existing approach, and the implementation in Ubicomp.
['Hoijin Yoon', 'Byoungju Choi']
The Context Driven Component Supporting the Context Adaptation and the Content Extension
86,223
We develop a model by choosing the maximum entropy distribution from the set of models satisfying certain smoothness and independence criteria; we show that inference on this model generalizes local kernel estimation to the context of Bayesian inference on stochastic processes. Our model enables Bayesian inference in contexts when standard techniques like Gaussian process inference are too expensive to apply. Exact inference on our model is possible for any likelihood function from the exponential family. Inference is then highly efficient, requiring only O(log N) time and O(N) space at run time. We demonstrate our algorithm on several problems and show quantifiable improvement in both speed and performance relative to models based on the Gaussian process.
['William Vega-Brown', 'Marek Doniec', 'Nicholas Roy']
Nonparametric Bayesian inference on multivariate exponential families
182,124
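The abstract above says the model generalizes local kernel estimation; for context, the classical estimator being generalized can be sketched as a Nadaraya-Watson kernel-weighted average. This is standard textbook material, not the paper's algorithm, and the bandwidth value is an arbitrary choice.

```python
import math

def gaussian_kernel(x, xi, bandwidth):
    """Unnormalized Gaussian weight between query point x and data point xi."""
    return math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)

def kernel_estimate(x, data, bandwidth=0.5):
    """Nadaraya-Watson local kernel estimate of E[y | x]:
    a kernel-weighted average of the observed y values."""
    weights = [gaussian_kernel(x, xi, bandwidth) for xi, _ in data]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, data)) / total

# Samples of y = x^2 on a grid over [-2, 2]:
data = [(xi / 10.0, (xi / 10.0) ** 2) for xi in range(-20, 21)]
print(kernel_estimate(0.0, data, bandwidth=0.3))   # near 0 (smoothing bias ~ bandwidth^2)
print(kernel_estimate(1.0, data, bandwidth=0.3))   # near 1
```

Note that each query costs O(N) in this naive form; the paper's claim of O(log N) query time is precisely what distinguishes its approach from this baseline.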
We present practical algorithms for stratified autocalibration with theoretical guarantees of global optimality. Given a projective reconstruction, we first upgrade it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, this affine reconstruction is upgraded to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization process, rather than as a post-processing step. For each stage, we construct and minimize tight convex relaxations of the highly non-convex objective functions in a branch and bound optimization framework. We exploit the inherent problem structure to restrict the search space for the DIAC and the plane at infinity to a small, fixed number of branching dimensions, independent of the number of views. Chirality constraints are incorporated into our convex relaxations to automatically select an initial region which is guaranteed to contain the global minimum. Experimental evidence of the accuracy, speed and scalability of our algorithm is presented on synthetic and real data.
['Manmohan Chandraker', 'Sameer Agarwal', 'David J. Kriegman', 'Serge J. Belongie']
Globally Optimal Algorithms for Stratified Autocalibration
217,260
In this paper, we present a generalized framework for active eavesdropping in a frequency hopping spread spectrum passive radio frequency identification system. In our model, there exists an adversarial reader who is able to transmit its own continuous wave signal outside the frequency band of the legitimate reader. Because, under backscatter modulation, the tag cannot distinguish different frequencies and simply sets the impedance in its circuitry to either low or high to reflect a bit of 1 or 0, the adversarial reader’s received signal is a weighted sum of the tag's response to both its own signal and the legitimate reader’s signal. Using this model, we provide a theoretical analysis of the capability of the adversarial reader in terms of the decoding error probability for slow and fast frequency hopping systems. We derive analytic formulas and conduct experiments using software defined radios that act as the legitimate reader and the adversarial reader, and Intel Wireless Identification Sensing Platform tags with parameters as specified in EPC Gen2. Simulations are also used to validate our findings. We find from both the theoretical analysis and the experimental results that the active eavesdropper can achieve a better decoding error rate than a conventional passive eavesdropper, even when the eavesdropper’s signal is a low power signal.
['Fei Huo', 'Patrick Mitran', 'Guang Gong']
Analysis and Validation of Active Eavesdropping Attacks in Passive FHSS RFID Systems
704,273
['Satish Grandhi', 'Bo Yang', 'Christian Spagnol', 'Samarth Gupta', 'Emanuel M. Popovici']
An EDA Framework for Reliability Estimation and Optimization of Combinational Circuits
867,113
The hand-eye calibration problem was first formulated decades ago and is widely applied in robotics, image guided therapy, etc. It is usually cast as the “AX = XB” problem where the matrices A, B, and X are rigid body transformations in SE(3). Many solvers have been proposed to recover X given data streams {Ai} and {Bi} with correspondence. However, exact correspondence might not be accessible in the real world due to the asynchronous sensors and missing data, etc. A probabilistic approach named “Batch method” was introduced in previous research of our lab, which doesn't require a prior knowledge of the correspondence between the two data streams {Ai} and {Bj}. Analogous to non-probabilistic approaches which require data selection to filter out ill-conditioned data pairs, the Batch method has restrictions on the data set {Ai} and {Bj} that can be used. We propose two new probabilistic approaches built on top of the Batch method by giving new definitions of the mean on SE(3), which alleviate the restrictions on the data set and significantly improve the calibration accuracy of X.
['Qianli Ma', 'Haiyuan Li', 'Gregory S. Chirikjian']
New probabilistic approaches to the AX = XB hand-eye calibration without correspondence
810,160
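The AX = XB relation the abstract above builds on can be checked with a toy example. This is not the paper's probabilistic Batch method, only a sanity check of the underlying equation, restricted to 3-D rotations (for which the inverse is the transpose) so it stays self-contained.

```python
import math

def matmul(p, q):
    """Multiply two square matrices stored as nested lists."""
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(p):
    n = len(p)
    return [[p[j][i] for j in range(n)] for i in range(n)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

# Ground-truth hand-eye transform X (rotation only, for brevity) and a
# measured sensor motion B; since a rotation's inverse is its transpose,
# A = X B X^T is the corresponding robot motion satisfying A X = X B.
X = matmul(rot_z(0.7), rot_x(0.3))
B = rot_z(1.2)
A = matmul(matmul(X, B), transpose(X))

lhs, rhs = matmul(A, X), matmul(X, B)
residual = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
```

A calibration solver runs this construction in reverse: given many (A, B) pairs it searches for the X that minimizes such residuals, and the paper's contribution is doing so without knowing which A corresponds to which B.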
InSAR technology is applied to measuring surface deformation caused by groundwater extraction in this paper. Cangzhou city, which has experienced the most severe ground subsidence in China, is selected as the study area. Residential, industrial, and agricultural water in this region is provided by groundwater pumping, and a large-area funnel-shaped subsidence has formed. Several meters of land subsidence have caused great damage to urban infrastructure over the past decades. ERS1/2 and JERS-1 SAR data are both collected to detect the deformation over the past few years. This area is seriously affected by vegetation coverage and large deformation, so the ERS interferograms often contain too much noise, except for the ERS1/2 tandem pairs, which are preferable only for DEM generation. JERS appears somewhat more robust than ERS in this case. The coherence of the JERS pairs is not as good as expected, but regional error estimation and appropriate filtering help to improve the interferograms. The trend, shape, and magnitude of the deformation are clear after flattening and unwrapping. Appropriate land use and land cover are considered. Efforts are still being made to improve the processing method to deal with the noise caused by vegetation and the atmosphere, in order to obtain more precise measurements. InSAR technology is expected to be applied in other Chinese cities to monitor land subsidence induced by groundwater, gas, and oil pumping.
['Lixia Gong', 'Jingfa Zhang', 'Qingshi Guo']
Measure groundwater pumping induced subsidence with D-InSAR
198,519
In order to exploit biological molecular motors as nanomachines, we need to determine the physical principles that govern their operation. Here we first consider how a processive molecular motor utilizes heat and chemical free energy in order to perform mechanical work. We then examine the features that would allow such a motor to synthesize ATP.
['Neil Thomas']
Molecular motors: Thermodynamics and ATP synthesis
82,304
As technologies for 3D acquisition become widely available, it is expected that 3D content documenting heritage artifacts will become increasingly popular. Nevertheless, to provide access to and enable the creative use of this content, it is necessary to address the challenges to its access. These include the automatic enrichment of 3D content with suitable metadata so that content does not get lost. To address these challenges, this article presents research on developing technologies to support the organization and discoverability of 3D content in the Cultural Heritage (CH) domain. This research takes advantage of the fact that heritage artifacts have been designed throughout the centuries with distinctive design styles. Hence, the shape and the decoration of an artifact can provide significant information on the history of the artifact. The main contributions of this article include an ontology for documenting 3D representations of heritage artifacts decorated with ornaments such as architectural mouldings. In addition, the article presents a complementary shape retrieval method based on shape saliency to improve the automatic classification of the artifact’s semantic information based on its 3D shape. This method is tested on a collection of Regency ornament mouldings found in domestic interiors. This content provides a rich dataset on which to base the exploration of issues common to many CH artifacts, such as design styles and decorative ornament.
['Karina Rodriguez Echavarria', 'Ran Song']
Analyzing the Decorative Style of 3D Heritage Collections Based on Shape Saliency
950,982
A target detection algorithm is proposed for maritime surveillance using single-channel SAR images. It comprises a preliminary prescreening step, carried out using an adaptive threshold algorithm, followed by a discrimination phase performed by sub-look analysis. The latter classifies the pixels detected by the former step into three classes, i.e. targets, sea, and azimuth ambiguity. The algorithm is tested on single-channel StripMap TerraSAR-X data. Results indicate that the selected algorithm shows promising detection performance and ambiguity rejection capabilities.
['Alfredo Renga', 'Maria Daniela Graziano', 'Antonio Moccia']
Prescreening and discrimination of maritime targets in single-channel SAR images
934,606
Images elicit a variety of emotional responses related to image content, overall aesthetic appeal, or a combination of both. One aspect of aesthetic appeal is harmony: the pleasing or congruent arrangement of parts producing internal calm or tranquility. We conducted a series of experiments to identify what low level features could predict harmony in an image. Subjective judgments of image harmony were collected for images representative of typical consumer photography. Our initial results show that for simplified images (pixelated to control for emotional responses to scenes and objects) impressions of image harmony depend on statistical properties of low level local features. The feature combinations vary for different individuals but typically involve edge contrast, average lightness and range of lightness. At the same time inclusion of Gestalt principles is needed to account for the subjects' data. Additionally, global low level features related to spatial image structure may help to explain results with black and white and color images.
['Elena A. Fedorovskaya', 'Carman Neustaedter', 'Wei Hao']
Image harmony for consumer images
60,995
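The three low-level features named in the abstract above (edge contrast, average lightness, and lightness range) can be sketched as simple image statistics. The specific definitions here, such as measuring contrast by the mean absolute difference between horizontal neighbors, are illustrative assumptions rather than the paper's exact formulations.

```python
def harmony_features(image):
    """Compute candidate low-level harmony features from a grayscale image
    given as a list of rows of lightness values in [0, 1]:
    (average lightness, lightness range, edge-contrast score)."""
    pixels = [v for row in image for v in row]
    avg = sum(pixels) / len(pixels)
    rng = max(pixels) - min(pixels)
    # Edge contrast: mean absolute difference between horizontal neighbors
    # (an assumed, simplified stand-in for a proper edge detector).
    diffs = [abs(row[i + 1] - row[i]) for row in image
             for i in range(len(row) - 1)]
    contrast = sum(diffs) / len(diffs)
    return avg, rng, contrast

smooth = [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]  # uniform mid-gray patch
busy = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]    # checkerboard-like patch
print(harmony_features(smooth))  # zero range, zero contrast
print(harmony_features(busy))    # full range, high contrast
```

The paper's finding is that the *combination* of such features predicting harmony judgments varies across individuals, so these statistics would feed a per-subject model rather than a single fixed formula.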
Primary visual cortex (V1) contains overlaid feature maps for orientation (OR), motion direction selectivity (DR), and ocular dominance (OD). Neurons in these maps are connected laterally in patchy, long-range patterns that follow the feature preferences. Using the LISSOM model, we show for the first time how realistic laterally connected joint OR/OD/DR maps can self-organize from Hebbian learning of moving natural images. The model predicts that lateral connections will link neurons of either eye preference and with similar DR and OR preferences. These results suggest that a single self-organizing system may underlie the development of spatiotemporal feature preferences and lateral connectivity.
['James Bednar', 'Risto Miikkulainen']
Joint maps for orientation, eye, and direction preference in a self-organizing model of V1
103,950
Multiple cameras and collaboration between them make possible the integration of information available from multiple views and reduce the uncertainty due to occlusions. This paper presents a novel method for integrating and tracking multi-view observations using bidirectional belief propagation. The method is based on a fully connected graphical model where target states at different views are represented as different but correlated random variables, and image observations at a given view are only associated with the target states at the same view. The tracking processes at different views collaborate with each other by exchanging information using a message passing scheme, which largely avoids propagating wrong information. An efficient sequential belief propagation algorithm is adopted to perform the collaboration and to infer the multi-view target states. We demonstrate the effectiveness of our method on video-surveillance sequences.
['Wei Du', 'Justus H. Piater']
Multi-view Object Tracking Using Sequential Belief Propagation
897,717
One of the major problems in modeling images for vision tasks is that images with very similar structure may locally have completely different appearance, e.g., images taken under different illumination conditions, or the images of pedestrians with different clothing. While there have been many successful attempts to address these problems in application-specific settings, we believe that underlying a large set of problems in vision is a representational deficiency of intensity-derived local measurements that are the basis of most efficient models. We argue that interesting structure in images is better captured when the image is defined as a matrix whose entries are discrete indices to a separate palette of possible intensities, colors or other features, much like the image representation often used to save on storage. In order to model the variability in images, we define an image class not by a single index map, but by a probability distribution over the index maps, which can be automatically estimated from the data, and which we call probabilistic index maps. The existing algorithms can be adapted to work with this representation, as we illustrate in this paper on the example of transformation-invariant clustering and background subtraction. Furthermore, the probabilistic index map representation leads to algorithms with computational costs proportional to either the size of the palette or the log of the size of the palette, making the cost of significantly increased invariance to non-structural changes quite bearable.
['Nebojsa Jojic', 'Yaron Caspi']
Capturing image structure with probabilistic index maps
438,279
This paper studies clothing and attribute recognition in the fashion domain. Specifically, in this paper, we turn our attention to the compatibility of clothing items and attributes (Fig 1). For example, people do not wear a skirt and a dress at the same time, yet a jacket and a shirt are a preferred combination. We consider such inter-object or inter-attribute compatibility and formulate a Conditional Random Field (CRF) that seeks the most probable combination in the given picture. The model takes into account the location-specific appearance with respect to a human body and the semantic correlation between clothing items and attributes, which we learn using the max-margin framework. Fig 2 illustrates our pipeline. We evaluate our model using two datasets that resemble realistic application scenarios: on-line social networks and shopping sites. The empirical evaluation indicates that our model effectively improves the recognition performance over various baselines including the state-of-the-art feature designed exclusively for clothing recognition. The results also suggest that our model generalizes well to different fashion-related applications.
['Kota Yamaguchi', 'Takayuki Okatani', 'Kyoko Sudo', 'Kazuhiko Murasaki', 'Yukinobu Taniguchi']
Mix and Match: Joint Model for Clothing and Attribute Recognition
702,955
['Cai Heng Li', 'Shu Jiao Song']
Corrigendum to “A characterization of metacirculants” [J. Combin. Theory Ser. A 120 (1) (2013) 39–48]
912,592
Head pose is a crucial step for numerous face applications such as gaze tracking and face recognition. In this paper, we introduce a new method to learn the mapping between a set of features and the corresponding head pose. It combines a filter based feature selection and a Generalized Regression Neural Network where inputs are sequentially selected through a boosting process. We propose the Fuzzy Functional Criterion, a new filter used to select relevant features. At each step, features are evaluated using weights on examples computed using the error produced by the neural network at the previous step. This boosting strategy helps to focus on hard examples and selects a set of complementary features. Results are compared with two state-of-the-art methods on the Pointing 04 database.
['Kevin Bailly', 'Maurice Milgram']
Head pan angle estimation by a nonlinear regression on selected features
531,138
A new model is introduced for timing jitter in the analog-to-digital conversion process. In contrast with the well-known classical model, the proposed model takes into account the interaction between quantization and jitter. This is of special relevance when quantization plays a significant role, i.e., in the case of a low-resolution analog-to-digital converter. It is shown that the use of the classical model provides an increasing underestimate of jitter frequency-domain effects as the signal amplitude decreases, whereas such effects are properly predicted by the proposed model. Moreover, the sinusoidal waveform as well as multisine waveforms can be treated by the new model.
['Diego Bellan']
An improved model of jitter effects in analog-to-digital conversion
302,336
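The interaction between quantization and timing jitter discussed in the abstract above can be simulated directly: sample a sinusoid at perturbed instants, then quantize. This is a generic illustration of the phenomenon, not the paper's model; the signal, jitter level, and resolution are arbitrary choices.

```python
import math
import random

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-tread quantizer over [-full_scale, full_scale]."""
    step = 2.0 * full_scale / (2 ** bits)
    q = round(x / step) * step
    return max(-full_scale, min(full_scale, q))

def sample_sine(freq, fs, n, amp, bits, jitter_std, rng):
    """Sample a sinusoid with Gaussian timing jitter, then quantize.
    jitter_std is the standard deviation of the sampling-instant error (s)."""
    out = []
    for k in range(n):
        t = k / fs + rng.gauss(0.0, jitter_std)
        out.append(quantize(amp * math.sin(2 * math.pi * freq * t), bits))
    return out

rng = random.Random(0)
# 1 kHz tone sampled at 100 kHz with an 8-bit converter:
ideal = sample_sine(1e3, 1e5, 512, 0.9, 8, 0.0, rng)
jittered = sample_sine(1e3, 1e5, 512, 0.9, 8, 1e-5, rng)
err = sum((a - b) ** 2 for a, b in zip(ideal, jittered)) / len(ideal)
```

Because the jitter passes through the quantizer here rather than being added as independent noise afterward, this setup reflects the coupling the paper argues matters at low resolution; sweeping `amp` downward shows the quantizer increasingly dominating the error.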
A numerical method is presented to solve a two-dimensional hyperbolic diffusion problem where is assumed that both convection and diffusion are responsible for flow motion. Since direct solutions based on implicit schemes for multidimensional problems are computationally inefficient, we apply an alternating direction method which is second order accurate in time and space. The stability of the alternating direction method is analyzed using the energy method. Numerical results are presented to illustrate the performance in different cases.
['Adérito Araújo', 'Cidália Neves', 'Ercília Sousa']
An alternating direction implicit method for a second-order hyperbolic diffusion equation with convection ☆
402,648
Remote sensors have begun to capture digital stereoscopic data. Although still monospectral (usually panchromatic), the capture of multispectral or hyperspectral stereoscopic data is just a matter of time. Digital photogrammetric workstations use area-based stereo-matching techniques based on the Pearson (product-moment) correlation coefficient. This is a technique that is not intended to take advantage of the multispectral data. The authors propose a new method that 1) can handle this multispectral information and 2) can take advantage of the spatial relations between pixel locations. The method is based on multidimensional scaling and Procrustes analysis. Our results indicate that the proposed new technique renders more robust results than classical methodology when noise in the original data is introduced
['Ángel M. Felicísimo', 'Aurora Cuartero']
Methodological Proposal for Multispectral Stereo Matching
219,200
['Evelina Koycheva', 'Stefan Hennig', 'Annerose Braune']
Integrating analysis capabilities into the model driven engineering process
922,930