Dataset columns: abstract (string, lengths 8 to 10.1k), authors (string, lengths 9 to 1.96k), title (string, lengths 6 to 367), __index_level_0__ (int64, 13 to 1,000k).
This paper explores the use of haptic stimuli as non-visual affordances to support the learnability of bend gestures. We tested 48 haptic Tactons with simulated blind participants to understand which haptic sensations could intuitively map to bend location and direction. We find that a short, single-motor Tacton reliably indicates a bend location, while participants agreed that the combination of two motors with varying intensities could indicate bend direction. This work is the first to explore the use of Tactons to communicate bend gesture location and direction, with the eventual goal of creating a tactile interaction method for blind smartphone users.
['Matthew Ernst', 'Audrey Girouard']
Exploring Haptics for Learning Bend Gestures for the Blind
723,736
A sophisticated, highly miniaturized mobile robot has been developed to collect boundary-layer velocity profile data over the surfaces of wind tunnel models. To reduce aerodynamic interference effects, the robot was built with dimensions of only 150 mm tall × 70 mm long × 35 mm wide, yet it contained a substantial number of electro-mechanical and electronic systems for accurately positioning small pressure probes and for transducing their pressure data. The robot system, which could be controlled in a tele-operator mode from outside the wind tunnel, was capable of traversing a pressure probe normal to the model surface at two speeds, and was able to stop the probe at any desired location with a positional accuracy of 0.022 mm. The robot also included systems that enabled it to move over the model surfaces in a chordwise sense, and to rotate the probe through 180° for use in reversed-flow regions. When used operationally during actual wind tunnel testing, the robot functioned flawlessly and returned high-quality boundary-layer data.
['Stuart Wilkinson']
A miniature robotic boundary layer data acquisition system
9,520
Real-time locating systems (RTLS) determine and track the location of assets and persons using active tags. Two or more readers can estimate the tag's range from each reader and determine its location. There are several methods for determining a tag's location. This paper presents a real-time locating system based on the time difference of arrival (TDOA) of a signal, for wireless networks using IEEE 802.15.4 radio. Measuring the arrival time of a signal requires exact time measurement, so this paper proposes a multi-phase radio method to provide it. In addition, to calculate the time difference, readers in the network should be synchronized with each other, and we also present a precision time synchronization protocol. A performance evaluation shows that our RTLS achieves an accuracy within 3 meters.
['Hyuntae Cho', 'Yeonsu Jung', 'Hoon Choi', 'Hyunsung Jang', 'Sanghyun Son', 'Yunju Baek']
Real Time Locating System for Wireless Networks using IEEE 802.15.4 Radio
86,355
Because changes in organizations and information technology environments are enduring, the alignment of the IT function with business objectives must not only be understood, but constantly renewed and adjusted. This is amply reflected in recent surveys of CIOs, which consistently suggest that the notion of alignment is a top challenge and management priority. Many CIOs face a double challenge when addressing the issue of alignment: they must first clarify top management's expectations and assumptions about IT, which may be contradictory, and then understand their implications for how the IT department should be managed (i.e., translate the function's strategic mission into an IT management model that adds value to the organization). The characterization of the IT function has constituted a central and growing subject of research in the information systems field. Although the extant literature has much to teach us, knowledge in this area is nevertheless fragmented and has not been properly integrated. In response to these limitations, this study proposes and tests a new theory of the contribution of the IT function. Specifically, our objective is to offer an explanation of the contribution of the IT function in organizations with a typology of ideal profiles. A field study was conducted in 24 large Canadian companies in order to validate a set of research propositions. Our results first suggest that there are five distinct "ideal" IT management profiles in organizations and each of these profiles tends to focus on specific sources of value. Next, we observed that IT functions that are close to the ideal of any given profile seem to be outperforming those with hybrid profiles. Finally, our findings provide a compelling explanation as to how ideal IT management profiles are adopted in organizations. The article concludes with a discussion of the theoretical and practical implications of the proposed theory.
['Manon Ghislaine Guillemette', 'Guy Paré']
Toward a new theory of the contribution of the it function in organizations
136,494
Depression is a common psychological disorder caused by a combination of genetic, biological, environmental, and psychological factors. Untreated depression carries a high cost in terms of relationship problems, family suffering, and loss of work productivity. However, diagnosis and treatment of depression are difficult due to the varied severity, frequency, and duration of symptoms in depressed individuals. In this study, a correlation between depression levels and behavioral trends of individuals is established through a survey involving around 120 undergraduate students. The survey outcome is analyzed from a psychological viewpoint, and finally some design implications for an automated depression detection and support system are proposed.
['Mashrura Tasnim', 'Rifat Shahriyar', 'Nowshin Nahar', 'Hossain Mahmud']
Intelligent depression detection and support system: Statistical analysis, psychological review and design implication
939,655
Since the demise of the Overnet network, the Kad network has become not only the most popular but also the only widely used peer-to-peer system based on a distributed hash table. It is likely that its user base will continue to grow in numbers over the next few years as, unlike the eDonkey network, it does not depend on central servers, which increases scalability and reliability. Moreover, the Kad network is more efficient than unstructured systems such as Gnutella. However, we show that today's Kad network can be attacked in several ways, which we demonstrate by carrying out several (well-known) attacks against it. The presented attacks could be used either to hamper the correct functioning of the network itself, to censor contents, or to harm other entities in the Internet not participating in the Kad network, such as ordinary web servers. While there are simple heuristics to reduce the impact of some of the attacks, we believe that the presented attacks cannot be thwarted easily in any fully decentralized peer-to-peer system without some kind of centralized certification and verification authority.
['Thomas Locher', 'David Mysicka', 'Stefan Schmid', 'Roger Wattenhofer']
Poisoning the Kad network
228,438
Dynamic topology is one of the main factors influencing network performability. However, it has often been ignored by traditional network performability assessment methods when analyzing large-scale mobile ad hoc networks (MANETs) because of the state explosion problem. In this paper, we address this problem from the perspective of complex networks. A two-layer hierarchical modeling approach is proposed for MANET performability assessment, which can take both the dynamic topology and multi-state nodes into consideration. The lower level is described by Markov reward chains (MRC) to capture the multiple states of the nodes. The upper level is modeled as a small-world network to capture the characteristic path length based on different mobility and propagation models. The hierarchical model can promote the MRC of nodes into a state matrix of the whole network, which avoids state explosion when assessing large-scale networks from the complex-network perspective. In comparison experiments against OPNET simulations of specific cases, the method proposed in this paper shows satisfactory accuracy and efficiency.
['Shuo Zhang', 'Ning Huang', 'Xiaolei Sun', 'Yue Zhang']
A Hierarchical Model for Mobile Ad Hoc Network Performability Assessment
924,056
When built, quantum repeater networks will require classical network protocols to control the quantum operations. However, existing work on repeaters has focused on the quantum operations themselves, with less attention paid to the contents, semantics, ordering and reliability of the classical control messages. In this paper we define and describe our implementation of the classical control protocols. The state machines and packet sequences for the three protocol layers are presented, and operation confirmed by running the protocols over simulations of the physical network. We also show that proper management of the resources in a bottleneck link allows the aggregate throughput of two end-to-end flows to substantially exceed that of a single flow. Our layered architectural framework will support independent evolution of the separate protocol layers.
['Luciano Aparicio', 'Rodney Van Meter', 'Hiroshi Esaki']
Protocol design for quantum repeater networks
308,409
Research on the Impact of Menu Structure of Smart Phones on Dual Task Performance
['Huining Xing', 'Hua Qin', 'Dingding Wang']
Research on the Impact of Menu Structure of Smart Phones on Dual Task Performance
848,896
Third-generation (3G) wireless networks and wireless local area networks possess complementary characteristics. Recently, there has been significant interest in providing algorithms and specifications that enable their inter-operability. In this paper we propose a novel cross-network, cross-layer algorithm that jointly performs 3G resource allocation and ad-hoc mode WLAN routing towards effectively increasing the performance of the 3G system. The metrics used in this joint design ensure that multi-user diversity is exploited without causing user starvation in the 3G system and that the WLAN assistance does not treat any of the mobiles unfairly from a battery-usage point of view. Furthermore, the design attempts to select the WLAN route so that the assistance does not become a major part of the internal WLAN traffic.
['F. Ozan Akgül', 'M. Oguz Sunay']
Energy efficient utilization of IEEE 802.11 hot spots in 3g wireless packet data networks
101,096
This paper explores the use of quadratic mutual information as a similarity criterion for dense, non-rigid registration of medical images. Quadratic mutual information between two random variables has recently been proposed as the Euclidean distance between the joint density and the product of the marginals. It has been shown to have a smooth sample estimator that can be computed without having to use numerical approximation techniques for the integral over the densities. In this paper, we derive Euler-Lagrange equations for optimizing quadratic mutual information in a variational framework. We then obtain a dense deformation field for registering 3D tomography images. Our results demonstrate the applicability of this criterion for such a task, and yield ground for further analysis and research.
['Abhishek Singh', 'Ying Zhu', "Christophe Chefd'hotel"]
A variational approach for optimizing quadratic mutual information for medical image registration
513,805
For a wireless multi-tier heterogeneous network with orthogonal spectrum allocation across tiers, we optimize the association probability and the fraction of spectrum allocated to each tier so as to maximize rate coverage. In practice, the association probability can be controlled using a biased received signal power. The optimization problem is non-convex and we are forced to explore locally optimal solutions. We make two contributions in this paper: first, we show that there exists a relation between the first derivatives of the objective function with respect to each of the optimization variables. This can be used to simplify numerical solutions to the optimization problem. Second, we explore the optimality of the intuitive solution that the fraction of spectrum allocated to each tier should be equal to the tier association probability. We show that, in this case, a closed-form solution exists. Importantly, our numerical results show that there is essentially zero performance loss. The results also illustrate the significant gains possible by jointly optimizing the user association and the resource allocation.
['Sanam Sadr', 'Raviraj S. Adve']
Tier Association Probability and Spectrum Partitioning for Maximum Rate Coverage in Multi-Tier Heterogeneous Networks
71,638
Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. Also, it provides a comprehensive survey on the power system applications that have benefited from the powerful nature of PSO as an optimization technique. For each application, technical details that are required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions are also discussed.
['Y. del Valle', 'Ganesh K. Venayagamoorthy', 'Salman Mohagheghi', 'J.C. Hernandez', 'Ronald G. Harley']
Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems
509,595
Existing seismic instrumentation systems do not yet have the capability to recover the physical dynamics with sufficient resolution in real time. Currently, seismologists use a centralised tomography inversion algorithm, which requires manual data gathering from each station and months to generate a tomography. To address these issues, a distributed approach is required that can avoid data collection from a large number of sensors and perform in-network imaging for real-time tomography. In this paper, we present a distributed adaptive mesh refinement (AMR) solution to invert seismic tomography over a large dense network, which avoids centralised computation and expensive data collection. Our approach first discretises the data and filters them using an adaptive mesh to make the system well-conditioned. The system is implemented and evaluated using a CORE emulator, and we show that the filtered well-conditioned system has a lower dimension and improved convergence rate compared to the original system, thereby decreasing the communicati...
['Goutham Kamath', 'Lei Shi', 'Edmond Chow', 'Wen Zhan Song']
Distributed tomography with adaptive mesh refinement in sensor networks
936,367
This paper proposes an innovative algorithm to find the two's complement of a binary number. The proposed method works in logarithmic time (O(log N)) instead of the worst-case linear time (O(N)) in which a carry has to ripple all the way from the LSB to the MSB. The proposed method also allows for more regularly structured logic units, which can be easily modularized and naturally extended to any word size. Our synthesis results show that our method achieves up to 2.8× performance improvement and up to 7.27× power savings compared to the conventional method.
['Jung-Yup Kang', 'Jean-Luc Gaudiot']
A logarithmic time method for two's complementation
443,670
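A minimal software sketch of the idea summarized above, assuming the usual prefix-network formulation rather than the authors' actual circuit: the rippling "+1" of the conventional invert-and-increment method is replaced by a log-depth prefix-OR, so result bit i equals input bit i XORed with the OR of all lower bits. The function name and the 8-bit width are illustrative assumptions.

```python
def twos_complement_log_time(x: int, width: int = 8) -> int:
    """Two's complement of an unsigned `width`-bit value using a logarithmic
    number of doubling steps instead of a rippling carry: below and including
    the lowest set bit the result equals the input, above it every bit is
    inverted, and a prefix-OR tells each position which case applies."""
    mask = (1 << width) - 1
    x &= mask

    # prefix_or bit i == OR of bits 0..i of x, built with log2(width) shifts.
    prefix_or = x
    shift = 1
    while shift < width:
        prefix_or |= (prefix_or << shift) & mask
        shift <<= 1

    # Flip bit i of x iff some lower bit of x is set (bit i-1 of prefix_or).
    return (x ^ ((prefix_or << 1) & mask)) & mask


if __name__ == "__main__":
    assert all(twos_complement_log_time(v) == (-v) & 0xFF for v in range(256))
    print("matches (-v) mod 2**8 for every 8-bit input")
```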
We present the first NoC-based hardware implementation of Neural Coding (NC), a new approach that opens promising perspectives for the design of associative memories and learning machines. We first propose optimized architectures of memories and processing elements that allow for an efficient distributed implementation. We then introduce different NoC architectures to interconnect all elements, providing the required scalability and taking advantage of parallel transfer opportunities. Performance, cost, and energy consumption tradeoffs of various NoC solutions are compared and discussed. Based on previous implementation results, we run SystemC-TLM simulations that validate the behavior of the algorithm and the efficiency of the dedicated architecture. This work demonstrates that this architecture can meet the expected requirements in terms of scalability and hierarchy, and consequently that NC-based architectures are compatible with efficient hardware implementations of a new and promising model of associative memories.
['Jean-Philippe Diguet', 'Marius Strum', 'Nicolas Le Griguer', 'Lydie Caetano', 'Martha Johanna Sepulveda']
Scalable NoC-based architecture of neural coding for new efficient associative memories
82,568
High penetration of distributed energy resources presents significant challenges and provides emerging opportunities for voltage regulation in power distribution systems. Advanced power-electronics technology makes it possible to control the reactive power output from these resources, in order to maintain a desirable voltage profile. This paper develops a local control framework to account for limits on reactive power resources using the gradient projection optimization method. Requiring only local voltage measurements, the proposed design does not suffer from the stability issues of (de-)centralized approaches caused by communication delays and noises. Our local voltage design is shown to be robust to potential asynchronous control updates among distributed resources in a "plug-and-play" distribution network.
['Hao Zhu', 'Na Li']
Asynchronous local voltage control in power distribution networks
754,702
Whole heart segmentation of 3D ultrasound (US), also referred to as echocardiography or simply echo, is useful in cardiac functional analysis to obtain quantitative diagnostic information about the heart. However, characteristics of US imaging such as a limited field of view, artifacts, and inconsistent intensity distribution make automated approaches a challenge. In this paper, we present a framework for automatic whole heart segmentation from 3D echo. This work is motivated by the new technology of compounding 3D echo from 2D matrix array transducers. We propose to use a registration-based segmentation framework and adopt a new similarity measure combining local phase, intensity information and local geometry for registration. The experimental results demonstrate that the proposed method achieved an accuracy of 6.4% volume difference against the gold standard for left ventricle segmentation and an average accuracy of 14% for segmentation of all four chambers and the myocardium.
['X. Zhuang', 'Cheng Yao', 'Y. Ma', 'David J. Hawkes', 'Graeme P. Penney', 'Sebastien Ourselin']
Registration-based propagation for whole heart segmentation from compounded 3D echocardiography
202,869
A cost-effective computer supported collaborative learning for online education
['Charalambos S. Christou', 'Despo Ktoridou', 'Karimov Zafar']
A cost-effective computer supported collaborative learning for online education
909,882
A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.
['Mark A. Motter']
Predictive multiple model switching control with the self-organizing map
264,112
The robustness properties of integral sliding-mode controllers are studied. This note shows how to select the projection matrix in such a way that the Euclidean norm of the resulting perturbation is minimal. It is also shown that when the minimum is attained, the resulting perturbation is not amplified. This selection is particularly useful if integral sliding-mode control is to be combined with other methods to further robustify against unmatched perturbations. H∞ control is taken as a special case. Simulations support the general analysis and show the effectiveness of this particular combination.
['Fernando Castaños', 'Leonid Fridman']
Analysis and design of integral sliding manifolds for systems with unmatched perturbations
124,680
For an increasing number of modern database applications, efficient support of similarity search becomes an important task. Along with the complexity of the objects such as images, molecules and mechanical parts, also the complexity of the similarity models increases more and more. Whereas algorithms that are directly based on indexes work well for simple medium-dimensional similarity distance functions, they do not meet the efficiency requirements of complex high-dimensional and adaptable distance functions. The use of a multi-step query processing strategy is recommended in these cases, and our investigations substantiate that the number of candidates which are produced in the filter step and exactly evaluated in the refinement step is a fundamental efficiency parameter. After revealing the strong performance shortcomings of the state-of-the-art algorithm for k-nearest neighbor search [Korn et al. 1996], we present a novel multi-step algorithm which is guaranteed to produce the minimum number of candidates. Experimental evaluations demonstrate the significant performance gain over the previous solution, and we observed average improvement factors of up to 120 for the number of candidates and up to 48 for the total runtime.
['Thomas Seidl', 'Hans-Peter Kriegel']
Optimal multi-step k-nearest neighbor search
167,304
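A compact sketch of the filter/refinement loop described above, under the usual assumptions for this family of algorithms (candidates arrive sorted by a lower-bounding filter distance, and the exact distance is the expensive step); the function names and the toy lower bound are illustrative, not the paper's code.

```python
from typing import Any, Callable, Iterable, List, Tuple

def multistep_knn(candidates: Iterable[Tuple[float, Any]],
                  exact_dist: Callable[[Any], float],
                  k: int) -> List[Tuple[float, Any]]:
    """Filter/refinement k-NN: `candidates` yields (lower_bound, object) pairs
    sorted by lower bound; `exact_dist` is the expensive exact distance.
    Stops as soon as the next lower bound exceeds the k-th best exact
    distance found so far, so no unnecessary candidate is refined."""
    best: List[Tuple[float, Any]] = []          # (exact distance, object), kept sorted
    for lb, obj in candidates:
        if len(best) >= k and lb > best[k - 1][0]:
            break                               # remaining candidates cannot improve the result
        best.append((exact_dist(obj), obj))     # refinement step
        best.sort(key=lambda t: t[0])
        del best[k:]                            # keep only the k best so far
    return best

# Toy usage: 2-D points, exact distance is Euclidean to the query,
# the filter uses the first coordinate only (a valid lower bound).
query = (0.0, 0.0)
points = [(1.0, 5.0), (2.0, 0.5), (0.5, 0.2), (3.0, 0.1)]
cands = sorted((abs(p[0] - query[0]), p) for p in points)
euclid = lambda p: ((p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2) ** 0.5
print(multistep_knn(cands, euclid, k=2))
```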
Cortical Color and the Cognitive Sciences
['Berit Brogaard', 'Dimitria Electra Gatzia']
Cortical Color and the Cognitive Sciences
960,798
Three Curriculum Maturing Cycles in Academic Curriculum Management Systems
['Kai Pata', 'Kairit Tammets', 'Vladimir Tomberg', 'Mohammad AL-Smadi', 'Mart Laanpere']
Three Curriculum Maturing Cycles in Academic Curriculum Management Systems
863,158
This article addresses certain Participatory Design (PD)-related aspects of the project OurCity that took place in Meri-Rastila, a multicultural suburb in East Helsinki, Finland. The aim of OurCity was to democratize design processes and to empower local residents to influence the redevelopment of their area. PD processes were a key component of the OurCity project and its activities, particularly in relation to the process of drafting an Alternative Master Plan (AMP) for the area. The plan competed with, and lost by a narrow margin to, the plan drafted by the Helsinki City Planning Department. The scope of PD was underestimated because AMP, the design object, was envisioned in isolation from the participatory process it entailed. Had PD been presented as crucial to the process, AMP would have had a greater impact. In this article, we argue that it is necessary to make PD processes more visible in the end products of participatory planning. We base this argument on firsthand experience as members of the OurCity team and on an analysis of printed media and digital texts.
['Mariana Salgado', 'Michail Galanakis']
"... so what?": limitations of participatory design on decision-making in urban planning
287,829
Preserving our cultural heritage -- from fragile historic textiles such as national flags to heavy and seemingly solid artifacts recovered from 9/11 -- requires careful monitoring of the state of the artifact and environmental conditions. The standard for a web-accessible textile fiber database is established with Dublin Core elements to address the needs of textile conservation. Dynamic metadata and classification standards are also incorporated to allow flexibility in recording changing conditions and deterioration over the life of an object. Dublin Core serves as the basis for data sets of information about the changing state of artifacts and environmental conditions. With common metadata standards, such as Dublin Core, this critical preservation knowledge can be utilized by a range of scientists and conservators to determine optimum conditions for slowing the rate of deterioration, as well as for comparative use in the preservation of other artifacts.
['Fenella G. France', 'Michael B. Toth']
Developing cultural heritage preservation databases based on Dublin Core data elements
307,529
A new algorithm for designing multilayer feedforward neural networks with single powers-of-two weights is presented. By applying this algorithm, the digital hardware implementation of such networks becomes easier as a result of the elimination of multipliers. This proposed algorithm consists of two stages. First, the network is trained by using the standard backpropagation algorithm. Weights are then quantized to single powers-of-two values, and weights and slopes of activation functions are adjusted adaptively to reduce the sum of squared output errors to a specified level. Simulation results indicate that the multilayer feedforward neural networks with single powers-of-two weights obtained using the proposed algorithm have generalization performance similar to that of the original networks with continuous weights.
['C.Z. Tang', 'Hon Keung Kwan']
Multilayer feedforward neural networks with single powers-of-two weights
228,188
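The quantization step described above can be sketched as follows; the exponent range (min_exp, max_exp) is a hypothetical parameter, and the paper's subsequent adaptive adjustment of weights and activation-function slopes is not reproduced here.

```python
import numpy as np

def quantize_to_powers_of_two(w: np.ndarray,
                              min_exp: int = -8, max_exp: int = 0) -> np.ndarray:
    """Map each weight to the nearest (in log scale) signed single power of two,
    sign(w) * 2**e with e clipped to [min_exp, max_exp]; zeros stay zero.
    Multiplying an activation by such a weight reduces to an arithmetic shift,
    which is the hardware motivation stated in the abstract."""
    q = np.zeros_like(w, dtype=float)
    nz = w != 0
    e = np.clip(np.rint(np.log2(np.abs(w[nz]))), min_exp, max_exp)
    q[nz] = np.sign(w[nz]) * np.exp2(e)
    return q

# A handful of trained weights before/after quantization.
w = np.array([0.37, -0.052, 0.9, -1.4, 0.0])
print(quantize_to_powers_of_two(w))   # [ 0.5    -0.0625  1.     -1.      0.    ]
```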
The active appearance model (AAM) is a powerful generative method for modeling and registering deformable objects. The project-out inverse compositional (POIC) method is one of the most common methods for AAM fitting and is also the fastest known method to date. However, it does not work well when the initialization is far from the optimum. In this paper, a two-stage fitting procedure based on the POIC method is presented: in the first stage, we treat the model as a rigid body and roughly fit it to obtain a large capture range; in the second stage, we treat the model as deformable and solve for local deformations to achieve accurate results. In other words, the algorithm first updates the global similarity transform parameters and then updates the global similarity transform parameters and the linear shape parameters simultaneously. Experimental results demonstrate that our method achieves a larger capture range with better convergence rates.
['Mingcai Zhou', 'Yangsheng Wang', 'Xiaoyan Wang', 'Xuetao Feng']
A Two-Stage Approach for AAM Fitting
315,437
Due to the worldwide degradation of river catchments and their related aquatic resources, the development of integrated management strategies has become an important issue. Tools and processes are required that support the integration of science, the needs of stakeholders and the local population, within existing political frameworks to achieve sustainable catchment development. In this paper the potential of qualitative reasoning (QR) models for sustainable catchment management is evaluated by students and domain experts. This evaluation yields promising results. The evaluated QR models were found to represent complex knowledge in an understandable manner. Most people ‘largely or fully agreed’ that the presented QR models may significantly contribute to the understanding of students and stakeholders concerning which entities and processes drive a sustainable development of a riverine landscape, and therefore enhance their decision-making capabilities. Due to its potential to integrate quantitative and qualitative knowledge, to build causal models, and to run dynamic simulations, the presented QR approach has great potential to become an important contribution to integrated catchment management at multiple levels of the implementation process (such as education, decision-making, social learning, integration of different scientific disciplines, and communication).
['Andreas Zitek', 'S. Schmutz', 'S. Preis', 'Paulo Salles', 'Bert Bredeweg', 'S. Muhar']
Evaluating the potential of qualitative reasoning models to contribute to sustainable catchment management
21,453
The objective of this paper is to measure total factor productivity growth in a panel of sheep farms in Greece and assess the relative contribution of technical change, technical efficiency change and scale efficiency change in observed productivity growth. Such decomposition can be useful in planning well defined policies which can support the sector’s sustainable development via the optimization of input/output use. A stochastic frontier production function approach is adopted and maximum likelihood is used to estimate the parameters. The data used for the econometric estimation are obtained from the Greek Farm Accounting Data Network for the period 1997–2002. TFP has been growing in the sector but at a diminishing rate. The major determinant of productivity growth is technical change, which has been shifting the frontier by 2.4% on average during this period, but is counteracted to some extent by technical and scale inefficiency each reducing TFP growth by about 0.3% p.a.
['Katerina Melfou', 'Athanasios Theocharopoulos', 'Evangelos Papanagiotou']
Assessing productivity change with SFA in the sheep sector of Greece
30,601
Exploring User Data From a Game-like Math Tutor: A Case Study in Causal Modeling.
['Dovan Rai', 'Joseph E. Beck']
Exploring User Data From a Game-like Math Tutor: A Case Study in Causal Modeling.
741,939
Although increasingly sophisticated algorithms have been proposed to decompose intramuscular electromyography signals into the concurrent activities of individual motor units (MUs), the human operator is still able to improve decomposition results by visual inspection. The rationale for this paper was to combine components from previous decomposition procedures in an expert systems approach utilizing fuzzy logic and attempting to replicate the thought process of an accomplished decomposer in order to minimize the user interaction subsequently needed to enhance decomposition results. The decomposition procedure is discussed and examples are given of the type of information it can yield. The method has been used to identify the discharge activities of up to 15 MUs with up to 95% accuracy.
['Zeynep Erim', 'Winsean Lin']
Decomposition of Intramuscular EMG Signals Using a Heuristic Fuzzy Expert System
242,981
Development of the Stretcher with the Vibration Isolator Using Nonlinear-Structure
['Yuki Iwano', 'Satoru Horai', 'Koichi Osuka', 'Hisanori Amano']
Development of the Stretcher with the Vibration Isolator Using Nonlinear-Structure
981,233
Iterative Gain Enhancement in an Algorithmic ADC
['Timothy A. Monk', 'Paul J. Hurst', 'Stephen H. Lewis']
Iterative Gain Enhancement in an Algorithmic ADC
701,504
Verification of Concurrent Programs Using Trace Abstraction Refinement
['Franck Cassez', 'Frowin Ziegler']
Verification of Concurrent Programs Using Trace Abstraction Refinement
633,322
In this paper we present a balun low noise amplifier (LNA) in which the noise figure (NF) and power consumption are reduced by using a feedback biasing structure. The circuit is based on a conventional wideband balun LNA with noise cancellation. We propose to replace the typical current source of the CG stage by a transistor that establishes a feedback loop in that stage. This adds a degree of freedom which can be used to vary the CG transistor's transconductance, making it different from the typical value of 20 mS. Thus, we can increase the ratio between the gm of the CG and CS stages, which reduces the LNA NF, area, and power consumption, when compared with conventional circuits. Simulation results, with 65 nm CMOS transistors at 1.2 V supply, show that the LNA bandwidth is 3.4 GHz, the voltage gain is 21.8 dB, and the NF is less than 2.5 dB, with IIP3 above −5 dBm. The power dissipation is 4.3 mW. Another design, focused on linearity, has 18.6 dB voltage gain, NF below 2.9 dB and IIP3 above +3 dBm, over a 5 GHz bandwidth, with power consumption of 8.5 mW.
['Miguel Fernandes', 'Luis B. Oliveira', 'João Goes']
Wideband noise cancelling balun LNA with feedback biasing
874,412
The power of interactive 3D graphics, immersive displays, and spatial interfaces is still under-explored in domains where the main target is to enhance creativity and emotional experiences. This article presents a set of works that attempt to extend the frontiers of music creation as well as the experience of audiences attending digital performances. The goal is to connect sounds to interactive 3D graphics that musicians can interact with and the audience can observe.
['Florent Berthaut', 'Martin Hachet']
Spatial Interfaces and Interactive 3D Environments for Immersive Musical Performances
896,960
In the paper, cooperative two-stage network games are studied. At the first stage of the game, players form a network, while at the second stage players choose their behaviors according to the network realized at the first stage. As a cooperative solution concept in the game, the core is considered. It is proved that some imputations from the core are time inconsistent, whereas one can design for them a time-consistent imputation distribution procedure. Moreover, the strong time consistency problem is also investigated.
['Hongwei Gao', 'Leon A. Petrosyan', 'Han Qiao', 'Artem Sedakov']
Cooperation in two-stage games on undirected networks
963,140
Flow Scheduling for End-Host Multihoming
['Nathanael Thompson', 'Guanghui He', 'Haiyun Luo']
Flow Scheduling for End-Host Multihoming
59,502
We analyze the outage probability of dual-hop full-duplex (FD) decode-and-forward (DF) relaying for an orthogonal frequency-division multiplexing (OFDM) system in the presence of in-phase and quadrature-phase (I/Q) imbalance (IQI). We derive tight analytical approximations that quantify the outage probability's functional dependence on the IQI level and the residual loopback self-interference (LSI) average power level. In addition, we derive the condition at which direct transmission outperforms FD-DF relay-assisted transmission in the presence of IQI. Furthermore, we investigate an opportunistic relaying (OR) approach and demonstrate its robustness against the detrimental effects of IQI and residual LSI. Our numerical results confirm the accuracy of our analysis.
['Mohamed Mokhtar', 'Naofal Al-Dhahir', 'Ridha Hamila']
OFDM Full-Duplex DF Relaying Under I/Q Imbalance and Loopback Self-Interference
879,139
For high-dimensional and massive data sets, a relevant-subspace-based contextual outlier detection algorithm is proposed. Firstly, the relevant subspace, which can effectively describe the local distribution of the various data sets, is redefined by using the local sparseness of attribute dimensions. Secondly, a local outlier factor calculation formula in the relevant subspace is defined with the probability density of local data sets, and the formula can effectively reflect the outlier degree of a data object that does not obey the distribution of the local data set in the relevant subspace. Thirdly, the attribute dimensions constituting the relevant subspace and the local outlier factor are defined as the contextual information, which can improve the interpretability and comprehensibility of outliers. Fourthly, the N data objects with the greatest local outlier factor values are selected as contextual outliers. Finally, experimental results on UCI data sets validate the effectiveness of the algorithm.
['Jifu Zhang', 'Xiaolong Yu', 'Yonghong Li', 'Sulan Zhang', 'Yaling Xun', 'Xiao Qin']
A relevant subspace based contextual outlier mining algorithm
637,265
Spectral Color Management using Interim Connection Spaces based on Spectral Decomposition.
['Shohei Tsutsumi', 'Mitchell R. Rosen', 'Roy S. Berns']
Spectral Color Management using Interim Connection Spaces based on Spectral Decomposition.
800,676
The advent of computer-supported collaborative work (CSCW) systems significantly impacts group collaboration. This paper reviews three CSCW systems: Lotus Notes, Xerox DocuShare, and SevenMountains Integrate. We focus on their different capabilities and uses in distributed group projects.
['Deniz Eseryel', 'Radha Ganesan', 'Gerald S. Edmonds']
Review of Computer-Supported Collaborative Work Systems
449,992
Recently, Sun et al. proposed a practical E-mail protocol providing perfect forward secrecy, but Dent showed that their protocol does not actually provide perfect forward secrecy. In this letter, we propose two new robust E-mail protocols which indeed guarantee perfect forward secrecy.
['Bum Han Kim', 'Jae Hyung Koo', 'Dong Hoon Lee']
Robust E-mail protocols with perfect forward secrecy
93,789
We propose a deterministic vector channel simulation model for generating fading waveforms that satisfy not only rigorous temporal correlation but also arbitrary spatial correlation by means of Doppler phase difference sampling. The proposed method is more efficient than the conventional pseudonoise (PN) filtered Gaussian model with a coloring process for evaluating the laboratory-level performance of mobile communication systems employing adaptive arrays or space diversity.
['Jong-Kyu Han', 'Jong-Gwan Yook', 'Han-Kyu Park']
A deterministic channel simulation model for spatially correlated Rayleigh fading
440,652
The intention of this paper is to explore the problem of pinning sampled-data synchronization of coupled reaction-diffusion neural networks with added inertia and time-varying delays. Through a proper variable substitution, the original system is transformed into first-order differential equations. Then, by constructing a suitable Lyapunov-Krasovskii functional (LKF), which uses more information about the delay bounds, global asymptotic synchronization criteria for the considered system are established in the form of LMIs. The obtained LMIs can be easily checked for feasibility using any of the available software tools. Finally, two examples are provided to demonstrate the effectiveness of the derived criteria.
['S. Dharani', 'R. Rakkiyappan', 'Ju H. Park']
Pinning sampled-data synchronization of coupled inertial neural networks with reaction-diffusion terms and time-varying delays
939,181
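The "proper variable substitution" is only named in the abstract; a generic version of the standard reduction for inertial networks (not necessarily the exact system studied in the paper, and with $\xi_i$ a freely chosen constant) looks as follows.

```latex
% Illustrative second-order (inertial) node dynamics:
%   \ddot{x}_i = -a_i \dot{x}_i - b_i x_i + \sum_j c_{ij} f_j(x_j) + u_i .
% Substituting y_i = \dot{x}_i + \xi_i x_i turns it into first-order equations:
\begin{aligned}
\dot{x}_i &= -\xi_i x_i + y_i,\\
\dot{y}_i &= -\bigl(b_i - \xi_i\,(a_i - \xi_i)\bigr)\, x_i
             - (a_i - \xi_i)\, y_i
             + \sum_j c_{ij} f_j(x_j) + u_i ,
\end{aligned}
% to which a Lyapunov-Krasovskii functional can then be applied.
```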
We study a problem where a group of agents has to decide how a joint reward should be shared among them. We focus on settings where the share that each agent receives depends on the subjective opinions of its peers concerning that agent's contribution to the group. To this end, we introduce a mechanism to elicit and aggregate subjective opinions as well as to determine agents' shares. The intuition behind the proposed mechanism is that each agent who believes that the others are telling the truth has its expected share maximized to the extent that it is well-evaluated by its peers and that it is truthfully reporting its opinions. Under the assumptions that agents are Bayesian decision-makers and that the underlying population is sufficiently large, we show that our mechanism is incentive-compatible, budget-balanced, and tractable. We also present strategies to make this mechanism individually rational and fair.
['Arthur Carvalho', 'Kate Larson']
A truth serum for sharing rewards
364,234
This paper presents an experiment designed to investigate the impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. Participants were paired by gender and were randomly assigned to a CAVE-like system or a head-mounted display. Both were represented by a humanoid avatar in the shared 3D environment. The visual appearance of the avatars was either basic and genderless (like a "match-stick" figure) or more photorealistic and gender-specific. Similarly, eye gaze behavior was either random or inferred from voice, to reflect different levels of behavioral realism. Our comparative analysis of 48 post-experiment questionnaires confirms earlier findings from non-immersive studies using semi-photorealistic avatars, where inferred gaze significantly outperformed random gaze. However, responses to the lower-realism avatar are adversely affected by inferred gaze, revealing a significant interaction effect between appearance and behavior. We discuss the importance of aligning visual and behavioral realism for increased avatar effectiveness.
['Maia Garau', 'Mel Slater', 'Vinoba Vinayagamoorthy', 'Andrea Brogni', 'Anthony Steed', 'M. Angela Sasse']
The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment
61,245
Linux based embedded node for capturing, compression and streaming of digital audio and video
['Francisco J. Suárez', 'Juan C. Granda', 'Julio Molleda', 'Daniel F. García']
Linux based embedded node for capturing, compression and streaming of digital audio and video
546,933
Association rule mining is a helpful tool for discovering relations between items in transactions. In some scenarios, however, it is also interesting to consider not only the presence of items but also their absence. In this paper, we introduce a methodology to obtain fuzzy association rules involving absent items. Additionally, our proposal is based on restriction level sets, a recent representation of fuzziness that extends that of fuzzy sets and introduces some new operators, addressing some misleading results obtained from the usual fuzzy operators, for example negation. In our methodology, we define new measures for fuzzy association rules as RL-numbers, and we propose a new way of summarizing the resulting set of fuzzy association rules, distributed across restriction levels.
['Carlos Molina', 'Daniel Sánchez', 'José-María Serrano', 'M. Amparo Vila']
Finding fuzzy association rules via restriction levels
332,571
Statistical Delay QoS Protection for Primary Users in Cooperative Cognitive Radio Networks
['Yichen Wang', 'K. J. Ray Liu']
Statistical Delay QoS Protection for Primary Users in Cooperative Cognitive Radio Networks
382,221
Dynamic programming is a general technique for solving optimization problems. It is based on the division of problems into simpler subproblems that can be computed separately. In this paper, we show that Datalog with aggregates and other nonmonotonic constructs can express classical dynamic programming optimization problems in a natural fashion, and then we discuss the important classes of queries and applications that benefit from these techniques.
['Sergio Greco']
Dynamic programming in Datalog with aggregates
536,697
Visualization techniques for complex data are a workhorse of modern scientific pursuits. The goal of visualization is to embed high-dimensional data in a low-dimensional space while preserving structure in the data relevant to exploratory data analysis such as clusters. However, existing visualization methods often either fail to separate clusters due to the crowding problem or can only separate clusters at a single resolution. Here, we develop a new approach to visualization, tree preserving embedding (TPE). Our approach uses the topological notion of connectedness to separate clusters at all resolutions. We provide a formal guarantee of cluster separation for our approach that holds for finite samples. Our approach requires no parameters and can handle general types of data, making it easy to use in practice.
['Albert D. Shieh', 'Tatsunori Hashimoto', 'Edoardo M. Airoldi']
Tree preserving embedding
605,808
Banking service is part and parcel of modern human society. Due to the proliferation of communication technology, SMS-banking and m-banking are gaining immense popularity in place of the conventional paper-based banking system. However, regardless of the nature of transactions, maintaining security is a major concern in this sector. In this paper, a short message service (SMS) based m-banking protocol under GSM technology is presented. With a view to ensuring a high level of security during client authentication and data transmission, a digital watermarking technique is introduced in the proposed SMS-banking scheme. Because of the widespread use of cellular phones, the proposed scheme offers ease of implementation in conjunction with a high level of security.
['Md. Nazmus Sakib', 'A B M Rafi Sazzad', 'Syed Bahauddin Alam', 'Celia Shahnaz', 'Shaikh Anowarul Fattah']
Security Enhancement Protocol in SMS-Banking using Digital Watermarking Technique
468,993
A comparative study for chest radiograph image retrieval using binary texture and deep learning classification.
['Yaron Anavi', 'Ilya Kogan', 'Elad Gelbart', 'Ofer Geva', 'Hayit Greenspan']
A comparative study for chest radiograph image retrieval using binary texture and deep learning classification.
676,506
Social media has provided organizations a massive platform to reach the masses and is considered a highly effective tool for organizations to connect and promote their brands and messages all over the world, with huge potential to attract clientele with virtually no or very low investment. It has also given organizations a collaborative innovation medium where shared wisdom can help them bring innovation to their products and services. Social media reach can spread positive sentiment about an organization's products or services very fast to a large number of people. However, social media is a double-edged sword. If it has the potential to boost an organization's business, it can also propagate a negative image, and that too at very high speed and to a very large user segment. A single negative comment on an organization's blog or profile, or a complaint by a customer about an organization's product or services on social media, may have a negative and long-lasting impact on an organization's and/or a brand's reputation. Hence, social media can make or tarnish an organization's sustainability. Organizations need to be on high alert at all times. Staying away from social media is not a solution, as no organization is aloof from the virtual world of social media. Anyone can put anything on the medium, and information spreads so fast that no one can control it. Though organizations can benefit immensely from the use of social media, there are chances that dissatisfied internal employees, customers, or even competitors can spoil an organization's image by putting wrong or unwanted information on social media. Competitors may hire specialized hackers, or individual hackers may hijack an organization's media campaign and convey a totally opposite message, to tarnish the organization's brand image and blackmail it. Organizations need to keep a hawk's eye on such brand assassins and their social activity and take countermeasures very fast so as to defeat such wrongdoers. To succeed in a highly technologically advanced environment, organizations need to make use of social media to their own advantage. They need to conquer the dark side of social media by remaining vigilant at all times and adopting strategies that not only counter the malicious intentions of wrongdoers but also prove to be effective marketing strategies for promoting the organization's interests. This paper is an attempt to understand the role of social media in making or tarnishing an organization's image and to evolve strategies to help organizations conquer the dark side of social media.
['Jitender Sharma']
Strategies to Overcome Dark Side of Social Media for Organizational Sustainability
797,443
The aim of this paper is to find an accurate and efficient algorithm for evaluating the summation of large sets of floating-point numbers. We present a new representation of the floating-point number system in which a number is represented as a linear combination of integers whose coefficients are powers of the base of the floating-point system. The approach allows us to build an accurate floating-point summation algorithm based on the fact that no rounding error occurs when two integers are summed or a floating-point number is multiplied by a power of the base of the floating-point system. The proposed algorithm seems to be competitive in terms of computational effort and, under some assumptions, the computed sum is highly accurate. With such assumptions, which are less conservative in practical applications, we prove that the relative error of the computed sum is bounded by the unit roundoff.
['Alfredo Eisinberg', 'Giuseppe Fedele']
Accurate floating-point summation: a new approach
247,151
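The paper's own fixed-width algorithm is not reproduced here; the sketch below only illustrates the central observation (integer mantissas scaled by powers of the base can be accumulated without rounding error), leaning on Python's arbitrary-precision integers.

```python
from math import ldexp

def exact_sum(xs):
    """Sum floats with no intermediate rounding: write each value as an
    integer mantissa times a power of two (the floating-point base),
    accumulate the integer parts exactly with Python big ints, and round
    only once at the very end."""
    parts = []
    min_e = 0
    for x in xs:
        m, d = x.as_integer_ratio()      # x == m / d, with d an exact power of two
        e = -(d.bit_length() - 1)        # denominator 2**k  ->  exponent -k
        parts.append((m, e))
        min_e = min(min_e, e)
    # Rescale every mantissa to the common exponent; shifts and adds are exact.
    total = sum(m << (e - min_e) for m, e in parts)
    return ldexp(float(total), min_e)    # single final rounding (may overflow for huge totals)

print(sum([1e16, 1.0, -1e16]))        # 0.0  (naive summation loses the 1.0)
print(exact_sum([1e16, 1.0, -1e16]))  # 1.0
```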
In this paper, a session resumption-based end-to-end security scheme for the healthcare Internet of Things (IoT) is proposed. The proposed scheme is realized by employing a certificate-based DTLS handshake between end-users and smart gateways as well as by utilizing the DTLS session resumption technique. Smart gateways enable the sensors to no longer need to authenticate and authorize remote end-users by handing over the necessary security context. The session resumption technique enables end-users and medical sensors to communicate directly without the need for establishing the communication from the initial handshake. Session resumption uses an abbreviated form of the DTLS handshake and requires neither certificate-related nor public-key functionalities. This alleviates some of the burden on medical sensors, which no longer need to perform expensive operations. The energy-performance characteristics of the proposed scheme are evaluated by developing a remote patient monitoring prototype based on healthcare IoT. The evaluation results show that our scheme is about 97% and 10% faster than certificate-based and symmetric-key-based DTLS, respectively. Also, certificate-based DTLS consumes about 2.2X more RAM and 2.9X more ROM resources than required by our scheme, while our scheme and symmetric-key-based DTLS have almost similar RAM and ROM requirements. The security analysis reveals that the proposed scheme fulfills the requirements of end-to-end security and provides a higher security level than related approaches found in the literature. Thus, the presented scheme is a well-suited solution to provide end-to-end security for the healthcare IoT.
['Sanaz Rahimi Moosavi', 'Tuan Nguyen Gia', 'Ethiopia Nigussie', 'Amir-Mohammad Rahmani', 'Seppo Virtanen', 'Hannu Tenhunen', 'Jouni Isoaho']
Session Resumption-Based End-to-End Security for Healthcare Internet-of-Things
576,261
It is proven that an initial value problem of an autonomous system verifying some conditions has exactly one open continuously differentiable trajectory on R, whose limit set is not one point. The proof of the result is based upon continuation methods.
['J. M. Soriano']
Open trajectories
708,614
Background: While the study of the information technology (IT) implementation process and its outcomes has received considerable attention, the examination of pre-adoption and pre-implementation stages of configurable IT uptake appears largely under-investigated. This paper explores managerial behaviour during the periods prior to the effective implementation of a clinical information system (CIS) by two Canadian university multi-hospital centers. Methods: Adopting a structurationist theoretical stance and a case study research design, the processes by which CIS managers’ patterns of discourse contribute to the configuration of the new technology in their respective organizational contexts were longitudinally examined over 33 months. Results: Although managers seemed to be aware of the risks and organizational impact of the adoption of a new clinical information system, their decisions and actions over the periods examined appeared rather to be driven by financial constraints and power struggles between different groups involved in the process. Furthermore, they largely emphasized technological aspects of the implementation, with organizational dimensions being put aside. In view of these results, the notion of ‘rhetorical ambivalence’ is proposed. Results are further discussed in relation to the significance of initial decisions and actions for the subsequent implementation phases of the technology being configured. Conclusions: Theoretically and empirically grounded, the paper contributes to the underdeveloped body of literature on information system pre-implementation processes by revealing the crucial role played by managers during the initial phases of a CIS adoption.
['Charo Rodríguez', 'Marlei Pozzebon']
Understanding managerial behaviour during initial steps of a clinical information system adoption
58,589
Enterprise security is a complex problem. Pure technology-driven development methods are not sufficient to solve a broad range of enterprise security issues. This paper analyzes the complexity of enterprise security and proposes an organization-driven approach to the problem. The approach combines a set of Unified Modeling Language-based approaches to bridge the gap between enterprise security architecture models and security application development models. It allows an enterprise to coordinate security resources from an enterprise point of view, and to develop security applications systematically and efficiently. A comprehensive case study is conducted to illustrate the approach. The study shows that, through the refinement of enterprise security goals, both software goals and software requirements for a security application can be obtained. In particular, a security application is built to support the specification and automated verification of separation-of-duty access policies using the Object Constraint Language and the formal method Alloy.
['Lirong Dai', 'Yan Bai']
An Organization-Driven Approach for Enterprise Security Development and Management
137,673
The PEDANT genome database provides exhaustive annotation of 468 genomes by a broad set of bioinformatics algorithms. We describe recent developments of the PEDANT Web server. The all-new Graphical User Interface (GUI) implemented in Java allows for more efficient navigation of the genome data, extended search capabilities, user customization and export facilities. The DNA and Protein viewers have been made highly dynamic and customizable. We also provide Web Services to access the entire body of PEDANT data programmatically. Finally, we report on the application of association rule mining for automatic detection of potential annotation errors. PEDANT is freely accessible to academic users at http://pedant.gsf.de.
['M. Louise Riley', 'Thorsten Schmidt', 'Irena I. Artamonova', 'Christian Wagner', 'Andreas Volz', 'Klaus Heumann', 'Hans-Werner Mewes', 'Dmitrij Frishman']
PEDANT genome database: 10 years online
425,839
The relationship between the Wiener indices and the topological structures of alkanes is analyzed. The expressions for the Wiener distances between elements of these structures are derived, and the distance matrix is constructed for them; this matrix is naturally called the Wiener distance matrix. The expressions for the Wiener indices of polymers with units of arbitrary structure are obtained.
['E. A. Smolenskii']
The Wiener distance matrix for acyclic compounds and polymers.
574,020
A cooperation incentive scheme based on coalitional game theory for sparse and dense VANETs.
['Di Wu', 'Yanrong Gao', 'Guozhen Tan', 'Limin Sun', 'Jie Liang', 'Jiangchuan Liu']
A cooperation incentive scheme based on coalitional game theory for sparse and dense VANETs.
785,007
The magnetic-field-modulated brushless double-rotor machine (MFM-BDRM), composed of the stator, the modulating ring rotor, and the permanent-magnet (PM) rotor, is a new power-split device for hybrid electric vehicles (HEVs). Compared with traditional double-rotor machines (DRMs), the MFM-BDRM shows more complicated electromechanical energy conversion relations, due to its special operating principle—the magnetic field modulation principle. To analyze the speed relation in the MFM-BDRM, a diagrammatized method is proposed. It shows that the speeds of stator magnetic field, modulating ring rotor, and PM rotor present a collinear speed characteristic. On this basis, the torque relations of stator, modulating ring rotor, and PM rotor are investigated from the view of a conservative lossless system. Then, a lever-balanced torque map is proposed to analyze their torque characteristic. It shows that the torques of stator, modulating ring rotor, and PM rotor can be calculated by the lever balance principle. The power flow map is further proposed to analyze the power flow characteristic among three ports. In addition, comparison of the MFM-BDRM and the planetary gear shows that the MFM-BDRM can be totally equivalent to an electrical machine and a planetary gear, making it gain a great advantage particularly when the MFM-BDRM is used in HEVs. The electromagnetic performance of MFM-BDRM is investigated by a finite-element method, which shows that the MFM-BDRM has advantages of fine sinusoidal back electromotive force and low torque fluctuation. Finally, the speed and torque analysis and FE results are verified by experiment.
['Jingang Bai', 'Ping Zheng', 'Chengde Tong', 'Zhiyi Song', 'Quanbin Zhao']
Characteristic Analysis and Verification of the Magnetic-Field-Modulated Brushless Double-Rotor Machine
83,698
We address the problem of providing compositional hard real-time guarantees in a hierarchy of schedulers. We first propose a resource model to characterize a periodic resource allocation and present exact schedulability conditions for our proposed resource model under the EDF and RM algorithms. Using the exact schedulability conditions, we then provide methods to abstract the timing requirements that a set of periodic tasks demands under the EDF and RM algorithms as a single periodic task. With these abstraction methods, for a hierarchy of schedulers, we introduce a composition method that derives the timing requirements of a parent scheduler from the timing requirements of its child schedulers in a compositional manner such that the timing requirement of the parent scheduler is satisfied, if and only if the timing requirements of its child schedulers are satisfied.
['Insik Shin', 'Insup Lee']
Periodic resource model for compositional real-time guarantees
501,961
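The exact schedulability conditions referenced in the preceding abstract are not reproduced here, but the flavor of such a compositional check can be sketched with a commonly used simplification: compare the EDF demand bound function of implicit-deadline periodic tasks against a linear lower bound on the supply of a periodic resource with period Pi and allocation Theta. The formulas below are assumptions of this simplified illustration, not the paper's exact conditions.

```python
# Simplified compositional EDF check (illustrative assumptions only):
#  - implicit-deadline periodic tasks given as (C_i, T_i)
#  - periodic resource (Pi, Theta) approximated by the linear supply lower
#    bound lsbf(t) = (Theta/Pi) * (t - 2*(Pi - Theta)), clipped at zero.
from math import floor, lcm

def dbf(tasks, t):
    """EDF demand bound of implicit-deadline tasks in any interval of length t."""
    return sum(floor(t / T) * C for (C, T) in tasks)

def lsbf(Pi, Theta, t):
    """Linear lower bound on the resource supply guaranteed over length t."""
    return max(0.0, (Theta / Pi) * (t - 2 * (Pi - Theta)))

def edf_schedulable(tasks, Pi, Theta):
    """Sufficient check: dbf(t) <= lsbf(t) at all deadlines up to the hyperperiod."""
    H = lcm(*(T for (_, T) in tasks))
    checkpoints = sorted({k * T for (_, T) in tasks for k in range(1, H // T + 1)})
    return all(dbf(tasks, t) <= lsbf(Pi, Theta, t) for t in checkpoints)

# Two tasks with total utilization 0.4 served by a half-bandwidth resource.
print(edf_schedulable([(1, 5), (2, 10)], Pi=2, Theta=1))  # True under this bound
```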
A services oriented system for bioinformatics applications on the grid.
['Giovanni Aloisio', 'Massimo Cafaro', 'Italo Epicoco', 'Sandro Fiore', 'Maria Mirto']
A services oriented system for bioinformatics applications on the grid.
817,604
We develop a velocity field tracking-control scheme with proportional integral force feedback to transport and manipulate deformable material with a mobile manipulator. We assume that the deformable object is a damped underactuated mechanical system. The input-to-state stability properties of its zero dynamics are used to derive bounds on the admissible end-effector velocities and accelerations. It is shown both analytically and in simulation that using deformation feedback, we can avoid exciting excessive object deformations.
['Herbert G. Tanner']
Mobile manipulation of flexible objects under deformation constraints
395,543
The problem of scheduling resources for tasks with variable requirements over time can be stated as follows. We are given two sequences of vectors $A = A_1, \ldots, A_n$ and $R = R_1, \ldots, R_m$. Sequence A represents resource availability during n time intervals, where each vector $A_i$ has q elements. Sequence R represents resource requirements of a task during m intervals, where each vector $R_i$ has q elements. We wish to find the earliest time interval i, termed latency, such that for $1 \leq k \leq m$, $1 \leq j \leq q$: $A_{i+k-1}^j \geq R_k^j$, where $A_{i+k-1}^j$ and $R_k^j$ are the jth elements of vectors $A_{i+k-1}$ and $R_k$, respectively. One application of this problem is I/O scheduling for multimedia presentations. The fastest known algorithm to compute the optimal solution of this problem has ${\mathcal{O}}(n\sqrt{m}\log m)$ computation time (Amir and Farach, in Proceedings of the ACM-SIAM symposium on discrete algorithms (SODA), San Francisco, CA, pp. 212-223, 1991; Inf. Comput. 118(1):1-11, 1995). We propose a technique that approximates the optimal solution in linear time: ${\mathcal{O}}(n)$. We evaluated the performance of our algorithm when used for multimedia I/O scheduling. Our results show that 95% of the time, our solution is within 5% of the optimal.
['Martha Escobar-Molano', 'David A. Barrett']
Resource scheduling with variable requirements over time
516,349
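The problem statement in the preceding abstract translates directly into a brute-force baseline: slide the requirement sequence R over the availability sequence A and return the earliest feasible start. The sketch below is that naive O(n·m·q) check, not the authors' ${\mathcal{O}}(n\sqrt{m}\log m)$ exact algorithm or their linear-time approximation; the sample data are made up.

```python
# Naive baseline for the latency problem: earliest interval i such that
# A[i+k-1][j] >= R[k][j] for all 1 <= k <= m, 1 <= j <= q (0-based below).
from typing import List, Optional

def earliest_start(A: List[List[int]], R: List[List[int]]) -> Optional[int]:
    n, m = len(A), len(R)
    for i in range(n - m + 1):
        if all(A[i + k][j] >= R[k][j]
               for k in range(m)
               for j in range(len(R[k]))):
            return i          # 0-based start interval
    return None               # the requirements can never be met

# Availability over 5 intervals of (bandwidth, buffers); the task needs 2 intervals.
A = [[3, 1], [5, 2], [4, 1], [6, 3], [6, 3]]
R = [[4, 2], [4, 1]]
print(earliest_start(A, R))   # 1, since A[1] >= R[0] and A[2] >= R[1]
```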
This paper presents the complete integrated planning, executing and learning robotic agent ROGUE. We describe ROGUE's task planner that interleaves high-level task planning with real world robot execution. It supports multiple, asynchronous goals, suspends and interrupts tasks, and monitors and compensates for failure. We present a general approach for learning situation-dependent rules from execution, which correlates environmental features with learning opportunities, thereby detecting patterns and allowing planners to predict and avoid failures. We present two implementations of the general learning approach, in the robot's path planner and in the task planner. We present empirical data to show the effectiveness of ROGUE's novel learning approach.
['Karen Zita Haigh', 'Manuela M. Veloso']
Planning, Execution and Learning in a Robotic Agent
162,930
The importance of automatic particle tracking for analyzing microscopy image data to discover hidden knowledge of complex biological systems has motivated the development of many tracking approaches. We have developed a new tracking approach that exploits information from multiple image scales and multiple time points, and directly combines the information in the optimization procedure. Our approach allows selecting an appropriate scale of a particle by using temporal information. Many-to-one and one-to-many associations are supported to deal with occlusion and deocclusion of particles. We have successfully applied our approach to real fluorescence microscopy image sequences displaying avian leukosis virus particles and quantified the performance.
['Astha Jaiswal', 'William J. Godinez', 'Maik J. Lehmann', 'Karl Rohr']
Direct combination of multi-scale detection and multi-frame association for tracking of virus particles in microscopy image data
819,422
We present an approach to parallel variational optical-flow computation by using an arbitrary partition of the image plane and iteratively solving related local variational problems associated with each subdomain. The approach is particularly suited for implementations on PC clusters because interprocess communication is minimized by restricting the exchange of data to a lower dimensional interface. Our mathematical formulation supports various generalizations to linear/nonlinear convex variational approaches, three-dimensional image sequences, spatiotemporal regularization, and unstructured geometries and triangulations. Results concerning the effects of interface preconditioning, as well as runtime and communication volume measurements on a PC cluster, are presented. Our approach provides a major step toward real-time two-dimensional image processing using off-the-shelf PC hardware and facilitates the efficient application of variational approaches to large-scale image processing problems.
['Timo Kohlberger', 'Christoph Schnörr', 'Andrés Bruhn', 'Joachim Weickert']
Domain decomposition for variational optical-flow computation
512,296
This paper describes a novel algorithm for sorting binary numbers in hardware, along with a custom VLSI hardware design for the same. For sorting n k-bit binary numbers, our proposed algorithm takes O(n + 2k) time. Sorting is performed by assigning relative ranks to the input numbers. A rank matrix of size n × n is used to store ranks. Each row of the rank matrix corresponds to one of the n numbers, and it stores a single non-zero entry. The position of this entry represents the relative rank of the corresponding number. In the worst case, our algorithm requires n + 2k clock cycles for assigning the final ranks. We start with a condition in which each number has an identical rank. In each clock cycle, ranks are iteratively updated until the final ranks are determined after n + 2k clock cycles. The proposed algorithm is implemented in a 65nm process, using a custom design approach to obtain a fast circuit. Our design is significantly faster than the fastest reported hardware sorting engine, with area performance that is superior for larger numbers.
['Srikanth Alaparthi', 'Kanupriya Gulati', 'Sunil P. Khatri']
Sorting binary numbers in hardware - A novel algorithm and its implementation
239,192
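As a software analogue of the rank-assignment idea only, not the paper's hardware rank-matrix algorithm or its n + 2k-cycle schedule, one can sort by counting for each input how many other inputs must precede it, breaking ties by position, and then scattering values to their rank positions.

```python
# Software illustration of sorting by explicit rank assignment; the hardware
# algorithm in the paper refines ranks iteratively in a rank matrix instead.
from typing import List

def rank_sort(values: List[int]) -> List[int]:
    n = len(values)
    out = [0] * n
    for i, v in enumerate(values):
        # Rank = number of elements that must precede v (ties broken by index).
        rank = sum(1 for j, w in enumerate(values)
                   if w < v or (w == v and j < i))
        out[rank] = v
    return out

print(rank_sort([0b101, 0b001, 0b101, 0b010]))  # [1, 2, 5, 5]
```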
A new theoretical approach for designing a low-noise amplifier (LNA) for ultra-wideband (UWB) radio is presented. Unlike in narrowband systems, the use of the noise figure (NF) performance metric becomes problematic in UWB systems because of the difficulty in defining the signal-to-noise ratio (SNR). When the SNR is defined as the matched filter bound (MFB), the NF measures the degree of degradation caused by the LNA in the achievable receiver performance after the digital decoding process. The optimum matching network that minimizes the NF as defined above is derived. Since realizing the optimum matching network is in general difficult, an approach for designing a practical but suboptimum matching network is also presented. The NF performance of both the optimum and the suboptimum matching networks is studied as a function of the LNA gain.
['Jongrit Lerdworatawee', 'Won Namgoong']
Low noise amplifier design for ultra-wideband radio
324,951
Current proposals for joint power and rate allocation protocols in ad hoc networks require a large signaling overhead, and do not adhere to the natural time-scales of transport and power control mechanisms. We present a solution that overcomes these issues. We pose the protocol design as a network utility maximization problem and adopt primal decomposition techniques to devise a novel distributed cross-layer design for transport and physical layer that achieves the optimal network operation. Our solution has several attractive features compared to alternatives: it adheres to the natural time-scale separation between rapid power control updates and slower end-to-end rate adjustments; it allows simplified power control mechanisms with reduced signalling requirements, and distributed slow rate cross-layer signalling mechanisms; and it maintains feasibility at each iteration. We validate the theoretical framework and compare the solution alternatives with numerical examples.
['Pablo Soldati', 'Mikael Johansson']
Reducing Signaling and Respecting Time-Scales in Cross-Layer Protocols Design for Wireless Networks
330,493
Delivering superior expressive power over RDBMS, while maintaining competitive performance, has represented the main goal and technical challenge for deductive database research since its inception forty years ago. Significant progress toward this ambitious goal is being achieved by the DeALS system through the parallel bottom-up evaluation of logic programs, including recursive programs with monotonic aggregates, on a shared-memory multicore machine. In DeALS, a program is represented as an AND/OR tree, where the parallel evaluation instantiates multiple copies of the same AND/OR tree that access the tables in the database concurrently. Synchronization methods such as locks are used to ensure the correctness of the evaluation. We describe a technique which finds an efficient hash partitioning strategy of the tables that minimizes the use of locks during the evaluation. Experimental results demonstrate the effectiveness of the proposed technique: DeALS achieves competitive performance on non-recursive programs compared with commercial RDBMSs and superior performance on recursive programs compared with other existing systems.
['Mohan Yang', 'Alexander Shkapsky', 'Carlo Zaniolo']
Parallel Bottom-Up Evaluation of Logic Programs: DeALS on Shared-Memory Multicore Machines
673,637
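The key engineering point in the preceding abstract is partitioning base tables by a hash of selected arguments so that parallel workers mostly touch disjoint shards and locking is minimized. A minimal sketch of such a partitioning step follows; it is generic Python, not DeALS's actual planner or cost model, and the relation and arity are made up.

```python
# Hash-partition the tuples of a relation on a chosen argument so that each
# worker evaluates rules against its own shard; illustrative sketch only.
from typing import Iterable, List, Tuple

def hash_partition(tuples: Iterable[Tuple], key_pos: int, n_workers: int) -> List[List[Tuple]]:
    shards: List[List[Tuple]] = [[] for _ in range(n_workers)]
    for t in tuples:
        shards[hash(t[key_pos]) % n_workers].append(t)
    return shards

# edge(X, Y) facts partitioned on the first argument across 4 workers,
# so joins on a given X value stay within a single shard.
edges = [("a", "b"), ("b", "c"), ("a", "d"), ("c", "d")]
for w, shard in enumerate(hash_partition(edges, key_pos=0, n_workers=4)):
    print(w, shard)
```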
This paper presents the Graphical User Interface (GUI) for a remote service used to solve the two-dimensional guillotine cutting stock problem as applied to the textile industry. This interface allows for the patterns in the regional woman's dress of the city of La Orotava, in the Canary Islands, to be defined and manipulated. The user must choose the garment, its quantity and its sizes, as well as, the amount of material available. This information is then used by the system to arrange the patterns. Individual patterns may also be selected. Internally, the system implements a variant of the Viswanathan-Bagchi algorithm to solve the two-dimensional cutting stock problem. The solution is calculated remotely in a dedicated server. The user may, however, keep track of all those possibilities that were rejected by the algorithm in its search for the final solution. He/She can also modify the final solution by dragging the patterns with the mouse into the desired position on the surface. The GUI and the remote service were implemented using Java, while the solving algorithm was coded using C/C++.
['Jesica de Armas', 'Coromoto León', 'Gara Miranda', 'Carlos Segura']
Remote service to solve the two-dimensional cutting stock problem: an application to the Canary Islands costume
136,409
To minimize distractions, a pervasive-computing environment must be context-aware. The authors define an activity-attention framework for context-aware computing, discuss the spatial and temporal aspects of applications they developed, and introduce a pervasive-computing architecture.
['Joshua Anhalt', 'Asim Smailagic', 'Daniel P. Siewiorek', 'Francine Gemperle', 'Daniel Salber', 'Sam Weber', 'James E. Beck', 'James Jennings']
Toward context-aware computing: experiences and lessons
458,254
Because spatial data are usually high-dimensional, complex and massive, we categorize the attributes of each spatial data object as spatial attributes and non-spatial attributes. We use the spatial attributes to construct the spatial index and determine spatial neighborhoods, and use the non-spatial attributes to compute the outlying degree and the spatial outlying degree factor, so as to address both indexing and the measurement of outlying degree. In addition, we propose two heuristic pruning strategies to quickly prune away those objects that cannot be candidate outliers in the data set. Exploiting spatial autocorrelation, the extent of a neighborhood's influence is incorporated when computing attribute weights, and these weights are in turn used to calculate the pairwise distances between spatial objects. In this paper, we propose a novel measure, the spatial outlying degree factor (SODF), which captures the local behavior of a datum in its spatial neighborhood. The experimental results show that the proposed SODF algorithm outperforms the other existing algorithms in detection accuracy, scalability, user dependency and efficiency.
['Anrong Xue', 'Lin Yao', 'Shiguang Ju', 'Weihe Chen', 'Handa Ma']
Algorithm for Fast Spatial Outlier Detection
366,697
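The precise SODF definition is not given in the abstract, so the sketch below only illustrates the general recipe it describes: use the spatial attributes to pick a neighborhood, then score each object by how far its non-spatial attribute deviates from that neighborhood. The neighborhood size and the deviation measure are assumptions of this illustration, not the paper's formula.

```python
# Illustrative spatial-neighborhood outlier score (not the paper's exact SODF):
# neighbors are the k spatially nearest objects; the score is the absolute
# deviation of the non-spatial attribute from the neighborhood mean.
from math import dist
from typing import List, Tuple

def outlier_scores(points: List[Tuple[float, float]], attrs: List[float], k: int = 2) -> List[float]:
    scores = []
    for i, p in enumerate(points):
        neighbors = sorted((j for j in range(len(points)) if j != i),
                           key=lambda j: dist(p, points[j]))[:k]
        mean_attr = sum(attrs[j] for j in neighbors) / len(neighbors)
        scores.append(abs(attrs[i] - mean_attr))
    return scores

points = [(0, 0), (1, 0), (0, 1), (1, 1)]
attrs = [10.0, 11.0, 9.5, 40.0]            # the last object deviates locally
print(outlier_scores(points, attrs))
```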
Economic agents have the possibility to fund the protection of environmental public goods, such as natural ecosystems and biodiversity, facing unknown risks of collapse, which could help to back them up. Building on prediction markets, which have met with a degree of success since their introduction, we propose an evolutionary model of an option fund market for threshold environmental public goods. We consider the population dynamics of agents divided into proportional fair-share contributors and free riders. The model outcomes show that the public goods could be provided when the agents exchanging option contracts are equally divided into buyers and sellers. This result only holds for a specific social belief over the probability of the public good's safeguarding, strict equality between bids and asks, and equality of all payoffs. Otherwise, providing public goods through option markets turns out to be inoperative.
['Arnaud Dragicevic']
Option Fund Market Dynamics for Threshold Public Goods
632,287
This paper presents a single chip implementation of a space-time algorithm for co-channel interference (CCI) and intersymbol interference (ISI) reduction in GSM/DCS systems. The temporal channel for the Viterbi receiver and the beamformer weights for CCI rejection are estimated jointly by optimizing a suitable cost function for separable space-time channels. By taking into account the current integration capabilities provided by FPGAs (field programmable gate arrays), the feasibility of a single chip JSTE solution based on a three-processor architecture for carrier beamforming, equalization and demodulation is demonstrated.
['Uberto Girola', 'Agostino Picciriello', 'David Vincenzoni']
Smart antenna receiver for GSM/DCS system based on single chip solution
304,766
Barak, Shaltiel and Tromer showed how to construct a True Random Number Generator (TRNG) which is secure against an adversary who has some limited control over the environment. In this paper we improve the security analysis of this TRNG. Essentially, we significantly reduce the entropy loss and running time needed to obtain a required level of security and robustness. Our approach is based on replacing the combination of union bounds and tail inequalities for l-wise independent random variables in the original proof by a more refined analysis of the deviation of the probability that a randomly chosen item is hashed into a particular location.
['Maciej Skorski']
True Random Number Generators Secure in a Changing Environment: Improved Security Bounds
636,987
Use case models describe the behavior of a software system from the user's perspective. This paper presents a reverse engineering approach for recovering a use case model from object-oriented code. The approach identifies use cases by analyzing class method activation sequences triggered by input events and terminated by output events. The approach produces a structured use case model including diagrams at various levels of abstraction, comprising actors, use cases, associations between actors and use cases, and relationships among use cases. A case study carried out to validate the approach on a small-sized C++ system produced encouraging results, showing the approach's feasibility and highlighting aspects of the approach that require further investigation.
['G.A. Di Lucca', 'Anna Rita Fasolino', 'U. De Carlini']
Recovering use case models from object-oriented code: a thread-based approach
407,675
Systems thinking is being employed in education for sustainable development (EfSD), including within developing countries' contexts, as part of programs at the United Nations University (UNU). Socio-cultural, organisational, pedagogical, technological and economic systems are considered. A system model of EfSD and associated issues will result.
['Timothy W. Barker']
A multiverse of systems: global challenges for educational technology
210,804
A 1.5D terrain is an x-monotone polyline with n vertices. An imprecise 1.5D terrain is a 1.5D terrain with a y-interval at each vertex, rather than a fixed y-coordinate. A realization of an imprecise terrain is a sequence of n y-coordinates, one for each interval, such that each y-coordinate is within its corresponding interval. For certain applications in terrain analysis, it is important to be able to find a realization of an imprecise terrain that is smooth. In this paper we model smoothness by considering the change in slope between consecutive edges of the terrain. The goal is to find a realization of the terrain where the maximum slope change is minimized. We present an exact algorithm that runs in O(n^2) time.
['Chris Gray', 'Maarten Löffler', 'Rodrigo I. Silveira']
Minimizing Slope Change in Imprecise 1.5D terrains.
198,236
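The objective minimized in the preceding abstract, the largest change in slope between consecutive edges of a realization, is easy to state in code. The helper below only evaluates that objective for a candidate realization; it does not implement the paper's O(n^2) optimization, and the sample coordinates are made up.

```python
# Evaluate the max slope change of one realization of a 1.5D terrain:
# xs are the fixed x-coordinates, ys one choice of y-values from the intervals.
from typing import List

def max_slope_change(xs: List[float], ys: List[float]) -> float:
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return max((abs(slopes[i + 1] - slopes[i]) for i in range(len(slopes) - 1)),
               default=0.0)

xs = [0.0, 1.0, 2.0, 3.0]
print(max_slope_change(xs, [0.0, 1.0, 1.0, 2.0]))  # slopes 1, 0, 1 -> max change 1.0
print(max_slope_change(xs, [0.0, 0.7, 1.3, 2.0]))  # a smoother realization, ~0.1
```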
The general case of multiplane wide-sense nonblocking multicast switching networks based on the baseline network is considered in this paper. Up to now, wide-sense nonblocking (WSNB) log2(N, 0, p), log2(N, m, p) and logd(N, 0, p) switching networks have been considered for multicast connections. A control algorithm under which these networks are WSNB is called the variable-size blocking window algorithm. In this paper, the path-searching algorithm is adopted for logd(N, m, p) networks with m extra stages and for the general case d ≥ 2. The WSNB multicast conditions for logd(N, m, p) networks are presented (without proofs) and the optimal multicast switching networks are investigated.
['Grzegorz Danilewicz']
Optimization of Multicast logd(N, m, p) Switching Networks
452,048
While most interest in constellations of satellites in low Earth orbit has been for systems providing global mobile or fixed broadband communication services, there is also a role for small, inexpensive systems providing specialized services. At the smallest level this is represented by a single microsatellite offering a "store and forward" or messaging service for users who require regular low to medium volume data transfers. We describe the major characteristics of a typical store-and-forward system and outline some of the technological challenges that designers face. We provide a justification for using on-board buffering and focus on the selection of an appropriate channel multiple access scheme that offers certain advantages over conventional Aloha random access.
['Michael H. Hadjitheodosiou']
Store-and-forward data communications using small terminals and microsatellites
341,398
Fuzzy Clustering based Approach for Ontology Alignment
['Rihab Idoudi', 'Karim Saheb Ettabaa', 'Kamel Hamrouni', 'Basel Solaiman']
Fuzzy Clustering based Approach for Ontology Alignment
728,689
We construct multigraphs of any large order with as few as only four 2-decompositions into Hamilton cycles or only two 2-decompositions into Hamilton paths. Nevertheless, some of those multigraphs are proved to have exponentially many Hamilton cycles (Hamilton paths). Two families of large simple graphs are constructed. Members in one class have exactly 16 hamiltonian pairs and in another class exactly four traceable pairs. These graphs also have exponentially many Hamilton cycles and Hamilton paths, respectively. The exact numbers of (Hamilton) cycles and paths are expressed in terms of Lucas- or Fibonacci-like numbers which count 2-independent vertex (or edge) subsets on the n-path or n-cycle. A closed formula which counts Hamilton cycles in the square of the n-cycle is found for n>=5. The presented results complement, improve on, or extend the corresponding well-known Thomason's results.
['Zdzisław Skupień']
Sparse hamiltonian 2-decompositions together with exact count of numerous Hamilton cycles
418,172
The complexity of most embedded software limits our ability to assure safety after the fact, e.g., by testing or formal verification of code. Instead, to achieve high confidence in safety requires considering it from the start of system development and designing the software to reduce the potential for hazardous behavior. An approach to building safety into embedded software will be described that integrates system hazard analysis, user task analysis, traceability, and informal specifications combined with executable and analyzable models. The approach has been shown to be feasible and practical by applying it to complex systems experimentally and by its use on real projects.
['Nancy G. Leveson']
An Approach to Designing Safe Embedded Software
53,463
The goal of the present study is to ascertain the differential performance of a long molecular dynamics trajectory versus several shorter ones starting from different points in the phase space and covering the same sampling time. For this purpose we have selected the 16-mer peptide Bak16BH3 as a model of study and carried out several samplings in explicit solvent. Samplings include an 8 µs trajectory (sampling S1); two 4 µs (sampling S2); four 2 µs (sampling S3); eight 1 µs (sampling S4); sixteen 0.5 µs (sampling S5) and eighty 0.1 µs (sampling S6). Moreover, the 8 µs trajectory was further extended to 16 µs to have reference values of the diverse properties measured. The diverse samplings were compared qualitatively and quantitatively. Among the former, we carried out a comparison of the conformational profile of the peptide using cluster analysis. Moreover, we also gained insight into the interchange among these structures along the sampling process. Among the latter, we have computed the number of new conformational patterns sampled with time, using strings defined from the conformations attained by each of the residues in the peptide. We also compared the location and depth of the free energy surface minima obtained using a Principal Component Analysis. Finally, we also compared the helical profile per residue at the end of the sampling process. Results suggest that a few short molecular dynamics trajectories may provide a better sampling than one unique trajectory. Moreover, this procedure can also be advantageous to avoid getting trapped in a local minimum. However, caution should be exercised since short trajectories need to be long enough to overcome local barriers surrounding the starting point, and the required sampling time depends on the number of degrees of freedom of the system under study. An effective way to get insight into the minimum MD trajectory length requires monitoring the convergence of different structural features, as shown in the present work.
['Juan J. Perez', 'M Santos Tomás', 'Jaime Rubio-Martinez']
Assessment of the sampling performance of multiple-copy dynamics versus a unique trajectory
870,391
There is significant variability in the benefit provided by cochlear implants to severely deafened individuals. The reasons why some subjects exhibit low speech recognition scores are unknown; however, underlying physiological or psychophysical factors may be involved. Certain phenomena, such as indiscriminable electrodes and nonmonotonic pitch rankings, might hint at limitations in the ability of individual channels in the cochlear implant and/or sensorineural pathway to convey speech information. In this paper, four approaches for analyzing the results of a simple listening test using speech stimuli are investigated for the purpose of targeting channels of concern in order for follow-on psychophysical experiments to correctly identify channels performing in an "impaired" or anomalous manner. Listening tests were first conducted with normal-hearing subjects and acoustic models simulating channel-specific anomalies. Results indicate that these proposed analyses perform significantly better than chance in providing information about the location of anomalous channels. Vowel and consonant confusion matrices from six cochlear implant subjects were also analyzed to test the robustness of the proposed analyses to variability intrinsic to cochlear implant data. The current study suggests that confusion matrix analyses have the potential to expedite the identification of impaired channels by providing preliminary information prior to exhaustive psychophysical testing.
['Jeremiah J. Remus', 'Chandra S. Throckmorton', 'Leslie M. Collins']
Expediting the Identification of Impaired Channels in Cochlear Implants Via Analysis of Speech-Based Confusion Matrices
182,255
Demosaicking algorithms for RGBW color filter arrays.
['Mina Rafinazari', 'Eric Dubois']
Demosaicking algorithms for RGBW color filter arrays.
982,296
Patch-Based Low-Rank Matrix Completion for Learning of Shape and Motion Models from Few Training Samples
['Jan Ehrhardt', 'Matthias Wilms', 'Heinz Handels']
Patch-Based Low-Rank Matrix Completion for Learning of Shape and Motion Models from Few Training Samples
887,759
The skeletal structures of solid objects play an important role in medical and industrial applications. Given a volumetrically sampled solid object, our method extracts a nice-looking skeletal structure represented as a polygon mesh. The purpose is to achieve a noise-robust extraction of the skeletal mesh from a real-world object obtained using a scanning technology such as the CT scan method. We first approximate the input through a set of spherically supported polynomials that provide an adaptively smoothed intensity field, and then perform a polygonization process to find the extremal sheet of the field, which is regarded as a skeletal sheet in this research. In our polygonization, a subset of the weighted Delaunay tetrahedrization defined by a set of spherical supports is used as an adaptively sampled grid. The derivatives for detecting extremality are analytically evaluated at the tetrahedron vertices. We also demonstrate the effectiveness of our method by extracting skeletal meshes from noisy CT images.
['Yukie Nagai', 'Yutaka Ohtake', 'Kiwamu Kase', 'Hiromasa Suzuki']
Polygonizing skeletal sheets of CT-scanned objects by partition of unity approximations
331,667
Decision-Making as Phase Transitions In Sport.
['Duarte Araújo', 'Keith Davids', 'Luís Rocha', 'Sidónio Serpa', 'Orlando Fernandes']
Decision-Making as Phase Transitions In Sport.
790,717
Malware detectors require a specification of malicious behavior. Typically, these specifications are manually constructed by investigating known malware. We present an automatic technique to overcome this laborious manual process. Our technique derives such a specification by comparing the execution behavior of a known malware against the execution behaviors of a set of benign programs. In other words, we mine the malicious behavior present in a known malware that is not present in a set of benign programs. The output of our algorithm can be used by malware detectors to detect malware variants. Since our algorithm provides a succinct description of malicious behavior present in a malware, it can also be used by security analysts for understanding the malware. We have implemented a prototype based on our algorithm and tested it on several malware programs. Experimental results obtained from our prototype indicate that our algorithm is effective in extracting malicious behaviors that can be used to detect malware variants.
['Mihai Christodorescu', 'Somesh Jha', 'Christopher Kruegel']
Mining specifications of malicious behavior
499,034
Rarely are computing systems developed entirely by members of the communities they serve, particularly when that community is underrepresented in computing. Archive of Our Own (AO3), a fan fiction archive with nearly 750,000 users and over 2 million individual works, was designed and coded primarily by women to meet the needs of the online fandom community. Their design decisions were informed by existing values and norms around issues such as accessibility, inclusivity, and identity. We conducted interviews with 28 users and developers, and with this data we detail the history and design of AO3 using the framework of feminist HCI and focusing on the successful incorporation of values into design. We conclude with considering examples of complexity in values in design work: the use of design to mitigate tensions in values and to influence value formation or change.
['Casey Fiesler', 'Shannon Morrison', 'Amy Bruckman']
An Archive of Their Own: A Case Study of Feminist HCI and Values in Design
761,077
OAI is designed with a low-barrier technology approach, thus allowing institutions to provide content metadata with little effort. On the other hand, search capabilities are very limited on OAI data providers, and have to be provided by separate service providers. We propose that data providers form a peer-to-peer network which supports distributed search over all connected metadata repositories. Such an approach is already implemented for learning content metadata (project 'Edutella'). We describe how this technology could be reused in the OAI context. This would allow OAI repositories to provide distributed search capabilities and effortless integration of new archives within a peer-to-peer network with little additional implementation effort.
['Benjamin A. Ahlborn', 'Wolfgang Nejdl', 'Wolf Siberski']
OAI-P2P: a peer-to-peer network for open archives
325,366
Low overhead error-resiliency techniques such as algorithmic noise-tolerance (ANT) have been shown to be particularly effective for signal processing and machine learning kernels. However, the overhead of conventional ANT can be as large as 30% due to the use of an explicit estimator block. To overcome this overhead, embedded ANT (E-ANT) is proposed [S. Zhang and N. Shanbhag, “Embedded error compensation for energy efficient DSP systems,” in Proc. GlobalSIP, Dec. 2014, pp. 30–34], where the estimator is embedded in the main computation block via data path decomposition (DPD). E-ANT reduces the logic overhead to below 8% as compared with the 20%–30% associated with a conventional reduced precision replica (RPR) ANT system, while maintaining the same error compensation functionality. DPD was first proposed in our original paper [Zhang and Shanbhag, 2014], where its benefits were studied in the case of a simple multiply-accumulator (MAC) kernel. This paper builds upon [Zhang and Shanbhag, 2014] by 1) providing conditions for the existence of DPD, 2) demonstrating DPD for a variety of commonly employed kernels in signal processing and machine learning applications, and 3) evaluating the robustness improvement and energy savings of DPD at the system level for an SVM-based EEG seizure classification application. Simulation results in a commercial 45 nm CMOS process show that E-ANT can compensate for error rates up to 0.38 for errors in FE only, and 0.17 for errors in FE and CE, while maintaining a true positive rate ${p}_{\rm tp} > 0.9$ and a false positive rate ${p}_{\rm fp}\leq 0.01$ . This represents a greater than 3-orders-of-magnitude improvement in error tolerance over the conventional system. This error tolerance is employed to reduce energy via the use of voltage overscaling (VOS). E-ANT is able to achieve 51% energy savings when errors are in FE only, and up to 43% savings when errors are in both FE and CE.
['Sai Zhang', 'Naresh R. Shanbhag']
Embedded Algorithmic Noise-Tolerance for Signal Processing and Machine Learning Systems via Data Path Decomposition
698,114
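For context, conventional RPR-based ANT, which the preceding abstract uses as the 20%-30%-overhead baseline, compensates errors by comparing the possibly erroneous main output against a reduced-precision replica and selecting the replica when the two disagree by more than the replica's own precision bound. The sketch below captures only that baseline decision rule; E-ANT's data path decomposition is not reproduced, and the threshold and quantization step are assumptions of this illustration.

```python
# Conventional reduced-precision-replica (RPR) ANT decision rule, illustrated
# on a dot-product kernel; the threshold models the replica's quantization error.
from typing import List

def rpr_ant_output(main_out: float, replica_out: float, threshold: float) -> float:
    """Accept the main (full-precision, error-prone) output unless it strays
    from the reliable low-precision replica by more than the threshold."""
    return main_out if abs(main_out - replica_out) <= threshold else replica_out

def quantize(xs: List[float], step: float) -> List[float]:
    return [round(x / step) * step for x in xs]

x = [0.9, -0.4, 0.25, 0.6]
w = [0.5, 0.3, -0.7, 0.2]
exact = sum(a * b for a, b in zip(x, w))
faulty_main = exact + 0.8                     # e.g., a timing error under voltage overscaling
replica = sum(a * b for a, b in zip(quantize(x, 0.25), quantize(w, 0.25)))
print(rpr_ant_output(faulty_main, replica, threshold=0.3))  # falls back to the replica
```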