abstract (string, 8 to 10.1k chars) | authors (string, 9 to 1.96k chars) | title (string, 6 to 367 chars) | __index_level_0__ (int64, 13 to 1,000k) |
---|---|---|---|
Wire-Speed Regular-Expression Scanning at 20 Gbit/s and Beyond. | ['Jan van Lunteren', 'Christoph Hagleitner'] | Wire-Speed Regular-Expression Scanning at 20 Gbit/s and Beyond. | 799,282 |
In cellular systems, antenna tilt and transmit power are the two most important parameters for tuning performance. An optimal antenna tilt has a strong impact on interference mitigation, which leads to better coverage and capacity in interference-limited systems such as LTE. Legacy optimization procedures require drive tests to tune these parameters. In this paper, an autonomous configuration scheme, which works on the measurements provided by the users, is presented. The base stations constantly change their antenna tilt or transmit power to maximize their own objective functions. This heuristic approach is fully decentralized; hence, the complexity is moderate. The simulation results show significant improvement in system performance in terms of reduced drop rate and better median SINR. In mobile cellular networks, some static parameters and objectives are considered at the planning stage. In ongoing network optimization, system parameters are fine-tuned repetitively, based on periodically collected statistics such as key performance indicators (KPIs). Enormous efforts, such as drive tests, are put into the coverage area to collect those data. Due to the limitations of data sampling, the collected data cannot fully represent all the characteristics of the live network. Therefore, the trend in network operation is to extract real-time parameters from users and optimize the network in a self-organized manner. This not only has a significant effect on quality of service and user satisfaction, but also comes with the inherent characteristic of cost cutting, in terms of initial deployment and planning capital expenditures (CAPEX) and ongoing operational expenditures (OPEX). Antenna tilt is one of the important parameters that has a dramatic impact on coverage, interference, path loss and delay spread [1]. In this paper, we propose an algorithm for adjusting antenna tilt and transmission power, primarily based on the channel quality indicator (CQI), which is a 4-bit feedback message sent from the user with information about the channel condition [2]. Moreover, a system-level analysis is done based on variations in the proposed algorithm. These variations are created to test use cases based on different preferences and their impact on the overall system. Additionally, the solution is kept scalable, since deci | ['Muhammad Aatiq Ismail', 'Xiang Xu', 'Rudolf Mathar'] | Autonomous antenna tilt and power configuration based on CQI for LTE cellular networks | 665,191 |
In real world machine vision problems, issues such as noise and variable scene illumination make edge and object detection difficult. There exists no universal edge detection method which works under all conditions. In this paper, we propose a logarithmic edge detection method. This achieves a higher level of scene illumination and noise independence. We present experimental results for this method, and compare results of the algorithm against several leading edge detection methods, such as Sobel and Canny. For an objective basis of comparison, we use Pratt's Figure of Merit. We further demonstrate the application of the algorithm in conjunction with Edge Detection based Image Enhancement (EDIE), showing that the use of this edge detection algorithm results in better image enhancement, as quantified by the Logarithmic AME measure. | ['Eric J. Wharton', 'Karen Panetta', 'Sos S. Agaian'] | Logarithmic edge detection with applications | 447,948 |
The main purpose of this paper is to describe the design objectives that were set when work started on developing the COSMIC method of measuring a functional size of software requirements, the problems that were tackled during the development and some measurement results that demonstrate that the method has met its objectives. The paper begins by setting two main general objectives for software measurement, namely to help control software activities and to estimate future activities. Various approaches to software size measurement are described and the reasons are given why the COSMIC method was developed. The method is briefly described, followed by an account of advances by the COSMIC community. The paper concludes by discussing the main technical challenges that must still be tackled (principally measurement automation) and the organizational challenges (principally gaining a wider acceptance of the benefits of software measurement). | ['Charles R. Symons', 'Alain Abran', 'Christof Ebert', 'Frank W. Vogelezang'] | Measurement of Software Size: Advances Made by the COSMIC Community | 971,056 |
Cloud Intelligence | ['Jérôme Darmont', 'Torben Bach Pedersen'] | Cloud Intelligence | 712,316 |
Price dispersion of a homogeneous product reflects market efficiency and has significant implications on sellers’ pricing strategies. Two different perspectives, the supply and demand perspectives, can be adopted to examine this phenomenon. The former focuses on listing prices posted by sellers, and the latter uses transaction prices that consumers pay to obtain the product. However, no prior research has systematically compared both perspectives, and it is unclear whether different perspectives will generate different insights. Using a unique data set collected from an online market, we find that the dispersion of listing prices is three times higher than the dispersion of transaction prices. More interestingly, the drivers of price dispersion differ significantly between listing and transaction data. The dispersion of listing prices reflects sellers’ perception of market environment and their pricing strategies, and it may not fully capture consumer behavior manifested through the variation of t... | ['Kexin Zhao', 'Xia Zhao', 'Jing Deng'] | Online Price Dispersion Revisited: How Do Transaction Prices Differ from Listing Prices? | 798,866 |
The way in which companies benefit from open source software (OSS) communities varies and corresponds with the business strategy they maintain. One way of establishing influence in OSS communities is by deploying own resources to an OSS project. Assigning own paid developers to work for an OSS project is a suitable means to influence project work. On the other hand, the pertinent literature on user communities and governance in OSS maintains that a large proportion of influence individuals have in a community depends on their position in the community. This view is reflected by social capital theory, which posits that strong relationships and network positions that are advantageous to access information are valuable resources that affect different downstream variables, most importantly value creation. Thus, this study aims to extend research that has used social capital theory to investigate online communities by testing a conceptual model of social capital and individual’s value creation and assessing the influence of firm-sponsorship on the context. | ['Dirk Homscheid', 'Mario Schaarschmidt', 'Steffen Staab'] | Firm-sponsored developers in open source software projects: a social capital perspective | 718,230 |
Efficient (condition-based) maintenance planning and inventory control of spares for critical components jointly determine the effectiveness of a maintenance strategy and, thereby, balance system uptime and maintenance costs. Duplicating an optimal policy for a single-component system to a multi-component system is not necessarily optimal, while a separate or sequential optimization of the maintenance and inventory decisions is also not guaranteed to yield the lowest costs. We therefore consider the joint optimization of condition-based maintenance and spares planning for multi-component systems. We formulate our model as a Markov Decision Process, and minimize the long-run average cost per time unit. A key insight from our numerical results is that the (s, S) inventory policy, popular in theory as well as practice, can be far from optimal for systems consisting of few components. Significant savings can be obtained by basing both the maintenance decisions and the timing of ordering spare components on the system’s condition. | ['Minou C.A. Olde Keizer', 'Ruud H. Teunter', 'Jasper Veldman'] | Joint condition-based maintenance and inventory optimization for systems with multiple components | 850,322 |
Ray propagation in city canyons is the most common mode of radio propagation, and our research is also based on this kind of urban assumption. This paper studies the impact of spatial building distribution on the path-loss exponent. The results are obtained by simulation. The simulation is based on a 3D ray tracing algorithm for the computation of the received power and on a stochastic geometry process for the generation of buildings. The evolution of the path-loss exponent against several terrain parameters is investigated. Parameters such as spatial density, the distribution of buildings and the distribution of antenna heights are considered. Some interesting results were found, and statistical fits are proposed for the path-loss exponent from the results obtained by simulation. | ['Xiaoxing Yu', 'Jing Feng'] | Research on radio propagation modeling of city impact in 3D perspective | 213,283 |
To reason about programs written in a language, one needs to define its formal semantics, derive a reasoning mechanism (e.g. a program logic), and maximize the proof automation. Unfortunately, a compiler may involve multiple languages and phases; it is tedious and error prone to do so for each language and each phase. We present an approach based on the use of higher order logic to ease this burden. All the Intermediate Representations (IRs) are special forms of the logic of a prover such that IR programs can be reasoned about directly in the logic. We use this technique to construct and validate an optimizing compiler. New techniques are used to compile-with-proof all the programs into the logic, e.g. a logic specification is derived automatically from the monad interpretation of a piece of assembly code. | ['Guodong Li'] | Validated compilation through logic | 521,947 |
We provide the first streaming algorithm for computing a provable approximation to the $k$-means of sparse Big data. Here, sparse Big Data is a set of $n$ vectors in $\mathbb{R}^d$, where each vector has $O(1)$ non-zero entries, and $d\geq n$. E.g., adjacency matrix of a graph, web-links, social network, document-terms, or image-features matrices. Our streaming algorithm stores at most $\log n\cdot k^{O(1)}$ input points in memory. If the stream is distributed among $M$ machines, the running time reduces by a factor of $M$, while communicating a total of $M\cdot k^{O(1)}$ (sparse) input points between the machines. Our main technical result is a deterministic algorithm for computing a sparse $(k,\epsilon)$-coreset, which is a weighted subset of $k^{O(1)}$ input points that approximates the sum of squared distances from the $n$ input points to every $k$ centers, up to a $(1\pm\epsilon)$ factor, for any given constant $\epsilon>0$. This is the first such coreset of size independent of both $d$ and $n$. Existing algorithms use coresets of size at least polynomial in $d$, or project the input points on a subspace which diminishes their sparsity, thus requiring memory and communication $\Omega(d)=\Omega(n)$ even for $k=2$. Experimental results on real public datasets show that our algorithm boosts the performance of such heuristics even in the off-line setting. Open code is provided for reproducibility. | ['Artem Barger', 'Dan Feldman'] | k-Means for Streaming and Distributed Big Sparse Data | 628,548 |
LTE/LTE-A networks have been successfully providing advanced broadband services to millions of users worldwide. Lately, it has been suggested to use LTE networks for mission-critical applications like public safety, smart grid and military communications. We have previously shown that LTE networks are vulnerable to Denial-of-Service (DOS) and loss of service attacks from smart jammers. In this paper, we extend our previous work on resilience of LTE networks to wideband multipath fading channel, SINR estimation in frequency domain and computation of utilities based on observable parameters under the framework of single-shot and repeated games with asymmetric information. In a single-shot game formulation, network utility is severely compromised at its solutions, i.e. at the Nash Equilibria (NE). We propose evolved repeated-game strategy algorithms to combat smart jamming attacks that can be implemented in existing deployments using current technology. | ['Farhan M. Aziz', 'Jeff S. Shamma', 'Gordon L. Stüber'] | Resilience of LTE networks against smart jamming attacks: Wideband model | 559,259 |
In this paper we study the network design arc set with variable upper bounds. This set appears as a common substructure of many network design problems and is a relaxation of several fundamental mixed-integer sets studied earlier independently. In particular, the splittable flow arc set, the unsplittable flow arc set, the single node fixed-charge flow set, and the binary knapsack set are facial restrictions of the network design arc set with variable upper bounds. Here we describe families of strong valid inequalities that cut off all fractional extreme points of the continuous relaxation of the network design arc set with variable upper bounds. Interestingly, some of these inequalities are also new even for the aforementioned restrictions studied earlier. | ['Alper Atamtürk', 'Oktay Günlük'] | Network design arc set with variable upper bounds | 55,437 |
Voice control is a popular way to operate mobile devices, enabling users to communicate requests to their devices. However, adversaries can leverage voice control to trick mobile devices into executing commands to leak secrets or to modify critical information. Contemporary mobile operating systems fail to prevent such attacks because they do not control access to the speaker at all and fail to control when untrusted apps may use the microphone, enabling authorized apps to create exploitable communication channels. In this paper, we propose a security mechanism that tracks the creation of audio communication channels explicitly and controls the information flows over these channels to prevent several types of attacks. We design and implement AuDroid, an extension to the SE Linux reference monitor integrated into the Android operating system for enforcing lattice security policies over the dynamically changing use of system audio resources. To enhance flexibility, when information flow errors are detected, the device owner, system apps and services are given the opportunity to resolve information flow errors using known methods, enabling AuDroid to run many configurations safely. We evaluate our approach on 17 widely-used apps that make extensive use of the microphone and speaker, finding that AuDroid prevents six types of attack scenarios on audio channels while permitting all 17 apps to run effectively. AuDroid shows that it is possible to prevent attacks using audio channels without compromising functionality or introducing significant performance overhead. | ['Giuseppe Petracca', 'Yuqiong Sun', 'Trent Jaeger', 'Ahmad Atamli'] | AuDroid: Preventing Attacks on Audio Channels in Mobile Devices | 669,242 |
An AlCoCrCuFeNi high-entropy alloy (HEA) coating was fabricated on a pure magnesium substrate using a two-step method, involving plasma spray processing and laser re-melting. After laser re-melting, the microporosity present in the as-sprayed coating was eliminated, and a dense surface layer was obtained. The microstructure of the laser-remelted layer exhibits an epitaxial growth of columnar dendrites, which originate from the crystals of the spray coating. The presence of a continuous epitaxial growth of columnar HEA dendrites in the laser re-melted layer was analyzed based on the critical stability condition of a planar interface. The solidification of a columnar dendrite structure of the HEA alloy in the laser-remelted layer was analyzed based on the Kurz–Giovanola–Trivedi model and Hunt’s criterion, with modifications for a multi-component alloy. | ['T. M. Yue', 'Hui Xie', 'Xin Lin', 'Haiou Yang', 'Guanghui Meng'] | Microstructure of Laser Re-Melted AlCoCrCuFeNi High Entropy Alloy Coatings Produced by Plasma Spraying | 503,931 |
Towards a Classification Framework for Approaches to Enterprise Architecture Analysis | ['Birger Lantow', 'Dierk Jugel', 'Matthias Wißotzki', 'Benjamin Lehmann', 'Ole Zimmermann', 'Kurt Sandkuhl'] | Towards a Classification Framework for Approaches to Enterprise Architecture Analysis | 922,487 |
We describe the design and implementation of a video based augmented reality system capable of overlaying three dimensional graphical objects on live video of dynamic environments. The key feature of the system is that it is completely uncalibrated: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track across frames at least four feature points that are specified by the user at system initialization time and whose world coordinates are unknown. Our approach is based on the following observation: given a set of four or more non coplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by: tracking lines and fiducial points at frame rate; and representing virtual objects in a non Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projection of the fiducial points. | ['Kiriakos N. Kutulakos', 'James R. Vallino'] | Affine object representations for calibration-free augmented reality | 917,897 |
This paper proposes an assessment model for Web-based systems in terms of non-functional properties of the system. The proposed model consists of two stages: (i) deriving quality metrics using goal-question-metric (GQM) approach; and (ii) evaluating the metrics to rank a Web based system using multi-element component comparison analysis technique. The model ultimately produces a numeric rating indicating the relative quality of a particular Web system in terms of selected quality attributes. We decompose the quality objectives of the web system into sub goals, and develop questions in order to derive metrics. The metrics are then assessed against the defined requirements using an assessment scheme. | ['Khaled M. Khan'] | Assessing quality of web based systems | 56,583 |
CogInfoCom Systems from an Interaction Perspective – A Pilot Application for EtoCom – | ['Gyorgy Persa', 'Adam Csapo', 'Péter Baranyi'] | CogInfoCom Systems from an Interaction Perspective – A Pilot Application for EtoCom – | 805,946 |
The number of non-rigid 3D models increases steadily in various areas, so it is imperative to develop efficient retrieval systems for non-rigid 3D models. Since global features fail to consistently describe the intra-class variability of non-rigid 3D models, local features are more effective than global features for the retrieval of non-rigid 3D models. In this paper, we use the Heat Kernel Signature (HKS) as the local feature to represent non-rigid 3D models and further propose a retrieval method based on scale-invariant local features. Firstly, we extract key-points at multiple scales automatically. Then, the HKS local features are computed for each key-point. However, the HKS features are sensitive to scale. In order to solve this problem, we convert the scale problem into a translation problem using the diffusion wavelet transform. To solve the translation problem, we use a kind of histogram equalization technique. Finally, we use the bipartite graph matching algorithm to compute the similarity between 3D models. Experimental results on two public benchmarks show that our method outperforms state-of-the-art methods for non-rigid 3D model retrieval. | ['Pengjie Li', 'Huadong Ma', 'Anlong Ming'] | A non-rigid 3D model retrieval method based on scale-invariant heat kernel signature features | 833,869 |
Community identification in networks has a wide range of practical applications, including data clustering and social network analysis. We present path-sharing, a new measure of betweenness, for use in identifying densely connected clusters in networks. We show that path-sharing performs well at identifying communities in artificial benchmark networks, giving performance comparable to that of state-of-the-art community identification techniques. We also demonstrate a practical use of path-sharing when used in community identification, by applying it to an image segmentation problem. | ['Paul McCarthy'] | Path-sharing: A new betweenness measure for community identification in networks | 175,574 |
Leaf image identification is a significant and challenging application of computer vision and image processing. A central issue associated with this task is how to effectively and efficiently describe the leaf images and measure their similarities. In this paper, a novel shape descriptor termed R-angle is proposed. R-angle describes the curvature of the contour by measuring the angle between the intersections of the shape contour with a circle of radius R centered at points sampled around the contour. It is intrinsically invariant to group transforms including scaling, rotation and translation. Varying the parameter R of the proposed R-angle naturally introduces the notation of scale, which we leverage to provide a coarse-to-fine description of the local curvature. A local scale arrangement is proposed by taking the distance between each contour point and the center of the shape to be the maximum scale for a given contour point. Two matching schemes, including L 1 -norm matching and dynamic programming based matching, are applied to measure the similarities of the leaf shapes. The retrieval experiments conducted on two challenging leaf image datasets indicate that the proposed method significantly outperforms the state-of-the-art methods for leaf identification. An additional experiment on an animal dataset also indicates its potential for general shape recognition. | ['Jie Cao', 'Bin Wang', 'Douglas Lindsay Brown'] | Similarity based leaf image retrieval using multiscale R-angle description | 885,518 |
A novel architecture, suitable for high-speed FIR decimation filters for single-bit sigma-delta modulation, is proposed. By using efficient data and coefficient representation, the total number of partial products is reduced, leading to low power consumption. The work focuses on filters whose design is based on cascaded comb filters, although the approach is applicable to any FIR filter. | ['Oscar Gustafsson', 'Henrik Ohlsson'] | A low power decimation filter architecture for high-speed single-bit sigma-delta modulation | 113,175 |
Networks-on-Chip (NoCs for short) are known as the most scalable and reliable on-chip communication architectures for multi-core SoCs with tens to hundreds IP cores. Proper mapping the IP cores on NoC tiles (or assigning threads to cores in chip multiprocessors) can reduce end-to-end delay and energy consumption. While almost all previous works on mapping consider higher priority for the application's flows with higher required bandwidth, a mapping strategy, presented in this paper, is introduced that considers multicast communication flows in addition to the normal unicast flows. To this end, multicast and unicast traffic flows are first characterized in terms of some new metrics which are then used for arranging communication flows based on their volume and priority. A heuristic approach is used to assign IP cores to NoC tiles. Simulation results for both synthetic and real applications show up to 49% (28% on average) performance improvement and 44% (22% on average) energy saving when compared to the best known mapping algorithm, nMap. | ['Amirali Habibi', 'Mouhammad Arjomand', 'Hamid Sarbazi-Azad'] | Multicast-Aware Mapping Algorithm for On-chip Networks | 167,835 |
This paper revises the theoretical background for upcoming dual-channel Radar satellite missions to monitor traffic from space. As it is well-known, an object moving with a velocity deviating from the assumptions incorporated in the focusing process will generally appear both displaced and blurred in the azimuth direction. To study the impact of these (and related) distortions in focused SAR images, the analytic relations between an arbitrarily moving point scatterer and its conjugate in the SAR image have been reviewed and adapted to dual-channel satellite specifications. To be able to monitor traffic under these boundary conditions in real-life situations, a specific detection scheme is proposed. This scheme integrates complementary detection and velocity estimation algorithms with knowledge derived from external sources as, e.g., road databases. | ['Stefan Hinz', 'Franz Meyer', 'Andreas Laika', 'Richard Bamler'] | Spaceborne Traffic Monitoring with Dual Channel Synthetic Aperture Radar Theory and Experiments | 434,181 |
Systematic state space traversal is a popular approach for detecting errors in multithreaded programs. Nevertheless, it is very expensive because any non-trivial program exhibits a huge number of possible interleavings. Some kind of guided and bounded search is often used to achieve good performance. We present two heuristics that are based on a hybrid static-dynamic analysis that can identify possible accesses to shared objects. One heuristic changes the order in which transitions are explored, and the second heuristic prunes selected transitions. Results of experiments on several Java programs, which we performed using our prototype implementation in Java Pathfinder, show that the hybrid analysis together with heuristics significantly improves the performance of error detection. | ['Pavel Parizek'] | Fast error detection with hybrid analyses of future accesses | 859,935 |
The retrieval of soccer highlights is a suitable technique for video indexing, required for multimedia database management or for the development of television on demand. For these purposes, it is useful to have an automatic annotation of the events that happen in soccer games. One solution consists in analyzing the audio soundtrack associated with the soccer video to detect the interesting frames. In this paper we use the adaptive time-frequency decomposition of the soundtrack as a feature extraction procedure. This decomposition is based on the Matching Pursuit concept and a dictionary composed of Gabor functions. The parameters provided by these transformations constitute the input of the classification stage. The results provided for real soccer video prove the efficiency of the adaptive time-frequency representation as a feature extraction stage. | ['Jonathan Marchal', 'Cornel Ioana', 'Emanuel Radoi', 'André Quinquis', 'Sridhar Krishnan'] | Soccer Video Retrieval Using Adaptive Time-Frequency Methods | 264,750 |
Extreme learning machine (ELM) has attracted considerable attention in recent years due to its numerous applications in classification and regression. In this study, the authors investigate the performance of an ELM-based threshold selection algorithm for 60 GHz millimetre wave time of arrival estimation using energy detector (ED). A hybrid metric based on the skewness, kurtosis, standard deviation, and slope of the ED values is employed. The optimal normalised threshold for different signal-to-noise ratios (SNRs) is investigated, and the effects of the integration period and channel model are examined. Performance results are presented which show that the proposed ELM-based algorithm provides high precision and better robustness than existing techniques over a wide range of SNRs for the IEEE 802.15.3c CM1.1 and CM2.1 channel models. Further, the performance is largely independent of the integration period and channel model. | ['Xiaolin Liang', 'Hao Zhang', 'Tingting Lu', 'T.A. Gulliver'] | Extreme learning machine for 60 GHz millimetre wave positioning | 897,430 |
This research develops and tests a model to examine the impact of peer-based learning on student outcomes in the context of technology-mediated learning (TML). Using a purposive sampling methodology, a survey was administered to 600 students of secondary schools in India. A sample of 443 complete responses was obtained. Results supported the key hypothesis demonstrating the impact of peer-related factors on student outcomes. Insights for the use of TML in schools are discussed, based on this study. | ['Mayuri Duggirala', 'Prakash Sai Lokachari'] | Impact of Peer-Related Factors on Student-Related Outcomes in Technology-Mediated Learning: Evidence from Secondary Schools in India | 517,967 |
This paper presents a new approach for Boolean decomposition based on the Boolean difference and cofactor analysis. Two simple tests provide sufficient and necessary conditions to identify AND and exclusive-OR (XOR) decompositions. The proposed method can decompose an n-input function in O(n · log n) cofactor and O(n) equivalence test operations. Recently, 2-to-1 multiplexers (MUX) have also been used to perform such decomposition. However, MUXes with more inputs have been neglected. We provide sufficient and necessary conditions to obtain MUX decompositions of functions with an arbitrary number of inputs. | ['Vinicius Callegaro', 'Felipe S. Marranghello', 'Mayler G. A. Martins', 'Renato P. Ribas', 'André Inácio Reis'] | Bottom-up disjoint-support decomposition based on cofactor and boolean difference analysis | 583,491 |
Valve-sparing aortic root reconstruction is an up-and-coming approach for patients suffering from aortic valve insufficiencies which promises to significantly reduce complications. However, the success of the treatment strongly depends on the challenging task of choosing the correct size of the prosthesis, for which, up to now, surgeons solely have to rely on their experience. Here, we present a novel machine learning based approach, which might make it possible to predict the size of the prosthesis from pre-operatively acquired ultrasound images. We utilize support vector regression to train a prediction model on three geometric features extracted from the ultrasound data. In order to evaluate the accuracy and robustness of our approach we created a large data base of porcine aortic root geometries in a healthy state and an artificially dilated state. Our results indicate that prediction of correct prosthesis sizes is feasible. Furthermore, they suggest that it is crucial that the training data set faithfully represents the diversity of aortic root geometries. | ['J. Hagenah', 'Erik Werrmann', 'Michael Scharfschwerdt', 'Floris Ernst', 'Christoph Metzner'] | Prediction of individual prosthesis size for valve-sparing aortic root reconstruction based on geometric features | 918,422 |
In this paper, we propose a novel communication scheme for real-time video transmission over wireless sensor network based on the virtual multiple-input-multiple-output (MIMO) and the network coding technology. According to the distinctive characteristic of real-time video data, the video transmission distortion, energy consumption and the end-to-end delay performance are analyzed with virtual MIMO transmission manner for our approach. The greedy algorithm is used to optimize the video distortion performance with the total energy and end-to-end delay constraints. For multiple video source application in wireless video sensor network, we propose a virtual MIMO with network coding technique scheme to improve the network throughput performance and reduce end-to-end delay effectively. From the simulation results, our approach can get better distortion performance than the traditional manner when the intra cluster distance is set to be appropriate relative to the inter cluster distance. | ['Yong Liu', 'Lifeng Sun', 'Shiqiang Yang'] | Wireless Video Transmission Scheme Based on Virtual MIMO and Network Coding Technology | 313,634 |
This paper presents a systematic and computable method for choosing the regularization parameter appearing in Tikhonov-type regularization based on non-quadratic regularizers. First, we extend the notion of the L-curve, originally defined for quadratically regularized problems, to the case of non-quadratic functions. We then associate the optimal value of the regularization parameter for these non-quadratic problems with the corner of the resulting generalized L-curve. We identify the corner of this L-curve as the point of tangency between a straight line of arbitrary slope and the L-curve. This definition results in a corresponding algebraic equation which the optimal regularization parameter must satisfy. This algebraic equation naturally leads to an iterative algorithm for the optimal value of the regularization parameter. The convergence of this iterative algorithm is established. Simulation results confirm that the proposed method yields values of the regularization parameters that result in good reconstructions for non-quadratic problems. | ['Soontorn Oraintara', 'William Clement Karl', 'David A. Castanon', 'Truong Q. Nguyen'] | A method for choosing the regularization parameter in generalized Tikhonov regularized linear inverse problems | 298,618 |
In this paper we consider the nonlinear system \(\gamma _i(x_i)=\sum _{j=1}^{m}g_{ij} (x_j)\), \( 1\le i \le m\). We give sufficient conditions which imply the existence and uniqueness of positive solutions of the system. Our theorem extends earlier results known in the literature. Several examples illustrate the main result. | ['István Győri', 'Ferenc Hartung', 'Nahed A. Mohamady'] | Existence and uniqueness of positive solutions of a system of nonlinear algebraic equations | 942,315 |
This paper describes a prototype implementing a high degree of fault tolerance, reliability and resilience in distributed software systems. The prototype incorporates fault, configuration, accounting, performance and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of nodes called Distributed Intelligent Managed Elements (DIMEs) in a network. Each DIME is a computing entity (implemented in Linux and in the future will be ported to Windows) endowed with self-management and signaling capabilities to collaborate with other DIMEs in a network. The prototype incorporates a new computing model proposed by Mikkilineni in 2010, with signaling network overlay over the computing network and allows parallelism in resource monitoring, analysis and reconfiguration. A workflow is implemented as a set of tasks, arranged or organized in a directed acyclic graph (DAG) and executed by a managed network of DIMEs. Distributed DIME networks provide a network computing model to create distributed computing clouds and execute distributed managed workflows with high degree of agility, availability, reliability, performance and security. | ['Giovanni Morana', 'Rao Mikkilineni'] | Scaling and Self-repair of Linux Based Services Using a Novel Distributed Computing Model Exploiting Parallelism | 141,794 |
Examples of Causal Probabilistic Expert Systems | ['M. Noormohammadian', 'Ulrich G. Oppel'] | Examples of Causal Probabilistic Expert Systems | 267,785 |
In this paper, we study strategies for generators making offers into electricity markets in circumstances where demand is unknown in advance. We concentrate on a model with smooth supply functions and derive conditions under which a single supply function can represent an optimal response to the offers of the other market participants over a range of demands. In order to apply this approach in practice, it may be necessary to approximate the supply functions of other players. We derive bounds on the loss in revenue that occurs in comparison with the exact supply function response, when a generator uses an approximation both for its own supply function and for the supply functions of other players. We also demonstrate the existence of symmetric supply-function equilibria. | ['Edward J. Anderson', 'Andrew B. Philpott'] | Using Supply Functions for Offering Generation into an Electricity Market | 269,492 |
Anonymous Hierarchical Identity-Based Encryption in Prime Order Groups | ['Yanli Ren', 'Shuozhong Wang', 'Xinpeng Zhang'] | Anonymous Hierarchical Identity-Based Encryption in Prime Order Groups | 630,485 |
Structural motifs are important for the integrity of a protein fold and can be employed to design and rationalize protein engineering and folding experiments. Such conserved segments represent the conserved core of a family or superfamily and can be crucial for the recognition of potential new members in sequence and structure databases. We present a database, MegaMotifBase, that compiles a set of important structural segments or motifs for protein structures. Motifs are recognized on the basis of both sequence conservation and preservation of important structural features such as amino acid preference, solvent accessibility, secondary structural content, hydrogen-bonding pattern and residue packing. This database provides 3D orientation patterns of the identified motifs in terms of inter-motif distances and torsion angles. Important applications of structural motifs are also provided in several crucial areas such as similar sequence and structure search, multiple sequence alignment and homology modeling. MegaMotifBase can be a useful resource to gain knowledge about structure and functional relationship of proteins. The database can be accessed from the URL http://caps.ncbs.res.in/MegaMotifbase/index.html | ['Ganesan Pugalenthi', 'Ponnuthurai N. Suganthan', 'Ramanathan Sowdhamini', 'Saikat Chakrabarti'] | MegaMotifBase: a database of structural motifs in protein families and superfamilies | 78,774 |
Given a multivariate real (or complex) polynomial p and a domain D, we would like to decide whether an algorithm exists to evaluate p(x) accurately for all x ∈ D using rounded real (or complex) arithmetic. Here "accurately" means with relative error less than 1, i.e., with some correct leading digits. The answer depends on the model of rounded arithmetic: We assume that for any arithmetic operator op(a, b), for example a+b or a·b, its computed value is op(a, b)·(1+δ), where |δ| is bounded by some constant ε where 0 < ε ≪ 1, but δ is otherwise arbitrary. This model is the traditional one used to analyze the accuracy of floating point algorithms. Our ultimate goal is to establish a decision procedure that, for any p and D, either exhibits an accurate algorithm or proves that none exists. In contrast to the case where numbers are stored and manipulated as finite bit strings (e.g., as floating point numbers or rational numbers) we show that some polynomials p are impossible to evaluate accurately. The existence of an accurate algorithm will depend not just on p and D, but on which arithmetic operators are available (perhaps beyond +, −, and ×), which constants are available to the algorithm (integers, algebraic numbers, ...), and whether branching is permitted in the algorithm. For floating point computation, our model can be used to identify which accurate operators beyond +, − and × (e.g., dot products, 3×3 determinants, ...) are necessary to evaluate a particular p(x). Toward this goal, we present necessary conditions on p for it to be accurately evaluable on open real or complex domains D. We also give sufficient conditions, and describe progress toward a complete decision procedure. We do present a complete decision procedure for homogeneous polynomials p with integer coefficients, D = C^n, and using only the arithmetic operations +, − and ×. | ['James Demmel', 'Ioana Dumitriu', 'Olga Holtz'] | Toward accurate polynomial evaluation in rounded arithmetic | 108,804 |
Methods for learning word representations using large text corpora have received much attention lately due to their impressive performance in numerous natural language processing (NLP) tasks such as, semantic similarity measurement, and word analogy detection. Despite their success, these data-driven word representation learning methods do not consider the rich semantic relational structure between words in a co-occurring context. On the other hand, already much manual effort has gone into the construction of semantic lexicons such as the WordNet that represent the meanings of words by defining the various relationships that exist among the words in a language. We consider the question, can we improve the word representations learnt using a corpora by integrating the knowledge from semantic lexicons?. For this purpose, we propose a joint word representation learning method that simultaneously predicts the co-occurrences of two words in a sentence subject to the relational constrains given by the semantic lexicon. We use relations that exist between words in the lexicon to regularize the word representations learnt from the corpus. Our proposed method statistically significantly outperforms previously proposed methods for incorporating semantic lexicons into word representations on several benchmark datasets for semantic similarity and word analogy. | ['Danushka Bollegala', 'Alsuhaibani Mohammed', 'Takanori Maehara', 'Ken-ichi Kawarabayashi'] | Joint word representation learning using a corpus and a semantic lexicon | 626,864 |
Pruning is one of the effective techniques for improving the generalization error of neural networks. Existing pruning techniques are derived mainly from the viewpoint of energy minimization, which is commonly used in gradient-based learning methods. In recurrent networks, extended Kalman filter (EKF)–based training has been shown to be superior to gradient-based learning methods in terms of speed. This article explains a pruning procedure for recurrent neural networks using EKF training. The sensitivity of a posterior probability is used as a measure of the importance of a weight instead of error sensitivity since posterior probability density is readily obtained from this training method. The pruning procedure is tested using three problems: (1) the prediction of a simple linear time series, (2) the identification of a nonlinear system, and (3) the prediction of an exchange-rate time series. Simulation results demonstrate that the proposed pruning method is able to reduce the number of parameters and im... | ['John Sum', 'L. W. Chan', 'Chi-Sing Leung', 'Gilbert H. Young'] | Extended Kalman filter-based pruning method for recurrent neural networks | 519,557 |
Increasing the sampling rate of Analog-to-Digital Converters (ADC) is a main challenge in many fields and especially in telecommunications. Time-Interleaved ADCs (TI-ADC) were introduced as a technical solution to reach high sampling rates by time interleaving and multiplexing several low-rate ADCs at the price of a perfect synchronization between them. Indeed, as the signal reconstruction formulas are derived under the assumption of uniform sampling, a desynchronization between the elementary ADCs must be compensated upstream with an online calibration and expensive hardware corrections of the sampling device. Based on the observation that desynchronized TI-ADCs can be effectively modeled using a Periodic Non-uniform Sampling (PNS) scheme, we develop a general method to blindly estimate the time delays involved in PNS. The proposed strategy exploits the signal stationarity properties and thus is simple and quite generalizable to other applications. Moreover, contrarily to state-of-the-art methods, it applies to bandpass signals which is the more judicious application framework of the PNS scheme. | ['Jean-Adrien Vernhes', 'Marie Chabert', 'Bernard Lacaze', 'Guy Lesthievent', 'Roland Baudin', 'Marie-Laure Boucheret'] | Blind estimation of unknown time delay in periodic non-uniform sampling: Application to desynchronized time interleaved-ADCs | 795,003 |
The use of Digital Rights Management (DRM) technologies for the enforcement of digital media usage models is currently the subject of a heated debate. Consumer organizations and national governments claim that DRM technology interferes with basic personal rights, such as the right to make copies for personal use or the right to use content on any platform of choice. This issue has lately gained increased attention due to a trend in some European countries to force DRM vendors and online media stores to open up their respective DRM technologies, i.e. make them interoperable. In the first part of this talk we discuss the many obstacles to DRM interoperability: technological, legal and business-related. In the second part we discuss some potential solutions to the DRM interoperability problem. In particular, we present the Coral DRM interoperability framework that allows multiple DRM systems to seamlessly work together while at the same time requiring minimal modification to existing DRMs. | ['Ton Kalker'] | DRM Interoperability | 669,036 |
Three-qubit quantum gates are key ingredients for quantum error correction and quantum-information processing. We generate quantum-control procedures to design three types of three-qubit gates, namely Toffoli, controlled-NOT-NOT, and Fredkin gates. The design procedures are applicable to a system comprising three nearest-neighbor-coupled superconducting artificial atoms. For each three-qubit gate, the numerical simulation of the proposed scheme achieves 99.9% fidelity, which is an accepted threshold fidelity for fault-tolerant quantum computing. We test our procedure in the presence of decoherence-induced noise and show its robustness against random external noise generated by the control electronics. The three-qubit gates are designed via the machine-learning algorithm called subspace-selective self-adaptive differential evolution. | ['Ehsan Zahedinejad', 'Joydip Ghosh', 'Barry C. Sanders'] | Designing high-fidelity single-shot three-qubit gates: A machine learning approach | 633,323 |
Traditional iterative contraction based polygonal mesh simplification (PMS) algorithms usually require enormous amounts of main memory cost in processing large meshes. On the other hand, fast out-of-core algorithms based on the grid re-sampling scheme usually produce low quality output. In this paper, we propose a novel cache-based approach to large polygonal mesh simplification. The new approach introduces the use of a cache layer to accelerate external memory accesses and to reduce the main memory cost to constant. Through the analysis on the impact of heap size to the locality of references, a constant sized heap is suggested instead of a large greedy heap. From our experimental results, we find that the new approach is able to generate very good quality approximations efficiently with very low main memory cost. | ['Hung-Kuang Chen', 'Chin-Shyurng Fahn', 'Jeffrey J. P. Tsai', 'Ming-Bo Lin'] | A novel cache-based approach to large polygonal mesh simplification | 415,471 |
A test controller for BIST of Boundary Scan Boards is described. It consists of a test processor core, with an optimized architecture for controlling the board-level BST infrastructure, and a system-level testability bus interface, allowing the implementation of a hierarchical test strategy. Automatic test pattern generation for this dedicated processor simplifies the task of providing a board-level BIST solution. | ['J.S. Matos', 'F.S. Pinto', 'J.M.M. Ferreira'] | A boundary scan test controller for hierarchical BIST | 28,020 |
Multidimensional scaling (MDS) is a collection of data analytic techniques for constructing configurations of points from dissimilarity information about interpoint distances. Classical MDS assumes a fixed matrix of dissimilarities. However, in some applications, e.g., the problem of inferring 3-dimensional molecular structure from bounds on interatomic distances, the dissimilarities are free to vary, resulting in optimization problems with a spectral objective function. A perturbation analysis is used to compute first- and second-order directional derivatives of this function. The gradient and Hessian are then inferred as representers of the derivatives. This coordinate-free approach reveals the matrix structure of the objective and facilitates writing customized optimization software. Also analyzed is the spectrum of the Hessian of the objective. | ['Robert Michael Lewis', 'Michael W. Trosset'] | Sensitivity analysis of the strain criterion for multidimensional scaling | 325,513 |
The connection between the time-varying gap metric and two-block problems is utilized to obtain criteria for robust stabilization of linear, discrete-time, time-varying systems. In particular we give a formula for the optimal minimal angle for a stabilizable linear time-varying system and show that it has a maximally stabilizing controller. | ['Avraham Feintuch'] | The time-varying gap and coprime factor perturbations | 95,903 |
On the structure of the core of balanced games. | ['Anton Stefanescu'] | On the structure of the core of balanced games. | 782,723 |
The ASETA project (acronym for Adaptive Survey- ing and Early treatment of crops with a Team of Autonomous vehicles) is a multi-disciplinary project combining cooperating airborne and ground-based vehicles with advanced sensors and automated analysis to implement a smart treatment of weeds in agricultural fields. The purpose is to control and reduce the amount of herbicides, consumed energy and vehicle emissions in the weed detection and treatment process, thus reducing the environmental impact. The project addresses this issue through a closed loop cooperation among a team of unmanned aircraft system (UAS) and unmanned ground vehicles (UGV) with advanced vision sensors for 3D and multispectral imaging. This paper presents the scientific and technological challenges in the project, which include multivehicle estimation and guidance, het- erogeneous multi-agent systems, task generation and allocation, remote sensing and 3D computer vision. | ['Wajahat Kazmi', 'Morten Bisgaard', 'Francisco Jose Garcia-Ruiz', 'Karl Damkjær Hansen', 'Anders la Cour-Harbo'] | Adaptive Surveying and Early Treatment of Crops with a Team of Autonomous Vehicles | 499,203 |
The PACC Starter Kit is an eclipse-based development environment that combines a model-driven development approach with reasoning frameworks that apply performance, safety, and security analyses. These analyses predict runtime behavior based on specifications of component behavior and are accompanied by some measure of confidence. | ['James Ivers', 'Gabriel A. Moreno'] | Model-driven development with predictable quality | 35,739 |
Multi-exponentiation is a common and time-consuming operation in public-key cryptography. Its elliptic curve counterpart, called multi-scalar multiplication, is extensively used for digital signature verification. Several algorithms have been proposed to speed up those critical computations. They are based on simultaneously recoding a set of integers in order to minimize the number of general multiplications or point additions. When signed-digit recoding techniques can be used, as in the world of elliptic curves, Joint Sparse Form (JSF) and interleaving w-NAF are the most efficient algorithms. In this paper, a novel recoding algorithm for a pair of integers is proposed, based on a decomposition that mixes powers of 2 and powers of 3. The so-called Hybrid Binary-Ternary Joint Form requires fewer digits and is sparser than the JSF and the interleaving w-NAF. Its advantages are illustrated for elliptic curve double-scalar multiplication; the operation counts show a gain of up to 19%. | ['Jithra Adikari', 'Vassil S. Dimitrov', 'Laurent Imbert'] | Hybrid Binary-Ternary Joint Form and Its Application in Elliptic Curve Cryptography | 305,229 |
Quantum Circuits for the Unitary Permutation Problem | ['Stefano Facchini', 'Simon Perdrix'] | Quantum Circuits for the Unitary Permutation Problem | 635,247 |
We design the first truthful-in-expectation, constant-factor approximation mechanisms for NP -hard cases of the welfare maximization problem in combinatorial auctions with nonidentical items and in combinatorial public projects. Our results apply to bidders with valuations that are nonnegative linear combinations of gross-substitute valuations, a class that encompasses many of the most well-studied subclasses of submodular functions, including coverage functions and weighted matroid rank functions. Our mechanisms have an expected polynomial runtime and achieve an approximation factor of 1 − 1/ e . This approximation factor is the best possible for both problems, even for known and explicitly given coverage valuations, assuming P ≠ NP . Recent impossibility results suggest that our results cannot be extended to a significantly larger valuation class. Both of our mechanisms are instantiations of a new framework for designing approximation mechanisms based on randomized rounding algorithms. The high-level idea of this framework is to optimize directly over the (random) output of the rounding algorithm , rather than the usual (and rarely truthful) approach of optimizing over the input to the rounding algorithm. This framework yields truthful-in-expectation mechanisms, which can be implemented efficiently when the corresponding objective function is concave. For bidders with valuations in the cone generated by gross-substitute valuations, we give novel randomized rounding algorithms that lead to both a concave objective function and a (1 − 1/ e )-approximation of the optimal welfare. | ['Shaddin Dughmi', 'Tim Roughgarden', 'Qiqi Yan'] | Optimal Mechanisms for Combinatorial Auctions and Combinatorial Public Projects via Convex Rounding | 890,085 |
Aspect-oriented modeling is proposed to design the architecture of fault tolerant systems. Notations are introduced that support the separate and modularized design of functional and dependability aspects in UML class diagrams. This notation designates sensitive parts of the architecture and selected architecture patterns that implement common redundancy techniques. A model weaver is presented that constructs both the integrated model of the system and the dependability model on the basis of the analysis sub-models attached to the architecture patterns. In this way fault tolerance mechanisms can be systematically analyzed when they are integrated into the system. | ['P. Domokos', 'István Majzik'] | Design and analysis of fault tolerant architectures by model weaving | 1,333 |
The Effective Plant Area Index (PAIe) of the forest canopy is an important parameter in canopy reflectance modeling and validation. PAIe can be transformed to leaf area index (LAI) with a clumping index, but it is still very difficult to obtain its ground-truth value by in situ measurement. In this study, we measured the PAIe of 3 typical woodland sites in China by means of three indirect optical techniques: the LAI-2000 plant canopy analyzer, TRAC, and a digital fisheye camera. The 5 measured stands include Qinghai spruce, peach, poplar, willow and silver chain. In this paper, the forest canopy PAIe measured by those three instruments is compared first. We then propose a new approach that combines the three measurements to obtain a better PAIe estimate with a lower overall error. The method of minimizing the overall error is adopted to produce the PAIe of the sample plots in our study sites, which has a lower overall error compared with the PAIe measured by each individual instrument. This approach is also validated using computer-simulated wide-angle viewing pictures for which the true LAI/PAIe values are given. | ['Zhuo Fu', 'Jindi Wang', 'Jinling Song', 'Hongmin Zhou', 'Huaguo Huang', 'Baisong Chen'] | Comparison of three indirect field measuring methods for forest canopy leaf area index estimation | 338,276 |
The use of a tracking method for developing a directional-view display system with directional sound is described. The proposed system allows an individual to experience directional sound in a multi-view display environment. A projection-type display system is used, because a high definition display and large size display screen can be easily realized. We implemented a tracking system with an infrared camera and infrared light emitting diodes to track the viewers' positions. A viewing zone analysis that permits complete separation between neighboring view images to be calculated and experimental results for two observers are presented. | ['Youngmin Kim', 'Joonku Hahn', 'Young-Hoon Kim', 'Jonghyun Kim', 'Gilbae Park', 'Sung-Wook Min', 'Byoungho Lee'] | A Directional-View and Sound System Using a Tracking Method | 32,904 |
In this paper, we consider the problem of multi-parameter estimation in the presence of compound Gaussian clutter for cognitive radar using the variational Bayesian method. The advantage of the variational Bayesian approach is that the estimation of multivariate parameters is decomposed into problems of estimating univariate parameters by variational approximation, thus enabling analytically tractable approximate posterior densities in complex statistical models consisting of observed data, unknown parameters, and hidden variables. We derive the asymptotic Bayesian Cramér–Rao bounds and demonstrate by numerical simulations that the proposed approach leads to improved estimation accuracy over the expectation maximization method and the exact Bayesian method in the case of non-Gaussian nonlinear signal models and small data sample sizes. | ['Anish C. Turlapaty', 'Yuanwei Jin'] | Multi-Parameter Estimation in Compound Gaussian Clutter by Variational Bayesian | 746,342
TOWARDS PERFORMANCE PREDICTION FOR CLOUD COMPUTING ENVIRONMENTS BASED ON GOAL-ORIENTED MEASUREMENTS | ['Michael Hauck', 'Jens Happe', 'Ralf H. Reussner'] | TOWARDS PERFORMANCE PREDICTION FOR CLOUD COMPUTING ENVIRONMENTS BASED ON GOAL-ORIENTED MEASUREMENTS | 740,653 |
Recently, for an efficient and safe production process of aquaculture products, there has been a recurring demand for an environment monitoring system to efficiently measure and monitor the aquaculture farm environment. Considering the humid and spacious environment of aquaculture farms, the monitoring system should have a wireless data transmission capability. To automatically collect environmental data and transmit it to the server, a specially designed device with a proper data display feature is required. In this paper, we design and describe a monitoring system that measures and monitors the aquaculture farm's environment. The implementation of the Sensor Data Logger, the core part of the system, is also detailed. The monitoring system offers ubiquitous access to the measured data from either the internet or mobile phones. | ['Soonhee Han', 'Young-Man Kang', 'Kyehwa Park', 'Moonsuk Jang'] | Design of Environment Monitoring System for Aquaculture Farms | 243,363
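A minimal sketch of the data-logging loop described above follows; the server URL, the JSON field names, and the `read_sensors` stub are hypothetical placeholders, not the actual Sensor Data Logger firmware.

```python
import json
import time
import urllib.request

SERVER_URL = "http://example.com/api/readings"  # hypothetical endpoint

def read_sensors():
    # Placeholder for real sensor drivers (e.g. water temperature, dissolved oxygen, pH).
    return {"water_temp_c": 18.4, "dissolved_o2_mgl": 7.1, "ph": 7.8}

def post_reading(reading):
    # Send one timestamped JSON reading to the monitoring server.
    payload = json.dumps({"timestamp": time.time(), **reading}).encode("utf-8")
    req = urllib.request.Request(SERVER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

def main(period_s=60):
    while True:                      # sample, transmit, sleep
        try:
            post_reading(read_sensors())
        except OSError as err:       # keep logging even if the wireless link drops
            print("transmit failed:", err)
        time.sleep(period_s)
```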
We describe an architecture for coping with latency and asynchrony of multisensory events in interactive virtual environments. We propose to decompose multisensory interactions into a series of discrete, perceptually significant events, and structure the application architecture within this event-based context. We analyze the sources of latency, and develop a framework for event prediction and scheduling. Our framework decouples synchronization from latency, and uses prediction to reduce latency when possible. We evaluate the performance of the architecture using vision-based motion sensing and multisensory rendering using haptics, sounds, and graphics. The architecture makes it easy to achieve good performance using commodity off-the-shelf hardware. | ['Timothy Edmunds', 'Dinesh K. Pai'] | An event architecture for distributed interactive multisensory rendering | 460,351 |
A major component of a bit-time computer simulation program is the Boolean compiler. The compiler accepts the Boolean functions representing the simulated computer's digital circuits, and generates corresponding sets of machine instructions which are subsequently executed on the “host” computer. Techniques are discussed for increasing the sophistication of the Boolean compiler so as to optimize bit-time computer simulation. The techniques are applicable to any general-purpose computer. | ['Jesse H. Katz'] | Optimizing bit-time computer simulation | 523,559 |
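The core idea of such a Boolean compiler can be illustrated with a toy Python sketch: each gate equation is translated into a host bitwise instruction, so a single machine word evaluates many simulated bit-times in parallel. The netlist, the gate set, and the word width are illustrative assumptions, not the original compiler.

```python
# Toy Boolean "compiler": turns gate equations into compiled bitwise expressions,
# so one 64-bit host word carries 64 parallel simulations of each signal.
GATES = {                      # hypothetical netlist, listed in dependency order
    "n1": ("AND", ["a", "b"]),
    "n2": ("NOT", ["c"]),
    "out": ("OR", ["n1", "n2"]),
}
OPS = {"AND": "&", "OR": "|", "XOR": "^"}

def compile_netlist(gates):
    lines = []
    for name, (op, ins) in gates.items():
        if op == "NOT":
            expr = f"(~{ins[0]}) & MASK"          # mask keeps results inside the word width
        else:
            expr = f" {OPS[op]} ".join(ins)
        lines.append(f"{name} = {expr}")
    return compile("\n".join(lines), "<netlist>", "exec")   # host "machine code" analogue

def evaluate(code, inputs, word_bits=64):
    env = dict(inputs, MASK=(1 << word_bits) - 1)
    exec(code, env)                               # run the generated instruction stream
    return env

code = compile_netlist(GATES)
print(bin(evaluate(code, {"a": 0b1100, "b": 0b1010, "c": 0b0110})["out"]))
```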
Vegetation is an important part of terrestrial ecosystems. Although vegetation dynamics have explicit spatial and temporal dimensions, the study of the temporal process is in its infancy. Evaluation of temporal scaling behavior could provide a unique perspective for exploring the temporal nature of vegetation dynamics. In this study, the Global Inventory Modeling and Mapping Studies (GIMMS) Normalized Difference Vegetation Index (NDVI) was used to reflect vegetation dynamics, and the temporal scaling behavior of the NDVI in China was determined via detrended fluctuation analysis (DFA). Our main objectives were to reveal the temporal scaling behavior of NDVI time series and to understand variation among vegetation types. First, DFA revealed similar exponents, which ranged from 0.6 to 0.9, for all selected pixels, implying that a long-range correlation was generally present in the NDVI time series at the individual pixel scale. We then extended the analysis to all of China and found that 99.30% of the pixel exponents ranged from 0.5 to 1. These results suggest that the NDVI time series displays strong long-range correlation throughout most of China; however, the exponents exhibited regional variability. To explain these differences, we further analyzed the exponents for 12 vegetation types based on a vegetation map of China. All of the vegetation types exhibited well-defined long-range correlation, with exponents ranging from 0.7189 to 0.8436. For all vegetation types, the maximum and average value and standard deviation of the exponents decreased with increasing annual maximum NDVI values, suggesting that low vegetation density is much more sensitive to external factors. These findings may be useful for understanding vegetation dynamics as a complex, temporally varying phenomenon. | ['Xiaoyi Guo', 'Hongyan Zhang', 'Tao Yuan', 'Jianjun Zhao', 'Zhenshan Xue'] | Detecting the Temporal Scaling Behavior of the Normalized Difference Vegetation Index Time Series in China Using a Detrended Fluctuation Analysis | 445,718 |
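For readers unfamiliar with DFA, the following is a minimal Python sketch of first-order detrended fluctuation analysis on a synthetic series; the window scales, the toy signal, and the linear detrending are illustrative choices, not the paper's exact processing chain.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: scaling exponent alpha in F(n) ~ n^alpha."""
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    fluctuations = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:                          # linear detrend within each window
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    alpha = np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
    return alpha

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=4000)) * 0.01 + rng.normal(size=4000)  # toy "NDVI-like" signal
print(dfa_exponent(series, scales=[16, 32, 64, 128, 256]))
```

An exponent between 0.5 and 1, as reported for most pixels above, indicates long-range correlation; 0.5 would correspond to uncorrelated noise.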
In this analysis paper, we investigate the effect of phonetic clustering based on place and manner of articulation for the enhancement of throat-microphone speech through spectral envelope mapping. Place of articulation (PoA)- and manner of articulation (MoA)-dependent GMM-based spectral envelope mapping schemes have been investigated using the reflection coefficient representation of the linear prediction model. Reflection coefficients are expected to localize mapping performance within the concatenation-of-lossless-tubes model of the vocal tract. In experimental studies, we evaluate spectral mapping performance within the PoA and MoA clusters using the log-spectral distortion (LSD) and as a function of reflection coefficient mapping using the mean-square error distance. Our findings indicate that the highest degradations after the spectral mapping occur with the stops and liquids of the MoA and the velar and alveolar classes of the PoA. The MoA classification attains higher improvements than the PoA classification. | ['M. A. Tugtekin Turan', 'Engin Erzin'] | A phonetic classification for throat microphone enhancement | 310,649
Full-duplex communication (FDC) can potentially double the network capacity by allowing a device to transmit and receive simultaneously on the same frequency band. In this study, a novel resource allocation and user scheduling algorithm is proposed to maximise the network throughput for a cellular network with full-duplex (FD) base stations (BSs). The authors consider that FDC is utilised at the BS with imperfect self-interference (SI) cancellation while user devices only work in the traditional half-duplex (HD) way. In addition, to potentially cancel co-channel interference caused by other users, the opportunistic interference cancellation (OIC) technique is applied at the user side. Since FDC does not always perform better than HD due to residual SI (RSI), a joint mode selection, user scheduling, and channel allocation problem is formulated to maximise the system throughput. The optimisation problem is non-convex and NP-hard, so a suboptimal heuristic algorithm with low computational complexity is proposed. Numerical results demonstrate that user diversity gain, FD gain, and OIC gain can each be achieved by the proposed algorithm. The performance of FDC depends on the intensity of RSI and the distribution of user devices. | ['Guanding Yu', 'Dingzhu Wen', 'Fengzhong Qu'] | Joint user scheduling and channel allocation for cellular networks with full duplex base stations | 647,326
This paper presents an outline of human-human interaction to establish a framework for understanding how a behaviour-based approach can be developed in the design of a human-robot interaction strategy. To approach the conceptual design guidelines for an interactive human-robot strategy, the mathematical model of human behaviour while transferring a compliant object to a receiver without any type of communication has been strategically analysed. Auto-Regressive Moving Average with Exogenous Input (ARMAX) system identification has been applied to identify the human arm model. A set of experiments has been designed (based on a Box–Behnken design), along with the influence variables affecting the human forces, which consist of mass, friction and target displacement. The estimated ARMAX models were shown to match the actual experimental data well, with best-fit percentages of the human force profiles between 88.73% and 97.2%; the proposed models can therefore be used to represent the human arm characteristics effectively. | ['Paramin Neranon', 'Robert Bicker'] | Human-human interaction using a behavioural control strategy | 508,360
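A hedged sketch of this style of ARMAX identification is shown below, using statsmodels' ARIMA class with an exogenous regressor as a stand-in for a dedicated ARMAX routine; the simulated force/displacement data, the model orders, and the best-fit metric are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical data: measured human hand force (output) driven by target displacement (input).
rng = np.random.default_rng(1)
u = np.sin(np.linspace(0, 20, 400))                        # exogenous input (displacement)
force = np.zeros(400)
for k in range(2, 400):                                    # toy second-order response plus noise
    force[k] = 1.2 * force[k - 1] - 0.5 * force[k - 2] + 0.8 * u[k - 1] + 0.05 * rng.normal()

# ARMAX-style fit: ARMA(2, 1) dynamics with the displacement as an exogenous regressor.
model = ARIMA(force, exog=u, order=(2, 0, 1))
result = model.fit()

fit = result.fittedvalues
best_fit_pct = 100 * (1 - np.linalg.norm(force - fit) / np.linalg.norm(force - force.mean()))
print(result.params)
print(f"best-fit: {best_fit_pct:.1f}%")   # same style of figure as the 88.73%-97.2% range above
```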
Short Message Service (SMS) over circuit switched (CS), Unstructured Supplementary Service Data (USSD) and GPRS are the main bearers supporting different Machine Type Communications (MTC) over a 2G network. Communications from a large number of Machine-to-Machine (M2M) terminals may have a significant impact on worldwide deployed 2G networks. This paper compares the efficiency of each of the 2G bearers when M2M terminals perform mobile originated (MO) data calls under a predefined M2M application. The efficiency of each bearer is evaluated at Layers 2 (L2) and 3 (L3) by the ratio of user data payload to the total L3 messages and by the number of M2M terminals the radio interface can support within one hour. Results show that USSD is the most efficient bearer at L3. When L2 messages on the radio interface are considered, GPRS proves to be the most efficient bearer. | ['Fang Ming', 'Xing Zhu', 'Miguel Torres', 'Luis Anaya', 'Leo Patanapongpibul'] | GSM/GPRS Bearers Efficiency Analysis for Machine Type Communications | 229,699
PR-OWL 2 RL - A Language for Scalable Uncertainty Reasoning on the Semantic Web information. | ['Laécio L. Santos', 'Rommel N. Carvalho', 'Marcelo Ladeira', 'Weigang Li', 'Gilson Libório Mendes'] | PR-OWL 2 RL - A Language for Scalable Uncertainty Reasoning on the Semantic Web information. | 781,796 |
Electricity Demand and Population Dynamics Prediction from Mobile Phone Metadata | ['Brian Wheatman', 'Alejandro Noriega', 'Alex Pentland'] | Electricity Demand and Population Dynamics Prediction from Mobile Phone Metadata | 858,580 |
This paper presents an FPGA-based real-time lane detection system for automotive applications. To reduce the computational complexity, the conventional Canny-Hough lane detection algorithm is modified to achieve real-time processing. The prototype design is realized on a commercial FPGA platform, and the processing rate is enhanced by 41% compared to the previous detection algorithm. | ['Seokha Hwang', 'Youngjoo Lee'] | FPGA-based real-time lane detection for advanced driver assistance systems | 976,932
Markov sources have been shown to be efficient pseudo-random pattern generators in SCAN-BIST. In this paper we give a new design for Markov sources. The new design first reduces the ATPG test set by removing the test cubes with low sampling probability and then produces test sequences based on a unique dynamic transition selection technique. Dynamic transition selection offers four transition options namely Markov source, inverted Markov source, fixed 0 and fixed 1. Experimental results show that the proposed design significantly reduces the test length to achieve 100% stuck-at fault coverage at the expense of a modest increase in the number of gates required to implement the test pattern generator. | ['Aftab A. Farooqi', 'Richard Gale', 'Sudhakar M. Reddy', 'Brian Nutter', 'Chris Monico'] | Markov source based test length optimized SCAN-BIST architecture | 483,212 |
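A toy Python sketch of the dynamic transition selection idea follows: each generated pattern is drawn from one of the four options (Markov source, inverted Markov source, fixed 0, fixed 1). The transition probability, pattern length, and per-pattern option choice are illustrative, not the optimized values from the paper.

```python
import random

def markov_bit(prev, p_stay):
    """One-step binary Markov source: keep the previous bit with probability p_stay."""
    return prev if random.random() < p_stay else 1 - prev

def generate_pattern(length, option, p_stay=0.8, seed_bit=0):
    """Dynamic transition selection among the four stimulus options."""
    if option == "fixed0":
        return [0] * length
    if option == "fixed1":
        return [1] * length
    bits, prev = [], seed_bit
    for _ in range(length):
        prev = markov_bit(prev, p_stay)
        bits.append(prev if option == "markov" else 1 - prev)  # "inverted" flips the stream
    return bits

random.seed(7)
for opt in ("markov", "inverted", "fixed0", "fixed1"):
    print(opt, generate_pattern(12, opt))
```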
Effective management of ICT (information and communications technology) and cooling is critical in modern data centres for high energy efficiency. This survey paper gives an overview of the joint optimization between ICT and cooling management under conventional air-cooled technology in the data centre. We first review the enabling techniques of ICT and cooling management in the data centre, which provide the opportunity to dynamically control server utilization and operate cooling equipment. We then present the coupling models between ICT and cooling management, which are the basis of the optimization approaches for green data centres. The joint optimization of ICT and cooling management is considered under a set of performance metrics, including thermal requirements, power consumption, and application delay. We summarize the workload scheduling algorithms designed based on the optimization approaches. We also present some data centre testbeds. Finally, we discuss future trends in data centre management. | ['Weiwen Zhang', 'Yonggang Wen', 'Y. W. Wong', 'Kok Chuan Toh', 'Chiu-Hao Chen'] | Towards Joint Optimization Over ICT and Cooling Systems in Data Centre: A Survey | 703,420
Global Navigation Satellite Systems (GNSS) like the Global Positioning System (GPS) are susceptible to electronic interference which threatens the reliability of the system's outputs, precise time and localization. Interference comes from natural and predatory sources in the form of increased in-band noise and structured attacks. The structured attack, called spoofing, is designed to trick the receiver into reporting an incorrect navigation solution as if it were accurate. Modern automobiles are becoming more reliant on GPS for localization, automation, and safety. Vehicles are also equipped with a variety of sensors (e.g. Radars, Lidars, wheel encoders) that provide situational awareness which may be leveraged in a GPS spoofing detection scheme. The proposed spoofing detection and mitigation system relies on an existing Cooperative Adaptive Cruise Control (CACC) system to provide inter-vehicle ranging and data sharing. The inter-vehicle ranges are used to detect a spoofing attack, and the mitigation system removes the attacking signal from the incoming data stream. The spoofing detection and removal system is tested using data recorded with a fielded CACC system on two commercial trucks. Intermediate frequency (IF) GPS data is collected during the test. Since live-sky spoofing is illegal, the IF data recording allows for post-process spoofing injection in a controlled environment. In post-processing, the spoofing signal is shown to “capture” the onboard GPS receiver. The proposed system uses the spoofed IF GPS data along with recorded observables from the CACC system to detect and remove the attack. | ['Nathaniel Carson', 'Scott M. Martin', 'Joshua Starling', 'David M. Bevly'] | GPS spoofing detection and mitigation using Cooperative Adaptive Cruise Control system | 869,978
Control of underwater vehicles is a thoroughly investigated subject but still an open problem, because of environmental disturbances, the highly nonlinear behaviour of the vehicles, the complexity of the vehicle hydrodynamics, etc. In this paper, we are interested in depth control of the bioinspired U-CAT AUV in real operating conditions. Two depth control schemes are proposed, including a PID controller and a nonlinear RISE feedback controller. The proposed controllers are implemented on the robot, then tested in an open water environment. The obtained results are presented and discussed through different experimental scenarios to illustrate the efficiency of the proposed controllers, not only in successfully controlling the depth, but also in being robust against external disturbances and parameter uncertainties. We conclude that the RISE controller is more robust against environmental disturbances and outperforms the PID controller when the robot is tested in real operating conditions. | ['Ahmed Chemori', 'Keijo Kuusmik', 'Taavi Salumae', 'Maarja Kruusmaa'] | Depth control of the biomimetic U-CAT turtle-like AUV with experiments in real operating conditions | 809,422
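To illustrate the simpler of the two schemes, below is a minimal discrete PID depth-control sketch closed around a toy first-order heave model; the gains, plant parameters, and sampling period are hypothetical and unrelated to the U-CAT's actual dynamics or tuning.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller; `state` carries the integral and last error."""
    integral = state["i"] + error * dt
    derivative = (error - state["e"]) / dt
    state.update(i=integral, e=error)
    return kp * error + ki * integral + kd * derivative

def simulate(depth_ref=2.0, dt=0.05, steps=600):
    depth, vel = 0.0, 0.0
    state = {"i": 0.0, "e": 0.0}
    for _ in range(steps):
        u = pid_step(depth_ref - depth, state, kp=8.0, ki=0.5, kd=4.0, dt=dt)
        # Toy heave dynamics: thrust minus linear drag, unit "mass" (illustrative only).
        acc = u - 1.5 * vel
        vel += acc * dt
        depth += vel * dt
    return depth

print(simulate())  # should settle near the 2.0 m reference
```

A RISE controller would replace the PID law with a continuous robust feedback term plus the integral of a signum-like error term, which is what gives the reported robustness to disturbances; the plant loop would stay the same.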
This paper combines Markov Random Fields and subspaces to perform object tracking. We first sample particles using a particle filter and then divide each particle into patches. For each particle, we optimize each patch's position and use a Markov Random Field to represent the structure of the patches, including each patch's own position and the relations between neighboring patches. We also evaluate each patch and the whole sub-image according to their respective subspaces. Experimental results demonstrate the efficiency of our method. | ['Lin Ma', 'Weiming Hu'] | Using Markov Random Field and subspaces to perform object tracking | 921,254
Motivation: Automatic error correction of high-throughput sequencing data can have a dramatic impact on the amount of usable base pairs and their quality. It has been shown that the performance of tasks such as de novo genome assembly and SNP calling can be dramatically improved after read error correction. While a large number of methods specialized for correcting substitution errors as found in Illumina data exist, few methods for the correction of indel errors, common to technologies like 454 or Ion Torrent, have been proposed. Results: We present Fiona, a new stand-alone read error–correction method. Fiona provides a new statistical approach for sequencing error detection and optimal error correction and estimates its parameters automatically. Fiona is able to correct substitution, insertion and deletion errors and can be applied to any sequencing technology. It uses an efficient implementation of the partial suffix array to detect read overlaps with different seed lengths in parallel. We tested Fiona on several real datasets from a variety of organisms with different read lengths and compared its performance with state-of-the-art methods. Fiona shows a constantly higher correction accuracy over a broad range of datasets from 454 and Ion Torrent sequencers, without compromise in speed. Conclusion: Fiona is an accurate parameter-free read error–correction method that can be run on inexpensive hardware and can make use of multicore parallelization whenever available. Fiona was implemented using the SeqAn library for sequence analysis and is publicly available for download at http://www.seqan.de/projects/fiona. Contact: [email protected] or [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. | ['M. Schulz', 'David Weese', 'Manuel Holtgrewe', 'Viktoria Dimitrova', 'Sijia Niu', 'Knut Reinert', 'Hugues Richard'] | Fiona: A Parallel and Automatic Strategy for Read Error Correction | 508,167 |
The adoption of new imaging modalities offers new challenges for the modelling of image formation and image restoration. Millimeter wave imaging, a totally passive method for imaging at microwave frequencies, requires statistical models that are different from normal visible optical assumptions. We examine some of the relevant issues and derive a non-linear Bayes estimate of the object, given a passive millimeter wave image. | ['Bobby R. Hunt', 'David DeKruger'] | Bayesian restoration of millimeter wave imagery | 376,752
In this paper, we describe a model whose focus is on data visualization. We assume the data are provided in adjacency format, as is frequently the case in practice. As an example, individuals who buy item a are likely to buy or consider buying items b, c, and d, also. We present a simple technique for obtaining distance measures between data points. Armed with the resulting distance matrix, we show how Sammon maps can be used to visualize the data points. An application to the college selection process is discussed in detail. | ['Edward Condon', 'Bruce L. Golden', 'Shreevardhan Lele', 'S. Raghavan', 'Edward A. Wasil'] | A visualization model based on adjacency data | 42,305 |
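A small Python sketch of this pipeline is given below: a distance matrix is derived from adjacency counts with a simple "more co-occurrence means closer" transform, then laid out in 2-D by a plain gradient-descent Sammon mapping. Both the distance transform and the optimizer settings are illustrative assumptions rather than the paper's exact technique.

```python
import numpy as np

def sammon(D, n_iter=1000, lr=0.05, seed=0):
    """Plain gradient-descent Sammon mapping of a symmetric distance matrix D into 2-D."""
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=0.1, size=(n, 2))
    c = D[np.triu_indices(n, 1)].sum()                  # Sammon normalization constant
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1)) + np.eye(n)    # low-dim distances (guard the diagonal)
        ratio = (D - d) / (d * (D + np.eye(n)))
        np.fill_diagonal(ratio, 0.0)
        grad = (-2.0 / c) * (ratio[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad                                  # descend the Sammon stress
    return Y

# Hypothetical adjacency counts: how often item i is bought/considered together with item j.
A = np.array([[0, 9, 7, 1],
              [9, 0, 6, 2],
              [7, 6, 0, 1],
              [1, 2, 1, 0]], dtype=float)
D = 10.0 / (A + 1.0)          # simple transform: more co-occurrence = smaller distance
np.fill_diagonal(D, 0.0)
print(sammon(D))
```

A production implementation would typically use Sammon's second-order update rule rather than plain gradient descent, but the small example keeps the idea visible.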
We present a system that constructs “implicit shape models” for classes of rigid 3D objects and utilizes these models to estimate the pose of class instances in single 2D images. We use the framework of implicit shape models to construct a voting procedure that allows for 3D transformations and projection and accounts for self-occlusion. The model comprises a collection of learned features, their 3D locations, their appearances in different views, and the set of views in which they are visible. We further learn the parameters of a model from training images by applying a method that relies on factorization. We demonstrate the utility of the constructed models by applying them in pose estimation experiments to recover the viewpoint of class instances. | ['Mica Arie-Nachimson', 'Ronen Basri'] | Constructing implicit 3D shape models for pose estimation | 912,494
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price -- exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain. | ['Milos Hauskrecht'] | Value-function approximations for partially observable Markov decision processes | 515,234 |
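As a concrete example of the heuristic family surveyed here, the sketch below implements the QMDP approximation: value-iterate the underlying fully observable MDP, then act greedily on the belief-weighted Q-values. The two-state, two-action problem and its parameters are hypothetical; QMDP deliberately ignores the value of information, which is the price paid for its speed.

```python
import numpy as np

def qmdp(T, R, gamma=0.95, iters=200):
    """Q_MDP approximation: value iteration on the fully observable MDP.
    T[a, s, s'] are transition probabilities, R[a, s] immediate rewards."""
    n_actions, n_states, _ = T.shape
    V = np.zeros(n_states)
    Q = np.zeros((n_actions, n_states))
    for _ in range(iters):
        Q = R + gamma * np.einsum("asn,n->as", T, V)   # Q[a, s]
        V = Q.max(axis=0)
    return Q

def act(belief, Q):
    """Choose the action maximizing the belief-weighted Q-values."""
    return int(np.argmax(Q @ belief))

# Hypothetical 2-state, 2-action problem (e.g. "target left / target right").
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, -1.0],
              [0.0,  0.0]])
Q = qmdp(T, R)
print(act(np.array([0.7, 0.3]), Q))
```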
The preclinical development of antitumor drugs greatly benefits from the availability of models capable of predicting tumor growth as a function of the drug administration schedule. For being of practical use, such models should be simple enough to be identifiable from standard experiments conducted on animals. In the present paper, a stochastic model is derived from a set of minimal assumptions formulated at cellular level. Tumor cells are divided in two groups: proliferating and nonproliferating. The probability that a proliferating cell generates a new cell is a function of the tumor weight. The probability that a proliferating cell becomes nonproliferating is a function of the plasma drug concentration. The time-to-death of a nonproliferating cell is a random variable whose distribution reflects the nondeterministic delay between drug action and cell death. The evolution of the expected value of tumor weight obeys two differential equations (an ordinary and a partial differential one), whereas the variance is negligible. Therefore, the tumor growth dynamics can be well approximated by the deterministic evolution of its expected value. The tumor growth inhibition model, which is a lumped parameter model that in the last few years has been successfully applied to several antitumor drugs, is shown to be a special case of the minimal model presented here. | ['Paolo Magni', 'Massimiliano Germani', 'G. De Nicolao', 'G. Bianchini', 'M. Simeoni', 'Italo Poggesi', 'Maurizio Rocchetti'] | A Minimal Model of Tumor Growth Inhibition | 110,074 |
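A hedged numerical sketch of a minimal model of this type is given below: proliferating cells feed a short chain of transit compartments that approximates the random delay between drug action and cell death, in the spirit of tumor-growth-inhibition models. The growth law, parameter values, and drug-concentration profile are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np
from scipy.integrate import odeint

def tumor_ode(x, t, lam, k2, k1, conc):
    """x = [proliferating weight, three transit compartments of nonproliferating cells]."""
    p, z1, z2, z3 = x
    w = p + z1 + z2 + z3                       # total tumor weight
    c = conc(t)                                # plasma drug concentration
    dp = lam * p / (1.0 + (lam / 0.5) * w) - k2 * c * p   # growth slowed by total weight
    dz1 = k2 * c * p - k1 * z1                 # drug-hit cells enter the death chain
    dz2 = k1 * (z1 - z2)
    dz3 = k1 * (z2 - z3)                       # cells leaving z3 are considered dead
    return [dp, dz1, dz2, dz3]

conc = lambda t: 5.0 if 7.0 <= t <= 14.0 else 0.0   # hypothetical one-week treatment window
t = np.linspace(0.0, 40.0, 400)
sol = odeint(tumor_ode, [0.05, 0.0, 0.0, 0.0], t, args=(0.3, 0.02, 0.8, conc))
print("tumor weight at day 40:", sol[-1].sum())
```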
In audio event classification and detection research, the representation of the audio itself is important. Many researchers have tried to apply Deep Belief Networks (DBNs) to learn new representations of the audio. The mel filter-bank feature, which is obtained based on the mel scale, is commonly used as the low-level representation of the audio in the pre-processing stage of a DBN. However, the mel bands used in the mel filter-bank feature may not be sufficient for a comprehensive representation of the diverse audio events in the real world, which makes it difficult for a DBN to learn good audio features. In this paper, two steps are taken to explore and tackle the problem. In the first step, we compare the effects of different arrangements of frequency bands on DBN feature learning in audio event recognition. The arrangements of frequency bands include mel bands, bark bands, linear bands and pyramid bands. In the second step, in order to utilize the different classification capabilities of the DBN features on different audio events, we adopt the Adaboost algorithm to fuse them. We conduct experiments on real datasets collected from the findsound website, and the results verify that our proposed audio event classification system, which uses diverse features selected by Adaboost from all sets of DBN features, outperforms one using only a single kind of DBN feature set. | ['Feng Guo', 'Xiaoou Chen', 'Deshun Yang'] | Audio event recognition based on DBN features from multiple filter-bank representations | 566,425
RegulonDB is a database storing the biological information behind the transcriptional regulatory network (TRN) of the bacterium Escherichia coli. It is one of the key bioinformatics resources for Systems Biology investigations of bacterial gene regulation. Like most biological databases, the content drifts with time, both due to the accumulation of new information and due to refinements in the underlying biological concepts. Conclusions based on previous database versions may no longer hold. Here, we study the change of some topological properties of the TRN of E. coli, as provided by RegulonDB across 16 versions, as well as a simple index, digital control strength, quantifying the match between gene expression profiles and the transcriptional regulatory networks. While many of the network characteristics change dramatically across the different versions, the digital control strength remains rather robust and in tune with previous results for this index. Our study shows that: (i) results derived from network topology should, when possible, be studied across a range of database versions, before detailed biological conclusions are derived, and (ii) resorting to simple indices, when interpreting high-throughput data from a network perspective, may help achieve robustness of the findings against variation of the underlying biological information. Database URL: www.regulondb.ccg.unam.mx | ['Moritz Emanuel Beber', 'Georgi Muskhelishvili', 'Marc-Thorsten Hütt'] | Effect of database drift on network topology and enrichment analyses: a case study for RegulonDB. | 711,858
EcosimPro and its EL Object-Oriented Modeling Language | ['Alberto Jorrín', 'César de Prada', 'Pedro Cobas'] | EcosimPro and its EL Object-Oriented Modeling Language | 659,857 |
In this paper, we propose a new method for object tracking that is robust to intersections with other objects of similar appearance and to large rotations of the camera. The method uses 3D information of the feature points and the camera position obtained by an image-based localization method. The movement information of the camera is used in the prediction process and in the likelihood computation. Furthermore, the method extracts foreground objects by a homography transformation. The result is used in the likelihood function and in the process that judges whether the target object is occluded by a background object. The proposed method can track the target object robustly when the camera rotates by a large amount and when the target object is occluded by a background object or another object. Results are demonstrated by experiments on real videos. | ['Shinji Fukui', 'Sou Hayakawa', 'Yuji Iwahori', 'Tsuyoshi Nakamura', 'Manas Kamal Bhuyan'] | Particle Filter Based Tracking with Image-based Localization☆ | 871,725
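For context, the sketch below shows one cycle of a generic bootstrap particle filter in which the predicted particle positions are compensated by the known camera motion before weighting and resampling; the Gaussian likelihood and the toy scene are placeholders, not the paper's homography-based likelihood or occlusion test.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, camera_motion, likelihood, noise_sd=2.0):
    """One predict/update/resample cycle of a bootstrap particle filter.
    `camera_motion` compensates the predicted 2-D positions for ego-motion;
    `likelihood(p)` scores a particle against the current frame (placeholder)."""
    # Predict: shift by the (known) camera motion and diffuse.
    particles = particles + camera_motion + rng.normal(scale=noise_sd, size=particles.shape)
    # Update: weight particles by the observation likelihood.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    # Resample: systematic resampling keeps the particle count constant.
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]
    weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy run: the "true" target sits at (50, 40); the likelihood is a Gaussian bump there.
target = np.array([50.0, 40.0])
like = lambda p: np.exp(-np.sum((p - target) ** 2) / (2 * 5.0 ** 2))
particles = rng.uniform(0, 100, size=(300, 2))
weights = np.full(300, 1.0 / 300)
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, np.zeros(2), like)
print(particles.mean(axis=0))   # should end up close to the target position
```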
How Secure and Quick is QUIC? Provable Security and Performance Analyses. | ['Robert Lychev', 'Samuel Jero', 'Alexandra Boldyreva', 'Cristina Nita-Rotaru'] | How Secure and Quick is QUIC? Provable Security and Performance Analyses. | 793,445 |
Improving customer satisfaction and implementing electronic customer relationship management (e-CRM) can help banks achieve their financial goals. A literature review shows a lack of investigation into the relationship between e-CRM and brand personality. In this research, the influence of e-CRM on the online brand personality of the website has been examined using structural equation modelling (SEM) in Mellat Bank, a well-known bank in Iran. The results imply that there is a strong positive relationship between e-CRM services and brand personality. | ['Arash Shahin', 'Mahshid Gharibpoor', 'Shirin Teymouri', 'Elham Bagheri Iraj'] | Studying the influence of e-CRM on web-based brand personality - the case of Mellat Bank | 337,556
Acquisition of English speech rhythm by monolingual children. | ['Mikhail Ordin', 'Leona Polyanskaya'] | Acquisition of English speech rhythm by monolingual children. | 799,139 |
Current routing approaches in wireless sensor and actor networks (WSANs) lack unification across different traffic patterns because they are designed separately for sensor-actor and actor-actor communications. In this paper, we explore the capabilities and compounding advantages of directional antennas and actors and propose a unified routing protocol supporting arbitrary traffic in WSANs. The proposed routing protocol uses actors as the main routing anchors as much as possible, because they have sufficient energy and computing power, and uses directional anycast routing to reduce the total energy consumption of the overall network. The performance of this routing protocol is compared with that of conventional protocols by simulation. | ['Ngoc-Thanh Dinh', 'Younghan Kim'] | Directional anycast routing in wireless sensor and actor networks | 338,914
Heterogeneous cellular networks (HetCNets) offer a promising solution to cope with the current cellular coverage crunch. Due to the large transmit power disparity, under the maximum power received (MPR) association scheme a larger number of users are associated with the macro-cell BS (MBS) than with small-cell BSs (SBSs). Therefore, an imbalanced load distribution arises across the HetCNet. Hence, using cell range expansion-based cell association, we can balance the load of the congested MBS. However, under the MPR association scheme, user offloading leads to two challenges: 1) macro-cell interference, in which the MBS interferes with the offloaded users, and 2) coupled downlink-uplink cell association, in which a random user associates with a single tier’s base station (BS) in both the uplink (UL) and downlink (DL) directions. This paper aims to address these problems while considering a two-tier scenario consisting of small-cell and macro-cell tiers. For MBS interference mitigation, we employ a reverse frequency allocation (RFA) scheme. Besides coupled DL–UL association (Co-DUA), this paper also highlights the notion of decoupled DL–UL association (De-DUA). In De-DUA, a random user associates with two different tiers’ BSs, i.e., with one tier’s BS in the DL direction and with the other tier’s BS in the UL direction. Our results illustrate that, in comparison with Co-DUA, De-DUA combined with RFA achieves better coverage performance. | ['Fazal Muhammad', 'Ziaul Haq Abbas', 'Ghulam Abbas', 'Lei Jiao'] | Decoupled Downlink-Uplink Coverage Analysis with Interference Management for Enriched Heterogeneous Cellular Networks | 892,394
A widespread network involving cortical and subcortical brain structures forms the neural substrate of human spatial navigation. Most studies investigating plasticity of this network have focused on the hippocampus. Here, we investigate age differences in cortical thickness changes evoked by four months of spatial navigation training in 91 men aged 20-30 or 60-70 years. Cortical thickness was automatically measured before, immediately after, and four months after termination of training. Younger as well as older navigators evidenced large improvements in navigation performance that were partly maintained after termination of training. Importantly, training-related cortical thickening in left precuneus and paracentral lobule were observed in young navigators only. Thus, spatial navigation training appears to affect cortical brain structure of young adults, but there is reduced potential for experience-dependent cortical alterations in old age. | ['Elisabeth Wenger', 'Sabine Schaefer', 'Hannes Noack', 'Simone Kühn', 'Johan Mårtensson', 'Hans-Jochen Heinze', 'Emrah Düzel', 'Lars Bäckman', 'Ulman Lindenberger', 'Martin Lövdén'] | Cortical thickness changes following spatial navigation training in adulthood and aging | 353,106
Interacción en Tiempo Real para un Sistema de Escultura Virtual | ['Alejandro León', 'Francisco Velasco', 'Francisco Soler'] | Interacción en Tiempo Real para un Sistema de Escultura Virtual | 632,032 |
Preemptive ReduceTask Scheduling for Fair and Fast Job Completion | ['Yandong Wang', 'Jian Tan', 'Weikuan Yu', 'Li Zhang', 'Xiaoqiao Meng', 'Xiaobing Li'] | Preemptive ReduceTask Scheduling for Fair and Fast Job Completion | 601,573 |
In this paper, the effect of gate tunneling current in ultra-thin gate oxide MOS devices of effective length (L_eff) of 25 nm (oxide thickness = 1.1 nm), 50 nm (oxide thickness = 1.5 nm) and 90 nm (oxide thickness = 2.5 nm) is studied using device simulation. Overall leakage in a stack of transistors is modeled and the opportunities for leakage reduction in the standby mode of operation are explored for scaled technologies. It is shown that, as the contribution of gate leakage relative to the total leakage increases with technology scaling, traditional techniques become ineffective in reducing overall leakage current in a circuit. A novel technique of input vector selection based on the relative contributions of gate and subthreshold leakage to the overall leakage is proposed for reducing total leakage in a circuit. This technique results in 44% savings in total leakage in 50-nm devices compared to the conventional stacking technique. | ['Saibal Mukhopadhyay', 'Cassondra Neau', 'Riza Tamer Cakici', 'Amit Agarwal', 'Chris H. Kim', 'Kaushik Roy'] | Gate leakage reduction for scaled devices using transistor stacking | 89,694
User mobility is rapidly becoming an important and popular feature in today's networks. This is especially evident in wireless/cellular environments. While useful and desirable, user mobility raises a number of important security-related issues and concerns. One of them is the issue of tracking a mobile user's movements and current whereabouts. Ideally, no entity other than the user himself and a responsible authority in the user's home domain should know either the real identity or the current location of the mobile user. At present, environments supporting user mobility either do not address the problem at all or base their solutions on the specific hardware capabilities of the user's personal device, e.g., a cellular telephone. This paper discusses a wide range of issues related to anonymity in mobile environments, reviews current state-of-the-art approaches and proposes several potential solutions. Solutions vary in complexity, degree of protection and assumptions about the underlying environment. | ['Amir Herzberg', 'Hugo Krawczyk', 'Gene Tsudik'] | On Travelling Incognito | 922,202
In immersive virtual reality simulations in which users are immersed into full-body humanoids, it is typically the case that the humanoid size and proportions have to match those of the immersed user for the immersion to be realistic. However, a key aim of these simulations may be to study how users of different body sizes and proportions interact with the environment. We have developed a real-time motion retargeting method by which users can be immersed into different humanoids and other kinematically similar avatars, and still experience a realistic feeling of immersion. A set of experiments aimed at studying the realism of the immersion indicates that users indeed experience a realistic sense of immersion into different humanoids. | ['Weiwei Zhao', 'Viswanathan Madhavan'] | Realistic immersion of a user into humanoids of different sizes and proportions in immersive virtual reality | 172,928
An explicit lattice realization of a non-Abelian topological memory is presented. The correspondence between logical and physical states is seen directly by use of the stabilizer formalism. The resilience of the encoded states against errors is studied and compared to that of other memories. A set of non-topological operations are proposed to manipulate the encoded states, resulting in universal quantum computation. This work provides insight into the non-local encoding non-Abelian anyons provide at the microscopical level, with an operational characterization of the memories they provide. | ['James R. Wootton', 'Ville Lahtinen', 'Jiannis K. Pachos'] | Universal quantum computation with a non-abelian topological memory | 523,729 |
Evaluation of a Hyperlinked Consumer Health Dictionary for reading EHR notes. | ['Laura Slaughter', 'Karl Øyri', 'Erik Fosse'] | Evaluation of a Hyperlinked Consumer Health Dictionary for reading EHR notes. | 551,105 |
Walking fruit flies are attracted by nearby objects. They estimate the distance to these objects by the parallax motion of their images on the retina. Here we provide evidence from robot simulations that distance is assessed by motion integration over large parts of the visual field and time periods of 0.5 s to 2 s. The process in flies is not selective for image motion created by the self-motion of the fly but is also sensitive to object motion and to the pattern contrast of objects. Added visual motion (e.g. oscillations) makes objects more attractive than their stationary counterparts. Front-to-back motion, the natural parallax motion on the eyes of a forward-translating fly, is preferred. A group of several more distant objects can be more attractive than one close object. Objects that are most attractive in the fronto-lateral eye-field act as a deterrent in the rear visual field. Time to course changes doubles from front to rear. A cybernetic model based on weighted motion integration in just four compartments (frontal to ±100° lateral and ±100° to ±160° in the rear) can reproduce fly behavior. Implemented on a freely moving camera-equipped robot with panoramic vision, it can reproduce various aspects of the orientation behavior of freely walking flies without the necessity to recognize objects. Tracks of walking fruit flies and traces of the robot model obtained in up-scaled environments have been rigorously compared in various arrangements of landmarks. | ['Markus Mronz', 'Roland Strauss'] | Visual motion integration controls attractiveness of objects in walking flies and a mobile robot | 535,400
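A toy sketch of the four-compartment weighted motion integration rule follows: optic-flow magnitude is summed with positive weight in the fronto-lateral compartments and negative weight in the rear compartments, and the robot turns toward the side with higher attractiveness. The compartment weights and the steering gain are hypothetical, and the 0.5-2 s temporal low-pass filtering is omitted for brevity.

```python
import numpy as np

# Compartment edges in degrees of azimuth (negative = left). Weights are hypothetical:
# fronto-lateral motion attracts (+1), rear motion deters (-1), as described above.
COMPARTMENTS = [(-160, -100, -1.0), (-100, 0, +1.0), (0, 100, +1.0), (100, 160, -1.0)]

def attractiveness(azimuth_deg, flow_mag):
    """Weighted sum of optic-flow magnitude over the four compartments,
    returned separately for the left and right visual hemifield."""
    left = right = 0.0
    for lo, hi, w in COMPARTMENTS:
        mask = (azimuth_deg >= lo) & (azimuth_deg < hi)
        s = w * flow_mag[mask].sum()
        if (lo + hi) / 2 < 0:
            left += s
        else:
            right += s
    return left, right

def turn_command(left, right, gain=0.01):
    """Steer toward the side with the higher integrated attractiveness."""
    return gain * (right - left)        # positive = turn right

# Toy panoramic flow: a nearby object at +40 degrees creates a local flow peak.
az = np.linspace(-180, 180, 361)
flow = 0.1 + 2.0 * np.exp(-((az - 40) ** 2) / (2 * 15 ** 2))
print(turn_command(*attractiveness(az, flow)))   # > 0: turn toward the object
```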