Columns:
  abstract            string, length 8–10.1k
  authors             string, length 9–1.96k
  title               string, length 6–367
  __index_level_0__   int64, range 13–1,000k
Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus place higher demands on listeners' hearing and neural auditory decoding when the socio-affective value of a voice must be recognized. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished, as quantified by an entropy measure of spectral acoustic information, and that this missing information requires large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing socio-affective information was slightly more difficult from whispered voices, probably because of the missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding the relevant tonal and temporal information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution.
['Sascha Frühholz', 'Wiebke Trost', 'Didier Grandjean']
Whispering - The hidden side of auditory communication.
876,487
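The entropy measure is not spelled out in the abstract above; a common way to quantify spectral information content is the Shannon entropy of the normalized power spectrum. A minimal Python sketch of that idea, assuming a scipy-based pipeline; the windowing parameters are illustrative, not the authors' exact procedure:

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(signal, fs, nperseg=1024):
    """Shannon entropy of the normalized power spectral density.

    A flat, noise-like spectrum yields high entropy; a spectrum with a
    few dominant harmonic peaks yields low entropy.  Window length and
    normalization are illustrative assumptions, not the paper's setup.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    p = psd / psd.sum()              # treat the PSD as a probability mass
    p = p[p > 0]                     # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))   # entropy in bits
```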
Innovative interfaces for Serious Games
['Javier Marco', 'Eva Cerezo', 'Sandra Baldassarri']
Innovative interfaces for Serious Games
674,114
BIRA: Improved Predictive Exchange Word Clustering.
['Jon Dehdari', 'Liling Tan', 'Josef van Genabith']
BIRA: Improved Predictive Exchange Word Clustering.
827,194
Multimedia systems store and retrieve large amounts of data, which requires extremely high disk bandwidth, and their performance depends critically on the efficiency of disk storage. However, existing magnetic disks are designed for the small data retrievals of traditional operations, with speed improvements mainly focused on reducing seek time and rotational latency. When the same mechanism is applied to multimedia systems, disk I/O overheads can cause a dramatic deterioration in system performance. In this paper, we present a mathematical model to evaluate the performance of constant-density recording disks, and we use this model to analyze quantitatively the performance of multimedia data request streams. We show that high disk throughput may be achieved by suitably adjusting the relevant parameters. In addition to demonstrating quantitatively that constant-density recording disks perform significantly better than traditional disks for multimedia data storage, we present a novel disk-partitioning scheme that places data according to their bandwidths.
['Philip Kwok Chung Tse', 'Clement H. C. Leung']
Improving multimedia systems performance using constant-density recording disks
445,147
This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.
['Jonathan Tompson', 'Arjun Jain', 'Yann LeCun', 'Christoph Bregler']
Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation
185,091
Functional Stream Derivatives of Context-Awareness on P2P Networks
['Phan Cong Vinh', 'Nguyen Thanh Tung', 'Nguyen Van Phuc', 'Nguyen Hai Thanh']
Functional Stream Derivatives of Context-Awareness on P2P Networks
131,213
This study presents the use of a multi-channel opto-electronic sensor (OEPS) to effectively monitor critical physiological parameters whilst preventing motion artefact, as increasingly demanded by personal healthcare. The aim of this work was to study how to capture the heart rate (HR) efficiently through a well-constructed OEPS and a 3-axis accelerometer with wireless communication. A protocol was designed to incorporate sitting, standing, walking, running and cycling. The datasets collected from these activities were processed to elaborate the physiological effects of exercise. A t-test, Bland-Altman Agreement (BAA) analysis, and correlation analysis were used to evaluate the performance of the OEPS against the Polar and Mio-Alpha HR monitors. No differences in HR were found between the OEPS and either the Polar or the Mio-Alpha (both p > 0.05); a strong correlation was found between the Polar and the OEPS (r: 0.96, p < 0.001), with a BAA bias of 0.85 bpm, a standard deviation (SD) of 9.20 bpm, and limits of agreement (LOA) from −17.18 bpm to +18.88 bpm. For the Mio-Alpha and the OEPS, a strong correlation was also found (r: 0.96, p < 0.001), with a BAA bias of 1.63 bpm, an SD of 8.62 bpm, and LOA from −15.27 bpm to +18.58 bpm. These results demonstrate that the OEPS is capable of real-time and remote monitoring of heart rate.
['Abdullah S. Al-Zahrani', 'Sijung Hu', 'Vicente Azorin-Peris', 'Laura A. Barrett', 'Dale W. Esliger', 'Matthew Hayes', 'Shafique Akbare', 'Jerome Achart', 'Sylvain Kuoch']
A Multi-Channel Opto-Electronic Sensor to Accurately Monitor Heart Rate against Motion Artefact during Exercise
320,117
Ascent sequences were introduced by the author (in conjunction with others) to encode a class of permutations that avoid a single length-three bivincular pattern, and were the central object through which other combinatorial correspondences were discovered. In this note we prove the non-trivial fact that generalized ballot sequences are ascent sequences.
['Mark Dukes']
Generalized ballot sequences are ascent sequences
570,081
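For readers unfamiliar with the objects involved: a sequence of nonnegative integers is an ascent sequence if it starts with 0 and each later entry is at most one more than the number of ascents of the preceding prefix. A short membership check following that standard definition (the definition of generalized ballot sequences is left to the paper):

```python
def asc(seq):
    """Number of ascents: indices i with seq[i] < seq[i+1]."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a < b)

def is_ascent_sequence(x):
    """Standard ascent-sequence condition: x[0] == 0 and
    0 <= x[i] <= asc(x[:i]) + 1 for every i >= 1."""
    if not x or x[0] != 0:
        return False
    return all(0 <= x[i] <= asc(x[:i]) + 1 for i in range(1, len(x)))

assert is_ascent_sequence([0, 1, 0, 2, 2])   # every entry within its bound
assert not is_ascent_sequence([0, 2])        # 2 > asc([0]) + 1 = 1
```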
This paper deals with the problem of Constant False Alarm Rate (CFAR) detection of thermal anomalies in multispectral satellite data. The goal is to make the algorithm proposed in [1] robust to the presence of outliers in the analysis window. In [1], data from the 4 μm and 11 μm MODIS bands, which are statistically correlated, are re-projected through a Principal Component Analysis (PCA) to obtain uncorrelated data, a necessary condition for the final stage of CFAR detection. Unfortunately, the sample covariance matrix used in the PCA can be strongly affected by the presence of thermal anomalies, so a robust estimator is needed. To this aim, the Minimum Covariance Determinant (MCD) estimator is introduced into the PCA, yielding an analysis that is little influenced by the presence of anomalies while providing results similar to the usual PCA for uncontaminated data. Experimental results show that many detections can be missed if the MCD estimator is not used in the presence of anomalies, even when the anomalies are few, provided their values significantly alter the sample covariance matrix. The robust multiband CFAR algorithm has been applied to a MODIS image, and the results have been compared with those from the NASA-DAAC MOD14 product.
['T. Beltramonte', 'C. Clemente', 'M. Di Bisceglie', 'C. Galdi']
Robust multiband detection of thermal anomalies using the Minimum Covariance Determinant estimator
431,368
We present some theoretical and experimental results on an important caching problem which arises frequently in data-intensive scientific applications that are run in data-grids. Such applications often need to process several files simultaneously, i.e., the application runs only if all of its needed files are present in some disk cache accessible to the compute resource of the application. The set of files requested by an application, all of which must be in cache for the application to run, is called a file-bundle. This requirement introduces the need for cache replacement algorithms that are based on file-bundles rather than individual files. We show that traditional caching algorithms such as Least Recently Used (LRU) and GreedyDual-Size (GDS) are not optimal in this case, since they are not sensitive to file-bundles and may hold irrelevant combinations of files in the cache. We propose and analyze a new cache replacement algorithm specifically adapted to deal with file-bundles. Results of experimental studies of the new algorithm, using a disk cache simulation model under a wide range of conditions such as file request distributions, relative cache size, file size distribution, and incoming job queue size, show significant improvement over traditional caching algorithms such as GDS.
['Ekow J. Otoo', 'Doron Rotem', 'A. Romosan', 'Sridhar Seshadri']
File caching in data intensive scientific applications on data-grids
56,230
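The paper's replacement algorithm is not reproduced in the abstract; the toy sketch below only illustrates the core idea that eviction should consider bundle completeness rather than individual file recency. The class name, the single-bundle-per-file simplification, and the assumption that the cache holds at least one full bundle are all ours:

```python
from collections import OrderedDict

class BundleAwareCache:
    """Toy bundle-aware cache: on eviction, prefer files belonging to the
    least complete bundle, since a partial bundle cannot serve any job.
    Simplification: each cached file is tagged with a single bundle."""

    def __init__(self, capacity, bundles):
        self.capacity = capacity      # max number of (equal-size) files
        self.bundles = bundles        # bundle id -> set of file ids
        self.cache = OrderedDict()    # file id -> bundle id, in LRU order

    def _completeness(self, bundle_id):
        files = self.bundles[bundle_id]
        return sum(f in self.cache for f in files) / len(files)

    def request_bundle(self, bundle_id):
        """Stage every file of the bundle; the job runs once all are in."""
        for f in self.bundles[bundle_id]:
            if f in self.cache:
                self.cache.move_to_end(f)          # refresh LRU position
                continue
            while len(self.cache) >= self.capacity:
                self._evict(protect=bundle_id)
            self.cache[f] = bundle_id

    def _evict(self, protect):
        # Among files not in the bundle being staged, evict from the
        # least complete bundle (ties broken by LRU order).
        victims = [f for f, b in self.cache.items() if b != protect]
        victim = min(victims, key=lambda f: self._completeness(self.cache[f]))
        del self.cache[victim]
```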
Biomolecular association and dissociation reactions take place on complicated interaction free energy landscapes that are still very hard to characterize computationally. For large enough distances, though, it often suffices to consider the six relative translational and rotational degrees of freedom of the two particles treated as rigid bodies. Here, we computed the six-dimensional free energy surface of a dimer of water-soluble alpha-helices by scanning these six degrees of freedom in about one million grid points. In each point, the relative free energy difference was computed as the sum of the polar and nonpolar solvation free energies of the helix dimer and of the intermolecular coulombic interaction energy. The Dijkstra graph algorithm was then applied to search for the lowest cost dissociation pathways based on a weighted, directed graph, where the vertices represent the grid points, the edges connect the grid points and their neighbors, and the weights are the reaction costs between adjacent pairs of grid points. As an example, the configuration of the bound state was chosen as the source node, and the eight corners of the translational cube were chosen as the destination nodes. With the strong electrostatic interaction of the two helices giving rise to a clearly funnel-shaped energy landscape, the eight lowest-energy cost pathways coming from different orientations converge into a well-defined pathway for association. We believe that the methodology presented here will prove useful for identifying low-energy association and dissociation pathways in future studies of complicated free energy landscapes for biomolecular interaction.
['Ling Wang', 'Boris Stumm', 'Volkhard Helms']
Graph-theoretical identification of dissociation pathways on free energy landscapes of biomolecular interaction.
71,781
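As a concrete reference for the graph search step described above, a generic Dijkstra implementation over an implicit grid graph; how free energy differences map to non-negative edge costs is the paper's design and is left abstract here:

```python
import heapq

def lowest_cost_path(neighbors, cost, source, targets):
    """Dijkstra search from `source` to the nearest node in `targets`.

    `neighbors(u)` yields the grid points adjacent to u (e.g. tuples of
    indices in the six degrees of freedom), and `cost(u, v)` returns the
    non-negative transition cost between adjacent points."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u in targets:                     # reached a destination corner
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v in neighbors(u):
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []                  # no target reachable
```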
We present a dynamic graphical model (DGM) for automated multi-instrument musical transcription. By multi-instrument transcription, we mean a system capable of listening to a recording in which two or more instruments are playing, and identifying both the notes that were played and the instruments that played them. Our transcription system models two musical instruments, each capable of playing at most one note at a time. We present results for two-instrument transcription on piano and violin sounds.
['Brian Vogel', 'Michael I. Jordan', 'David Wessel']
Multi-instrument musical transcription using a dynamic graphical model
434,503
Nonlinear filtering is a major problem in statistical signal processing applications, and numerous techniques have been proposed in the literature. From the seminal work that led to the Kalman filter to the more advanced particle filters, the goal has been twofold: to design algorithms that can provide accurate filtering solutions in general systems and, importantly, to reduce their complexity. If Gaussianity can be assumed, the family of sigma-point KFs is a powerful tool that provides competitive results. It is known that the quadrature KF provides the best performance among the family, although its complexity grows exponentially with the state dimension. This article details the asymptotic complexity of the legacy method and discusses strategies to alleviate this cost, thus making quadrature-based filtering a real alternative in high-dimensional Gaussian problems.
['Pau Closas', 'Jordi Vila-Valls', 'Carles Fernandez-Prades']
Computational complexity reduction techniques for quadrature Kalman filters
606,507
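The exponential complexity statement can be made concrete. For a Gauss-Hermite quadrature KF with m points per axis, the grid size is a standard property of the quadrature rule rather than a result specific to this article:

```latex
\[
  N_{\mathrm{points}} = m^{\,n_x},
  \qquad \text{e.g. } m = 3,\ n_x = 10
  \;\Rightarrow\; N_{\mathrm{points}} = 3^{10} = 59\,049 ,
\]
```

where n_x is the state dimension, so every extra state variable multiplies the per-update cost by m.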
Output tracking in SISO fully linearizable nonlinear systems with a time delay is considered using an integral sliding mode control based technique. Using Padé approximations for the delay, the actual delayed output is replaced by its approximation, by which the problem is reduced to the tracking of a nonminimum phase control system. This system is transformed into a corresponding state tracking problem, where the state tracking profiles are generated by the equations of the stable system centre. The integral sliding mode control approach is developed and good output tracking results are obtained. A Smith predictor is used to compensate for the difference between the actual delayed output and its approximation, and a sliding mode observer/first-order exact sliding mode differentiator is used to deal with the perturbed system. A one-link robot arm example is used to show the effectiveness of the proposed method.
['Gang Liu', 'A.S.I. Zinober', 'Yuri B. Shtessel']
An integral SMC based approach to time-delay system tracking
226,146
In recent years, the population of Chinese cable TV consumers has been increasing quickly. There are over thirty TV stations providing teletext service, and at the same time there are new requirements for teletext. This paper introduces the teletext standard ETS 300 706 [5] and its display method on the OSD without a VBI module. The paper focuses on teletext data extraction from the MPEG-II stream and data decoding in the set-top box. A new method is used to cache packets quickly and display pages steadily on the OSD. The experiment shows how customers select their favorite page to watch from the teletext screen.
['Kaixiong Su', 'Yihong Peng']
A Method for Teletext Display
96,287
The recent proliferation of increasingly capable mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to a crowd of participating workers that carry various mobile devices. Aware of the paramount importance of effectively incentivizing participation in such systems, the research community has proposed a wide variety of incentive mechanisms. However, different from most of these existing mechanisms which assume the existence of only one data requester, we consider MCS systems with multiple data requesters, which are actually more common in practice. Specifically, our incentive mechanism is based on double auction, and is able to stimulate the participation of both data requesters and workers. In real practice, the incentive mechanism is typically not an isolated module, but interacts with the data aggregation mechanism that aggregates workers' data. For this reason, we propose CENTURION, a novel integrated framework for multi-requester MCS systems, consisting of the aforementioned incentive and data aggregation mechanism. CENTURION's incentive mechanism satisfies truthfulness, individual rationality, computational efficiency, as well as guaranteeing non-negative social welfare, and its data aggregation mechanism generates highly accurate aggregated results. The desirable properties of CENTURION are validated through both theoretical analysis and extensive simulations.
['Haiming Jin', 'Lu Su', 'Klara Nahrstedt']
CENTURION: Incentivizing Multi-Requester Mobile Crowd Sensing
982,987
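CENTURION's mechanism is not reproduced in the abstract above. As a point of reference for double-auction-based incentives, a textbook trade-reduction double auction can be sketched as follows; all names here are ours and the mechanism is only a baseline, not the paper's design:

```python
def trade_reduction_double_auction(bids, asks):
    """Textbook trade-reduction double auction (illustrative baseline).

    Sort buyers by value (descending) and sellers by cost (ascending),
    find the number k of profitable pairs, and let only the first k - 1
    pairs trade: buyers pay b[k-1] and sellers receive a[k-1], prices
    set by the excluded marginal pair.  Giving up one efficient trade
    is what makes truthful bidding a dominant strategy while avoiding
    a budget deficit (since b[k-1] >= a[k-1])."""
    b = sorted(bids, reverse=True)
    a = sorted(asks)
    k = 0
    while k < min(len(b), len(a)) and b[k] >= a[k]:
        k += 1                      # k = number of profitable pairs
    if k < 2:
        return None                 # after reduction, nobody trades
    return {"trades": k - 1, "buyer_price": b[k - 1], "seller_price": a[k - 1]}
```

Individual rationality follows from the sorting: every trading buyer values the item at least b[k-1], and every trading seller's cost is at most a[k-1].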
In the noninvasive bio-impedance technique, small-amplitude currents are applied to the body and the developing potentials on its surface are measured. This noninvasive technique is used to monitor physiological and pathological processes which alter the values or the spatial distribution of the electrical impedance inside the human body. A possible application of the bio-impedance technique is monitoring a brain cryosurgery procedure, a surgical technique that employs freezing to destroy undesirable tissues. A numerical solver was developed to evaluate, in simulation, the ability of an induced-current bio-impedance system to monitor the growth of the frozen tissue inside the head. The forward-problem bio-impedance solver, which is based on the finite volume method in generalized two-dimensional (2-D) coordinate systems, was validated by comparison to a known analytical solution for body-fitted and Cartesian meshing grids. The sensitivity of the developed surface potential to the ice-ball area was examined using a 2-D head model geometry, and was found to range between 0.8×10⁻² and 1.68×10⁻² (relative potential difference/mm²), depending on the relative positioning of the excitation coil and the head. The maximal sensitivity was achieved when the coil was located at the geometrical center of the model.
['Alexander Gergel', 'Sharon Zlochiver', 'Moshe Rosenfeld', 'Shimon Abboud']
Induced current bio-impedance technique for monitoring cryosurgery procedure in a two-dimensional head model using generalized coordinate systems
537,953
In physical layer security systems there is a clear need to exploit the radio link characteristics to automatically generate an encryption key between two end points. The success of the key generation depends on the channel reciprocity, which is impacted by non-simultaneous measurements and the white nature of the noise. In this paper, a key generation system based on the channel responses of OFDM subcarriers, with enhanced channel reciprocity, is proposed. By theoretically modelling the OFDM subcarriers' channel responses, the channel reciprocity is analyzed. A low pass filter is accordingly designed to improve the channel reciprocity by suppressing the noise. This feature is essential in low SNR environments in order to reduce the risk of failure of the information reconciliation phase during key generation. The simulation results show that the low pass filter improves the channel reciprocity, decreases the key disagreement, and effectively increases the success of the key generation.
['Junqing Zhang', 'Roger F. Woods', 'Alan Marshall', 'Trung Q. Duong']
An effective key generation system using improved channel reciprocity
327,357
This paper describes an augmented reality system that incorporates a real-time dense stereo vision system. Analysis of range and intensity data is used to perform two functions: 1) 3D detection and tracking of the user's fingertip or a pen to provide natural 3D pointing gestures, and 2) computation of the 3D position and orientation of the user's viewpoint without the need for fiducial mark calibration procedures, or manual initialization. The paper describes the stereo depth camera, the algorithms developed for pointer tracking and camera pose tracking, and demonstrates their use within an application in the field of oil and gas exploration.
['Gaile G. Gordon', 'Mark Billinghurst', 'Melanie L. Bell', 'John Woodfill', 'B. Kowalik', 'Alex Erendi', 'Janet Tilander']
The use of dense stereo range data in augmented reality
59,918
We study the lexicographic centre of multiple objective optimization. Analysing the lexicographic-order properties yields the result that, if the multiple objective programming's lexicographic centre is not empty, then it is a subset of the set of efficient solutions. It exists if the image set of the multiple objective programming is bounded below and closed. The multiple objective linear programming's lexicographic centre is nonempty if and only if there exists an efficient solution to the multiple objective linear programming. We propose a polynomial-time algorithm to determine whether there is an efficient solution to a multiple objective linear programming, and we solve the multiple objective linear programming's lexicographic centre by solving at most as many dual linear programs as there are objective functions, together with a system of linear inequalities.
['Zhang Jiangao', 'Shitao Yang']
On the Lexicographic Centre of Multiple Objective Optimization
552,144
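The authors' method works through dual linear programs; as a generic illustration of the sequential-LP idea behind lexicographic optimization (one LP per objective, freezing each attained optimum as a constraint for the later stages), a sketch using scipy; the tolerance handling is our assumption:

```python
import numpy as np
from scipy.optimize import linprog

def lexicographic_lp(C, A_ub, b_ub, tol=1e-9):
    """Lexicographically minimize the objectives c @ x for each row c of
    C, subject to A_ub @ x <= b_ub: solve one LP per objective and
    freeze each attained optimum as an extra constraint for the later
    stages.  The tolerance `tol` guards against solver round-off."""
    A = np.asarray(A_ub, dtype=float)
    b = np.asarray(b_ub, dtype=float)
    res = None
    for c in np.asarray(C, dtype=float):
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(None, None)] * A.shape[1])
        if not res.success:
            raise ValueError(res.message)
        A = np.vstack([A, c])                # freeze: c @ x <= optimum
        b = np.append(b, res.fun + tol)
    return res.x

# Minimize x1 first, then x2, over the quadrant x1 >= 0, x2 >= 0:
# lexicographic_lp([[1, 0], [0, 1]], [[-1, 0], [0, -1]], [0, 0]) -> [0, 0]
```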
In this paper, we propose an iterative interstream interference cancellation technique for systems with frequency-selective multiple-input multiple-output (MIMO) channels. Our method is inspired by the fact that the cancellation of the interstream interference can be regarded as a reduction in the magnitude of the interfering channel. We show that, as the iterations proceed, the channel experienced by the equalizer gets close to a single-input multiple-output (SIMO) channel and, therefore, the proposed SIMO-like equalizer achieves improved equalization performance in terms of normalized mean square error. From simulations of downlink communications of 2 x 2 MIMO systems in the High Speed Packet Access Universal Mobile Telecommunications System standard, we show that the proposed method provides substantial performance gain over conventional receiver algorithms.
['Hyougyoul Yu', 'Byonghyo Shim', 'Tae Won Oh']
Iterative interstream interference cancellation for MIMO HSPA+ system
265,047
Performance study of resource discovery systems (Étude de performance des systèmes de découverte de ressources)
['Heithem Abbes', 'Christophe Cérin', 'Jean-Christophe Dubacq', 'Mohamed Jemni']
Performance study of resource discovery systems (Étude de performance des systèmes de découverte de ressources)
763,888
This interactive poster session features seven research groups exploring how interactive, dynamic visualizations impact student learning. Six empirical studies report on promising designs for visualizations. These studies use logs of student interactions and embedded assessments to document the quality and trajectory of learning and to capture the cognitive and social processes mediated by the visualization. The review synthesizes previous and current empirical work. It offers design guidelines, principles, patterns, and examples to inform those designing interactive, dynamic visualization and aligned learning support. These posters show why dynamic visualizations are both difficult to design and valuable for science instruction.
['Marcia C. Linn', 'Chris Quintana', 'Hsin-Yi Chang', 'Ji Shen', 'Jennifer L. Chiu', 'Douglas B. Clark', 'Muhsin Menekse', "Cynthia M. D'Angelo", 'Sharon Schleigh', 'Stephanie Touchman', 'Kevin W. McElhaney', 'Keisha Varma', 'Aaron Price', 'Hee-Sun Lee']
Improving the design and impact of interactive, dynamic visualizations for science learning
245,127
With the rapid development of the Internet, Web 2.0, as the next generation of networking services, emphasizes social interaction and the sharing of user-generated content in a collaborative environment. It has transformed the Internet into a platform supporting rich digital media technology for the development of innovative business and educational applications. In conjunction with Web 3D technology, social networking has already begun to foster an intuitive and immersive system that allows effective visual communication and delivers a real-time, natural interactive experience, enhancing user motivation and engagement compared with the traditional static and text-oriented Web. This paper presents an augmented social interactive learning approach to incorporating social networking services on Web 2.0 into traditional distance learning and on-site teaching for blended learning. The paper also discusses key issues including user interaction and communication forms, and examines different educational activities involving user content generation, e-tutoring, and role-playing.
['Li Jin', 'Zhigang Wen']
An Augmented Social Interactive Learning Approach through Web2.0
207,470
Incremental Mining of Significant URLs in Real-Time and Large-Scale Social Streams
['Cheng-Ying Liu', 'Chi-Yao Tseng', 'Ming-Syan Chen']
Incremental Mining of Significant URLs in Real-Time and Large-Scale Social Streams
601,204
Optimizing the redundant computations generated by trigger execution: implementation and evaluation (Optimisation des calculs redondants générés par l'exécution de triggers: implantation et évaluation)
['Françoise Fabret', 'François Llirbat', 'K. Sklani', 'Eric Simon']
Optimizing the redundant computations generated by trigger execution: implementation and evaluation (Optimisation des calculs redondants générés par l'exécution de triggers: implantation et évaluation)
776,187
Linearized Reed-Solomon codes are defined, and the higher weight distributions of these codes are determined.
['Haode Yan', 'Yan Liu', 'Chunlei Liu']
Higher weight distribution of linearized Reed-Solomon codes
603,773
Hewlett-Packard (HP) developed a standard and common process for analysis coupled with advancement in inventory optimization techniques to invent a new and robust way to design supply-chain networks. This new methodology piloted by HP's Digital Imaging division has received sponsorship from HP's Executive Supply- Chain Council and is now being deployed across the entire company. As of May 2003, a dozen product lines have been exposed to this methodology, with four product lines already integrating this process into both the configuration of their new-product supply chains and the improvement of existing-product supply chains. The team will highlight the application of these new capabilities within HP's Digital Camera and Inkjet Supplies businesses. The realized savings from these first two projects exceeds $130 million.
['Corey A. Billington', 'Gianpaolo Callioni', 'Barrett Crane', 'John D. Ruark', 'Julie Unruh Rapp', 'Trace White', 'Sean P. Willems']
Accelerating the Profitability of Hewlett-Packard's Supply Chains
198,228
Monitoring Wireless Sensor Networks (WSNs) are composed of sensor nodes that report temperature, relative humidity, and other environmental parameters. The time between two successive measurements is a critical parameter to set during the WSN configuration, because it can impact the WSN's lifetime, the wireless medium contention, and the quality of the reported data. As trends in monitored parameters can vary significantly between scenarios and over time, identifying a sampling interval suitable for several cases is also challenging. In this work, we propose a dynamic sampling rate adaptation scheme based on reinforcement learning, able to tune sensors' sampling interval on-the-fly, according to environmental conditions and application requirements. The primary goal is to set the sampling interval to the best value possible so as to avoid oversampling and save energy, while not missing environmental changes that can be relevant for the application. In simulations, our mechanism could reduce the total number of transmissions by up to 73% compared to a fixed strategy while maintaining the average quality of information provided by the WSN. The inherent flexibility of the reinforcement learning algorithm facilitates its use in several scenarios, so as to exploit the broad scope of the Internet of Things.
['Gabriel Martins Dias', 'Maddalena Nurchis', 'Boris Bellalta']
Adapting sampling interval of sensor networks using on-line reinforcement learning
810,052
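The abstract does not fix the learning algorithm's details; below is a minimal tabular Q-learning sketch of the sampling-interval decision loop. The candidate intervals, state encoding, and reward shape are our illustrative assumptions, not the scheme evaluated in the paper:

```python
import random

class SamplingIntervalAgent:
    """Minimal epsilon-greedy Q-learning agent that picks the next
    sampling interval for a sensor node."""

    INTERVALS = [30, 60, 120, 300, 600]           # candidate intervals (s)

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                               # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:            # explore
            return random.randrange(len(self.INTERVALS))
        return max(range(len(self.INTERVALS)),    # exploit best known action
                   key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in range(len(self.INTERVALS)))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# One plausible reward, penalizing transmissions and missed changes:
#   reward = -tx_cost - miss_penalty * missed_changes
```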
We present a framework for physics-based animation of deforming solids and fluids. By merging the equations of solid mechanics with the Navier-Stokes equations using a particle-based Lagrangian approach, we are able to employ a unified method to animate both solids and fluids as well as phase transitions. Central to our framework is a hybrid implicit-explicit surface generation approach, which is capable of representing fine surface detail as well as handling topological changes in interactive time for moderately complex objects. The generated surface is represented by oriented point samples, which adapt to the new position of the particles by minimizing the potential energy of the surface subject to geometric constraints. We illustrate our algorithm on a variety of examples ranging from stiff elastic and plasto-elastic materials to fluids with variable viscosity.
['Richard Keiser', 'Bart Adams', 'Dominique Gasser', 'Paolo Bazzi', 'Philip Dutré', 'Markus H. Gross']
A unified Lagrangian approach to solid-fluid animation
502,393
Bongard problems present an outstanding challenge to artificial intelligence. They consist of visual pattern understanding problems in which the task of the pattern perceiver is to find an abstract aspect of distinction between two classes of figures. This paper examines the philosophical question of whether objects in Bongard problems can be ascribed an a priori, metaphysical, existence—the ontological question of whether objects, and their boundaries, come pre-defined, independently of any understanding or context. This is an essential issue, because it determines whether a priori symbolic representations can be of use for solving Bongard problems. The resulting conclusion of this analysis is that in the case of Bongard problems there can be no units ascribed an a priori existence—and thus the objects dealt with in any specific problem must be found by solution methods (rather than given to them). This view ultimately leads to the emerging alternatives to the philosophical doctrine of metaphysical realism.
['Alexandre Linhares']
A glimpse at the metaphysics of Bongard problems
324,582
Purpose – This study was designed to investigate the information needs and use of the private construction materials sector in Kuwait.Design/methodology/approach – A structured instrument, expert‐reviewed and pilot‐tested, was used for data collection. A total of 20 companies were surveyed by interviewing a senior official identified by each firm.Findings – These firms mostly used financial, marketing, legal, forecasting, and managerial information. Personnel and statistical information was the least used. The most important sources of information included government agencies, the internet, the Kuwait Chamber of Commerce and Industry (KCCI), and financial service agencies in addition to several informal sources. Information use activity was found to be at a low level.Research limitations/implications – Since the study was limited to private companies that dealt with construction materials, its findings should be used for this sector only.Practical implications – The KCCI, following one of its official rol...
['Mumtaz Anwar', 'Arwa Tuqan']
Information needs and use in the construction materials sector in Kuwait
164,477
The MINER Act of 2006 requires the installation of post-accident, two-way communications and electronic tracking systems in all coal mines. A through-the-earth (TTE) wireless communication system sends its signal directly through the overburden of a mine, but can have limitations in performance, reliability, and transmission range. The National Institute for Occupational Safety and Health (NIOSH) conducted experiments at a coal mine for different TX/RX antenna arrangements using a NIOSH TTE prototype system. This system uses a multi-turn, relatively small TX loop antenna instead of a single-turn, relatively large TX loop antenna. The objectives of the tests were to evaluate the performance of the system, to evaluate the path loss and optimize the working frequency for the mine, to characterize surface and underground electromagnetic noise, and to investigate the feasibility of horizontal TTE communication and its advantages over vertical TTE communication. In this paper, the performance of a magnetic-loop TTE communication system was evaluated for various antenna arrangements. A fairly large communication range was achieved for horizontal TTE transmission. While vertical TTE communication between underground and the surface may be restricted by factors such as the deployment challenges of the surface TX antenna and short transmission ranges, horizontal TTE communication within the tunnel can reach relatively large distances and can thus establish more reliable communication. Moreover, the combination of vertical and horizontal TTE communication may provide a way to considerably increase communication range.
['Lincan Yan', 'Carl Sunderman', 'Bruce G. Whisner', 'Nicholas W. Damiano', 'Chenming Zhou']
Antenna arrangement investigation for through-the-earth (TTE) communications in coal mines
571,910
A newly upgraded version of BCVEGPY, a generator for hadronic production of the meson $B_c$ and its excited states, is available. In comparison with the previous version (Chang et al., 2006), the new version applies an improved hit-and-miss technique to generate the un-weighted events much more efficiently under various simulation environments. The codes for production of the $2S$-wave $B_c$ states are also given here.
['Chao-Hsi Chang', 'Xian-You Wang', 'Xing-Gang Wu']
BCVEGPY2.2: A Newly Upgraded Version for Hadronic Production of the Meson $B_c$ and Its Excited States
86,972
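"Hit-and-miss" refers to acceptance-rejection sampling, which turns weighted candidate events into un-weighted ones. A generic sketch of the basic step is below; BCVEGPY's actual improvement lies in how candidates are proposed and bounded, which is not reproduced here:

```python
import random

def hit_and_miss(sample_candidate, weight, w_max):
    """Generic hit-and-miss (acceptance-rejection) step: draw a weighted
    candidate event and accept it with probability weight(x) / w_max,
    so that the stream of accepted events is un-weighted.  Efficiency
    hinges on how tight the bound w_max is."""
    while True:
        x = sample_candidate()                   # propose a candidate event
        if random.random() * w_max <= weight(x):
            return x                             # accepted: unit weight
```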
Advanced Encryption Standard (AES) hardware implementation in FPGA as well as in ASIC has been intensely discussed, especially for high throughput (over several tens of Gbps). However, low-area designs have also been investigated in recent years for embedded hardware applications. This paper presents a 32-bit AES implementation with a low area of 156 slices and a throughput of 876 Mbps, which outperforms the best reported result of 648 Mbps found in the literature.
['Chi Jeng Chang', 'Chi Wu Huang', 'Kuo Huang Chang', 'Yi G. Chen', 'Chung Cheng Hsieh']
High throughput 32-bit AES implementation in FPGA
301,409
In a conventional multiple-input multiple-output (MIMO) selective decode-and-forward (S-DF) relaying network without global channel knowledge, the base station (BS) normally allocates its transmission power to the channel links between itself and its served users. In this case, if the received signal power at the relay station (RS) is large enough, the RS will help the BS decode and forward the messages. However, such a strategy may lose system optimality if the BS has the location information of the RS. Motivated by this, in this paper, an RS-oriented BS power allocation algorithm for MIMO S-DF relaying is proposed to enhance the system performance. Specifically, the BS first allocates its transmission power so that as many data streams as possible can be correctly decoded at the RS. Then, if there is still some power left, the BS allocates the residual power to its direct channel links to the users to further increase the system performance. Computer simulations are carried out to examine the proposed scheme in terms of sum-rate and bit-error-rate. It is shown that the proposed scheme outperforms the conventional schemes, especially when the RS is placed midway between the BS and the users.
['Jiancao Hou', 'Na Yi', 'Yi Ma']
Relay-oriented source power allocation for MIMO selective decode-and-forward relaying
839,523
Highlights: scheduling that minimizes the total resource consumption; a branch-and-bound algorithm to search for the optimal solution; a lower bound based on job preemption. In response to the effects of global warming and environmental concerns, energy consumption has become a crucial issue. In this study, we consider a parallel-machine scheduling problem where the objective is to minimize the sum of resource consumption and outsourcing cost, given that the maximum tardiness of all jobs does not exceed a given bound. We show that the problem is polynomially solvable in the preemption case and strongly NP-hard in the non-preemption case. A branch-and-bound algorithm and a hybrid metaheuristic algorithm are proposed to obtain exact and approximate solutions. Experimental results are given to evaluate the algorithms.
['Zhaohui Liu', 'Wen-Chiung Lee', 'Jen-Ya Wang']
Resource consumption minimization with a constraint of maximum tardiness on parallel machines
727,807
Feedback stabilization of an ensemble of non-interacting half-spins described by the Bloch equations is considered. This system may be seen as an interesting example of an infinite-dimensional system with continuous spectrum. We propose an explicit feedback law that asymptotically stabilizes the system around a uniform state of spin +1/2 or -1/2. The proof of convergence is done locally around the equilibrium in the H^1 topology. This local convergence is shown to be a weak asymptotic convergence for the H^1 topology and thus a strong convergence for the C^0 topology. The proof relies on an adaptation of the LaSalle invariance principle to infinite-dimensional systems. Numerical simulations illustrate the efficiency of these feedback laws, even for initial conditions far from the equilibrium.
['Karine Beauchard', 'Paulo Sérgio Pereira da Silva', 'Pierre Rouchon']
Stabilization for an ensemble of half-spin systems
235,769
Hybrid Planning by Combining SMT and Simulated Annealing.
['Jaroslaw Skaruz', 'Artur Niewiadomski', 'Wojciech Penczek']
Hybrid Planning by Combining SMT and Simulated Annealing.
752,631
We describe an interactive visualization and modeling program for the creation of protein structures "from scratch." The input to our program is an amino acid sequence - decoded from a gene - and a sequence of predicted secondary structure types for each amino acid - provided by external structure prediction programs. Our program can be used in the set-up phase of a protein structure prediction process; the structures created with it serve as input for a subsequent global internal energy minimization, or another method of protein structure prediction. Our program supports basic visualization methods for protein structures, interactive manipulation based on inverse kinematics, and visualization guides to aid a user in creating "good" initial structures.
['Oliver Kreylos', 'Nelson Max', 'Bernd Hamann', 'Silvia N. Crivelli', 'E. Wes Bethel']
Interactive protein manipulation
269,733
This paper describes the GRD (Genetic Reconfiguration of DSPs) chip, which is evolvable hardware designed for neural network applications. The GRD chip is a building block for the configuration of a scalable neural network hardware system. Both the topology and the hidden-layer node functions of a neural network mapped onto the GRD chips are dynamically reconfigured using a genetic algorithm (GA). Thus, the most desirable network topology and choice of node functions (e.g., Gaussian or sigmoid function) for a given application can be determined adaptively. This approach is particularly suited to applications requiring the ability to cope with time-varying problems and real-time constraints. The GRD chip consists of a 100 MHz 32-bit RISC processor and fifteen 33 MHz 16-bit DSPs connected in a binary-tree network. The RISC processor is the NEC V830, which mainly executes the GA. According to the chromosomes obtained by the GA, the DSP functions and the interconnections among them are dynamically reconfigured. The GRD chip does not need a host machine for this reconfiguration, which is desirable for embedded systems in practical industrial applications. Simulation results on chaotic time series prediction are two orders of magnitude faster than on a Sun Ultra 2.
['Masahiro Murakawa', 'Shuji Yoshizawa', 'Isamu Kajitani', 'Xin Yao', 'Nobuki Kajihara', 'Masaya Iwata', 'Tetsuya Higuchi']
The GRD chip: genetic reconfiguration of DSPs for neural network processing
126,699
Virtual reality in your living room: technical perspective
['Patrick Baudisch']
Virtual reality in your living room: technical perspective
606,094
Webometrics and web mining are two fields where research is focused on quantitative analyses of the web. This literature review outlines definitions of the fields, and then focuses on their methods and applications. It also discusses the potential of closer contact and collaboration between them. A key difference between the fields is that webometrics has focused on exploratory studies, whereas web mining has been dominated by studies focusing on development of methods and algorithms. Differences in type of data can also be seen, with webometrics more focused on analyses of the structure of the web and web mining more focused on web content and usage, even though both fields have been embracing the possibilities of user generated content. It is concluded that research problems where big data is needed can benefit from collaboration between webometricians, with their tradition of exploratory studies, and web miners, with their tradition of developing methods and algorithms.
['David Gunnarsson Lorentzen']
Webometrics benefitting from web mining? An investigation of methods and applications of two research fields
446,118
The International Journal of Geographical Information Science (IJGIS), established in 1987, is the first academic journal devoted solely to Geographical Information Science (GIS) research. This editorial highlights milestones of the journal's development and its influence on the field. IJGIS research articles and special issues have been effective in publishing the state of the art and emerging research accomplishments. In light of the changing landscape of GIS, IJGIS welcomes papers on meta-analysis studies, literature reviews, and research foresight. This editorial outlines the underlying thinking and expectations for these papers in future volumes. IJGIS aspires to publish research of high novelty and broad interest that pushes the boundary of fundamental and applied GIS. As an independent, multidisciplinary journal driven by the community of authors, reviewers, and readers, community support is key to realizing the aspiration of a major influence on GIS research.
['May Yuan']
30 years of IJGIS: the changing landscape of geographical information science and the road ahead
897,692
A transition-code based method is proposed to reduce the linearity testing time of pipelined analog-to-digital converters (ADCs). By employing specific architecture-dependent rules, only a few specific transition codes need to be measured to accomplish the accurate linearity test of a pipelined ADC. In addition, a simple digital Design-for-Test (DfT) circuit is proposed to help correctly detect transition codes corresponding to each pipelined stage. With the help of the DfT circuit, the proposed method can be applied for pipelined ADCs with digital error correction (DEC). Experimental results of a practical chip show that the proposed method can achieve high test accuracy for a 12-bit 1.5-bit/stage pipelined ADC with different nonlinearities by measuring only 9.3% of the total measured samples of the conventional histogram based method.
['Jin-Fu Lin', 'Soon-Jyh Chang', 'Te-Chieh Kung', 'Hsin-Wen Ting', 'Chih-Hao Huang']
Transition-Code Based Linearity Test Method for Pipelined ADCs With Digital Error Correction
115,932
Computer aided detection for low-dose CT colonography
['Gabriel Kiss', 'Johan Van Cleynenbreugel', 'Stylianos Drisis', 'Didier Bielen', 'Paul Suetens']
Computer aided detection for low-dose CT colonography
885,082
Formulas are presented for the maximum accuracy with which waveform parameters can be estimated in the presence of Gaussian noise. These formulas are presented in a form which is not widely known and which is very simple to use, especially in the white noise case. The unknown parameters may include the conventional ones, such as range, range rate, and acceleration, as well as others depending on target motion and configuration, properties of the transmission medium, or even sensor bias errors. A discussion is given of the conditions of validity of the formulas. The relationship between the approach followed here and other approaches to the parameter estimation problem is discussed.
['Peter Swerling']
Parameter estimation accuracy formulas
43,203
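The formulas themselves are not quoted in the abstract. The best-known special case of such accuracy bounds, for estimating a single delay parameter in white Gaussian noise, is the Cramér-Rao-type result:

```latex
\[
  \sigma_{\hat{\tau}} \;\ge\; \frac{1}{\beta\sqrt{2E/N_0}},
  \qquad
  \beta^2 \;=\; \frac{\int (2\pi f)^2\,\lvert S(f)\rvert^2\,df}
                     {\int \lvert S(f)\rvert^2\,df},
\]
```

where E is the signal energy, N_0 the noise power spectral density, and β the RMS signal bandwidth; this is quoted here as standard background, not necessarily in the exact form the paper presents.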
Information seeking does not occur in a vacuum but invariably is motivated by some wider task. It is well accepted that to understand information seeking we must understand the task context within which it takes place. Writing is amongst the most common tasks within which information seeking is embedded. This paper considers how writing can be understood in order to account for embedded information seeking. Following Sharples, the paper treats writing as a design activity and explores parallels between the psychology of design and information seeking. Significant parallels can be found, and ideas from the psychology of design offer explanations for a number of information seeking phenomena. The paper then develops a design-oriented representation of writing tasks as a means of accounting for phenomena such as information seeking uncertainty and focus refinement, and illustrates the representation with scenarios describing the work of newspaper journalists.
['Simon Attfield', 'Ann Blandford', 'John Dowell']
Information seeking in the context of writing - A design psychology interpretation of the "problematic situation"
149,918
A fourth-order 1-bit continuous-time delta-sigma modulator designed in a 65 nm process for portable ultrasound scanners is presented in this paper. The loop filter consists of RC-integrators, with programmable capacitor arrays and resistors, and the quantizer is implemented with a high-speed clocked comparator and a pull-down clocked latch. The feedback signal is generated with voltage DACs based on transmission gates. Using this implementation, a small and low-power solution required for portable ultrasound scanner applications is achieved. The modulator has a bandwidth of 10 MHz with an oversampling ratio of 16, leading to an operating frequency of 320 MHz. The design occupies an area of 0.0175 mm² and achieves an SNR of 45 dB while consuming 489 μA at a supply voltage of 1.2 V; the resulting FoM is 197 fJ/conversion. The results are based on simulations with extracted parasitics, including process and mismatch variations.
['Pere Llimos Muntal', 'Ivan Harald Holger Jørgensen', 'Erik Bruun']
A 10 MHz bandwidth continuous-time delta-sigma modulator for portable ultrasound scanners
970,493
Theory of Rough Sets provides good foundations for the attribute reduction processes in data mining. For numeric attributes, it is enriched with appropriately designed discretization methods. However, not much has been done for symbolic attributes with large numbers of values. The paper presents a framework for the symbolic value partition problem, which is more general than attribute reduction, and more complicated than the discretization problems. We demonstrate that such a problem can be converted into a series of attribute reduction phases. We propose an algorithm searching for a (sub)optimal attribute reduct coupled with attribute value domain partitions. Experimental results show that the algorithm can help in computing smaller rule sets with better coverage, compared to the standard attribute reduction approaches.
['Fan Min', 'Qihe Liu', 'Chunlan Fang', 'Jianzhong Zhang']
Reduction based symbolic value partition
222,334
Using the modeling technology of Petri nets and colored Petri nets, the task logic of the Emergency Plan Business Process is described. Given boundedness, a reachability tree algorithm for Petri nets is designed. On this basis, the corresponding Markov chain is constructed, and an algorithm for analyzing the performance of stochastic Petri nets is provided. Through a case study, the resource utilization ratio and design efficiency of the designed model are analyzed, and the validity of the algorithm is verified.
['Weidong Huang', 'Zhe Tong']
Research on Emergency Plan Business Process modeling based on colored Petri Nets
151,995
PriceCast Fuel: Agent Based Fuel Pricing
['Alireza Derakhshan', 'Frodi Hammer', 'Yves Demazeau']
PriceCast Fuel: Agent Based Fuel Pricing
835,725
The Constant Proportion Portfolio Insurance (CPPI) technique is a dynamic capital-protection strategy that aims at providing investors with a guaranteed minimum level of wealth at the end of a specified time horizon. A pertinent concern of issuers of CPPI products is when to perform portfolio readjustments. One way of achieving this is through the use of rebalancing triggers; this constitutes the main focus of this paper. We propose a genetic programming (GP) approach to evolve trigger-based rebalancing strategies that rely on some tolerance bounds around the CPPI multiplier, as well as on the time-dependent implied multiplier, to determine the timing sequence of the portfolio readjustments. We carry out experiments using GARCH datasets, and use two different types of fitness functions, namely variants of Tracking Error and Sortino ratio, for multiple scenarios involving different data and/or CPPI settings. We find that the GP-CPPI strategies yield better results than calendar-based rebalancing strategies in general, both in terms of expected returns and shortfall probability, despite the fitness measures having no special functionality that explicitly penalises floor violations. Since the results support the viability and feasibility of the proposed approach, potential extensions and ameliorations of the GP framework are also discussed.
['Dietmar Maringer', 'Tikesh Ramtohul']
GP-based rebalancing triggers for the CPPI
286,146
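For context on the quantities being evolved: in CPPI the risky exposure is a multiplier m times the cushion (portfolio value minus the floor), and a tolerance-band trigger fires when the implied multiplier drifts out of band. A minimal sketch, assuming fixed tolerance bounds; the paper's GP evolves such triggers rather than fixing them:

```python
def needs_rebalance(portfolio_value, floor, risky_exposure, m, tol):
    """Tolerance-band rebalancing trigger for CPPI (illustrative).

    The implied multiplier is the risky exposure actually held divided
    by the current cushion; the position is readjusted only when it
    drifts outside [m - tol, m + tol]."""
    cushion = portfolio_value - floor
    if cushion <= 0:
        return True                  # floor breached: de-risk immediately
    implied_m = risky_exposure / cushion
    return abs(implied_m - m) > tol

def target_exposure(portfolio_value, floor, m):
    """Standard CPPI allocation rule: risky exposure = m * cushion,
    capped at the portfolio value (no leverage in this sketch)."""
    return min(max(m * (portfolio_value - floor), 0.0), portfolio_value)
```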
This paper describes an adaptive neural control system for governing the movements of a robotic wheelchair. It presents a new model of recurrent neural network based on an RBF architecture, combining local recurrence and synaptic connections with FIR filters. This model is used in two different control architectures to command the movements of a robotic wheelchair. The training equations and the stability conditions of the control system are obtained. Practical tests show that the results achieved using the proposed method are better than those obtained using PID controllers or other recurrent neural network models.
['Luciano Boquete', 'Rafael Barea', 'Ricardo García', 'Manuel Mazo', 'M. Angel Sotelo']
Control of a Robotic Wheelchair Using Recurrent Networks
130,846
In this paper, we propose novel serially concatenated irregular convolutional-coded (IRCC) irregular precoded linear dispersion codes (IR-PLDCs), which are capable of operating near the multiple-input multiple-output (MIMO) channel's capacity. The irregular structure facilitates the proposed system's near-capacity operation across a wide range of SNRs, while maintaining a vanishing bit error ratio (BER). Each coding block of the proposed scheme and all the iterative decoding parameters are designed with near-capacity operation in mind, using extrinsic information transfer charts. By applying the irregular design principle to both the inner and outer codes, the proposed IRCC-aided IR-PLDC scheme becomes capable of operating as close as 0.9 dB to the MIMO channel's capacity for SNRs in excess of a certain threshold.
['Nan Wu', 'Lajos Hanzo']
Near-Capacity Irregular-Convolutional-Coding-Aided Irregular Precoded Linear Dispersion Codes
184,734
Adaptive radar detection and estimation schemes are often based on the independence of the secondary data used for building estimators and detectors. This paper relaxes this constraint and deals with the non-trivial problem of deriving detection and estimation schemes for jointly spatially and temporally correlated radar measurements. Recent results from random matrix theory for the large-dimensional regime allow building a Toeplitz estimate of the spatial covariance matrix, while the temporal covariance matrix is estimated in a conventional way (sample covariance matrix, M-estimates). These two joint estimates of the spatial and temporal covariance matrices lead to adaptive radar detectors such as the Adaptive Normalized Matched Filter (ANMF). We show that taking care of the spatial covariance matrix may lead to significant performance improvements compared to classical procedures.
['Romain Couillet', 'Maria Greco', 'Jean Philippe Ovarlez', 'Frédéric Pascal 0001']
RMT for whitening space correlation and applications to radar detection
601,526
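The random-matrix-based Toeplitz estimator is not detailed in the abstract; as a simpler stand-in for illustration, the common diagonal-averaging projection of a sample covariance matrix onto Hermitian Toeplitz structure:

```python
import numpy as np

def toeplitzify(S):
    """Project a sample covariance matrix onto Toeplitz structure by
    averaging each diagonal, consistent with a spatially stationary
    array.  A stand-in for the paper's RMT estimator, not its method."""
    n = S.shape[0]
    T = np.zeros_like(S)
    for k in range(n):
        d = np.mean(np.diagonal(S, offset=k))    # average k-th diagonal
        T += np.diag(np.full(n - k, d), k)
        if k:                                    # mirror below the diagonal
            T += np.diag(np.full(n - k, np.conj(d)), -k)
    return T
```

Averaging the diagonals is the orthogonal projection (in Frobenius norm) onto the set of Toeplitz matrices, which is why it is a common first rectification step.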
LiveDescribe Web Redefining What and How Entertainment Content Can Be Accessible to Blind and Low Vision Audiences
['Margot Whitfield', 'Raza Mir Ali', 'Deborah I. Fels']
LiveDescribe Web Redefining What and How Entertainment Content Can Be Accessible to Blind and Low Vision Audiences
847,580
We consider the problem of communicating a message m in the presence of a malicious jamming adversary (Calvin), who can erase an arbitrary set of up to pn bits out of n transmitted bits X = (x_1, …, x_n). The capacity of such a channel when Calvin is exactly causal, i.e., Calvin's decision of whether or not to erase bit x_i depends on his observations (x_1, …, x_i), was recently characterized [1], [2] to be 1 − 2p. In this work we show two (perhaps) surprising phenomena. Firstly, we demonstrate via a novel code construction that if Calvin is delayed by even a single bit, i.e., Calvin's decision of whether or not to erase bit x_i depends only on (x_1, …, x_{i−1}) (and is independent of the “current bit” x_i), then the capacity increases to 1 − p when the encoder is allowed to be stochastic. Secondly, we show via a novel jamming strategy for Calvin that, in the single-bit-delay setting, if the encoding is deterministic (i.e., the transmitted codeword X is a deterministic function of the message m) then no rate asymptotically larger than 1 − 2p is possible with vanishing probability of error; hence stochastic encoding (using private randomness at the encoder) is essential to achieve the capacity of 1 − p against a one-bit-delayed Calvin.
['Bikash Kumar Dey', 'Sidharth Jaggi', 'Michael Langberg', 'Anand D. Sarwate']
A bit of delay is sufficient and stochastic encoding is necessary to overcome online adversarial erasures
882,079
Evolvable Hardware (EHW) is a combination of evolutionary algorithms and reconfigurable hardware devices. Due to their flexibility and adaptability, EHW-based solutions receive a lot of attention in industrial applications. One of the obstacles to realizing an EHW-based method is its very long training time. This study deals with the parallelization of EHW-based design of image filters using graphics processing units (GPUs). The design process is analyzed and decomposed into smaller processes that can run in parallel. Pixel-based data for training and verifying EHW solutions are partitioned according to the architecture of the GPU. Several strategies for deploying parallel processes are developed and implemented. With the proposed method, significant improvements in the efficiency of training EHW models are gained: using a GPU with 240 cores, a speedup of 64 times is obtained. This paper evaluates and compares the performance of the proposed method with others.
['Chih-Hung Wu', 'Chin-Yuan Chiang', 'Yi-Han Chen']
Parallelism of Evolutionary Design of Image Filters for Evolvable Hardware Using GPU
239,582
Rapid changes in the business environment call for more flexible and adaptive workflow systems. Researchers have proposed that workflow management systems (WfMSs) comprising multiple agents can provide these capabilities. We have developed a multi-agent based workflow system, JBees, which supports distributed process models and the adaptability of executing processes. Modern workflow systems should also have the flexibility to integrate available Web services as they are updated. In this paper, we discuss how our agent-based architecture can be used to bind and access Web services in the context of executing a workflow process model. We use an example from the diamond processing industry to show how our agent architecture can be used to integrate Web services with WfMSs.
['Bastin Tony Roy Savarimuthu', 'Maryam Purvis', 'Martin K. Purvis', 'Stephen Cranefield']
Integrating Web Services with Agent Based Workflow Management System (WfMS)
278,424
SIMD programs can devote substantial time to manipulating the underlying hardware's context registers, the status bits that determine whether processors in the SIMD array execute or skip the current instruction. This paper describes two optimizations, implemented in a compiler for a data parallel C, that reduce the overhead of manipulating and accessing context registers. The first optimization uses observations about a program's nesting structure to eliminate context register save/restore operations performed by guarded parallel control constructs. The second uses two-version code to eliminate context register accesses performed by individual instructions.
['Maya Gokhale', 'Phil Pfeiffer']
SIMD Optimizations in a Data Parallel C
487,375
An indirect approach for building hex-dominant meshes is proposed: a tetrahedral mesh is constructed first and then recombined to create a maximum number of hexahedra. The efficiency of the recombination process is known to depend significantly on the quality of the sampling of the vertices. A good vertex sampling depends in turn on the quality of the underlying frame field that has been used to locate the vertices. An iterative procedure to obtain a high-quality three-dimensional frame field is presented. Then, a new point insertion algorithm based on frame field smoothness is developed: points are inserted in priority in smooth frame field regions. The new approach is tested and compared with simpler strategies on various geometries. The new method leads to hex-dominant meshes exhibiting either an equivalent or a larger volume ratio of hexahedra (up to 20%) compared to the frontal point insertion approach. Highlights: a frame field smoothness-based algorithm for bulk point insertion is proposed; an iterative procedure for smoothing the frame field is proposed; the impact of geometric singularities on the final mesh is reduced; the volume ratio of hexahedra is increased by up to twenty percent.
['Paul-Emile Bernard', 'Jean-François Remacle', 'Nicolas Kowalski', 'Christophe Geuzaine']
Frame field smoothness-based approach for hex-dominant meshing
569,951
This paper addresses the problem of target detection in dynamic environments in a semi-supervised, data-driven setting with low-cost passive sensors. A key challenge here is to simultaneously achieve high probabilities of correct detection and low probabilities of false alarm under the constraints of limited computation and communication resources. In general, changes in a dynamic environment may significantly affect the performance of target detection due to limited training scenarios and the assumptions made on signal behavior under a static environment. To this end, an algorithm for binary hypothesis testing is proposed, based on clustering of features extracted from multiple sensors that may observe the target. First, the features are extracted individually from the time-series signals of different sensors by using a recently reported feature extraction tool called symbolic dynamic filtering. Then, these features are grouped as clusters in the feature space to evaluate the homogeneity of the sensor responses. Finally, a decision for target detection is made based on distance measurements between pairs of sensor clusters. The proposed procedure has been experimentally validated in a laboratory setting for mobile target detection. In the experiments, multiple homogeneous infrared sensors have been used with different orientations in the presence of changing ambient illumination intensities. The experimental results show that the proposed target detection procedure with feature-level sensor fusion is robust and that it outperforms those with decision-level and data-level sensor fusion.
['Yue Li', 'Devesh K. Jha', 'Asok Ray', 'Thomas A. Wettergren']
Information Fusion of Passive Sensors for Detection of Moving Targets in Dynamic Environments
707,956
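A hedged sketch of the decision stage described in the abstract: per-sensor feature vectors are grouped with k-means and a target is declared when the inter-cluster centroid distance is large. The synthetic features, the use of scikit-learn, and the threshold value are all illustrative assumptions; the paper extracts its features with symbolic dynamic filtering.

```python
# Cluster-distance-based binary hypothesis test (illustrative assumptions:
# synthetic features, scikit-learn k-means, fixed threshold).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 8 sensors x 16-dim features; a moving target perturbs half the sensors.
quiet = rng.normal(0.0, 0.1, size=(8, 16))
active = quiet.copy(); active[:4] += 0.8

def detect(features, threshold=0.5):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    d = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
    return d > threshold                 # H1: target present

print("quiet scene :", detect(quiet))   # False expected
print("active scene:", detect(active))  # True expected
```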
We target the sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our compressed sensing framework poses the estimation of 3D structure as a dictionary learning problem: we define the dictionary as the temporally varying 3D structure, and local sequencing information in terms of the sparse coefficients describing a locally linear 3D structural interpolation. Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. Experimental results demonstrate the effectiveness of our approach on both synthetic data and captured imagery.
['Enliang Zheng', 'Dinghuang Ji', 'Enrique Dunn', 'Jan Michael Frahm']
Sparse Dynamic 3D Reconstruction from Unsynchronized Videos
579,543
In this paper, we propose and analyze new finger management techniques applicable to RAKE receivers operating in the soft handover region. These schemes employ "distributed" types of generalized selection combining (GSC) and minimum selection GSC schemes in order to minimize the impact of a sudden connection loss of one of the active base stations. By accurately quantifying the average error rate, we show through numerical examples that our newly proposed distributed schemes offer a clear advantage over their conventional counterparts.
['Seyeong Choi', 'Mohamed-Slim Alouini', 'Khalid A. Qaraqe']
Finger Management Schemes for Minimum Call Drop in the Soft Handover Region
121,876
P2P file-sharing networks such as Kazaa, eDonkey, and Limewire boast millions of users. Because of scalability concerns and legal issues, such networks are moving away from the semicentralized approach that Napster typifies toward more scalable and anonymous decentralized P2P architectures. Because they lack any central authority, these networks provide a new, interesting context for the expression of human social behavior. However, the activities of P2P community members are sometimes at odds with what real-world authorities consider acceptable. One example is the use of P2P networks to distribute illegal pornography. To gauge the form and extent of P2P-based sharing of illegal pornography, we analyzed pornography-related resource-discovery traffic in the Gnutella P2P network. We found that a small yet significant proportion of Gnutella activity relates to illegal pornography: for example, 1.6 percent of searches and 2.4 percent of responses are for this type of material. But does this imply that such activity is widespread in the file-sharing population? On the contrary, our results show that a small yet particularly active subcommunity of users searches for and distributes illegal pornography, but it isn't a behavioral norm.
['Daniel Hughes', 'James Walkerdine', 'Geoff Coulson', 'Stephen Gibson']
Peer-to-peer: is deviant behavior the norm on P2P file-sharing networks?
97,091
Planning ahead through space and time: from neuropsychology to motor control.
['Mariama Dione', 'Laurent Ott', 'Yvonne Delevoye-Turrell']
Planning ahead through space and time: from neuropsychology to motor control.
751,433
TOWARDS MODEL-DRIVEN EVOLUTION OF DATA WAREHOUSES
['Christian Kurze', 'Marcus Hofmann', 'Frieder Jacobi', 'André Müller', 'Peter Gluchowski']
TOWARDS MODEL-DRIVEN EVOLUTION OF DATA WAREHOUSES
762,821
In this paper, we propose a method to improve the performance of information retrieval systems (IRS) by increasing the selectivity of relevant documents on the web. Indeed, a significant number of relevant documents on the web are not returned by an IRS (specifically a search engine) because of the richness of the Arabic natural language. As a consequence, the search engine does not reach high performance and does not meet the needs of users. To remedy this problem, we propose a method of query enrichment. This method relies on several steps. First, identification of the significant terms (simple and compound) present in the query. Then, generation of a descriptive list and its assignment to each term that has been identified as significant in the query. A descriptive list is a set of linguistic knowledge of different types (morphological, syntactic and semantic). In this paper we are interested in the statistical treatment, based on the similarity method. This method exploits Salton's TF-IDF weighting function and TF-IEF on the list generated in the previous step. The TF-IDF function identifies relevant documents, while the role of TF-IEF is to identify relevant sentences. The terms of high weight (terms likely to be correlated with the context of the response) are incorporated into the original query. The application of this method is based on a corpus of documents belonging to a closed domain.
['Souheyl Mallat', 'Houssem Abdellaoui', 'Mohsen Maraoui', 'Mounir Zrigui']
Method of enriching queries by contextual information to approve of information retrieval system in Arabic
912,009
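The TF-IDF weighting step lends itself to a compact sketch: score candidate terms across a toy corpus and append the highest-weight ones to the query. The English corpus, whitespace tokenizer, and two-term cutoff are illustrative assumptions; the paper operates on Arabic text with richer descriptive lists and an additional TF-IEF pass.

```python
# TF-IDF-based query expansion sketch (toy corpus and cutoff are assumed).
import math
from collections import Counter

docs = [
    "renewable energy storage systems",
    "energy market price models",
    "storage systems for the energy grid",
]
query = ["energy", "storage"]

def tf_idf(term, doc_tokens, all_docs):
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in all_docs if term in d)
    return tf * math.log(len(all_docs) / df)

tokenized = [d.split() for d in docs]
scores = Counter()
for toks in tokenized:
    for t in set(toks) - set(query):
        scores[t] += tf_idf(t, toks, tokenized)

expansion = [t for t, _ in scores.most_common(2)]
print("enriched query:", query + expansion)
```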
Compiler optimizations play a key role in unlocking the performance of the PA-8000 (L. Gwennap, 1994), an innovative dynamically scheduled machine which is the first implementation of the 64-bit PA 2.0 member of the HP PA-RISC architecture family. This wide superscalar machine with a long out-of-order window provides significant execution bandwidth and automatically hides latency at runtime; however, despite its ample hardware resources, many of the optimizing transformations which proved effective for the PA-8000 served to augment its ability to exploit the available bandwidth and to hide latency. While legacy codes benefit from the PA-8000's sophisticated hardware, recompilation of old binaries can be vital to realizing the full potential of the PA-8000, given the impact of new compilers in achieving peak performance for this machine.
['Anne M. Holler']
Compiler optimizations for the PA-8000
913,627
Multistage Interconnection Networks (MINs) are used to interconnect different processing modules in various parallel systems or on high bandwidth networks. In this paper an integrated performance methodology is presented. A new approximate performance model for self-routing MINs consisting of symmetrical switches which are subject to a backpressure blocking mechanism is analyzed. Based on this, the steady-state distribution of the queue utilization is estimated and then all important performance metrics are calculated. Moreover, a general evaluation factor which helps in choosing a better performance MIN in comparison with other similar MIN architecture specifications is defined. The model was exemplified for the case of symmetrical single- and double-buffered MINs. It provides accurate results and converges very quickly. The obtained results were validated by extensive simulations and were compared to existing related work in the literature.
['John D. Garofalakis', 'Eleftherios Stergiou']
An Approximate Analytical Performance Model for Multistage Interconnection Networks with Backpressure Blocking Mechanism
275,749
In this paper, we present new techniques for collision search in the hash function SHA-0. Using the new techniques, we can find collisions of the full 80-step SHA-0 with complexity less than 2^39 hash operations.
['Xiaoyun Wang', 'Hongbo Yu', 'Yiqun Lisa Yin']
Efficient collision search attacks on SHA-0
133,030
Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained devices resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability, availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed architecture's objectives, including resource awareness, smart data integration and visualization, cost reduction, and performance guarantee.
['Mohamed Adel Serhani', 'Mohamed El Menshawy', 'Abdelghani Benharref']
SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases
613,262
Motivated by the prevalent store brand entry and the differing legal environments for pricing flexibility, this paper studies the interaction between a manufacturer’s channel strategy and retailers’ store brand decisions, under both flexible wholesale price (FWP) scheme (where the manufacturer can charge different prices to the retailers) and uniform wholesale price (UWP) scheme (where a uniform price should be offered). Under FWP scheme, a retailer has a lower incentive to introduce a store brand under single channel than under dual channel, and thus, single channel can be a strategy to prevent store brand entry. This strategy is effective when the store brands are moderately competitive. Conversely, under UWP scheme, a retailer has a lower incentive to introduce a store brand under dual channel. As a result, the manufacturer prefers dual channel, and single channel is rarely adopted under UWP scheme. Under FWP scheme, the retailers’ store brand introduction decisions are mostly symmetric under dual channel due to the less dependent wholesale prices charged by the manufacturer and their ex ante symmetric roles. But under UWP scheme, a retailer may gain more profit by not introducing a store brand if its competitor has already introduced one, which gives rise to a much larger region of asymmetric dual-channel setting. We also identify two interesting impacts of increasing competitiveness of store brand. First, under UWP scheme, fewer retailers should introduce a store brand regardless of its increasing competitiveness under certain conditions. Second, in contrast to the existing literature that shows the retailer should increase the price of the increasingly competitive store brand, we find that the retailer should, instead, decrease the price of store brand when its base demand is large. Finally, we show that although the manufacturer has greater pricing flexibility under FWP scheme, he never earns a larger profit than under UWP scheme. Whereas the retailer’s profit can be either larger or smaller under FWP scheme.
['Yannan Jin', 'Xiaole Wu', 'Qiying Hu']
Interaction between Channel Strategy and Store Brand Decisions
831,747
A numerical technique for evaluating risky projects with fuzzy real options is developed. Fuzzy real options are based on hybrid variables that represent the market risk of a project, which is derived from data, and the private risk, which is usually estimated by experts. These hybrid variables can be evaluated using an extension of Least Squares Monte-Carlo simulation that produces numerical evaluations of fuzzy real options based on the generation and backward induction of sample paths. A major advantage of this methodology is its ability to determine values regardless of whether or not an analytic solution exists. To illustrate, two fuzzy real options models are evaluated using the proposed algorithm: one, on brownfields, for comparison with analytic outputs for fuzzy real options; the other, on oil development, for comparison to the results of the Integrated Valuation Procedure (IVP), another algorithm to assess private risk. The results indicate that the generalized Least Squares Monte-Carlo simulation produces similar results to the analytic valuation of fuzzy real options, when this is possible. Moreover, the use of fuzzy real options can overcome the private risk problem without invoking IVP, which is preferable because expert linguistic estimates are easier to use in a fuzzy environment.
['Qian Wang', 'D.M. Kilgour', 'Keith W. Hipel']
Fuzzy Real Options for Risky Project Evaluation Using Least Squares Monte-Carlo Simulation
141,738
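For readers unfamiliar with Least Squares Monte-Carlo, the crisp Longstaff-Schwartz backbone that the fuzzy extension generalizes can be sketched as follows, here for a plain American put; all market parameters are illustrative assumptions, and none of the fuzzy/hybrid machinery from the paper is reproduced.

```python
# Crisp Least Squares Monte-Carlo (Longstaff-Schwartz) for an American put;
# the paper extends this backbone with fuzzy/hybrid variables for private
# risk. All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion price paths.
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)        # value if held to maturity
for t in range(steps - 2, -1, -1):            # backward induction
    payoff *= disc
    itm = K - S[:, t] > 0                     # regress only in-the-money
    if itm.sum() > 3:
        X = S[itm, t]
        A = np.column_stack([np.ones_like(X), X, X**2])
        cont = A @ np.linalg.lstsq(A, payoff[itm], rcond=None)[0]
        exercise = (K - X) > cont             # exercise when immediate
        idx = np.where(itm)[0][exercise]      # payoff beats continuation
        payoff[idx] = K - S[idx, t]

print("LSM American put value:", disc * payoff.mean())
```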
Extending sensor networks into the cloud using amazon web services
['Kevin C. Lee', 'D. Murray', 'Danny Hughes', 'Wouter Joosen']
Extending sensor networks into the cloud using amazon web services
893,720
Pairing-Based Cryptography has become relevant in industry mainly because of the increasing interest in Identity-Based protocols. A major deterrent to the general use of pairing-based protocols is the complex nature of such protocols; efficient implementation of pairing functions is often difficult as it requires more knowledge than previous cryptographic primitives. In this paper we present a tool for automatically generating optimized code for pairing functions. Our cryptographic compiler chooses the most appropriate pairing function for the target family of curves, either the Tate, ate, R-ate or Optimal pairing function, and generates its code. It also generates optimized code for the final exponentiation using the parameterisation of the chosen pairing-friendly elliptic curve.
['Luis J. Dominguez Perez', 'Michael L. Scott']
Designing a code generator for pairing based cryptographic functions
542,540
Online dating systems are a common way to discover romantic partners. Yet there persists a gap in knowledge regarding how users of these systems determine which potential partners are worthy of in-person meetings, as well as the outcomes of these in-person meeting decisions. The objective of this dissertation is two-fold: 1) to understand how online dating system users make decisions to meet or not meet potential romantic partners in-person, and 2) to understand how online dating system designs currently support--and could better support--predictions of initial in-person attraction to potential romantic partners.
['Douglas Zytko']
Enhancing Evaluation of Potential Romantic Partners Online
929,037
The financial credit crash of 2007 and 2008, based on various explanations of debt derivatives and housing bubbles that caused many financial institutions to lend money to those who could not repay, has its foundations in agency theory and faulty business strategy. Agency theory is the conflict of interest that may arise between the agent (the Chief Executive Officer or CEO or other managers) and the principals (shareholders). This conflict can cause the firm – and ultimately, the shareholders – to lose money. In the eyes of the shareholders, the goal of every CEO should be to maximise shareholder wealth. Agency theory comes into play when the CEO does not act in the best interest of the shareholders. Investment banks are now trying to recoup funds from homeowners, only to find out that they cannot afford the payments; they are then stuck with a foreclosed home and have to try to sell it in an economy where house values have dropped in the last year. Examples from the mortgage, financial and automotive industries are cited to illustrate certain points associated with agency theory-based problems.
['Alan D. Smith']
Agency theory and the financial crisis from a strategic perspective
72,919
In this paper we describe the challenges presented when describing and composing biodiversity informatics resources. We present a novel metadata framework as a tool to capture information about these resources and their interrelationships, explaining how this framework is designed to support workflow composition. We show how our approach can help in guiding users toward useful resources, and discuss the impact that this framework could have on real composition problems. Eventually this framework will support user enhancement of the resource metadata and a rating system for user feedback on the usefulness of suggestions made by the system.
['Russell P. McIver', 'Andrew Jones', 'Richard John White']
A Framework for Supporting the Composition of Biodiversity Informatics Resources
118,465
This paper introduces a novel way to enhance input devices to sense a user's foot motion. By measuring the electrostatic potential of a user, this device can sense the user's footsteps and jumps without requiring any external sensors such as a floor mat or sensors embedded in shoes. We apply this sensing principle to the gamepad to explore a new class of game interactions that combine the player's physical motion with gamepad manipulations. We also discuss other possible input devices that can be enhanced by the proposed sensing architecture, such as a portable music player that can sense foot motion through the headphones and musical instruments that can be affected by the players' motion.
['Jun Rekimoto', 'Hua Wang']
Sensing GamePad: electrostatic potential sensing for enhancing entertainment oriented interactions
327,162
CART Versus CHAID Behavioral Biometric Parameter Segmentation Analysis
['Ionela Roxana Glăvan', 'Daniel Petcu', 'Emil Simion']
CART Versus CHAID Behavioral Biometric Parameter Segmentation Analysis
763,407
This paper proposes a demand response strategy for energy management within a smart grid community of residential households. Some of the households own renewable energy systems and energy storage systems (ESS) and sell their excess renewable energy to the residences that need electrical energy. The proposed strategy comprises methods that provide benefits for the residential electricity users and for the load aggregator. Specifically, we propose an off-line algorithm that schedules the integration of renewable resources by trading energy between the renewable energy producers and buyers. Moreover, we propose a geometric programming based optimization method that uses the ESS for balancing the community's power grid load and for reducing the grid consumption cost. Simulations show that the proposed method may lead to a 10.5% reduction in the community's grid consumption cost. It may also achieve balanced load profiles with a peak-to-average ratio (PAR) close to unity, the average PAR reduction being 52%.
['Adriana Chis', 'Jayaprakash Rajasekharan', 'Jarmo Lunden', 'Visa Koivunen']
Demand response for renewable energy integration and load balancing in smart grid communities
961,349
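A toy peak-shaving heuristic gives a feel for how an ESS flattens a load profile and reduces the peak-to-average ratio (PAR) mentioned above; this greedy rule is a stand-in assumption, not the paper's geometric-programming formulation.

```python
# Toy battery-scheduling heuristic (assumed; not the paper's optimization)
# showing how an ESS flattens a community load profile and lowers the PAR.
import numpy as np

load = np.array([3, 3, 4, 6, 9, 10, 8, 5, 4, 3], dtype=float)  # kW per hour
target = load.mean()
battery, capacity = 0.0, 8.0                # kWh state and size (assumed)

net = load.copy()
for h in range(len(load)):
    if load[h] > target:                    # discharge to shave peaks
        use = min(load[h] - target, battery)
        battery -= use; net[h] -= use
    else:                                   # charge in the valleys
        fill = min(target - load[h], capacity - battery)
        battery += fill; net[h] += fill

par = lambda x: x.max() / x.mean()
print(f"PAR before {par(load):.2f}, after {par(net):.2f}")
```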
The performance analysis of peer-to-peer (P2P) networks calls for a new kind of queueing model, in which jobs and service stations arrive randomly. Except in some simple special cases, the queueing model with varying service rate is, in general, mathematically intractable. Motivated by the P-K formula for the M/G/1 queue, we developed a limiting analysis approach based on the connection between the fluctuation of the service rate and the mean queue length. Considering the two extreme service rates, we proved the previously postulated conjecture on the lower and upper bounds of the mean queue length. Furthermore, an approximate P-K formula to estimate the mean queue length is derived from the convex combination of these two bounds and the conditional mean queue length under the overload condition. We confirmed the accuracy of our approximation by extensive simulation studies with different system parameters. We also verified that all limiting cases of the system behavior are consistent with the predictions of our formula.
['Jian Zhang', 'Tony T. Lee', 'Tong Ye', 'Weisheng Hu']
On Pollaczek-Khinchine Formula for Peer-to-Peer Networks
779,607
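For reference, the classical Pollaczek-Khinchine mean queue length for the M/G/1 queue, which the approximate formula above generalizes to varying service rates, reads (standard notation; not the paper's final expression):

```latex
% Classical P-K mean queue length for M/G/1, with utilization
% \rho = \lambda E[S] and squared coefficient of variation
% C_s^2 = \mathrm{Var}(S)/E[S]^2 of the service time S:
\[
  E[N] \;=\; \rho \;+\; \frac{\rho^{2}\bigl(1 + C_s^{2}\bigr)}{2\,(1-\rho)}
\]
```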
Traditionally, Grid users have been identifiable and traceable beyond reasonable doubt by their digital certificates. However, Grids are used in an ever-increasing variety of contexts, and thus the number of usage scenarios has grown accordingly. In bio-medicine and other health-related fields, a need for anonymous access to Grid resources has been identified. Anonymous access to resources prevents the resource owners and other external parties from tracing the users and their actions. Such anonymity of resource usage in Grids is needed above all in commercial contexts, e.g. protecting the development process of a new medicine by anonymizing the accesses to medical research databases. In this paper we identify the requirements and define an architecture for a pseudonymity system addressing these needs. The protocols used between the components are also defined.
['Joni Hahkala', 'Henri Mikkonen', 'Mika Silander', 'John White']
A Pseudonymity System for Grids
346,207
The plasticity of the adult memory network for integrating novel word forms (lexemes) was investigated with whole-head magnetoencephalography (MEG). We showed that spoken word forms of an (artificial) foreign language are integrated rapidly and successfully into existing lexical and conceptual memory networks. The new lexemes were learned in an untutored way, by pairing them frequently with one particular object (and thus meaning), and infrequently with 10 other objects (learned set). Other novel word forms were encountered just as often, but paired with many different objects (nonlearned set). Their impact on semantic memory was assessed with cross-modal priming, with novel word forms as primes and object pictures as targets. The MEG counterpart of the N400 (N400m) served as an indicator of a semantic (mis)match between words and pictures. Prior to learning, all novel words induced a pronounced N400m mismatch effect to the pictures. This component was strongly reduced after training for the learned novel lexemes only, and now closely resembled the brain's response to semantically related native-language words. This result cannot be explained by mere stimulus repetition or stimulus-stimulus association. Thus, learned novel words rapidly gained access to existing conceptual representations, as effectively as related native-language words. This association of novel lexemes and conceptual information happened fast and almost without effort. Neural networks mediating these integration processes were found within left temporal lobe, an area typically described as one of the main generators of the N400 response.
['Christian Dobel', 'Markus Junghöfer', 'Caterina Breitenstein', 'Benedikt Klauke', 'Stefan Knecht', 'Christo Pantev', 'Pienie Zwitserlood']
New names for known things: On the association of novel word forms with existing semantic information
281,213
Kruppa-equation-based camera self-calibration is one of the classical problems in computer vision. Most state-of-the-art approaches directly solve the quadratic constraints derived from the Kruppa equations, which are computationally intensive and for which good initial values are difficult to obtain. In this paper, we propose a new initialization algorithm that estimates the unknown scalar in the equation, so that the camera parameters can be computed linearly in closed form and then refined iteratively via global optimization techniques. We prove that the scalar can be uniquely recovered from the infinite homography and propose a practical method to estimate this homography from a physical or virtual plane located far from the camera. Extensive experiments on synthetic and real images validate the effectiveness of the proposed method.
['Guanghui Wang', 'Q. M. Jonathan Wu', 'Wei Zhang']
Camera self-calibration under the constraint of distant plane
431,568
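For context, the Kruppa equations in their standard fundamental-matrix form are shown below; the unknown scalar that the paper proposes to estimate first appears as the factor lambda. This is textbook notation, not a restatement of the paper's derivation.

```latex
% Standard Kruppa equations: \omega^* = K K^\top is the dual image of the
% absolute conic, F the fundamental matrix, e' the epipole in the second
% view, and \lambda the unknown scalar estimated first by the paper.
\[
  F\,\omega^{*}\,F^{\top}
  \;=\;
  \lambda\,[e']_{\times}\,\omega^{*}\,[e']_{\times}^{\top}
\]
```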
Whittle Index Policy for Crawling Ephemeral Content
['Konstantin Avrachenkov', 'Vivek S. Borkar']
Whittle Index Policy for Crawling Ephemeral Content
914,119
Geometry Question Generator: Question and Solution Generation, Validation and User Evaluation
['Rahul Singhal', 'Martin Henz']
Geometry Question Generator: Question and Solution Generation, Validation and User Evaluation
670,910
Privacy amplification is the task by which two cooperating parties transform a shared weak secret, about which an eavesdropper may have side information, into a uniformly random string uncorrelated from the eavesdropper. Privacy amplification against passive adversaries, where it is assumed that the communication is over a public but authenticated channel, can be achieved in the presence of classical as well as quantum side information by a single-message protocol based on strong extractors. In 2009 Dodis and Wichs devised a two-message protocol to achieve privacy amplification against active adversaries, where the public communication channel is no longer assumed to be authenticated, through the use of a strengthening of strong extractors called non-malleable extractors which they introduced. Dodis and Wichs only analyzed the case of classical side information. We consider the task of privacy amplification against active adversaries with quantum side information. Our main result is showing that the Dodis-Wichs protocol remains secure in this scenario provided its main building block, the non-malleable extractor, satisfies a notion of quantum-proof non-malleability which we introduce. We show that an adaptation of a recent construction of non-malleable extractors due to Chattopadhyay et al. is quantum proof, thereby providing the first protocol for privacy amplification that is secure against active quantum adversaries. Our protocol is quantitatively comparable to the near-optimal protocols known in the classical setting.
['Gil Cohen', 'Thomas Vidick']
Privacy Amplification Against Active Quantum Adversaries
863,245
Hardwired Critical Action Panels for Emergency Preparedness: - Design Principles and CAP Design for Offshore Petroleum Platforms.
['Bojana Petkov', 'Alf Ove Braseth']
Hardwired Critical Action Panels for Emergency Preparedness: - Design Principles and CAP Design for Offshore Petroleum Platforms.
774,057
Background: Deconvolution is a mathematical process of resolving an observed function into its constituent elements from which the signal was formed. In the field of biomedical research, deconvolution analysis is applied to obtain single cell-type or tissue-specific signatures from a mixed signal. Recent development of next-generation sequencing technology suggests RNA-seq as a fast and accurate method for obtaining transcriptomic profiles. Although a linearity assumption is required in most deconvolution algorithms, few studies have investigated which RNA-seq quantification methods yield the optimum linear space for deconvolution analysis. Results: Using a benchmark RNA-seq dataset, we investigated the linearity of abundance estimates from the seven most popular RNA-seq quantification methods, both at the gene and isoform levels. Linearity is evaluated through parameter estimation, concordance analysis and residual analysis based on a multiple linear regression model. Results show that count data give poor parameter estimates, large intercepts and high inter-sample variability, while TPM values from Kallisto and Salmon show high linearity in all analyses. Conclusions: Salmon and Kallisto TPM data give the best fit to the linear model studied. This suggests that TPM values estimated from Salmon and Kallisto are the ideal RNA-seq measurements for deconvolution studies.
['Haijing Jin', 'Ying-Wooi Wan', 'Zhandong Liu']
Comprehensive Evaluation of RNA-seq Quantification Methods for Linearity
949,987
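The linearity evaluation can be sketched with simulated data: regress an in-silico 50/50 mixture on its two pure profiles and inspect the recovered weights, intercept, and R^2. The gamma-distributed TPM-like profiles and noise level are assumptions; the paper uses a benchmark dataset and seven real quantification pipelines.

```python
# Regression-based linearity check on a simulated mixture (data shapes,
# distributions, and mixing fractions are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(3)
genes = 500
pure_a = rng.gamma(2.0, 50.0, size=genes)     # TPM-like pure profiles
pure_b = rng.gamma(2.0, 50.0, size=genes)
mix = 0.5 * pure_a + 0.5 * pure_b + rng.normal(0, 5, size=genes)

X = np.column_stack([np.ones(genes), pure_a, pure_b])
coef = np.linalg.lstsq(X, mix, rcond=None)[0]
intercept, wa, wb = coef
resid = mix - X @ coef
r2 = 1 - resid.var() / mix.var()

# Linear-space data should give weights near 0.5 and an intercept near 0.
print(f"weights ({wa:.3f}, {wb:.3f}), intercept {intercept:.2f}, R^2 {r2:.3f}")
```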
The failure of sensor nodes can hinder the usefulness and effectiveness of a WSN. In this paper we present a method to repair network partitioning by using a mobile node. By reasoning upon the degree of connectivity with its neighbours, the mobile node finds the position at which to deploy new sensor nodes in order to restore network connectivity. Factors influencing the algorithm's performance are discussed. Simulations show that the proposed method is effective and efficient notwithstanding packet loss.
['Gianluca Dini', 'Marco Pelagatti', 'Ida Maria Savino']
Repairing network partitions in Wireless Sensor Networks
285,335
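A toy sketch of the repair idea, under assumed geometry and a networkx graph representation (the paper's algorithm reasons on the mobile node's local connectivity, which is not reproduced here): find the connected components of the surviving network and deploy a relay midway between the closest pair of components.

```python
# Partition repair sketch: assumed node coordinates and radio range.
import itertools
import networkx as nx

pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (4, 0), 4: (5, 0)}  # node coords
RANGE = 1.5
G = nx.Graph()
G.add_nodes_from(pos)
for u, v in itertools.combinations(pos, 2):
    if ((pos[u][0]-pos[v][0])**2 + (pos[u][1]-pos[v][1])**2) ** 0.5 <= RANGE:
        G.add_edge(u, v)

parts = [list(c) for c in nx.connected_components(G)]
if len(parts) > 1:
    # Closest inter-component node pair; place the repair node between them.
    u, v = min(((a, b) for a in parts[0] for b in parts[1]),
               key=lambda p: (pos[p[0]][0]-pos[p[1]][0])**2 +
                             (pos[p[0]][1]-pos[p[1]][1])**2)
    repair = ((pos[u][0]+pos[v][0])/2, (pos[u][1]+pos[v][1])/2)
    print("deploy new sensor at", repair)
```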
We consider linear dynamical systems defined by differential algebraic equations. The associated input-output behaviour is given by a transfer function in the frequency domain. Physical parameters of the dynamical system are replaced by random variables to quantify uncertainties. We analyse the sensitivity of the transfer function with respect to the random variables. Total sensitivity coefficients are computed by a nonintrusive and by an intrusive method based on the expansions in series of the polynomial chaos. In addition, a reduction of the state space is applied in the intrusive method. Due to the sensitivities, we perform a model order reduction within the random space by changing unessential random variables back to constants. The error of this reduction is analysed. We present numerical simulations of a test example modelling a linear electric network.
['Roland Pulch', 'E. Jan W. ter Maten', 'Florian Augustin']
Sensitivity analysis and model order reduction for random linear dynamical systems
140,468
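For reference, with an orthonormal polynomial chaos expansion Y = sum over alpha of c_alpha Phi_alpha(X), the total sensitivity coefficients computed in the abstract take the standard form below; this is textbook notation, not the paper's specific derivation for differential algebraic systems.

```latex
% Total sensitivity coefficient of the i-th random variable from the
% coefficients c_\alpha of an orthonormal PC expansion (standard form):
\[
  S_{T,i} \;=\;
  \frac{\sum_{\alpha:\,\alpha_i > 0} c_\alpha^{2}}
       {\sum_{\alpha \neq 0} c_\alpha^{2}}
\]
```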
Many companies experience nonstationary demand because of short product life cycles, seasonality, customer buying patterns, or other factors. We present a practical model for managing inventory in a supply chain facing stochastic, nonstationary demand. Our model is based on the guaranteed service modeling framework. We first describe how inventory levels should adapt to changes in demand at a single stage. We then show how nonstationary demand propagates in a supply chain, allowing us to link stages and apply a multiechelon optimization algorithm designed originally for stationary demand. We describe two successful applications of this model. The first is a tactical implementation to support monthly safety stock planning at Microsoft. The second is a strategic project to evaluate the benefits of using an inventory pool at Case New Holland.
['John J. Neale', 'Sean P. Willems']
Managing Inventory in Supply Chains with Nonstationary Demand
358,176
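The guaranteed-service framework mentioned above has a compact stationary form, stated here for orientation (the paper's contribution is adapting these quantities period by period under nonstationary demand): with inbound service time s_in, processing time T, outbound service time s_out, mean demand mu, demand standard deviation sigma, and service factor z,

```latex
% Textbook guaranteed-service base stock at a single stage; the net
% replenishment time \tau links the stage to its neighbors, which is what
% permits multiechelon optimization across the chain:
\[
  \tau = s_{\mathrm{in}} + T - s_{\mathrm{out}}, \qquad
  B = \mu\,\tau + z\,\sigma\sqrt{\tau}
\]
```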
'Children should learn to code' is a simple message that, in recent years, has dominated the scene around the development of the computing and technology curriculum. A range of government, public, and privately funded initiatives have highlighted the value of introducing children to coding and computer science, as part of the broader agenda of encouraging better engagement with Science, Technology, Engineering, Arts and Mathematics (STEAM). As a result of this increased focus, a number of products blending hardware, software, creative thinking and play have emerged onto the market. This paper introduces a co-design process for transforming a playful learning technology product, 'littleBits', into an online digital onboarding experience, to assist creative minds in understanding the technology.
['Mark Lochrie', 'Glenn Matthys', 'Adrian Gradinar', 'Andy Dickinson', 'Onno Baudouin', 'Paul Egglestone']
Co-designing a physical to digital experience for an onboarding and blended learning platform
847,388
With a steady pace of adoption of service-oriented architecture, companies have made significant progress in implementing various kinds of Web services and converting existing applications to service-oriented architecture. As a significant number of services have been implemented and put into actual use, many service-oriented enterprises are faced with the problem of how to manage these services efficiently. In this paper, we propose a framework for more efficient management of these services. In this framework, the creation and maintenance of enterprise solutions are modeled by flows and finite state machines (FSMs), among other formal models. For instance, each enterprise solution would be modeled as composite services that can be described by respective flow and FSM models. These solution models can then be stored, and later retrieved for the execution of the composite services. Furthermore, formal models are also used to maintain and update these service-oriented solutions, improving the efficiency and quality of service management by taking advantage of the underlying service-oriented architecture. In this paper, we first provide a normalized categorization of enterprise services. We then describe the framework for managing services in service-oriented enterprises. We also discuss how this framework can help manage enterprise services more efficiently. Finally, we give a real-world example to illustrate how this framework can be used in practice.
['Ying Huang', 'Santhosh Kumaran', 'Jen-Yao Chung']
A service management framework for service-oriented enterprises
53,656
Exploiting Linked Open Data to Uncover Entity Types
['Jie Gao', 'Suvodeep Mazumdar']
Exploiting Linked Open Data to Uncover Entity Types
566,909
Coherent feedback control considers purely quantum controllers in order to overcome disadvantages such as the acquisition of suitable quantum information, quantum error correction, etc. These approaches lack a systematic characterization of quantum realizability. Recently, a condition characterizing when a system described by a linear stochastic differential equation is quantum was developed. This condition was named physical realizability, and it was developed for linear quantum systems satisfying the quantum harmonic oscillator canonical commutation relations. In this context, open two-level quantum systems escape the realm of the currently known condition. Compared to linear quantum systems, the challenges in obtaining such a condition lie in the fact that the evolution equation is now a bilinear quantum stochastic differential equation and that the commutation relations for such systems depend on the system variables. The goal of this paper is to provide a necessary and sufficient condition for the preservation of the Pauli commutation relations, as well as to make explicit the relationship between this condition and physical realizability.
['Luis A. Duffaut Espinosa', 'Zibo Miao', 'Ian R. Petersen', 'Valery A. Ugrinovskii', 'Matthew R. James']
Preservation of commutation relations and physical realizability of open two-level quantum systems
575,384
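For reference, the Pauli commutation relations whose preservation the paper characterizes are the standard ones:

```latex
% Pauli commutation relations (standard physics notation):
\[
  [\sigma_x, \sigma_y] = 2i\,\sigma_z, \quad
  [\sigma_y, \sigma_z] = 2i\,\sigma_x, \quad
  [\sigma_z, \sigma_x] = 2i\,\sigma_y
\]
```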
The DataWeb/VDC integration.
['Micah Altman', 'Cavan Capps']
The DataWeb/VDC integration.
804,789