Dataset schema: abstract: string (lengths 8 to 10.1k) | authors: string (lengths 9 to 1.96k) | title: string (lengths 6 to 367) | __index_level_0__: int64 (13 to 1,000k)
The Complexity of Languages Resulting from the Concatenation Operation
['Galina Jirásková', 'Alexander Szabari', 'Juraj Šebej']
The Complexity of Languages Resulting from the Concatenation Operation
852,755
Mixing multitrack music is an expert task where characteristics of the individual elements and their sum are manipulated in terms of balance, timbre and positioning, to resolve technical issues and to meet the creative vision of the artist or engineer. In this paper we conduct a mixing experiment where eight songs are each mixed by eight different engineers. We consider a range of features describing the dynamic, spatial and spectral characteristics of each track, and perform a multidimensional analysis of variance to assess whether the instrument, song and/or engineer is the determining factor that explains the resulting variance, trend, or consistency in mixing methodology. A number of assumed mixing rules from literature are discussed in the light of this data, and implications regarding the automation of various mixing processes are explored. Part of the data used in this work is published in a new online multitrack dataset through which public domain recordings, mixes, and mix settings (DAW projects) can be shared.
['Brecht De Man', 'Brett Leonard', 'Richard King', 'Joshua D. Reiss']
AN ANALYSIS AND EVALUATION OF AUDIO FEATURES FOR MULTITRACK MUSIC MIXTURES
679,053
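A minimal sketch of the kind of multidimensional analysis of variance described in the De Man et al. record above, using statsmodels; the feature and factor column names (loudness, width, centroid, engineer, song, instrument) are hypothetical placeholders, not the paper's actual feature set.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical per-track feature table: one row per (song, engineer, instrument)
# track, with dynamic/spatial/spectral features as columns.
df = pd.read_csv("mix_features.csv")

# Does engineer, song, or instrument explain the variance of the feature vector?
mv = MANOVA.from_formula("loudness + width + centroid ~ engineer + song + instrument",
                         data=df)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each factor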
A multiprocessor's memory consistency model imposes ordering constraints among loads, stores, atomic operations, and memory fences. Even for consistency models that relax ordering among loads and stores, ordering constraints still induce significant performance penalties due to atomic operations and memory ordering fences. Several prior proposals reduce the performance penalty of strongly ordered models using post-retirement speculation, but these designs either (1) maintain speculative state at a per-store granularity, causing storage requirements to grow proportionally to speculation depth, or (2) employ distributed global commit arbitration using unconventional chunk-based invalidation mechanisms. In this paper we propose InvisiFence, an approach for implementing memory ordering based on post-retirement speculation that avoids these concerns. InvisiFence leverages minimalistic mechanisms for post-retirement speculation proposed in other contexts to (1) track speculative state efficiently at block-granularity with dedicated storage requirements independent of speculation depth, (2) provide fast commit by avoiding explicit commit arbitration, and (3) operate under a conventional invalidation-based cache coherence protocol. InvisiFence supports both modes of operation found in prior work: speculating only when necessary to minimize the risk of rollback-inducing violations or speculating continuously to decouple consistency enforcement from the processor core. Overall, InvisiFence requires approximately one kilobyte of additional state to transform a conventional multiprocessor into one that provides performance-transparent memory ordering, fences, and atomic operations.
['Colin Blundell', 'Milo M. K. Martin', 'Thomas F. Wenisch']
InvisiFence: performance-transparent memory ordering in conventional multiprocessors
242,624
From operational definitions to abstract semantics
['S. Purushothaman', 'Jill Seaman']
From operational definitions to abstract semantics
69,786
The discrete multitone modulation (DMT) systems have been widely used in various applications. The DMT system can be considered as a dual of a subband coder, obtained by using the synthesis bank as the transmitter and analysis bank as the receiver. In designing optimal subband coders, the objective is to minimize output quantization noise, whereas in the problem of designing optimal DMT system, the objective function to be minimized is the transmitted power. We show that the design of optimal DMT systems can be formulated as a hypothetical design problem of optimal subband coders. The solution of optimal DMT system can be obtained using existing design methods for optimal biorthogonal subband coders.
['Yuan-Pei Lin', 'P. P. Vaidyanathan', 'See-May Phoong']
On the duality of optimal DMT systems and biorthogonal subband coders
3,198
MAC Precomputation with Applications to Secure Memory.
['Juan A. Garay', 'Vladimir Kolesnikov', 'Rae McLellan']
MAC Precomputation with Applications to Secure Memory.
789,032
Abstract: Question answering tasks have shown remarkable progress with distributed vector representation. In this paper, we investigate the recently proposed Facebook bAbI tasks which consist of twenty different categories of questions that require complex reasoning. Because the previous work on bAbI are all end-to-end models, errors could come from either an imperfect understanding of semantics or in certain steps of the reasoning. For clearer analysis, we propose two vector space models inspired by Tensor Product Representation (TPR) to perform knowledge encoding and logical reasoning based on common-sense inference. They together achieve near-perfect accuracy on all categories including positional reasoning and path finding that have proved difficult for most of the previous approaches. We hypothesize that the difficulties in these categories are due to the multi-relations in contrast to uni-relational characteristic of other categories. Our exploration sheds light on designing more sophisticated dataset and moving one step toward integrating transparent and interpretable formalism of TPR into existing learning paradigms.
['Moontae Lee', 'Xiaodong He', 'Wen-tau Yih', 'Jianfeng Gao', 'Li Deng', 'Paul Smolensky']
Reasoning in Vector Space: An Exploratory Study of Question Answering
550,024
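A small sketch of the Tensor Product Representation binding/unbinding primitive that the Lee et al. record above builds on; the roles, fillers, and dimensionality here are invented for illustration, and the paper's actual models add inference machinery on top.

import numpy as np

rng = np.random.default_rng(0)
d = 64
# Orthonormal role vectors make unbinding exact.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
role_agent, role_location = Q[:, 0], Q[:, 1]
john, kitchen = rng.standard_normal(d), rng.standard_normal(d)

# Bind each filler to its role with an outer product and superpose the bindings.
T = np.outer(john, role_agent) + np.outer(kitchen, role_location)

# Unbind by contracting with a role vector: T @ role recovers that role's filler.
assert np.allclose(T @ role_agent, john)
assert np.allclose(T @ role_location, kitchen)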
An iterative algorithm for the design of multichannel cosine-modulated quadrature mirror filter (QMF) banks with near-perfect reconstruction is proposed. The objective function is formulated as a quadratic function in each step whose minimum point can be obtained using a closed-form solution. This approach has high design efficiency and leads to filter banks with high stopband attenuation and low aliasing and amplitude distortions. The proposed approach is then extended to the design of multichannel cosine-modulated QMF banks with low reconstruction delays, which are often required, especially in real-time applications. Several design examples are included to demonstrate the proposed algorithms, and some comparisons are made with existing designs.
['Hua Xu', 'Wu-Sheng Lu', 'A. Antoniou']
Efficient iterative design method for cosine-modulated QMF banks
263,762
We built a multimodal biofeedback system integrated with low-cost sensors and applied it to support meditation training, providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography, heart rate variability, and eye tracking. The first two biosignals are used to assess mental stress during meditation, and eye tracking is used to detect the intervals in which the user is engaged in meditation by monitoring the open/closed state of the eyes.
['Wataru Hashiguchi', 'Junya Morita', 'Takatsugu Hirayama', 'Kenji Mase', 'Kazunori Yamada', 'Mayu Yokoya']
Multimodal biofeedback system integrating low-cost easy sensing devices
932,084
We propose Regularized Max Pooling (RMP) for image classification. RMP classifies an image (or an image region) by extracting feature vectors at multiple subwindows at multiple locations and scales. Unlike Spatial Pyramid Matching, where the subwindows are defined purely based on geometric correspondence, RMP accounts for the deformation of discriminative parts. The amount of deformation and the discriminative ability of multiple parts are jointly learned during training. RMP outperforms the state of the art by a wide margin on the challenging PASCAL VOC2012 dataset for human action recognition in still images.
['Minh Hoai']
Regularized Max Pooling for Image Categorization
493,664
A multivariable laboratory process that consists of four interconnected water tanks is presented. The linearized dynamics of the system have a multivariable zero that is possible to move along the real axis by changing a valve. The zero can be placed in both the left and the right half-plane. In this way the quadruple-tank process is ideal for illustrating many concepts in multivariable control, particularly performance limitations due to multivariable right half-plane zeros. The location and the direction of the zero have an appealing physical interpretation. Accurate models are derived from both physical and experimental data and decentralized control is demonstrated on the process.
['Karl Henrik Johansson']
The quadruple-tank process: a multivariable laboratory process with an adjustable zero
326,065
Given the characteristics of spaceflight missions in the leading spacefaring countries such as the US and Russia, the data processing of telemetry position-control instructions has been hard to assess, and the utilization rate of the information has been low. In view of these problems, a real-time processing method for position-control instructions based on fuzzy set theory is presented in this paper. The results of the analysis show that all acquired outburst information can be fully utilized and that more satisfactory processing results for the telemetry position instructions can be obtained.
['Guo-bao Wang', 'Jiuming Lv', 'Hong Chen', 'Ruiming Jia']
Real-time processing method of telemetry position controlling instructions with fuzzy set theory
115,425
Background: Currently, most genome annotation is curated by centralized groups with limited resources. Efforts to share annotations transparently among multiple groups have not yet been satisfactory.
['Robin D. Dowell', 'R.M. Jokerst', 'Allen Day', 'Sean R. Eddy', 'Lincoln Stein']
The distributed annotation system.
405,325
This paper deals, for the first time, with an analysis of the localization capabilities of weakly supervised categorization systems. Most existing categorization approaches have been tested on databases which (a) either show the object(s) of interest in a very prominent way, so that their localization can hardly be judged from these experiments, or (b) at least used a supervised learning procedure, which forces the system to learn only object-relevant data. These approaches cannot be directly compared to a nearly unsupervised method. The main contribution of our paper is thus twofold: First, we have set up a new database which is sufficiently complex, balanced with respect to background, and includes localization ground truth. Second, we show how our successful approach for generic object recognition [14] can be extended to perform localization, too. To analyze its localization potential, we develop localization measures which focus on approaches based on Boosting [5]. Our experiments show that localization depends on the object category, as well as on the type of the local descriptor.
['Andreas Opelt', 'Axel Pinz']
Object localization with boosting and weak supervision for generic object recognition
474,242
A new approach of backward coding of wavelet trees (BCWT) is presented. Contrary to the common "forward" coding of wavelet trees from the highest level (lowest resolution), the new approach starts coding from the lowest level and goes backward by building a map of maximum quantization levels of descendants. BCWT eliminates several major bottlenecks of existing wavelet-tree-based codecs, namely tree-scanning, bitplane coding and dynamic lists management. Compared to SPIHT, BCWT encodes and decodes up to eight times faster without sacrificing PSNR. At the same time, BCWT provides desirable features such as low complexity, low memory usage, and resolution scalability.
['Jiangling Guo', 'Sunanda Mitra', 'Brian Nutter', 'Tanja Karp']
A fast and low complexity image codec based on backward coding of wavelet trees
217,462
Industry and academia alike are increasingly becoming aware of the fact that innovation does not take place in isolated cells or functions within the firm. In recent years the term open innovation has emphasized the importance of internal and external collaboration in order to increase the competitiveness of companies. Although the idea of involving internal and external actors in the new product development (NPD) process is not new, knowledge about the benefits and pitfalls is still limited. This paper aims to contribute to refining the concept of open innovation by investigating how strategic priorities influence the degree of external and internal involvement in the NPD process, moderated by contextual factors. Results based on analyses of 584 companies from the International Manufacturing Strategy Survey (IMSS) 2005 indicate that suppliers are heavily involved in the NPD process in firms in B2C markets aiming at increasing the innovation volume. For B2B companies the reverse picture emerges. However, when the aim is to increase the radicality of new products, suppliers and customers are heavily involved for firms in B2B markets. Further, market uncertainty, and to some extent company size, seems to moderate the relationships between strategy and involvement considerably.
['Bjørge Timenes Laugen', 'Astrid Heidemann Lassen']
Collaborative Innovation: Internal and External Involvement in New Product Development
600,304
We propose a non-homogeneous conditional random field (CRF) built over an adjacency graph of superpixels for contextual region grouping. Our model includes spatially dependent potentials that capture contextual interactions of the data as well as the labels. Both superpixels and segments are described with local statistics which take into account their contexts in the image. This results in the non-homogeneity of the fields, which improves the region grouping process for natural images. In our energy formulation, similarity is measured by a likelihood ratio learned from human-labeled ground truth. Inference is performed using a cluster sampling method, the Swendsen-Wang cut algorithm. Results are shown on various natural images.
['Olfa Besbes', 'Nozha Boujemaa', 'Ziad Belhadj']
Non-homogeneous Conditional Random Fields for Contextual Image Segmentation
28,173
Analysis of User-generated Content for Improving YouTube Video Recommendation
['Michele Galli', 'Davide Feltoni Gurini', 'Fabio Gasparetti', 'Alessandro Micarelli', 'Giuseppe Sansonetti']
Analysis of User-generated Content for Improving YouTube Video Recommendation
754,468
In this paper, we focus on a linearized backward Euler scheme with a Galerkin finite element approximation for the time-dependent nonlinear Schrodinger equation. By splitting an error estimate into two parts, one from the spatial discretization and the other from the temporal discretization, we obtain unconditionally optimal error estimates of the fully-discrete backward Euler method for a generalized nonlinear Schrodinger equation. Numerical results are provided to support our theoretical analysis and efficiency of this method.
['Wentao Cai', 'Jian Li', 'Zhangxin Chen']
Unconditional convergence and optimal error estimates of the Euler semi-implicit scheme for a generalized nonlinear Schrödinger equation
723,131
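A toy sketch of the linearized backward Euler idea in the Cai, Li and Chen record above, with the cubic term frozen at the previous time level. Note the paper uses a Galerkin finite element discretization in space; it is replaced here by second-order finite differences for brevity, and the initial profile is invented.

import numpy as np

def linearized_backward_euler_step(u, dt, dx):
    """One step for i*u_t + u_xx + |u|^2 u = 0 with |u|^2 frozen at the old level:
    (i/dt I + D2 + diag(|u^n|^2)) u^{n+1} = (i/dt) u^n, so each step is one linear solve."""
    n = u.size
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dx**2       # Dirichlet boundaries
    A = (1j / dt) * np.eye(n) + D2 + np.diag(np.abs(u) ** 2)
    return np.linalg.solve(A, (1j / dt) * u)

x = np.linspace(-10, 10, 201)[1:-1]        # interior nodes
u = np.exp(-x**2) * np.exp(1j * x)         # hypothetical initial profile
for _ in range(100):
    u = linearized_backward_euler_step(u, dt=1e-3, dx=x[1] - x[0])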
Summary Objectives: The objectives of this study were to assess patients' willingness to use e-mail to obtain specific test results, assess their expectations regarding response times, and identify any demographic trends. Methods: A cross-sectional survey of primary care patients was conducted in 19 clinics of a large multi-specialty group practice associated with a 186,000-member Health Maintenance Organization. The outcome measures were the proportion of patients with current e-mail access, their willingness to use it for selected general clinical services and to obtain specific test results, and their expectations of timeliness of response. Results: The majority of patients (58.3%) reported having current e-mail access and indicated strong willingness to use it for communication. However, only 5.8% reported having ever used it to communicate with their physician. Patients were most willing to use e-mail to obtain cholesterol and blood sugar test results, but less willing to use it to obtain brain CT scan results. Patients' expectations of timeliness were generally very high, particularly for high-stakes tests such as brain CT scans. Significant differences in willingness and expectations were found by age group, education, and income. Conclusions: These findings indicate that most patients are willing to use e-mail to communicate with their primary care providers, even for specific test results, and that patients will hold providers to high standards of timeliness regarding responses. The implication is that the integration of e-mail communications into primary care ought to assure prompt and accurate patient access to a plethora of specific clinical services.
['Glen R. Couchman', 'Samuel N. Forjuoh', 'Terry G. Rascoe', 'Michael Reiß', 'Bruce Koehler', 'Kimberly L. van Walsum']
E-mail communications in primary care: what are patients' expectations for specific test results?
356,624
EHR Use and Diabetes Care: Does primary care team unity moderate improvements in care quality?
['Ilana Graetz', 'Mary Reed', 'Thomas G. Rundall', 'Stephen M. Shortell', 'John Hsu']
EHR Use and Diabetes Care: Does primary care team unity moderate improvements in care quality?
737,955
This paper presents the modified multiband excitation (MMBE) model for speech coding. In many MBE coders, speech quality is degraded when incorrect voicing decisions are made, particularly for high-pitched female speakers. The MMBE addresses this issue with a modified voiced/unvoiced decision algorithm and a more robust pitch estimate. The listening quality of speech produced using the MMBE model is superior to the FS-1016 CELP coder and is at least comparable with the 2400 bps MELP coder chosen as the new 2400 bps Federal Standard.
['Michele L. Jamrozik', 'John N. Gowdy']
Modified multiband excitation model at 2400 bps
195,891
Grammar-Driven Workload Generation for Efficient Evaluation of Signature-Based Network Intrusion Detection Systems
['Min Shao', 'Min Sik Kim', 'Victor C. Valgenti', 'Jungkeun Park']
Grammar-Driven Workload Generation for Efficient Evaluation of Signature-Based Network Intrusion Detection Systems
839,511
The electricity industry has embarked on an unprecedented process of modernization and transformation to meet the emerging needs of a highly reliable, efficient and sustainable society. Although DOE has defined a coherent set of broad objectives for the smart grid, there are growing concerns that the current architecture of the electricity industry and the associated infrastructure controls may not be able to support these objectives. A broader vision of what will be possible in the future grid is greatly needed, as is the definition of a formal control architecture. This paper proposes some of the core elements that can ensure that smart grid related efforts, such as regulatory policy, market structure, control infrastructure and power technologies, realize the smart grid objectives in a measurable manner. We propose a distributed, prosumer-based grid control architecture. This architecture is very flexible and ultimately enables a paradigm that can foster innovation while supporting the current grid and a large number of emerging requirements.
['Santiago Grijalva', 'Mitch Costley', 'Nathan Ainsworth']
Prosumer-based control architecture for the future electricity grid
237,572
Large projects, especially those planned and managed by government agencies, often incur substantial cost overruns. The tolerance, particularly on the part of members of Congress, for these cost overruns has decreased, thus increasing the need for accurate, defensible cost estimates. Important aspects of creating responsible cost estimates are accounting for the uncertainties in these estimates, expressing the estimates clearly, and communicating them to decision makers. Our method for estimating cost uncertainties can be used at all stages of a project. It combines the principles of probabilistic risk analysis with procedures for expert elicitation to incorporate uncertainties and extraordinary events in cost estimates. The Department of Energy implemented this process to select a new tritium supply source. During this implementation, we identified four key issues in modeling cost risks: how to consider correlations among cost components, how to aggregate assessments of multiple experts, how to manage communication and information sharing among experts, and what is an appropriate discount rate for cost estimates.
['Robin L. Dillon', 'Richard S. John', 'Detlof von Winterfeldt']
Assessment of Cost Uncertainties for Large Technology Projects: A Methodology and an Application
219,401
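A minimal sketch of aggregating correlated cost components by Monte Carlo, one ingredient of the methodology in the Dillon, John and von Winterfeldt record above; the component costs, spreads, and correlation matrix are invented for illustration, not taken from the tritium-supply study.

import numpy as np

rng = np.random.default_rng(1)
mu = np.log([100.0, 250.0, 60.0])     # hypothetical median component costs ($M)
sigma = np.array([0.3, 0.5, 0.4])     # lognormal spread of each component
corr = np.array([[1.0, 0.6, 0.2],     # assumed correlations among components
                 [0.6, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])

# Gaussian copula: correlated standard normals drive lognormal marginal costs.
z = rng.standard_normal((100_000, 3)) @ np.linalg.cholesky(corr).T
total = np.exp(mu + sigma * z).sum(axis=1)

print("mean %.0f  P10 %.0f  P90 %.0f" % (
    total.mean(), np.percentile(total, 10), np.percentile(total, 90)))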
Accent type and phrase boundary estimation using acoustic and language models for automatic prosodic labeling.
['Tomoki Koriyama', 'Hiroshi Suzuki', 'Takashi Nose', 'Takahiro Shinozaki', 'Takao Kobayashi']
Accent type and phrase boundary estimation using acoustic and language models for automatic prosodic labeling.
750,850
This paper addresses Wireless Networks for Metropolitan Transports (WNMT), a class of moving or vehicle-to-infrastructure networks that may be used by public transportation systems to provide broadband access to their vehicles, stops, and passengers. We propose the WiMetroNet, a WNMT that is auto-configurable and scalable. It is based on a new ad hoc routing protocol, the Wireless Metropolitan Routing Protocol (WMRP), which, coupled with data plane optimizations, was designed to be scalable to thousands of nodes.
['Manuel Ricardo', 'Gustavo Carneiro', 'Pedro Fortuna', 'Filipe Abrantes', 'Jaime Dias']
WiMetroNet A Scalable Wireless Network for Metropolitan Transports
97,714
We present methods for developing high-performance computational kernels and dense linear algebra routines. The microarchitecture of AMD processors is analyzed with the goal of achieving peak computational rates. Approaches for implementing matrix multiplication algorithms are suggested for hierarchical-memory computers. Block versions of matrix multiplication and LU-decomposition algorithms are considered. The obtained performance results for AMD processors are discussed in comparison with other approaches.
['O. A. Bessonov', 'Dominique Fougère', 'B. Roux']
Development of efficient computational kernels and linear algebra routines for out-of-order superscalar processors
486,416
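A sketch of the cache-blocking idea behind the kernels in the Bessonov, Fougère and Roux record above, in NumPy rather than the tuned assembly-level kernels the paper develops; the block size is a placeholder to be tuned per cache level.

import numpy as np

def blocked_matmul(A, B, bs=64):
    """C = A @ B computed block by block so each working set stays cache-resident."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):            # accumulate over the inner dimension
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(blocked_matmul(A, B), A @ B)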
This paper notes that the autonomy of 'boundary spanning' units is becoming critically important as organizations continue to decentralize. This dynamic business change and the consequent new configurations are believed to have a significant and pervasive impact upon the role of IS/IT. The formal 'technology push' common to these circumstances is clearly inappropriate given the emergence of these increasingly diverse and complex structures. IS/IT-leveraged advantages may be enabled through more emphasis upon informal managerial coalitions and interdependent groups, where successful business transformation incorporates behavioral adjustments. This paper presents an analysis of three case studies in this respect which demonstrate that the competitive environment encompasses multiple dimensions of change. This will necessitate attention to improving the organizational culture through intense information sharing and communication, enhanced through IS/IT implementation.
['Gurpreet Dhillon', 'Ray Hackney']
IS/IT and dynamic business change
527,578
Unlike high-end smartphones, which are equipped with both accelerometers and gyroscopes, low-end models are normally equipped with accelerometers only. This paper presents a detection method that only utilizes the data collected from the accelerometers in smartphones to detect fall events. Moreover, compared to other methods that require smartphones to be placed on the waist, the proposed method additionally allows them to be in the pockets of clothes and pants. According to the experimental results, the proposed method can detect fall events effectively without generating false alarms.
['Shang-Lin Hsieh', 'Cheng-Ta Yang']
Detecting falls with low-end smartphones
783,378
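A toy sketch of accelerometer-only fall detection in the spirit of the Hsieh and Yang record above: a near-free-fall dip followed shortly by an impact spike. The thresholds and window are illustrative guesses, not the paper's tuned values.

import numpy as np

def detect_fall(accel_xyz, fs=50, free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """accel_xyz: (n, 3) accelerometer samples in g at sampling rate fs (Hz)."""
    mag = np.linalg.norm(accel_xyz, axis=1)          # acceleration magnitude
    win = int(window_s * fs)
    for i in np.flatnonzero(mag < free_fall_g):      # candidate free-fall dips
        if (mag[i:i + win] > impact_g).any():        # impact soon after the dip
            return True
    return False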
Developed by Paul Kocher, Joshua Jaffe, and Benjamin Jun in 1999, Differential Power Analysis (DPA) represents a unique and powerful cryptanalysis technique. Insight into the encryption and decryption behavior of a cryptographic device can be determined by examining its electrical power signature. This paper describes a novel approach for implementation of the AES algorithm which provides significantly improved strength against differential power analysis with minimal additional hardware overhead. Our method is based on randomization in composite field arithmetic; it entails an area penalty of only 7% while not decreasing the working frequency, not altering the algorithm, and keeping perfect compatibility with the published standard. The efficiency of the proposed technique was verified by practical results obtained from a real implementation on a Xilinx Spartan-II FPGA.
['Massoud Masoumi', 'Sohail Mohammadi']
A new and efficient approach to protect AES against differential power analysis
52,673
Digital watermarking for relational databases emerged as a candidate solution to provide copyright protection, tamper detection, traitor tracing, and maintenance of the integrity of relational data. Many watermarking techniques have been proposed in the literature to address these purposes. In this paper, we survey the current state-of-the-art and classify the techniques according to their intent, the way they express the watermark, the cover type, the granularity level, and their verifiability.
['Raju Halder', 'Shantanu Pal', 'Agostino Cortesi']
Watermarking Techniques for Relational Databases: Survey, Classification and Comparison
124,182
Frequent subgraph mining (FSM) is an important task for exploratory data analysis on graph data. Over the years, many algorithms have been proposed to solve this task. These algorithms assume that the data structure of the mining task is small enough to fit in the main memory of a computer. However, as the real-world graph data grows, both in size and quantity, such an assumption does not hold any longer. To overcome this, some graph database-centric methods have been proposed in recent years for solving FSM; however, a distributed solution using MapReduce paradigm has not been explored extensively. Since MapReduce is becoming the de-facto paradigm for computation on massive data, an efficient FSM algorithm on this paradigm is of huge demand. In this work, we propose a frequent subgraph mining algorithm called FSM-H which uses an iterative MapReduce based framework. FSM-H is complete as it returns all the frequent subgraphs for a given user-defined support, and it is efficient as it applies all the optimizations that the latest FSM algorithms adopt. Our experiments with real life and large synthetic datasets validate the effectiveness of FSM-H for mining frequent subgraphs from large graph datasets. The source code of FSM-H is available from www.cs.iupui.edu/~alhasan/software/
['Mansurul Bhuiyan', 'Mohammad Al Hasan']
An Iterative MapReduce Based Frequent Subgraph Mining Algorithm
172,162
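A toy, in-memory imitation of the first iteration of an iterative MapReduce FSM job like the one in the Bhuiyan and Al Hasan record above: mappers emit candidate single-edge patterns per graph, reducers count support. FSM-H itself runs on Hadoop and iteratively extends the survivors; this sketch, with an invented toy graph database, shows only the base case.

from collections import defaultdict

# Toy graph database: each graph is a set of (label_u, edge_label, label_v) edges.
graphs = [
    {("C", "-", "O"), ("C", "-", "C")},
    {("C", "-", "O"), ("C", "=", "O")},
    {("C", "-", "C"), ("C", "-", "O")},
]
min_support = 2

# Map: each graph emits (pattern, 1) once per distinct single-edge pattern.
mapped = [(edge, 1) for g in graphs for edge in g]

# Reduce: sum support per pattern and keep the frequent ones.
support = defaultdict(int)
for edge, one in mapped:
    support[edge] += one
frequent = {e: c for e, c in support.items() if c >= min_support}
print(frequent)   # these seed the next iteration (size-2 candidate subgraphs)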
We study the problem of leader selection in leader-follower multi-agent systems that are subject to stochastic disturbances. This problem arises in applications such as vehicle formation control, distributed clock synchronization, and distributed localization in sensor networks. We pose a new leader selection problem called the in-network leader selection problem. Initially, an arbitrary node is selected to be a leader, and in all consequent steps the network must have exactly one leader. The agents must collaborate to find the leader that minimizes the variance of the deviation from the desired trajectory, and they must do so within the network using only communication between neighbors. To develop a solution for this problem, we first show a connection between the leader selection problem and a class of discrete facility location problems. We then leverage a previously proposed self-stabilizing facility location algorithm to develop a self-stabilizing in-network leader selection algorithm for acyclic graphs.
['Stacy Patterson']
In-network leader selection for acyclic graphs
98,004
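A brute-force sketch of the objective in the Patterson record above: for noisy leader-follower consensus, the total steady-state deviation variance is (up to a noise constant) the trace of the inverse grounded Laplacian, so the best leader minimizes that trace. The paper's in-network algorithm finds this distributively; here we simply scan all nodes of a small path graph.

import numpy as np

def leader_cost(Lap, leader):
    """Trace of the inverse Laplacian with the leader's row/column removed."""
    keep = [i for i in range(Lap.shape[0]) if i != leader]
    return np.trace(np.linalg.inv(Lap[np.ix_(keep, keep)]))

# Path graph on 5 nodes (acyclic, as in the paper's setting).
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
best = min(range(5), key=lambda v: leader_cost(Lap, v))
print("best leader:", best)   # the middle node minimizes total variance on a path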
The reliable detection of unused spectrum while meeting a required probability of detecting primary user activity is a key functionality of cognitive radio systems. In cooperative spectrum sensing, the detection results of multiple cognitive radios are combined to a global result with high reliability. In order to transmit the local decisions a signaling channel is required. It can be realized by ultra-wideband underlay communications in parallel to primary user activity. Due to strict limitations on the transmission power for ultra-wideband communications, it is crucial to carefully allocate power levels to minimize channel errors, which could corrupt the transmission of local decisions. In this paper, an approach for the allocation of transmission power levels for the signaling channel is proposed, which aims to maximize the global probability of detecting spectrum holes while detecting primary user activity with a given reliability. Numerical results based on real spectrum measurements show the feasibility and illustrate the performance of the approach.
['Daniel Bielefeld', 'Gernot Fabeck', 'Milan Zivkovic', 'Rudolf Mathar']
Energy Efficient Ultra-Wideband Signaling for Cooperative Spectrum Sensing in Cognitive Radio
148,337
Proof of retrievability (POR) is a technique for ensuring the integrity of data in outsourced storage services. In this paper, we address the construction of a POR protocol on the standard model of interactive proof systems. We propose the first interactive POR scheme to prevent fraudulence of the prover and leakage of the verified data. We also give full proofs of the soundness and zero-knowledge properties by constructing a polynomial-time rewindable knowledge extractor under the computational Diffie-Hellman assumption. In particular, the verification process of this scheme requires a low, constant amount of overhead, which minimizes communication complexity.
['Yan Zhu', 'Huaixi Wang', 'Zexing Hu', 'Gail Joon Ahn', 'Hongxin Hu']
Zero-knowledge proofs of retrievability
631,743
In array processing, compressive sensing (CS) has recently emerged as a new sampling paradigm for estimating the direction-of-arrival (DOA) from a relatively small number of observations. The paper derives an analytical expression for the mutual coherence of the CS dictionary for a single-input, multiple-output (SIMO) radar to advocate the consideration of incoherency issues by researchers exploring potential gains with CS. We couple time reversal (TR) with the CS/SIMO radar to illustrate a possible increase in the incoherency of the CS dictionary with time reversal.
['Mohammad H. S. Sajjadieh', 'Amir Asif']
Compressive sensing in time reversal radars: Incoherency analysis
889,599
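A small sketch of the mutual coherence quantity analyzed in the Sajjadieh and Asif record above, computed numerically for a hypothetical half-wavelength uniform linear array steering dictionary (the paper derives it analytically for the SIMO radar dictionary).

import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.conj().T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

m = 8                                           # array elements
angles = np.deg2rad(np.arange(-90, 91, 2))      # DOA grid
D = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))
print("mutual coherence: %.3f" % mutual_coherence(D))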
In this paper, a wire-driven weight-compensation mechanism is proposed. The mechanism consists of a parallelogram linkage with an extended portion carrying a wired double pulley. It is a lighter solution compared to others since no springs are utilized. Below, a detailed description of this new weight-compensation mechanism is given, together with the performance of Float Arm V equipped with it.
['Shigeo Hirose', 'Tomoyuki Ishii', 'Atsuo Haishi']
Float arm V: hyper-redundant manipulator with wire-driven weight-compensation mechanism
182,491
Software Distributed Shared Memory (SDSM) systems usually have the large coherence granularity that is imposed by the underlying virtual memory page size. To alleviate the coherence overheads such as the network traffic to preserve the coherence, or page misses caused by false sharing, relaxed memory models are widely accepted for the SDSM systems. In the relaxed memory models, when a shared page is modified, invalidation requests to other copies are deferred until a synchronization point and, in addition, the requests are transferred only to the processor acquiring the synchronization variable. On a barrier, however, the invalidation requests must be transferred to all the processors that participate in the barrier. As a result, it tends to induce heavy network traffic, and also may lead to useless page misses by false sharing. In this paper, we propose a method to alleviate the coherence overheads of barrier synchronization in shared-memory parallel programs. It performs static analysis to examine data dependency between processors across global barriers, and then inserts special primitives into the program in order to exploit the dependency information at run time. The static analysis finds out code regions where a processor modifies data that will be used only by some of the other processors. At run time, the coherence messages for the data are transferred only to the processors with the help of the inserted primitives. In particular, if the modified data will not be used by any other processors, the primitives enforce that the coherence messages are delivered only to master processor when the parallel execution of the program is finished. We evaluated the performance of this method in a 16-node software DSM system supporting AURC protocol. Program-driven simulation was performed with five benchmark programs: Jacobi, Red-black SOR, Expl, LU, and Water-nsquared. For the applications, the experimental results show that our method can reduce the coherence messages by up to about 98%, and also can improve the execution time by up to about 26%.
['Jae Bum Lee', 'Chu Shik Jhon']
Reducing Coherence Overhead of Barrier Synchronization in Software DSMs
235,384
In this paper, we investigate geometrical properties of the rank metric space and covering properties of rank metric codes. We first establish an analytical expression for the intersection of two balls with rank radii, and then derive an upper bound on the volume of the union of multiple balls with rank radii. Using these geometrical properties, we derive both upper and lower bounds on the minimum cardinality of a code with a given rank covering radius. The geometrical properties and bounds proposed in this paper are significant to the design, decoding, and performance analysis of rank metric codes.
['Maximilien Gadouleau', 'Zhiyuan Yan']
Bounds on covering codes with the rank metric
238,578
Mobile communications and the Internet have become among the most important services nowadays. Most mobile devices, such as cell phones, PDAs, and laptop computers, come with WiFi, GPS, and Bluetooth as standard built-in equipment, and people have become used to mobile services in their daily lives. One interesting service is the WiFi-based indoor positioning system (IPS), which has attracted many researchers. Many IPS studies determine the user location by scene analysis. This method needs to collect the RSSI of APs at the place of interest beforehand to build the building's WiFi radio fingerprint database, and this task is time-consuming. Meanwhile, it also needs to resample frequently in order to maintain the accuracy of the positioning result. In this paper, we design and test a fast setup algorithm for collecting this sampling information. An Android smartphone and its built-in motion sensors are used to help collect the APs' RSSI. We use the motion sensors to detect the pace while walking and collect the APs' RSSI at every step. The method completes the sampling work within a single walking session, which is much shorter than the traditional method. This paper also compares the fast setup method with the traditional method in terms of positioning accuracy. Experiments show no significant difference in positioning accuracy between them.
['Hung Huan Liu', 'Chung-Wei Liao', 'Wei-Hsiang Lo']
The fast collection of radio fingerprint for WiFi-based indoor positioning system
134,278
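A toy sketch of the per-step sampling idea in the Liu, Liao and Lo record above: detect steps from the accelerometer magnitude and trigger one RSSI scan per step. The threshold, refractory period, and the scan call are illustrative placeholders (scan_rssi() is a hypothetical stand-in, not an Android API).

import numpy as np

def step_indices(mag, fs=50, thresh=1.2, refractory_s=0.3):
    """Upward threshold crossings of the acceleration magnitude (in g),
    with a refractory gap so one stride is not counted twice."""
    steps, last, gap = [], -10**9, int(refractory_s * fs)
    for i in range(1, len(mag)):
        if mag[i - 1] < thresh <= mag[i] and i - last >= gap:
            steps.append(i)
            last = i
    return steps

# One fingerprint per detected step while walking the survey path:
# fingerprints = [scan_rssi() for _ in step_indices(mag)]   # scan_rssi(): hypothetical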
Brain connections formed during the nurturing period of an infant's development are fundamental for survival. In this paper, elementary brain (neural interconnection pattern) evolution is simulated for various individuals in two similar artificial species. The simulation yields information about the learning, performance and brain structure of the population over time. Concepts from Categorical Neural Semantic Theory (CNST) are used to analyze the development of neural structure as evolution progresses. FlatWorld, a virtual two dimensional environment, is used to test survival skills of simple embodied neural agents. A combination of Genetic Algorithms (GA) and Neural Networks (NN) is applied within FlatWorld to study the relationship between the nurturing of the infant individuals during their developmental period with their subsequent behavior in the environment and the evolution of the associated brain structures. The results show evidence that during evolution, learning performance increases when brain structures required from CNST are formed, and that survival skills increase over evolutionary time-scales due to the formation of these structures.
['Martha Perez-Arriaga', 'Thomas P. Caudell']
A study of brain structure evolution in simple embodied neural agents using genetic algorithms and category theory
160,692
This article presents a formal dialogue game for adjudication dialogues. Existing AI & law models of legal dialogues and argumentation-theoretic models of persuasion are extended with a neutral third party, to give a more realistic account of the adjudicator's role in legal procedures. The main feature of the model is a division into an argumentation phase, where the adversaries plead their case and the adjudicator has a largely mediating role, and a decision phase, where the adjudicator decides the dispute on the basis of the claims, arguments and evidence put forward in the argumentation phase. The model allows for explicit decisions on admissibility of evidence and burden of proof by the adjudicator in the argumentation phase. Adjudication is modelled as putting forward arguments, in particular undercutting and priority arguments, in the decision phase. The model reconciles logical aspects of burden of proof induced by the defeasible nature of arguments with dialogical aspects of burden of proof as something that can be allocated by explicit decisions on legal grounds.
['Hendrik Prakken']
A formal model of adjudication dialogues
254,861
Early parallel architectures were shared memory systems (UMA, NUMA), which had the disadvantage of the shared memory bottleneck that limited the scalability of the system. In contrast, distributed memory architectures with message passing (NORMAs) provided any desired scalability, however at the cost of a substantial communication latency. The latency could be reduced by custom communication hardware (examples: SUPRENUM, MANNA); yet since there was still a software routine involved, the remaining latency was on the order of microseconds. Therefore, and because of the simpler programming model of shared memory, it became the trend of the nineties to return to UMAs and NUMAs, employing powerful communication hardware to minimize the remote memory access time.
['Ulrich Bruening', 'Wolfgang K. Giloi']
Future building blocks for parallel architectures
125,288
A flight tested Right-of-Way (RoW) compliant algorithm has been developed as part of ongoing research efforts in the development of Airborne Sense and Avoid (ABSAA) technologies by the University of North Dakota Unmanned Aircraft Systems Engineering (UASE) team. This paper presents the results of development, implementation, and flight testing of a RoW algorithm during the summer of 2011 in a restricted airspace using a combination of varying intercept angles for the cases of a single intruder and dual intruders. These tests yielded positive results demonstrating the RoW compliance to enhance the UASE ABSAA system that was developed under a series of projects funded by the Department of Defense (DoD). The work presented implements the future NextGen National Airspace System (NAS) technologies and has the ability to incorporate multiple sensor streams into the decision space. An integral part of the NextGen NAS is the FAA’s final rule regarding “Automatic Dependent Surveillance-Broadcast (ADS-B) Out Performance Requirements to Support Air Traffic Control (ATC) Service,” which will propel forward the transition from a radar based system to a satellite driven system. This mandated technology allows for the development of a robust ABSAA system for Unmanned Aircraft Systems (UAS).
['Kyle Foerster', 'Michael Mullins', 'Naima Kaabouch', 'William H. Semke']
Flight Testing of a Right-of-Way Compliant ADS-B-based Miniature Sense and Avoid System
707,723
The backtracking search algorithm (BSA) is a recently proposed evolutionary algorithm (EA) that has been used for solving optimisation problems. The structure of the algorithm is simple and has only a single control parameter that should be determined. To improve the convergence performance and extend its application domain, a new algorithm called the learning BSA (LBSA) is proposed in this paper. In this method, the globally best information of the current generation and historical information in the BSA are combined to renew individuals according to a random probability, and the remaining individuals have their positions renewed by learning knowledge from the best individual, the worst individual, and another random individual of the current generation. There are two main advantages of the algorithm. First, some individuals update their positions with the guidance of the best individual (the teacher), which makes the convergence faster, and second, learning from different individuals, especially when avoiding the worst individual, increases the diversity of the population. To test the performance of the LBSA, benchmark functions in CEC2005 and CEC2014 were tested, and the algorithm was also used to train artificial neural networks for chaotic time series prediction and nonlinear system modelling problems. To evaluate the performance of LBSA with some other EAs, several comparisons between LBSA and other classical algorithms were conducted. The results indicate that LBSA performs well with respect to other algorithms and improves the performance of BSA.
['Debao Chen', 'Feng Zou', 'Renquan Lu', 'Peng Wang']
Learning backtracking search optimisation algorithm and its application
904,983
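A rough sketch of the renewal logic the Chen et al. record above describes: with some probability an individual follows a BSA-style move guided by historical and global-best information; otherwise it learns from the best individual, moves away from the worst, and mixes in a random peer. The coefficients and the precise mixing rule here are illustrative, not the paper's.

import numpy as np

rng = np.random.default_rng(2)

def lbsa_renew(pop, old_pop, fitness, p_mix=0.5):
    """pop, old_pop: (n, d) current and historical populations; fitness is minimized."""
    n, _ = pop.shape
    best, worst = pop[np.argmin(fitness)], pop[np.argmax(fitness)]
    new = pop.copy()
    for i in range(n):
        r = rng.integers(n)
        if rng.random() < p_mix:
            # Historical information plus a pull toward the global best.
            new[i] = pop[i] + rng.random() * (old_pop[r] - pop[i]) \
                            + rng.random() * (best - pop[i])
        else:
            # Learn from the best, move away from the worst, mix in a random peer.
            new[i] = pop[i] + rng.random() * (best - pop[i]) \
                            + rng.random() * (pop[i] - worst) \
                            + rng.random() * (pop[r] - pop[i])
    return new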
A closed curve in the plane can be described in several ways. We show that a simple representation in terms of radius of curvature versus normal direction has certain advantages. In particular, convolutional filtering of the extended circular image leads to a closed curve. Similar filtering operations applied to some other representations of the curve do not guarantee that the result corresponds to a closed curve. In one case, where a closed curve is produced, it is smaller than the original. A description of a curve can be based on a sequence of smoothed versions of the curve. This is one reason why smoothing of closed curves is of interest.
['Berthold K. P. Horn', 'E. J. Weldon']
Filtering Closed Curves
484,555
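A numeric sketch of the representation in the Horn and Weldon record above: radius of curvature as a function of normal direction. A curve closes iff the first circular harmonic of that function vanishes, and circular convolution only scales each harmonic, so filtering preserves closure; the example curve is invented.

import numpy as np

n = 360
theta = 2 * np.pi * np.arange(n) / n

# Extended circular image of a convex closed curve: a unit circle plus a
# third-harmonic wobble (no first harmonic, so the closure condition holds).
r = 1.0 + 0.3 * np.cos(3 * theta)

# Smooth by circular convolution with a normalized box filter.
kernel = np.zeros(n); kernel[:15] = 1.0 / 15
r_smooth = np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(kernel)))

# Closure check: the first harmonic stays zero after filtering.
for rr in (r, r_smooth):
    print(abs(np.sum(rr * np.exp(1j * theta))) < 1e-8)   # True, True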
An architectural solution for designing a low-reference-spur PLL is proposed. A spur-frequency boosting block is inserted between the phase-frequency detector and the charge pump to boost the charge pump input frequency. Hence, the spur at the reference frequency is eliminated and is frequency-boosted to a higher frequency, f_B, at which the PLL gain is much lower, resulting in greater suppression. Quantitative analysis of the charge pump spurs is presented to clarify the different tradeoffs affecting the output spur level. The proposed technique breaks the classical trade-off between the different PLL parameters. It adds a degree of freedom in PLL design to reduce the reference spur level without reducing either the loop bandwidth or the voltage-controlled oscillator's gain (K_VCO). A 3.6 GHz PLL prototype is fabricated using UMC 90 nm digital CMOS technology. A -74 dBc reference-spur suppression is measured along with a (K_VCO/ω_ref) ratio of 16.67 and a (ω_GBW/ω_ref) ratio of 1/20. The proposed architecture provides additional spur suppression of 30 dB compared to a conventional PLL and, to the best of the authors' knowledge, this PLL provides the best normalized reference-spur rejection in the literature. The prototype occupies 0.063 mm^2.
['Mohamed M. Elsayed', 'Mohammed M. Abdul-Latif', 'Edgar Sánchez-Sinencio']
A Spur-Frequency-Boosting PLL With a −74 dBc Reference-Spur Suppression in 90 nm Digital CMOS
249,032
In this paper, we derive general asymptotic moment generating function (MGF) expressions of the GSC output signal-to-noise ratio (SNR) for generalized correlated fading channels, assuming a large average signal-to-noise ratio (ASNR). Based on the MGF result, the asymptotic diversity and combining gains for correlated-diversity GSC are derived. Our analytical results reveal that over correlated channels, when the channel covariance matrix is full rank, the diversity gain of GSC is equivalent to that of maximum ratio combining (MRC) with independent fading branches. The combining gains for different modulation formats and fading types in correlated channels are also derived. As is known, and analytically verified in this paper, for channels without line-of-sight (LoS) components, correlation generally degrades the GSC combining gain. However, we show that for Rician channels the LoS phase vector affects the performance, and a near-optimal LoS phase vector brings a larger combining gain than even independent fading channels.
['Yao Ma', 'Robert Schober', 'Subbarayan Pasupathy']
Asymptotic Gains of Generalized Selection Combining Over Correlated Fading Channels
367,214
The equivalence of leaf languages of tree adjoining grammars and monadic linear context-free grammars was shown about a decade ago. This paper presents a proof of the strong equivalence of these grammar formalisms. Non-strict tree adjoining grammars and monadic linear context-free grammars define the same class of tree languages. We also present a logical characterisation of this tree language class showing that a tree language is a member of this class iff it is the two-dimensional yield of an MSO-definable three-dimensional tree language.
['Stephan Kepser', 'James Rogers']
The Equivalence of Tree Adjoining Grammars and Monadic Linear Context-free Tree Grammars
976,242
Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue conventional methods tend to be biased toward the majority class with poor predictive accuracy for the minority class. These methods further deteriorate on small, imbalanced data that has a large degree of class overlap. In this study, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class-imbalance during prediction. Specifically, given a test sample, we first traverse it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. Then we generate from this subset several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue, but also allows the prediction to extrapolate to unseen data. The latter capability is crucial for achieving accurate prediction on small dataset with limited samples. The proposed imbalanced learning method can be applied to both classification and regression tasks at a wide range of imbalance levels. It significantly outperforms the state-of-the-art methods that do not possess an imbalance handling mechanism, and is found to perform comparably or even better than recent deep learning methods by using hand-crafted features only.
['Chen Huang', 'Chen Change Loy', 'Xiaoou Tang']
Discriminative Sparse Neighbor Approximation for Imbalanced Learning
629,754
Engineering Enzyme-Driven Dynamic Behaviour in Lipid Vesicles
['Ylenia Miele', 'Tamás Bánsági', 'Annette F. Taylor', 'Pasquale Stano', 'Federico Rossi']
Engineering Enzyme-Driven Dynamic Behaviour in Lipid Vesicles
826,119
Disaster evacuation studies are important but difficult or impossible to conduct in the real world. Evacuation simulation in a virtual world can be an important tool to obtain data on the escape and choice behavior of people. However, to obtain accurate "realistic" data, the engagement of participants is a key challenge. Therefore, we describe the making of an engaging evacuation scenario called "Everscape", and highlight the collaborative effort of researchers from the informatics and transportation fields. Further, we describe encouraging results from a pilot study, which investigates the level of engagement of participants of the Everscape experience.
['Eurico Doirado', 'Mignon van den Berg', 'Hans van Lint', 'Serge P. Hoogendoorn', 'Helmut Prendinger']
Everscape: the making of a disaster evacuation experience
521,843
Weibo is the Chinese counterpart of Twitter, which has attracted hundreds of millions of users. Just like other Online Social Networks (hereafter OSNs), Weibo has a large number of fake accounts. They are created to sell their following links to customers, who want to boost their follower counts. These bogus accounts are difficult to identify individually, especially when they are created by sophisticated programs or controlled by human beings directly. This paper proposes a novel fake account detection method that is based on the very purpose of the existence of these accounts: they are created to follow their targets en masse, resulting in high-overlapping between the follower lists of their customers. This paper investigates the top Weibo accounts whose follower lists duplicate or nearly duplicate each other (hereafter called near-duplicates). Discovering near-duplicates is a challenging task. The network is large; the data in its entirety are not available; the pair-wise comparison is very expensive. We developed a sampling-based approach to discover all the near-duplicates of the top accounts, who have at least 50,000 followers. In the experiment, we found 395 near-duplicates, which leads us to 11.90 million fake accounts (4.56 % of total users) who send 741.10 million links (9.50 % of the entire edges). Furthermore, we characterize four typical structures of the spammers, cluster these spammers into 34 groups, and analyze the properties of each group.
['Yi Zhang', 'Jianguo Lu']
Discover millions of fake followers in Weibo
706,245
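A sketch of one way to find near-duplicate follower lists at scale, in the spirit of the Zhang and Lu record above: MinHash signatures whose agreement rate estimates the Jaccard overlap of two follower sets. The paper uses its own sampling-based approach; MinHash is a stand-in here to make the near-duplicate idea concrete, and the accounts are synthetic.

import numpy as np

P = 2_147_483_647                       # prime modulus for the linear hashes
rng = np.random.default_rng(7)
NUM = 128
a = rng.integers(1, P, NUM, dtype=np.int64)
b = rng.integers(0, P, NUM, dtype=np.int64)

def minhash(follower_ids):
    """MinHash signature of a follower set under NUM linear hash functions."""
    ids = np.fromiter(follower_ids, dtype=np.int64)
    return ((a[:, None] * ids[None, :] + b[:, None]) % P).min(axis=1)

def est_jaccard(s1, s2):
    return float(np.mean(s1 == s2))

# Two customers of the same spam farm share most of their fake followers:
pool = set(range(1_000))
acct_a, acct_b = pool | {10_001, 10_002}, pool | {20_001}
print(est_jaccard(minhash(acct_a), minhash(acct_b)))   # near 1 (true Jaccard ~0.997)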
We apply the iterative Viterbi algorithm (IVA) to decode a concatenated multidimensional TCM in which a trellis code is used as the inner code and a simple even parity code is used as the outer code.
['Qi Wang', 'Lei Wei']
Iterative Viterbi algorithm for concatenated multidimensional TCM
851,697
Power capping techniques are used to restrict the power consumption of computer systems to a thermally safe limit. Current many-core systems employ dynamic voltage and frequency scaling (DVFS), power gating (PG) and scheduling methods as actuators for power capping. These knobs are oriented towards power actuation, while the need for performance and energy savings is increasing in the dark silicon era. To address this, we propose approximation (APPX) as another knob for closed-loop power management, lending performance and energy efficiency to existing power capping techniques. We use approximation in a proactive way for long-term performance-energy objectives, complementing the short-term reactive power objectives. We implement an approximation-enabled power management framework, APPEND, that dynamically chooses an application with an appropriate level of approximation from a set of variable-accuracy implementations. Subject to the system dynamics, our power manager chooses an effective combination of knobs (APPX, DVFS and PG) in a hierarchical way to ensure power capping with performance and energy gains. Our proposed approach yields 1.5× higher throughput, improves latency by up to 5×, delivers better performance per energy, and mitigates dark silicon compared to state-of-the-art power management techniques over a set of applications ranging from high to no error resilience.
['Anil Kanduri', 'Mohammad Hashem Haghbayan', 'Amir-Mohammad Rahmani', 'Pasi Liljeberg', 'Axel Jantsch', 'Nikil Dutt', 'Hannu Tenhunen']
Approximation knob: power capping meets energy efficiency
912,550
We present a high-level enterprise system architecture that closely models the domain ontology of resource and information flows in enterprises. It is: Process-oriented: formal, user-definable
['Fritz Henglein', 'Ken Friis Larsen', 'Jakob Grue Simonsen', 'Christian Stefansen']
POETS: process-oriented event-driven transaction systems
381,860
Estimating the positions of sensor nodes is a fundamental and crucial problem in wireless sensor networks. In this paper, three novel subspace methods for node localization in a fully connected network are devised with the use of range measurements. Biases and mean square errors of the sensor node position estimates are also derived. Computer simulations are included to contrast the performance of the proposed algorithms with the conventional subspace positioning method, namely, classical multidimensional scaling, as well as Cramer-Rao lower bound.
['Frankie K. W. Chan', 'Hing-Cheung So', 'Wing-Kin Ma']
A Novel Subspace Approach for Cooperative Localization in Wireless Sensor Networks Using Range Measurements
302,040
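For reference, a sketch of the conventional subspace baseline named in the Chan, So and Ma record above, classical multidimensional scaling: double-center the squared range matrix and eigendecompose to recover node coordinates up to a rigid transform. The paper's three novel subspace methods improve on this baseline and are not reproduced here; the node positions below are synthetic.

import numpy as np

def classical_mds(D, dim=2):
    """Coordinates from a pairwise distance matrix D (up to rotation/translation)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # Gram matrix of centered coordinates
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

X = np.random.rand(6, 2) * 10                   # hypothetical true node positions
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Xhat = classical_mds(D)                         # matches X up to a rigid transform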
This paper presents downlink spatial scheduling which perfectly nullifies interference among multiplexed signals in multiuser MIMO systems. Under the base station's zero-forcing transmit beamforming, each terminal can receive the packet of interest without interference from the other multiplexed packets. The scheduling algorithm successively selects appropriate terminals for packet transmission in the presence of already selected packets. We show that the downlink spatial scheduler is equivalent to the virtual uplink spatial scheduler in terms of received signal characteristics. Applying the uplink scheduling concept to the downlink, the downlink spatial scheduler achieves much higher system throughput than a system without spatial scheduling. Also, it is shown that the spatial scheduler yields similar system throughput in uplink and downlink.
['Yoshitaka Hara', 'Loïc Brunel', 'Kazuyoshi Oshima']
Downlink Spatial Scheduling with Mutual Interference Cancellation in Multiuser MIMO Systems
120,415
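A minimal sketch of the zero-forcing transmit beamforming underlying the scheduler in the Hara, Brunel and Oshima record above: the precoder is the right pseudo-inverse of the composite channel, so each scheduled terminal sees no interference from the other multiplexed packets. The antenna counts and channel realization are invented.

import numpy as np

rng = np.random.default_rng(3)
nt, nu = 4, 3                                   # BS antennas, scheduled users
H = (rng.standard_normal((nu, nt)) + 1j * rng.standard_normal((nu, nt))) / np.sqrt(2)

# Zero-forcing precoder: channel pseudo-inverse, columns normalized to unit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0)

print(np.round(np.abs(H @ W), 6))               # diagonal: off-diagonal entries ~0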
In 2006, Gaurav Gupta and Josef Pieprzyk presented an attack on the branch-based software watermarking scheme proposed by Ginger Myles and Hongxia Jin in 2005. The software watermarking model is based on replacing jump instructions or unconditional branch statements (UBS) by calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and integrity check. If the program is tampered with, the fingerprint and/or integrity checks change and the target address is not computed correctly. Gupta and Pieprzyk's attack uses debugger capabilities such as register and address lookup and breakpoints to minimize the requirement to manually inspect the software. Using these resources, the FBF and calls to the same is identified, correct displacement values are generated and calls to FBF are replaced by the original UBS transferring control of the attack to the correct target instruction. In this paper, we propose a watermarking model that provides security against such debugging attacks. Two primary measures taken are shifting the stack pointer modification operation from the FBF to the individual UBSs, and coding the stack pointer modification in the same language as that of the rest of the code rather than assembly language to avoid conspicuous contents. The manual component complexity increases from O (1) in the previous scheme to O (n) in our proposed scheme.
['Gaurav Gupta', 'Josef Pieprzyk']
Software Watermarking Resilient to Debugging Attacks
142,747
We address the problem of recognizing 2-D shapes in images via multi-class classification. Our approach has three key elements. First, a signed distance transform is introduced to represent a shape more informatively. Second, a filter bank is generated such that its filters can capture multiple-scale local and global features between two shapes of different classes. We then apply boosting to combine useful filters to construct discriminant classifiers. Third, in implementing our system, a new classification architecture is developed to accomplish multi-class recognition. To examine the claimed efficiencies, we consider an example of document recognition, pinpointing the strengths of our method through experimental results and comparisons.
['Yen-Yu Lin', 'Tyng-Luh Liu']
Shape recognition using fast boosted filtering
423,112
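A minimal sketch of the signed distance transform used above to represent a binary shape more informatively, assuming the convention of positive distances inside the shape and negative outside; the square "shape" is illustration data.

```python
# Signed distance map of a binary mask via two Euclidean distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt

shape = np.zeros((32, 32), dtype=bool)
shape[8:24, 8:24] = True                   # a filled square as the shape

inside = distance_transform_edt(shape)     # distance to the background
outside = distance_transform_edt(~shape)   # distance to the shape boundary
signed = inside - outside                  # signed distance map

print(signed[16, 16], signed[0, 0])        # positive inside, negative outside
```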
To simultaneously support multimedia services with different signaling rates and quality-of-service requirements in optical code division multiple access (CDMA) networks, a new class of multilength, constant-weight optical orthogonal codes (OOCs) with good correlation properties is constructed algebraically in this paper. The performance of these new OOCs in an optical CDMA system with double-media services is analyzed. In contrast to conventional CDMA, our study shows that the performance of these multilength OOCs worsens as the code length increases, allowing prioritization in optical CDMA. Finally, an application of these multilength OOCs to integrate different types of multimedia services is briefly discussed.
['Wing C. Kwong', 'Guu-Chang Yang']
Design of multilength optical orthogonal codes for optical CDMA multimedia networks
129,445
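A minimal sketch of the correlation properties that define optical orthogonal codes (OOCs): for 0/1 sequences, periodic autocorrelation sidelobes and cross-correlations must stay below a small bound. The two codewords below are hypothetical examples, not the paper's multilength algebraic construction.

```python
# Periodic correlation of binary (0/1) sequences; OOC design keeps the
# autocorrelation sidelobes and all cross-correlations small (here <= 1).
def periodic_correlation(x, y, shift):
    n = len(x)
    return sum(x[i] & y[(i + shift) % n] for i in range(n))

a = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # weight-3 codeword, length 13
b = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]   # another weight-3 codeword

auto_sidelobes = [periodic_correlation(a, a, s) for s in range(1, len(a))]
cross = [periodic_correlation(a, b, s) for s in range(len(a))]
print(max(auto_sidelobes), max(cross))        # both stay at 1 for this pair
```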
Since the publication of The Psychology of Human-Computer Interaction, the GOMS model has been one of the most widely known theoretical concepts in HCI. This concept has produced several GOMS analysis techniques that differ in appearance and form, underlying architectural assumptions, and predictive power. This article compares and contrasts four popular variants of the GOMS family (the Keystroke-Level Model, the original GOMS formulation, NGOMSL, and CPM-GOMS) by applying them to a single task example.
['Bonnie E. John', 'David E. Kieras']
The GOMS family of user interface analysis techniques: comparison and contrast
305,112
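A minimal sketch of the simplest variant compared above, the Keystroke-Level Model (KLM): task time is predicted by summing standard operator times. The operator values are the commonly cited Card/Moran/Newell estimates; the example task sequence is hypothetical.

```python
# Keystroke-Level Model: predicted task time = sum of operator times.
KLM = {
    "K": 0.28,   # keystroke or button press (average typist), seconds
    "P": 1.10,   # point with a mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_time(ops):
    return sum(KLM[op] for op in ops)

# e.g. "delete a word via menu": home to mouse, think, point, click, point, click
task = ["H", "M", "P", "K", "P", "K"]
print(f"predicted time: {klm_time(task):.2f} s")
```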
Patient wellness: geoanalysis of biological and environmental data.
['Giovanni Canino', 'Pietro Hiram Guzzi', 'Mariagrazia Scarpino', 'Giuseppe Tradigo', 'Pierangelo Veltri']
Patient wellness: geoanalysis of biological and environmental data.
993,225
Incorporating Static Environment Elements into the EKF-Based Visual SLAM
['Adam Schmidt']
Incorporating Static Environment Elements into the EKF-Based Visual SLAM
646,027
The paper describes DISCOVR, a distributed collaborative video recorder. DISCOVR is a P2P application that combines asynchronous file sharing with synchronous on-demand media streaming. DISCOVR uses a flat entity ID space, with an entity being any of the media file, header, mega packets, index, and metadata. All DISCOVR entities may be distributed asynchronously or synchronously. DISCOVR adopts a sender-driven, priority-based sharing protocol. If a user is viewing a media file on demand, the packets to be viewed in the near future are put on the synchronous access list, which prompts directly and indirectly connected peers to fulfill the distribution of the on-demand packets at high priority. By letting peers engage in both asynchronous sharing and synchronous on-demand streaming, DISCOVR encourages peers to remain online longer, thus improving the availability of the P2P system and the overall performance.
['Jin Li', 'Cheng Huang']
DISCOVR: Distributed Collaborative Video Recorder
317,403
Generally, pattern recognition systems are designed with a relatively small amount of training data for parameter estimation; moreover, during testing, the finite sample size of the testing data can bias the expected behavior of the models. Plug-in test statistics suffer from large estimation errors, often causing performance to degrade as the measurement vector dimension increases. A thresholding dimensionality reduction method is briefly introduced first. An extension of this idea, informative component extraction, is discussed for recognition systems, especially in biometrics. A novel nominal model serving as the population distribution is introduced to reduce the dimension. Two different kinds of benefits obtained from this method are discussed. The modified test statistic is evaluated with a set of processed physical signals. Authentication testing for the exponential distribution is examined first. Special attention is paid to a high-dimensional Gaussian model with unknown means and variances. Moreover, the performance is examined with different sample sizes.
['Mei Chen', 'Yan Liu']
From thresholding dimension reduction to informative component extraction
434,738
By introducing immune concepts and methods into the quantum-inspired evolutionary algorithm (QEA), a novel algorithm, the immune quantum-inspired evolutionary algorithm (IQEA), is proposed. While preserving QEA's advantages, IQEA utilizes characteristics and knowledge of the problem at hand to restrain repeated and ineffective operations during evolution, so as to improve algorithm efficiency. Experimental results on the knapsack problem show that the performance of IQEA is superior to the conventional EA (CEA), the immune EA (IEA), and QEA.
['Ying Li', 'Yanning Zhang', 'Rongchun Zhao', 'Licheng Jiao']
The immune quantum-inspired evolutionary algorithm
530,846
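A minimal sketch of the quantum-inspired evolutionary machinery that IQEA builds on: each bit is a Q-bit (probability amplitude), solutions are "observed" from it, and a rotation gate pulls the amplitudes toward the best solution found. The knapsack instance and rotation angle are hypothetical, and the paper's immune operators are not reproduced here.

```python
# Quantum-inspired EA for a toy 0/1 knapsack: observe bits from Q-bit
# angles, score them, and rotate the angles toward the best solution.
import math, random

values  = [10, 13, 7, 8, 12]
weights = [ 5,  6, 3, 4,  6]
CAP, DELTA = 12, 0.05 * math.pi

theta = [math.pi / 4] * len(values)        # |alpha|^2 = 0.5 initially

def observe():
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in theta]

def fitness(x):
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    return sum(vi for vi, xi in zip(values, x) if xi) if w <= CAP else 0

best, best_fit = observe(), 0
for _ in range(200):
    x = observe()
    f = fitness(x)
    if f > best_fit:
        best, best_fit = x, f
    # rotation gate: nudge each Q-bit toward the corresponding best bit
    for i in range(len(theta)):
        theta[i] += DELTA if best[i] else -DELTA
        theta[i] = min(max(theta[i], 0.0), math.pi / 2)

print(best, best_fit)
```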
In this paper, we present an integer linear programming (ILP) approach, called CoRe-ILP , for finding an optimal time consistent cophylogenetic host-parasite reconciliation under the cophylogenetic event model with the events cospeciation, duplication, sorting, host switch, and failure to diverge. Instead of assuming event costs, a simplified model is used, maximizing primarily for cospeciations and secondarily minimizing host switching events. Duplications, sortings, and failure to diverge events are not explicitly scored. Different from existing event based reconciliation methods, CoRe-ILP can use (approximate) phylogenetic branch lengths for filtering possible ancestral host-parasite interactions. Experimentally, it is shown that CoRe-ILP can successfully use branch length information and performs well for biological and simulated data sets. The results of CoRe-ILP are compared with the results of the reconciliation tools Jane 4 , Treemap 3b , NOTUNG 2.8 Beta , and Ranger-DTL . Algorithm CoRe-ILP is implemented using IBM ILOG CPLEX Optimizer 12.6 and is freely available from http://pacosy.informatik.uni-leipzig.de/core-ilp.
['Nicolas Wieseke', 'Tom Hartmann', 'Matthias Bernt', 'Martin Middendorf']
Cophylogenetic Reconciliation with ILP
570,741
Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order $\mathcal {O}(1/n)$ , where $n$ is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order $\mathcal {O}(1/T)$ , where $T$ is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.
['Tongliang Liu', 'Dacheng Tao', 'Mingli Song', 'Stephen J. Maybank']
Algorithm-Dependent Generalization Bounds for Multi-Task Learning
705,115
Modern computer systems are increasing in complexity, spread, and scale in order to meet the diverse and sophisticated needs of users. In the development of these systems, we inevitably use legacy code and off-the-shelf software as black-box software in order to shorten development time and lower development cost. These systems are often connected via a network to utilize services provided by other systems, whereas services and network performance may change while in operation. We often need to change the specification and implementation of a system due to changes in environments and users' requirements. In addition, threats caused by viruses and unauthorized access have to be properly removed. Therefore, modern computer systems inherently involve incompleteness of specifications and implementations and uncertainty of environments and requirements.
['Mario Tokoro', 'Karama Kanoun', 'Kimio Kuramitsu', 'Jean-Charles Fabre']
WOSD 2011 the first international workshop on open systems dependability
356,452
In this paper, we perform a design space exploration for an H.264/AVC video embedding transcoder. Specifically, the design space is pruned for the sub-modules including inverse transform, inter and intra prediction, and deblocking filter, across various microarchitecture designs, processing orders, memory hierarchies, and granularities of synchronization. In addition, we propose an efficient deblocking filter suitable for an 8×8 block pipeline. Compared to traditional designs, our proposed deblocking filter reduces the memory requirement, processing latency, and access frequency to the local memory. The synthesized logic gate count is only 8K using 0.18 µm technology with a maximum frequency of 162 MHz. For rapid exploration, all the design alternatives are simulated at a higher level of abstraction using transaction level modeling to explore 160 design combinations. Our simulation results provide an extensive tradeoff analysis among processing speed, cost, and utilization. Besides, the cost-normalized hardware utilization, where the cost of each sub-module weights its associated utilization, assists system designers in keeping a balance across different modules.
['Chih-Hung Li', 'Wen-Hsiao Peng', 'Tihao Chiang']
Design space exploration of an H.264/AVC-based video embedding transcoder using transaction level modeling
358,129
Polynomials, Quantum Query Complexity, and Grothendieck's Inequality.
['Andris Ambainis']
Polynomials, Quantum Query Complexity, and Grothendieck's Inequality.
840,850
There is a well-recognized need for accurate timing verification tools that account for the functional behavior of component interfaces, and thereby do not traverse false combinational and sequential paths. Such tools, however, are susceptible to an exponential increase in task complexity as the circuit size and functional complexity of components increase. The viability of accurate timing verifiers hinges on their ability to efficiently analyze the smallest subset of circuit behaviors, while verifying the timing characteristics of the overall space of behaviors. This paper presents theoretical results that address this issue for the timing verification of interacting FSMs.
['Ajay J. Daga', 'William P. Birmingham']
Interface finite-state machines: definition, minimization, and decomposition
8,070
Sparse matrices are at the core of numerical applications. Their compressed storage, which permits both operation and memory savings, generates irregular access patterns, reducing the performance of the memory hierarchy. In this work we present a probabilistic model for predicting the number of misses of a direct-mapped cache memory, considering sparse matrices with a uniform entry distribution. The number of misses is directly related to program execution time and memory hierarchy performance. The model considers the three standard types of interference: intrinsic, self, and cross interference. We explain in detail the modeling of a representative matrix operation, the sparse matrix-dense matrix product, considering several loop orderings, and include validation results that show the model's accuracy.
['Ramón Doallo', 'Basilio B. Fraguela', 'Emilio L. Zapata']
Direct mapped cache performance modeling for sparse matrix operations
219,775
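A minimal sketch of the direct-mapped cache behaviour that the model above predicts analytically: simulate the miss count of a sparse-matrix-style irregular access stream. The cache geometry and the uniform sparsity pattern are hypothetical illustration parameters, not the paper's model.

```python
# Direct-mapped cache miss counting over an irregular access stream driven
# by the column indices of uniformly distributed nonzeros.
import random

LINE, NLINES = 8, 64                 # 8 elements per line, 64 lines

def misses(addresses):
    tags = [None] * NLINES
    m = 0
    for a in addresses:
        line, tag = (a // LINE) % NLINES, a // (LINE * NLINES)
        if tags[line] != tag:        # miss: line holds a different tag
            tags[line] = tag
            m += 1
    return m

random.seed(1)
n, density = 512, 0.05
stream = [col for _ in range(n)
          for col in sorted(random.sample(range(n), int(n * density)))]
print(misses(stream), "misses out of", len(stream), "accesses")
```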
Radioactive tank waste remediation and decontamination and decommissioning (D&D) of contaminated Department of Energy (DOE) facilities, and other nuclear cleanup tasks require extensive remote handling technologies. The unstructured nature of these tasks and limitations of the current sensor and computer decision-making technologies prohibit the use of completely autonomous systems for remote manipulation. This paper presents a new methodology in which model-based computer assistance is incorporated into human controlled teleoperator systems. This approach implies a form of assistance function in which the human input is enhanced rather than superseded by the computer. A specific task of cutting a pipe with a saw is chosen as an example to demonstrate the implementation of the assistance functions in D&D size reduction tasks and the results are presented.
['Karan A. Manocha', 'Norali Pernalete', 'Rajiv V. Dubey']
Variable position mapping based assistance in teleoperation for nuclear cleanup
68,740
Automatic surgical workflow estimation method for brain tumor resection using surgical navigation information
['Ryoichi Nakamura', 'Tomoaki Aizawa', 'Yoshihiro Muragaki', 'Takashi Maruyama', 'Hiroshi Iseki']
Automatic surgical workflow estimation method for brain tumor resection using surgical navigation information
990,053
Many Information Retrieval (IR) and Natural Language Processing (NLP) systems require textual similarity measurement in order to function, and do so with the help of similarity measures. Similarity measures behave differently: some measures that work well on highly similar texts do not always do so on highly dissimilar texts. In this paper, we evaluated the performance of eight popular similarity measures on four levels (degrees) of textual similarity using a corpus of plagiarised texts. The evaluation was carried out in the context of candidate selection for plagiarism detection. Performance was measured in terms of recall, and the best-performing similarity measure(s) for each degree of textual similarity was identified. Results from our experiments show that the performance of most of the measures was equal on highly similar texts, with the exception of Euclidean distance and Jensen-Shannon divergence, which performed more poorly. Cosine similarity and the Bhattacharyya coefficient performed best on lightly reviewed text, and on heavily reviewed texts, cosine similarity and Pearson correlation performed best and next best, respectively. Pearson correlation had the best performance on highly dissimilar texts. The results also show the term weighting methods and n-gram document representations that best optimise the performance of each of the similarity measures at a particular level of intertextual similarity.
['Victor U. Thompson', 'Christo Panchev', 'Michael Michael Oakes']
Performance evaluation of similarity measures on similar and dissimilar text retrieval
687,165
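A minimal sketch of two of the eight measures evaluated above, cosine similarity and Jensen-Shannon divergence, applied to bag-of-words term-frequency vectors; the two toy "documents" are hypothetical.

```python
# Cosine similarity (higher = more similar) and Jensen-Shannon divergence
# (lower = more similar) over term-frequency dictionaries.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def jensen_shannon(a, b):
    vocab = set(a) | set(b)
    pa = {t: a.get(t, 0) / sum(a.values()) for t in vocab}
    pb = {t: b.get(t, 0) / sum(b.values()) for t in vocab}
    m = {t: (pa[t] + pb[t]) / 2 for t in vocab}
    def kl(p, q):
        return sum(p[t] * math.log2(p[t] / q[t]) for t in vocab if p[t] > 0)
    return 0.5 * kl(pa, m) + 0.5 * kl(pb, m)

d1 = Counter("the cat sat on the mat".split())
d2 = Counter("the cat lay on a mat".split())
print(cosine(d1, d2), jensen_shannon(d1, d2))
```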
Older people with dementia often decline in short-term memory and forget what to do next to complete their activities of daily living (ADLs), such as tea-making and tooth-brushing. Therefore, they need caregivers to remind them what to do to complete these activities. However, the steady growth of the aging population makes the relative shortage of traditional care resources more and more serious. In this paper, we propose a prototype called CoReDA (context-aware reminding system for daily activities) to help elderly people with dementia complete different ADLs in place of caregivers. By using the wireless sensor node PAVENET, CoReDA can obtain information on elderly people's tool usage in different ADLs. Based on this information, CoReDA uses the TD (X) Q-Learning technique to provide elderly people with personalized guidance to complete ADLs.
['Hua Si', 'Seung Jin Kim', 'Nao Kawanishi', 'Hiroyuki Morikawa']
A Context-aware Reminding System for Daily Activities of Dementia Patients
267,406
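A minimal sketch of the tabular Q-learning update behind the kind of personalized step guidance described above; the paper's TD-based variant and its sensing layer are not reproduced, and the states, actions, and rewards for the toy "tea-making" task are hypothetical.

```python
# Tabular Q-learning: learn which prompt to issue at each step of an ADL.
import random

steps = ["boil water", "add tea bag", "pour water", "done"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
n = len(steps) - 1
Q = {(s, a): 0.0 for s in range(n) for a in range(n)}

def choose(s):
    if random.random() < EPS:
        return random.randrange(n)
    return max(range(n), key=lambda a: Q[(s, a)])

for _ in range(500):                       # episodes of guiding one activity
    s = 0
    while s < n:
        a = choose(s)
        s2 = s + 1 if a == s else s        # a correct prompt advances the task
        r = 1.0 if a == s else -0.1
        best_next = 0.0 if s2 == n else max(Q[(s2, a2)] for a2 in range(n))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

policy = [steps[max(range(n), key=lambda a: Q[(s, a)])] for s in range(n)]
print(policy)                              # learned prompt for each step
```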
Apps are a popular concept allowing end-users to easily extend their devices, such as smartphones or computers, with specific functionality. We believe that the App concept is applicable not only to single devices but also to complete environments. In this work we introduce Smart Space Apps as a concept for interweaving the networked smart things of an environment through pluggable Apps written in JavaScript. By introducing a unified schema that accesses IoT platforms, smart home appliances, and smart devices such as phones and tablets in the same way, we go further than current platforms. We have implemented this concept and demonstrate its utility with three example Apps.
['Thomas Kubitza']
Apps for Environments: Demonstrating Pluggable Apps for Multi-Device IoT-Setups
945,339
OSNs (Online Social Networks) have reached incredible popularity on the modern Internet. These systems are present in the daily lives of countless people, helping them share personal experiences, expectations, and opinions. Such popularity has turned these networks into complex systems. To understand the operation of and phenomena that occur in such networks, there are metrics and models that capture aspects of their structure. The purpose of this work is to understand the complex reality of the eBay e-commerce network, its connections, and the dynamics of its users. Data were collected using a Web crawler developed in this work, resulting in a database of approximately 87 million transactions and 15 million distinct dealer users. From these data, the characterization was made by estimating network metrics, like the dealer users' degree distribution, which gave us key insights about the eBay negotiation network. We found users who bought from or sold to more than 100,000 different people. We also found that a user A interacted over 4,000 times with another user B in just 3 months. Those and other interesting results, such as average distance and feedback ratings, were obtained, analyzed, and discussed in this work.
['Cinthya de M. França', 'Antonio A. Rocha', 'Pedro Braconnot Velloso']
On the structural properties of eBay's network
997,992
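A minimal sketch of the degree-distribution characterization applied above to the eBay trading network: count distinct trading partners per user from an edge list of (buyer, seller) transactions. The transactions shown are hypothetical illustration data.

```python
# Degree = number of distinct trading partners; the distribution maps each
# degree value to the number of users having it.
from collections import Counter, defaultdict

transactions = [("alice", "bob"), ("alice", "carol"), ("dave", "bob"),
                ("alice", "bob"), ("erin", "dave")]   # repeated trades collapse

partners = defaultdict(set)
for buyer, seller in transactions:
    partners[buyer].add(seller)
    partners[seller].add(buyer)

degree = {u: len(p) for u, p in partners.items()}
distribution = Counter(degree.values())               # degree -> #users
print(degree)
print(sorted(distribution.items()))
```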
Regional Reports and Presentations of Regions In the Czech Republic.
['Jiri Vanek', 'Jan Jarolimek', 'Michal Stoces', 'Pavel Simek', 'Eva Kánská']
Regional Reports and Presentations of Regions In the Czech Republic.
753,475
As a hyper-redundant robot, a 3D snake-like robot can perform many configurations and types of locomotion adapted to the environment beyond mimicking natural snake locomotion. Natural snake locomotion usually limits the locomotion capability of the robot because the mechanism and actuation are inadequate to imitate characteristics of the natural snake, such as its very many DOFs and the properties of its muscle. In order to apply snake-like robots to unstructured environments, researchers have designed many gaits to increase adaptability to a variety of surroundings. The twist-related locomotion is an effective gait achieved by jointly driving the pitching-DOF and yawing-DOF, with which the snake-like robot can move on rough ground and even climb over some obstacles. In this paper, the twist-related locomotion function is first derived and simplified to be expressed by sine or cosine functions. 2D locomotion such as V-shape and U-shape is achieved. Also, by applying it to serpentine or other types of locomotion, the snake-like robot can complete composite locomotion that combines serpentine or other locomotion with twist-related locomotion. We then extend the twist-related locomotion to 3D space. Finally, experimental results are presented to validate the above analyses.
['Changlong Ye', 'Shugen Ma', 'Bin Li', 'Yuechao Wang']
Twist-related Locomotion of a 3D Snake-like Robot
786,903
A better layout, which can make a system of chain stores more competitive, is a precondition for finding new store sites. Layout optimizations based on Weighted Node Voronoi Diagrams, which are effective tools for investigating dominance regions in a grid road system or in a radial-circular road model, are important ways to configure a new store site. The position of every store can be seen as a node in the road network, and node weights indicate real-world factors relating to stores, such as scale and effectiveness. The weighted nodes are considered as generators, which can be different types of functions. Since each generator represents a set of varied weight values, it is difficult to exactly determine the optimal position and its corresponding influence range. This paper presents a new method of layout optimization based on Weighted Node Voronoi Diagrams, which is in accordance with traditional discrete construction methodologies. It is demonstrated that the algorithm proposed in this paper is superior to similar traditional techniques because it does not require the structures or other additional information of the nodes. Examples show the effectiveness of our methodology in optimizing chain store layout.
['Jingna Liu', 'Xiaoyun Sun', 'Shujuan Liu']
Weighted Node Network Voronoi Diagram and its application to optimization of chain stores layout
689,167
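A minimal sketch of a multiplicatively weighted Voronoi assignment of the kind used above to delimit store dominance regions: every grid point is assigned to the store with the smallest weight-scaled distance. The store sites and weights are hypothetical, and real road-network distances would replace the Euclidean metric used here.

```python
# Multiplicatively weighted Voronoi regions on a grid: each point goes to
# the store minimizing distance / weight, so heavier stores dominate more.
import math

stores = {"A": ((2.0, 2.0), 1.0),   # (position, weight ~ scale/attractiveness)
          "B": ((7.0, 3.0), 2.0),
          "C": ((4.0, 8.0), 1.5)}

def owner(x, y):
    return min(stores, key=lambda s: math.dist((x, y), stores[s][0]) / stores[s][1])

grid = [[owner(x, y) for x in range(10)] for y in range(10)]
for row in grid:
    print("".join(row))             # dominance regions rendered as letters
```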
The area of learning in multi-agent systems is today one of the most fertile grounds for interaction between game theory and artificial intelligence. We focus on the foundational questions in this interdisciplinary area, and identify several distinct agendas that ought to, we argue, be separated. The goal of this article is to start a discussion in the research community that will result in firmer foundations for the area.
['Yoav Shoham', 'Robert C. Powers', 'Trond Grenager']
If multi-agent learning is the answer, what is the question?
384,156
In this paper a discrete-continuous project scheduling problem with discounted cash flows is considered. Each activity requires discrete resources for its processing, as well as an amount of a continuous resource. The processing rate of an activity is a concave function of the amount of the continuous resource allotted to the activity at a time. A positive cash flow is associated with the completion of each activity. The objective is the maximization of the net present value of all cash flows of the project. Two heuristics for allocating the continuous resource are proposed and compared to the optimum on the basis of a computational experiment. Some conclusions as well as directions for future research are given.
['Grzegorz Waligóra', 'R. Różycki']
Heuristic solving some discrete-continuous project scheduling problems with discounted cash flows
899,259
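A minimal sketch of the objective used above: the net present value of the project is the sum of each activity's cash flow discounted back from its completion time. The discount rate, cash flows, and completion times are hypothetical; in the paper, the heuristics for allocating the continuous resource are what determine the completion times.

```python
# Net present value with continuous discounting: earlier completion of
# positive cash flows yields a higher NPV.
import math

ALPHA = 0.05                               # continuous discount rate

def npv(cash_flows, completion_times):
    return sum(cf * math.exp(-ALPHA * t)
               for cf, t in zip(cash_flows, completion_times))

print(npv([100.0, 50.0, 80.0], [2.0, 5.0, 9.0]))
print(npv([100.0, 50.0, 80.0], [1.5, 4.0, 8.0]))   # faster schedule, higher NPV
```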
We consider a hybrid control system and general optimal control problems for this system. We suppose that the switching strategy imposes restrictions on control sets and we provide necessary conditions for an optimal hybrid trajectory, stating a Hybrid Necessary Principle (HNP). Our result generalizes various necessary principles available in the literature.
['Mauro Garavello', 'Benedetto Piccoli']
Hybrid Necessary Principle
551,609
The evolution of communication technology and the proliferation of electronic devices have rendered adversaries powerful means for targeted attacks via all sorts of accessible resources. In particular, due to the intrinsic interdependence and ubiquitous connectivity of modern communication systems, adversaries can devise malware that propagates through intermediate hosts to approach the target, to which we refer as transmissive attacks. Inspired by biology, the transmission pattern of such an attack in the digital space much resembles the spread of an epidemic in real life. This article describes transmissive attacks, summarizes the utility of epidemic models in communication systems, and draws connections between transmissive attacks and epidemic models. Simulations, experiments, and ongoing research challenges on transmissive attacks are also addressed.
['Pin-Yu Chen', 'Ching-Chao Lin', 'Shin-Ming Cheng', 'Hsu-Chun Hsiao', 'Chun-Ying Huang']
Decapitation via digital epidemics: a bio-inspired transmissive attack
652,388
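A minimal sketch of the classic SIR epidemic model that the article above connects to transmissive attacks: susceptible hosts become infected at rate beta and infected hosts are removed (patched or cleaned) at rate gamma. The Euler integration step and the parameter values are illustrative assumptions.

```python
# SIR dynamics over population fractions, integrated with a simple Euler step.
beta, gamma, dt = 0.3, 0.1, 0.1
S, I, R = 0.99, 0.01, 0.0            # susceptible, infected, removed fractions

for step in range(int(100 / dt)):
    new_inf = beta * S * I * dt      # contact-driven infections
    new_rec = gamma * I * dt         # removals (patched / cleaned hosts)
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"after 100 time units: S={S:.3f} I={I:.3f} R={R:.3f}")
```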
Using a digital marketing platform for the promotion of an Internet-based health encyclopedia in Saudi Arabia.
['Asma Al Ateeq', 'Eman Al Moamary', 'Tahani Daghestani', 'Yahya Al Muallem', 'Majed Al Dogether', 'Abdulrahman Alsughayr', 'Majid M. Altuwaijri', 'Mowafa S. Househ']
Using a digital marketing platform for the promotion of an Internet-based health encyclopedia in Saudi Arabia.
745,837
Stable laser light sources are necessary for atom-interferometry-based experiments on space platforms such as sounding rockets or satellites. Diode lasers, commonly used as light sources in these experiments, lack long-term stability; therefore, external frequency stabilization systems are required. Commonly used spectroscopy-based systems require the correct optical atomic reference transition to be manually identified before analog circuits can start tracking this transition. For automated systems, latencies below 100 µs are required, making the design of such systems challenging.
['Christian Spindeldreier', 'T. Wendrich', 'E. M. Rasel', 'W. Ertmer', 'Holger Blume']
FPGA-based frequency estimation of a DFB laser using Rb spectroscopy for space missions
951,647
An automatic loop gain control algorithm (ALGC) for a bang-bang (BB) clock and data recovery (CDR) circuit is proposed. The proposed algorithm finds the optimum loop gain for minimum MSE performance using the autocorrelation of the BBPD output signal. Mathematical proof of the algorithm is presented for both rotator-based and VCO-based CDRs with finite loop delay. A 25 Gb/s transceiver IC is fabricated in a 40 nm CMOS process to validate the performance of the algorithm. The power consumptions of TX and RX are 37.8 mW and 46.8 mW, respectively, and the synthesized area implementing the digital loop filter together with the proposed ALGC occupies 140 $\mu$m $\times$ 170 $\mu$m.
['Soon-Won Kwon', 'Joon-Yeong Lee', 'Jin-Hee Lee', 'Kwangseok Han', 'Taeho Kim', 'Sangeun Lee', 'Jeong-Sup Lee', 'Taehun Yoon', 'Hyosup Won', 'Jinho Park', 'Hyeon-Min Bae']
An Automatic Loop Gain Control Algorithm for Bang-Bang CDRs
567,602
In this paper, approximations of an optimal level-crossing predictor for a zero-mean stationary linear dynamical system driven by Gaussian noise in state-space form are investigated. The study of this problem is motivated by the practical implications for design of an optimal alarm system, which will elicit the fewest false alarms for a fixed detection probability in this context. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing prediction problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
['Rodney Martin']
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
175,899
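A minimal sketch of the idea described above: run a Kalman filter on a scalar linear-Gaussian process, then raise an alarm when the one-step-ahead predictive Gaussian puts enough probability mass above the critical level. The model parameters, level, and alarm threshold are hypothetical, and the paper's optimal predictor is only approximated by this simple tail-probability rule.

```python
# Kalman filtering plus a Gaussian tail-probability alarm for level crossing.
import math, random

a, q, r, L, P_ALARM = 0.95, 0.1, 0.2, 2.0, 0.5
x, m, P = 0.0, 0.0, 1.0                     # true state, filter mean, variance
random.seed(3)

def tail_prob(mean, var, level):            # P(Gaussian variable > level)
    return 0.5 * math.erfc((level - mean) / math.sqrt(2 * var))

for k in range(200):
    x = a * x + random.gauss(0, math.sqrt(q))          # process evolution
    y = x + random.gauss(0, math.sqrt(r))              # noisy measurement
    m_pred, P_pred = a * m, a * a * P + q              # Kalman predict
    K = P_pred / (P_pred + r)                          # Kalman gain
    m, P = m_pred + K * (y - m_pred), (1 - K) * P_pred # Kalman update
    p_cross = tail_prob(a * m, a * a * P + q, L)       # one-step-ahead crossing
    if p_cross > P_ALARM:
        print(f"k={k}: alarm, P(x_(k+1) > {L}) = {p_cross:.2f}")
```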
Summary: The linear mixed model is the state-of-the-art method to account for the confounding effects of kinship and population structure in genome-wide association studies (GWAS). Current implementations test the effect of one or more genetic markers while including prespecified covariates such as sex. Here we develop an efficient implementation of the linear mixed model that allows composite hypothesis tests to consider genotype interactions with variables such as other genotypes, environment, sex or ancestry. Our R package, lrgpr, allows interactive model fitting and examination of regression diagnostics to facilitate exploratory data analysis in the context of the linear mixed model. By leveraging parallel and out-of-core computing for datasets too large to fit in main memory, lrgpr is applicable to large GWAS datasets and next-generation sequencing data. Availability and implementation: lrgpr is an R package available from lrgpr.r-forge.r-project.org Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Gabriel Hoffman', 'Jason G. Mezey', 'Eric E. Schadt']
lrgpr: interactive linear mixed model analysis of genome-wide association studies with composite hypothesis testing and regression diagnostics in R
175,673
Consumer acceptance of accountable-eHealth systems.
['Randike Gajanayake', 'Renato Iannella', 'Tony Sahama']
Consumer acceptance of accountable-eHealth systems.
741,969
Evolving technology and increasing pin-bandwidth motivate the use of high-radix routers to reduce the diameter, latency, and cost of interconnection networks. High-radix networks, however, require longer cables than their low-radix counterparts. Because cables dominate network cost, the number of cables, and particularly the number of long, global cables should be minimized to realize an efficient network. In this paper, we introduce the dragonfly topology which uses a group of high-radix routers as a virtual router to increase the effective radix of the network. With this organization, each minimally routed packet traverses at most one global channel. By reducing global channels, a dragonfly reduces cost by 20% compared to a flattened butterfly and by 52% compared to a folded Clos network in configurations with ≥ 16K nodes.We also introduce two new variants of global adaptive routing that enable load-balanced routing in the dragonfly. Each router in a dragonfly must make an adaptive routing decision based on the state of a global channel connected to a different router. Because of the indirect nature of this routing decision, conventional adaptive routing algorithms give degraded performance. We introduce the use of selective virtual-channel discrimination and the use of credit round-trip latency to both sense and signal channel congestion. The combination of these two methods gives throughput and latency that approaches that of an ideal adaptive routing algorithm.
['John Y S Kim', 'William J. Dally', 'Steve Scott', 'Dennis Abts']
Technology-Driven, Highly-Scalable Dragonfly Topology
373,998
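A minimal sketch of the dragonfly minimal-routing property described above: routers are grouped, each group acts as one virtual high-radix router, and a minimal route crosses at most one global channel (local hop, global hop, local hop). The topology sizes and the gateway wiring are deliberate simplifications of the real interconnect, and the paper's adaptive routing variants add congestion sensing on top of this skeleton.

```python
# Minimal dragonfly routing: at most one global (inter-group) hop per packet.
ROUTERS_PER_GROUP = 4

def group(router):
    return router // ROUTERS_PER_GROUP

def minimal_route(src, dst):
    """Return the router-to-router hops of a minimal dragonfly path."""
    if group(src) == group(dst):
        return [src, dst] if src != dst else [src]
    # Simplification: the first router of each group owns the global link.
    gw_src = group(src) * ROUTERS_PER_GROUP
    gw_dst = group(dst) * ROUTERS_PER_GROUP
    hops = [src]
    if src != gw_src:
        hops.append(gw_src)        # local hop to the gateway router
    hops.append(gw_dst)            # the single global hop
    if dst != gw_dst:
        hops.append(dst)           # local hop inside the destination group
    return hops

print(minimal_route(1, 14))        # e.g. [1, 0, 12, 14]: one global hop only
```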
A Schema-Less Data Model for the Web
['Liu Chen', 'Mengchi Liu', 'Ting Yu']
A Schema-Less Data Model for the Web
671,290
Traffic self-similarity has been discovered to be a ubiquitous phenomenon in modern communication networks and multimedia systems. Due to its scale-invariant bursty nature, performance modelling of self-similar traffic poses greater challenges and exhibits more complexity than that of traditional non-bursty traffic. As a result, most existing studies on analytical modelling of priority queueing systems with self-similar inputs have been restricted to a simplified scenario where only two traffic flows are taken into account. This study contributes to performance modelling and analysis of priority queueing systems by proposing a novel and efficient queue-decomposition approach to handle multi-class self-similar traffic. Specifically, we extend the well-known method of empty buffer approximation in order to decompose the priority queueing system equivalently into a group of single-server single-queue systems. We further obtain the analytical upper and lower bounds of the queue length distributions for individual traffic flows.
['Xiaolong Jin', 'Geyong Min']
Modelling Priority Queueing Systems with Multi-Class Self-Similar Network Traffic
117,763
This paper treats a multiuser relay scenario, where multiple user equipments have two-way communication with a common base station in the presence of a buffer-equipped relay station. Each of the uplink (UL) and downlink (DL) transmissions can take place over a direct or over a relayed path. Traditionally, the UL and DL paths of a given two-way link are coupled, that is, either both are direct links or both are relayed links. By removing the restriction of coupling, one opens the design space for decoupled two-way links. Following this, we devise two protocols: the orthogonal decoupled UL/DL buffer-aided (ODBA) relaying protocol and the non-ODBA (NODBA) relaying protocol. In NODBA, the receiver can use successive interference cancellation to extract the desired signal from a collision between UL and DL signals. For both protocols, we characterize the transmission decision policies in terms of maximization of the average two-way sum rate of the system. The numerical results show that decoupled association and non-orthogonal radio access lead to significant throughput gains for two-way traffic.
['Rongkuan Liu', 'Petar Popovski', 'Gang Wang']
Decoupled Uplink and Downlink in a Wireless System With Buffer-Aided Relaying
893,837
Amputation of the dominant hand forces patients to use their non-dominant hand exclusively. Whether this chronic forced use of the non-dominant hand affects handedness preference remains an open question. In this study, the handedness preference of amputees was evaluated using a hand laterality judgment task, comparing recognition speeds for their lost hand and their remaining hand. A handedness index was defined as the lateralization between response times to the left hand and the right hand. Healthy controls responded faster to pictures of the dominant hand than to those of the non-dominant hand, while amputees did not show this handedness advantage. The handedness index was significantly correlated with response time, accuracy, amputation age, and time post amputation. Amputees with poorer performance experienced more severe handedness changes, and recent amputees were more likely to undergo handedness change. Our results suggest that handedness changes after dominant-side amputation, and that the change might not be induced by chronic use of the intact hand.
['Xiaoli Guo', 'Yuanyuan Lyu', 'Robin Bekrater-Bodmann', 'Herta Flor', 'Shanbao Tong']
Handedness change after dominant side amputation: Evaluation from a hand laterality judgment task.
665,550
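A minimal sketch of the handedness index described above: a lateralization score between mean response times to left-hand and right-hand pictures in the laterality judgment task. The exact normalization used in the study is an assumption here, and the RT values are illustrative.

```python
# Handedness index as normalized lateralization of mean response times.
def handedness_index(rt_left, rt_right):
    mean_l = sum(rt_left) / len(rt_left)
    mean_r = sum(rt_right) / len(rt_right)
    # positive -> faster on right-hand stimuli (right-hand advantage)
    return (mean_l - mean_r) / (mean_l + mean_r)

control = handedness_index([1.42, 1.38, 1.55], [1.21, 1.25, 1.18])
amputee = handedness_index([1.30, 1.34, 1.29], [1.31, 1.33, 1.28])
print(f"control HI={control:.3f}, amputee HI={amputee:.3f}")  # advantage lost
```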
In public places we can observe that many conventional displays are being replaced by digital displays, many of them networked. These displays mainly show advertising in a way similar to television's commercial breaks, not exploiting the opportunities of the new medium [1]. Several approaches to interaction between mobile devices and public displays have been investigated over the last 15 years. In this demo we concentrate on challenges that are specific to public displays used for advertising. In particular we focus on how new approaches for interaction with content, means for content creation, and tools for follow-ups can be implemented based on mobile devices. With Digifieds we present a research system that has been used to explore different research questions and to showcase the potential of interactive advertising in public space.
['Stefan Schneegaß', 'Florian Alt', 'A. Schmidt']
Demo: mobile interaction with ads on public display networks
444,311