Columns: abstract (string) · authors (string) · title (string) · __index_level_0__ (int64)
The integration of heterogeneous ontologies is often hampered by different upper-level categories and relations. We report on an ongoing effort to align the clinical terminology/ontology SNOMED CT with the formal upper-level ontology BioTopLite. This alignment introduces several constraints at the OWL-DL level. The mapping was done manually by analysing formal and textual definitions. Description logic classifiers interactively checked each mapping step, using small modules to increase performance. We present an effective workflow using modules of several scales. However, only part of the classes and relations could be mapped easily. The implications for the future evolution of SNOMED CT are discussed. It seems generally feasible to place a highly constrained upper-level ontology atop future SNOMED CT versions, for the benefit of making them more interoperable with other biomedical ontologies.
['Stefan Schulz', 'Catalina Martínez-Costa']
Harmonizing SNOMED CT with BioTopLite: An Exercise in Principled Ontology Alignment.
740,606
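A minimal sketch of the kind of reasoner-checked alignment step the record above describes, using the owlready2 library; the file names, IRIs, and class names are hypothetical placeholders, and this illustrates the workflow rather than the authors' actual pipeline.

```python
# Hedged sketch, not the authors' pipeline: file names, IRIs, and class names
# below are hypothetical placeholders.
from owlready2 import get_ontology, sync_reasoner, default_world

biotop = get_ontology("file://biotoplite.owl").load()      # hypothetical path
module = get_ontology("file://snomed_module.owl").load()   # small module, for reasoner speed

snomed_cls = module.search_one(iri="*ClinicalFinding")  # hypothetical SNOMED class
upper_cls = biotop.search_one(iri="*Condition")         # hypothetical BioTopLite class

# Propose the alignment axiom: the SNOMED class is subsumed by the upper-level class.
snomed_cls.is_a.append(upper_cls)

# Classify; unsatisfiable classes indicate the proposed mapping violates
# BioTopLite's OWL-DL constraints.
sync_reasoner()
print(list(default_world.inconsistent_classes()))
```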
Probabilistic management of charge/discharge of EVs: An approximation procedure
['Masoud Jabbari', 'Taher Niknam', 'Aliasghar Baziar', 'Ali Farzadian', 'Alireza Zare']
Probabilistic management of charge/discharge of EVs: An approximation procedure
677,600
Smartphones are becoming increasingly widespread and powerful, and their use is becoming common in applications such as transport, healthcare, security, and surveillance. In this paper we describe the preliminary results of our experience in using an Android smartphone as a tool for validating people's identity in emergency situations, through two functionalities provided by the most recent generations of smartphones: NFC and face recognition.
['Antonia Rana', 'Andrea Ciardulli']
Identity verification through face recognition, Android smartphones and NFC
608,331
NASA's future deep-space missions will require onboard software upgrades. A challenge that arises from this is that of guarding the system against performance loss caused by residual design faults in the new version of a spacecraft/science function. Accordingly, we have developed a methodology called guarded software upgrading (GSU). The GSU framework is based on the Baseline X2000 First Delivery Architecture, which comprises three high-performance computing nodes with local DRAMs and multiple subsystem microcontroller nodes that interface with a variety of devices. All nodes are connected by a high-speed fault-tolerant bus network that complies with the commercial interface standard IEEE 1394. Since application-specific techniques are an effective strategy for reducing fault tolerance cost, we exploit the characteristics of our target system and application. To ensure low development cost, we take advantage of inherent system resource redundancies as the means of fault tolerance.
['Kam S. Tso', 'Ann T. Tai', 'Leon Alkalai', 'Savio N. Chau', 'William H. Sanders']
GSU middleware architecture design
318,594
Quantum noise is a signal-dependent, Poisson-distributed noise and the dominant noise source in digital mammography. Quantum noise removal or equalization has been shown to be an important step in the automatic detection of microcalcifications. However, it is often limited by the difficulty of robustly estimating the noise parameters on the images. In this study, a nonparametric image intensity transformation method that equalizes quantum noise in digital mammograms is described. A simple Look-Up-Table for Quantum Noise Equalization (LUT-QNE) is determined based on the assumption that noise properties do not vary significantly across the images. This method was evaluated on a dataset of 252 raw digital mammograms by comparing noise statistics before and after applying LUT-QNE. Performance was also tested as a preprocessing step in two microcalcification detection schemes. Results show that the proposed method statistically significantly improves microcalcification detection performance.
['Alessandro Bria', 'Claudio Marrocco', 'Jan-Jurre Mordang', 'Nico Karssemeijer', 'Mario Molinara', 'Francesco Tortorella']
LUT-QNE: Look-Up-Table Quantum Noise Equalization in Digital Mammograms
855,511
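The LUT in the paper above is estimated nonparametrically from the mammograms themselves; as a stand-in illustration of the underlying idea, the sketch below builds a LUT from the parametric Anscombe square-root transform, which equalizes Poisson (quantum) noise across intensity levels. The Anscombe assumption replaces the paper's nonparametric estimate.

```python
import numpy as np

def anscombe_lut(max_intensity=4095):
    """Build a look-up table mapping raw intensity -> variance-stabilized value.

    For Poisson-distributed counts x, 2*sqrt(x + 3/8) has approximately unit
    variance regardless of the mean, so quantum noise becomes equalized.
    """
    x = np.arange(max_intensity + 1, dtype=np.float64)
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def apply_lut(image, lut):
    return lut[image]

# Toy check: noise std grows with intensity before the LUT, is flat after it.
rng = np.random.default_rng(0)
lut = anscombe_lut()
for mean in (10, 100, 1000):
    raw = rng.poisson(mean, size=100_000)
    print(mean, raw.std().round(2), apply_lut(raw, lut).std().round(2))
```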
This paper presents three methods that can be used to recognize paraphrases. They all employ string similarity measures applied to shallow abstractions of the input sentences, and a Maximum Entropy classifier to learn how to combine the resulting features. Two of the methods also exploit WordNet to detect synonyms, and one of them also exploits a dependency parser. We experiment on two datasets: the MSR paraphrasing corpus and a dataset that we automatically created from the MTC corpus. Our system achieves state-of-the-art or better results.
['Prodromos Malakasiotis']
Paraphrase Recognition Using Machine Learning to Combine Similarity Measures
151,637
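A minimal sketch of the general recipe in the abstract above (string-similarity features combined by a maximum-entropy classifier); the three features are illustrative stand-ins for the paper's feature set, and scikit-learn's LogisticRegression serves as the standard MaxEnt model for binary classification.

```python
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def features(s1, s2):
    # Illustrative string-similarity features, not the paper's exact set.
    t1, t2 = s1.lower().split(), s2.lower().split()
    jaccard = len(set(t1) & set(t2)) / len(set(t1) | set(t2))
    edit_ratio = SequenceMatcher(None, s1.lower(), s2.lower()).ratio()
    len_ratio = min(len(t1), len(t2)) / max(len(t1), len(t2))
    return [jaccard, edit_ratio, len_ratio]

# Tiny illustrative training set (label 1 = paraphrase).
pairs = [("the cat sat on the mat", "a cat was sitting on the mat", 1),
         ("he bought a new car", "he purchased a new automobile", 1),
         ("the market fell sharply", "she enjoys hiking on weekends", 0),
         ("open the window please", "close the door now", 0)]
X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)  # MaxEnt for binary classification
print(clf.predict([features("the dog ran fast", "a dog was running quickly")]))
```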
Matrix factorization based techniques, such as nonnegative matrix factorization (NMF) and concept factorization (CF), have attracted a great deal of attention in recent years, mainly due to their ability to perform dimension reduction and sparse data representation. Both techniques are unsupervised in nature and thus do not make use of a priori knowledge to guide the clustering process, which can lead to inferior performance in some scenarios. As a remedy, a semi-supervised learning method called Pairwise Constrained Concept Factorization (PCCF) was introduced to incorporate pairwise constraints into the CF framework. Despite its improved performance, PCCF uses only a priori knowledge and neglects the proximity information of the whole data distribution; this can lead to rather poor performance (although slightly improved compared to CF) when only limited a priori information is available. To address this issue, we propose in this paper a novel method called Constrained Neighborhood Preserving Concept Factorization (CNPCF). CNPCF utilizes both a priori knowledge and the local geometric structure of the dataset to guide its clustering. Experimental studies on three real-world clustering tasks demonstrate that our method yields a better data representation and achieves much improved clustering performance in terms of accuracy and mutual information compared to state-of-the-art techniques.
['Mei Lu', 'Li Zhang', 'Xiangjun Zhao', 'Fanzhang Li']
Constrained neighborhood preserving concept factorization for data representation
702,030
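For orientation, a sketch of plain, unconstrained concept factorization via the standard multiplicative updates; PCCF and the proposed CNPCF add pairwise-constraint and neighborhood-preserving terms on top of this baseline, which are not reproduced here.

```python
import numpy as np

def concept_factorization(X, k, iters=200, eps=1e-9):
    """Plain CF: X ~ X W V^T with nonnegative W, V, via multiplicative updates."""
    n = X.shape[1]
    K = X.T @ X                      # n x n kernel of the (nonnegative) data
    rng = np.random.default_rng(0)
    W = rng.random((n, k))
    V = rng.random((n, k))
    for _ in range(iters):
        W *= (K @ V) / (K @ W @ (V.T @ V) + eps)
        V *= (K @ W) / (V @ (W.T @ K @ W) + eps)
    return W, V

X = np.abs(np.random.default_rng(1).random((20, 100)))  # 20 features, 100 samples
W, V = concept_factorization(X, k=3)
clusters = V.argmax(axis=1)          # cluster assignment from the representation
print(np.bincount(clusters))
```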
Recently there has been much interest in combining the speed of layer-2 switching with the features of layer-3 routing. This has been prompted by numerous proposals, including IP Switching [1], Tag Switching [2], ARIS [3], CSR [4], and IP over ATM [5]. In this paper, we study IP Switching and evaluate the performance claims made by Newman et al. in [1] and [6]. In particular, using ten network traces, we study how well IP Switching performs with traffic found in campus, corporate, and Internet Service Provider (ISP) environments. Our main finding is that IP Switching will lead to a high proportion of datagrams being switched: over 75% in all of the environments we studied. We also investigate the effects that different flow classifiers and various timer values have on performance, and note that some choices can result in a large VC space requirement. Finally, we present recommendations for the flow classifier and timer values, as a function of the VC space of the switch and the network environment being served.
['Steven Lin', 'Nick McKeown']
A simulation study of IP switching
406,938
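A toy sketch of the kind of trace-driven evaluation the abstract above describes: a flow classifier promotes a flow to a switched VC after a threshold number of packets, and an idle timer reclaims VC space. The trace format, threshold, and timer value are illustrative assumptions.

```python
from collections import defaultdict

def simulate(trace, promote_after=3, idle_timeout=60.0):
    """trace: iterable of (timestamp, flow_id). Returns the fraction of
    switched datagrams and peak VC usage under a simple classifier."""
    pkt_count = defaultdict(int)   # packets seen per flow
    vc_expiry = {}                 # flow_id -> time its VC's idle timer expires
    switched = total = peak_vcs = 0
    for t, flow in trace:
        total += 1
        # Reclaim VCs whose idle timer has run out.
        for f in [f for f, exp in vc_expiry.items() if exp <= t]:
            del vc_expiry[f]
        pkt_count[flow] += 1
        if flow in vc_expiry:
            switched += 1                       # datagram cut through on a VC
            vc_expiry[flow] = t + idle_timeout
        elif pkt_count[flow] >= promote_after:
            vc_expiry[flow] = t + idle_timeout  # promote flow to a switched VC
        peak_vcs = max(peak_vcs, len(vc_expiry))
    return switched / total, peak_vcs

trace = [(i * 0.1, f"flow{i % 5}") for i in range(1000)]
print(simulate(trace))   # (fraction switched, peak VC space used)
```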
We present a new interaction handling model for physics-based sound synthesis in virtual environments. A new three-level surface representation for describing object shapes, visible surface bumpiness, and microscopic roughness (e.g. friction) is proposed to model surface contacts at varying resolutions for automatically simulating rich, complex contact sounds. This new model can capture various types of surface interaction, including sliding, rolling, and impact with a combination of three levels of spatial resolutions. We demonstrate our method by synthesizing complex, varying sounds in several interactive scenarios and a game-like virtual environment. The three-level interaction model for sound synthesis enhances the perceived coherence between audio and visual cues in virtual reality applications.
['Zhimin Ren', 'Hengchin Yeh', 'Ming C. Lin']
Synthesizing contact sounds between textured models
278,985
Human Brainnetome Atlas and Its Potential Applications in Brain-Inspired Computing
['Lingzhong Fan', 'Hai Li', 'Shan Yu', 'Tianzi Jiang']
Human Brainnetome Atlas and Its Potential Applications in Brain-Inspired Computing
953,311
Vaccination of both newborns and susceptibles is included in a transmission model for a disease that confers immunity. The interplay of the vaccination strategy together with the vaccine efficacy and waning is studied. In particular, it is shown that a backward bifurcation leading to bistability can occur. Under mild parameter constraints, compound matrices are used to show that each orbit limits to an equilibrium. In the case of bistability, this global result requires a novel approach since there is no compact absorbing set.
['Julien Arino', 'C. Connell McCluskey', 'P. van den Driessche']
GLOBAL RESULTS FOR AN EPIDEMIC MODEL WITH VACCINATION THAT EXHIBITS BACKWARD BIFURCATION
27,068
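For concreteness, one standard way to write such a model (an assumed generic form, not a transcription of the authors' exact system): a fraction $\alpha$ of newborns is vaccinated, susceptibles are vaccinated at rate $\phi$, vaccine protection wanes at rate $\theta$, and imperfect efficacy leaves vaccinated individuals with a reduced infection rate $\sigma\beta$:

```latex
\begin{aligned}
S' &= (1-\alpha)\mu - \beta S I - (\mu + \phi)S + \theta V,\\
I' &= \beta S I + \sigma\beta V I - (\mu + \gamma) I,\\
V' &= \alpha\mu + \phi S - \sigma\beta V I - (\mu + \theta) V.
\end{aligned}
```

With $\sigma > 0$ (an imperfect vaccine), a stable endemic equilibrium can coexist with the disease-free equilibrium even when the reproduction number is below one; this is the backward bifurcation and bistability referred to above.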
SICRaS: A Semantic Big Data Platform for Fighting Tax Evasion and Supporting Social Policy Making.
['Paolo Bouquet', 'Giovanni Adinolfi', 'Lorenzo Zeni', 'Stefano Bortoli']
SICRaS: A Semantic Big Data Platform for Fighting Tax Evasion and Supporting Social Policy Making.
800,060
Message sequence charts (MSCs) are one widespread method for understanding interactions between components within complex systems. Although the language for MSCs is standardized, techniques for displaying them are far from standard, and the current text- and typography-based methods do not scale to industrial-sized MSCs. We have developed some novel displays of MSCs that use color, interaction, and linked views to display large MSCs, and we verified their effectiveness with a user study.
['Stephen G. Eick', 'Amy R. Ward']
An interactive visualization for message sequence charts
131,858
We have designed a mobile robot system architecture that uses a blackboard to coordinate and integrate several real-time activities. An activity is an organizational unit, or module, designed to perform a specific function, such as traversing a hallway, going down steps, crossing over an open channel in the floor, or tracking a landmark. An activity resembles a behavior in that it controls the robot to perform a specific task. It differs from a behavior in that it is designed to perform the specific task in a narrow application domain, whereas a behavior generally resembles a biological response, that is, an organism's response to a stimulus. The activity-based blackboard system consists of two hierarchical layers for strategic and reactive reasoning: a blackboard database to keep track of the state of the world and a set of activities to perform real-time navigation.
['Ramiro Liscano', 'Allan Manz', 'Elizabeth R. Stuck', 'Reda E. Fayek', 'J.-Y. Tigli']
Using a blackboard to integrate multiple activities and achieve strategic reasoning for mobile-robot navigation
474,355
Ultra wide-band (UWB) is now under consideration as an alternative physical layer technology for wireless PAN. UWB radio uses base-band pulses of very short duration, thereby spreading the energy of the radio signal very thinly over several gigahertz. The power spectral density (PSD) of UWB signals consists of a continuous component and a discrete component. Generally speaking, the discrete component presents greater interference to narrow-band communication systems than the continuous component. Frame synchronization is commonly used in multiple access systems, including wireless PAN systems, and the sync word generates a strong discrete PSD component. In this paper, we devise a more efficient and better-performing mechanism to suppress the discrete component of the PSD of UWB signals by randomizing the pattern of the UWB signal. The mechanism can also be applied to payload data to smooth the PSD of UWB signals.
['Shaomin Samuel Mo', 'Alexander D. Gelman', 'Jay Gopal']
Frame synchronization in UWB using multiple sync words to eliminate line frequencies
489,401
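A numerical sketch of the core idea above: a sync word repeated every frame produces strong discrete lines in the PSD, and scrambling its polarity with a pseudo-random sequence whitens them. The pulse pattern and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sync_word = np.array([1, 1, -1, 1, -1, -1, 1, 1])   # fixed bipolar sync pattern
frames = 256

periodic = np.tile(sync_word, frames)               # same word every frame -> spectral lines
pn = rng.choice([-1, 1], size=frames)               # per-frame pseudo-random polarity
randomized = np.concatenate([w * sync_word for w in pn])

def psd(x):
    X = np.fft.rfft(x)
    return (np.abs(X) ** 2) / len(x)

p1, p2 = psd(periodic), psd(randomized)
# Peak-to-mean ratio: large for the periodic signal (strong discrete component),
# much smaller after randomization (smoothed spectrum).
print(p1.max() / p1.mean(), p2.max() / p2.mean())
```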
Planning and reasoning with processes
['Brian Drabble']
Planning and reasoning with processes
426,563
Voltage/Frequency Scaling (VFS) and Device Power Management (DPM) are two popular techniques commonly employed to save energy in real-time embedded systems. VFS policies aim at reducing the CPU energy, while DPM-based solutions involve putting the system components (e.g., memory or I/O devices) to low-power/sleep states at runtime, when sufficiently long idle intervals can be predicted. Despite numerous research papers that tackled the energy minimization problem using VFS or DPM separately, the interactions of these two popular techniques are not yet well understood. In this paper, we undertake an exact analysis of the problem for a real-time embedded application running on a VFS-enabled CPU and using multiple devices. Specifically, by adopting a generalized system-level energy model, we characterize the variations in different components of the system energy as a function of the CPU processing frequency. Then, we propose a provably optimal and efficient algorithm to determine the optimal CPU frequency as well as device state transition decisions to minimize the system-level energy. We also extend our solution to deal with workload variability. The experimental evaluations confirm that substantial energy savings can be obtained through our solution that combines VFS and DPM optimally under the given task and energy models.
['Vinay Devadas', 'Hakan Aydin']
On the Interplay of Voltage/Frequency Scaling and Device Power Management for Frame-Based Real-Time Embedded Applications
277,298
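A toy sketch of the system-level trade-off analyzed above: lowering the CPU frequency cuts dynamic CPU energy but stretches execution time, during which devices and leakage keep drawing power, so total energy is minimized at an intermediate frequency. The cubic power model and all constants are illustrative assumptions, not the paper's exact model.

```python
# Illustrative system-level energy model:
#   E(f) = (a*f^3 + P_dev + P_static) * (cycles / f)
# CPU dynamic power ~ a*f^3; devices and leakage draw power for the whole runtime.
CYCLES = 2e9          # work to finish before the deadline
A = 1e-27             # CPU switching-capacitance constant (illustrative)
P_DEV = 0.4           # W, devices kept active during execution
P_STATIC = 0.2        # W, leakage
DEADLINE = 4.0        # s

def system_energy(f):
    t = CYCLES / f
    return (A * f**3 + P_DEV + P_STATIC) * t, t

freqs = [0.5e9, 0.8e9, 1.0e9, 1.4e9, 2.0e9]          # discrete frequency set
feasible = [(system_energy(f), f) for f in freqs if CYCLES / f <= DEADLINE]
(best_energy, runtime), best_f = min(feasible)        # pick the energy-optimal f
print(f"f* = {best_f/1e9:.1f} GHz, E = {best_energy:.3f} J, t = {runtime:.2f} s")
```

With these constants the minimum falls at an interior frequency (0.8 GHz), illustrating why neither the slowest feasible nor the fastest frequency is system-level optimal.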
Feature extraction and classification are two intertwined components in pattern recognition. Our hypothesis is that for each type of target, there exists an optimal set of features in conjunction with a specific classifier, which can yield the best performance in terms of classification accuracy using the least amount of computation, measured by the number of features used. In this paper, our study is in the context of an application in wireless sensor networks (WSNs). Due to the extremely limited resources on each sensor platform, the decision making is prone to faults, making sensor fusion a necessity. We present a concept of dynamic target classification in WSNs. The main idea is to dynamically select the optimal combination of features and classifiers based on the "probability" that the target to be classified might belong to a certain category. We use two data sets to validate our hypothesis and derive the optimal combination sets by minimizing a cost function. We apply the proposed algorithm to a scenario of collaborative target classification among a group of sensors in WSNs. Experimental results show that our approach can significantly reduce the computational time and at the same time achieve better classification accuracy, compared with traditional classification approaches, making it a viable solution in practice.
['Ying Sun', 'Hairong Qi']
Dynamic target classification in wireless sensor networks
447,963
Bandwidth Efficient PIR from NTRU.
['Yarkin Doröz', 'Berk Sunar', 'Ghaith Hammouri']
Bandwidth Efficient PIR from NTRU.
799,543
Data for On-Line Analytical Processing (OLAP) systems are traditionally managed by relational databases. Unfortunately, it becomes difficult to manage big data (very large volumes of data). In such a context, Not-Only SQL (NoSQL) environments can, as an alternative, provide scalability while retaining some flexibility for an OLAP system. We therefore define rules for converting a star schema, as well as its optimization, the lattice of pre-computed aggregates, into two NoSQL logical models: column-oriented and document-oriented. Using these rules, we implement and analyze two decision-support systems, one per model, with MongoDB and HBase. We compare them on the phases of data loading (with data generated using the TPC-DS benchmark), lattice computation, and querying.
['Max Chevalier', 'Mohammed El Malki', 'Arlind Kopliku', 'Olivier Teste', 'Ronan Tournier']
NoSQL multidimensional data warehouses
802,780
A simple discrete-time model of a thermal unit has been formally developed for designing automatic generation control (AGC) controllers. The model has been developed using data obtained from specific tests and historical records. It consists of a nonlinear block followed by a linear one: the nonlinear block consists of a dead band and a load change rate limiter, while the linear block consists of a second-order linear model and an offset. Although most of these elements have already been included in unit models for AGC presented in the literature, some confusion exists as to which of them are necessary; this is clarified in this paper. It has been found that the unit response is mainly determined by the rate limiter, while the other model components serve to better fit the real response. An identification procedure is proposed to estimate the values of the model's parameters.
['Ignacio Egido', 'Fidel Fernandez-Bernal', 'Luis Rouco', 'Eloisa Porras', 'Angel Saiz-Chicharro']
Modeling of thermal generating units for automatic generation control purposes
154,538
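A discrete-time sketch of the model structure described above (dead band and rate limiter feeding a second-order linear block with an offset); all numeric parameter values are placeholders for the identified ones.

```python
import numpy as np

def simulate(setpoint, dt=1.0, deadband=2.0, rate_limit=3.0,
             a1=1.6, a2=-0.64, b=0.04, offset=0.0):
    """Nonlinear block (dead band + rate limiter) followed by a second-order
    linear block y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k] + offset.
    All parameter values are illustrative placeholders."""
    u_prev = 0.0
    y = [0.0, 0.0]
    out = []
    for r in setpoint:
        # Dead band: ignore small setpoint changes.
        e = r - u_prev
        if abs(e) < deadband:
            e = 0.0
        # Rate limiter: the unit ramps at most rate_limit MW per step.
        u = u_prev + np.clip(e, -rate_limit * dt, rate_limit * dt)
        u_prev = u
        # Second-order linear block with offset (poles at 0.8, DC gain 1).
        y_k = a1 * y[-1] + a2 * y[-2] + b * u + offset
        y.append(y_k)
        out.append(y_k)
    return np.array(out)

step = np.concatenate([np.zeros(10), 50 * np.ones(90)])  # 50 MW setpoint step
print(simulate(step)[-5:])  # the output settles near the commanded value
```

The ramp-limited response dominates the transient, matching the abstract's observation that the rate limiter mainly determines the unit response.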
IT service providers typically must comply with service level agreements (SLAs) that are part of their usage contracts with customers. Not only is IT infrastructure subject to service level guarantees such as availability or response time, but so are service management processes as defined by the IT Infrastructure Library (ITIL), such as change and incident processes and the fulfillment of service requests. SLAs relating to service management processes typically address metrics such as initial response time and fulfillment time. Large service providers have a choice of which internal service delivery team or external service provider they assign to parts of a service process, each provider having different costs or prices associated with different turn-around times at different risk. This choice in QoS and cost of different service providers can be used to manage the trade-off between penalty costs and fulfillment cost. This paper proposes a model as a basis for service provider choice at process runtime, taking into account the progress of a process so far and the availability of service capacity at service suppliers. The model can be used to reduce the total service costs of IT service providers by deciding on alternative delivery teams and external service providers when needed, based on current process performance.
['Genady Grabarnik', 'Heiko Ludwig', 'Larisa Shwartz']
Dynamic management of outsourced service processes’ QoS in a service provider - service supplier environment
503,865
Inside the "Black Box": Investigating the Link between Organizational Readiness and IT Implementation Success.
['Nasser Shahrasbi', 'Guy Paré']
Inside the "Black Box": Investigating the Link between Organizational Readiness and IT Implementation Success.
585,539
In this paper, we revisit the point-in-polyhedron problem. After reviewing previous work, we develop further insight into the problem. We then claim that, for a given testing point and a three-dimensional polyhedron, a single determining triangle can be found which suffices to determine whether the point is inside or outside the polyhedron. This work can be considered to be an extension and implementation of Horn's work, which inspired us to propose a theorem for obtaining determining triangles. Building upon this theorem, algorithms are then presented, implemented, and tested. The results show that although our code has the same asymptotic time efficiency as commonly used octree-based ray crossing methods, in practice it is usually several times and sometimes more than ten times faster, while other costs such as preprocessing time and memory requirements remain the same. The ideas proposed in this paper are simple and general. They thus extend naturally to multi-material models, i.e., polyhedrons subdivided into smaller regions by internal boundaries.
['Jianfei Liu', 'Chen Y', 'José M. Maisog', 'George Luta']
A new point containment test algorithm based on preprocessing and determining triangles
433,430
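For contrast with the paper's determining-triangle approach, a sketch of the baseline it improves on: a ray-crossing test over a closed triangulated surface, using the Möller-Trumbore ray/triangle intersection test.

```python
import numpy as np

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore ray/triangle intersection (ray from orig along d)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return False              # ray is parallel to the triangle's plane
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv > eps  # t > 0: hit strictly in front of the origin

def point_in_polyhedron(point, triangles):
    """Odd number of ray crossings of the closed surface -> point is inside."""
    d = np.array([0.5773503, 0.5773503, 0.5773503])  # assumed generic direction
    hits = sum(ray_hits_triangle(point, d, *tri) for tri in triangles)
    return hits % 2 == 1

# Unit tetrahedron as a toy closed polyhedron.
V = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
faces = [(V[0], V[1], V[2]), (V[0], V[1], V[3]), (V[0], V[2], V[3]), (V[1], V[2], V[3])]
print(point_in_polyhedron(np.array([0.1, 0.1, 0.1]), faces))  # True (inside)
print(point_in_polyhedron(np.array([1.0, 1.0, 1.0]), faces))  # False (outside)
```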
In the editorial by J.C. Bezdek (ibid., p. 1), an example is presented to demonstrate differences between fuzzy membership and probability. The authors argue that probability can be used in a way much more closely analogous to this use of fuzzy membership, weakening the argument for the latter.
['William H. Woodall', 'Robert E. Davis']
Comments on "Editorial: fuzzy models - what are they and why?"
535,307
Binary patterns electrically set into a 12 element PLZT line composer have been recorded on a thermoplastic-photoconductor recording medium as Fourier transform (FT) holograms. A short description of the line composer, thermoplastic-photoconductive recording medium and optical system is given as background information for the experimental results. The reconstructed patterns show contrast ratios of 40:1.
['Charles D. Butter', 'T. C. Lee']
Thermoplastic Holographic Recording of Binary Patterns in PLZT Line Composer
127,061
It has been shown that the band-gap underestimated by DFT/LDA(GGA) can be efficiently corrected by an averaging procedure of transition energies over a region close to the direct band-gap transition, which we call the ∆(EIG) method (differences in the Kohn-Sham eigenvalues). For small excitations the averaging appears to be equivalent to the ∆(SCF) approach (differences in the self-consistent energies), which is a consequence of Janak's theorem and has been confirmed numerically. A Gaussian distribution in k-space has been used for the electronic excitation (occupation numbers in the ∆(SCF) approach, eigenenergy sampling in the ∆(EIG) approach). A systematic behavior of the k-space localization parameter σk correcting the band-gap has been observed in numerical experiments. On that basis, some sampling schemes for band-gap correction have been proposed and tested in the prediction of the band-gap behavior in the InxGa(1−x)N semiconducting alloy, and very good agreement with independent calculations has been obtained. In the context of this work, the issue of electron localization in r-space is discussed which, as predicted by Mori-Sanchez et al. (P. Mori-Sanchez, A.J. Cohen, W. Yang, Phys. Rev. Lett. 100 (2008) 146401), should reduce the effect of the convex behavior of the LDA/GGA functionals and improve the band-gap prediction within DFT/LDA(GGA). A scheme for electron localization in r-space is suggested.
['P. Scharoch', 'M. J. Winiarski']
An efficient method of DFT/LDA band-gap correction
33,712
In this paper, we compare the use of argument structure in the Chinese verb sense taxonomy (CVT), a hierarchical structure constituted by verb lexical classes, and the Chinese Propbank (CPB). The major difference in the use of argument structure between these two independent resources is that CVT uses argument structure to cluster verb senses, while CPB uses it to distinguish verb senses from each other. First, we introduce the two language resources; the use of argument structure in them is then presented and discussed. In the end, we find that there is a way to link these two independent resources.
['Xiaopeng Bai', 'Bin Li']
Comparing Argument Structure in Chinese Verb Taxonomy and Chinese Propbank
645,275
We have investigated competitive elements and different forms of feedback through automated assessment in a Data Structures and Algorithms course. It is given at the start of the second year and has about 140 students. In 2011 we investigated the effects of introducing competitive elements utilizing automated assessment. In 2012 we investigated how feedback through automated assessment on the labs influences students' ways of working, their performance, and their relations to the examining staff. The students get immediate feedback concerning correctness and efficiency. When a solution is judged correct, the assistants make sure the program also fulfills requirements such as being well structured. After the course, we investigated the students' attitudes to, and experiences from, using automated assessment via a questionnaire. 80% of the students are positive, and the assessment positively influenced their ways of working; 50% said they put in more effort because of automated judging. Moreover, assessment is seen as more objective, as it is executed in the exact same manner for everyone. Both of these statements are confirmed by assessing the labs from 2011 using the same automated tool as was used in 2012. Our conclusions are that feedback through automated assessment gives the desired positive effects and is perceived as positive by the students.
['Tommy Färnqvist', 'Fredrik Heintz']
Competition and Feedback through Automated Assessment in a Data Structures and Algorithms Course
830,939
Oblivious low-distortion subspace embeddings are a crucial building block for numerical linear algebra problems. We show for any real $p$, $1 \le p < \infty$, given a matrix $M \in \mathbb{R}^{n \times d}$ with $n \gg d$, with constant probability we can choose a matrix $\Pi$ with $\max(1, n^{1-2/p})\,\mathrm{poly}(d)$ rows and $n$ columns so that simultaneously for all $x \in \mathbb{R}^d$, $\|Mx\|_p \le \|\Pi M x\|_\infty \le \mathrm{poly}(d)\,\|Mx\|_p$. Importantly, $\Pi M$ can be computed in the optimal $O(\mathrm{nnz}(M))$ time, where $\mathrm{nnz}(M)$ is the number of non-zero entries of $M$. This generalizes all previous oblivious subspace embeddings, which required $p \in [1, 2]$ due to their use of $p$-stable random variables. Using our matrices $\Pi$, we also improve the best known distortion of oblivious subspace embeddings of $\ell_1$ into $\ell_1$ with $\tilde{O}(d)$ target dimension in $O(\mathrm{nnz}(M))$ time from $\tilde{O}(d^3)$ to …
['David P. Woodruff', 'Qin Zhang']
Subspace Embeddings and $\ell_p$-Regression Using Exponential Random Variables
101,319
Development of a Pneumatic Surgical Manipulator IBIS IV
['Kotaro Tadano', 'Kenji Kawashima', 'Kazuyuki Kojima', 'Naofumi Tanaka']
Development of a Pneumatic Surgical Manipulator IBIS IV
978,543
Petri nets are often used to model and analyze workflows. Many workflow languages have been mapped onto Petri nets in order to provide formal semantics or to verify correctness properties. Typically, the so-called Workflow nets are used to model and analyze workflows and variants of the classical soundness property are used as a correctness notion. Since many workflow languages have cancelation features, a mapping to workflow nets is not always possible. Therefore, it is interesting to consider workflow nets with reset arcs. Unfortunately, soundness is undecidable for workflow nets with reset arcs. In this paper, we provide a proof and insights into the theoretical limits of workflow verification.
['Wil M. P. van der Aalst', 'Kees M. van Hee', 'Arthur H. M. ter Hofstede', 'Natalia Sidorova', 'H. M. W. Verbeek', 'Marc Voorhoeve', 'Moe Thandar Wynn']
Soundness of Workflow Nets with Reset Arcs
357,317
Recent studies involving the 3-dimensional conformation of chromatin have revealed the important role it has to play in different processes within the cell. These studies have also led to the discovery of densely interacting segments of the chromosome, called topologically associating domains. The accurate identification of these domains from Hi-C interaction data is an interesting and important computational problem for which numerous methods have been proposed. Unfortunately, most existing algorithms designed to identify these domains assume that they are non-overlapping whereas there is substantial evidence to believe a nested structure exists. We present an efficient methodology to predict hierarchical chromatin domains using chromatin conformation capture data. Our method predicts domains at different resolutions and uses these to construct a hierarchy that is based on intrinsic properties of the chromatin data. The hierarchy consists of a set of non-overlapping domains, that maximize intra-domain interaction frequencies, at each level. We show that our predicted structure is highly enriched for CTCF and various other chromatin markers. We also show that large-scale domains, at multiple resolutions within our hierarchy, are conserved across cell types and species. Our software, Matryoshka, is written in C++11 and licensed under GPL v3; it is available at https://github.com/COMBINE-lab/matryoshka.
['Laraib Malik', 'Robert Patro']
Rich chromatin structure prediction from Hi-C data
712,451
Communication system mismatch represents a major influence in the losses of both speech quality and speaker recognition system performance. Although microphone and handset differences have been considered for speaker recognition (e.g., NIST SRE), nonlinear communication system differences, such as modulation/demodulation (Mod/DeMod) carrier mismatch, have yet to be explored. While such mismatch was common in traditional analog communications, today, with the diversity and blending of communication technologies, it is reconsidered as a major distortion. This paper is focused on estimating and correcting the frequency-shift distortion resulting from Mod/DeMod carrier frequency mismatch in high-frequency single sideband (HF-SSB) speech. To overcome the drawbacks of existing solutions, a two-step algorithm is proposed to improve estimation performance. In the first step, the offset of speech is scaled to a small frequency interval, which eliminates or reduces the nonuniqueness issue due to the periodicity within the spectrum; the second step performs fine tuning within the estimated predetermined uniqueness interval (UI). For the first time, a statistical framework is developed for UI detection, where an innovative acoustic feature is proposed to represent alternative frequency shifts. Additionally, in the estimation process, statistical techniques such as GMM-SVM, i-Vector, and deep neural networks are applied in the first step to improve the estimation accuracy. An evaluation using DARPA RATS HF-SSB data shows that the proposed algorithm achieves a significant improvement in estimation performance (up to +35.6% improvement in accuracy), speech quality measurement (up to +27.3% relative improvement in PESQ score), and speaker verification (up to +59.9% relative improvement in equal error rate).
['Hua Xing', 'John H. L. Hansen']
Single Sideband Frequency Offset Estimation and Correction for Quality Enhancement and Speaker Recognition
928,382
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, which is a recent video coding paradigm based on the Slepian-Wolf and WZ theorems. Contrary to available prediction-based standard video codecs, WZ video coding exploits the source statistics at the decoder, allowing the development of simpler encoders. Until now, WZ video coding did not reach the compression efficiency performance of conventional video coding solutions, mainly due to the poor quality of the side information, which is an estimate of the original frame created at the decoder in the most popular WZ video codecs. In this context, this paper proposes a novel side information refinement (SIR) algorithm for a transform domain WZ video codec based on a learning approach where the side information is successively improved as the decoding proceeds. The results show significant and consistent performance improvements regarding state-of-the-art WZ and standard video codecs, especially under critical conditions such as high motion content and long group of pictures sizes.
['Ricardo Martins', 'Catarina Brites', 'João Ascenso', 'Fernando Lobo Pereira']
Refining Side Information for Improved Transform Domain Wyner-Ziv Video Coding
190,556
Decremental Possibilistic K-Modes.
['Asma Ammar', 'Zied Elouedi', 'Pawan Lingras']
Decremental Possibilistic K-Modes.
797,765
This paper presents novel computer-aided design (CAD) techniques for mitigating IR-drops in field-programmable gate arrays (FPGAs). The proposed placement and routing rely on reducing the switching activities in local regions of the FPGA fabric to improve the profile of the supply voltage distribution. The proposed techniques reduce IR-drops and the variance of the supply voltage distribution across all the nodes in the power grid network. The proposed CAD techniques are efficient, as they do not require solving the power grid model at every placement and routing iteration. A reduction of up to 53% in maximum IR-drop and up to 66% in the standard deviation of the supply voltage is obtained from the design techniques proposed in this paper, with an average impact of 3% on circuit delay.
['Akhilesh Kumar', 'Mohab Anis']
IR-Drop Management in FPGAs
338,259
Purpose – The purpose of this paper is to report on results of an investigation into the impact of adding privacy salient information (defined through the theory of planned behaviour) into the user interface (UI) of a faux social network. Design/methodology/approach – Participants were asked to create their profiles on a new social network specifically for Nottingham Trent University students by answering a series of questions that vary in the sensitivity of the personal information requested. A treatment was designed that allows participants to review their answers and make amendments based on suggestions from the treatment. A dynamic privacy score that improves as amendments are made was designed to encourage privacy-oriented behaviour. Results from the treatment group are compared to a control group. Findings – Participants within the treatment group disclosed less than those in the control group, with statistical significance. The more sensitive questions in particular were answered less often when compared to the control group.
['Thomas Hughes-Roberts']
Privacy as a secondary goal problem: an experiment examining control
566,628
Although the quality, performance and future of public library services in the UK is a matter of debate, there is little doubt that in recent years, despite claims relating to the emergence of a cyber-society, interest in library buildings and the library as ‘place’ has been intense, almost matching that seen during the Carnegie era of mass public library building in the early 20th century. Tapping into this renewed enthusiasm for the library built form, this article analyses evidence collected by the Mass-Observation Archive (MOA) in response to a request for written commentary on public library buildings, an investigation commissioned by the author. The MOA contains evidence, stretching back to the 1930s, of the British public’s daily lives and attitudes. The Archive’s data-collection method takes the form of essay-style contributions, varying from a few sentences to thousands of words, submitted from anonymous volunteer correspondents. A total of 180 essays (from 121 women and 59 men) were received, an...
['Alistair Black']
‘We don’t do public libraries like we used to’: Attitudes to public library buildings in the UK at the start of the 21st century
513,479
Using helpful sets to improve graph bisections.
['Ralf Diekmann', 'Burkhard Monien', 'Robert Preis']
Using helpful sets to improve graph bisections.
780,209
When specifying change for an existing system, the history and functionality of the system to be replaced has to be considered. This avoids neglecting important system functionality and repeating errors. The properties and the rationale behind the existing system can be elicited by analysing concrete system-usage scenarios [Pohl, K., Weidenhaupt, K., Domges, R., Haumer, P., Jarke, M., Klamma, R., 1999. Process-integrated (modelling) environments (PRIME): foundation and implementation framework. ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 8, no. 4, pp. 343–410]. The results of the analysis of the existing system are then typically represented using conceptual models. To establish conceptual models of high quality, reviewing the models is common practice. The problem faced when reviewing conceptual models is that the reviewer cannot assess, and therefore understand, the basis (concrete system usage) on which the conceptual models were built.

In this paper, we present an approach to overcome this problem. We establish Extended Traceability by recording concrete system-usage scenarios using rich media (e.g. video, speech, graphics) and interrelating the recorded observations with the conceptual models. We discuss the main improvements for review processes and illustrate the advantages with excerpts from a case study performed in a mechanical engineering company.
['Peter Haumer', 'Matthias Jarke', 'Klaus Pohl', 'Klaus Weidenhaupt']
Improving reviews of conceptual models by extended traceability to captured system usage
193,451
With the accelerated development of robot technologies, optimal control becomes one of the central themes of research. In traditional approaches, the controller, by its internal functionality, finds appropriate actions on the basis of the history of sensor values, guided by the goals, intentions, objectives, learning schemes, and so forth. While very successful with classical robots, these methods run into severe difficulties when applied to soft robots, a new field of robotics with large interest for human-robot interaction. We claim that a novel controller paradigm opens new perspective for this field. This paper applies a recently developed neuro controller with differential extrinsic synaptic plasticity to a muscle-tendon driven arm-shoulder system from the Myorobotics toolkit. In the experiments, we observe a vast variety of self-organized behavior patterns: when left alone, the arm realizes pseudo-random sequences of different poses. By applying physical forces, the system can be entrained into definite motion patterns like wiping a table. Most interestingly, after attaching an object, the controller gets in a functional resonance with the object's internal dynamics, starting to shake spontaneously bottles half-filled with water or sensitively driving an attached pendulum into a circular mode. When attached to the crank of a wheel the neural system independently develops to rotate it. In this way, the robot discovers affordances of objects its body is interacting with.
['Georg Martius', 'Rafael Hostettler', 'Alois Knoll', 'Ralf Der']
Compliant control for soft robots: Emergent behavior of a tendon driven anthropomorphic arm
964,650
Actian Vector in Hadoop (VectorH for short) is a new SQL-on-Hadoop system built on top of the fast Vectorwise analytical database system. VectorH achieves fault tolerance and storage scalability by relying on HDFS, and extends the state-of-the-art in SQL-on-Hadoop systems by instrumenting the HDFS replication policy to optimize read locality. VectorH integrates with YARN for workload management, achieving a high degree of elasticity. Even though HDFS is an append-only filesystem, and VectorH supports (update-averse) ordered tables, trickle updates are possible thanks to Positional Delta Trees (PDTs), a differential update structure that can be queried efficiently. We describe the changes made to single-server Vectorwise to turn it into a Hadoop-based MPP system, encompassing workload management, parallel query optimization and execution, HDFS storage, transaction processing and Spark integration. We evaluate VectorH against HAWQ, Impala, SparkSQL and Hive, showing orders of magnitude better performance.
['Andrei Costea', 'Adrian Ionescu', 'Bogdan Răducanu', 'Michał Switakowski', 'Cristian Bârca', 'Juliusz Sompolski', 'Alicja Łuszczak', 'Michał Szafrański', 'Giel de Nijs', 'Peter Boncz']
VectorH: Taking SQL-on-Hadoop to the Next Level
819,867
Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality.

The first part of this article studies the smoothness and approximability of the solution map, that is, the map $a\mapsto u(a)$ where $a$ is the parameter value and $u(a)$ is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs, the parametric smoothness of this map is typically holomorphic and also highly anisotropic, in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of $n$-term approximations to the solution map for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best $n$-term approximation, sparsity, and $n$-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms.

The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second one searches for suitable low-dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
['Albert Cohen', 'Ronald A. DeVore']
Approximation of high-dimensional parametric PDEs
482,710
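As a concrete instance of the setting described (assumed here as the usual model problem of this literature, not quoted from the article), consider the affine-parametric diffusion equation:

```latex
-\mathrm{div}\bigl(a(y)\,\nabla u(y)\bigr) = f \ \text{ in } D, \qquad u(y)\big|_{\partial D} = 0,
\qquad a(y) = \bar a + \sum_{j\ge 1} y_j \psi_j, \quad y = (y_j)_{j\ge 1},\ |y_j|\le 1.
```

Uniform ellipticity of $a(y)$ gives holomorphy of $y\mapsto u(y)$, and the decay of $\|\psi_j\|_{L^\infty}$ sets the anisotropy that sparse $n$-term polynomial approximations exploit.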
This study compares six change detection techniques to study land cover change associated with tropical forest (El Rawashda forest reserve, Gedaref State, Sudan). For this site, Landsat 7 Enhanced Thematic Mapper (ETM+) data acquired on March 22, 2003 and Aster data acquired on February 26, 2006 were used. The change detection techniques employed in this study were Post-Classification Comparison (PCC), image differencing of different vegetation indices (Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI) and Transformed Difference Vegetation Index (TDVI)), Principal Component Analysis (PCA), Multivariate Alteration Detection (MAD), Change Vector Analysis (CVA) and Tasseled Cap Analysis (TCA). As field validation data did not exist for 2003, a manual classification was performed, then a change map was conducted to locate and identify change in vegetation. This change map was used as a reference to quantitatively assess the accuracy of each change-detection techniques. Based on accuracy assessment, the most successful technique was the PCC technique with an accuracy of 94%. This was followed by the MAD technique with an accuracy 88.8%. However, among vegetation indices techniques, TDVI stood out as better than NDVI and SAVI in its ability to accurately identify vegetation change.
['Wafa Nori', 'Hussein M. Sulieman', 'Irmgard Niemeyer']
Detection of land cover changes in El Rawashda Forest, Sudan: A systematic comparison
263,241
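A compact sketch of the vegetation-index differencing techniques compared above; the NDVI and SAVI formulas are standard, the TDVI form is the commonly cited one, and the band arrays and two-sigma change threshold are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def savi(nir, red, L=0.5):
    # Soil-Adjusted Vegetation Index with the usual adjustment factor L = 0.5.
    return (1 + L) * (nir - red) / (nir + red + L)

def tdvi(nir, red):
    # Commonly cited form of the Transformed Difference Vegetation Index.
    return 1.5 * (nir - red) / np.sqrt(nir**2 + red + 0.5)

rng = np.random.default_rng(0)
nir_2003, red_2003 = rng.random((2, 64, 64))   # stand-ins for the ETM+ bands
nir_2006, red_2006 = rng.random((2, 64, 64))   # stand-ins for the Aster bands

# Image differencing: pixels whose index change is anomalous are flagged.
for name, vi in [("NDVI", ndvi), ("SAVI", savi), ("TDVI", tdvi)]:
    diff = vi(nir_2006, red_2006) - vi(nir_2003, red_2003)
    changed = np.abs(diff - diff.mean()) > 2 * diff.std()   # illustrative 2-sigma rule
    print(name, f"{changed.mean():.1%} of pixels flagged")
```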
I show that W-like entangled quantum states are not a necessary quantum resource for totally correct anonymous leader election protocols. This is proven by defining a symmetric quantum state that is n-partite SLOCC inequivalent to the W state, and then constructing a totally correct anonymous leader election protocol using this state. This result, which contradicts the previous necessity result of D'Hondt and Panangaden, furthers our understanding of how non-local quantum states can be used as a resource for distributed computation.
['Alexander R. Norton']
W-like states are not necessary for totally correct quantum anonymous leader election
806,834
The Ideal of Program Correctness: Third Computer Journal Lecture
['Tony Hoare']
The Ideal of Program Correctness: Third Computer Journal Lecture
295,663
Learning Speed Improvement Using Multi-GPUs on DNN-Based Acoustic Model Training in Korean Intelligent Personal Assistant
['Donghyun Lee', 'Kwang-Ho Kim', 'Heeeun Kang', 'Sangho Wang', 'Sungyong Park', 'Ji-Hwan Kim']
Learning Speed Improvement Using Multi-GPUs on DNN-Based Acoustic Model Training in Korean Intelligent Personal Assistant
724,486
Unmasking the Mystique: Utilizing Narrative Character-Playing Games to Support English Language Fluency
['Jennifer Killham', 'Adam Saligman', 'Kelli M. Jette']
Unmasking the Mystique: Utilizing Narrative Character-Playing Games to Support English Language Fluency
902,975
A methodology is developed to realize optimal channel input conditional distributions, which maximize the finite-time horizon directed information, for channels with memory and feedback, by information lossless randomized strategies. The methodology is applied to general Time-Varying Multiple Input Multiple Output (MIMO) Gaussian Linear Channel Models (G-LCMs) with memory, subject to average transmission cost constraints of quadratic form. The realizations of optimal distributions by randomized strategies are shown to exhibit a decomposition into a deterministic part and a random part. The decomposition reveals the dual role of randomized strategies: to control the channel output process and to transmit new information over the channels. Moreover, a separation principle is shown between the computation of the optimal deterministic part and the random part of the randomized strategies. The dual role of randomized strategies generalizes Linear-Quadratic-Gaussian (LQG) stochastic optimal control theory to directed information pay-offs. The characterizations of feedback capacity are obtained from the per unit time limits of finite-time horizon directed information, without imposing a priori assumptions such as stability of channel models or ergodicity of channel input and output processes. For time-invariant MIMO G-LCMs with memory, it is shown that whether feedback increases capacity is directly related to the channel parameters and the transmission cost function, through the solutions of Riccati matrix equations; moreover, for unstable channels, feedback capacity is non-zero, provided the power exceeds a critical level.
['Charalambos D. Charalambous', 'Christos K. Kourtellaris', 'Sergey Loyka']
Capacity Achieving Distributions & Information Lossless Randomized Strategies for Feedback Channels with Memory: The LQG Theory of Directed Information-Part II
714,686
A Computational Approach to the Borwein-Ditor Theorem
['Aleksander Galicki', 'André Nies']
A Computational Approach to the Borwein-Ditor Theorem
843,219
For the effect of polarization interference on satellite-to-ground data transmission systems, the major factors that affect polarization interference are identified, a theoretical model of polarization interference is derived, the bit error rate (BER) degradation of different modulation systems and channel coding modes is simulated, the transmission performance test results of each system under the same degree of polarization interference are compared, and the ways in which satellite-to-ground remote sensing data transmission systems are affected by polarization interference at different Eb/N0 working thresholds are analyzed.
['Zhisong Hao', 'Zhiming Zheng', 'Fangmin Xu', 'Zhichao Qin']
Effect analysis of polarization interference on satellite-to-ground remote sensing data transmission
943,115
Human-robot interaction (HRI) is now well enough understood to allow us to build useful systems that can function outside of the laboratory. We are studying long-term interaction in natural user environments and describe the implementation of a robot designed to help individuals effect behavior change while dieting. Our robotic weight loss coach is compared to a standalone computer and a paper log in a controlled study. We describe the software model used to create successful long-term HRI. We summarize the experimental design, analysis, and results of our study, the first where a sociable robot interacts with a user to achieve behavior change. Results show that participants track their calorie consumption and exercise for nearly twice as long when using the robot than with the other methods and develop a closer relationship with the robot. Both are indicators of longer-term success at weight loss and maintenance and show the effectiveness of sociable robots for long-term HRI.
['Cory D. Kidd', 'Cynthia Breazeal']
Robots at home: Understanding long-term human-robot interaction
226,842
This paper presents a novel technique for direct conversion of digital complex time series into the radio frequency (RF) band. Most of the operations in this method are implemented by software and/or digital circuits. The proposed method is composed of signal processing procedures, switching circuits, a phase shifter (an all-digital phase-locked loop), and an analog RF filter. The switching technique, which is used for joint amplitude and phase modulation, makes the method highly power efficient. The signal processing procedure results in a highly linear converter and shapes the power spectrum of the output in order to satisfy the required power masking properties in a given application. The all-digital phase-locked loop, or phase shifter, along with some simple digital circuits, controls the switching times of the output. The complex input sequence is first converted into phase and amplitude after over-sampling, using a CORDIC processor. The phase sequence controls the amplitude and phase of a six-step waveform which has three levels at the output, i.e., zero and ±A(t), where ±A(t) is controlled by the amplitude sequence. In this paper, the theoretical and some practical aspects of the proposed technique are presented.
['Alireza Heidar-barghi', 'Saeed Gazor']
A Digital Architecture for Direct Digital-to-RF Converters
42,864
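A sketch of the first processing step described above, converting the oversampled complex sequence into amplitude and phase; numpy's hypot/arctan2 act as software stand-ins for the hardware CORDIC processor.

```python
import numpy as np

def iq_to_polar(iq, oversample=8):
    """Oversample a complex baseband sequence, then convert to amplitude/phase.
    np.hypot / np.arctan2 are software stand-ins for the CORDIC processor."""
    up = np.repeat(iq, oversample)          # crude zero-order-hold oversampling
    amplitude = np.hypot(up.real, up.imag)  # drives the output level +/- A(t)
    phase = np.arctan2(up.imag, up.real)    # controls the switching instants
    return amplitude, phase

# QPSK-like test sequence.
symbols = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7, 1, 5]))
amp, ph = iq_to_polar(symbols)
print(amp[:4].round(3), ph[:4].round(3))
```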
While printable electronics (PE) has the potential to provide a low-cost means to manufacture electronic solutions, it suffers from serious levels of component variation. While many of the problems facing PE are early technological-development, yield-related problems, some are more fundamental to its manufacturing nature. In this work, methods to construct PE circuits that compensate for these variations are described. These methods include a combination of printing, severing, and ink deposition techniques that are performed both during and after manufacturing. Data obtained from working prototypes made using the PE technology at the National Research Council (NRC) of Canada are described.
['Adam Gordon', 'Gordon W. Roberts', 'Christian Jesús B. Fayomi']
Low-cost trimmable manufacturing methods for printable electronics
878,915
Using recent characterisations of topologies of spaces of vector fields for general regularity classes—e.g., Lipschitz, finitely differentiable, smooth, and real analytic—characterisations are provided of geometric control systems that utilise these topologies. These characterisations can be expressed as joint regularity properties of the system as a function of state and control. It is shown that the common characterisations of control systems in terms of their joint dependence on state and control are, in fact, representations of the fact that the natural mapping from the control set to the space of vector fields is continuous. The classes of control systems defined are new, even in the smooth category. However, in the real analytic category, the class of systems defined is new and deep. What are called “real analytic control systems” in this article incorporate the real analytic topology in a way that has hitherto been unexplored. Using this structure, it is proved, for example, that the trajectories of a real analytic control system corresponding to a fixed open-loop control depend on initial condition in a real analytic manner. It is also proved that control-affine systems always have the appropriate joint dependence on state and control. This shows, for example, that the trajectories of a control-affine system corresponding to a fixed open-loop control depend on initial condition in the manner prescribed by the regularity of the vector fields.
['Saber Jafarpour', 'Andrew D. Lewis']
Locally convex topologies and control theory
567,642
A transacted memory that is implemented using EEPROM technology offers persistence, undoability and auditing. The transacted memory system is formally specified in Z, and refined in two steps to a prototype C implementation / SPIN model. Conclusions are offered both on the transacted memory system itself and on the development process involving multiple notations and tools.
['Pieter H. Hartel', 'Michael J. Butler', 'Eduard de Jong', 'Mark Longley']
Transacted Memory for Smart Cards
512,258
Background: Text mining and data integration methods are gaining ground in the field of health sciences due to the exponential growth of bio-medical literature and information stored in biological databases. While such methods mostly try to extract bioentity associations from PubMed, very few of them are dedicated to mining other types of repositories such as chemical databases.
['Nikolas Papanikolaou', 'Georgios A. Pavlopoulos', 'Theodosios Theodosiou', 'Ioannis S. Vizirianakis', 'Ioannis Iliopoulos']
DrugQuest - a text mining workflow for drug association discovery
816,430
Understanding Open Source Communities as Complex Adaptive Systems: A Case of the R Project Community
['Georg J. P. Link', 'Matt Germonprez']
Understanding Open Source Communities as Complex Adaptive Systems: A Case of the R Project Community
905,881
Multi-label learning has attracted much attention during the past few years. Many multi-label approaches have been developed, mostly working with surrogate loss functions because multi-label loss functions are usually difficult to optimize directly owing to their non-convexity and discontinuity. These approaches are effective empirically; however, little effort has been devoted to the understanding of their consistency, i.e., the convergence of the risk of learned functions to the Bayes risk. In this paper, we present a theoretical analysis of this important issue. We first prove a necessary and sufficient condition for the consistency of multi-label learning based on surrogate loss functions. Then, we study the consistency of two well-known multi-label loss functions, i.e., ranking loss and hamming loss. For ranking loss, our results disclose that, surprisingly, no convex surrogate loss is consistent; we present the partial ranking loss, with which some surrogate losses are proven to be consistent. We also discuss the consistency of univariate surrogate losses. For hamming loss, we show that two multi-label learning methods, i.e., one-vs-all and pairwise comparison, which can be regarded as direct extensions from multi-class learning, are inconsistent in general cases yet consistent under the dominating setting, and similar results also hold for some recent multi-label approaches that are variations of one-vs-all. In addition, we discuss the consistency of learning approaches that address multi-label learning by decomposing it into a set of binary classification problems.
['Wei Gao', 'Zhi-Hua Zhou']
On the Consistency of Multi-Label Learning
805,648
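For reference, the two losses under study, in their standard forms (notation assumed here: $m$ labels, relevant label set $Y$ with complement $\bar Y$, predicted label set $h(x)$, real-valued scores $f_j(x)$):

```latex
\ell_{\mathrm{hamming}}\bigl(h(x), Y\bigr) = \frac{1}{m}\sum_{j=1}^{m}
  \mathbf{1}\bigl[\, j \in h(x)\,\triangle\, Y \,\bigr],
\qquad
\ell_{\mathrm{ranking}}\bigl(f(x), Y\bigr) = \frac{1}{|Y|\,|\bar Y|}
  \sum_{(p,q)\in Y\times\bar Y}\Bigl(\mathbf{1}\bigl[f_p(x) < f_q(x)\bigr]
  + \tfrac{1}{2}\,\mathbf{1}\bigl[f_p(x) = f_q(x)\bigr]\Bigr).
```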
Optical networking special issue based on selected IEEE ICOCN 2015 papers
['Lei Guo', 'Hoon Kim', 'Alan Pak Tao Lau', 'Shilong Pan']
Optical networking special issue based on selected IEEE ICOCN 2015 papers
894,976
In this work we focus on the problem of image caption generation. We propose an extension of the long short term memory (LSTM) model, which we coin gLSTM for short. In particular, we add semantic information extracted from the image as extra input to each unit of the LSTM block, with the aim of guiding the model towards solutions that are more tightly coupled to the image content. Additionally, we explore different length normalization strategies for beam search to avoid bias towards short sentences. On various benchmark datasets such as Flickr8K, Flickr30K and MS COCO, we obtain results that are on par with or better than the current state-of-the-art.
['Xu Jia', 'Efstratios Gavves', 'Basura Fernando', 'Tinne Tuytelaars']
Guiding the Long-Short Term Memory Model for Image Caption Generation
590,227
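To make the gLSTM modification concrete, the NumPy sketch below shows a single step in which a fixed semantic vector g, extracted from the image, enters every gate alongside the word input and the previous hidden state. All weight names and dimensions are illustrative assumptions, not the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glstm_step(x, h_prev, c_prev, g, W, U, G, b):
    """One gLSTM step: a standard LSTM cell whose every gate also sees the
    semantic vector g. W, U, G, b are dicts keyed by gate name."""
    pre = {k: W[k] @ x + U[k] @ h_prev + G[k] @ g + b[k] for k in 'ifoc'}
    i, f, o = sigmoid(pre['i']), sigmoid(pre['f']), sigmoid(pre['o'])
    c = f * c_prev + i * np.tanh(pre['c'])   # cell update, guided by g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d_x, d_h, d_g = 8, 16, 10                    # toy dimensions
W = {k: 0.1 * rng.normal(size=(d_h, d_x)) for k in 'ifoc'}
U = {k: 0.1 * rng.normal(size=(d_h, d_h)) for k in 'ifoc'}
G = {k: 0.1 * rng.normal(size=(d_h, d_g)) for k in 'ifoc'}
b = {k: np.zeros(d_h) for k in 'ifoc'}
h, c = glstm_step(rng.normal(size=d_x), np.zeros(d_h), np.zeros(d_h),
                  rng.normal(size=d_g), W, U, G, b)
print(h.shape, c.shape)   # (16,) (16,)
```

Because g is constant across time steps, it acts as a persistent reminder of the image content while the caption is generated word by word.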
The performance of memory and I/O systems cannot keep up with that of COTS (Commercial Off-The-Shelf) CPUs. PC clusters using COTS CPUs have been employed for HPC. A cache-based processor is far less effective than a vector processor in applications with low spatial locality. Moreover, for HPC, Google-like server farms and database processing, insufficient main memory capacity poses a serious problem. The power consumption of a Google-like server farm or a high-end HPC PC cluster is huge. In order to overcome these problems, we propose the concept of a memory and network enhancer equipped with scatter and gather vector access functions, high-performance network connectivity, and capacity extensibility. Communication mechanisms named LHS and LHC are also proposed; they are architectures for reducing latency for mixed messages with small controlling data and a large data body. Examples of the killer applications of this new type of hardware are presented. This paper presents not only concepts and simulations but also real hardware prototypes named DIMMnet-2 and DIMMnet-3, together with evaluations concerning both memory issues and network issues. We evaluate the module with the NAS CG benchmark (class C) and the Wisconsin benchmarks as applications with memory issues. Although evaluating CG class C is difficult with conventional cycle-accurate simulation methods, we obtained the class C result with our original method. As a result, we find that the module can improve maximum performance by a factor of about 25 on the Wisconsin benchmarks. However, the results on a cache-based PC show that cache-line flushing degrades the acceleration ratio. This shows the high potential of the proposed extended memory module in combination with processors that use DMA-based main memory access and hence need no cache-line flushing, such as the SPUs on Cell/B.E. The effects of the LHS and LHC communication mechanisms on latency are also evaluated.
['Noboru Tanabe', 'Hirotaka Hakozaki', 'Hiroshi Ando', 'Yasunori Dohi', 'Zhengzhe Luo', 'Hironori Nakajo']
An enhancer of memory and network for applications with large-capacity data and non-continuous data accessing
829,632
Improving Power Efficiency in WBAN Communication Using Wake Up Methods
['Stevan Jovica Marinkovic', 'Emanuel M. Popovici', 'Emil Jovanov']
Improving Power Efficiency in WBAN Communication Using Wake Up Methods
563,583
The computer industry is increasingly dependent on open architectural standards for its competitive success. This paper describes a new approach to secure system design in which the various representations of the architecture of a software system are described formally and the desired security properties of the system are proven to hold at the architectural level. The main ideas are illustrated by means of the X/Open distributed transaction processing reference architecture, which is formalized and extended for secure access control as defined by the Bell-LaPadula model. The extension allows vendors to develop individual components independently and with minimal concern about security. Two important observations were gleaned about the implications of incorporating security into software architectures.
['Mark Moriconi', 'Xiaolei Qian', 'Robert A. Riemenschneider', 'Li Gong']
Secure software architectures
246,003
Special Issue: Experimental Economics in Practice: Experimental Economics and Supply-Chain Management.
['Rachel Croson', 'Karen Donohue']
Special Issue: Experimental Economics in Practice: Experimental Economics and Supply-Chain Management.
732,638
Outlier detection is a key element of intelligent financial surveillance systems. The detection procedures generally fall into two categories: comparing every transaction against its account history, and comparing it against a peer group to determine whether the behavior is unusual. The latter approach shows particular merit in efficiently extracting suspicious transactions and reducing the false positive rate. The peer-group analysis concept depends largely on a cross-dataset outlier detection model. In this paper, we propose a new cross outlier detection model based on a distance definition that incorporates the features of financial transaction data (the general idea is illustrated in the sketch following this record). An approximation algorithm accompanying the model is provided to optimize the computation of the deviation from a tested data point to the reference dataset. An experiment based on real bank data blended with synthetic outlier cases shows that our model reduces the false positive rate while markedly enhancing the discriminative rate.
['Tang Jun']
A Peer Dataset Comparison Outlier Detection Model Applied to Financial Surveillance
357,544
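A hedged sketch of the cross-dataset (peer-group) idea referenced above: score each test transaction by its mean distance to its k nearest neighbours in a reference dataset, so that large scores flag deviation from the peer group. This is a generic distance-based formulation, not the paper's exact distance definition or its approximation algorithm.

```python
import numpy as np

def cross_outlier_scores(test, reference, k=5):
    """Mean Euclidean distance from each test point to its k nearest
    reference points; larger scores suggest deviation from the peer group."""
    scores = []
    for x in test:
        d = np.sqrt(((reference - x) ** 2).sum(axis=1))
        scores.append(np.sort(d)[:k].mean())
    return np.array(scores)

rng = np.random.default_rng(1)
reference = rng.normal(0, 1, size=(500, 3))          # peer-group transactions
test = np.vstack([rng.normal(0, 1, size=(5, 3)),     # ordinary transactions
                  [[8.0, 8.0, 8.0]]])                # one synthetic outlier
print(cross_outlier_scores(test, reference).round(2))  # last score is largest
```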
This report gives an overview of the second ECOOP Workshop on Formal Techniques for Java Programs. It explains the motivation for such a workshop and summarizes the presentations and discussions.
['Sophia Drossopoulou', 'Susan Eisenbach', 'Bart Jacobs', 'Gary T. Leavens', 'Peter Müller', 'Arnd Poetzsch-Heffter']
Formal Techniques for Java Programs
417,041
With the development of mobile communication, adaptive algorithms are often chosen for real-time processing because of their high speed. However, existing adaptive algorithms are not suitable for frequency selective fading channels. In this paper, an improved adaptive synchronization algorithm is proposed for use under frequency selective fading channels. A novel correlation function is proposed to reduce the effect of intersymbol interference; it only requires a coarse channel length estimate. A gate value is also used to combine the constant and decreasing step-size methods; thus, the proposed algorithm is capable of distinguishing between the tracking and convergent stages and choosing the right step size (a toy version of this gating appears in the sketch following this record). Furthermore, an accelerating method is introduced to make the algorithm track faster, i.e., the algorithm takes larger steps when successive searches move in the same direction as the last step. It is shown that the mean square errors of the time and frequency offset estimates can be reduced to 0.9243 and 0.003 respectively, even when the channel length estimate is poor, which fully satisfies Beek's requirement for synchronization. Simulation also shows that it takes only 10-20 symbols for the proposed algorithm to track a change in channel delay. Compared with the existing adaptive algorithm, the proposed algorithm has higher accuracy and a higher tracking speed, and it also has advantages over some nonadaptive algorithms, offering higher accuracy at lower computational complexity.
['Jian Chen', 'Ming Li', 'Yonghong Kuo']
Adaptive OFDM synchronization algorithm in frequency selective fading channels
12,912
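The toy gating sketch promised above: a scalar offset tracker that keeps a constant, direction-accelerated step while the measured error exceeds the gate value (tracking stage) and shrinks the step once below it (convergent stage). This is a generic illustration of combining constant and decreasing step sizes, not the paper's synchronization algorithm.

```python
def track_offset(measure, x0=0.0, step0=1.0, gate=0.5, n_iters=40):
    """Track the offset that drives the signed error measure(x) to zero."""
    x, step, last_dir = x0, step0, 0
    for _ in range(n_iters):
        e = measure(x)                     # signed estimation error
        direction = -1 if e > 0 else 1
        if abs(e) > gate:                  # tracking stage: constant step,
            step = 2 * step0 if direction == last_dir else step0
        else:                              # doubled on repeated directions;
            step *= 0.5                    # convergent stage: shrinking step
        x += direction * step
        last_dir = direction
    return x

estimate = track_offset(lambda x: x - 3.7)   # noiseless toy error function
print(round(estimate, 2))                    # approaches 3.7
```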
Development of a Piezoelectric Screwdriver for Recessless Screws
['Hiroshi Kawano', 'Hideyuki Ando']
Development of a Piezoelectric Screwdriver for Recessless Screws
980,149
Many topology-dependent transmission scheduling algorithms have been proposed to minimize the time-division multiple-access frame length in multihop packet radio networks (MPRNs), in which changes of the topology inevitably require recomputation of the schedules. The need for constant adaptation of schedules to a mobile topology entails significant problems, especially in highly dynamic mobile environments. Hence, topology-transparent scheduling algorithms have been proposed, which utilize Galois field theory and Latin squares theory (the Galois-field construction is illustrated in the sketch following this record). We discuss topology-transparent broadcast scheduling design for MPRNs. For single-channel networks, we propose the modified Galois field design (MGD) and the Latin square design (LSD) for topology-transparent broadcast scheduling. The MGD obtains a much smaller minimum frame length (MFL) than the existing scheme, while the LSD can achieve a further performance gain over the MGD under certain conditions. Moreover, the inner relationship between scheduling designs based on the different theories is revealed and proved, which provides valuable insight. For topology-transparent broadcast scheduling in multichannel networks, where little research has been done, the proposed multichannel Galois field design (MCGD) can reduce the MFL by a factor of approximately M compared with the MGD when M channels are available. Numerical results show that the proposed algorithms outperform existing algorithms in achieving a smaller MFL.
['Zhijun Cai', 'Mi Lu', 'Costas N. Georghiades']
Topology-transparent time division multiple access broadcast scheduling in multihop packet radio networks
424,649
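The sketch promised above illustrates the classical Galois-field construction behind such designs: each node is assigned a distinct polynomial over GF(p) and transmits in slot f(i) during subframe i. Two distinct degree-k polynomials agree on at most k points, so any pair of nodes collides in at most k of the p subframes regardless of topology. This is the textbook construction, not the paper's MGD, LSD, or MCGD designs.

```python
def gf_schedule(coeffs, p):
    """Slots used by a node with polynomial f(i) = sum_j c_j * i**j over GF(p):
    in subframe i (of p subframes, each with p slots) it transmits in slot
    f(i) mod p, so the whole frame has length p * p."""
    return [sum(c * i ** j for j, c in enumerate(coeffs)) % p for i in range(p)]

p = 5                               # prime field size; frame length p*p = 25
node_a = gf_schedule([1, 2], p)     # f(i) = 1 + 2i
node_b = gf_schedule([3, 2], p)     # g(i) = 3 + 2i: same slope, no collisions
node_c = gf_schedule([1, 4], p)     # h(i) = 1 + 4i: meets f only at i = 0
print(node_a, node_b, node_c)       # [1,3,0,2,4] [3,0,2,4,1] [1,0,4,3,2]
print(sum(a == c for a, c in zip(node_a, node_c)))  # 1 collision, at most k=1
```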
Quantitative workload analysis and prediction using Google cluster traces
['Bingwei Liu', 'Yinan Lin', 'Yu Chen']
Quantitative workload analysis and prediction using Google cluster traces
866,217
Mining long frequent sequences that contain a combinatorial number of frequent subsequences, or using very low support thresholds to mine sequential patterns, is both time- and memory-consuming. The mining of closed sequential patterns, sequential generator patterns, and maximal sequences has been proposed to overcome this problem. This paper proposes an algorithm for generating all sequential generator patterns. The algorithm uses a vertical approach, based on prime block encoding, to represent candidate sequences and determine the support of each candidate. The search space of the proposed algorithm is much smaller than those of other algorithms because super-sequence frequency-based pruning and non-generator-based pruning are applied. In addition, hash tables are used to quickly check already-found sequential generator patterns. Experimental results conducted on synthetic and real databases show that the proposed algorithm is effective.
['Thi–Thiet Pham', 'Jiawei Luo', 'Tzung Pei Hong']
An efficient algorithm for mining sequential generator pattern using prefix trees and hash tables
419,931
Internet protocol TV (IPTV) has emerged as a new platform in converged environments. In particular, due to its support for the convergence of devices, IPTV allows almost every IP-enabled terminal to simultaneously consume the same rich media service delivered over an IP-based network. However, this convergence potentially leads to a significant decrease in both quality of service (QoS) and quality of experience (QoE) in IPTV services because different terminals present the same service differently based on their capabilities, thus making the service inconsistent. Recently, the LASeR standard, a competing technology for rich media representation, was revised to deal with this problem. This paper presents the relevant amendment of the standard and, using a prototype implementation, shows the effectiveness of the approach taken by LASeR.
['Byoung-Dai Lee']
Provisioning of adaptive rich media services in consideration of terminal capabilities in IPTV environments
327,284
Recognizing an action from a sequence of 3D skeletal poses is a challenging task. First, different actors may perform the same action in various styles. Second, the estimated poses are sometimes inaccurate. These challenges can cause large variations between instances of the same class. Third, the datasets are usually small, with only a few actors performing a few repetitions of each action; hence, training complex classifiers risks over-fitting the data. We address this task by mining a set of key-pose-motifs for each action class. A key-pose-motif contains a set of ordered poses, which are required to be close but not necessarily adjacent in the action sequences. The representation is robust to style variations. The key-pose-motifs are represented in terms of a dictionary using soft quantization to deal with inaccuracies caused by quantization. We propose an efficient algorithm to mine key-pose-motifs taking these probabilities into account. We classify a sequence by matching it to the motifs of each class and selecting the class that maximizes the matching score (a toy version of this matching appears in the sketch following this record). This simple classifier obtains state-of-the-art performance on two benchmark datasets.
['Chunyu Wang', 'Yizhou Wang', 'Alan L. Yuille']
Mining 3D Key-Pose-Motifs for Action Recognition
821,548
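The toy matching sketch promised above: with hard quantization (the paper's soft quantization is a refinement to absorb inaccurate poses), a motif matches a sequence when its dictionary poses occur in order, gaps allowed, and a sequence is assigned to the class whose mined motifs match best. The motifs and pose indices below are fabricated for illustration.

```python
def motif_matches(motif, sequence):
    """True if the motif's poses occur in order (not necessarily adjacent)."""
    it = iter(sequence)
    return all(pose in it for pose in motif)   # 'in' advances the iterator

def classify(sequence, class_motifs):
    """Score each class by the fraction of its motifs that match."""
    scores = {c: sum(motif_matches(m, sequence) for m in ms) / len(ms)
              for c, ms in class_motifs.items()}
    return max(scores, key=scores.get), scores

class_motifs = {                       # mined per class, here hand-written
    "wave":  [[2, 5, 2], [2, 7]],
    "punch": [[1, 3, 4], [3, 9]],
}
sequence = [0, 2, 2, 5, 6, 2, 7, 1]    # a quantized skeleton sequence
print(classify(sequence, class_motifs))  # ('wave', {'wave': 1.0, 'punch': 0.0})
```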
We consider wireless ad-hoc networks and implement failure detection mechanisms. These failure detectors provide elementary information for high-level distributed algorithms such as consensus, election, or agreement. The aim is to guarantee a quality of service for these mechanisms. Stochastic models for tuning failure detectors are proposed based on frequency analysis and contention modelling, and tuning methods are suggested for setting time-out delays (a generic illustration of time-out tuning follows this record). The theoretical results were validated experimentally on a wireless platform, based on a statistical analysis of the measurements.
['Corine Marchand', 'Jean-Marc Vincent']
Performance tuning of failure detectors in wireless ad-hoc networks: modelling and experiments
498,834
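One common way to tune the time-out of a heartbeat-style failure detector, in the spirit of the record above, is to derive it from the empirical distribution of recent inter-arrival delays: a larger safety margin yields fewer false suspicions but slower detection of real crashes. The sketch below (mean plus k standard deviations over a sliding window) is a generic illustration, not the paper's stochastic model.

```python
import statistics

class HeartbeatDetector:
    """Suspect a peer when no heartbeat arrives within an adaptive time-out."""

    def __init__(self, window=30, k=3.0):
        self.delays = []              # recent inter-arrival delays (seconds)
        self.window = window
        self.k = k                    # safety margin in standard deviations

    def record_delay(self, delay):
        self.delays = (self.delays + [delay])[-self.window:]

    def timeout(self):
        mu = statistics.mean(self.delays)
        sigma = statistics.stdev(self.delays) if len(self.delays) > 1 else 0.0
        return mu + self.k * sigma

d = HeartbeatDetector()
for delay in [0.10, 0.12, 0.11, 0.25, 0.13]:   # jittery wireless arrivals
    d.record_delay(delay)
print(round(d.timeout(), 3))   # about 0.33 s for these samples
```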
The purpose of this paper is twofold: first, I aim to show that Oaklander's (The Ontology of Time. New York: Prometheus Books, 2004) theory of time is false. Second, I show that the only way to salvage the B-theory is by adopting the causal theory of time and allying this to Oaklander's claim that tense is to be eliminated. I then raise some concerns about the causal theory of time. My conclusion is that, if one adopts eternalism, the unreality of time looks a better option than the B-theory.
['Jonathan Tallant']
What is it to “B” a relation?
51,970
A Framework for Virtual Restoration of Ancient Documents by Combination of Multispectral and 3D Imaging
['Gianfranco Bianco', 'Fabio Bruno', 'Anna Tonazzini', 'Emanuele Salerno', 'Pasquale Savino', 'Barbara Zitová', 'Filip Sroubek', 'Elena Console']
A Framework for Virtual Restoration of Ancient Documents by Combination of Multispectral and 3D Imaging
75,391
We investigate the problem of synchronization in a network of homogeneous Wilson-Cowan oscillators with diffusive coupling. Such networks can be used to model the behavior of populations of neurons in cortical tissue, referred to as neural mass models. A new approach is proposed to address conditions for local synchronization for this type of neural mass model (a toy simulation follows this record). By analyzing the linearized model around a limit cycle, we study synchronization within a network with direct coupling. We use both analytical and numerical approaches to link the presence or absence of synchronized behavior to the location of eigenvalues of the Laplacian matrix. For the analytical part, we apply two-time-scale averaging and the Chetaev theorem, while, for the remaining part, we use a recently proposed numerical approach. Sufficient conditions are established to highlight the effect of network topology on synchronous behavior when the interconnection is undirected. These conditions are utilized to address points that have been previously reported in the literature through simulations: synchronization might persist or vanish in the presence of perturbation in the interconnection gains. Simulation results confirm and illustrate our results.
['Saeed Ahmadizadeh', 'Dragan Nesic', 'Dean R. Freestone', 'David B. Grayden']
On synchronization of networks of Wilson-Cowan oscillators with diffusive coupling
813,273
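The toy simulation promised above: a minimal Euler integration of diffusively coupled Wilson-Cowan nodes on an undirected graph. The sigmoid and coupling parameters are generic tutorial choices assumed for illustration, not those used in the paper.

```python
import numpy as np

def S(x, a, theta):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))   # sigmoidal response

def simulate(L, eps=0.1, T=200.0, dt=0.01, seed=0):
    """Euler integration of diffusively coupled Wilson-Cowan oscillators;
    L is the graph Laplacian, coupling acts on the excitatory populations."""
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    E, I = rng.uniform(0, 0.1, n), rng.uniform(0, 0.1, n)
    c1, c2, c3, c4, P = 16.0, 12.0, 15.0, 3.0, 1.25
    traj = []
    for _ in range(int(T / dt)):
        dE = -E + S(c1 * E - c2 * I + P, 1.3, 4.0) - eps * (L @ E)
        dI = -I + S(c3 * E - c4 * I, 2.0, 3.7)
        E, I = E + dt * dE, I + dt * dI
        traj.append(E.copy())
    return np.array(traj)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph, 3 nodes
L = np.diag(A.sum(axis=1)) - A                          # Laplacian = D - A
traj = simulate(L)
spread = traj[-1000:].max(axis=1) - traj[-1000:].min(axis=1)
print(spread.mean())   # small values indicate the E-traces have synchronized
```

Varying eps or the graph changes the Laplacian eigenvalues, which is exactly the dependence the paper analyzes.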
Implementation of quantified self technologies in workplaces relies on the ontological premise of Cartesian dualism with mind dominant over body. Contributing to debates in new materialism, we demonstrate that workers are now being asked to measure our own productivity and health and well-being in art-houses and warehouses alike in both the global north and south. Workers experience intensified precarity, austerity, intense competition for jobs and anxieties about the replacement of labour-power with robots and other machines as well as, ourselves replaceable, other humans. Workers have internalised the imperative to perform, a subjectification process as we become observing entrepreneurial subjects and observed, objectified labouring bodies. Thinking through the implications of the use of wearable technologies in workplaces, this article shows that these technologies introduce a heightened Taylorist influence on precarious working bodies within neoliberal workplaces.
['Phoebe Moore', 'Andrew Robinson']
The quantified self: What counts in the neoliberal workplace
642,814
Improved Complexity Bounds for Computing with Planar Algebraic Curves
['Alexander Kobel', 'Michael Sagraloff']
Improved Complexity Bounds for Computing with Planar Algebraic Curves
745,141
This paper proposes the advantages of using a 2D/3D hybrid imagery system over the use of 3D by itself. A hybrid imagery system was created by projecting a 3D (stereo) image in between and overlapping onto two adjacent 2D images. The negative effect where 2D and 3D images overlap was studied and resolved. The sensations participants experienced from the visual cues under the different conditions were then recorded, both while looking at the different forms of imagery on a flat screen and on a flat/inclined screen combination. The data for the 2D/3D hybrid system were compared with those obtained for a 3D image system on its own (without 2D images). Results indicate that there are benefits to using a 2D/3D hybrid system over 3D by itself.
['Shinji Tasaki', 'Takehisa Matsushita', 'Kazuhiro Koshi', 'Chikamune Wada', 'Hiroaki Koga']
Sense of virtual reality : Effectiveness of replacing 3D imagery with 2D/3D hybrid imagery
857,694
Peer-to-peer networks are becoming increasingly popular as a method of creating highly scalable and robust distributed systems. To address performance issues when scaling traditional unstructured protocols to large network sizes, many protocols have been proposed which make use of distributed hash tables to provide a decentralised and robust routing table. This paper investigates the most significant structured distributed hash table (DHT) protocols through a comparative literature review and a critical analysis of results from controlled simulations. The paper identifies several key design differences, with Pastry performing best in every test. Chord performs worst, mostly attributable to its unidirectional distance metric, while significant generation of maintenance messages holds Kademlia back in bandwidth tests.
['Alexander Betts', 'Lu Liu', 'Zhiyuan Li', 'Nick Antonopoulos']
A critical comparative evaluation on DHT-based peer-to-peer search algorithms
541,865
We introduce a new data mining problem--redescription mining--that unifies considerations of conceptual clustering, constructive induction, and logical formula discovery. Redescription mining begins with a collection of sets, views it as a propositional vocabulary, and identifies clusters of data that can be defined in at least two ways using this vocabulary. The primary contributions of this paper are conceptual and theoretical: (i) we formally study the space of redescriptions underlying a dataset and characterize their intrinsic structure, (ii) we identify impossibility as well as strong possibility results about when mining redescriptions is feasible, (iii) we present several scenarios of how we can custom-build redescription mining solutions for various biases, and (iv) we outline how many problems studied in the larger machine learning community are really special cases of redescription mining. By highlighting its broad scope and relevance, we aim to establish the importance of redescription mining and make the case for a thrust in this new line of research.
['Laxmi Parida', 'Naren Ramakrishnan']
Redescription mining: structure theory and algorithms
135,278
In this paper, we present a formal model of human decision making in explore-exploit tasks using the context of multiarmed bandit problems, where the decision maker must choose among multiple options with uncertain rewards. We address the standard multiarmed bandit problem, the multiarmed bandit problem with transition costs, and the multiarmed bandit problem on graphs. We focus on the case of Gaussian rewards in a setting where the decision maker uses Bayesian inference to estimate the reward values. We model the decision maker's prior knowledge with the Bayesian prior on the mean reward. We develop the upper-credible-limit (UCL) algorithm for the standard multiarmed bandit problem and show that this deterministic algorithm achieves logarithmic cumulative expected regret, which is optimal performance for uninformative priors. We show how good priors and good assumptions on the correlation structure among arms can greatly enhance decision-making performance, even over short time horizons. We extend to the stochastic UCL algorithm and draw several connections to human decision-making behavior. We present empirical data from human experiments and show that human performance is efficiently captured by the stochastic UCL algorithm with appropriate parameters. For the multiarmed bandit problem with transition costs and the multiarmed bandit problem on graphs, we generalize the UCL algorithm to the block UCL algorithm and the graphical block UCL algorithm, respectively. We show that these algorithms also achieve logarithmic cumulative expected regret and require a sublogarithmic expected number of transitions among arms. We further illustrate the performance of these algorithms with numerical examples.
['Paul Reverdy', 'Vaibhav Srivastava', 'Naomi Ehrich Leonard']
Modeling Human Decision Making in Generalized Gaussian Multiarmed Bandits
217,644
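A hedged sketch of the deterministic UCL heuristic for Gaussian bandits described in the record above: maintain a conjugate normal posterior on each arm's mean reward and always play the arm with the largest upper credible limit, i.e. posterior mean plus a quantile-scaled posterior deviation. The near-uninformative prior and the 1 - 1/(e*t) credibility schedule below are simplifying assumptions, not necessarily the paper's exact choices.

```python
import math, random
from statistics import NormalDist

def ucl_bandit(arms, horizon, mu0=0.0, n0=1e-6, sigma=1.0, seed=0):
    """Gaussian bandit with known reward deviation sigma; each arm's mean
    has prior N(mu0, sigma**2 / n0), nearly uninformative for tiny n0."""
    rng = random.Random(seed)
    n = [n0] * len(arms)             # effective observation counts
    m = [mu0] * len(arms)            # posterior means
    picks = []
    for t in range(1, horizon + 1):
        z = NormalDist().inv_cdf(1.0 - 1.0 / (math.e * t))
        ucl = [m[i] + z * sigma / math.sqrt(n[i]) for i in range(len(arms))]
        i = max(range(len(arms)), key=lambda j: ucl[j])
        reward = rng.gauss(arms[i], sigma)           # noisy observed reward
        m[i] = (n[i] * m[i] + reward) / (n[i] + 1)   # conjugate update
        n[i] += 1
        picks.append(i)
    return picks

picks = ucl_bandit(arms=[0.0, 0.5, 1.0], horizon=500)
print(picks[-20:].count(2) / 20)   # fraction of late plays on the best arm
```

Prior knowledge enters through mu0 and n0: a confident, roughly correct prior can cut early exploration dramatically, which is the effect the paper quantifies.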
In this article we present a novel algorithm for measuring protein similarity based on three-dimensional structure (protein tertiary structure). The PROSIMA algorithm uses suffix trees to discover common parts of the main chains of all proteins appearing in the current RCSB Protein Data Bank (PDB). By identifying these common parts we build a vector model and then use classical information retrieval techniques based on the vector model to measure the similarity between proteins - all-to-all protein similarity. For the calculation of protein similarity we use the tf-idf term weighting scheme and the cosine similarity measure (both illustrated in the sketch following this record). The goal of this work is to use the whole current PDB database of known proteins (downloaded in June 2009), not just some selection of it, as has been studied in other works. We chose the SCOP database for verifying the precision of our algorithm because it is maintained primarily by humans. A further result of this work is the ability to determine SCOP categories of proteins not included in the latest version of the SCOP database (v. 1.75) with nearly 100% precision.
['Tomáš Novosád', 'Vaclav Snasel', 'Ajith Abraham', 'Yang J']
Prosima: Protein similarity algorithm
89,981
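The vector-model stage referenced above is standard information retrieval: represent each protein as a bag of structural "words" (the common main-chain fragments found by the suffix tree), weight them with tf-idf, and compare with cosine similarity. The fragment names below are fabricated stand-ins for the suffix-tree output.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists; returns one sparse tf-idf dict per doc."""
    n = len(docs)
    df = Counter(tok for doc in docs for tok in set(doc))  # document frequency
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

proteins = [["helixA", "turn1", "helixA", "sheetB"],
            ["helixA", "turn1", "helixA", "loopC"],
            ["sheetB", "sheetB", "loopC", "loopC"]]
vecs = tfidf_vectors(proteins)
print(round(cosine(vecs[0], vecs[1]), 3))   # ~0.833: many shared fragments
print(round(cosine(vecs[0], vecs[2]), 3))   # ~0.289: few shared fragments
```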
We present DSDSR, a generic repair tool for complex data structures. Generic, automatic data structure repair algorithms have applications in many areas; reducing repair time may therefore have a significant impact on software robustness. Current state-of-the-art tools try to address the problem exhaustively, and their performance depends primarily on the style of the correctness condition. We propose a new approach and implement a prototype that suffers less from style limitations and utilizes recent improvements in automatic theorem proving to reduce the time required to repair a corrupt data structure. We also present experimental results to demonstrate the promise of our approach for generic repair and discuss our prototype implementation.
['Ishtiaque Hussain', 'Christoph Csallner']
DSDSR: a tool that uses dynamic symbolic execution for data structure repair
526,698
Test integration for heterogeneous cores under test has been a challenging problem in system-on-chip (SoC) design. To integrate heterogeneous cores under test, the test wrapper should be capable of dealing with multiple-clock-domain problems, at-speed testing problems, test power problems, etc. In this paper, we propose an alternative wrapper architecture that supports multiple clock domains, so that test operations can run at system speed. Since each CUT has very different requirements, the test wrapper unavoidably needs to be re-designed for each new CUT. In order to reduce the manual effort, we propose to automatically generate test wrappers and the corresponding test programs based on the given configuration and test description for each CUT. We adopted the IEEE 1450.6 standard, a.k.a. Core Test Language (CTL), as the test description language in this work. Through this process, circuits can be tested with low overheads, and minimal intervention from designers is required. We have successfully integrated a test wrapper generated by our tool into a test chip which includes a Memory BIST and a Logic BIST, and taped out the chip in TSMC 0.18 µm technology. The experiments showed that the area overhead of the proposed architecture is only 0.02% of the chip area.
['Sung-Yu Chen', 'Ying-Yen Chen', 'Chun-Yu Yang', 'Jing-Jia Liou']
Multiple-Core under Test Architecture for HOY Wireless Testing Platform
21,533
Towards the Integrated Simulation and Programming of Palletizing Lines
['Antonello Calò', 'Davide Buratti', 'Dario Lodi Rizzini', 'Stefano Caselli']
Towards the Integrated Simulation and Programming of Palletizing Lines
773,835
The orientation and correction of mural images by registering them with laser scanning data is critical for the digital protection of ancient murals. This paper proposes a method for the non-rigid registration of mural images and laser scanning data based on the optimization of the edge of interest by using laser echo intensity information as an intermediary. First, the intensity image was generated from the laser echo intensity information, and registered with the mural image using a rigid transformation model. Second, the edges of interest in the mural image and the gradient field of the intensity images were processed as registration primitives. Third, every edge of interest was registered with the optimization base used in the rigid registration of the mural image and intensity image. Finally, the registration was completed after a non-rigid transformation model between these two images was constructed using the control points on the optimized edges. Our experimental results show that the proposed method can obtain high registration accuracy for different data sets.
['Fan Zhang', 'Xianfeng Huang', 'Wei Fang', 'Deren Li']
Non-rigid registration of mural images and laser scanning data based on the optimization of the edges of interest
621,266
Observation of Thermal Influence on Error Motions of Rotary Axes on a Five-Axis Machine Tool by Static R-Test
['Cefu Hong', 'Soichi Ibaraki']
Observation of Thermal Influence on Error Motions of Rotary Axes on a Five-Axis Machine Tool by Static R-Test
993,365
Presents a new semi-automatic method for quantifying regional heart function from two-dimensional echocardiography. In the approach, we first track the endocardial and epicardial boundaries using a new variant of the dynamic snake approach. The tracked borders are then decomposed into clinically meaningful regional parameters, using a novel interpretational shape-space motivated by the 16-segment model used in clinical practice for qualitative assessment of heart function. We show how a quantitative and automatic scoring scheme for the endocardial excursion and myocardial thickening can be derived from this. Results illustrating our approach on apical long-axis two-chamber-view data from a patient with a myocardial infarct in the apical anterior/inferior region of the heart are presented. In a case study (five patients, nine data sets) the performance of the tracking and interpretation techniques are compared with manual delineations of borders using a number of quantitative measures of regional comparison.
['Gary Jacob', 'J.A. Noble', 'Christian Peter Behrenbruch', 'Andrew D Kelion', 'Adrian P. Banning']
A shape-space-based approach to tracking myocardial borders and quantifying regional left-ventricular function applied in echocardiography
380,423
While demand for physically contiguous memory allocation is still alive, especially in embedded systems, existing solutions are insufficient. The most widely adopted solution is the reservation technique. Though it serves allocation well, it can severely degrade memory utilization. There are hardware solutions such as Scatter/Gather DMA and IOMMU, but the cost of this additional hardware is excessive for low-end devices. CMA is a software solution in Linux that aims to solve not only the allocation problem but also the memory utilization problem. However, in real environments, CMA can incur unpredictably slow latency and can often fail to allocate contiguous memory due to its complex design. We introduce a new solution for the above problems, GCMA (Guaranteed Contiguous Memory Allocator). It guarantees not only memory space efficiency but also fast and reliable allocation by using the reservation technique while letting only immediately discardable data use the reserved area. Our evaluation on a Raspberry Pi 2 shows 15 to 130 times faster and more predictable allocation latency without system performance degradation compared to CMA.
['SeongJae Park', 'Minchan Kim', 'Heon Young Yeom']
GCMA: guaranteed contiguous memory allocator
688,595
Current advancements in web technologies are enabling new features of interactive systems that rely on cloud infrastructure and services. In this paper, we present our efforts related to the development of three prototypes of a web-based visualization tool that uses Google Cloud Services to process and visualize geo-temporal data. The domain in which these efforts are taking place is environmental science. The need for web-based visualization tools in this area indicates the importance of allowing users to explore, analyze, and reflect on different representations of environmental data in an interactive manner. We discuss the development of these prototypes and their features, as well as the different iterations that were carried out during these processes. The outcomes of the design and development processes indicate that the user feedback generated during the prototyping cycles provided valuable insights for further enhancing the web-based visualization tool. Thus, our study emphasizes the need to find a balance between design and development in order to consider how the rapid evolution of web technologies should be taken into account when designing and planning different cycles of web prototyping.
['Bahtijar Vogel']
An Interactive Web-Based Visualization Tool: Design and Development Cycles
55,164
Fault diagnosis and accommodation (FDA) for nonlinear multi-variable systems under multiple faults are investigated in this paper. A complete FDA architecture is proposed by incorporating an intelligent fault-tolerant control strategy with a cost-effective fault detection and diagnosis (FDD) scheme based on multiple models. The scheme efficiently handles the accommodation of both anticipated and unanticipated failures in online situations. A three-tank system with concurrent multiple sensor faults is simulated; the simulation results show that the fault detection and tolerant control strategy has strong robustness and fault-tolerance capability.
['Jun Li', 'Cuimei Bo', 'Jiugen Zhang', 'Jie Du']
Fault diagnosis and accommodation based on online multi-model for nonlinear process
822,773
This paper examines the operation of TFRC (TCP-friendly rate control) in scenarios where the receiver is untrustworthy and can misbehave to receive data at an unfairly high rate at the expense of competing traffic. Several attacks are considered whereby a selfish receiver can take advantage of TFRC. After confirming experimentally that the identified receiver attacks are effective, we design robust TCP-friendly rate control (RTFRC), a TFRC variant resilient to the attacks. We also show that additional attacks targeted directly at RTFRC are unable to compromise the protocol.
['Manfred Georg', 'Sergey Gorinsky']
Protecting TFRC from a Selfish Receiver
188,290
Visualization of 3-D volume data through maximum intensity projections (MIP) requires isotropic voxels for generation of undistorted projected images. Unfortunately, due to the inherent scanning geometry, X-ray computed tomographic (CT) images are mostly axial images with submillimeter pixel resolution, with the slice spacing on the order of half to one centimeter. These axial images must be interpolated across the slices prior to the projection operation. The linear interpolation, due to the inherent noise in the data, generates MIP images with noise whose variance varies quadratically along the z-axis. Therefore, such MIP images often suffer from horizontal streaking artifacts, exactly at the position of the original slices (e.g., in coronal and sagittal MIPs). We propose a different interpolation technique based on a digital finite impulse response (FIR) filter. The proposed technique flattens the change in noise variances across the z-axis and results in either elimination or a reduction of horizontal streaking artifacts in coronal and sagittal views.
['Samuel Moon-Ho Song', 'Junghyun Kwon']
Interpolation of CT Slices for 3-D Visualization by Maximum Intensity Projections
437,217
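With two-tap linear interpolation of independent slice noise, a slice interpolated at fractional offset t has noise variance (1-t)^2 + t^2 times the original, dipping to one half midway between slices and returning to one at the slices themselves; that periodic variance profile is what shows up as horizontal streaks. A longer symmetric FIR kernel can flatten the profile. The NumPy sketch below contrasts the two; the 4-tap kernel is an illustrative choice, not the paper's designed filter.

```python
import numpy as np

def interp_midslices(volume, weights):
    """Interpolate a slice midway between existing slices using a symmetric
    FIR filter along z. volume: (nz, ny, nx); weights must sum to 1."""
    half = len(weights) // 2
    out = [np.tensordot(weights, volume[z - half + 1 : z + half + 1], axes=1)
           for z in range(half - 1, volume.shape[0] - half)]
    return np.array(out)

rng = np.random.default_rng(0)
vol = rng.normal(0, 1, size=(8, 32, 32))       # pure-noise test volume

linear = interp_midslices(vol, np.array([0.5, 0.5]))           # 2-tap linear
fir = interp_midslices(vol, np.array([-0.1, 0.6, 0.6, -0.1]))  # 4-tap sketch

# Interpolated-slice noise variance equals the sum of squared taps:
# 0.5 for linear (0.25 + 0.25) versus 0.74 for this 4-tap kernel,
# i.e. closer to the variance 1.0 of the original slices.
print(round(linear.var(), 2), round(fir.var(), 2))
```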
Motivation: Although Genome Wide Association Studies (GWAS) genotype a very large number of single nucleotide polymorphisms (SNPs), the data are often analyzed one SNP at a time. The low predictive power of single SNPs, coupled with the high significance threshold needed to correct for multiple testing, greatly decreases the power of GWAS. Results: We propose a procedure in which all the SNPs are analyzed in a multiple generalized linear model, and we show its use for extremely high-dimensional datasets. Our method yields P-values for assessing significance of single SNPs or groups of SNPs while controlling for all other SNPs and the family wise error rate (FWER). Thus, our method tests whether or not a SNP carries any additional information about the phenotype beyond that available by all the other SNPs. This rules out spurious correlations between phenotypes and SNPs that can arise from marginal methods because the ‘spuriously correlated’ SNP merely happens to be correlated with the ‘truly causal’ SNP. In addition, the method offers a data driven approach to identifying and refining groups of SNPs that jointly contain informative signals about the phenotype. We demonstrate the value of our method by applying it to the seven diseases analyzed by the Wellcome Trust Case Control Consortium (WTCCC). We show, in particular, that our method is also capable of finding significant SNPs that were not identified in the original WTCCC study, but were replicated in other independent studies. Availability and implementation: Reproducibility of our research is supported by the open-source Bioconductor package hierGWAS. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Laura Buzdugan', 'Markus Kalisch', 'Arcadi Navarro', 'Daniel Schunk', 'Ernst Fehr', 'Peter Bühlmann']
Assessing statistical significance in multivariable genome wide association analysis
701,005
Polynomial encoding of ORM conceptual models in CFDI .
['Pablo Rubén Fillottrani', 'C. Maria Keet', 'David Toman']
Polynomial encoding of ORM conceptual models in CFDI .
733,662
This paper presents a contactless capacitive angular speed sensor for automotive applications. The sensor is based on a passive rotating electrode placed between two mechanically static and electrically active electrodes. The different characteristics of the charge transfer at various sensor positions are utilized as input for the calculation of the rotational speed. The main advantages of this low-cost system are its capability to operate at high temperatures and humidity as well as its insensitivity to vibrations, dirt, dew, and moisture deposited on the three sensor electrodes. The mathematical model of the sensor further enables the optimization of the sensor characteristics for specific applications. Experimental results from a prototype designed for the speed measurement of a steering wheel show a relative speed error of ±4% at a resolution better than 1°/s.
['Tibor Fabian', 'Georg Brasseur']
A robust capacitive angular speed sensor
552,567