abstract: string (length 8 to 10.1k)
authors: string (length 9 to 1.96k)
title: string (length 6 to 367)
__index_level_0__: int64 (13 to 1,000k)
We present a novel technological approach, based on Textual Rulelog, to automated decision support for financial regulatory/policy compliance, via a case study on banking Regulation W from the US Federal Reserve. Legal regulations and related bank operational policies in English documents are encoded relatively inexpensively by authors into Rulelog, a highly expressive logical knowledge representation. Key compliance queries are automatically answered accurately and fully explained in English, understandable to non-IT compliance staff and auditors. The prospective business impact of our approach over the next decade or two is significantly increased productivity and systemic stability, industry-wide, worth many billions of dollars.
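The paper's rules are encoded in Rulelog, which the abstract does not show. Purely as a hedged illustration of the flavor of rule being automated, here is one widely cited Regulation W quantitative limit (roughly: covered transactions with any single affiliate capped at 10% of capital stock and surplus, 20% in aggregate; consult 12 CFR 223 for the authoritative wording) expressed as a Python check rather than Rulelog:

    def regw_quantitative_limits(capital, covered_tx):
        """Illustrative Reg W-style check: covered transactions with any single
        affiliate <= 10% of capital, all affiliates combined <= 20%.
        A sketch only -- the paper encodes such rules in Rulelog, not Python."""
        findings = []
        for affiliate, amount in covered_tx.items():
            if amount > 0.10 * capital:
                findings.append(f"{affiliate}: {amount} exceeds the 10% single-affiliate limit")
        if sum(covered_tx.values()) > 0.20 * capital:
            findings.append("aggregate covered transactions exceed the 20% limit")
        return findings or ["compliant"]

    print(regw_quantitative_limits(1000, {"AffiliateA": 120, "AffiliateB": 50}))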
['Benjamin N. Grosof', 'Janine Bloomfield', 'Paul Fodor', 'Michael Kifer', 'Isaac Grosof', 'Miguel Calejo', 'Terrance Swift']
Automated Decision Support for Financial Regulatory/Policy Compliance, using Textual Rulelog
688,436
Summary. A graph is a data structure composed of dots (i.e. vertices) and lines (i.e. edges). The dots and lines of a graph can be organized into intricate arrangements. The ability of a graph to denote objects and their relationships to one another allows a surprisingly large number of things to be modeled as graphs. From the dependencies that link software packages to the wood beams that provide the framing of a house, most anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does not necessarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial.
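The package-dependency example above maps directly onto an adjacency list. A minimal Python sketch (the package names are hypothetical) that stores such a graph and walks it for transitive dependencies:

    from collections import defaultdict

    # Vertices are packages; a directed edge u -> v means "u depends on v".
    deps = defaultdict(list)
    for pkg, dep in [("app", "web"), ("web", "http"), ("app", "db"), ("db", "http")]:
        deps[pkg].append(dep)

    def transitive_deps(pkg, seen=None):
        """Depth-first walk collecting every package reachable from pkg."""
        seen = set() if seen is None else seen
        for dep in deps[pkg]:
            if dep not in seen:
                seen.add(dep)
                transitive_deps(dep, seen)
        return seen

    print(sorted(transitive_deps("app")))  # ['db', 'http', 'web']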
['Marko A. Rodriguez', 'Peter Neubauer']
Constructions from dots and lines
8,173
Electronic health records (EHRs) have potential to improve the quality, efficiency, and cost of health care.1–6 The transition from traditional paper-based care to EHRs within both hospitals and ambulatory practices has been aggressively promoted by federal initiatives7,8 and is rapidly transforming the process of health care delivery throughout the United States.9–11 However, clinicians have raised concerns that EHR implementation has negatively impacted their real-world clinical productivity.12–16 For example, at Oregon Health & Science University (OHSU), we have one of the leading biomedical informatics departments in the world and completed a successful EHR implementation in 2006 that received national publicity. Yet we have published studies showing that OHSU ophthalmologists currently see 3–5% fewer patients than before EHR implementation and require >40% additional time for each patient encounter.17

Approaches toward improving the efficiency of clinical workflow using EHRs would have significant real-world impact. Clinicians are pressured to see more patients in less time for less reimbursement due to persistent concerns about the accessibility and cost of health care.18,19 Providers today are facing increased patient loads along with increased encounter times due to EHR use, but do not have guidance or information about how to meet these demands. For example, ophthalmologists typically see 15–30 patients or more in a half-day session, utilize multiple exam rooms simultaneously, work with ancillary staff (e.g., technicians, ophthalmic photographers), and examine patients in multiple stages (e.g., before and after dilation of eyes, before and after ophthalmic imaging studies). This creates enormous challenges in workflow and scheduling, and large variability in operational approaches.20

Patient wait time is a result of pressure on provider time as well as clinic inefficiency; wait time has been shown to affect patient satisfaction as well as create barriers to health care.21,22 Mathematics, specifically queueing theory, explains waiting by the mismatch of arrival times and service times (time with a physician).23 This mismatch can be increased by ad-hoc scheduling protocols that artificially increase patient wait time.24,25 Addressing this mismatch using smarter scheduling strategies has potential for improving patient wait time.26 Studying and evaluating appointment scheduling strategies in clinical settings is impractical, however, since patient and provider time is too valuable for experimentation. Empirical models of clinical processes using discrete event simulation (DES) can evaluate potential scheduling strategies effectively before implementing them in clinical settings. DES requires large amounts of workflow timing data—much more than can reasonably be collected using traditional time-motion studies. We believe that the data to address these problems are currently available within the EHR.
One major benefit of EHR systems is that clinical data can be applied for “secondary use” beyond the direct provision of clinical care; current efforts have focused on areas such as clinical research, public health, adverse event reporting, and quality assurance.27–29 Data mining of EHR data has been used successfully to predict patient no-shows30, to group patients in emergency departments (EDs)31, and for quality assurance in the ED.32,33 DES has been used for quality improvement in healthcare, but not for evaluating scheduling strategies based on secondary-use EHR data and detailed workflow data.30,31,34,35

In this paper, we present the results of using secondary EHR data for modeling clinical workflow in 3 outpatient ophthalmology clinics at OHSU. Ophthalmology is an ideal domain for these studies because it is a high-volume, high-complexity field that combines both medical and surgical practice. Our results show that the secondary use of EHR data for workflow data shows promise; it matches the trends of observed clinic workflows and is available for thousands of patient encounters. Further, workflow data can be used to build simulation models for evaluating scheduling strategies based on patient classification.
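To make the queueing-theory point concrete, here is a minimal single-server discrete event simulation sketch in Python (toy numbers of my own, not the paper's OHSU model): patients arrive at fixed scheduled intervals, service times vary, and waiting emerges from the mismatch.

    import random

    random.seed(1)
    N = 30  # patients in a session

    def mean_wait(slot):
        """Average wait when arrivals come every `slot` min and service ~ Exp(mean 9 min)."""
        free_at, waits = 0.0, []
        for i in range(N):
            arrival = i * slot
            start = max(arrival, free_at)          # wait if the provider is still busy
            waits.append(start - arrival)
            free_at = start + random.expovariate(1 / 9.0)  # stochastic service time
        return sum(waits) / N

    for slot in (8.0, 10.0, 12.0):
        print(f"slot {slot} min -> mean wait {mean_wait(slot):5.1f} min")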
['Michelle R. Hribar', 'Sarah Read-Brown', 'Leah G. Reznick', 'Lorinna Lombardi', 'Mansi Parikh', 'Thomas R. Yackel', 'Michael F. Chiang']
Secondary Use of EHR Timestamp data: Validation and Application for Workflow Optimization.
830,280
The Improvement of the Processes of a Class of Graph-Cut-Based Image Segmentation Algorithms
['Shengxiao Niu', 'Gengsheng Chen']
The Improvement of the Processes of a Class of Graph-Cut-Based Image Segmentation Algorithms
948,088
Turing, a fast stream cipher.
['Gregory G. Rose', 'Philip Michael Hawkes']
Turing, a fast stream cipher.
796,179
This paper presents a novel 3D deformable surface that we call an active polyhedron. Rooted in surface evolution theory, an active polyhedron is a polyhedral surface whose vertices deform to minimize a regional and/or boundary-based energy functional. Unlike continuous active surface models, the vertex motion of an active polyhedron is computed by integrating speed terms over polygonal faces of the surface. The resulting ordinary differential equations (ODEs) provide improved robustness to noise and allow for larger time steps compared to continuous active surfaces implemented with level set methods. We describe an electrostatic regularization technique that achieves global regularization while better preserving sharper local features. Experimental results demonstrate the effectiveness of an active polyhedron in solving segmentation problems as well as surface reconstruction from unorganized points.
['Gregory G. Slabaugh', 'Gozde Unal']
Active polyhedron: surface evolution theory applied to deformable meshes
355,381
The fovea of a mammalian retina was simulated with its detailed biological properties to study the local preprocessing of images. The direct visual pathway (photoreceptors, bipolar and ganglion cells) and the horizontal units, as well as the D-amacrine cells, were simulated. The computer program simulated the analog nonspiking transmission between photoreceptor and bipolar cells, and between bipolar and ganglion cells, as well as the gap-junctions between horizontal cells, and the release of dopamine by D-amacrine cells and its diffusion in the extra-cellular space. A simulation of a 64 × 64 photoreceptor retina, containing 16,448 units, was carried out. This retina displayed contour extraction with a Mach effect, and adaptation to brightness. The simulation showed that the dopaminergic amacrine cells were necessary to ensure adaptation to local brightness.
['Eric Boussard', 'Jean-François Vibert']
Dopaminergic Neuromodulation Brings a Dynamical Plasticity to the Retina
338,850
Simulation provides the U.S. Navy Submarine Force with the opportunity to safely, efficiently, and economically train Sailors at all levels in most aspects of their profession, starting with individual skills as the fundamental building blocks and assembling progressively larger teams. The Navy has invested heavily in providing the correct level of simulation fidelity for each application. This paper introduces the submarine training system in place today with emphasis on the impact of simulation technology and the benefits it provides across the spectrum of submarine training.
['Michael C. Jones']
Simulation across the spectrum of submarine training
4,366
Unlike their traditional, silicon counterparts, DNA computers have natural interfaces with both chemical and biological systems. These can be used for a number of applications, including the precise arrangement of matter at the nanoscale and the creation of smart biosensors. Like silicon circuits, DNA strand displacement (DSD) systems can evaluate non-trivial functions. However, these systems can be slow and are susceptible to errors. It has been suggested that localised hybridization reactions could overcome some of these challenges. Localised reactions occur in DNA 'walker' systems, which were recently shown to be capable of navigating a programmable track tethered to an origami tile. We investigate the computational potential of these systems for evaluating Boolean functions. DNA walkers, like DSDs, are also susceptible to errors. We develop a discrete stochastic model of DNA walker 'circuits' based on experimental data, and demonstrate the merit of using probabilistic model checking techniques to analyse their reliability, performance and correctness.
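A hedged toy of the per-step reliability analysis this abstract motivates (the step counts and error rate below are invented, not the paper's experimentally derived parameters): if a walker must take n steps and each step independently fails with probability p, the whole traversal succeeds with probability (1 - p)^n.

    def walker_reliability(n_steps: int, p_err: float) -> float:
        """P(no erroneous step in n_steps), assuming independent per-step errors
        -- a simplification of the full discrete stochastic model."""
        return (1.0 - p_err) ** n_steps

    for n in (4, 8, 16):
        print(f"{n:>2} steps -> P(correct) = {walker_reliability(n, 0.05):.3f}")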
['Frits Dannenberg', 'Marta Z. Kwiatkowska', 'Chris Thachuk', 'Andrew J. Turberfield']
DNA Walker Circuits: Computational Potential, Design, and Verification
575,751
In this paper I compare the expressive power of several models of concurrency based on their ability to represent causal dependence. To this end, I translate these models, in behaviour preserving ways, into the model of higher dimensional automata, which is the most expressive model under investigation. In particular, I propose four different translations of Petri nets, corresponding to the four different computational interpretations of nets found in the literature. I also extend various equivalence relations for concurrent systems to higher dimensional automata. These include the history preserving bisimulation, which is the coarsest equivalence that fully respects branching time, causality and their interplay, as well as the ST-bisimulation, a branching time respecting equivalence that takes causality into account to the extent that it is expressible by actions overlapping in time. Through their embeddings in higher dimensional automata, it is now well-defined whether members of different models of concurrency are equivalent.
['R.J. van Glabbeek']
On the Expressiveness of Higher Dimensional Automata
507,980
In R&D project selection, experts (or external reviewers) always play a very important role because their opinions have great influence on the outcome of the project selection. It is also undoubted that experts with a high level of expertise will make useful and professional judgments on the projects to be selected. So, how to measure the expertise level of experts and select the most appropriate experts for project selection is a very significant issue. This paper presents a group decision support approach to evaluating experts for R&D project selection, in which the criteria and their attributes for evaluating experts are summarized mainly based on experience with the National Natural Science Foundation of China (NSFC). A formal procedure that integrates both objective and subjective information on experts is also presented. It is mainly based on the analytic hierarchy process (AHP), a scoring method, and fuzzy linguistic processing. A group decision support system is designed and implemented to illustrate the proposed method.
['Yong-Hong Sun', 'Jian Ma', 'Zhi-Ping Fan', 'Jun Wang']
A Group Decision Support Approach to Evaluate Experts for R&D Project Selection
342,987
The Internet consists of economically selfish players in terms of access/transit connection and content distribution. Such selfish behaviors often lead to techno-economic inefficiencies, such as unstable peering and revenue imbalance. Recent research results suggest that cooperation-based fair revenue sharing, i.e., multi-level Internet service provider (ISP) settlements, can be a candidate solution to avoid unfair revenue share. However, it has been under-explored whether selfish ISPs actually cooperate or not (often referred to as the stability of coalition), because they may partially cooperate or even do not cooperate, depending on how much revenue is distributed to each individual ISP. In this paper, we study this stability of coalition in the Internet, where our aim is to investigate the conditions under which ISPs cooperate under different regimes on the traffic demand and network bandwidth. We first consider the under-demanded regime, i.e., network bandwidth exceeds traffic demand, where revenue sharing based on Shapley value leads ISPs to entirely cooperate, i.e., stability of the grand coalition. Next, we consider the over-demanded regime, i.e., traffic demand exceeds network bandwidth, where there may exist some ISPs who deviate from the grand coalition. In particular, this deviation depends on how users’ traffic is handled inside the network, for which we consider three traffic scheduling policies having various degrees of content-value preference. We analytically compare those three scheduling policies in terms of network neutrality, and stability of cooperation that provides useful implications on when and how multi-level ISP settlements help and how the Internet should be operated for stable peering and revenue balance among ISPs.
['Hyojung Lee', 'Hyeryung Jang', 'Jeong-woo Cho', 'Yung Yi']
Traffic Scheduling and Revenue Distribution Among Providers in the Internet: Tradeoffs and Impacts
997,769
Change propagation has been identified as a major concern for process collaborations during the last years. Although changes might become necessary for various reasons, they can often not be kept local, i.e., at one partner's side, but must be partly or entirely propagated to one or several other partners. Due to the autonomy of partners in a collaboration, change effects cannot be imposed on the partners, but must be agreed upon in a consensual way. In our model of this collective decision process, we assume that each partner that becomes involved in a negotiation has different alternatives on how a change may be realized, and evaluates these alternatives according to his or her individual cost and benefit utilities. This paper presents models from group decision making that can be applied for handling change negotiations in process collaborations in an efficient and fair way. The theoretical models are evaluated based on a proof-of-concept prototype that integrates an existing implementation for change propagation in process collaborations with change alternatives, utility functions, and group decision models. Based on the simulation of a realistic setting, the validity of the approach is shown. Our prototype supports the selection of change alternatives for each partner during negotiation and, depending on the group decision model used, provides solutions emphasizing efficiency and/or fairness.
['Walid Fdhila', 'Conrad Indiono', 'Stefanie Rinderle-Ma', 'Rudolf Vetschera']
Finding Collective Decisions: Change Negotiation in Collaborative Business Processes
642,760
In this article, we consider a stochastic model of wireless sensor networks (WSNs) in which each sensor node randomly and alternately stays in an active mode or a sleep mode. The active mode consists of two phases, called the full-active phase and the semi-active phase. When a referenced sensor node is in the full-active phase of the active mode, it may sense data packets, transmit the sensed packets, receive packets, and relay the received packets. However, when the phase of the sensor node switches from the full-active phase to the semi-active phase, it is only able to transmit/relay data. When the referenced sensor node is in a sleep mode, it does not interact with the external world. In this article, first, we develop a stochastic model for the sensor node of a WSN, and then we derive an explicit expression for the stationary distribution of the number of data packets in the sensor node. Furthermore, we derive several important performance measures, including the sensor node’s energy consumption for transmission, the energy consumption of the sensor operations, and the average energy consumption of the sensor node in a cycle of active and sleep modes. Also, a numerical analysis is provided to validate the proposed model and the results obtained. The novel aspects of our research are the development of a stochastic model for a WSN with active and sleep features and the development of important analytical formulae for evaluating the energy consumption of a WSN. These results are expected to be useful as significant contributions to the fundamental theory of the design of various WSNs with active and sleep mode considerations.
['Yuhong Zhang', 'Wei Wayne Li']
Modeling and energy consumption evaluation of a stochastic wireless sensor network
446,382
Universal digital portable communications: a system perspective
['D. Cox', 'H. W. Arnold', 'P. T. Porter']
Universal digital portable communications: a system perspective
851,828
Patterns are descriptions and solutions for recurring problems in software design and implementation. In this paper, some ideas towards a formal approach to the specification of patterns in model-driven engineering (MDE) are presented. The approach is based on the Diagram Predicate Framework, which provides a formal approach to (meta)modelling, model transformation and model management in MDE. In particular, patterns are defined as diagrammatic specifications and constraint-aware model transformations are adapted to enforce patterns. Moreover, running examples are used to illustrate the facade design pattern in structural models.
['Yngve Lamo', 'Adrian Rutle', 'Florian Mantz']
Enforcement of Patterns by Constraint-Aware Model Transformations
685,595
Background: Early diastolic left ventricular (LV) filling can be accurately described using the same methods used in classical mechanics to describe the motion of a loaded spring as it recoils, a validated method also referred to as the Parameterized Diastolic Filling (PDF) formalism. With this method, each E-wave recorded by pulsed wave (PW) Doppler can be mathematically described in terms of three constants: LV stiffness (k), viscoelasticity (c), and load (x0). Also, additional parameters of physiological and diagnostic interest can be derived. An efficient software application for PDF analysis has not been available. We aim to describe the structure, feasibility, time efficiency and intra- and interobserver variability for use of such a solution, implemented in Echo E-waves, a freely available software application (www.echoewaves.org).
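To the best of my understanding of the cited PDF method (a hedged paraphrase, not the paper's own statement), each E-wave is fit by the recoil of a damped spring,

    $m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = 0, \qquad x(0) = x_0, \quad \dot{x}(0) = 0$,

whose velocity $\dot{x}(t)$ is matched to the Doppler E-wave contour, yielding the stiffness $k$, viscoelasticity $c$, and load $x_0$ named above.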
['Martin Sundqvist', 'Katrin Salman', 'Per Tornvall', 'Martin Ugander']
Kinematic analysis of diastolic function using the freely available software Echo E-waves – feasibility and reproducibility
921,590
Given two convex d-polytopes P and Q in $\mathbb{R}^d$ for $d \ge 3$, we study the problem of bundling P and Q in a smallest convex container. More precisely, our problem asks to find a minimum convex set containing P and Q that are in contact under translations. For dimension d = 3, we present the first exact algorithm that runs in $O(n^3)$ time, where n denotes the number of vertices of P and Q. Our approach easily extends to any higher dimension d > 3, resulting in the first exact algorithm.
['Hee-Kap Ahn', 'Sang Won Bae', 'Otfried Cheong', 'Dongwoo Park', 'Chan-Su Shin']
Minimum Convex Container of Two Convex Polytopes under Translations
673,678
In this paper, we derive a method to refine a Bayes network diagnostic model by exploiting constraints implied by expert decisions on test ordering. At each step, the expert executes an evidence gathering test, which suggests the test's relative diagnostic value. We demonstrate that consistency with an expert's test selection leads to non-convex constraints on the model parameters. We incorporate these constraints by augmenting the network with nodes that represent the constraint likelihoods. Gibbs sampling, stochastic hill climbing and greedy search algorithms are proposed to find a MAP estimate that takes into account test ordering constraints and any data available. We demonstrate our approach on diagnostic sessions from a manufacturing scenario.
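A hedged toy of the MAP search this abstract mentions (one parameter, a made-up soft constraint, stochastic hill climbing only; the paper's networks and constraint likelihoods are far richer):

    import random

    def posterior(theta):
        """Toy objective: data likelihood times a soft test-ordering constraint.
        Everything here is hypothetical, for illustration only."""
        data_ll = theta**7 * (1 - theta)**3        # e.g. 7 successes, 3 failures
        constraint = 1.0 if theta > 0.5 else 0.1   # soft preference implied by the expert's test choice
        return data_ll * constraint

    def hill_climb(steps=2000, step=0.02):
        theta = random.random()
        for _ in range(steps):
            cand = min(max(theta + random.uniform(-step, step), 1e-6), 1 - 1e-6)
            if posterior(cand) >= posterior(theta):
                theta = cand
        return theta

    random.seed(0)
    print(round(hill_climb(), 3))  # settles near 0.7, the constrained MAP estimate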
['Omar Zia Khan', 'Pascal Poupart', 'John Mark Agosta']
Automated Refinement of Bayes Networks' Parameters based on Test Ordering Constraints
70,054
Summary: Sequencing of a bi-allelic PCR product which contains an allele with a deletion/insertion mutation results in a superimposed trace file following the site of this shift mutation. A trace file of this type hampers the use of current computer programs for base calling. ShiftDetector analyses a sequencing trace file in order to discover whether it is a superimposed sequence of two molecules that differ by a shift mutation of 1 to 25 bases. The program calculates a probability score for the existence of such a shift and reconstructs the sequence of the original molecule. Availability: ShiftDetector is available from http://cowry.
['Eyal Seroussi', 'Micha Ron', 'Darek Kedra']
ShiftDetector: detection of shift mutations
115,299
Abstract: Identifying the users and impact of research is important for research performers, managers, evaluators, and sponsors. It is important to know whether the audience reached is the audience desired. It is useful to understand the technical characteristics of the other research/development/applications impacted by the originating research, and to understand other characteristics (names, organizations, countries) of the users impacted by the research. Because of the many indirect pathways through which fundamental research can impact applications, identifying the user audience and the research impacts can be very complex and time consuming. The purpose of this article is to describe a novel approach for identifying the pathways through which research can impact other research, technology development, and applications, and to identify the technical and infrastructure characteristics of the user population. A novel literature-based approach was developed to identify the user community and its characteristics. The research performed is characterized by one or more articles accessed by the Science Citation Index (SCI) database, because the SCI's citation-based structure enables the capability to perform citation studies easily. The user community is characterized by the articles in the SCI that cite the original research articles, and that cite the succeeding generations of these articles as well. Text mining is performed on the citing articles to identify the technical areas impacted by the research, the relationships among these technical areas, and relationships among the technical areas and the infrastructure (authors, journals, organizations). A key component of text mining, concept clustering, was used to provide both a taxonomy of the citing articles' technical themes and further technical insights based on theme relationships arising from the grouping process. Bibliometrics is performed on the citing articles to profile the user characteristics. Citation Mining, this integration of citation bibliometrics and text mining, is applied to the 307 first-generation citing articles of a fundamental physics article on the dynamics of vibrating sand-piles. Most of the 307 citing articles were basic research whose main themes were aligned with those of the cited article. However, about 20% of the citing articles were research or development in other disciplines, or development within the same discipline. The text mining alone identified the intradiscipline applications and extradiscipline impacts and applications; this was confirmed by detailed reading of the 307 abstracts. The combination of citation bibliometrics and text mining provides a synergy unavailable with each approach taken independently. Furthermore, text mining is a REQUIREMENT for a feasible comprehensive research impact determination. The integrated multigeneration citation analysis required for broad research impact determination of highly cited articles will produce thousands, or even tens or hundreds of thousands, of citing article abstracts. Text mining allows the impacts of research on advanced development categories and/or extradiscipline categories to be obtained without having to read all these citing article abstracts. The multifield bibliometrics provide multiple documented perspectives on the users of the research, and indicate whether the documented audience reached is the desired target audience.
['Ronald N. Kostoff', 'J. Antonio del Río', 'James A. Humenik', 'Esther Ofilia García', 'Ana María Ramírez']
Citation mining : Integrating text mining and bibliometrics for research user profiling
381,570
Improving ASR performance on non-native speech using multilingual and crosslingual information.
['Ngoc Thang Vu', 'Yuanfan Wang', 'Marten Klose', 'Zlatka Mihaylova', 'Tanja Schultz']
Improving ASR performance on non-native speech using multilingual and crosslingual information.
754,062
Various problems in artificial intelligence can be solved by translating them into a quantified boolean formula (QBF) and evaluating the resulting encoding. In this approach, a QBF solver is used as a black box in a rapid implementation of a more general reasoning system. Most of the current solvers for QBFs require formulas in prenex conjunctive normal form as input, which makes a further translation necessary, since the encodings are usually not in a specific normal form. This additional step increases the number of variables in the formula or disrupts the formula's structure. Moreover, the most important part of this transformation, prenexing, is not deterministic. In this paper, we focus on an alternative way to process QBFs without these drawbacks and describe a solver, qpro, which is able to handle arbitrary formulas. To this end, we extend algorithms for QBFs to the non-normal form case and compare qpro with the leading normal form provers on several problems from the area of artificial intelligence. We prove properties of the algorithms generalized to non-clausal form by using a novel approach based on a sequent-style formulation of the calculus.
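The classical splitting semantics that such solvers generalize to non-clausal inputs can be sketched in a few lines of Python (a textbook recursive evaluator over an NNF formula tree, not the qpro implementation itself):

    # Formula nodes: ('var', name), ('not', ('var', name)), ('and', f, g), ('or', f, g),
    # ('forall', name, f), ('exists', name, f).  NNF: negation only on variables.
    def evaluate(f, env):
        tag = f[0]
        if tag == 'var':
            return env[f[1]]
        if tag == 'not':
            return not env[f[1][1]]
        if tag == 'and':
            return evaluate(f[1], env) and evaluate(f[2], env)
        if tag == 'or':
            return evaluate(f[1], env) or evaluate(f[2], env)
        if tag in ('forall', 'exists'):
            branches = (evaluate(f[2], {**env, f[1]: b}) for b in (False, True))
            return all(branches) if tag == 'forall' else any(branches)
        raise ValueError(tag)

    # forall x exists y. (x or y) and (not x or not y)  -- true: take y = not x
    phi = ('forall', 'x', ('exists', 'y',
           ('and', ('or', ('var', 'x'), ('var', 'y')),
                   ('or', ('not', ('var', 'x')), ('not', ('var', 'y'))))))
    print(evaluate(phi, {}))  # True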
['Uwe Egly', 'Martina Seidl', 'Stefan Woltran']
A solver for QBFs in negation normal form
441,904
Among the eight tasks of the DARPA Robotics Challenge (DRC), the driving task was one of the most challenging. Obstacles in the course prevented straight driving, and restricted communications limited the situation awareness of the operator. In this video we show how Team Hector and Team ViGIR successfully completed the driving task with different robot platforms, THOR-Mang and Atlas respectively, but using the same software and compliant steering adapter. Our driving user interface presents to the operator image views from cameras and driving aids such as wheel positioning and the turn radius path of the wheels. The operator uses a standard computer game joystick to command steering wheel angles and gas pedal pressure. Steering wheel angle positions are generated off-line and interpolated on-line in the robot's onboard computer. The compliant steering adapter accommodates end-effector positioning errors. Gas pedal pressure is generated by a binary joint position of the robot's leg. Commands are generated in the operator control station and sent as target positions to the robot. The driving user interface also provides feedback on the current steering wheel position. Video footage with descriptions from the driving interface, the robot's camera and LIDAR perception, and external task monitoring is presented.
['Alberto Romay', 'Achim Stein', 'Martin Oehler', 'Alexander Stumpf', 'Stefan Kohlbrecher', 'Oskar von Stryk', 'David C. Conner']
Open source driving controller concept for humanoid robots: Teams hector and ViGIR at 2015 DARPA robotics challenge finals
574,340
State-of-the-art narrowband noise cancellation techniques utilise the generalised eigenvalue decomposition (GEVD) for multi-channel Wiener filtering, which can be applied to independent frequency bins in order to achieve broadband processing. Here we investigate the extension of the GEVD to broadband, polynomial matrices, akin to strategies that have already been developed by McWhirter et al. on the polynomial matrix eigenvalue decomposition (PEVD). In our approach we extend the Cholesky method for calculating the scalar GEVD to polynomial matrices. In this paper we outline our Cholesky-like approach, which utilises recently developed techniques for polynomial matrix spectral factorisation and polynomial matrix inversion.
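For orientation, the scalar Cholesky route that the paper extends to polynomial matrices is standard linear algebra (background, not the paper's contribution): with $R_b = L L^H$ Hermitian positive definite and the substitution $u = L^H v$, the generalized problem $R_a v = \lambda R_b v$ becomes the ordinary eigenvalue problem

    $(L^{-1} R_a L^{-H})\, u = \lambda u$,

which has the same eigenvalues, and whose eigenvectors recover $v = L^{-H} u$.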
['Jamie Corr', 'Jennifer Pestana', 'Stephan Weiss', 'Ian K. Proudler', 'Soydan Redif', 'Marc Moonen']
Investigation of a polynomial matrix generalised EVD for multi-channel Wiener filtering
864,123
A Hybrid eBusiness Software Metrics Framework for Decision Making in Cloud Computing Environment
['Feng Zhao', 'Guodong Nian', 'Hai Jin', 'Laurence T. Yang', 'Yajun Zhu']
A Hybrid eBusiness Software Metrics Framework for Decision Making in Cloud Computing Environment
898,518
The increasing number of buffers in future technologies causes the traditional one-pass flow (timing-driven placement followed by buffer insertion and legalization) to fail, since accommodating buffers significantly disturbs the original design. This paper exploits the delicate relationship between buffer insertion and timing-driven placement, and proposes a novel method to incorporate buffer insertion during timing-driven placement. Experimental results show that this incorporation not only ensures design convergence, but also improves timing behavior and alleviates buffer explosion.
['Lijuan Luo', 'Qiang Zhou', 'Yici Cai', 'Xianlong Hong', 'Yibo Wang']
A novel technique integrating buffer insertion into timing driven placement
28,294
The paper presents a system for knowledge representation and coordination, where autonomous agents reason and act in a shared environment. Agents autonomously pursue individual goals, but can interact through a shared knowledge repository. In their interactions, agents deal with problems of synchronization and concurrency, and have to realize coordination by developing proper strategies in order to ensure a consistent global execution of their autonomously derived plans. This kind of knowledge is modeled using an extension of the action description language B. A distributed planning problem is formalized by providing a number of declarative specifications of the portion of the problem pertaining to a single agent. Each of these specifications is executable by a stand-alone CLP-based planner. The coordination platform, implemented in Prolog, is easily modifiable and extensible. New user-defined interaction protocols can be integrated.
['Agostino Dovier', 'Andrea Formisano', 'Enrico Pontelli']
BAAC: A Prolog System for Action Description and Agents Coordination
436,175
To optimize a utility boiler's combustion process, a method for combustion performance modeling based on a modular Radial Basis Function (RBF) Neural Network is proposed in this paper. The whole modeling can be divided into two stages: first, obtain the mathematical model of the carbon content of fly ash, the exhaust flue gas temperature, and their related input parameters; second, take the output of the Neural Network as the input of the boiler thermal efficiency calculation, and build a modular performance model of boiler combustion. This method expresses the boiler combustion model in parts: the parts that can be described mathematically are expressed with functions, while the parts that cannot be described mathematically are expressed with the RBF Neural Network. Data tests and practical applications show that this modeling method is efficient, has high precision, and meets the needs of a boiler's running optimization.
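A hedged miniature of the RBF component of such a model (synthetic stand-in data; the real inputs would be boiler operating parameters and the target something like the carbon content of fly ash): Gaussian basis functions around sampled centers, with linear output weights fit by least squares.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (200, 2))             # stand-ins for normalized operating parameters
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # stand-in for the measured target quantity

    centers = X[rng.choice(len(X), 20, replace=False)]   # 20 RBF centers
    width = 0.3

    def phi(A):
        """Gaussian basis activations of each row of A at every center."""
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    w, *_ = np.linalg.lstsq(phi(X), y, rcond=None)   # linear output weights
    print("train RMSE:", np.sqrt(np.mean((phi(X) @ w - y) ** 2)))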
['Zhi Li', 'Xiangfeng Wang', 'Xuewei Gao', 'Xu Zhang']
Utility boiler's combustion performance modeling based on modular RBF network
58,174
Power Analysis Based Reverse Engineering on the Secret Round Function of Block Ciphers.
['Ming Tang', 'Zhenlong Qiu', 'Weijie Li', 'Shubo Liu', 'Huanguo Zhang']
Power Analysis Based Reverse Engineering on the Secret Round Function of Block Ciphers.
758,487
This paper analyzes and compares different incentive mechanisms for a client to motivate the collaboration of smartphone users on both data acquisition and distributed computing applications.
['Lingjie Duan', 'Takeshi Kubo', 'Kohei Sugiyama', 'Jianwei Huang', 'Teruyuki Hasegawa', 'Jean C. Walrand']
Incentive mechanisms for smartphone collaboration in data acquisition and distributed computing
81,202
Synchronous collaboration via a network of distributed workstations requires concurrency awareness within a relaxed WYSIWIS model (What You See Is What I See). Many applications let users navigate within highly structured object spaces, such as documents, multimedia courses, graphs, or nested tables, which can be distributed asynchronously. To support their manipulation by distributed teams, we provide a novel interaction paradigm, called finger. A finger serves to highlight objects of interest within the shared object space. Locations of fingers, their movements, and changes inflicted on objects can be signaled by means of operation broadcasting to other collaborators who need to be aware of them. Unlike telepointers, fingers do not require window sharing and are independent of the actual object presentation. This paper describes the interaction paradigm and a basic set of operations. It uses a tele-teaching scenario to illustrate its features. However, its collaboration principles apply to many other CSCW areas, like shared authoring, trading, scheduling, crisis management, and distance maintenance.
['Bernd J. Krämer', 'Lutz M. Wegner']
Beyond the Whiteboard: synchronous collaboration in shared object spaces
39,974
A test compression/decompression scheme using statistical coding is proposed for design-for-testability (DFT) in order to reduce test application cost. In this scheme, a given test set of a VLSI circuit is compressed by statistical coding beforehand, and then decompressed while the VLSI circuit is tested. Previously, we proposed a method for generating test sets suitable for the test compression scheme. The method generates a small compressed test set, although the number of test-patterns included in the test set is not always small. In this paper, we propose a method to generate highly compressible test sets while keeping the number of generated test sets small. Experimental results show that our method can generate small, compressible test sets in short computational time.
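Statistical coding of test data is typically Huffman-style (a hedged reading; the paper's exact code construction may differ). A minimal Python sketch that assigns short codewords to frequent fixed-width test-data blocks:

    import heapq
    from collections import Counter

    def huffman_codes(blocks):
        """Map each block to a prefix-free bitstring; frequent blocks get short codes."""
        heap = [(n, i, (b,)) for i, (b, n) in enumerate(Counter(blocks).items())]
        heapq.heapify(heap)
        codes = {b: "" for b in set(blocks)}
        while len(heap) > 1:
            n1, _, g1 = heapq.heappop(heap)
            n2, i, g2 = heapq.heappop(heap)
            for b in g1: codes[b] = "0" + codes[b]   # prepend a bit as the groups merge
            for b in g2: codes[b] = "1" + codes[b]
            heapq.heappush(heap, (n1 + n2, i, g1 + g2))
        return codes

    # 4-bit slices of hypothetical scan test patterns; '0000' dominates.
    blocks = ["0000"] * 9 + ["1111"] * 3 + ["0101", "0011"]
    codes = huffman_codes(blocks)
    raw = 4 * len(blocks)
    packed = sum(len(codes[b]) for b in blocks)
    print(codes, f"{raw} -> {packed} bits")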
['Hideyuki Ichihara', 'Tomoo Inoue']
Generating small test sets for test compression/decompression scheme using statistical coding
32,638
Large power grid RLC models are complex and computationally expensive. This work explores the efficient modeling of the power grid at early stages of the design with reduced complexity. This paper presents a simplified model of the power grid circuit based on the assumption of uniform load current distribution and an equipotential nodes approximation. In addition, based on the moment matching technique, an approximate analytical model is derived. The presented models provide a huge reduction in complexity without sacrificing much accuracy in the trans-impedance frequency response compared to the full RLC power grid model. This is desirable in the early phases of design where only initial estimates of load currents are available. The analytical model presented can be easily used for power grid optimization and trade-offs.
['DiaaEldin Khalil', 'Yehea I. Ismail']
Approximate Frequency Response Models for RLC Power Grids
342,653
Fabrication of Miniature Shell Structures of Stainless Steel Foil and Their Forming Limit in Single Point Incremental Microforming
['Toshiyuki Obikawa', 'Tsutomu Sekine']
Fabrication of Miniature Shell Structures of Stainless Steel Foil and Their Forming Limit in Single Point Incremental Microforming
989,757
Business process execution data is analyzed for different reasons such as process discovery, performance analysis, or anomaly detection. However, visualizations might suffer from a number of limitations. Sonification (the presentation of data using sound) has been proven to successfully enhance visualization in many domains. Although there exist approaches that apply sonification for real-time monitoring of process executions, so far this technique has not been applied to analyze process execution data ex post. We therefore propose a multi-modal system, combining visualization and sonification, for this purpose. The concepts are evaluated by a prototypical ProM plugin as well as based on a use case.
['Tobias Hildebrandt', 'Felix Amerbauer', 'Stefanie Rinderle-Ma']
Combining Sonification and Visualization for the Analysis of Process Execution Data
962,276
We propose in the paper the concept of Phase Forward (PF) as a possible relay strategy for cooperative communication involving CPFSK modulation in a "fast" fading environment. The technique enables the relay nodes to maintain constant envelope signaling without the need to perform decoding and signal regeneration. To further reduce complexity, we adopt non-coherent discriminator detection at the destination node. A semi-analytical expression for the bit error probability (BEP) of this phase forward non-coherent CPFSK cooperative transmission scheme is derived in the paper. The analysis is general in the sense that it can accommodate different Doppler frequencies and signal-to-noise ratios in the various links. It was found that PF can provide a strong diversity effect, especially when the signal-to-noise ratio in the source-relay link is substantially stronger than those in the source-destination and the relay-destination links. Furthermore, from a BEP standpoint, phase forward is better than conventional decode-and-forward (DF) under typical channel conditions. It can also attain the same performance as amplify-and-forward (AF) when fading is static. Finally, we found that the concept of PF applies equally well to PSK modulations.
['Qi Yang', 'Paul Ho']
Cooperative Transmission with Continuous Phase Frequency Shift Keying and Phase-Forward Relays
272,961
Objective: The purpose of this study is to determine if an alternative mouse promotes more neutral postures and decreases forearm muscle activity and if training enhances these biomechanical benefits. Background: Computer mouse use is a risk factor for developing musculoskeletal disorders; alternative mouse designs can help lower these risks. Ergonomic training combined with alternative input devices could be even more effective than alternative designs alone. Methods: Thirty healthy adults (15 males, 15 females) performed a set of computer mouse tasks with a standard mouse and an alternative mouse while an electromagnetic motion analysis system measured their wrist and forearm postures and surface electromyography measured the muscle activity of three wrist extensor muscles. Fifteen participants received no training on how to hold the alternative mouse, whereas the remaining 15 participants received verbal instructions before and during use of the alternative mouse. Results: The alternative mouse was found to promote a more neutral forearm posture compared with the standard mouse (up to 11.5° lower forearm pronation); however, pronation was further reduced when instructions on how to hold the mouse were provided. Wrist extensor muscle activity was reduced for the alternative mouse (up to 1.8% of maximum voluntary contraction lower) compared with the standard mouse, but only after participants received instructions. Conclusion: The alternative mouse design decreased biomechanical exposures; however, instructions enhanced this potential ergonomic benefit of the design. Application: User knowledge and training are important factors when effectively implementing an alternative ergonomic device.
['Annemieke Houwink', 'Karen M. Oude Hengel', 'Dan Odell', 'Jack T. Dennerlein']
Providing training enhances the biomechanical improvements of an alternative computer mouse design
250,896
Do store brands aid store loyalty by enhancing store differentiation or merely draw price-sensitive customers with little or no store loyalty? This paper seeks to answer this question by empirically investigating the relationship between store brand loyalty and store loyalty. First, we find a robust, monotonic, positive relationship between store brand loyalty and store loyalty by using multiple loyalty metrics and data from multiple retailers and by controlling for alternative factors that can influence store loyalty. Second, we take advantage of a natural experiment involving a store closure and find that the attrition in chain loyalty is lower for households with greater store brand loyalty prior to store closure. Together, our results are consistent with evidence for the store differentiation role of store brands. This paper was accepted by J. Miguel Villas-Boas, marketing .
['Satheesh Seenivasan', 'K. Sudhir', 'Debabrata Talukdar']
Do Store Brands Aid Store Loyalty
53,569
We investigate a general framework of multiplicative multitask feature learning which decomposes individual task's model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed, suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks.
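In symbols, one common instantiation of the decomposition described above (hedged: the paper studies a whole family of regularizers, so take this as an example rather than the paper's formulation) writes each task's weight vector as $w_t = c \odot v_t$, with shared component $c$ and task-specific component $v_t$, and solves

    $\min_{c \ge 0,\, \{v_t\}} \; \sum_{t=1}^{T} L_t(c \odot v_t) \;+\; \alpha \lVert c \rVert_1 \;+\; \beta \sum_{t=1}^{T} \lVert v_t \rVert_2^2$,

where the $\ell_1$ penalty on $c$ selects features shared across tasks and the $\ell_2$ penalties control the task-specific parts.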
['Xin Wang', 'Jinbo Bi', 'Shipeng Yu', 'Jiangwen Sun', 'Minghu Song']
Multiplicative multitask feature learning
835,305
Two families of complementary codes over finite fields $\mathbb{F}_q$ are studied, where $q=r^2$ is a square: i) Hermitian complementary dual linear codes, and ii) trace Hermitian complementary dual subfield linear codes. Necessary and sufficient conditions for a linear code (resp., a subfield linear code) to be Hermitian complementary dual (resp., trace Hermitian complementary dual) are determined. Constructions of such codes are given together with their parameters. Some illustrative examples are provided as well.
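For context, the Hermitian inner product on $\mathbb{F}_q^n$ with $q = r^2$ is $\langle u, v \rangle_H = \sum_{i=1}^{n} u_i v_i^r$. The Massey-style criterion alluded to above (stated from memory of the standard result; treat it as a hedged paraphrase rather than the paper's exact theorem) is that a linear code with generator matrix $G$ is Hermitian complementary dual precisely when $G \bar{G}^T$ is nonsingular, where $\bar{G}$ raises every entry of $G$ to the $r$-th power.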
['Kriangkrai Boonniyom', 'Somphong Jitman']
Complementary Dual Subfield Linear Codes Over Finite Fields
777,792
The paper is concerned with timing recovery for ultra-wideband communications operating in a dense multipath environment. A timing algorithm is proposed that exploits the samples of the received signal to estimate the start of the individual frames with respect to a local reference (frame timing) and the location of the first frame in each symbol (symbol timing). Channel estimation is inherent in the timing algorithm and can be used for coherent matched filter detection.
['Cecilia Carbonelli', 'Umberto Mengali']
Timing recovery for UWB signals
122,932
We present a set of modular series-elastic actuators (SEAs) that allow rapid and robust prototyping of mobile legged robots. The SEA modules were originally developed for a snake robot, SEA Snake, and have recently been reconfigured into Snake Monster, a multi-modal walking robot that can be easily adapted to hexapod, quadruped, and biped configurations. The use of SEAs allows the implementation of a compliant hybrid controller using both position and force-based walking. This paper presents the mechanical design, control architecture, and initial locomotion experiments using the Snake Monster platform. Additionally, we discuss the enhanced capabilities, pertaining particularly to search and rescue applications, enabled by the use of our modular hardware. Finally, we highlight how these modules provide a powerful tool for both field deployment requiring locomotion and manipulation tasks.
['Simon Kalouche', 'David Rollinson', 'Howie Choset']
Modularity for maximum mobility and manipulation: Control of a reconfigurable legged robot with series-elastic actuators
702,582
Observing the environment is the raison d'être of sensor networks, but the precise reconstruction of the measured process requires too many messages for a low power sensor network. The number of messages can be reduced by focussing on events relevant for the application. This paper examines three event definitions for a home automation example and compares the message reduction. In the extreme case, the number of transmitted messages drops from more than 3000 messages per node and hour to a maximum of about two messages per node and hour.
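A hedged sketch of the kind of event filter such a scheme uses (the threshold and readings below are made up; the paper's home automation event definitions are richer): transmit only when the reading departs from the last reported value by more than a threshold.

    def event_filter(samples, threshold):
        """Yield only samples that differ from the last *sent* value by > threshold."""
        last_sent = None
        for t, v in samples:
            if last_sent is None or abs(v - last_sent) > threshold:
                last_sent = v
                yield t, v

    # One reading per minute; the temperature barely moves, so few events fire.
    readings = [(t, 21.0 + 0.02 * (t % 7)) for t in range(180)]
    sent = list(event_filter(readings, threshold=0.5))
    print(f"{len(readings)} raw samples -> {len(sent)} messages")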
['Andreas Köpke', 'Adam Wolisz']
What’s new? Message reduction in sensor networks using events
475,300
Dynamic Provenance for SPARQL Updates Using Named Graphs.
['Harry Halpin', 'James Cheney']
Dynamic Provenance for SPARQL Updates Using Named Graphs.
732,720
Characterizing the distribution of the sum of correlated Gamma random variables (RVs), especially the sum of those with unequal fading and power parameters, is still an open issue. In this paper, based on the Cholesky factorization of the covariance matrix and the moment-matching method, we propose an approximate expression for the probability density function (PDF) of the sum of correlated Gamma RVs with unequal fading and power parameters and an arbitrary correlation matrix. The proposed PDF expression is simple, accurate, and closed-form, and thus can be conveniently used for general performance analysis in wireless communications. Simulation results are used to confirm the validity of the proposed PDF expression. A performance analysis of maximal-ratio combining (MRC) diversity systems and cellular mobile radio systems in wireless communications using the proposed PDF expression is also presented.
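A much simpler two-moment baseline (not the paper's Cholesky-based construction; just an illustration of the moment-matching idea with made-up parameters) fits a single Gamma to the sum by matching its mean and variance:

    import numpy as np

    # Gamma(k_i, theta_i) marginals plus a covariance matrix for the sum's variance.
    k = np.array([1.5, 2.0, 3.0])        # shape (fading) parameters, unequal
    theta = np.array([0.8, 1.2, 0.5])    # scale (power) parameters, unequal
    cov = np.diag(k * theta**2)          # start from the independent variances
    cov[0, 1] = cov[1, 0] = 0.4          # inject some positive correlation
    cov[1, 2] = cov[2, 1] = 0.3

    mean_sum = (k * theta).sum()         # E[sum X_i]
    var_sum = cov.sum()                  # Var[sum X_i] = sum of all covariances
    k_hat = mean_sum**2 / var_sum        # matched Gamma shape
    theta_hat = var_sum / mean_sum       # matched Gamma scale
    print(f"approx sum ~ Gamma(shape={k_hat:.3f}, scale={theta_hat:.3f})")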
['Yizhi Feng', 'Miaowen Wen', 'Jun Zhang', 'Fei Ji', 'Gengxin Ning']
Sum of arbitrarily correlated Gamma random variables with unequal parameters and its application in wireless communications
692,505
Private set intersection (PSI) enables parties to compute the intersection of their input sets privately, which has wide applications. Nowadays, with the development of cloud computing, several server-aided PSI protocols have been proposed. Unfortunately, all existing server-aided PSI protocols assume the server provider does not collude with any user and that all the parties use the same encryption key, which has two main disadvantages: one is that the protocols require user interaction to agree on a random key; the other is that once one party is successfully attacked, all communications are compromised. In this work, we propose a two-server-aided PSI protocol under multiple keys, combining symmetric key proxy re-encryption (SKPRE), first introduced by Boneh et al. (CRYPTO 2013), with a social reputation system. By using the two-server-aided model and SKPRE, our protocol is suitable for multiparty set intersection, and each party uses a different key for encryption, which is more secure than previous server-aided schemes. To prevent collusion, we assume each party has a reputation value updated according to his behavior in the current PSI protocol. Parties who defect from the PSI protocol will be penalized, stimulating them to be cooperative and foresighted. Finally, the protocol can be computed with complete fairness. Moreover, the protocol does not require public key infrastructures, eliminates user interaction, and can outsource expensive computation from a computationally weak client to a powerful server, which is suitable for mobile cloud computing.
['En Zhang', 'Fenghua Li', 'Ben Niu', 'Yanchao Wang']
Server-aided private set intersection based on reputation
899,387
Recognizing human faces in various lighting conditions is quite difficult for a surveillance system. The problem becomes more difficult if face images are taken in extremely high dynamic range scenes. Most automatic face recognition systems assume the images are taken under well controlled illumination. The face segmentation as well as the recognition problem is much simpler under such a constrained condition. However, controlling illumination is not feasible when the surveillance system is installed at arbitrary locations. Without compensating for the effect of uneven illuminants, it is impossible to get a satisfactory recognition result. In this paper, we propose an integrated system that first compensates for the illuminant effect by local contrast enhancement. The enhanced images are then fed into a robust face recognition engine which adaptively selects important features and performs classification by support vector machines (SVMs). The experimental results show that the proposed recognition system outperforms recently published methods on two popular human face image databases.
['Wen Chung Kao', 'Ming Chai Hsu']
Local contrast enhancement for human face recognition in poor lighting conditions
258,254
Experiential computer art
['Lucy Petrovich', 'Maurice Benayoun', 'Tammy Knipp', 'Thomas Lehner', 'Christa Sommerer']
Experiential computer art
94,142
Correctness problems in iBGP routing, the de-facto standard to spread global routing information in Autonomous Systems, are a well-known issue. Configurations may route cost-suboptimally or inconsistently, or even behave non-convergently and non-deterministically. However, even though many studies have shown exemplary problematic configurations, the exact scope of the problem is largely unknown: up to now, it is not clear which problems may appear under which iBGP architectures. The exact scope of the iBGP correctness problem is of high theoretical and practical interest. Knowledge of the resistance of specific architecture schemes against certain anomaly classes, and of the reasons for it, may help to improve other iBGP schemes. Knowledge of the specific problems of the different schemes helps to identify the right scheme for an AS and to develop workarounds.
['Uli Bornhauser', 'Peter Martini', 'Martin Horneffer']
The Scope of the IBGP Routing Anomaly Problem
271,770
As wearable devices become more popular, situations where there are multiple persons present with such devices will become commonplace. In these situations, wearable devices could support collaborative tasks and experiences between co-located persons through multi-user applications. We present an elicitation study that gathers from end users interaction methods for wearable devices for two common tasks in co-located interaction: group binding and cross-display object movement. We report a total of 154 methods collected from 30 participants. We categorize the methods based on the metaphor and modality of interaction, and discuss the strengths and weaknesses of each category based on qualitative and quantitative feedback given by the participants.
['Tero Jokela', 'Parisa Pour Rezaei', 'Kaisa Väänänen']
Natural group binding and cross-display object movement methods for wearable devices
883,606
Health care providers continue to feel the pressure in providing adequate care for an increasing elderly population. If length of stay patterns for elderly patients in care can be captured through analytical modelling, then accurate predictions may be made on when they are expected to leave hospital. The Discrete Conditional Phase-type (DC-Ph) model is an effective technique through which length of stay in hospital can be modelled and consists of both a conditional and a process component. This research expands the DC-Ph model by introducing a survival tree as the conditional component, whereby covariates are used to partition patients into cohorts based on their distribution of length of stay in hospital. The Coxian phase-type distribution is then used to model the length of stay for patients belonging to each cohort. A demonstration of how patient length of stay may be predicted for new admissions using this methodology is then given. This tool has the benefit of providing an aid to the decision making processes undertaken by hospital managers and has the potential to result in the more effective allocation of hospital resources. Hospital admission data from the Lombardy region of Italy is used as a case-study.
['Andrew S. Gordon', 'Adele Marshall', 'Mariangela Zenga']
A Discrete Conditional Phase-Type Model Utilising a Survival Tree for the Identification of Elderly Patient Cohorts and Their Subsequent Prediction of Length of Stay in Hospital
865,105
The nearest neighbor classifier is one of the most popular non-parametric classification methods. It is very simple, intuitive and accurate in a great variety of real-world applications. Despite its simplicity and effectiveness, practical use of this decision rule has been historically limited due to its high storage requirements and the computational costs involved. In order to overcome these drawbacks, it is possible either to employ fast search algorithms or to use a training set size reduction scheme. The present paper provides a comparative analysis of fast search algorithms and data reduction techniques to assess their pros and cons from both theoretical and practical viewpoints.
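As a hedged illustration of the training-set reduction family the paper surveys, here is Hart's classic condensed nearest neighbor rule in Python (a well-known representative, not necessarily one of the specific schemes the paper compares):

    import numpy as np

    def nn_predict(X_ref, y_ref, x):
        """Label of the nearest reference point under Euclidean distance."""
        return y_ref[np.argmin(((X_ref - x) ** 2).sum(axis=1))]

    def condense(X, y):
        """Hart's CNN: grow a subset by adding every point it currently misclassifies."""
        keep = [0]
        changed = True
        while changed:
            changed = False
            for i in range(len(X)):
                if i not in keep and nn_predict(X[keep], y[keep], X[i]) != y[i]:
                    keep.append(i)
                    changed = True
        return np.array(keep)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    idx = condense(X, y)
    print(f"stored prototypes: {len(idx)} of {len(X)}")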
['José Salvador Sánchez', 'José Martínez Sotoca', 'Filiberto Pla']
Efficient nearest neighbor classification with data reduction and fast search algorithms
389,762
Footwear impressions are one of the most frequently secured types of evidence at crime scenes. For the investigation of crime series they are among the major investigative notes. In this paper, we introduce an unsupervised footwear retrieval algorithm that is able to cope with unconstrained noise conditions and is invariant to rigid transformations. A main challenge for the automated impression analysis is the separation of the actual shoe sole information from the structured background noise. We approach this issue by the analysis of periodic patterns. Given unconstrained noise conditions, the redundancy within periodic patterns makes them the most reliable information source in the image. In this work, we present four main contributions: First, we robustly measure local periodicity by fitting a periodic pattern model to the image. Second, based on the model, we normalize the orientation of the image and compute the window size for a local Fourier transformation. In this way, we avoid distortions of the frequency spectrum through other structures or boundary artefacts. Third, we segment the pattern through robust point-wise classification, making use of the property that the amplitudes of the frequency spectrum are constant for each position in a periodic pattern. Finally, the similarity between footwear impressions is measured by comparing the Fourier representations of the periodic patterns. We demonstrate robustness against severe noise distortions as well as rigid transformations on a database with real crime scene impressions. Moreover, we make our database available to the public, thus enabling standardized benchmarking for the first time.
['Adam Kortylewski', 'Thomas Albrecht', 'Thomas Vetter']
Unsupervised Footwear Impression Analysis and Retrieval from Crime Scene Data
658,564
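The final contribution, comparing Fourier representations of periodic patterns, can be illustrated compactly. The sketch below is my simplification, not the authors' full pipeline with periodicity estimation and segmentation: it scores two patches by the cosine similarity of their Fourier magnitude spectra, which is invariant to translation and fairly robust to additive noise.

```python
import numpy as np

def spectral_similarity(a, b):
    """Compare two image patches by the magnitudes of their 2-D Fourier
    spectra (translation-invariant), a stand-in for the paper's
    periodic-pattern comparison."""
    A = np.abs(np.fft.fft2(a - a.mean())).ravel()
    B = np.abs(np.fft.fft2(b - b.mean())).ravel()
    return float(A @ B / (np.linalg.norm(A) * np.linalg.norm(B)))

rng = np.random.default_rng(1)
x = np.linspace(0, 8 * np.pi, 64)
pattern = np.sin(x)[None, :] * np.ones((64, 1))          # periodic sole pattern
noisy = pattern + 0.5 * rng.standard_normal((64, 64))    # crime-scene noise
print(f"similarity: {spectral_similarity(pattern, noisy):.3f}")
```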
Prototype of Smart Phone Supporting TV White-Spaces LTE System
['Takeshi Matsumura', 'Kazuo Ibuka', 'Kentaro Ishizu', 'Homare Murakami', 'Fumihide Kojima', 'Hiroyuki Yano', 'Hiroshi Harada']
Prototype of Smart Phone Supporting TV White-Spaces LTE System
751,648
In mobile computing, context-awareness indicates the ability of a system to obtain and use information on aspects of the system environment. To implement context-awareness, mobile system components have to be augmented with the ability to capture aspects of their environment. Recent work has mostly considered location-awareness, and hence augmentation of mobile artifacts with locality. We discuss augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location. We report experience from two projects, one on augmentation of mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.
['Hans-Werner Gellersen', 'A. Schmidt', 'Michael Beigl']
Adding some smartness to devices and everyday things
456,343
In this note we consider register-machines with symbol manipulation capabilities. They can form words over a given alphabet in their registers by appending symbols to the strings already stored. These machines are similar to Post's normal systems and the related machine-models discussed in the literature. But unlike the latter devices they are deterministic and are not allowed to read symbols from the front of the registers. Instead they can compare registers and erase them. At first glance it is surprising that in general these devices are as powerful as the seemingly stronger models. Here we investigate the borderline of universality for these machines. Mathematics Subject Classification: 03D20, 68Q05, 68Q10.
['Holger Petersen']
The Computation of Partial Recursive Word-Functions Without Read Instructions
392,156
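These machines are easy to make concrete. The toy simulator below (a Python sketch; the instruction names are my own) supports exactly the operations the note allows: appending a symbol, erasing a register, and comparing two registers with a conditional jump; it never reads from the front of a register. The example program doubles a unary input.

```python
# A toy deterministic register machine: APPEND a symbol to a register,
# ERASE a register, and JEQ (jump if two registers hold equal strings).
def run(program, registers, max_steps=1000):
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return registers
        op, *args = program[pc]
        if op == "APPEND":                 # APPEND reg sym
            registers[args[0]] += args[1]
        elif op == "ERASE":                # ERASE reg
            registers[args[0]] = ""
        elif op == "JEQ":                  # JEQ reg1 reg2 target
            if registers[args[0]] == registers[args[1]]:
                pc = args[2]
                continue
        pc += 1
    raise RuntimeError("step limit exceeded")

# Double the number of a's in register 0 into register 1: grow a prefix
# in register 2 one 'a' at a time, appending 'aa' to the output each step.
prog = [
    ("JEQ", 2, 0, 5),     # prefix == input? then halt
    ("APPEND", 2, "a"),   # grow the prefix
    ("APPEND", 1, "a"),
    ("APPEND", 1, "a"),
    ("JEQ", 1, 1, 0),     # unconditional jump back (r1 == r1 always)
]
print(run(prog, {0: "aaa", 1: "", 2: ""}))  # register 1 ends as 'aaaaaa'
```

Note that the comparison in line 0 replaces the forbidden "read from the front": the machine learns the input length only by growing a second register until the two compare equal.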
Hierarchical Task Network (HTN) planning with Task Insertion (TIHTN planning) is a formalism that hybridizes classical planning with HTN planning by allowing the insertion of operators from outside the method hierarchy. This additional capability has some practical benefits, such as allowing more flexibility in the design choices of HTN models: the task hierarchy may be specified only partially, since "missing required tasks" may be inserted during planning rather than prior to planning by means of the (predefined) HTN methods. While task insertion in a hierarchical planning setting has already been applied in practice, its theoretical properties have not yet been studied in detail - only EXPSPACE membership is known so far. We lower that bound by proving NEXPTIME-completeness and further prove tight complexity bounds along two axes: whether variables are allowed in method and action schemas, and whether methods must be totally ordered. We also introduce a new planning technique called acyclic progression, which we use to define provably efficient TIHTN planning algorithms.
['Ron Alford', 'Pascal Bercher', 'David W. Aha']
Tight bounds for HTN planning with task insertion
777,038
Global versus Regional User Requirements for the Vehicle HMI
['Iris Menrath', 'Verena Wagner', 'Stefan Wolter', 'Stefan Becker']
Global versus Regional User Requirements for the Vehicle HMI
645,157
Adaptive classification is an important online problem in data analysis. The nonlinear and nonstationary nature of much data makes standard static approaches unsuitable. In this paper, we propose a set of sequential dynamic classification algorithms based on extensions of nonlinear variants of Bayesian Kalman processes and dynamic generalized linear models. The approaches are shown to work well not only in their ability to track changes in the underlying decision surfaces but also in their ability to handle missing data in a principled manner. We investigate both situations in which target labels are unobserved and those where incoming sensor data are unavailable. We extend the models to allow for active label requesting, for use in situations in which there is a cost associated with such information and hence a fully labelled target set is prohibitive.
['Seung Min Lee', 'Stephen J. Roberts']
Sequential Dynamic Classification Using Latent Variable Models
513,370
Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.
['Min Chul Yang', 'Nan Duan', 'Ming Zhou', 'Hae Chang Rim']
Joint Relational Embeddings for Knowledge-based Question Answering
614,655
We investigate two-channel complex-valued filterbanks and wavelets that simultaneously have orthogonality and symmetry properties. First, the conditions for the filterbank to be orthogonal, symmetric, and regular (for generating smooth wavelets) are presented. Then, a complete and minimal lattice structure is developed, which enables a general design approach for filterbanks and wavelets with arbitrary length and arbitrary order of regularity. Finally, two integer implementation methods that preserve the perfect reconstruction property of the filterbank are proposed. Their performances are evaluated via experimental results.
['Xiqi Gao', 'Truong Q. Nguyen', 'Gilbert Strang']
A study of two-channel complex-valued filterbanks and wavelets with orthogonality and symmetry properties
267,874
Summary: pymzML is an extension to Python that offers (a) easy access to mass spectrometry (MS) data, which allows the rapid development of tools, (b) a very fast parser for mzML data, the standard data format in mass spectrometry, and (c) a set of functions to compare or handle spectra. Availability and Implementation: pymzML requires Python 2.6.5+ and is fully compatible with Python 3. The module is freely available on http://pymzml.github.com or PyPI, is published under the LGPL license, and requires no additional modules to be installed. Contact: [email protected]
['Till Bald', 'Johannes Barth', 'Anna Niehues', 'Michael Specht', 'Michael Hippler', 'Christian Fufezan']
pymzML—Python module for high-throughput bioinformatics on mass spectrometry data
95,061
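For orientation, this is the kind of rapid tool development the module targets. The snippet follows the usage pattern from the pymzML documentation; attribute names have shifted between versions, and the file path and threshold are placeholders.

```python
import pymzml

# Iterate over all spectra in an mzML file and count MS1 spectra whose
# total ion current exceeds a threshold (path and threshold are examples).
run = pymzml.run.Reader("example.mzML")
n_hits = 0
for spectrum in run:
    if spectrum.ms_level == 1 and sum(spectrum.i) > 1e6:
        n_hits += 1
print(f"{n_hits} MS1 spectra above threshold")
```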
Since compressed video sequences may be corrupted or lost when transmitted over error-prone networks, error concealment techniques are very important for video communication. As a video sequence is a collection of high-dimensional data, it can be considered a large 3rd-order tensor. Methodologies that matricize high-dimensional data and then apply matrix-based methods for further analysis often cause a loss of internal structure information. Therefore, we built a tensor model to process such data, in order to preserve the natural multilinear structure. The key idea of our tensor model has two parts. The first part is to construct a small tensor consisting of the corrupted block and several of its reference blocks. This part could be accomplished by block matching, but traditional block matching might give a wrong match when the corrupted area is large and continuous. To overcome this, we propose a flexible block matching scheme (FBM). The second part is to recover the data in the corrupted part by tensor low-rank approximation. Unlike traditional low-rank approximation, we do not fix the rank of the core tensor as a constant. Instead, the rank is flexible in our method and adapts to the situation. Compared with other error concealment methods in the experiments, our method achieves significantly higher PSNR (Peak Signal to Noise Ratio) as well as better visual quality.
['Zhiheng Zhou', 'Ming Dai', 'Ruzheng Zhao', 'Bo Li', 'Huiqiang Zhong', 'Yiming Wen']
Video error concealment scheme based on tensor model
871,491
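As a sketch of the second part, the snippet below performs a truncated higher-order SVD of a synthetic 3rd-order tensor with NumPy. It uses fixed mode ranks rather than the paper's adaptive rank selection, and plain low-rank approximation rather than the full concealment pipeline.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_approx(T, ranks):
    """Truncated higher-order SVD: a simple, non-adaptive stand-in for
    the paper's flexible low-rank tensor approximation."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):          # project onto mode subspaces
        core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    approx = core
    for m, Um in enumerate(U):          # expand back to full size
        approx = np.moveaxis(np.tensordot(Um, np.moveaxis(approx, m, 0), axes=1), 0, m)
    return approx

# a synthetic "video block" tensor: low-rank structure plus noise
rng = np.random.default_rng(0)
low = np.einsum('i,j,k->ijk', *[rng.standard_normal(16) for _ in range(3)])
T = low + 0.05 * rng.standard_normal((16, 16, 16))
err = np.linalg.norm(T - hosvd_approx(T, (2, 2, 2))) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.3f}")
```

In the concealment setting, the reference blocks stacked with the corrupted block supply the redundancy that makes the low-rank reconstruction plausible.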
Background: Phenotypes form the basis for determining the existence of a disease against the given evidence. Much of this evidence though remains locked away in text – scientific articles, clinical trial reports and electronic patient records (EPR) – where authors use the full expressivity of human language to report their observations.
['Nigel Collier', 'Anika Oellrich', 'Tudor Groza']
Concept selection for phenotypes and diseases using learn to rank
118,939
This paper deals with three classes of generalized vector quasi-equilibrium problems with or without compactness assumptions. Using the well-known Fan-KKM theorems, existence theorems for them are established. Some examples are given to illustrate our results.
['Li X', 'S. J. Li']
Existence of solutions for generalized vector quasi-equilibrium problems
237,097
Let $d_q(n,k)$ be the maximum possible minimum Hamming distance of a $q$-ary $[n,k,d]$-code for given values of $n$ and $k$. It is proved that $d_4(33,5)=22$, $d_4(49,5)=34$, $d_4(131,5)=96$, $d_4(142,5)=104$, $d_4(147,5)=108$, $d_4(152,5)=112$, $d_4(158,5)=116$, $d_4(176,5)\geq 129$, $d_4(180,5)\geq 132$, $d_4(190,5)\geq 140$, $d_4(195,5)=144$, $d_4(200,5)=148$, $d_4(205,5)=152$, $d_4(216,5)=160$, $d_4(227,5)=168$, $d_4(232,5)=172$, $d_4(237,5)=176$, $d_4(240,5)=178$, $d_4(242,5)=180$, and $d_4(247,5)=184$. A survey of the results of recent work on bounds for quaternary linear codes in dimensions four and five is made and a table with lower and upper bounds for $d_4(n,5)$ is presented.
['Iliya Boukliev', 'Rumen N. Daskalov', 'Stoyan N. Kapralov']
Optimal quaternary linear codes of dimension five
75,275
Boundary labeling deals with annotating features in images such that labels are placed outside of the image and are connected by curves, so-called leaders, to the corresponding features. While boundary labeling has been extensively investigated from an algorithmic perspective, research on its readability has been neglected. In this paper we present the first formal user study on the readability of boundary labeling. We consider the four most studied leader types with respect to their performance, i.e., whether and how fast a viewer can assign a feature to its label and vice versa. We give a detailed analysis of the results regarding the readability of the four models and discuss their aesthetic qualities based on the users' preference judgments and interviews.
['Lukas Barth', 'Andreas Gemsa', 'Benjamin Niedermann', 'Martin Nöllenburg']
On the Readability of Boundary Labeling
548,486
Kraken: Leveraging Live Traffic Tests to Identify and Resolve Resource Utilization Bottlenecks in Large Scale Web Services.
['Kaushik Veeraraghavan', 'Justin Meza', 'David Chou', 'Wonho Kim', 'Sonia Margulis', 'Scott Michelson', 'Rajesh Nishtala', 'Daniel Obenshain', 'Dmitri Perelman', 'Yee Jiun Song']
Kraken: Leveraging Live Traffic Tests to Identify and Resolve Resource Utilization Bottlenecks in Large Scale Web Services.
994,787
E-commerce (EC) not only holds opportunities for business growth but also poses challenges. These challenges have a significant impact on EC success. By examining drivers of business performance in EC from a firm's resource perspective, the author conceptualises three antecedents of a firm's human resources that are critical for business performance in EC-based business: managerial expertise, top management support and learning capability. Defining EC capability in terms of a firm's human resources, EC is expected to make a significant contribution to business performance (financial and non-financial). In this paper, we propose a conceptual framework to identify the human resource attributes and their contribution to developing a firm's EC capability, which leads to better performance. In effect this research provides a checklist for managers to utilise human resources and redesign their strategies for successful EC technology implementation and usage.
['Muhammad Jehangir', 'P. D. D. Dominic', 'Alamgir Khan', 'Naseebullah']
The contribution of human resources to e-commerce capability and business performance: a structural equation modelling
347,764
Remote sensing has been widely applied for environmental monitoring by means of change detection techniques, commonly for identifying signs of deforestation, which is the gateway to illegal activities such as uncontrolled urban growth and grazing pasture. Monthly acquired X-Band images from airborne Synthetic Aperture Radar (SAR) provided the multi-temporal scenes employed in this work, resulting in environmental incident reports forwarded to the responsible authorities. The present work proposes the use of both Superpixel segmentation by Simple Linear Iterative Clustering (SLIC) and change detection by Object Correlation Images (OCI), not yet applied to multi-temporal X-Band high resolution SAR images, together with a simple Multilayer Perceptron (MLP) supervised learning technique for detecting and classifying the changes into relevant activities. Experiments have been performed using SAR imagery acquired by the BRADAR airborne sensor OrbiSAR-2 over the Brazilian Atlantic Forest, which revealed possible deforestation activities; the achieved results are compared with those obtained by experts.
['Thiago Luiz Morais Barreto', 'Rafael A. S. Rosa', 'Christian Wimmer', 'Joao B. Nogueira', 'Jurandy Almeida', 'Fabio A. M. Cappabianco']
Deforestation change detection using high-resolution multi-temporal X-Band SAR images and supervised learning classification
927,842
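A minimal sketch of the proposed chain (superpixels, then per-segment features, then an MLP) using scikit-image and scikit-learn. The input image and the change labels here are placeholders, since the BRADAR SAR data is not public and the paper's feature set is richer.

```python
import numpy as np
from skimage import data, segmentation
from sklearn.neural_network import MLPClassifier

img = data.astronaut()                      # stand-in for a SAR scene
labels = segmentation.slic(img, n_segments=200, compactness=10)

# per-superpixel mean-colour features (the paper uses richer statistics)
feats = np.array([img[labels == s].mean(axis=0) for s in np.unique(labels)])

# hypothetical change/no-change annotations, one per superpixel
y = (feats[:, 0] > feats[:, 0].mean()).astype(int)   # placeholder labels
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(feats, y)
print("segments:", len(feats), "training accuracy:", clf.score(feats, y))
```

The appeal of the superpixel step is that classification happens per segment rather than per pixel, which both denoises the SAR speckle and shrinks the training problem.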
Summary. The Modifiable Areal Unit Problem (MAUP) prevails in the analysis of spatially aggregated data and influences pattern recognition. It describes the sensitivity of the measurement of spatial phenomena to the size (the scale problem) and the shape (the aggregation problem) of the mapping unit. The problem has received much attention from fields as diverse as statistical physics, image processing, human geography, landscape ecology, and biodiversity conservation. Recently, in the field of spatial ecology, a Bayesian estimation was proposed to grasp how our description of species distribution (described by range size and spatial autocorrelation) changes with the size and the shape of grain. This Bayesian estimation (BYE), called the scaling pattern of occupancy, is derived from the comparison of pair approximation (in the spatial analysis of cellular automata) and join-count statistics (in spatial autocorrelation analysis) and has been tested using various sources of data. This chapter explores how the MAUP can be described and potentially solved by the BYE. Specifically, the scale and the aggregation problems are analyzed using simulated data from an individual-based model. The BYE will thus help to finalize a comprehensive solution to the MAUP.
['C. Hui']
A Bayesian Solution to the Modifiable Areal Unit Problem
52,226
Networked multiagent systems are very popular in large-scale application environments. In networked multiagent systems, the interaction structures can be shaped as networks in which each agent occupies a position determined by its relations with other agents. To avoid collisions between agents, each agent's choice of strategies should match its own interaction position, so that the strategies available to all agents are in line with their interaction structures. Therefore, this paper presents a novel decision-making model for networked multiagent strategies based on their interaction structures, where the set of strategies for an agent is conditionally decided by other agents within its dependence interaction substructure. With the presented model, the resulting strategies available to all agents can minimize the collisions of multiagents regarding their interaction structures, and the model produces the same resulting strategies for isomorphic interaction structures. Furthermore, this paper uses a multiagent citation network as a case study to demonstrate the effectiveness of the presented decision-making model.
['Yichuan Jiang', 'Jing Hu', 'Donghui Lin']
Decision Making of Networked Multiagent Systems for Interaction Structures
179,592
Statistics and the Law.
['Mary W. Gray']
Statistics and the Law.
755,963
Cartesian products of graphs and hypergraphs have been studied since the 1960s. For (un)directed hypergraphs, unique prime factor decomposition (PFD) results with respect to the Cartesian product are known. However, there is still a lack of algorithms that compute the PFD of directed hypergraphs with respect to the Cartesian product. In this contribution, we focus on the algorithmic aspects of determining the Cartesian prime factors of a finite, connected, directed hypergraph and present a first polynomial time algorithm to compute its PFD. In particular, the algorithm has time complexity $O(|E||V|r^2)$ for hypergraphs $H=(V,E)$, where the rank $r$ is the maximum number of vertices contained in a hyperedge of $H$. If $r$ is bounded, then this algorithm performs even in $O(|E|\log^2(|V|))$ time. Thus, our method also improves the time complexity of PFD-algorithms designed for undirected hypergraphs.
['Marc Hellmuth', 'Florian Lehner']
Fast Factorization of Cartesian products of (Directed) Hypergraphs
567,374
Modelling of Phosphorus and Gallium doped Nano-GNRFET based gas sensor
['Shyam Sumukh S. R', 'Akshay Moudgil', 'Sundaram Swaminathan']
Modelling of Phosphorus and Gallium doped Nano-GNRFET based gas sensor
647,508
Prior theoretical research has established that many software products are subject to network effects and exhibit the characteristics of two-sided markets. However, despite the importance of the software industry to the world economy, few studies have attempted to empirically examine these characteristics, or several others which theory suggests impact software price. This study develops and tests a research-grounded model of two-sided software markets that accounts for several key factors influencing software pricing, including network externalities, cross-market complementarities, standards, mindshare, and trialability. Applying the model to the context of the market for Web server software, several key findings are offered. First, a positive market share to price relationship is identified, offering support for the network externalities hypothesis even though the market examined is based on open standards. Second, the results suggest that the market under study behaves as a two-sided market in that firms able to capture market share for one product enjoy benefits in terms of both market share and price for the complement. Third, the positive price benefits of securing consumer mindshare, of supporting dominant standards, and from offering a trial product are demonstrated. Last, a negative price shock is also identified in the period after a well-known, free-pricing rival has entered the market. Nonetheless, network effects continued to remain significant during the period. These findings enhance our understanding of software markets, offer new techniques for examining such markets, and suggest the wisdom of allocating resources to develop advantages in the factors studied.
['John Gallaugher', 'Yu-Ming Wang']
Understanding network effects in software markets: evidence from web server pricing
300,496
The paper derives lower bounds on the redundancy, the protection capacity to working capacity ratio, needed by link-protection or link-restoration. Homogeneity conditions, on which the well-known redundancy bound $1/(\bar{d}-1)$ holds for a network, are also identified, where $\bar{d}$ is the average nodal degree.
['Dominic A. Schupke']
Lower bounds on the redundancy of link-recovery mechanisms
471,322
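To make the bound tangible: in a ring every node has degree 2, so the protection-to-working ratio must be at least 1/(2-1) = 100%, while denser topologies need proportionally less spare capacity. A quick check in Python:

```python
# redundancy lower bound 1/(d_bar - 1), where d_bar is the average nodal degree
for name, d_bar in [("ring", 2.0), ("mesh", 3.0), ("dense mesh", 5.0)]:
    print(f"{name:>10}: d_bar = {d_bar}, redundancy >= {1 / (d_bar - 1):.0%}")
# ring needs 100% spare capacity; a degree-5 network only 25%
```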
Purpose: Providing access to patient information is the key factor in nurses' adoption of a Nursing Information System (NIS). In this study the requirements for information quality and the perceived quality of information are investigated. A teaching hospital in the Netherlands has developed a NIS as a module of the Hospital Information System. After the NIS was implemented in six wards in March 2009, the NIS was evaluated. Methods: A paper questionnaire was distributed among all 195 nurses who used the system. Included in the research were 93 (48%) respondents. Also, twelve NIS users were interviewed, using the USE IT model. Results: Nurses express a broad need for information on each patient. Although the history is essential, the information needs are not very specific. They expect complete, correct, up-to-date and accessible information on each patient. The information quality of the NIS is satisfactory, but needs improvement. Since the achieved quality of information depends largely on data entry by the nurses themselves, a controversy exists between the required information quality and the effort needed to accomplish this. Conclusions: The aspect of data entry by the user of the information is not included in the Information Quality literature. To further increase the quality of information, a redesign of both process and system seems necessary, which reduces the information needs of nurses and rewards the nurse for accurate data entry.
['Margreet B. Michel-Verkerke']
Information Quality of a Nursing Information System depends on the nurses: A combined quantitative and qualitative evaluation
318,683
Phone Call Detection Based on Smartphone Sensor Data
['Huiyu Sun', 'Suzanne McIntosh']
Phone Call Detection Based on Smartphone Sensor Data
923,215
In this paper, we consider the problem of estimating the frequency of a sinusoidal signal whose amplitude could be either constant or time-varying. We present a nonlinear least-squares (NLS) approach when the envelope is time-varying. We show that the NLS estimator can be efficiently implemented using a FFT. A statistical analysis shows that the NLS frequency estimator is nearly efficient. The problem of detecting amplitude time variations is next addressed. A statistical test is formulated, based on the statistics of the difference between two frequency estimates. The test is computationally efficient and yields as a by-product consistent frequency estimates under either hypothesis (i.e. constant or time-varying amplitude). Numerical examples are included to show the performance in terms of both estimation and detection.
['Olivier Besson', 'Petre Stoica']
Frequency estimation and detection for sinusoidal signals with arbitrary envelope: a nonlinear least-squares approach
390,289
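A minimal sketch of the constant-amplitude case, where the NLS estimator reduces to the periodogram maximizer and a zero-padded FFT evaluates it on a fine grid. The signal parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_true = 1024, 0.1237              # normalized frequency (cycles/sample)
t = np.arange(n)
env = 1.0 + 0.3 * np.sin(2 * np.pi * 0.002 * t)   # slowly varying amplitude
x = env * np.cos(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(n)

# the constant-amplitude NLS estimator is the periodogram maximizer,
# which a zero-padded FFT evaluates cheaply on a fine frequency grid
nfft = 8 * n
spec = np.abs(np.fft.rfft(x, nfft))
f_hat = np.argmax(spec) / nfft
print(f"true {f_true:.4f}  estimated {f_hat:.4f}")
```

The paper's detection test then compares this estimate with the one obtained under the time-varying-envelope model; a large discrepancy signals amplitude variation.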
It is shown that stability of the celebrated MaxWeight or back pressure policies is a consequence of the following interpretation: either policy is myopic with respect to a surrogate value function of a very special form, in which the "marginal disutility" at a buffer vanishes for vanishingly small buffer population. This observation motivates the h-MaxWeight policy, defined for a wide class of functions h. These policies share many of the attractive properties of the MaxWeight policy: (i) Arrival rate data is not required in the policy, (ii) Under a variety of general conditions, the policy is stabilizing when h is a perturbation of a monotone linear function, a monotone quadratic, or a monotone Lyapunov function for the fluid model, (iii) A perturbation of the relative value function for a workload relaxation gives rise to a myopic policy that is approximately average-cost optimal in heavy traffic, with logarithmic regret. The first results are obtained for a completely general Markovian network model. Asymptotic optimality is established for a Markovian scheduling model with a single bottleneck, and homogeneous servers.
['Sean P. Meyn']
Myopic policies and maxweight policies for stochastic networks
154,044
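The MaxWeight rule itself is one line: serve whatever maximizes backlog times service rate. Below is a toy Python simulation for a single server shared by three buffers; the rates and arrival intensities are made up, and the h-MaxWeight generalization the abstract describes would replace Q by the gradient of a function h.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.zeros(3)                        # buffer backlogs
mu = np.array([1.0, 1.0, 1.0])         # service rates
lam = np.array([0.25, 0.40, 0.20])     # arrival rates (total load 0.85 < 1)

for _ in range(100_000):
    Q += rng.poisson(lam)              # stochastic arrivals
    i = int(np.argmax(Q * mu))         # MaxWeight: largest Q_i * mu_i wins
    Q[i] = max(Q[i] - mu[i], 0.0)      # serve the chosen buffer

print("final backlogs:", Q)            # remain bounded: the policy stabilizes
```

Note that the arrival rates appear nowhere in the decision, only in the simulation; this is the property the abstract highlights.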
A minimum edge coloring of a bipartite graph is a partition of the edges into $\Delta $ matchings, where $\Delta $ is the maximum degree in the graph. Coloring algorithms that run in time $O(\min (m(\log n)^2 ,n^2 \log n))$ are presented. The algorithms rely on an efficient procedure for the special case of $\Delta $ an exact power of two. The coloring algorithms can be used to find maximum cardinality matchings on regular bipartite graphs in the above time bound. An algorithm for coloring multigraphs with large multiplicities is also presented.
['Harold N. Gabow', 'Oded Kariv']
Algorithms for Edge Coloring Bipartite Graphs and Multigraphs
27,738
Based on the idea that computing sources on a network can be integrated into a composite computation with suitable application logic, a new distributed software architecture, IMSA, is proposed. IMSA uses XML to express the logic of atom computation, composite computation and computing sources. The heterogeneity among computing sources is shielded by IMSA, which is computing-transparent, complete, reliable, dynamically configurable, extensible, loosely coupled, and load-balanced. Based on the IMSA structure, we can make good use of network computing ability, construct a new application in a short time, and provide a new method for the integration of enterprises' legacy business systems.
['Ying Li', 'Qing Wu', 'Zhaohui Wu']
IMSA: integrated multi-computing-sources software architecture
269,039
Participatory Mapping for Disaster Preparedness: The Development & Standardization of Animal Evacuation Maps.
['Joanne I. White', 'Leysia Palen']
Participatory Mapping for Disaster Preparedness: The Development & Standardization of Animal Evacuation Maps.
981,123
A Metamodel-driven Architecture for Generating, Populating and Manipulating "Possible Worlds" to Answer Questions.
['Imre Kilián', 'Gábor Alberti']
A Metamodel-driven Architecture for Generating, Populating and Manipulating "Possible Worlds" to Answer Questions.
783,486
High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving large sparse ill-conditioned systems of linear equations in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that were previously considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of electromagnetic methods in geophysics. Highlights: Parallel direct solvers are evaluated on large problems of electromagnetic modeling. Performance and memory requirements are compared on different architectures. Robustness and efficiency of direct solvers for multi-source problems is confirmed. Scalability tests utilizing up to 65,536 cores are presented.
['Vladimir Puzyrev', 'Seid Koric', 'Scott Wilkin']
Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems
650,607
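The multiple right-hand-side structure is exactly where direct solvers shine: factor once, then back-substitute per source. A serial SciPy sketch of that pattern follows; the matrix is a stand-in tridiagonal operator, not an actual EM discretization, and production codes at this scale use distributed solvers such as MUMPS or PARDISO.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 2000
A = sp.diags([-1.0, 2.05, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(A)                                            # expensive: done once
B = np.random.default_rng(0).standard_normal((n, 64))   # 64 source terms
X = lu.solve(B)                                         # cheap per column
print("relative residual:", np.linalg.norm(A @ X - B) / np.linalg.norm(B))
```

An iterative method would instead pay the full solve cost per right-hand side (or require block/recycling variants), which is why the paper finds direct methods competitive for multi-source surveys.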
Autonomous underwater vehicles (AUVs) that rely on dead reckoning suffer from unbounded localization error growth at a rate dependent on the quality (and cost) of the navigational sensors. Many AUVs surface occasionally to get a GPS position update. Alternatively underwater acoustic beacons such as long baseline (LBL) arrays are used for localization, at the cost of substantial deployment effort. The idea of cooperative localization with a few vehicles with high navigation accuracy (beacon vehicles) among a team of AUVs with poor navigational sensors has recently gained interest. Autonomous surface crafts (ASCs) with GPS, or sophisticated AUVs with expensive navigational sensors may play the role of beacon vehicles. Other AUVs are able to measure their range to these acoustically, and use the resulting information for self-localization. Since a single range measurement is insufficient for unambiguous localization, multiple beacon vehicles are usually required. In this paper, we explore the use of a single beacon vehicle to support multiple AUVs. We develop path planning algorithms for the beacon vehicle that take into account and minimize the errors being accumulated by other AUVs. We show that the generated beacon vehicle path enables the other AUVs to get sufficient information to keep their localization errors bounded over time.
['Mandar Chitre']
Path planning for cooperative underwater range-only navigation using a single beacon
85,426
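The core of such cooperative localization is the range-only measurement update. Below is a minimal extended Kalman filter update in Python (2-D, noise-free ranges, a hand-picked beacon path), illustrating why the beacon's path matters: geometric diversity of the ranging positions is what makes the AUV's error shrink.

```python
import numpy as np

def range_update(x, P, b, z, sigma_r=1.0):
    """One EKF update: correct position estimate x (covariance P)
    with a measured range z to a beacon at position b."""
    d = np.linalg.norm(x - b)
    H = (x - b)[None, :] / d              # Jacobian of ||x - b||
    S = H @ P @ H.T + sigma_r**2          # innovation variance
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * (z - d)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2) * 100.0   # vague prior
truth, beacon = np.array([3.0, 4.0]), np.array([10.0, 0.0])
for _ in range(3):                        # ranges from a moving beacon
    beacon = beacon + np.array([-3.0, 2.0])      # the beacon's path matters
    z = np.linalg.norm(truth - beacon)
    x, P = range_update(x, P, beacon, z, sigma_r=0.1)
print("estimate after three range fixes:", np.round(x, 2))
```

If the beacon held station, every update would constrain the AUV along the same circle and the cross-range error would never shrink, which is precisely the degenerate geometry the paper's path planner avoids.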
Objective: The aim of this study was to characterize associations between psychosocial and work organizational risk factors and upper-extremity musculoskeletal symptoms and disorders. Background: Methodological limitations of previous studies of psychosocial and work organizational risk factors and musculoskeletal outcomes have produced inconsistent associations. Method: In this prospective epidemiologic study of 386 workers, questionnaires to assess decision latitude ("control") and psychological job demands ("demand") were administered to study participants and were used to classify them into job strain "quadrants." Measures of job stress and job change were collected during each week of follow-up. Incident hand/arm and neck/shoulder symptoms and disorders were ascertained weekly. Associations between exposure measures and musculoskeletal outcomes were estimated with proportional hazard methods. Results: When compared to the low-demand/high-control job strain referent category, large increases in risk of hand/...
['Fredric Gerr', 'Nathan B. Fethke', 'Dan Anton', 'Linda Merlino', 'John Rosecrance', 'Michele Marcus', 'Michael P. Jones']
A Prospective Study of Musculoskeletal Outcomes Among Manufacturing Workers II. Effects of Psychosocial Stress and Work Organization Factors
118,394
We present a novel approach to automatic metaphor identification, that discovers both metaphorical associations and metaphorical expressions in unrestricted text. Our system first performs hierarchical graph factorization clustering (HGFC) of nouns and then searches the resulting graph for metaphorical connections between concepts. It then makes use of the salient features of the metaphorically connected clusters to identify the actual metaphorical expressions. In contrast to previous work, our method is fully unsupervised. Despite this fact, it operates with an encouraging precision (0.69) and recall (0.61). Our approach is also the first one in NLP to exploit the cognitive findings on the differences in organisation of abstract and concrete concepts in the human brain.
['Ekaterina Shutova', 'Lin Sun']
Unsupervised Metaphor Identification Using Hierarchical Graph Factorization Clustering
78,830
Computing similarity, especially Jaccard Similarity, between two datasets is a fundamental building block in big data analytics, with extensive applications including genome matching, plagiarism detection, social networking, etc. Increasing user privacy concerns over the release of sensitive data have made it desirable and necessary for two users to evaluate Jaccard Similarity over their datasets in a privacy-preserving manner. In this paper, we propose two efficient and secure protocols to compute the Jaccard Similarity of two users' private sets with the help of a not fully trusted server. Specifically, in order to boost efficiency, we leverage the MinHash algorithm on encrypted data, where the output of our protocols is guaranteed to be a close approximation of the exact value. In both protocols, only an approximate similarity result is leaked to the server and users. The first protocol is secure against a semi-honest server, while the second protocol, with a novel consistency-check mechanism, further achieves result verifiability against a malicious server who cheats in the executions. Experimental results show that our first protocol computes an approximate Jaccard Similarity of two billion-element sets within only 6 minutes (under 256-bit security in parallel mode). To the best of our knowledge, our consistency-check mechanism represents the very first work to realize an efficient verification particularly on approximate similarity computation.
['Shuo Qiu', 'Boyang Wang', 'Ming Li', 'Jesse Victors', 'Jiqiang Liu', 'Yanfeng Shi', 'Wei Wang']
Fast, Private and Verifiable: Server-aided Approximate Similarity Computation over Large-Scale Datasets
774,977
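The efficiency trick the protocols build on is standard MinHash, which is easy to show in isolation; this sketch omits the paper's encryption and verification layers entirely.

```python
import random

def minhash_signature(items, num_hashes=128, seed=42):
    """Per-set MinHash sketch: one minimum per salted hash function."""
    rnd = random.Random(seed)
    salts = [rnd.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimate_jaccard(sig_a, sig_b):
    """The fraction of agreeing minima estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set(range(0, 1000))
B = set(range(200, 1200))
exact = len(A & B) / len(A | B)                      # 800/1200 = 0.667
est = estimate_jaccard(minhash_signature(A), minhash_signature(B))
print(f"exact {exact:.3f}  minhash estimate {est:.3f}")
```

Because each party only ever compares fixed-length signatures, the billion-element sets in the experiments reduce to a few hundred integers per set, which is what makes the encrypted computation tractable.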
Probabilistic embedding methods provide a principled way of deriving new spatial representations of discrete objects from human interaction data. The resulting assignment of objects to positions in a continuous, low-dimensional space not only provides a compact and accurate predictive model, but also a compact and flexible representation for understanding the data. In this paper, we demonstrate how probabilistic embedding methods reveal the "taste space" in the recently released Million Musical Tweets Dataset (MMTD), and how it transcends geographic space. In particular, by embedding cities around the world along with preferred artists, we are able to distill information about cultural and geographical differences in listening patterns into spatial representations. These representations yield a similarity metric among city pairs, artist pairs, and city-artist pairs, which can then be used to draw conclusions about the similarities and contrasts between taste space and geographic location.
['Joshua L. Moore', 'Thorsten Joachims', 'Douglas Turnbull']
Taste Space Versus the World: An Embedding Analysis of Listening Habits and Geography
672,031
Classical inconsistency-tolerant query answering relies on selecting maximal components of an ABox/database which are consistent with the ontology. However, some rules in ontologies might be unreliable if they are extracted from ontology learning or written by unskillful knowledge engineers. In this paper we present a framework of handling inconsistent existential rules under stable model semantics, which is defined by a notion called rule repairs to select maximal components of the existential rules. Surprisingly, for R-acyclic existential rules with R-stratified or guarded existential rules with stratified negations, both the data complexity and combined complexity of query answering under the rule repair semantics remain the same as that under the conventional query answering semantics. This leads us to propose several approaches to handle the rule repair semantics by calling answer set programming solvers. An experimental evaluation shows that these approaches have good scalability of query answering under rule repairs on realistic cases.
['Hai Wan', 'Heng Zhang', 'Peng Xiao', 'Haoran Huang', 'Yan Zhang']
Query answering with inconsistent existential rules under stable model semantics
644,418
In this letter, we derive the distribution characteristics of first-order multipath ghosts in a nested multiple-input–multiple-output (MIMO) through-wall radar and evaluate the efficacy of the phase coherence factor (PCF) in ghost suppression. Different from a synthetic aperture radar, the first-order multipath echoes of a nested MIMO through-wall radar generate several ghosts. For example, for a nested MIMO array composed of a compact receiving subarray and $M$ spatially dispersed transmitters, there are $M$ ghosts at the same side of the wall as the array. The $m$ th ghost is supposed to occur near the intersection of the line, connecting the target and the center of the receiving subarray, and the ellipse whose foci are the positions of the target and the $m$ th transmitter. Under the assumption of phase uniform distribution clutter, the PCF can suppress the ghosts up to $-20 \lg(1-\sqrt {(M^{2}-1)/M^{2}})$ dB, which is about 17.46 dB when $M = 2$ .
['Jiangang Liu', 'Lingjiang Kong', 'Xiaobo Yang', 'Qing Huo Liu']
First-Order Multipath Ghosts' Characteristics and Suppression in MIMO Through-Wall Imaging
842,631
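The quoted suppression limit is easy to reproduce numerically; for M = 2 the formula gives the ~17.46 dB stated in the abstract.

```python
import math

# PCF ghost suppression limit for M transmitters:
#   -20 * lg(1 - sqrt((M^2 - 1) / M^2))  [dB]
for M in (2, 3, 4):
    s = -20 * math.log10(1 - math.sqrt((M**2 - 1) / M**2))
    print(f"M = {M}: up to {s:.2f} dB of ghost suppression")
# M = 2 reproduces the ~17.46 dB quoted above; the limit grows with M
```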
The Process Failure Modes and Effects Analysis (PFMEA) concept has been developed based on the success of Failure Modes and Effects Analysis (FMEA) to include a broader analysis team for the realization of a comprehensive analysis in a short time. The most common use of the PFMEA involves manufacturing processes, as they are required to be closely examined against any unnatural deviation in the state of the process for producing products with consistent quality. In a typical FMEA, for each failure mode, three risk factors, severity (S), occurrence (O), and detectability (D), are evaluated, and their multiplication derives the risk priority number (RPN). However, this classical crisp RPN calculation has many shortcomings. This study introduces a fuzzy hybrid approach that allows experts to use linguistic variables for determining S, O, and D for PFMEA by applying fuzzy 'technique for order preference by similarity to ideal solution' (TOPSIS) and fuzzy 'analytical hierarchy process' (AHP). An ap...
['Mehmet Ekmekçioğlu', 'Ahmet Can Kutlu']
A Fuzzy Hybrid Approach for Fuzzy Process FMEA: An Application to a Spindle Manufacturing Process
186,860
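For contrast with the classical crisp RPN (simply S x O x D), the toy Python below assigns triangular fuzzy numbers to linguistic ratings, multiplies them component-wise, and defuzzifies by centroid. The scale values are hypothetical, and the paper's actual machinery (fuzzy AHP weights feeding fuzzy TOPSIS) is substantially richer.

```python
# Linguistic ratings as triangular fuzzy numbers (a, b, c); values invented.
RATINGS = {
    "low":    (1, 2, 4),
    "medium": (3, 5, 7),
    "high":   (6, 8, 10),
}

def fuzzy_rpn(severity, occurrence, detectability):
    s, o, d = (RATINGS[x] for x in (severity, occurrence, detectability))
    tri = tuple(s[i] * o[i] * d[i] for i in range(3))  # fuzzy product (approx.)
    return sum(tri) / 3                                 # centroid defuzzification

print("crisp RPN (5*5*5):", 5 * 5 * 5)
print("fuzzy RPN (medium, medium, medium):",
      round(fuzzy_rpn("medium", "medium", "medium"), 1))
```

Even this toy version shows one advantage over the crisp score: the spread of the triangular numbers lets experts express uncertainty in their ratings rather than committing to a single integer.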
The Short Stories Corpus: Notebook for PAN at CLEF 2015.
['Faisal Alvi', 'Mark Stevenson', 'Paul D. Clough']
The Short Stories Corpus: Notebook for PAN at CLEF 2015.
739,948
On the Equational Theory of Representable Polyadic Equality Algebras
['István Németi', 'Gábor Sági']
On the Equational Theory of Representable Polyadic Equality Algebras
464,159
A perennial problem facing flight simulator designers is how to handle motion system transients generated by washout algorithms intended to restrict the travel of the motion-base hardware. Motion cues in the flight simulator provide opportunities for lead compensation on the part of the pilot and thus one must ensure that other unwanted motion transients generated by the system are not detected. The present study employs typical washout motion transients in an experiment designed to establish the motion levels required to achieve the aforementioned design goals. A set of critical amplitudes for both onset and return motion are determined in a flight simulator environment. It is found that a significant increase in detection levels occurs when the pilot switches from being a pure observer to actively controlling the simulator.
['Al-Amyn Samji', 'Lloyd D. Reid']
The detection of low-amplitude yawing motion transients in a flight simulator
321,010
This paper presents a method for the detection and recognition of traffic signs. We propose a new recognition approach for traffic signs that introduces a Genetic Algorithm (GA). We designed and built a prototype system implemented in C++ with the OpenCV library. The experimental results show that our approach has good prospects for the automatic detection and recognition of traffic signs.
['Mitsuhiro Kobayashi', 'Mitsuru Baba', 'Kozo Ohtani', 'Li Li']
A method for traffic sign detection and recognition based on genetic algorithm
651,351
The CIA World Factbook is a prime example of a curated database – a database that is constructed and maintained with a great deal of human effort in collecting, verifying, and annotating data. Preservation of old versions of the Factbook is important for verification of citations; it is also essential for anyone interested in the history of the data, such as demographic change. Although the Factbook has been published, both physically and electronically, for only the past 30 years, we appear to be in danger of losing this history. This paper investigates the issues involved in capturing the history of an evolving database and its application to the CIA World Factbook. In particular it shows that there is substantial added value to be gained by preserving databases in such a way that questions about change in the data (longitudinal queries) can be readily answered. Within this paper, we describe techniques for recording change in a curated database and novel techniques for querying that change. Using the example of this archived curated database, we discuss the extent to which the accepted practices and terminology of archiving, curation and digital preservation apply to this important class of digital artefacts.
['Peter Buneman', 'Heiko Müller', 'Chris Rusbridge']
Curating the CIA World Factbook
311,536