Dataset columns:
abstract: string, lengths 8 to 10.1k
authors: string, lengths 9 to 1.96k
title: string, lengths 6 to 367
__index_level_0__: int64, values 13 to 1,000k
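The header above is column metadata: four fields with their types and min/max string lengths. As a minimal sketch, the schema can be materialized in pandas; the two rows below are invented placeholders, not records from the dataset:

```python
import pandas as pd

# Two invented placeholder rows mirroring the four columns above:
# abstract (str), authors (str), title (str), __index_level_0__ (int64).
rows = [
    {"abstract": "We propose a toy method.",
     "authors": "['A. Author', 'B. Author']",
     "title": "A Toy Method",
     "__index_level_0__": 914_241},
    {"abstract": "A second toy abstract.",
     "authors": "['C. Author']",
     "title": "Another Toy Method",
     "__index_level_0__": 949_566},
]
df = pd.DataFrame(rows)

# The header's "stringlengths" entries are simply the minimum and maximum
# character counts observed in each string column.
lengths = df["abstract"].str.len()
print(list(df.columns), int(lengths.min()), int(lengths.max()))
```

Note that the authors field is stored as the string form of a list, so downstream code would need to parse it (e.g. with `ast.literal_eval`) before use.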
Development of Training System for Pedaling Skill by Visualizing Muscle Activity Pattern
['Takuhiro Sato', 'Shoma Kushizaki', 'Shimpei Matsumoto', 'Tomoki Kitawaki', 'Tatsushi Tokuyasu']
Development of Training System for Pedaling Skill by Visualizing Muscle Activity Pattern
914,241
In this paper, we propose a new link selection (relay link or direct link) scheme for the full-duplex (FD) two-way (TW) amplify-and-forward (AF) relay system in the presence of a direct link, where the relay link or direct link is adaptively selected to assist the communication between the source and the destination. The system performance in terms of the outage probability of the proposed scheme is investigated. Simulation results show a significant performance gain of the proposed scheme compared to the conventional scheme that does not consider the direct link, especially in the high transmit power regime. In addition, the impacts of the residual self-interference (RSI) incurred by the FD transmission are also revealed. Monte Carlo simulations also verify the correctness of our analytical results.
['Yanwen Mao', 'Cheng Li', 'Zhiyuan Zhang']
Adaptive link selection of full-duplex two-way relaying with direct link
949,566
The ECLIPS project 'Extended collaborative integrated life cycle supply chain planning system' is introduced. The project scope focuses on the development of concepts and technologies in the domain of supply chain planning and management at the product maturity phase. The concerned concepts and technologies are based on integrating analytical and simulation techniques in multi-echelon cyclic planning. Analytical techniques are used under conditions of dynamic and deterministic demand. Simulation techniques provide a test environment for analytical solutions and allow modelling and optimising planning decisions under conditions of demand variability and uncertainty.
['Yuri Merkuryev', 'Galina Merkuryeva', 'Bram Desmet', 'Eric Jacquet-Lagrèze']
Integrating Analytical and Simulation Techniques in Multi-Echelon Cyclic Planning
73,168
A high peak-to-average power ratio (PAPR) is a major shortcoming in multicarrier systems, as it causes nonlinearity in the transmitter, degrading the performance of the system significantly. Partial transmit sequences (PTS) is one of the best methods in reducing PAPR, in which the information-bearing subcarriers are divided into M disjoint subblocks, each controlled by a phase rotation factor which brings PAPR down. Though PAPR reduction by PTS is more effective with more subblocks, there is a corresponding exponential increase in complexity. In this paper, a novel implementation of PTS is presented, in which a dual-layered approach is employed to reduce the complexity.
['Wong Sai Ho', 'A. S. Madhukumar', 'Francois P. S. Chin']
Peak-to-average power reduction using partial transmit sequences: a suboptimal approach based on dual layered phase sequencing
29,673
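The PTS abstract above partitions the subcarriers into M disjoint subblocks, each scaled by a phase rotation factor chosen to reduce PAPR. A minimal numpy sketch with toy parameters (64 QPSK subcarriers, M = 4 interleaved subblocks, exhaustive ±1 phase search; all values are illustrative, not the paper's) shows the selected phase combination can only lower the PAPR, since the unrotated symbol is in the search set:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, M = 64, 4                                        # subcarriers, subblocks
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK symbols

# Partition the frequency-domain symbol into M disjoint (interleaved)
# subblocks and transform each separately; by linearity of the IFFT their
# sum is the ordinary OFDM time-domain symbol.
subblocks = []
for m in range(M):
    Xm = np.zeros(N, dtype=complex)
    Xm[m::M] = X[m::M]
    subblocks.append(np.fft.ifft(Xm))

plain = sum(subblocks)                              # all phase factors = +1

# Exhaustive search over phase factors b_m in {+1, -1}, first factor fixed.
best = min(
    (sum(b * s for b, s in zip((1,) + phases, subblocks))
     for phases in product([1, -1], repeat=M - 1)),
    key=papr_db,
)
print(round(papr_db(plain), 2), "->", round(papr_db(best), 2))
```

The exponential cost the abstract mentions is visible here: the search is over 2^(M-1) phase combinations, which motivates the paper's dual-layered suboptimal approach.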
Conceptual decoding for spoken dialog systems
['Yannick Estève', 'Christian Raymond', 'Frédéric Béchet', 'Renato De Mori']
Conceptual decoding for spoken dialog systems
738,939
The concept of microgrid hierarchical control is presented recently. In this paper, a hierarchical scheme is proposed which includes primary and secondary control levels. The primary level comprises distributed generators (DGs) local controllers. The local controllers mainly consist of power, voltage and current controllers, and virtual impedance control loop. The central secondary controller is designed to manage the compensation of voltage unbalance at the point of common coupling (PCC) in an islanded microgrid. Unbalance compensation is achieved by sending proper control signals to the DGs local controllers. The design procedure of the control system is discussed in detail and the simulation results are presented. The results show the effectiveness of the proposed control structure in compensating the voltage unbalance.
['Mehdi Savaghebi', 'Alireza Jalilian', 'Juan Carlos Vasquez', 'Josep M. Guerrero']
Secondary Control Scheme for Voltage Unbalance Compensation in an Islanded Droop-Controlled Microgrid
378,507
Intelligent supporting techniques for the maintenance of constraint-based configuration systems.
['Florian Reinfrank', 'Gerald Ninaus', 'Franz Wotawa', 'Alexander Felfernig']
Intelligent supporting techniques for the maintenance of constraint-based configuration systems.
986,583
This paper presents a vision-based Lane Departure Warning System (LDWS) and its implementation on a Field Programmable Gate Array (FPGA) device. It is used as a Driver Assistance (DA) system that supports drivers and helps avoid accidents. The FPGA technology has the advantages of high performance for digital image processing and low cost, both of which are requirements of DA systems. The main contributions of this work are threefold: 1) a hardware architecture, which combines a Single Instruction Multiple Data (SIMD) structure and a Single Instruction Single Data (SISD) structure based on FPGA, is implemented. This architecture offers both efficiency and flexibility. Therefore, it can be employed to handle many vision processing tasks in real time; 2) an improved parallel Hough Transform (HT) is introduced. Compared with the traditional HT, we move the origin to the estimated vanishing point, so as to reduce the storage requirements and improve the detection robustness; and 3) a simple and efficient warning strategy is presented which can be implemented on FPGA easily. Experiments illustrate the high performance of the introduced system in various common roadway scenes.
['Erke Shang', 'Jian Li', 'Xiangjing An', 'Hangen He']
A real-time lane departure warning system based on FPGA
395,355
In this paper, we propose a method for estimating the user position where a user is holding a microphone in an indoor environment using digital watermarking for audio signals. The proposed method utilizes detection strengths, which are calculated while detecting spread-spectrum-based watermarks. Taking into account delays and attenuation of the watermarked signals emitted from multiple loudspeakers and other factors, we construct a model of detection strengths. The user position is estimated in real-time using the model. The experimental results indicate that the user positions are estimated with 1.3 m of root mean squared error on average for the case where the user is static. We demonstrate that the proposed method successfully estimates the user position even when the user moves.
['Ryosuke Kaneto', 'Yuta Nakashima', 'Noboru Babaguchi']
Real-Time User Position Estimation in Indoor Environments Using Digital Watermarking for Audio Signals
313,240
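The watermark-based positioning abstract above fits a model of detection strengths and inverts it to estimate the user position. As a hedged sketch of that inversion step, assume a hypothetical inverse-distance strength model and recover the position by grid search; the loudspeaker layout and the model form are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical room: four loudspeakers at known positions (metres) and a
# user position to recover; the 1/distance strength model is an invented
# stand-in for the paper's detection-strength model.
speakers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
true_pos = np.array([2.0, 3.5])

def strengths(pos):
    """Modelled watermark detection strength for each loudspeaker."""
    d = np.linalg.norm(speakers - pos, axis=1)
    return 1.0 / np.maximum(d, 1e-6)

observed = strengths(true_pos)          # noise-free observation for the demo

# Invert the model by grid search: pick the candidate position whose
# modelled strengths best match the observed ones (least squares).
xs = np.linspace(0.0, 6.0, 121)         # 5 cm grid step
grid = np.array([[x, y] for x in xs for y in xs])
errors = [np.sum((strengths(p) - observed) ** 2) for p in grid]
estimate = grid[int(np.argmin(errors))]
print(estimate)
```

With noisy real measurements the residual would not reach zero, which is why the paper reports a root mean squared error (1.3 m on average for a static user) rather than exact recovery.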
In this study, we develop an embodied virtual communication system with a speech-driven nodding response model for the analysis by synthesis of embodied communication. Using the proposed system in embodied virtual communication, we perform experiments and carry out sensory evaluation and voice-motion analysis to demonstrate the effects of nodding responses on a talker's avatar called VirtualActor. The results of the study show that superimposed nodding responses in a virtual space promote communication.
['Yoshihiro Sejima', 'Tomio Watanabe', 'Michiya Yamamoto']
Analysis by Synthesis of Embodied Communication via VirtualActor with a Nodding Response Model
338,906
Crucial data such as product features were obtained from consumer online reviews, and sentiment words were gathered in Resource Description Framework (RDF) form in order to use them in meaningful review categorisation based on the sentiments of the features. The meaningful relationships among these pieces of RDF data are engineered into a Product Review Opinion Ontology (PROO). This serves as background knowledge to learn rule-based sentiments expressed on product features. These semantic rules are learned on both taxonomical and non-taxonomical relations available in the PROO ontology. In order to verify the mined rules, Inductive Logic Programming (ILP) is applied on PROO. The learned ILP rules are found to be among the mined rules. The positively classified features are grouped to justify the goal of ILP for examples which are both complete and consistent. The left-out negative examples are useful in knowing their count at the time of categorisation.
['D. Teja Santosh', 'B. Vishnu Vardhan']
PROO ontology development for learning feature specific sentiment relationship rules on reviews categorisation: a semantic data mining approach
839,020
A novel vibration modelling method based on fuzzy sets is presented in this paper. In this method, firstly the mode shapes of a structure are guessed using experience or the rules that are developed in this research. The guessed mode shapes are referred to as mode shape forms (MSFs). The MSFs are approximate mode shapes which only give the direction of motion of the particles of the elastic body. This qualitative information is expressed by fuzzy sets. The deflections or displacement magnitudes of the MSFs are described by fuzzy linguistic terms such as Zero, Medium and Large. In this respect, natural frequencies and structural dimensions constitute the fuzzy inputs while MSFs are the fuzzy outputs. Fuzzy rules are designed based on MSF rules or guessed mode shapes to relate the inputs to the output. In the second stage, fuzzy representations of MSFs are updated by experimental modal analysis. This modification creates a set of mode shape data. In the final step, neural networks are used as a tool to obtain an accurate version of the mode shape data by learning the target set of the data. The method is extended to evaluate the error when a wrong MSF is assumed for the mode shape. In this case the method finds the correct MSF among available guessed MSFs. A further extension of the method is proposed for cases where there is no suitable guess available for the mode shape. In this situation the "closest" MSF is selected among the available MSFs. The chosen MSF is modified by correcting the fuzzy rules that were used in constructing the fuzzy MSF. Human common sense, heuristic and general knowledge, past experience, and the MSFs developed in this method are capabilities that cannot be provided by existing artificial intelligence systems.
The approach developed in this paper provides additional advantages over the existing modelling approaches by incorporating effective analysis methods such as mixed artificial intelligence and experimental validation, together with human interface/intelligence. As an illustrative example, the result of a clamped-clamped beam is compared with the corresponding mathematical equation of motion. An acceptable distribution of results is obtained from the method developed in this paper.
['Farbod Khoshnoud', 'A. Sadeghi', 'I.I. Esat', 'C.E. Ventura', 'C.W. de Silva']
Structural Vibration Modeling Using a Neuro-Fuzzy Approach
89,678
We study the impact of physician workload on hospital reimbursement utilizing a detailed data set from the trauma department of a major urban hospital. We find that the proportion of patients assigned a “high-severity” status for reimbursement purposes, which maps, on average, to a 47.8% higher payment for the hospital, is substantially reduced as the workload of the discharging physician increases. This effect persists after we control for a number of systematic differences in patient characteristics, condition, and time of discharge. Furthermore, we show that it is unlikely to be caused by selection bias or endogeneity in either discharge timing or allocation of discharges to physicians. We attribute this phenomenon to a workload-induced reduction in diligence of paperwork execution. We estimate the associated monetary loss to be approximately 1.1% (95% confidence interval, 0.4%--1.9%) of the department's annual revenue.
['Adam C. Powell', 'Sergei Savin', 'Nicos Savva']
Physician Workload and Hospital Reimbursement: Overworked Physicians Generate Less Revenue per Patient
52,024
In this paper, we study the capacity-achieving input covariance matrices for the multiuser multiple-input multiple-output (MIMO) uplink channel under jointly-correlated Rician fading when perfect channel state information (CSI) is known at the receiver, or CSIR while only statistical CSI at the transmitter, or CSIT, is available. The jointly-correlated MIMO channel (or the Weichselberger model) accounts for the correlation at two link ends and is shown to be highly accurate to model real channels. Classically, numerical techniques together with Monte-Carlo methods (named stochastic programming) are used to resolve the problem concerned but at a high computational cost. To tackle this, we derive the asymptotic sum-rate of the multiuser (MU) MIMO uplink channel in the large-system regime where the numbers of antennas at the transmitters and the receiver go to infinity with constant ratios. Several insights are gained from the analytic asymptotic sum-rate expression, based on which an efficient optimization algorithm is further proposed to obtain the capacity-achieving input covariance matrices. Simulation results demonstrate that even for a moderate number of antennas at each link, the new approach provides indistinguishable results as those obtained by the complex stochastic programming approach.
['Chao-Kai Wen', 'Shi Jin', 'Kai-Kit Wong']
On the Sum-Rate of Multiuser MIMO Uplink Channels with Jointly-Correlated Rician Fading
360,866
Dynamic Ambulance Management (DAM) is generally believed to provide means to enhance the response-time performance of emergency medical service providers. The implementation of DAM algorithms leads to additional movements of ambulance vehicles compared to the reactive paradigm, where ambulances depart from the base station when an incident is reported. In practice, proactive relocations are only acceptable when the number of additional movements is limited. Motivated by this trade-off, we study the effect of the number of relocations on the response-time performance. We formulate the relocations from one configuration to a target configuration by the Linear Bottleneck Assignment Problem, so as to provide the quickest way to transition to the target configuration. Moreover, the performance is measured by a general penalty function, assigning to each possible response time a certain penalty. We extensively validate the effectiveness of relocations for a wide variety of realistic scenarios, including a day and night scenario in a critically and realistically loaded system. The results consistently show that already a small number of relocations lead to near-optimal performance, which is important for the implementation of DAM algorithms in practice.
['Thije van Barneveld', 'Sandjai Bhulai', 'Rob van der Mei']
The effect of ambulance relocations on the performance of ambulance service providers
584,386
The distributed monitoring problem refers to the placement and configuration of passive monitoring points to jointly realize a task of monitoring traffic flows. Given a monitoring task, the objective consists in minimizing the total monitoring cost to realize this task. We formulate this problem as a mixed-integer program. This formulation can also be dualized to determine the gain obtained when varying the number of monitoring points (i.e., the installation cost) and the fraction of monitored traffic (i.e., the configuration cost). As traffic flows can follow different paths depending on the routing strategy, we compare the resulting cost and gain when they are routed along the min-cost path, the paths obtained by solving the min-cost multicommodity flow and the multicommodity capacity network design problem.
['Dimitri Papadimitriou', 'Bernard Fortz']
Distributed Monitoring Problem
767,787
In this paper, we extend the risk zone concept by creating the Generalized Risk Zone. The Generalized Risk Zone is a model-independent scheme to select key observations in a sample set. The observations belonging to the Generalized Risk Zone have shown comparable, in some experiments even better, classification performance when compared to the use of the whole sample. The main tool that allows this extension is the Cauchy-Schwarz divergence, used as a measure of dissimilarity between probability densities. To overcome the setback concerning pdf estimation, we used the ideas provided by Information Theoretic Learning, allowing the calculation to be performed on the available observations only. We used the proposed methodology with Learning Vector Quantization, feedforward Neural Networks, Support Vector Machines, and Nearest Neighbors.
['Rodrigo T. Peres', 'Carlos E. Pedreira']
Generalized Risk Zone: Selecting Observations for Classification
370,163
The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is "noisier" than the channel from sender to receiver. It has been the subject of decades of work in the information and coding (I&C) community. This paper bridges the gap between this body of work and modern cryptography with contributions along two fronts, namely metrics (definitions) of security, and schemes. We explain that the metric currently in use is weak and insufficient to guarantee security of applications and propose two replacements. One, that we call mis-security, is a mutual-information based metric in the I&C style. The other, semantic security, adapts to this setting a cryptographic metric that, in the cryptography community, has been vetted by decades of evaluation and endorsed as the target for standards and implementations. We show that they are equivalent (any scheme secure under one is secure under the other), thereby connecting two fundamentally different ways of defining security and providing a strong, unified and well-founded target for designs. Moving on to schemes, results from the wiretap community are mostly non-constructive, proving the existence of schemes without necessarily yielding ones that are explicit, let alone efficient, and only meeting their weak notion of security. We apply cryptographic methods based on extractors to produce explicit, polynomial-time and even practical encryption schemes that meet our new and stronger security target.
['Mihir Bellare', 'Stefano Tessaro', 'Alexander Vardy']
A Cryptographic Treatment of the Wiretap Channel
217,282
In this note, we define strong and weak common quadratic Lyapunov functions (CQLFs) for sets of linear time-invariant (LTI) systems. We show that the simultaneous existence of a weak CQLF of a special form, and the nonexistence of a strong CQLF, for a pair of LTI systems, is characterized by easily verifiable algebraic conditions. These conditions are found to play an important role in proving the existence of strong CQLFs for general LTI systems.
['Robert Shorten', 'Kumpati S. Narendra', 'Oliver Mason']
A result on common quadratic Lyapunov functions
209,451
Gisbuilder: a Framework for the Semi-Automatic Generation of Web-based Geographic Information Systems.
['Nieves R. Brisaboa', 'Alejandro Cortiñas', 'Miguel Rodríguez Luaces', 'Oscar Pedreira']
Gisbuilder: a Framework for the Semi-Automatic Generation of Web-based Geographic Information Systems.
943,725
Motion Detection using a Model of Visual Attention.
['Shijie Zhang', 'F. W. M. Stentiford']
Motion Detection using a Model of Visual Attention.
689,036
This conference paper was presented at the 10th IEEE International Conference on Nano/Micro Engineered and Molecular Systems (NEMS 2015), Xi'an, China, 7-11 April 2015 [© 2015 Institute of Electrical and Electronics Engineers Inc.]. The definitive version of the paper is available at: https://doi.org/10.1109/NEMS.2015.7147445
['Shifur Rahman Shakil', 'Fatema Tuz Zohra', 'Parna Pramanik', 'Raihanul Islam Tushar', 'Saha Atanu Kumar', 'Md. Belal Hossain Bhuian']
Vapor adsorption limitation of graphene nanoribbons in quasi conductance increment: A NEGF approach
633,289
We describe a common problem in the curation and analysis of archaeological materials: restoring the orientation and dimensions of damaged objects. Our focus is a common architectural type in Mediterranean sites, the Doric column drum, which we investigate at one of the earliest Doric temples in the Greek world, the Hera temple at Olympia. The 3D modeling and analysis of this building by the Digital Architecture Project since 2013 has revealed new insights into the construction history of its stone colonnades. This paper concerns the analysis of the 3D models of the in situ material, using the almost 100 fallen drums and capitals to reconstruct the colonnade digitally. In order to accomplish this, we propose two novel methods for training the machine to estimate the dimensions of a fragmentary column drum. One approach is a modification of ICP, where the fragment is compared to an ideal model of an intact drum, which is resized iteratively until concluding with a satisfactory fit. Another approach recasts the scan data into polar coordinates and uses RANSAC to identify the exterior profiles of the piece and remove points likely to belong to damaged areas. The filtered points are then examined by the algorithm to estimate the radii and taper of the drum. Besides saving a great deal of time in the field, these methods are also accurate to within 0.2% of the total radius for well-preserved material, and 1% for even the most fragmentary drums at Olympia. These data have allowed the digital reconstruction of 80% of the displaced drums and all of the capitals from the temple. Our algorithms can be used to measure any fluted column drums, and we discuss the potential value of our approach for other categories of archaeological artifacts.
['Philip Sapirstein', 'Eric T. Psota']
Pattern Matching and the Analysis of Damaged Ancient Objects: The Case of the Column Drum
892,907
A scalable blind source separation paradigm aimed at sensor networks is described. The approach facilitates an unlimited number of sensors and sources and does not require a fusion centre. It is based on a so-called ownership principle, where each network node aims to extract (own) a source signal that is not already extracted (owned) by another network node. Nodes that own a source signal broadcast that signal to user nodes outside the network. Nodes that do not currently own a source signal do not transmit information and can be active intermittently. A natural application of the method is a distributed microphone network in a multi-talker environment, with hearing aids or telephone interface devices as user nodes. Such a network can stretch across buildings or neighbourhoods. Simulations using independent component analysis (ICA) indicate the validity of the principles of the method.
['Yusuke Hioka', 'W. Bastiaan Kleijn']
Distributed blind source separation with an application to audio signals
291,090
The problem of selecting a portfolio has been largely faced in terms of minimizing the risk, given the return. While the complexity of the quadratic programming model due to Markowitz has been overcome by the recent progress in algorithmic research, the introduction of linear risk functions has given rise to the interest in solving portfolio selection problems with real constraints. In this paper we deal with the portfolio problem with minimum transaction lots. We show that in this case the problem of finding a feasible solution is, independently of the risk function, NP-complete. Moreover, given the mixed integer linear model, new heuristics are proposed which starting from the solution of the relaxed problem allow to find a solution close to the optimal one. The algorithms are based on the construction of mixed integer subproblems (using only a part of the securities available) formulated using the information obtained from the solution of the relaxed problem. The heuristics have been tested with respect to two disjoint time periods, using real data from the Milan Stock Exchange.
['Renata Mansini', 'Maria Grazia Speranza']
Heuristic algorithms for the portfolio selection problem with minimum transaction lots
261,941
Abstract. We report the results of a preliminary study testing the effect of participants' mood rating on visual motor performance using a haptic device to manipulate a cartoonish human body. Our results suggest that moods involving high arousal (e.g. happiness) produce larger movements whereas moods involving low arousal (e.g. sadness) produce slower speed of performance. Our results are used for the development of a new haptic virtual reality application that we briefly present here. This application is intended to create a more interactive and motivational environment to treat body image issues and for emotional communication.

Keywords. Haptics, Virtual reality, Mood, Body image.

Introduction. Interpersonal touch serves several adaptive functions such as soothing, signaling safety, and reinforcing reciprocity, and contributes to cognitive and socioemotional development [1]. Interpersonal touch elicits and modulates human emotions and can convey immediacy and produce an effect more powerful than language (for a review see Gallace and Spence [2]). Despite its importance, Gallace and Spence [2] pointed out the fact that '
['Line Tremblay', 'Stéphane Bouchard', 'Brahim Chebbi', 'Lai Wei', 'Johana Monthuy-Blanc', 'Dominic Boulanger']
The development of a haptic virtual reality environment to study body image and affect.
836,760
Augmented immersed interface methods have been developed recently for interface problems and problems on irregular domains including CFD applications with free boundaries and moving interfaces. In an augmented method, one or several augmented variables are introduced along the interface or boundary so that one can get efficient discretizations. The augmented variables should be chosen such that the interface or boundary conditions are satisfied. The key to the success of the augmented methods often relies on the interpolation scheme to couple the augmented variables with the governing differential equations through the interface or boundary conditions. This has been done using a least squares interpolation (under-determined) for which the singular value decomposition (SVD) is used to solve for the interpolation coefficients. In this paper, based on properties of the finite element method, a new augmented immersed finite element method (IFEM) that does not need the interpolations is proposed for elliptic interface problems that have a piecewise constant coefficient. Thus the new augmented method is more efficient and simple than the old one that uses interpolations. The method then is extended to Poisson equations on irregular domains with a Dirichlet boundary condition. Numerical experiments with arbitrary interfaces/irregular domains and large jump ratios are provided to demonstrate the accuracy and the efficiency of the new augmented methods. Numerical results also show that the number of GMRES iterations is independent of the mesh size and nearly independent of the jump in the coefficient.
['Haifeng Ji', 'Jinru Chen', 'Zhilin Li']
A new augmented immersed finite element method without using SVD interpolations
606,928
An image may be decomposed as a difference between an image of peaks and an image of wells. This decomposition depends upon the point of view, an arbitrary set from where the image is considered: a peak appears as a peak if it is impossible to reach it starting from any position in the point of view without climbing. A well cannot be reached without descending. To any particular point of view corresponds a different decomposition. The decomposition is reversible. If one applies a morphological operator to the peaks and wells component before applying the inverse transform, one gets a new, transformed image.
['Fernand Meyer']
Image decompositions and transformations as peaks and wells
591,102
An Executable Specification for SPARQL
['Mihaela A. Bornea', 'Julian Dolby', 'Achille Fokoue', 'Anastasios Kementsietsidis', 'Kavitha Srinivas', 'Mandana Vaziri']
An Executable Specification for SPARQL
962,801
We use a virtual prototyping technique to develop a 3D model for an electrostatically suspended rotor micro gyroscope system according to its actual mechanical structure and material properties. System-level dynamic simulation results obtained from the established virtual prototyping model provide necessary reference and guidance for micro gyroscope system control. Various PID control methods used to realize rotor initial levitation with different control parameters are evaluated and validated by the analytical model before application. The output motion characteristic curves including force, velocity and displacement of the rotor are analyzed. Based on the simulation results, we find a suitable strategy to realize rotor levitation and obtain superior motion performance. The displacement of the rotor in the Z direction measured in a real working environment shows that the PID control method verified by the virtual prototyping simulation is workable. Rapid initial levitation of the rotor provides a prerequisite for the follow-up rotating and torque-exerting control.
['Dangdang Shao', 'Wenyuan Chen', 'Weiping Zhang', 'Feng Cui', 'Qijun Xiao']
Virtual prototyping simulation for electrostatically suspended rotor micro gyroscope initial levitation
135,505
Adaptive constraint propagation has recently received considerable attention. It allows a constraint solver to exploit various levels of propagation during search, and in many cases it shows better performance than static/predefined propagation. The crucial point is to make adaptive constraint propagation automatic, so that no expert knowledge or parameter specification is required. In this work, we propose a simple learning technique, based on multi-armed bandits, that automatically selects among several levels of propagation during search. Our technique enables the combination of any number of levels of propagation whereas existing techniques are only defined for pairs. An experimental evaluation demonstrates that the proposed technique results in a more efficient and stable solver.
['Amine Balafrej', 'Christian Bessiere', 'Anastasia Paparrizou']
Multi-armed bandits for adaptive constraint propagation
620,864
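The bandit-based selection described above can be sketched with UCB1 choosing among a few candidate propagation levels. The level names (FC, maxRPC, SAC) are standard propagation levels, but the Bernoulli reward means below are hypothetical stand-ins for solver feedback, not the paper's measurements:

```python
import math
import random

random.seed(1)

# Arms are candidate propagation levels; the reward means are invented
# proxies for "this propagation call paid off" feedback during search.
means = {"FC": 0.4, "maxRPC": 0.7, "SAC": 0.5}
arms = list(means)
counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

def ucb1(t):
    """Select the arm maximising average reward plus exploration bonus."""
    for a in arms:                       # play every arm once first
        if counts[a] == 0:
            return a
    return max(arms, key=lambda a: totals[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 5001):
    a = ucb1(t)
    totals[a] += 1.0 if random.random() < means[a] else 0.0
    counts[a] += 1

print({a: counts[a] for a in arms})
```

After a few thousand simulated steps the bandit concentrates its pulls on the level with the highest mean reward while still occasionally exploring the others, which is the behaviour the abstract relies on to adapt propagation without parameter tuning.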
The use of wireless implant technology requires correct delivery of the vital physiological signs of the patient along with the energy management in power-constrained devices. Toward these goals, we present an augmentation protocol for the physical layer of the medical implant communications service (MICS) with focus on the energy efficiency of deployed devices over the MICS frequency band. The present protocol uses the rateless code with the frequency-shift keying (FSK) modulation scheme to overcome the reliability and power cost concerns in tiny implantable sensors due to the considerable attenuation of propagated signals across the human body. In addition, the protocol allows a fast start-up time for the transceiver circuitry. The main advantage of using rateless codes is to provide an inherent adaptive duty cycling for power management, due to the flexibility of the rateless code rate. Analytical results demonstrate that an 80% energy saving is achievable with the proposed protocol when compared to the IEEE 802.15.4 physical layer standard with the same structure used for wireless sensor networks. Numerical results show that the optimized rateless coded FSK is more energy efficient than that of the uncoded FSK scheme for deep tissue (e.g., digestive endoscopy) applications, where the optimization is performed over modulation and coding parameters.
['Jamshid Abouei', 'J David Brown', 'Konstantinos N. Plataniotis', 'Subbarayan Pasupathy']
Energy Efficiency and Reliability in Wireless Biomedical Implant Systems
535,724
A study on textual features for medical records classification.
['Anita Alicante', 'Flora Amato', 'Giovanni Cozzolino', 'Francesco Gargiulo', 'Nicla Improda', 'Antonino Mazzeo']
A study on textual features for medical records classification.
817,481
Knowledge-based Artificial Intelligence (AI) systems have incorporated mechanisms which resolve uncertainty. This uncertainty stems from incompleteness and noise in input data and from errorful processing.

Resolution of uncertainty is also an important issue in the design of distributed processing systems. Uncertainty is introduced in these systems from the use of incomplete and inconsistent local databases and from errorful communication channels.

The mechanisms used in knowledge-based AI systems provide a model for the design of distributed algorithms which can resolve uncertainty as an integral part of their problem-solving activity. Use of such algorithms in a distributed processing system makes possible a reduction in the amount of inter-node communication required to resolve uncertainty. This reduction in communication requirements allows effective distribution of applications that are impractical using current approaches to the design of distributed algorithms.
['Victor R. Lesser', 'Daniel D. Corkill']
The application of artificial intelligence techniques to cooperative distributed processing
571,704
The current issue's new challenge involves the development of elegant recursive formulas. Recursive formulation requires the specification of a task instance by smaller task instances. Suitable and insightful points of view may yield elegant and concise specifications. The challenge requires such points of view, in three levels of difficulty.
['David Ginat']
Domino arrangements
708,302
The number of institutions of higher education that grant credits through distance learning using WBT has increased recently. In these settings, authentication by an (ID, password) pair is common. However, this authentication model cannot prevent identity theft effectively. In this paper, we propose a new authentication model that solves this problem by using cellular phones.
['Hideyuki Takamizawa', 'Kenji Kaijiri']
Reliable Authentication Method by Using Cellular Phones in WBT
74,612
Automated Visualization Support for Linked Research Data.
['Belgin Mutlu', 'Patrick Höfler', 'Vedran Sabol', 'Gerwald Tschinkel', 'Michael Granitzer']
Automated Visualization Support for Linked Research Data.
801,856
Generalized secret sharing is a method of constructing secret sharing from the perspective of access structure. In this paper, we propose a novel solution for achieving generalized secret sharing with linear hierarchical secrets. We use a matrix to model the relationships of the access structure and transfer the matrix to modular arithmetic, which is calculated by the Chinese Remainder Theorem. The participants in the corresponding access structures can cooperate with each other to produce secrets in monotonous levels. We prove that shared secrets can be efficiently reconstructed only by the qualified subsets of participants; unqualified participants cannot reconstruct the corresponding shared secret.
['Xi Chen', 'Yun Liu', 'Chin-Chen Chang', 'Cheng Guo']
Generalized Secret Sharing with Linear Hierarchical Secrets
748,673
On Stochastic Broadcast Control of Swarms
['Ilana Segall', 'Alfred M. Bruckstein']
On Stochastic Broadcast Control of Swarms
876,430
Third Generation Partnership Project (3GPP) specification TR 25.950 proposed high-speed downlink packet access for the Universal Mobile Telecommunication System (UMTS). In this mechanism, an active set of cells is defined for every user equipment (UE) communication session. The cell with the best wireless link quality (called the serving cell) in the active set is selected for communication with the UE. When the wireless link quality of the old serving cell degrades below some threshold, a new serving cell in the active set is selected to continue the communication session. Our previous work proposed a high-speed downlink packet access (HSDPA) overflow control scheme with four frame synchronization algorithms to switch the serving cell, and formally proved the correctness of the scheme. We propose an analytic model to investigate the performance of these frame synchronization algorithms, and show how the user movement patterns affect the control message delivery costs of these algorithms.
['Phone Lin', 'Yi-Bing Lin', 'Imrich Chlamtac']
Modeling frame synchronization for UMTS high-speed downlink packet access
197,771
Airborne scanning laser altimetry offers the potential for extracting high-resolution vegetation structure characteristics for monitoring and modeling the land surface. A unique dataset is used to study the sensitivity of laser interception profiles and laser-derived leaf area index (LAI) to assumptions about the surface structure and the measurement process. To simulate laser interception, one- and three-dimensional (3-D) vegetation structure models have been developed for maize and sunflower crops. Over sunflowers, a simple regression technique has been developed to extract laser-derived LAI, which accounts for measurement and model biases. Over maize, a 3-D structure/interception model that accounts for the effects of the laser inclination angle and detection threshold has enabled the fraction of radiation reaching the ground surface to be modelled to within 0.5% of the observed fraction. Good agreement was found between modelled and measured profiles of laser interception with a vertical resolution of 10 cm.
['Caroline J. Houldcroft', 'Claire Campbell', 'Ian J. Davenport', 'Robert J. Gurney', 'Nicholas Holden']
Measurement of canopy geometry characteristics using LiDAR laser altimetry: a feasibility study
263,203
Leaf area index (LAI) and plant area index (PAI) are common and important biophysical parameters used to estimate agronomical variables such as canopy growth, light interception and water requirements of plants and trees. LAI can be either measured directly using destructive methods or indirectly using dedicated and expensive instrumentation, both of which require a high level of know-how to operate equipment, handle data and interpret results. Recently, a novel smartphone and tablet PC application, VitiCanopy, has been developed by a group of researchers from the University of Adelaide and the University of Melbourne, to estimate grapevine canopy size (LAI and PAI), canopy porosity, canopy cover and clumping index. VitiCanopy uses the front in-built camera and GPS capabilities of smartphones and tablet PCs to automatically implement image analysis algorithms on upward-looking digital images of canopies and calculates relevant canopy architecture parameters. Results from the use of VitiCanopy on grapevines correlated well with traditional methods to measure/estimate LAI and PAI. Like other indirect methods, VitiCanopy does not distinguish between leaf and non-leaf material but it was demonstrated that the non-leaf material could be extracted from the results, if needed, to increase accuracy. VitiCanopy is an accurate, user-friendly and free alternative to current techniques used by scientists and viticultural practitioners to assess the dynamics of LAI, PAI and canopy architecture in vineyards, and has the potential to be adapted for use on other plants.
['Roberta De Bei', 'Sigfredo Fuentes', 'Matthew Gilliham', 'Steve Tyerman', 'Everard Edwards', 'Nicolò Bianchini', 'Jason Smith', 'Cassandra Collins']
VitiCanopy: A Free Computer App to Estimate Canopy Vigor and Porosity for Grapevine
709,782
Purpose

The anatomical anomaly of the number of vertebral bones is one of the major anomalies in the human body, which can cause confusion of the spinal level in, for example, surgery. The aim of this study is to develop an automatic detection system for this type of anomaly.
['Shouhei Hanaoka', 'Yoshiyasu Nakano', 'Mitsutaka Nemoto', 'Yukihiro Nomura', 'Tomomi Takenaga', 'Soichiro Miki', 'Takeharu Yoshikawa', 'Naoto Hayashi', 'Yoshitaka Masutani', 'Akinobu Shimizu']
Automatic detection of vertebral number abnormalities in body CT images.
975,762
The increasing tendency toward the extreme network densification has motivated network operators to leverage spectrum across multiple radio access networks, in order to significantly enhance spectral efficiency, quality of service, as well as network capacity. There is therefore a substantial need to develop innovative network selection mechanisms that consider energy efficiency while meeting application quality requirements. In this context, in accordance with the new trends foreseen for 5G systems, we propose a user-centric scheme for efficient network selection. Our solution accounts for network characteristics and application requirements, as well as for different user objectives by assigning them different weights and dynamically updating them. Numerical results show the efficiency of the proposed solution and its ability to grasp the conflicting nature of users' objectives while achieving an excellent level of fairness among them.
['Alaa Awad', 'Amr Mohamed', 'Carla Fabiana Chiasserini']
Dynamic Network Selection in Heterogeneous Wireless Networks: A User-centric Scheme for Improved Delivery
964,673
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a regression attack, has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic, and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
['Xiao Bai Li', 'Sumit Sarkar']
Digression and value concatenation to enable privacy-preserving regression
227,598
"Quantum" data structures for the synthesis of digital systems are proposed. They are based on transactions between addressable memory components to implement any functionality. A new approach to logic function minimization for the synthesis of digital systems is proposed, which applies a vector (quantum) form of description to logic and sequential structures implemented in memory elements. This approach differs remarkably from the common synthesis theory of discrete devices based on truth tables of components. It is based on the opportunity of applying quantum or qubit data structures [1-5] in modern computers for unary coding of the states of input, internal and output variables, and also on the technology of implementing qubit vectors in FPGA memory elements, which realize combinational and sequential primitives. The use of memory-only quantum models for describing digital components in computer system design would allow us to increase yield, enhance the reliability of computers, make the design and production of devices cheaper, and also provide human-free repair in remote online mode.
['Wajeb Gharibi', 'Eugenia Litvinova', 'Vladimir Hahanov', 'Ivan Hahanov']
«Quantum» structures for digital systems synthesis
818,410
The problem of dealing with incomplete information in object-oriented data models (OODMs) is addressed. A method to compute default values for unknown objects' attributes is proposed, based both on the association of typical values with the attributes in the intensional definition of a class and on the application of a prioritized aggregation operator to combine typical values appearing in an inheritance structure. This method can also be applied to refine vague attribute values expressed by means of fuzzy sets interpreted as possibility distributions. A new interpretation of partial inheritance in this context is proposed, introducing the concept of "partial overriding" of typical values.
['Gabriella Pasi', 'Ronald R. Yager']
Calculating attribute values using inheritance structures in fuzzy object-oriented data models
693,196
Let C be a binary linear block code of length n, dimension k and minimum Hamming distance d over GF(2)^n. Let d⊥ denote the minimum Hamming distance of the dual code of C over GF(2)^n. Let e: GF(2)^n → {-1,1}^n be the component-wise mapping e(v_i) := (-1)^{v_i}, for v = (v_1, v_2, ..., v_n) ∈ GF(2)^n. Finally, for p ≤ n, let Φ_C be a p × n random matrix whose rows are obtained by mapping a uniformly drawn set of size p of the codewords of C under e. It is shown that for d⊥ large enough and y := p/n ∈ (0,1) fixed, as n → ∞ the empirical spectral distribution of the Gram matrix of (1/√n)Φ_C resembles that of a random i.i.d. Rademacher matrix (i.e., the Marchenko-Pastur distribution). Moreover, an explicit asymptotic uniform bound on the distance of the empirical spectral distribution of the Gram matrix of (1/√n)Φ_C to the Marchenko-Pastur distribution as a function of y and d⊥ is presented.
['Behtash Babadi', 'Vahid Tarokh']
Spectral Distribution of Random Matrices From Binary Linear Block Codes
194,639
S-adenosyl-l-homocysteine hydrolase of Plasmodium falciparum (PfSAHH) has been reported as a potential drug target against malaria. A series of aristeromycin derivatives and analogs were designed and tested for inhibition of PfSAHH. 2-Fluoroaristeromycin has been reported as a potential inhibitor of PfSAHH. Here, we have performed the molecular dynamics simulation study of 2-Fluoroaristeromycin with PfSAHH with 15-ns simulation time to evaluate the dynamic perturbation of inhibitor in the binding site of PfSAHH in docked complex. This indicates that the complex structure of PfSAHH-2-Fluoroaristeromycin is stable after 10 ns of simulation. MD results indicate that Leu53, His54, Thr56, Glu58, Cys59, Asp134, Glu200, Lys230, Leu389, Leu392, Gly397, Hip398, Met403, and Phe407 are the key residues in the binding pocket of PfSAHH that interacts with the inhibitor 2-Fluoroaristeromycin. Earlier studies have reported Cys59 of PfSAHH as a selective residue to design potential and specific inhibitor of PfSAHH. Simulation study also indicates the role of Cys59 in binding interaction with inhibitor. The MD simulation of PfSAHH-2-Fluoroaristeromycin complex reveals the stable nature of docking interaction. The result provides a set of guidelines for the rational design of potential inhibitors of PfSAHH.
['Dev Bukhsh Singh', 'Seema Dwivedi']
Docking and molecular dynamics simulation study of inhibitor 2-Fluoroaristeromycin with anti-malarial drug target PfSAHH
790,731
When designing a distributed system, good practices like using modular architectures or applying design patterns are always desirable, but there are relevant aspects that may initially go unnoticed even if we carefully approach the task by the book. Among them, there are a number of decisions to be taken about the specifics of the communications between system nodes: the format of the messages to be sent, the desired/demanded features of the network (latency, bandwidth...), etc. In particular, one of the most common problems in distributed systems design and implementation is the definition of a good approach to node failure or netsplits management. In fact, these are concerns that, in many cases, arise once the system is already at deployment stage. Different contingency mechanisms can be proposed to solve this kind of problems, and they vary greatly from one another: choosing which and how to implement them depends not only on the technology used, but also on the communications network reliability, or even the hardware where the system will be running on. In this paper we present ADVERTISE, a distributed system for advertisement transmission to on-customer-home set-top boxes (STBs) over a Digital TV network (iDTV) of a cable operator. We use this system as a case study to explain how we addressed the aforementioned problems from a declarative point of view.
['Macías López', 'Laura M. Castro', 'David Cabrero']
Declarative distributed advertisement system for iDTV: an industrial experience
253,367
All-optical packet switching (AOPS) technology is essential to fully utilize the tremendous bandwidth provided by advanced optical communication techniques through forwarding packets in optical domain for the next generation network. However, long packet headers and other complex operations such as table lookup and packet header re-writing still have to be processed electronically for lack of cost-effective optical processing techniques. This not only increases system complexity but also limits packet forwarding speed due to optical-electronic-optical conversion. Lots of work of improving optical processing techniques to realize AOPS is reported in the literature. Differently, this paper proposes a new networking structure to facilitate AOPS realization and support various existing networks through simplifying networking operations. This structure only requires an AOPS node to process a short packet header to forward packets across it with neither table lookup nor header re-writing. Furthermore, it moves high layer addressing issues from packet forwarding mechanisms of routers. Consequently, any changes in addressing schemes such as address space extension do not require changes in the AOPS nodes. It can also support both connection-oriented and connectionless services to carry various types of traffic such as ATM and IP traffic. This structure is mainly based on the hierarchical source routing approach. The analytical results show that average packet header sizes are still acceptable even for long paths consisting of many nodes each of which has a large number of output ports.
['Shengming Jiang']
An addressing independent networking structure favorable for all-optical packet switching
427,335
The present paper deals with the land cover classification of high resolution Quickbird images using the texture feature analysis. The study area covers the wider region of the urbanized environment of Chania, Greece. Different textural features including Entropy and Asm (angular second moment) were extracted based on GLCM (Grey Level Co-occurrence Matrix) texture feature and used as the distinct feature value in classification procedures. The classification was performed on the texture image that was produced by the synthesis of the original image with vegetation index BRI (band ratio index) extracted from the original datasets. Results indicate that the proposed approach brings significant improvement of the classification rate based on the different texture feature images of various bands, allowing a better discrimination and mapping of mixed land cover types.
['Guangrong Shen', 'Apostolos Sarris']
Application of Texture Analysis in Land Cover Classification of High Resolution Image
530,287
Detection of parked vehicles from a radar based occupancy grid
['Renaud Dube', 'Markus Hahn', 'Markus Schütz', 'Jurgen Dickmann', 'Denis Gingras']
Detection of parked vehicles from a radar based occupancy grid
408,092
Nowadays, people are overwhelmed with multiple tasks and responsibilities, resulting in increasing stress level. At the same time, it becomes harder to find time for self-reflection and diagnostics of problems that can be source of stress. In this paper, we propose a tool that supports a person in self-reflection by providing views on life events in their relation to person's well-being in a concise and intuitive form. The tool, called LifelogExplorer, takes sensor data (like skin conductance and accelerometer measurements) and data obtained from digital sources (like personal calendars) as input and generates views on this data which are comprehensible and meaningful for the user due to filtering and aggregation options which help to cope with the data explosion. We evaluate our approach on the data collected from two case studies focused on addressing stress at work: 1) with academic staff of a university, and 2) with teachers from a vocational school.
['Rafal Kocielnik', 'Fabrizio Maria Maggi', 'Natalia Sidorova']
Enabling self-reflection with LifelogExplorer: Generating simple views from complex data
519,932
Motivation: Cysteine residues have particular structural and functional relevance in proteins because of their ability to form covalent disulfide bonds. Bioinformatics tools that can accurately predict cysteine bonding states are already available, whereas it remains challenging to infer the disulfide connectivity pattern of unknown protein sequences. Improving accuracy in this area is highly relevant for the structural and functional annotation of proteins. Results: We predict the intra-chain disulfide bond connectivity patterns starting from known cysteine bonding states with an evolutionary-based unsupervised approach called Sephiroth that relies on high-quality alignments obtained with HHblits and is based on a coarse-grained cluster-based modelization of tandem cysteine mutations within a protein family. We compared our method with state-of-the-art unsupervised predictors and achieve a performance improvement of 25-27% while requiring an order of magnitude fewer aligned homologous sequences (~10^3 instead of ~10^4). Availability and implementation: The software described in this article and the datasets used are available at http://ibsquare.be/sephiroth.
['Daniele Raimondi', 'Gabriele Orlando', 'Wim F. Vranken']
Clustering-based model of cysteine co-evolution improves disulfide bond connectivity prediction and reduces homologous sequence requirements
288,484
Using Low-Power Sensors to Enhance Interaction on Wristwatches and Bracelets
['Simon T. Perrault', 'Eric Lecolinet']
Using Low-Power Sensors to Enhance Interaction on Wristwatches and Bracelets
691,375
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages obtained by Monte-Carlo sampling.
['Dörthe Malzahn', 'Manfred Opper']
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
88,722
Software model checking technology is based on an exhaustive and efficient simulation of all possible execution paths in concurrent programs. Existing tools based on this method can rapidly detect execution errors, preventing malfunctions in the final system. However dealing with dynamic memory allocation is still an open trend. In this paper, we present a novel method to extend explicit model checking of C programs with dynamic memory management. The method consists in defining a canonical representation of the heap that is based on moving most of the information from the state vector to a global structure. We give a formal semantics of the method in order to show its soundness. Our experimental results show that this method can be efficiently implemented in many well known model checkers, like CADP or SPIN.
['M. del Mar Gallardo', 'Pedro Merino', 'David Sanán']
Model Checking C Programs with Dynamic Memory Allocation
267,184
Data Synchronized Pipeline Architecture: Pipelining in Multiprocessor Environments.
['Yvon Jégou', 'André Seznec']
Data Synchronized Pipeline Architecture: Pipelining in Multiprocessor Environments.
734,091
We dynamically monitor per cycle scan activity to speed up the scan clock for low activity cycles without exceeding the specified peak power budget. The activity monitor is implemented either as on-chip hardware or through pre-simulated and stored test data. In either case a handshake protocol controls the rate of test data flow between the automatic test equipment (ATE) and device under test (DUT). The test time reduction accomplished depends upon an average activity factor α. For low α, about 50% test time reduction is analytically shown. With moderate activity, α = 0.5, simulated test data gives about 25% test time reduction for ITC02 benchmarks. For full scan s38584, the dynamic scan clock control reduced the test time by 19% when fully specified ATPG vectors were used and by 43% for vectors with don't cares. BIST with dynamic clock showed about 19% test time reduction for the largest ISCAS89 circuits in which the hardware activity monitor and scan clock control required about 2–3% hardware overhead.
['Priyadharshini Shanmugasundaram', 'Vishwani D. Agrawal']
Dynamic scan clock control for test time reduction maintaining peak power limit
507,876
A REAL-TIME FRACTAL-BASED BRAIN STATE RECOGNITION FROM EEG AND ITS APPLICATIONS
['Olga Sourina', 'Qiang Wang', 'Yisi Liu', 'Minh Khoa Nguyen']
A REAL-TIME FRACTAL-BASED BRAIN STATE RECOGNITION FROM EEG AND ITS APPLICATIONS
786,382
The process of creating photorealistic 3-dimensional computer graphic (3DCG) images is divided into two stages, i.e., modeling and rendering. Automatic rendering has gained popularity, and photorealistic rendering is generally used to render different types of images. However, professional artists still model characters manually. Moreover, not much progress has been achieved with regard to 3-D shape data acquisition techniques that can be applied to complex object modeling; this is an important problem hampering the progress of 3DCG. Generally, a laser and a highly accurate camera are used to acquire 3-D shape data. However, this technique is time-consuming and expensive. Further, the eyes may be damaged while measuring by this method. In order to solve these problems, we have proposed a simple method for 3-D shape data acquisition using a projector and a web camera. This method is economical, and simpler than conventional techniques. In this paper, we describe the setup of the projector and web camera, shape data acquisition process, image processing, and generation of a photorealistic model. We also propose a method for application to complex objects using several cameras.
['Ippei Torii', 'Yousuke Okada', 'Masayuki Mizutani', 'Naohiro Ishii']
A Simple Method for 3-Dimensional Modeling and Application to Complex Objects
348,981
Virtual worlds and avatars as the new frontier of telehealth care.
['Jacquelyn Ford Morie', 'Edward Haynes', 'Eric Chance', 'Dinesh Purohit']
Virtual worlds and avatars as the new frontier of telehealth care.
826,910
Statistical anomaly detection techniques provide the next layer of cyber-security defences below traditional signature-based approaches. This article presents a scalable, principled, probability-based technique for detecting outlying connectivity behaviour within a directed interaction network such as a computer network. Independent Bayesian statistical models are fit to each message recipient in the network using the Dirichlet process, which provides a tractable, conjugate prior distribution for an unknown discrete probability distribution. The method is shown to successfully detect a red team attack in authentication data obtained from the enterprise network of Los Alamos National Laboratory.
['Nicholas A. Heard', 'Patrick Rubin-Delanchy']
Network-wide anomaly detection via the Dirichlet process
942,927
Given a repeatedly issued query and a document with a not-yet-confirmed potential to satisfy the users' needs, a search system should place this document on a high position in order to gather user feedback and obtain a more confident estimate of the document utility. On the other hand, the main objective of the search system is to maximize expected user satisfaction over a rather long period, what requires showing more relevant documents on average. The state-of-the-art approaches to solving this exploration-exploitation dilemma rely on strongly simplified settings making these approaches infeasible in practice. We improve the most flexible and pragmatic of them to handle some actual practical issues. The first one is utilizing prior information about queries and documents, the second is combining bandit-based learning approaches with a default production ranking algorithm. We show experimentally that our framework enables to significantly improve the ranking of a leading commercial search engine.
['Aleksandr Vorobev', 'Damien Lefortier', 'Gleb Gusev', 'Pavel Serdyukov']
Gathering Additional Feedback on Search Results by Multi-Armed Bandits with Respect to Production Ranking
580,647
Recently, much attention has been given to models for identifying rumors in social media. Features that are helpful for automatic inference of the credibility, veracity and reliability of information have been described. The ultimate goal is to train classification models that are able to recognize future high-impact rumors as early as possible, before the event unfolds. The generalization power of the models is greatly hindered by the domain-dependent distributions of the features, an issue insufficiently discussed. Here we study a large dataset consisting of rumor and non-rumor tweets commenting on nine breaking-news stories taking place in different locations of the world. We found that the distributions of most features are specific to the event and that this bias naturally affects the performance of the model. The analysis of the domain-specific feature distributions is insightful and hints at the distinct characteristics of the underlying social network for different countries, social groups, cultures and others.
['Laura Tolosi', 'Andrey Tagarev', 'Georgi Georgiev']
An Analysis of Event-Agnostic Features for Rumour Classification in Twitter.
990,248
Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are:
• more rapid development of new applications,
• the ability to train (and re-train) systems based on user markings of correct and incorrect output,
• more accurate selection among interpretations when more than one is found, and
• more robust partial interpretation when no complete interpretation can be found.
['Damaris M. Ayuso', 'Sean Boisen', 'Heidi Fox', 'Herbert Gish', 'Robert Ingria', 'Ralph M. Weischedel']
BBN: description of the PLUM system as used for MUC-3
205,695
We present a manufacturing simulation system, based on autonomous agents and a dynamic price mechanism, that explores routing flexibility and provides a programming language for modeling manufacturing environments. The simulation system contains a control simulation module and a manufacturing environment simulation module. The control simulation module consists of a collection of autonomous agents who negotiate with each other to reach job processing decisions based on negotiation protocols and built-in price adjustment algorithms. The manufacturing environment simulation module is an event-based simulation system that couples with an input simulation language called the Flexible Routing Adaptive Control Simulation (FRACS) Language. The language can be used to model complicated manufacturing environments and to specify flexibility in part process plans. By integrating the control framework with the FRACS simulation system, a sophisticated test bed is created for research on different control and negotiation strategies. The simulation system allows quick turnaround time for software prototyping, and the modeling language enables easy model adjustment and performance tuning. Many experiments in the control and scheduling of manufacturing systems have been conducted using this simulation system. Several of these experiments are discussed in the paper.
['Grace Yuh-Jiun Lin', 'James J. Solberg']
An agent-based flexible routing manufacturing control simulation system
87,460
This article deals with the problem of fault prognosis in stochastic discrete event systems. For that purpose, partially observed stochastic Petri nets are considered to model the system with its sensors. The model represents both healthy and faulty behaviors of the system. Marking trajectories which are consistent with the measurements issued from the sensors are first obtained. Based on the event dates, the probabilities of the consistent trajectories are evaluated and a state estimation is obtained as a consequence. From the set of possible current states and their probabilities, a method to evaluate the probability of a future fault is developed using a probabilistic model. An example is presented to illustrate the results.
['Rabah Ammour', 'Edouard Leclercq', 'Eric Sanlaville', 'Dimitri Lefebvre']
Faults prognosis using partially observed stochastic Petri nets
820,049
The trend toward pervasive computing necessitates finding and implementing appropriate ways for users to interact with devices. We believe the future of interaction with pervasive devices lies in attentive user interfaces, systems that pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. In addition to developing technologies that support attentive user interfaces, and applications or scenarios that use attentive user interfaces, there is the problem of evaluating the utility of the attentive approach. With this last point in mind, we observed users in an "office of the future", where information is accessed on displays via verbal commands. Based on users' verbal data and eye-gaze patterns, our results suggest people naturally address individual devices rather than the office as a whole.
['Paul P. Maglio', 'Teenie Matlock', 'Christopher S. Campbell', 'Shumin Zhai', 'Barton A. Smith']
Gaze and Speech in Attentive User Interfaces
199,286
Computer Science abounds in folktales about how, in the early days of computer programming, bit vectors were ingeniously used to encode and manipulate finite sets. Algorithms have thus been developed to minimize memory footprint and maximize efficiency by taking advantage of microarchitectural features. With the development of automated and interactive theorem provers, finite sets have also made their way into the libraries of formalized mathematics. Tailored to ease proving, these representations are designed for symbolic manipulation rather than computational efficiency. This paper aims to bridge this gap. In the Coq proof assistant, we implement a bitset library and prove its correctness with respect to a formalization of finite sets. Our library enables a seamless interaction of sets for computing (bitsets) and sets for proving (finite sets).
['Arthur Blot', 'Pierre-Evariste Dagand', 'Julia L. Lawall']
From Sets to Bits in Coq
595,543
In this paper, the optimum functional patterns for CMOS operational amplifier are proposed based on an analysis to find the maximum difference between the good circuit and the faulty circuit for a CMOS operational amplifier. The theoretical and simulation results show that the derived test patterns do give the maximum difference at the output even when the circuit has a "soft" fault. The results have also been applied to generate test patterns for a programmable gain/loss mixed signal circuit.
['Soon Jyh Chang', 'Chung Len Lee', 'Jwu E. Chen']
Functional test pattern generation for CMOS operational amplifier
477,495
Often the arrangement of frequency dependent data such as pages on a sequential memory such as disks or tapes critically affects the turnaround time of real-time or dedicated mode processes. Since the size of typical problems renders exact solution techniques impractical, a fast, efficient heuristic procedure becomes very useful. This paper describes such a procedure which is applicable to a general class of objective functions corresponding to seek time functions constrained to be only monotonically piecewise linear.
['C. V. Ramamoorthy', 'Parker R. Blevins']
Arranging frequency dependent data on sequential memories
227,225
This paper investigates a framework that discovers pair-wise constraints for semi-supervised text document clustering. An active learning approach is proposed to select informative document pairs for obtaining user feedbacks. A gain directed document pair selection method that measures how much we can learn by revealing the relationships between pairs of documents is designed. Three different models, namely, uncertainty model, generation error model, and objective function model are proposed. Language modeling is investigated for representing clusters in the semi-supervised document clustering approach.
['Ruizhang Huang', 'Wai Lam']
Semi-supervised Document Clustering via Active Learning with Pairwise Constraints
442,437
Distributed Work Environments for Collaborative Engineering
['Heinz-Hermann Erbe', 'Dieter Müller']
Distributed Work Environments for Collaborative Engineering
582,537
Gaussian mean-shift (GMS) is a clustering algorithm that has been shown to produce good image segmentations (where each pixel is represented as a feature vector with spatial and range components). GMS operates by defining a Gaussian kernel density estimate for the data and clustering together points that converge to the same mode under a fixed-point iterative scheme. However, the algorithm is slow, since its complexity is O(kN2), where N is the number of pixels and k the average number of iterations per pixel. We study four acceleration strategies for GMS based on the spatial structure of images and on the fact that GMS is an expectation-maximisation (EM) algorithm: spatial discretisation, spatial neighbourhood, sparse EM and EM-Newton algorithm. We show that the spatial discretisation strategy can accelerate GMS by one to two orders of magnitude while achieving essentially the same segmentation; and that the other strategies attain speedups of less than an order of magnitude.
['Miguel Á. Carreira-Perpiñán']
Acceleration Strategies for Gaussian Mean-Shift Image Segmentation
410,492
Over the last two decades, propositional satisfiability (SAT) has become one of the most successful and widely applied techniques for the solution of NP-complete problems. The aim of this paper is to investigate theoretically how SAT can be utilized for the efficient solution of problems that are harder than NP or co-NP. In particular, we consider the fundamental reasoning problems in propositional disjunctive answer set programming (ASP), BRAVE REASONING and SKEPTICAL REASONING, which ask whether a given atom is contained in at least one or in all answer sets, respectively. Both problems are located at the second level of the Polynomial Hierarchy and thus assumed to be harder than NP or co-NP. One cannot transform these two reasoning problems into SAT in polynomial time, unless the Polynomial Hierarchy collapses. We show that certain structural aspects of disjunctive logic programs can be utilized to break through this complexity barrier, using new techniques from Parameterized Complexity. In particular, we exhibit transformations from BRAVE and SKEPTICAL REASONING to SAT that run in time O(2^k n^2) where k is a structural parameter of the instance and n the input size. In other words, the reduction is fixed-parameter tractable for parameter k. As the parameter k we take the size of a smallest backdoor with respect to the class of normal (i.e., disjunction-free) programs. Such a backdoor is a set of atoms that when deleted makes the program normal. In consequence, the combinatorial explosion, which is expected when transforming a problem from the second level of the Polynomial Hierarchy to the first level, can now be confined to the parameter k, while the running time of the reduction is polynomial in the input size n, where the order of the polynomial is independent of k. We show that such a transformation is not possible if we consider backdoors with respect to tightness instead of normality. We think that our approach is applicable to many other hard combinatorial problems that lie beyond NP or co-NP, and thus significantly enlarges the applicability of SAT.
['Johannes Klaus Fichte', 'Stefan Szeider']
Backdoors to normality for disjunctive logic programs
773,229
Highlights: (1) define a ratio-based similarity index for a pair of interval-valued cross ratios; (2) develop new similarity measures for interval reciprocal preference relations (IRPRs); (3) devise an induced interval-valued cross ratio ordered weighted geometric operator; (4) develop a consensus model to solve group decision making problems with IRPRs. Similarity analysis and preference information aggregation are two important issues for consensus building in group decision making with preference relations. Pairwise ratings in an interval reciprocal preference relation (IRPR) are usually regarded as interval-valued And-like representable cross ratios (i.e., interval-valued cross ratios for short) from the multiplicative perspective. In this paper, a ratio-based formula is introduced to measure similarity between a pair of interval-valued cross ratios, and its desirable properties are provided. We put forward ratio-based similarity measurements for IRPRs. An induced interval-valued cross ratio ordered weighted geometric (IIVCROWG) operator with interval additive reciprocity is developed to aggregate interval-valued cross ratio information, and some properties of the IIVCROWG operator are presented. The paper devises an importance-degree-induced IRPR ordered weighted geometric operator to fuse individual IRPRs into a group IRPR, and discusses the derivation of its associated weights. By employing ratio-based similarity measurements and IIVCROWG-based aggregation operators, a soft consensus model including a generation mechanism of feedback recommendation rules is further proposed to solve group decision making problems with IRPRs. Three numerical examples are examined to illustrate the applicability and effectiveness of the developed models.
['Zhou-Jing Wang', 'Jian Lin']
Ratio-based similarity analysis and consensus building for group decision making with interval reciprocal preference relations
628,390
Patent litigation not only covers legal and technical issues, it is also a key consideration for managers of high-technology (high-tech) companies when making strategic decisions. Patent litigation influences the market value of high-tech companies. However, this raises unique challenges. To this end, in this paper, we develop a novel recommendation framework to solve the problem of litigation risk prediction. We will introduce a specific type of patent-related litigation, that is, Section 337 investigations, which prohibit all acts of unfair competition, or any unfair trade practices, when exporting products to the United States. To build this recommendation framework, we collect and exploit a large amount of published information related to almost all Section 337 investigation cases. This study has two aims: (1) to predict the litigation risk in a specific industry category for high-tech companies and (2) to predict the litigation risk from competitors for high-tech companies. These aims can be achieved by mining historical investigation cases and related patents. Specifically, we propose two methods to meet the needs of both aims: a proximal slope one predictor and a time-aware predictor. Several factors are considered in the proposed methods, including the litigation risk if a company wants to enter a new market and the risk that a potential competitor would file a lawsuit against the new entrant. Comparative experiments using real-world data demonstrate that the proposed methods outperform several baselines with a significant margin.
['Bo Jin', 'Chao Che', 'Kuifei Yu', 'Yue Qu', 'Li Guo', 'Cuili Yao', 'Ruiyun Yu', 'Qiang Zhang']
Minimizing Legal Exposure of High-Tech Companies through Collaborative Filtering Methods
873,973
Bedeutung der 3D-OP-Planung und 3D-Navigation für die minimalinvasive Neurochirurgie.
['Eike Schwandt', 'Sven R. Kantelhardt', 'Ali Ayyad', 'Michael Kosterhon', 'Axel Stadie', 'Alf Giese']
Bedeutung der 3D-OP-Planung und 3D-Navigation für die minimalinvasive Neurochirurgie.
736,751
The recovery of a three-dimensional (3-D) model from a sequence of two-dimensional (2-D) images is very useful in medical image analysis. Image sequences obtained from the relative motion between the object and the camera or the scanner contain more 3-D information than a single image. Methods to visualize the computed tomograms can be divided into two approaches: the surface rendering approach and the volume rendering approach. In this paper, a new surface rendering method using optical flow is proposed. Optical flow is the apparent motion in the image plane produced by the projection of real 3-D motion onto the 2-D image. The 3-D motion of an object can be recovered from the optical-flow field using additional constraints. By extracting the surface information from 3-D motion, it is possible to obtain an accurate 3-D model of the object. Both synthetic and real image sequences have been used to illustrate the feasibility of the proposed method. The experimental results suggest that the proposed method is suitable for the reconstruction of 3-D models from ultrasound medical images as well as other computed tomograms.
['Nan Weng', 'Yee-Hong Yang', 'Roger Pierson']
Three-dimensional surface reconstruction using optical flow for medical imaging
404,892
Maps are traditional means of presentation and tools for analysis of spatial information. The power of maps can be also put into service in analysis of spatio-temporal data, i.e. data about phenomena that change with time. Exploration of such data requires highly interactive and dynamic maps. We implement such a kind of mapping in a collection of Java applets designed for various types of spatially referenced time series data: occurrences of events, movement of objects in space, and statistical data referring to parts of territory. As a theoretical background, we develop a classification of analytical tasks depending on what aspect of a spatial phenomenon varies with the time (existence, spatial location, size and shape, or thematic properties) and what kind of view is required, with respect to time (instant, interval, or overall). On the basis of this classification, we select appropriate presentation techniques and devise interactive map manipulation tools supporting various kinds of tasks.
['Natalia V. Andrienko', 'Gennady L. Andrienko', 'Peter Gatalsky']
Visualization of spatio-temporal information in the Internet
56,363
In modern drive systems, inverters are a fundamental component. To improve the performance of this component, ensure their operability, and check their reliability, motor-load testbeds are used during the process of development. Unfortunately, there are several drawbacks and disadvantages inherent to conventional motor-load testbeds. In order to avoid these problems, a new concept for a hardware-in-the-loop-based electronic testbed has been developed. A well-defined second inverter in combination with a mathematical model of the machine-load combination is used to replace the conventional test setup. Different machine-load combinations can be easily simulated with one system by simply changing the mathematical models. This paper shows the system topology, analyzes the components of the testbed, and presents the experimental results that verify the feasibility and capability of the method proposed.
['Stefan Grubic', 'B. Amlang', 'Walter Schumacher', 'Andree Wenzel']
A High-Performance Electronic Hardware-in-the-Loop Drive–Load Simulation Using a Linear Inverter (LinVerter)
356,363
This research presents 2 experiments that serve as a framework for exploring auditory information processing. The framework is termed polychotic listening or auditory search, and requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word. Subjects' ability to scan between 2 and 6 simultaneous auditory streams of letter and digit names for the name of a target letter was examined using 6 loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The main dependent variables were target localization accuracy and reaction time. Results show that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The 2nd study required subjects to practice the same task for 10 sessions, for a total of 1,800 trials. These results show that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. Some potential applications of this research for cockpit and automobile warning displays and virtual reality and training systems are described.
['Mark D. Lee']
Multichannel Auditory Search: Toward Understanding Control Processes in Polychotic Auditory Listening
106,384
Typical placement objectives involve reducing net-cut cost or minimizing wirelength. Congestion minimization is the least understood objective; however, it models routability most accurately. In this paper, we study the congestion minimization problem during placement. First, we point out that the bounding box router used previously is not an accurate measure of the congestion in the placement. We use a realistic global router to evaluate congestion in the placement stage. This ensures that the final placement is truly congestion minimized. We also propose two new post-processing algorithms, the flow-based cell-centric algorithm and the net-centric algorithm. While the flow-based cell-centric algorithm can move multiple cells at the same time to minimize the congestion, it suffers from high memory consumption. Experimental results show that the net-centric algorithm can effectively identify the congested spots in the placement and reduce the congestion. It produces on average 7.7% less congestion than the bounding box router method. Finally, we use a final global router to verify that the placement obtained from our algorithm has 39% less congestion than a wirelength-optimized placement obtained by TimberWolf (commercial version 1.3.1).
['Maogang Wang', 'Hossein Sarrafzadeh']
Modeling and minimization of routing congestion
820,901
Symbols are frequently used to represent data objects in visualization. An appropriate contrast between symbols is a precondition that determines the efficiency of a visual analysis process. We study the contrast between different types of symbols in the context of scatterplots, based on user testing and a quantitative model for symbol contrast. In total, 32 different symbols were generated using four sizes, two classes (polygon- and asterisk-shaped), and four categories of rotational symmetry, and three different tasks were used. From the user test results an internal separation space is established for the symbol types under study. In this space, every symbol is represented by a point, and the visual contrasts defined by task performance between the symbols are represented by the distances between the points. The positions of the points in the space, obtained by Multidimensional Scaling (MDS), reveal the effects of different visual feature scales. Also, larger distances imply better symbol separation for visual tasks, and therefore indicate appropriate choices for symbols. The resulting configurations are discussed, and a number of patterns in the relation between properties of the symbols and the resulting contrast are identified. In short, we found that the size effect in the space is not linear and is more dominant than the shape effect.
['Jing Li', 'Jarke J. van Wijk', 'Jean-Bernard Martens']
Evaluation of symbol contrast in scatterplots
468,457
In this paper, we propose a new \emph{dynamic compressed index} of $O(w)$ space for a dynamic text $T$, where $w = O(\min(z \log N \log^*M, N))$ is the size of the signature encoding of $T$, $z$ is the size of the Lempel-Ziv77 (LZ77) factorization of $T$, $N$ is the length of $T$, and $M \geq 3N$ is an integer that can be handled in constant time under word RAM model. Our index supports searching for a pattern $P$ in $T$ in $O(|P| f_{\mathcal{A}} + \log w \log |P| \log^* M (\log N + \log |P| \log^* M) + \mathit{occ} \log N)$ time and insertion/deletion of a substring of length $y$ in $O((y+ \log N\log^* M)\log w \log N \log^* M)$ time, where $f_{\mathcal{A}} = O(\min \{ \frac{\log\log M \log\log w}{\log\log\log M}, \sqrt{\frac{\log w}{\log\log w}} \})$. Also, we propose a new space-efficient LZ77 factorization algorithm for a given text of length $N$, which runs in $O(N f_{\mathcal{A}} + z \log w \log^3 N (\log^* N)^2)$ time with $O(w)$ working space.
['Takaaki Nishimoto', 'Tomohiro I', 'Shunsuke Inenaga', 'Hideo Bannai', 'Masayuki Takeda']
Dynamic index and LZ factorization in compressed space
813,688
We propose a new algorithm for the detection of all intersections between a set of balls and a general query object. The proposed algorithm does not impose any restrictive condition on the set of balls and utilises power diagrams to minimize the amount of intersection tests. The price for this is power diagram computation in a preprocessing step.
['Michal Zemek', 'Ivana Kolingerová']
Power diagrams and intersection detection
554,145
Developing A Networked Authority: Nature and Significance of Power Relationships.
['Leiser Silva', 'Gurpreet Dhillon', 'James Backhouse']
Developing A Networked Authority: Nature and Significance of Power Relationships.
747,976
This paper presents a solution to a robust optimal regulation problem for a nonlinear polynomial system affected by parametric and matched uncertainties, which is based only on partial state information. The parameters describing the dynamics of the nonlinear polynomial plant depend on a vector of unknown parameters, which belongs to a finite parametric set, and the application of a certain control input is associated with the worst or least favourable value of the unknown parameter. A high-order sliding mode state reconstructor is designed for the nonlinear plant in such a way that the previously designed control can be applied for a system with incomplete information. Additionally, the matched uncertainty is also compensated by means of the same output-based regulator. The obtained algorithm is applied to control an uncertain nonlinear inductor circuit of the third order and a mechanical pendulum of the third order, successfully verifying the effectiveness of the developed approach.
['Manuel Jimenez-Lizarraga', 'Michael Basin', 'Victoria Celeste Rodríguez Carreón', 'Pablo Cesar Rodriguez Ramirez']
Output mini-max control for polynomial systems: analysis and applications
298,975
Large biomedical text data represents an important source of information that not only enables researchers to discover in-depth knowledge about biological systems, but also helps healthcare professionals practice evidence-based medicine in clinical settings. However, investigating and analyzing these data is often both data-intensive and computation-intensive. In this paper, we investigate how to use MapReduce, a parallel and distributed programming paradigm, to efficiently mine the associations between biomedical concepts extracted from a large set of biomedical articles. First, biomedical concepts were obtained by matching text to the Unified Medical Language System (UMLS) Metathesaurus, a biomedical vocabulary and standard database. Then we developed a MapReduce algorithm that could be used to calculate a category of interestingness measures defined on the basis of a 2 × 2 contingency table. This algorithm consists of two MapReduce jobs and takes a stripes approach to reduce the number of intermediate results. Experiments were conducted using Amazon Elastic MapReduce (EMR) with an input of 33,960 articles from the TREC (Text REtrieval Conference) 2006 Genomics Track. Performance tests indicated that our algorithm had approximately linear scalability and was more efficient than a "pairs" approach in the literature. The physician in our project team evaluated a subset of the association mining results related to drug-disease treatment and found that meaningful association rules ranked high.
['Yanqing Ji', 'Yun Tian', 'Fangyang Shen', 'John C. Tran']
Leveraging MapReduce to efficiently extract associations between biomedical concepts from large text data
689,866
There is a growing need to develop tools that are able to retrieve relevant textual information rapidly, to present textual information in a meaningful way, and to integrate textual information with related data retrieved from other sources. These tools are critical to support applications within corporate intranets and across the rapidly evolving World Wide Web. This paper introduces a framework for modelling structured text and presents a small set of operations that may be applied against such models. Using these operations, structured text may be selected, marked, fragmented, and transformed into relations for use in relational and object-oriented database systems. The extended functionality has been accepted for inclusion within the SQL/MM standard, and a prototype database engine has been implemented to support SQL with the proposed extensions. This prototype serves as a proof of concept intended to address industrial concerns, and it demonstrates the power of the proposed abstract data type for structured text. 1. The challenge: Database technology is essential to the operation of conventional business enterprises, and it is becoming increasingly important in the development of distributed information systems. However, most database systems, and in particular relational database systems, provide few facilities for effectively managing the vast body of electronic information embedded within text. Many customers require that large texts be searched both vertically, with respect to their internal structure, and horizontally, with respect to their textual content [Wei85]. Texts often need to be fragmented at appropriate structural boundaries. Sometimes selected text needs to be extracted as separate units, but often the appropriate context surrounding selected text must be recovered, and thus the selected text needs to be marked in some manner, so that it can be subsequently located within a potentially much larger context.
['L. J. Brown', 'Mariano P. Consens', 'Ian J. Davis', 'Christopher R. Palmer', 'Frank Wm. Tompa']
A structured text ADT for object-relational databases
487,472
Our tangible touch table interface mapping system was designed for adults to complete short map-based interactive problem-solving tasks using purpose-designed model objects. The table interface was compared with the closest existing traditionally equivalent method using a within-subjects exercise with 64 adult members of the general public in situ at the local library and museum. The hypothesis investigated whether "a tangible multi-touch table interface improved understanding of preparing for bushfire using map-based constructivist learning tasks". The system design and content, founded upon adult learning preferences (Knowles et al. 2005), further evolved through an iterative process of participatory involvement with three bushfire community groups. After using the preparing-for-bushfire tangible interactive mapping system, all of the participants improved upon their pre-test scores, indicating that they learned from the experience (t(31)=-9.08,p
['Mark Brown', 'Winyu Chinthammit', 'Paddy Nixon']
An Implementation of Tangible Interactive Mapping to Improve Adult Learning for Preparing for Bushfire
582,149
The main objective of pavement management is to provide comfortable roads for the users. The present serviceability index (PSI) is established to determine the quality of the road. Present serviceability rating (PSR), rutting, roughness, pavement distress data, etc. for provincial and county roads in Taiwan were collected in this study. Considering the interaction among parameters which deteriorate pavement condition, the fuzzy integral method is used to establish a non-interactive integrated performance value. Because the conventional regression model cannot deal with the fuzzy characteristics present in the PSR, a fuzzy regression model is used to establish the PSI relationship between subjective rating (PSR) and objective survey (parameters).
['Jia-Ruey Chang', 'Gwo-Hshiung Tzeng', 'Ching-Tsung Hung', 'Hsin-Hwa Lin']
Non-additive fuzzy regression applied to establish flexible pavement present serviceability index
198,062
Content Based Image Retrieval Based on Modelling Human Visual Attention
['Alex Papushoy', 'Adrian G. Bors']
Content Based Image Retrieval Based on Modelling Human Visual Attention
633,367
Though shared virtual memory (SVM) systems promise low cost solutions for high performance computing, they suffer from long memory latencies. These latencies are usually caused by repetitive invalidation on shared data. Since shared data are accessed through synchronizations and the patterns by which threads synchronize are repetitive, a prefetching scheme based on such repetitiveness would reduce memory latencies. Based on this observation, we propose a prefetching technique which predicts future access behavior by analyzing access history per synchronization variable. Our technique was evaluated on an 8-node SVM system using the SPLASH-2 benchmark. The results show that our technique could achieve 34%-45% reduction in memory access latencies.
['Sang-Kwon Lee', 'Heechul Yun', 'Joonwon Lee', 'Seungryoul Maeng']
Adaptive prefetching technique for shared virtual memory
269,512
The Research for Effecting to Traffic Congestion of Tangshan Based on PCA
['Junna Jiang', 'Caidonng Bian', 'Yili Tan', 'Xue-yu Mi']
The Research for Effecting to Traffic Congestion of Tangshan Based on PCA
94,975
The performance evaluation of a supply chain (SC) is an important step for continuous improvement of business processes. This study proposes a new approach to support the SC performance evaluation based on the combination between SCOR (Supply Chain Operations Reference) model and fuzzy-TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution). It was implemented using MATLAB and applied in an illustrative case. The proposed approach brings several benefits when compared with other approaches, such as: it enables benchmarking against other SCs; the fuzzy-TOPSIS requires few judgments to parameterization, which contributes to the agility of the decision process; it does not limit the number of alternatives simultaneously evaluated; it does not cause the ranking reversal problem when a new alternative is included in the evaluation process.
['Francisco Wellington Rodrigues Lima', 'Luiz Cesar Ribeiro Carpinetti']
Evaluating supply chain performance based on SCOR® model and fuzzy-TOPSIS
946,016
Work processes are conducted in various contexts and they involve different tasks, interruptions, activities and actions. In all of these, tacit knowledge plays a part. Some part of that tacit knowledge can be externalized and articulated by continuously monitoring the user’s activities. Because the desktop environment is an integral part of almost any office work context, we chart the demands the unstructured and discontinuous nature of work puts on the management of desktop working context. We discuss possibilities to augment the user’s awareness of his/her desktop working environment by providing a context-aware application that can act as a map-like resource for the user’s past activities on the desktop. We propose using temporal information to couple personal experiences with representational, more objective aspects of the context in order to make it possible for the user to express and retrieve subjectively significant activities with a minimal effort. We present an abstract model for designing an application for this purpose.
['Kimmo Wideroos', 'Samuli Pekkola']
Presenting the Past: A Framework for Facilitating the Externalization and Articulation of User Activities in Desktop Environment
389,736
Motivated by an aggregate production-planning problem in an actual global manufacturing network, we examine the impact of exchange-rate uncertainty on the choice of optimal production policies when the allocation decision can be deferred until the realization of exchange rates. This leads to the formulation of the problem as a two-stage recourse program whose optimal policy structure features two forms of flexibility denoted as operational hedging: (1) production hedging, where the firm deliberately produces less than the total demand; and (2) allocation hedging, where due to unfavorable exchange rates, some markets are not served despite having unused production. Our characterization of the optimal policy structure leads to an economic valuation of production and allocation hedging. We show that the prevalence of production hedging is moderated by the degree of correlation between exchange rates. A comprehensive examination under the following four generalized settings provides the depth, scope, and relevancy that our proposed operational hedges play to facilitate aggregate planning: (1) multiple periods, (2) demand uncertainty, (3) price setting or monopolistic pricing, and (4) price setting under demand uncertainty. We show that production and allocation hedging are robust for these generalizations and should be integrated into the overall aggregate planning strategy of a global manufacturing firm.
['Burak Kazaz', 'Maqbool Dada', 'Herbert Moskowitz']
Global Production Planning Under Exchange-Rate Uncertainty
513,646