A GPRS communication system is presented in this paper, comprising neuro-fuzzy image data fusion, image data compression with a controllable compression rate, and effective congestion control. The system adopts a new fuzzy neural network (FNN) that appropriately adjusts its inputs and outputs and increases the robustness, stability, and working speed of the network. In addition to the FNN, a wavelet transform is applied in the compression algorithm to achieve a higher and controllable compression rate. As the experimental results show, the communication system can obtain composite image data effectively, compress and decompress image data dynamically, and prevent congestion significantly, which demonstrates that it is a more practical and effective method than traditional approaches for communication in wireless networks, and that it is especially well suited to transmitting video data.
['Jianliang Su', 'Yimin Chen', 'Zhonghui Ouyang']
GPRS Communication System Designed for High Congestion Risk Circumstance
430,511
This technical contribution explores the process capability implications of the requirements of ISO/IEC 20000-1. The foundation of the methodology used to determine the linkages between each requirement and the notional process capability attribute it supports is considered. Analysis results are then presented for the implied clause process capability profile, from the perspective of all the clauses taken together on one hand, and of individual clauses on the other. The implications of these process capability insights for the design of an organizational maturity model applicable to ISO/IEC 20000-1 are then considered.
['Alastair Duncan Walker', 'Antonio Coletta', 'Rama Sivaraman']
An evaluation of the process capability implications of the requirements of ISO/IEC 20000-1
166,096
Take a Walk, Grow a Tree (Preliminary Version)
['Sandeep N. Bhatt', 'Jin-Yi Cai']
Take a Walk, Grow a Tree (Preliminary Version)
160,718
To meet the need for organizational process measurement, this paper presents an integrated measurement model for software process management, which establishes the relationship between organizational units, role definitions, and the GDM analysis results of business goals. The model supports process measurement activities such as data collection, data analysis, and communication of analysis results. When a business goal changes, the data collection and data analysis can change accordingly. Based on the integrated process model, an integrated software process measurement platform has been developed, which supports data management for quality, cost, delivery, process, people skills, etc. It provides measurement views for a variety of roles at different organizational levels to facilitate the use of analysis results.
['Xiaodong Guo', 'Li Meng']
Organization Application Oriented Software Process Measurement Model
148,697
Human Intervention for Searching Targets Using Mobile Agents in a Multi-Robot Environment.
['Takushi Nishiyama', 'Munehiro Takimoto', 'Yasushi Kambayashi']
Human Intervention for Searching Targets Using Mobile Agents in a Multi-Robot Environment.
737,453
Finite domain constraint solvers are typically applied to problems with only quite small values. This is the case in many tasks for which constraint-based approaches are well suited. A well-known benchmark library for constraints, CSPLib ([1]), consists almost exclusively of such examples. On the other hand, the need for arbitrary precision integer arithmetic is widely recognised, and many common Prolog systems provide transparent built-in support for arbitrarily large integers.
['Markus Triska']
Generalising Constraint Solving over Finite Domains
476,259
The XOR network code is widely used in conventional network coding. However, when the channel is noisy, the XOR code is suboptimal in terms of minimizing distortion in the source signal. We propose a new network code for the two-way relay channel, designed to minimize signal distortion due to channel noise. We assume the source signal X is encoded by a source encoder and the sequence of symbols is transmitted over the noisy two-way relay channel. The received sequence of symbols may be corrupted by channel noise, so X̂, the estimate of X, may differ from X. We design a new network code that minimizes the distortion between X and X̂. To obtain an algorithm that finds a network code with minimum distortion, we define the expected distortion associated with a network code. Starting with the XOR network code, we iteratively optimize the network code to obtain smaller expected distortion, while maintaining the Latin square constraint of the network code. The new network code achieves a substantial performance gain over the XOR network code.
['Moonseo Park', 'Seong-Lyun Kim']
Minimum distortion network code design for source coding over noisy channels
314,209
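A small numerical sketch of the expected-distortion idea from the abstract above. All concrete choices here are illustrative assumptions: a 4-ary alphabet, assumed quantizer levels, an assumed PSK-like symbol channel, and output relabeling as one simple family of Latin-square-preserving moves (the paper's iterative optimization is more general).

```python
# Sketch: evaluate and locally improve the expected distortion of a Latin-
# square relay network code, starting from the XOR code. Assumed parameters.
import itertools
import random

import numpy as np

M = 4                                        # relay alphabet size (assumed)
values = np.array([-3.0, -1.0, 1.0, 3.0])    # assumed 4-level quantizer outputs
p_sym = np.full(M, 1.0 / M)                  # uniform source symbols

# Assumed PSK-like relay channel: neighbouring symbols are confused more often.
P = np.zeros((M, M))
for c, c_rx in itertools.product(range(M), repeat=2):
    d = min((c - c_rx) % M, (c_rx - c) % M)  # circular symbol distance
    P[c, c_rx] = {0: 0.90, 1: 0.045, 2: 0.01}[d]

xor_code = np.array([[a ^ b for b in range(M)] for a in range(M)])

def expected_distortion(L):
    """Mean squared distortion at node A when decoding B's symbol via code L."""
    inv = np.empty((M, M), dtype=int)        # inv[a, c] = b with L[a, b] == c
    for a in range(M):
        for b in range(M):
            inv[a, L[a, b]] = b
    d = 0.0
    for a, b, c_rx in itertools.product(range(M), repeat=3):
        b_hat = inv[a, c_rx]                 # node A knows its own symbol a
        d += p_sym[a] * p_sym[b] * P[L[a, b], c_rx] * (values[b] - values[b_hat]) ** 2
    return d

def optimize(L, iters=500, seed=0):
    """Greedy search over output relabelings, which preserve the Latin property."""
    rng = random.Random(seed)
    best_L, best = L.copy(), expected_distortion(L)
    for _ in range(iters):
        i, j = rng.sample(range(M), 2)
        cand = best_L.copy()
        cand[best_L == i], cand[best_L == j] = j, i   # swap two output labels
        d = expected_distortion(cand)
        if d < best:
            best_L, best = cand, d
    return best_L, best

print("XOR code expected distortion:         ", expected_distortion(xor_code))
print("relabel-optimized expected distortion:", optimize(xor_code)[1])
```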
To better handle uncertain information, this paper presents an uncertain reasoning approach based on rough set theory and other uncertainty theories, and studies mainly its application to satellite fault detection. The simulation results show that the detection precision of the new reasoning approach improves from the previous 75.95 percent to 83.33 percent on average. The new approach also has further advantages: faster detection, lower storage requirements, and no need for any prior information beyond the data processing itself. These results indicate that the reasoning approach is more effective and feasible than the previous reasoning approaches. Finally, some prospects for future research are given.
['Qing E. Wu', 'Zhenyu Han', 'Anping Zheng', 'Guangzhao Cui']
An Uncertain Reasoning Approach with Application to Satellite Fault Detection
483,271
We describe some methods for evaluating error probabilities of infinite signal constellations, requiring only a finite number of terms. These methods are applicable, for example, to convolutional codes decoded with a finite-depth Viterbi algorithm and to signal constellations carved from lattices. Coded modulations based on lattices and convolutional or block codes can also be dealt with. As an example of application, we analyze a variable-rate 3-stage coded modulation encoder/decoder, which is being built and is based on a combination of convolutional codes with a single-parity-check block code.
['Ezio Biglieri', 'Andrea Sandri', 'A. Spalvieri']
Computing upper bounds to error probability of coded modulation schemes
56,141
We propose a novel collaborative filtering method for the top-n recommendation task using a biclustering neighborhood approach. Our method takes advantage of local biclustering structure for more precise and localized collaborative filtering. Using several important properties from the field of Formal Concept Analysis, we build user-specific biclusters that are "more personalized" to the users of interest. We create an innovative rank scoring of candidate items that combines the local similarity of biclusters with global similarity. Our method is parameter-free, thus removing the need for tuning parameters. It is easily scalable and can efficiently make recommendations. We demonstrate the performance of our algorithm using several standard benchmark datasets and two PayPal (in-house) datasets. Our experiments show that our method generates better recommendations than several state-of-the-art algorithms, especially in the presence of sparse data. Furthermore, we also demonstrate the robustness of our approach to increasing data sparsity and number of users.
['Faris Alqadah', 'Chandan K. Reddy', 'Junling Hu', 'Hatim F. Alqadah']
Biclustering neighborhood-based collaborative filtering method for top-n recommender systems
222,369
We develop theorems that place limits on the point-wise approximation of the responses of filters, both linear shift invariant (LSI) and linear shift variant (LSV), to input signals and images that are LSV in the following sense: they can be expressed as the outputs of systems with LSV impulse responses, where the shift variance is with respect to the filter scale of a single-prototype filter. The approximations take the form of LSI approximations to the responses. We develop tight bounds on the approximation errors expressed in terms of filter durations and derivative (Sobolev) norms. Finally, we find application of the developed theory to defoveation of images, deblurring of shift-variant blurs, and shift-variant edge detection.
['Alan C. Bovik', 'Raghu G. Raj']
Approximating filtered scale-variant signals
239,300
From a partial observation of the behaviour of a labeled Discrete Event System, fault diagnosis strives to determine whether or not a given "invisible" fault event has occurred. The diagnosability problem can be stated as follows: does the labeling allow an outside observer to determine the occurrence of the fault, no later than a bounded number of events after that unobservable occurrence? In concurrent systems, partial order semantics adds to the difficulty of the problem, but also provides a richer and more complex picture of observation and diagnosis. In particular, it is crucial to clarify the intuitive notion of "time after fault occurrence". To this end, we use a unifying metric framework for event structures, providing a general topological description of diagnosability in both sequential and nonsequential semantics for Petri nets.
['Stefan Haar']
What Topology Tells us about Diagnosability in Partial Order Semantics
580,817
The growing demand on the performance of transparent computing systems requires good cache schemes in order to overcome prolonged network latency. However, evaluating cache schemes, and especially measuring the performance of a transparent computing system with a particular cache scheme, remains challenging. This is because no method is available to evaluate the effectiveness and efficiency of cache schemes in transparent computing, nor has a simulator been developed to measure system performance under particular cache schemes. In this paper, we propose TranSim, a full-featured, high-performance simulation framework for transparent computing. For the first time, TranSim introduces a methodology to evaluate the performance of multi-level cache hierarchies in transparent computing under different cache configurations and cache replacement policies. TranSim can also demonstrate the behavior and performance of the entire transparent computing system rather than only the cache miss/hit rate. Using TranSim, a system designer can quickly evaluate the effectiveness of cache schemes along with the system performance. We construct several experiments to evaluate the effectiveness and efficiency of TranSim. Results show that TranSim can accurately output the performance of both the cache hierarchy and the entire transparent computing system.
['Jinzhao Liu', 'Yuezhi Zhou', 'Di Zhang']
TranSim: A Simulation Framework for Cache-Enabled Transparent Computing Systems
721,423
In this note we analyze the performance measures of a one-server queue when arrivals are not independent. The analysis is based on the correlated Poisson distribution for customer arrivals. Service may have any distribution. This type of queue is defined as M^C/G/1. The formulas for the performance measures of the queue are derived without any approximation. Surprisingly, these formulas are very simple.
['Zvi Drezner']
ON A QUEUE WITH CORRELATED ARRIVALS
470,724
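For context, a minimal simulation of the independent-arrival M/G/1 baseline, checked against the Pollaczek-Khinchine mean-wait formula. The paper's contribution is the exact correlated-arrival (M^C/G/1) counterpart, which this sketch does not reproduce; the arrival rate and service distribution below are arbitrary choices.

```python
# Sketch: M/G/1 single-server queue simulation vs. the Pollaczek-Khinchine
# formula for the mean waiting time, Wq = lambda * E[S^2] / (2 * (1 - rho)).
import numpy as np

rng = np.random.default_rng(1)
lam = 0.8                    # Poisson arrival rate (assumed)
n = 200_000                  # number of simulated customers

# Service: constant-plus-exponential mix as an arbitrary "G" example.
service = 0.5 + rng.exponential(0.3, n)
ES, ES2 = service.mean(), (service ** 2).mean()
rho = lam * ES
assert rho < 1, "queue must be stable"

arrivals = np.cumsum(rng.exponential(1 / lam, n))
start = np.empty(n)
t_free = 0.0                 # time the server next becomes free
for i in range(n):
    start[i] = max(arrivals[i], t_free)   # wait for the server if busy
    t_free = start[i] + service[i]

Wq_sim = (start - arrivals).mean()
Wq_pk = lam * ES2 / (2 * (1 - rho))       # Pollaczek-Khinchine prediction
print(f"simulated mean wait {Wq_sim:.4f}  vs  P-K formula {Wq_pk:.4f}")
```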
This paper investigates the problem of information stabilization of the images of source sets over discrete memoryless channels (DMCs). It is shown that if the minimum image cardinality of a source set over a DMC has a specific entropy characterization, then the image of this source set will be information stable. In many applications, this requirement on the source set can be satisfied using the method of equal-image-size source partitioning. A construction of a strong secrecy subcode from a weak secrecy code for the wiretap channel is provided as an example to illustrate the use of the information stabilization technique.
['Eric Graves', 'Tan F. Wong']
Information stabilization of images over discrete memoryless channels
870,788
As reported by the Web site CBS MoneyWatch [1], electric vehicles are seeing a steady growth in consumer interest, especially within the youngest age group of potential buyers. As is the case with all vehicles, it is very important, and even required, to continuously monitor their vital equipment. Therefore, today almost all vehicles are equipped with an onboard diagnosis (OBD) system. This system is used for warnings and monitoring critical failures in the vehicle such as ignition, battery, oil and gasoline level, engine, and brakes, among others. If a problem or malfunction is detected, the OBD system sets a malfunction indicator light (MIL) on the dashboard that is readily visible to the vehicle operator and informs the driver of the existing problem. The OBD is a valuable tool that assists in the service and repair of vehicles by providing a simple, quick, and effective way to pinpoint problems by retrieving vital automobile diagnostics. In the case of vehicles with electric motors, the detection of faults expectedly differs from that in vehicles with gasoline engines. This article describes DSP techniques using the Texas Instruments TMS320F2812 signal processor to achieve this.
['Bilal Akin', 'Seungdeog Choi', 'Hamid A. Toliyat']
DSP Applications in Electric and Hybrid Electric Vehicles [In the Spotlight]
427,133
In satellite-based Internet access, improvements in the TCP protocol and a suitable media access control (MAC) scheme are two key factors in maximizing system throughput. While a number of techniques to enhance the TCP performance over satellite links have been proposed, this paper focuses on the influence of satellite MAC protocol on TCP performance. We present a pre-return reservation combined free/demand assignment multiple access (PRR-CFDAMA) protocol over an MFTDMA/TDM satellite link, and analyze its performance with Poisson and empirical Internet traffic by simulations. The results show that PRR-CFDAMA can provide higher throughput and shorter delay for TCP traffic. We also present simulation results for FTP over TCP and show that by tuning the TCP parameters, we can improve the system performance noticeably.
['Yuheng Li', 'Zhifeng Jiang', 'Victor C. M. Leung']
Performance evaluations of PRR-CFDAMA for TCP traffic over geosynchronous satellite links
172,300
Recently, numerous Multiobjective Evolutionary Algorithms (MOEAs) have been presented to solve real-life problems. However, a number of issues still remain with regard to MOEAs, such as convergence to the true Pareto front as well as scalability to many-objective problems rather than just bi-objective problems. The performance of these algorithms may be augmented by incorporating the coevolutionary concept. Hence, in this paper, a new algorithm for multiobjective optimization called SPEA2-CC is presented. SPEA2-CC combines an MOEA, the Strength Pareto Evolutionary Algorithm 2 (SPEA2), with Cooperative Coevolution (CC). Scalability tests have been conducted to evaluate and compare SPEA2-CC against the original SPEA2 on seven DTLZ test problems with 3 to 5 objectives. The results show clearly that the performance scalability of SPEA2-CC is significantly better than that of the original SPEA2 as the number of objectives increases.
['Tse Guan Tan', 'Jason Teo', 'Hui Keng Lau']
Performance Scalability of a Cooperative Coevolution Multiobjective Evolutionary Algorithm
42,777
e-Commerce is one of the most important applications of recommendation systems (RS). In this paper we examine several recommendation methods for application in web-based gift recommendation systems. These methods were verified experimentally using the MovieLens data set as well as data gathered during tests with the implemented SzukamCzegos.pl system.
['Janusz Sobecki', 'Krzysztof Piwowar']
Comparison of Different Recommendation Methods for an e-Commerce Application
26,850
We develop a dynamic model in which Operation Iraqi Freedom (OIF) servicemembers incur a random amount of combat stress during each month of deployment, develop posttraumatic stress disorder (PTSD) if their cumulative stress exceeds a servicemember-specific threshold, and then develop symptoms of PTSD after an additional time lag. Using Department of Defense deployment data and Mental Health Advisory Team PTSD survey data to calibrate the model, we predict that---because of the long time lags and the fact that some surveyed servicemembers experience additional combat after being surveyed---the fraction of Army soldiers and Marines who eventually suffer from PTSD will be approximately twice as large as in the raw survey data. We cannot put a confidence interval around this estimate, but there is considerable uncertainty (perhaps ±30%). The estimated PTSD rate translates into ≈300,000 PTSD cases among all Army soldiers and Marines in OIF, with ≈20,000 new cases each year the war is prolonged. The heterogeneity of threshold levels among servicemembers suggests that although multiple deployments raise an individual's risk of PTSD, in aggregate, multiple deployments lower the total number of PTSD cases by ≈30% relative to a hypothetical case in which the war was fought with many more servicemembers (i.e., a draft) deploying only once. The time lag dynamics suggest that, in aggregate, reserve servicemembers show symptoms ≈1--2 years before active servicemembers and predict that >75% of OIF servicemembers who self-reported symptoms during their second deployment were exposed to the PTSD-generating stress during their first deployment.
['Michael P. Atkinson', 'Adam Guetz', 'Lawrence M. Wein']
A Dynamic Model for Posttraumatic Stress Disorder Among U.S. Troops in Operation Iraqi Freedom
434,862
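A Monte Carlo rendering of the threshold model described above is easy to sketch. All distributions and parameters below (monthly stress doses, the threshold law, the symptom lag) are illustrative stand-ins rather than the paper's calibrated values, but they reproduce the qualitative effect that surveys taken during deployment undercount eventual cases.

```python
# Sketch: cumulative-stress threshold model with a random symptom lag.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                          # simulated servicemembers
months = 12                          # one deployment of 12 months (assumed)

# Heterogeneous, servicemember-specific thresholds (lognormal, assumed).
threshold = rng.lognormal(mean=2.0, sigma=0.6, size=n)
# Random combat stress accrued each month (exponential, assumed), accumulated.
stress = rng.exponential(scale=1.0, size=(n, months)).cumsum(axis=1)

# PTSD develops in the first month cumulative stress crosses the threshold.
crossed = stress >= threshold[:, None]
develops = crossed.any(axis=1)
onset_month = np.where(develops, crossed.argmax(axis=1) + 1, -1)

# Symptoms appear only after an additional random lag (geometric, assumed),
# so a survey at the end of deployment undercounts eventual cases.
lag = rng.geometric(p=0.15, size=n)  # mean lag ~6-7 months (assumed)
symptomatic_at_survey = develops & (onset_month + lag <= months)

print(f"eventual PTSD fraction:      {develops.mean():.3f}")
print(f"symptomatic at survey time:  {symptomatic_at_survey.mean():.3f}")
print(f"undercount factor:           {develops.mean() / symptomatic_at_survey.mean():.2f}")
```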
Most safety critical embedded systems, i.e. systems for which constraints must necessarily be satisfied in order to avoid catastrophic consequences, consist of a set of data dependent tasks which exchange data. Although non-preemptive real-time scheduling is safer than preemptive real-time scheduling in a safety critical context, preemptive real-time scheduling provides a better success ratio, but the preemption has a cost. In this paper we propose a schedulability analysis for data dependent periodic tasks which takes into account the exact preemption cost, data dependence constraints without loss of data and mutual exclusion constraints.
['Falou Ndoye', 'Yves Sorel']
Monoprocessor Real-Time Scheduling of Data Dependent Tasks with Exact Preemption Cost for Embedded Systems
488,558
The output regulation problem for singular nonlinear systems via normal output feedback control has been a challenging problem. Existing approaches for solving it employ techniques similar to those used for linear singular systems; their results either rely on a normalizability assumption or are limited to systems with special structures. This paper gives a complete solution to this problem by employing a novel approach that is also interesting for linear systems.
['Zhiyong Chen', 'Jie Huang']
Solution of output regulation of singular nonlinear systems by normal output feedback
284,992
In an earlier work, the authors developed a rigged configuration model for the crystal $B(\infty)$ (which also descends to a model for irreducible highest weight crystals via a cutting procedure). However, the result obtained was only valid in finite types, affine types, and simply-laced indefinite types. In this paper, we show that the rigged configuration model proposed does indeed hold for all symmetrizable types. As an application, we give an easy combinatorial condition that gives a Littlewood-Richardson rule using rigged configurations which is valid in all symmetrizable Kac-Moody types.
['Ben Salisbury', 'Travis Scrimshaw']
Rigged configurations for all symmetrizable types
631,605
This paper presents a system-level Network-on-Chip simulation platform integrating the transaction-level performance model of NoC components and their architecture-level energy models. The transaction-level model written in SystemC enables fast simulation speed and the architectural energy model estimates communication energy, including both dynamic and leakage, dissipating on routers and links through the transaction-level simulation. This power model supports temporal power profiling for each NoC component and spatial power snapshots for the whole NoC, making it easy to inspect the power implications under application workloads. Applying this energy model on 8 deep sub-micron CMOS processes from 180nm to 45nm, we reveal an average 2.8X leakage power increase for each technology evolution. With temporal and spatial profiling for burst-mode applications, the power hungry portions in both time- and space-domains can be identified, and in turn it provides useful information for the energy-aware NoC design space exploration for the future nanoscale IC technologies.
['Jinwen Xi', 'Peixin Zhong']
A Transaction-Level NoC Simulation Platform with Architecture-Level Dynamic and Leakage Energy Models
118,809
We describe a method for retrieving shots containing a particular 2D human pose from unconstrained movie and TV videos. The method involves first localizing the spatial layout of the head, torso and limbs in individual frames using pictorial structures, and associating these through a shot by tracking. A feature vector describing the pose is then constructed from the pictorial structure. Shots can be retrieved either by querying on a single frame with the desired pose, or through a pose classifier trained from a set of pose examples. Our main contribution is an effective system for retrieving people based on their pose, and in particular we propose and investigate several pose descriptors which are person, clothing, background and lighting independent. As a second contribution, we improve the performance over existing methods for localizing upper body layout on unconstrained video. We compare the spatial layout pose retrieval to a baseline method where poses are retrieved using a HOG descriptor. Performance is assessed on five episodes of the TV series 'Buffy the Vampire Slayer', and pose retrieval is demonstrated also on three Hollywood movies.
['Vittorio Ferrari', 'Manuel J. Marín-Jiménez', 'Andrew Zisserman']
Pose search: Retrieving people using their pose
542,701
This paper represents an ongoing investigation of dexterous and natural control of upper extremity prostheses using the myoelectric signal. The scheme described within uses a hidden Markov model (HMM) to process four channels of myoelectric signal, with the task of discriminating six classes of limb movement. The HMM-based approach is shown to be capable of higher classification accuracy than previous methods based upon multilayer perceptrons. The method does not require segmentation of the myoelectric signal data, allowing a continuous stream of class decisions to be delivered to a prosthetic device. Due to the fact that the classifier learns the muscle activation patterns for each desired class for each individual, a natural control actuation results. The continuous decision stream allows complex sequences of manipulation involving multiple joints to be performed without interruption. The computational complexity of the HMM in its operational mode is low, making it suitable for a real-time implementation. The low computational overhead associated with training the HMM also enables the possibility of adaptive classifier training while in use.
['Adrian D. C. Chan', 'Kevin B. Englehart']
Continuous myoelectric control for powered prostheses using hidden Markov models
314,156
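The decision scheme described above can be sketched as one Gaussian-emission HMM per motion class, scored on a sliding window with the forward algorithm. The model parameters below are random placeholders for what training on each subject's muscle activation patterns would provide, and the window and overlap sizes are assumptions.

```python
# Sketch: continuous class decisions from per-class HMM log-likelihoods.
import numpy as np

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian, per HMM state."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def forward_loglik(obs, log_pi, log_A, means, variances):
    """Log P(obs | HMM) via the forward algorithm in log space."""
    alpha = log_pi + log_gauss(obs[0], means, variances)
    for x in obs[1:]:
        alpha = log_gauss(x, means, variances) + \
                np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
n_classes, n_states, dim = 6, 3, 4       # 6 motions, 4 EMG channels (assumed)

def random_hmm():
    # Placeholder parameters; training would fit these per class and subject.
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    pi = rng.dirichlet(np.ones(n_states))
    return (np.log(pi), np.log(A),
            rng.normal(size=(n_states, dim)), np.ones((n_states, dim)))

models = [random_hmm() for _ in range(n_classes)]

stream = rng.normal(size=(200, dim))     # continuous feature stream (stand-in)
win = 32                                 # analysis window length (assumed)
decisions = []
for t in range(win, len(stream) + 1, win // 2):   # 50%-overlap decision stream
    window = stream[t - win:t]
    scores = [forward_loglik(window, *m) for m in models]
    decisions.append(int(np.argmax(scores)))      # one class decision per window
print(decisions)
```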
An Activation-based Sentence Processing Model of English.
['Kei Takahashi', 'Kiyoshi Ishikawa', 'Kei Yoshimoto']
An Activation-based Sentence Processing Model of English.
783,676
Network interfaces that contain a programmable processor offer much flexibility, which so far has mainly been used to optimize message passing libraries. We show that high performance gains can be achieved by implementing support for application-specific shared data structures on the network interface processors. As a case study, we have implemented shared transposition tables on a Myrinet network, using customized software that runs partly on the network processor and partly on the host. The customized software greatly reduces the overhead of interactions between the network interface and the host. Also, the software exploits application semantics to obtain a simple and efficient communication protocol. Performance measurements indicate that applications that run application-specific code on the network interface are up to 2.5 times as fast as those that use generic message-passing software.
['Raoul Bhoedjang', 'John W. Romein', 'Henri E. Bal']
Optimizing distributed data structures using application-specific network interface software
126,332
Efficiency‐based h‐ and hp‐refinement strategies for finite element methods
['H. De Sterck', 'Thomas A. Manteuffel', 'Stephen F. McCormick', 'J. W. Nolting', 'John W. Ruge', 'Lawrence Tang']
Efficiency‐based h‐ and hp‐refinement strategies for finite element methods
31,452
Algorithm of Trawler Fishing Effort Extraction Based on BeiDou Vessel Monitoring System Data
['Shengmao Zhang', 'Bailang Yu', 'Qiaoling Zheng', 'Weifeng Zhou']
Algorithm of Trawler Fishing Effort Extraction Based on BeiDou Vessel Monitoring System Data
724,156
Modal Input/Output interfaces (MIOs) are a new specification theory for systems communicating via inputs and outputs. The approach combines the advantages of modal automata and interface automata, two dominant specification theories for component-based design. This paper presents the MIO Workbench, the first complete implementation of the MIO theory.
['Sebastian S. Bauer', 'Philip Mayer', 'Axel Legay']
MIO workbench: a tool for compositional design with modal input/output interfaces
637,087
In this paper, we propose a novel method that can detect fingertips as well as recognize hand gestures. Firstly, we collect the hand curves with a Kinect sensor. Secondly, we detect fingertips based on the discrete curve evolution. Thirdly, we recognize hand gestures using evolved curves partitioned at the detected fingertips. Experimental results show that our method performs well in both fingertips detection and hand gesture recognition.
['Zhongyuan Lai', 'Zhijun Yao', 'Chun Wang', 'Hui Liang', 'Hongmei Chen', 'Wu Xia']
Fingertips detection and hand gesture recognition based on discrete curve evolution with a kinect sensor
984,140
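The discrete curve evolution step the paper builds on can be sketched directly: repeatedly delete the contour vertex with the lowest relevance K = beta * l1 * l2 / (l1 + l2), where beta is the turn angle at the vertex and l1, l2 the adjacent segment lengths (following Latecki and Lakaemper), so that fingertip-like high-curvature vertices survive. The toy star contour below stands in for a Kinect hand silhouette; the gesture-recognition stage is not reproduced.

```python
# Sketch: discrete curve evolution (DCE) on a closed contour.
import numpy as np

def relevance(prev, v, nxt):
    a, b = v - prev, nxt - v
    l1, l2 = np.linalg.norm(a), np.linalg.norm(b)
    cosang = np.clip(np.dot(a, b) / (l1 * l2), -1.0, 1.0)
    beta = np.arccos(cosang)                # turn angle at vertex v
    return beta * l1 * l2 / (l1 + l2)

def dce(points, keep=10):
    """Evolve a closed contour down to `keep` vertices."""
    pts = [np.asarray(p, float) for p in points]
    while len(pts) > keep:
        n = len(pts)
        scores = [relevance(pts[i - 1], pts[i], pts[(i + 1) % n])
                  for i in range(n)]
        del pts[int(np.argmin(scores))]     # drop the least-relevant vertex
    return np.array(pts)

# Toy closed contour: a five-pointed star standing in for a hand silhouette,
# whose tips play the role of fingertip candidates.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
r = 1.0 + 0.4 * np.cos(5 * t)
contour = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
print(dce(contour, keep=10))                # surviving high-relevance vertices
```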
Recent research has shown that anthropomorphism represents a means to facilitate HRI. Under which conditions do people anthropomorphize robots and other nonhuman agents? This research question was investigated in an experiment that manipulated participants' anticipation of a prospective human-robot interaction (HRI) with a robot whose behavior was characterized by either low or high predictability. We examined effects of these factors on perceptions of anthropomorphism and acceptance of the robot. Innovatively, the present research demonstrates that anticipation of HRI with an unpredictable agent increased anthropomorphic inferences and acceptance of the robot. Implications for future research on psychological determinants of anthropomorphism are discussed.
['Friederike Anne Eyssel', 'Dieta Kuchenbrandt', 'Simon Bobinger']
Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism
521,979
The aim of this work is to provide a language to reason about closed interactions, i.e. all those situations in which the outcomes of an interaction can be determined by the agents themselves and in which the environment cannot interfere with what they are able to determine. We will see that two different interpretations can be given of this restriction, both stemming from Pauly's Representation Theorem. We identify such restrictions and axiomatize their logic. We then apply the formal tools to reason about games and their regulation.
['Jan M. Broersen', 'Rosja Mastop', 'John-Jules Ch. Meyer', 'Paolo Turrini']
Determining the environment: a modal logic for closed interaction
264,786
Link adaptation techniques are important in modern and future wireless communication systems for coping with quality-of-service fluctuations in fading channels. These techniques require knowledge of the channel state, obtained by devoting a portion of resources to channel estimation instead of data and updated every coherence time of the process to be tracked. In this paper, we analyze fast and slow adaptive modulation systems with diversity and non-ideal channel estimation under energy constraints. The framework enables us to address the following questions: (i) What is the impact of non-ideal channel estimation on fast and slow adaptive modulation systems? (ii) How can a proper figure of merit be defined that considers both the resources dedicated to data and those dedicated to channel estimation? (iii) Do fast adaptive techniques always outperform slow adaptive techniques? Our analysis shows that, despite its lower complexity and feedback rate, slow adaptive modulation (SAM) can achieve higher spectral efficiency than fast adaptive modulation (FAM) in the presence of energy constraints, diversity, and non-ideal channel estimation. In addition, SAM satisfies bit error outage requirements even in the FAM-denied region.
['Laura Toni', 'Andrea Conti']
Does Fast Adaptive Modulation Always Outperform Slow Adaptive Modulation?
536,575
The focus on situation-aware ubiquitous computing has increased lately. An example of situation-aware applications is a multimedia education system. Since ubiquitous applications need situation-aware middleware services and the computing environment keeps changing as the applications change, it is challenging to detect and recover from errors in order to provide seamless services and avoid a single point of failure. This paper proposes an Adaptive Fault Tolerance Agent (AFTA) in a situation-aware middleware framework and presents a simulation model of AFT-based agents. The strong point of this system is that it detects and recovers from errors automatically in case a session's process is terminated by a software error.
['Soongohn Kim', 'Eung Nam Ko']
An Adaptive Fault-Tolerance Agent Running on Situation-Aware Environment
79,003
Unitary gates are interesting resources for quantum communication in part because they are always invertible and are intrinsically bidirectional. This paper explores these two symmetries: time-reversal and exchange of Alice and Bob. We present examples of unitary gates that exhibit dramatic separations between forward and backward capacities (even when the back communication is assisted by free entanglement) and between entanglement-assisted and unassisted capacities, among many others. Along the way, we will give a general time-reversal rule for relating the capacities of a unitary gate and its inverse that will explain why previous attempts at finding asymmetric capacities failed. Finally, we will see how the ability to erase quantum information and destroy entanglement can be a valuable resource for quantum communication.
['Aram Wettroth Harrow', 'Peter W. Shor']
Time Reversal and Exchange Symmetries of Unitary Gate Capacities
166,008
We introduce a new adaptive Bayesian learning framework, called multiple-stream prior evolution and posterior pooling, for online adaptation of the continuous density hidden Markov model (CDHMM) parameters. Among the three architectures proposed for this framework, we study in detail a specific two-stream system where linear transformations are applied to the mean vectors of the CDHMMs to control the evolution of their prior distribution. This new stream of prior distribution can be combined with another stream of prior distribution evolved without any constraints. In a series of speaker adaptation experiments on continuous Mandarin speech recognition, we show that the new adaptation algorithm achieves fast-adaptation performance similar to that of incremental maximum likelihood linear regression (MLLR) with small amounts of adaptation data, while maintaining the good asymptotic convergence property of our previously proposed quasi-Bayes adaptation algorithms.
['Qiang Huo', 'Bin Ma']
Online adaptive learning of continuous-density hidden Markov models based on multiple-stream prior evolution and posterior pooling
434,303
This paper presents a fuzzy constraint-directed approach to the development of an agent-based business simulator for supporting collaborative learning. The advantages of this approach are twofold: it provides (1) a general-purpose framework for knowledge representation and problem solving in business simulation, and (2) a fully distributed computational model along with a negotiation algorithm that can easily be built to mimic more closely the real-world scenario of collaborative strategic planning and decision making. To demonstrate the usefulness of the proposed framework, we have prototyped a business simulator, MANAGE, and applied it successfully to build a beer game for collaborative learning.
['Chung Cheng Tseng', 'Chung Hsien Lan', 'K.R. Lai']
Modeling Beer Game as Role-Play Collaborative Learning via Fuzzy Constraint-Directed Agent Negotiation
149,159
Alignment of UCD Activities with Neighboring Business Processes in the Enterprise
['Dirk Zimmermann', 'Natalie Woletz', 'Ron Hofer']
Alignment of UCD Activities with Neighboring Business Processes in the Enterprise
922,960
Novel algorithms for block equalization of M-ary phase shift keying (PSK) signals transmitted over multipath fading channels in the presence of an interferent cochannel signal are introduced and analyzed. The algorithms exploit the intrinsic statistical properties of cochannel interference (CCI) in order to mitigate its effects. Both linear and decision feedback equalizers (DFEs) are derived under the assumption that the overall channel impulse responses of both the useful and the interferent signal are known. Simulation results show that: (a) whereas zero-forcing block equalizers yield a large noise enhancement effect, a minimum mean-square error block DFE (MMSE-BDFE) can efficiently compensate for the distortion in the useful channel and reduce the effect of CCI at the same time, and (b) the MMSE-BDFEs outperform conventional DFEs, at least in the idealized conditions of our analysis.
['Alberto Ginesi', 'Giorgio Matteo Vitetta', 'David D. Falconer']
Block channel equalization in the presence of a cochannel interferent signal
436,353
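A minimal numerical sketch of the linear branch of this idea: a block MMSE equalizer that folds the known interferer channel into the disturbance covariance instead of treating CCI as white noise. The channels, block length, and QPSK signaling below are assumptions, and the decision-feedback stage of the MMSE-BDFE is omitted.

```python
# Sketch: block linear MMSE equalization with a known cochannel interferer.
import numpy as np

rng = np.random.default_rng(0)

def conv_matrix(h, n):
    """(n + len(h) - 1) x n tall Toeplitz convolution matrix for channel h."""
    H = np.zeros((n + len(h) - 1, n), dtype=complex)
    for i, tap in enumerate(h):
        H[np.arange(n) + i, np.arange(n)] = tap
    return H

N = 32                                     # symbols per block (assumed)
h = np.array([1.0, 0.5 + 0.3j, 0.2])       # useful channel (assumed known)
g = np.array([0.6, -0.4j])                 # interferer channel (assumed known)
sigma2 = 0.05                              # noise variance (assumed)

H = conv_matrix(h, N)                      # (N + 2) x N
G = conv_matrix(g, N + 1)                  # (N + 2) x (N + 1): same row count

qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
interf = (rng.choice([-1, 1], N + 1) + 1j * rng.choice([-1, 1], N + 1)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(H.shape[0])
                               + 1j * rng.standard_normal(H.shape[0]))
y = H @ qpsk + G @ interf + noise

# MMSE: the disturbance covariance includes the interferer's term, which is
# how CCI statistics are exploited rather than being treated as white noise.
R = H @ H.conj().T + G @ G.conj().T + sigma2 * np.eye(H.shape[0])
s_hat = H.conj().T @ np.linalg.solve(R, y)

ser = np.mean(np.sign(s_hat.real) != np.sign(qpsk.real))
print("per-block (real-part) symbol error rate:", ser)
```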
Due to the rapidly increasing availability of audio files on the Web, it is relevant to augment search engines with advanced audio search functionality. In this context, the ranking of the retrieved music is an important issue. This paper proposes a music ranking method capable of flexibly fusing the music based on its relevance and importance. The fusion is controlled by a single parameter, which can be intuitively tuned by the user. The notion of authoritative music among relevant music is introduced, and social media mined from the Web is used in an innovative manner to determine both the relevance and importance of music. The proposed method may support users with diverse needs when searching for music.
['Maria Magdalena Ruxanda', 'Alexandros Nanopoulos', 'Christian S. Jensen', 'Yannis Manolopoulos']
Ranking music data by relevance and importance
451,806
This letter considers localization using multiple-input multiple-output (MIMO) systems, configured with multiple transmit and receive sensors, widely distributed in a three-dimensional space. The placement of antennas is explored when the receiver hardware has varying noise quality. Cramer–Rao Lower Bounds are optimized to find the antenna placements, where it is shown that a symmetric configuration of transmitting and different quality receiving sensors around an emitter is optimal.
['Vaneet Aggarwal', 'Lauren M. Huie']
Antenna Placement for MIMO Localization Systems With Varying Quality of Receiver Hardware Elements
906,530
We consider stochastic optimization problems in which the input probability distribution is not fully known, and can only be observed through data. Common procedures handle such problems by optimizing an empirical counterpart, namely via using an empirical distribution of the input. The optimal solutions obtained through such procedures are hence subject to uncertainty of the data. In this paper, we explore techniques to quantify this uncertainty that have potentially good finite-sample performance. We consider three approaches: the empirical likelihood method, nonparametric Bayesian approach, and the bootstrap approach. They are designed to approximate the confidence intervals or posterior distributions of the optimal values or the optimality gaps. We present computational procedures for each of the approaches and discuss their relative benefits. A numerical example on conditional value-at-risk is used to demonstrate these methods.
['Henry Lam', 'Enlu Zhou']
Quantifying uncertainty in sample average approximation
653,177
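Of the three approaches, the bootstrap is the quickest to sketch. Below, the SAA optimal value of a CVaR minimization, whose minimizer is the empirical alpha-quantile by the Rockafellar-Uryasev formulation, is re-solved on resampled data to form a percentile confidence interval. The loss distribution and level are illustrative choices, and the paper's empirical-likelihood and Bayesian alternatives are not shown.

```python
# Sketch: bootstrap CI for the optimal value of a CVaR sample average
# approximation, min_x x + E[(L - x)+] / (1 - alpha).
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9
data = rng.lognormal(0.0, 1.0, size=500)        # observed losses (assumed)

def saa_cvar(sample):
    """SAA optimal value; the minimizer is the empirical alpha-quantile."""
    x = np.quantile(sample, alpha)
    return x + np.maximum(sample - x, 0).mean() / (1 - alpha)

point = saa_cvar(data)

B = 2000                                        # bootstrap replications
boot = np.array([saa_cvar(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])       # percentile interval

print(f"SAA CVaR estimate: {point:.3f}")
print(f"95% bootstrap CI:  [{lo:.3f}, {hi:.3f}]")
```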
We propose DiaWear, a novel assistive mobile phone-based calorie monitoring system to improve the quality of life of diabetes patients and individuals with unique nutrition management needs. Our goal is to achieve improved daily semi-automatic food recognition using a mobile wearable cell phone. DiaWear currently uses a neural network classification scheme to identify food items from a captured image. It is difficult to account for the varying and implicit nature of certain foods using traditional image recognition techniques. To overcome these limitations, we introduce the role of the mobile phone as a platform to gather contextual information from the user and system in obtaining better food recognition.
['Geeta Shroff', 'Asim Smailagic', 'Daniel P. Siewiorek']
Wearable context-aware food recognition for calorie monitoring
924,513
Signcryption is one of the most recent public key paradigms that fulfills both the confidentiality and authenticity requirements for messages between parties. It works more efficiently, with a cost significantly smaller than that required by the signature-then-encryption technique. In this work, a practically implementable ID-based signcryption scheme using bilinear pairings is presented. The proposed scheme is proved secure under the hardness of the CDH (Computational Diffie-Hellman) assumption in the standard model, without the random oracle model. Performance evaluation of the scheme shows satisfactory results in comparison with other relevant ID-based signcryption schemes. Thus, our scheme is suitable for real-life scenarios where both confidentiality and authenticity are required at low computational cost.
['Arijit Karati', 'G. P. Biswas']
A practical identity based signcryption scheme from bilinear pairing
930,881
In this paper, we address the impact of resource limitations on the operation and performance of the broadcasting and multicasting schemes developed for infrastructureless wireless networks in our earlier studies. These schemes, which provide energy-efficient operation for source-initiated session traffic, were previously studied without fully accounting for such limitations. We discuss the “node-based” nature of the all-wireless medium, and demonstrate that improved performance can be obtained when such properties are exploited by networking algorithms. Our broadcast and multicast algorithms involve the joint choice of transmitter power and tree construction, and thus depart from the conventional approach that makes design choices at each layer separately. We indicate how the impact of limited frequency resources can be addressed. Alternative schemes are developed for frequency assignment, and their performance is compared under different levels of traffic load, while also incorporating the impact of limited transceiver resources. The performance results include the comparison of our algorithms to alternative “link-based” algorithms for broadcasting and multicasting.
['Jeffrey E. Wieselthier', 'Gam D. Nguyen', 'Anthony Ephremides']
Energy-Efficient Multicasting of Session Traffic in Bandwidth- and Transceiver-Limited Wireless Networks
388,779
In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh-Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering with each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences cancel each other when they are overlappingly embedded, which keeps the image in good quality even with a high embedding payload. A location-map-free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram at both ends. This further improves the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme achieves the best performance at moderate-to-high embedding capacity compared with other state-of-the-art schemes.
['Bin Ma', 'Yun Q. Shi']
A Reversible Data Hiding Scheme Based on Code Division Multiplexing
727,576
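The spread-spectrum core of the CDM idea is easy to demonstrate: rows of a Walsh-Hadamard matrix are mutually orthogonal, so several bits can be embedded on top of each other and separated again by correlation. The block, payload, and embedding strength below are assumptions, the paper's multilevel embedding and histogram-shrinking overflow control are not shown, and extraction by the sign of the correlation presumes the embedding strength dominates the host's own correlation with each sequence.

```python
# Sketch: overlapped CDM embedding/extraction with Walsh-Hadamard rows.
import numpy as np
from scipy.linalg import hadamard

n = 8
W = hadamard(n)                      # rows are orthogonal: W @ W.T == n * I
seqs = W[1:4]                        # three spreading sequences (skip all-ones row)

block = np.array([100, 102, 99, 101, 98, 103, 100, 97], dtype=float)
bits = np.array([1, -1, 1])          # payload: one bit per spreading sequence
strength = 4                         # embedding amplitude (assumed)

marked = block + strength * (seqs.T @ bits)   # overlapped embedding of all bits

# Extraction: seqs @ marked = seqs @ block + n * strength * bits; orthogonality
# cancels the other sequences, so only the host term perturbs each bit.
recovered = np.sign(seqs @ marked).astype(int)
assert np.array_equal(recovered, bits)

restored = marked - strength * (seqs.T @ recovered)  # reversible: exact undo
assert np.allclose(restored, block)
print("recovered bits:", recovered, "| host block restored exactly")
```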
This paper proposes a new adaptive SNMPv6 for mobile devices. To improve the utilization of battery and wireless bandwidth and the reliability of trap delivery, the management functions for devices in wireless networks are implemented by combining the standard model with a new adaptive trap scheme. The scheme classifies all trap messages into three categories and merges some non-emergent trap messages into a larger one. On the other hand, very important trap messages are sent with acknowledgement to improve reliability. The combination of these two means increases the efficiency and flexibility of trap packet transport from the agents to the SNMP NMS. The performance analysis shows that the enhanced trap-sending scheme decreases link bandwidth utilization by 15% in a typical scenario without any large increase in processing delay, and that the reliability of important trap delivery is increased with a small add-on overhead.
['Xuejie Li', 'Zhigang Jin', 'Yantai Shu']
Enhanced Adaptive SNMPV6 for Mobile Devices
266,135
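A toy rendering of the trap classification-and-merging policy described above. The class names, thresholds, batching limits, and merged-message format are all assumptions; actual SNMPv6 encoding, session handling, and the acknowledgement protocol are out of scope.

```python
# Sketch: classify traps, send emergent ones immediately, ack important ones,
# and merge ordinary ones into batches to save wireless bandwidth.
import time
from dataclasses import dataclass, field

EMERGENT, IMPORTANT, ORDINARY = 0, 1, 2     # three trap categories (assumed)

@dataclass
class TrapAgent:
    send: callable                          # transport to the NMS (assumed API)
    max_batch: int = 10                     # merge at most this many ordinary traps
    max_wait_s: float = 30.0                # ...or flush after this long
    _buffer: list = field(default_factory=list)
    _first_buffered: float = 0.0

    def emit(self, category, message):
        if category == EMERGENT:
            self.send(message)              # send immediately, never merged
        elif category == IMPORTANT:
            self._send_reliably(message)    # send and wait for an acknowledgement
        else:
            if not self._buffer:
                self._first_buffered = time.monotonic()
            self._buffer.append(message)    # merge non-emergent traps
            if (len(self._buffer) >= self.max_batch or
                    time.monotonic() - self._first_buffered >= self.max_wait_s):
                self.flush()

    def flush(self):
        if self._buffer:
            self.send("MERGED[" + "|".join(self._buffer) + "]")
            self._buffer.clear()

    def _send_reliably(self, message, retries=3):
        for _ in range(retries):            # assume send() returns ack status
            if self.send(message):
                return                      # real scheme would escalate on failure

agent = TrapAgent(send=lambda m: print("->", m) or True)
agent.emit(ORDINARY, "linkUtilHigh")
agent.emit(ORDINARY, "fanSpeedLow")
agent.emit(EMERGENT, "powerFailure")
agent.flush()
```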
We propose and evaluate the performance of an integrated quality-of-service (QoS) aware radio resource management (RRM) framework for LTE uplink access. Unlike related work, our integrated RRM framework jointly takes into account the class of service requirements of the different connections, the constraints on the contiguity in resource block allocation due to the single-carrier frequency division multiple access (SC-FDMA) mechanism adopted for the LTE uplink, and the selection of the appropriate modulation and coding scheme with full integration with the LTE uplink closed-loop fractional power control for proper power allocation. The proposed RRM framework uses closed loop power control and interference limit per cell to provide an autonomous inter-cell interference coordination scheme that is applied locally without the need of exchanging interference related information between neighboring cells. Simulation results show the ability of the proposed framework to meet the QoS of a highly loaded system with four classes of service. The interference-aware scheme also manages to limit interference and provides fair resource sharing between cell-edge and cell-center users.
['Amira Afifi', 'Khaled M. F. Elsayed', 'Ahmed A. F. Khattab']
Interference-aware radio resource management framework for the 3GPP LTE uplink with QoS constraints
74,528
Let $H = \{0, \frac{1}{2}, 1\}$ with the natural order and $p \& q = \max\{p + q - 1, 0\}$ for all $p, q \in H$. It is proved that the category of liminf complete $H$-ordered sets is Cartesian closed. This reveals that there exists a Cartesian closed category consisting of non-frame-valued liminf complete fuzzy ordered sets.
['Min Liu', 'Bin Zhao']
A non-frame valued Cartesian closed category of liminf complete fuzzy orders
847,095
In this paper, we analyze asynchronous and nonlinearly distorted FBMC signals in the downlink of multi-cellular networks. The considered system includes a reference mobile perfectly synchronized with its reference base station (BS) and K interfering BSs. Both synchronization errors and high power amplifier (HPA) distortions are considered, and a theoretical analysis of the interference signal is conducted. On the basis of this analysis, we derive an accurate expression for the bit error rate (BER) in the presence of a frequency-selective channel. In order to reduce the computational complexity of the BER expression, we apply a useful lemma based on the moment generating function of the interference power. Finally, the proposed model is evaluated through computer simulations, which show a perfect match between the developed theoretical model and the simulation results.
['Brahim Elmaroud', 'Mohammed Abbad', 'Driss Aboutajdine']
BER analysis of asynchronous and non linear FBMC based multi-cellular networks
968,975
Evolution of the product manager
['Ellen Chisa']
Evolution of the product manager
534,972
A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multi-processor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently buffer capacities affect the throughput. This requires the derivation of buffer capacities that both result in a satisfaction of the throughput constraint, and also satisfy the constraints on the maximum buffer capacities. Existing exact solutions suffer from the computational complexity that is associated with the required conversion from a cyclo-static dataflow graph to a single-rate dataflow graph. In this paper we present an algorithm, with polynomial computational complexity, that does not require this conversion and that obtains close to minimal buffer capacities. The algorithm is applied to an MP3 play-back application that is mapped on our multi-processor system. For this application, we see that a cyclo-static dataflow model can reduce the buffer capacities by 50% compared to a multi-rate dataflow model.
['Maarten H. Wiggers', 'Marco Jan Gerrit Bekooij', 'Gerard J. M. Smit']
Efficient computation of buffer capacities for cyclo-static dataflow graphs
314,489
Counterexamples explain why a desired temporal logic property fails to hold. The generation of counterexamples is considered to be one of the primary advantages of model checking as a verification technique. Furthermore, when model checking does succeed in verifying a property, there is typically no independently checkable witness that can be used as evidence for the verified property. Previously, we have shown how program transformation techniques can be used for the verification of both safety and liveness properties of reactive systems. However, no counterexamples or witnesses were generated using the described techniques. In this paper, we address this issue. In particular, we show how the program transformation technique distillation can be used to facilitate the construction of counterexamples and witnesses for temporal properties of reactive systems. Example systems which are intended to model mutual exclusion are analysed using these techniques with respect to both safety (mutual exclusion) and liveness (non-starvation), with counterexamples being generated for those properties which do not hold.
['Geoff W. Hamilton']
Generating Counterexamples for Model Checking by Transformation
822,771
This paper examines how social-movement-type political interactions between conflicting parties within an organization influence the adoption of a hybrid practice. We argue that a hybrid practice is likely to be adopted when power balance between challengers and incumbents is achieved. To shed light on conditions for organizational settlement based on such power balance, we focus on three factors: structures, actors, and processes of social-movement-type political interactions within organizations. By studying changes in the presidential selection systems of Korean universities between 1988 and 2006, this paper illustrates how organizational settlement resulted in the adoption of a hybrid system by combining elements of two previous competing presidential selection systems—appointment and direct voting systems. The general implications for the understanding of hybridization, organizational settlement, and organizational heterogeneity are discussed.
['Tai-Young Kim', 'Dongyoub Shin', 'Young-Chul Jeong']
Inside the “Hybrid” Iron Cage: Political Origins of Hybridization
689,516
Purpose – The purpose of this paper is to propose an efficient method, called kinodynamic velocity obstacle (KidVO), for motion planning of omnimobile robots considering kinematic and dynamic constraints (KDCs). Design/methodology/approach – The suggested method improves the generalized velocity obstacle (GVO) approach by a systematic selection of a proper time horizon. The selection procedure for the time horizon is based on the kinematical and dynamical restrictions of the robot. Toward this aim, an omnimobile robot with a general geometry is taken into account, and the admissible velocity and acceleration cones reflecting the KDCs are derived. To prove the advantages of the suggested planning method, its performance is compared with GVO, the so-called Hamilton-Jacobi-Bellman equation, and the rapidly exploring random tree. Findings – The obtained results for the presented scenarios, which contain both computer and real-world experiments in complicated crowded environments, indicate the merits of the suggested...
['Mostafa Mahmoodi', 'Khalil Alipour', 'Hadi Mohammadi']
KidVO: a kinodynamically consistent algorithm for online motion planning in dynamic environments
685,833
Designing an overlay network for publish/subscribe communication in a system where nodes may subscribe to many different topics of interest is of fundamental importance. For scalability and efficiency, it is important to keep the degree of the nodes in the publish/subscribe system low. It is only natural then to formalize the following problem: Given a collection of nodes and their topic subscriptions, connect the nodes into a graph that has least possible maximum degree in such a way that for each topic t, the graph induced by the nodes interested in t is connected. We present the first polynomial-time logarithmic approximation algorithm for this problem and prove an almost tight lower bound on the approximation ratio. Our experimental results show that our algorithm drastically improves the maximum degree of publish/subscribe overlay systems. We also propose a variation of the problem by enforcing that each topic-connected overlay network be of constant diameter while keeping the average degree low. We present three heuristics for this problem that guarantee that each topic-connected overlay network will be of diameter 2 and that aim at keeping the overall average node degree low. Our experimental results validate our algorithms, showing that our algorithms are able to achieve very low diameter without increasing the average degree by much.
['Melih Onus', 'Andréa W. Richa']
Minimum maximum-degree publish-subscribe overlay network design
61,998
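A greedy sketch of topic-connected overlay construction: repeatedly add the edge that most reduces the total number of per-topic connected components, preferring low-degree endpoints as a tie-break. This mirrors the classic greedy-merge heuristic; the paper's logarithmic approximation for minimum maximum degree selects edges more carefully, so treat this only as an illustration of the problem.

```python
# Sketch: build an overlay in which each topic's induced subgraph is connected.
import itertools
from collections import defaultdict

def topic_components(edges, interest):
    """Total number of connected components, summed over topics."""
    total = 0
    for members in interest.values():
        members = set(members)
        adj = defaultdict(set)
        for u, v in edges:
            if u in members and v in members:
                adj[u].add(v); adj[v].add(u)
        seen = set()
        for start in members:          # depth-first count of components
            if start in seen:
                continue
            total += 1
            stack = [start]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
    return total

def build_overlay(nodes, interest):
    edges, deg = set(), defaultdict(int)
    while topic_components(edges, interest) > len(interest):
        cur, best = topic_components(edges, interest), None
        for u, v in itertools.combinations(nodes, 2):
            if (u, v) in edges:
                continue
            gain = cur - topic_components(edges | {(u, v)}, interest)
            key = (gain, -(deg[u] + deg[v]))    # prefer low-degree endpoints
            if gain > 0 and (best is None or key > best[0]):
                best = (key, (u, v))
        _, (u, v) = best
        edges.add((u, v)); deg[u] += 1; deg[v] += 1
    return edges

nodes = list(range(6))
interest = {"t1": [0, 1, 2], "t2": [2, 3, 4], "t3": [0, 4, 5]}
overlay = build_overlay(nodes, interest)
print("edges:", sorted(overlay), "| max degree:",
      max(sum(n in e for e in overlay) for n in nodes))
```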
During the development of Beyond-Sniff, a distributed multi-user development platform, we were confronted with various, apparently unrelated problems: data, control, and user interface integration of distributed components, system configuration, user specific preferences, etc. Undoubtedly, it is not trivial to find solutions for such issues, but C++ makes it even more challenging due to its static nature and insufficient meta-information. To overcome these shortcomings, we implemented a small and powerful framework called Any. The Any framework augments C++ with a flexible, dynamic, garbage-collected data representation mechanism. It serves as a language-independent data integration vehicle and provides data management and declarative retrieval facilities.
['Kai-Uwe Maetzel', 'Walter R. Bischofberger']
The any framework: a pragmatic approach to flexibility
117,939
Analysis of Some Database Schemas Used to Evaluate Natural Language Interfaces to Databases
['Rogelio Florencia-Juárez', 'J B Juan González', 'A R Rodolfo Pazos', 'A F José Martínez', 'María Lucila Morales-Rodríguez']
Analysis of Some Database Schemas Used to Evaluate Natural Language Interfaces to Databases
801,289
Two/Too Simple Adaptations of Word2Vec for Syntax Problems.
['Wang Ling', 'Chris Dyer', 'Alan W. Black', 'Isabel Trancoso']
Two/Too Simple Adaptations of Word2Vec for Syntax Problems.
684,894
System-level and Platform-based design, along with Transaction Level modeling (TLM) techniques and languages like SystemC, appeared as a response to the ever increasing complexity of electronics systems design, where complex SoCs composed of several modules integrated on the same chip have become very common. In this scenario, the exploration and verification of several architecture models early in the design flow has played an important role. This paper proposes a mechanism that relies on computational reflection to enable designers to interact, on the fly, with platform simulation models written in SystemC TLM. This allows them to monitor and change signals or even IP internal register values, thus injecting specific stimuli that guide the simulation flow through corner cases during platform debugging, which are usually hard to handle by standard techniques, thus improving functional coverage. The key advantages of our approach are that we do not require code instrumentation from the IP designer, do not need a specialized SystemC library, and not even need the IP source code to be able to inspect its contents. The reflection mechanism was implemented using a C++ reflection library and integrated into a platform modeling framework. We evaluate our technique through some platform case studies.
['Bruno Albertini', 'Sandro Rigo', 'Guido Araujo', 'Cristiano C. de Araujo', 'Edna Barros', 'Willians Azevedo']
A computational reflection mechanism to support platform debugging in SystemC
131,313
Artifacts such as cartoons contain explicit and implicit evidence of the geography of war. As such, they can offer political, reactive and personal perspectives that are not directly represented in conventional war maps. Maps and cartoons can complement each other in providing a more complete window into war geography. Cartoons relating to the Gallipoli campaign of 1915 were collated and coded for three classes, each of which contained a number of categories: a) the perspective (propaganda, satire, personal); b) the type of geographic evidence embodied in them (text, map, graphic, symbol, metaphor); and c) the country of origin. Category counts and correlation analysis were used to identify associations between category classes and between categories. It was found that Australian and Turkish cartoons share a distinctive pattern of characteristics, that embedded maps are a common feature of propaganda cartoons, and that graphics are associated with personal and satirical cartoons. Satirical cartoons also employ metaphor. Associations among categories within classes are also found, for example, symbolism and metaphor are positively correlated while propaganda is negatively correlated with satirical and personal perspectives. It was reasoned that these patterns emerge through various imperatives, including a political need to deploy a geographic shorthand (i.e. maps) to convey complex geographic concepts, a personal literal rendering of the war environment (i.e. through graphics) and the professional cartoonist’s use of symbolism and metaphor to communicate complex concepts.
['Antoni Moore', 'William Cartwright', 'Christina L. Hulbe']
Geographic Content Analysis of the Cartoons of Gallipoli 1915
562,705
This paper describes a method of measuring the shape of solder bumps arrayed on an LSI package board presented for inspection, based on the shape-from-focus technique. We used a copper-alloy mirror deformed by a piezoelectric actuator as a varifocal mirror to build a simple yet fast focusing mechanism. The varifocal mirror was situated at the focal point of the image-taking lens in image space so that lateral magnification was constant during focusing and orthographic projection was perfectly established. A focused plane could be shifted along the optical axis with a precision of 1.4 µm in a depth range of 1.5 mm by driving the varifocal mirror. A magnification of 1.97 was maintained during focusing. Evaluating the curvature of field and removing its effect from the depth data reduced errors. The shape of 208 solder bumps 260 µm high arrayed at a pitch of 500 µm on the board was measured. The entire 10 mm x 10 mm board was segmented into 3 x 4 partly overlapping sections. We captured 101 images in each section with a high-resolution camera at different focal points at 15-µm intervals. The shape of almost the entire upper hemisphere of a solder bump could be measured. Errors in measuring the bump heights were less than 12 µm. Keywords: Visual inspection, Shape from focus, LSI package, Solder bump, Varifocal mirror
['Akira Ishii', 'Jun Mitsudo']
Shape measurement of solder bumps by shape-from-focus using varifocal mirror
657,330
In this paper, a new call admission control scheme is proposed for the Long Term Evolution (LTE) network. Calls are classified into new calls (NCs) and handoff calls (HCs). The scheme gives priority to handoff calls in admission without totally neglecting new calls. The proposed scheme guarantees quality of service and prevents network congestion. Simulation results show that our call admission control scheme increases session establishment success and resource utilization.
['Radhia Khdhir', 'Kais Mnif', 'Aymen Belghith', 'Lotfi Kamoun']
An efficient call admission control scheme for LTE and LTE-A networks
935,205
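The abstract does not spell out the admission rule, so what follows is only a hedged sketch of the classic cutoff-priority (guard channel) policy, one standard way to prioritize handoff calls without totally neglecting new calls; the capacity and threshold values are illustrative assumptions.

```python
CAPACITY = 100   # total resource units in the cell (assumed value)
GUARD = 10       # units reserved exclusively for handoff calls (assumed value)

def admit(call_type: str, in_use: int) -> bool:
    """Return True if the arriving call is admitted at the current occupancy."""
    if call_type == "handoff":
        return in_use < CAPACITY            # HCs may occupy every unit
    if call_type == "new":
        return in_use < CAPACITY - GUARD    # NCs are kept out of the guard band
    raise ValueError(f"unknown call type: {call_type}")

print(admit("new", 92), admit("handoff", 92))   # False True
```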
The Painleve test is very useful for constructing not only Laurent-series solutions but also elliptic and trigonometric ones. Such single-valued functions are solutions of certain polynomial first-order differential equations. To find the elliptic solutions we transform the initial nonlinear differential equation into a nonlinear algebraic system for the parameters of the Laurent-series solutions of the initial equation. The number of unknowns in the obtained nonlinear system does not depend on the number of arbitrary coefficients of the first-order equation used. In this paper we describe the corresponding algorithm, which has been realized in REDUCE and Maple.
['S. Yu. Vernov']
Construction of Single-valued Solutions for Nonintegrable Systems with the Help of the Painleve Test
226,668
In a Mobile Software Ecosystem (MSECO), the central organization (keystone) must restructure its processes to help external developers produce mobile applications. External developers help the keystone reach goals such as growing the number of mobile applications. However, there is no process in this context that supports developers in development aligned with the keystone's goals. This paper presents MSECO-DEV, a process to support external developers in reaching the keystone's goals by developing mobile applications. MSECO-DEV comprises 8 activities, 7 artifacts, 8 recommendations, and 17 practices. The activities, recommendations, and practices were evaluated by 65 Brazilian developers (experts and novices) who act within the main MSECOs (Android, iOS and Windows Phone), in order to assess their benefits for the mobile application development routine. As a result, we found that developers have difficulty performing marketing activities, as well as finding materials that support development. The practices, activities, and recommendations were also evolved and adjusted in the definition of MSECO-DEV.
['Awdren de Lima Fontão', 'Rodrigo Pereira dos Santos', 'Jackson Feijó Filho', 'Arilo Claudio Dias-Neto']
MSECO-DEV: Application Development Process in Mobile Software Ecosystems
879,507
British Journal of Educational Technology, Early View (Online Version of Record published before inclusion in an issue)
['Vanessa G. Felix', 'Luis Mena', 'Rodolfo Ostos', 'Gladys E. Maestre']
A pilot study of the use of emerging computer technologies to improve the effectiveness of reading and writing therapies in children with Down syndrome
647,861
This poster will present our experiences using FindBugs in production software development environments, including both open source efforts and Google's internal code base. We summarize the defects found, describe the issue of real but trivial defects, and discuss the integration of FindBugs into Google's Mondrian code review system.
['Nathaniel Ayewah', 'William Pugh', 'J. David Morgenthaler', 'John Penix', 'YuQian Zhou']
Using FindBugs on production software
145,553
The purpose of this paper is to provide a review of the current state of fuzzy logic theory in epidemiology, which is a recent area of research. We present four applications of fuzzy logic theory to epidemic problems, using linguistic fuzzy models, possibility measures, probabilities of fuzzy events, and fuzzy decision-making techniques. The results demonstrate that the application of fuzzy sets in epidemiology is a very promising area of research. The final discussion sets the stage for future applications of fuzzy sets in epidemiology.
['Eduardo Massad', 'Neli Regina Siqueira Ortega', 'Claudio J. Struchiner', 'Marcelo Nascimento Burattini']
Fuzzy epidemics
816,228
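As one concrete example of the machinery surveyed here, a minimal sketch of Zadeh's probability of a fuzzy event, P(A) = sum_x mu_A(x) * p(x); the membership function for a fuzzy "infected" state and the probabilities are illustrative, not taken from the paper.

```python
# Discretized "viral load" levels, their probabilities, and a fuzzy membership
# function for the event "infected" (all values illustrative).
levels = [0, 1, 2, 3, 4]
p = [0.40, 0.25, 0.15, 0.12, 0.08]          # probability of each level
mu_infected = [0.0, 0.2, 0.5, 0.8, 1.0]     # degree of membership in "infected"

# Zadeh's probability of the fuzzy event: P(A) = sum_x mu_A(x) * p(x)
prob = sum(m * q for m, q in zip(mu_infected, p))
print(f"P(infected) = {prob:.3f}")          # -> P(infected) = 0.301
```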
Constrained Group Testing to Predict Binding Response of Candidate Compounds.
['Paul Quint', 'Stephen D. Scott', 'N. V. Vinodchandran', 'Brad Worley']
Constrained Group Testing to Predict Binding Response of Candidate Compounds.
867,006
Knowledge management (KM) is now widely recognized to be important to the success or failure of business management. Seeking to better understand the determinants of the evolution of KM, this study focuses on two main problems: (1) whether firms change their KM processes over time to improve KM effectiveness as well as develop their KM practices, and (2) whether socio-technical support results in more mature KM practices. This study draws on the previous literature to identify key dimensions of KM process (knowledge acquisition, knowledge conversion, knowledge application and knowledge protection), KM effectiveness (individual-level and organizational-level KM effectiveness) and socio-technical support (organizational support and information technology diffusion). The evolution of these dimensions is studied in the form of a stage model of KM that includes initiation, development, and mature stages. Data gathered from 141 senior executives in large Taiwanese organizations were employed to test the propositions. The results show that different stages of KM evolution can be distinguished across dimensions of KM process, KM effectiveness, and socio-technical support. Implications for organizations are also discussed.
['Hsiu-Fen Lin']
A stage model of knowledge management: an empirical investigation of process and effectiveness
84,564
In this paper, the performance of different generative methods for the classification of cervical nuclei is compared in order to detect cancer of the cervix. These methods include classical Bayesian approaches, such as Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and Mixture Discriminant Analysis (MDA), as well as a recently developed high-dimensional approach (HDDA). The classification of cervical nuclei presents two main statistical issues, scarce samples and high-dimensional data, which impact the ability to successfully discriminate the different classes. This paper presents an approach to face the problems of unbalanced data and high dimensions in the context of cervical cancer detection.
['Charles Bouveyron', 'Camille Brunet', 'Vincent Vigneron']
Classification of high dimensional data for cervical cancer detection
799,254
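A hedged sketch of the comparison the paper performs, using scikit-learn's LDA and QDA on synthetic data that mimics the scarce-sample, high-dimensional setting; the features stand in for nuclei descriptors, which the abstract does not specify.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# few samples, many features: the scarce/high-dimensional setting of the paper
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)),    # class 0: normal nuclei
               rng.normal(0.5, 1.5, (30, 50))])   # class 1: abnormal nuclei
y = np.array([0] * 30 + [1] * 30)

models = [LinearDiscriminantAnalysis(),
          QuadraticDiscriminantAnalysis(reg_param=0.1)]  # regularized for n < p
for model in models:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(type(model).__name__, f"CV accuracy ~ {acc:.2f}")
```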
Natural language texts often refer to complex situations. Many of these situations involve relationships between people's goals. In order to build a program that understands texts, it is necessary to give the program knowledge about goal relationships and the situations to which they give rise. This knowledge constitutes a theory of planning about real world situations. We have incorporated this theory of planning into a program called PAM (Plan Applier Mechanism). As a result we now have a natural language processing system that can comprehend many complicated dramatic situations.
['Robert Wilensky']
Understanding complex situations
614,257
Unlike recent works by Blom and Dunham on simple substitution ciphers, we do not consider equivocations (conditional entropies given the cryptogram) but rather the probability that the enemy makes an error when he tries to decipher the cryptogram or to identify the key by means of optimal identification procedures. This approach is suggested by the usual approach to coding problems taken in Shannon theory, where one evaluates error probabilities with respect to optimal encoding-decoding procedures. The main results are asymptotic; the same relevant parameters are obtained as in Blom or Dunham.
['Andrea Sgarro']
Error probabilities for simple substitution ciphers
410,264
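A toy sketch of the quantity studied here: the error probability of an enemy who guesses the key of a simple substitution cipher by the optimal maximum-a-posteriori (MAP) rule. The three-letter alphabet, source distribution, and cryptogram length are illustrative.

```python
from itertools import permutations, product

ALPHA = "abc"                              # toy alphabet
P = {"a": 0.6, "b": 0.3, "c": 0.1}         # i.i.d. plaintext letter probabilities
KEYS = list(permutations(ALPHA))           # all 3! substitution keys, uniform prior
N = 4                                      # cryptogram length

def p_given_key(cryptogram, key):
    """P(cryptogram | key): probability of the unique plaintext mapping to it."""
    inv = {ct: pt for pt, ct in zip(ALPHA, key)}
    prob = 1.0
    for ch in cryptogram:
        prob *= P[inv[ch]]
    return prob

# Optimal (MAP) key identification: P(error) = 1 - sum_c max_k P(c|k) P(k)
p_correct = sum(max(p_given_key(c, k) for k in KEYS) / len(KEYS)
                for c in product(ALPHA, repeat=N))
print(f"optimal key-identification error probability ~ {1 - p_correct:.3f}")
```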
This paper deals with the CORSIM (Corridor Simulator) interval-based simulation model and its application to traffic operations analysis. It explains how, in CORSIM, each vehicle is a distinct object whose kinematic properties and status are updated every second. The authors propose that driving behavior, represented by speed distributions over different driver types, should be built into CORSIM in order to reflect more realistic patterns of driving behavior. Simulation analysis is used to demonstrate the impact of the driver-type distribution on freeway capacity.
['Steven I. J. Chien', 'Kyriacos Mouskos', 'Shoaib M. Chowdhury']
GENERATING DRIVER POPULATION FOR THE MICROSCOPIC SIMULATION MODEL (CORSIM)
330,173
Declarative Continuations: an Investigation of Duality in Programming Language Semantics
['Andrzej Filinski']
Declarative Continuations: an Investigation of Duality in Programming Language Semantics
459,441
Protein size is an important biochemical feature since longer proteins can harbor more domains and therefore can display more biological functionalities than shorter proteins. We found remarkable differences in protein length, exon structure, and domain count among different phylogenetic lineages. While eukaryotic proteins have an average size of 472 amino acid residues (aa), average protein sizes in plant genomes are smaller than those of animals and fungi. Proteins unique to plants are ∼81 aa shorter than plant proteins conserved among other eukaryotic lineages. The smaller average size of plant proteins could neither be explained by endosymbiosis nor subcellular compartmentation nor exon size, but rather due to exon number. Metazoan proteins are encoded on average by ∼10 exons of small size [∼176 nucleotides (nt)]. Streptophyta have on average only ∼5.7 exons of medium size (∼230 nt). Multicellular species code for large proteins by increasing the exon number, while most unicellular organisms employ rather larger exons (>400 nt). Among subcellular compartments, membrane proteins are the largest (∼520 aa), whereas the smallest proteins correspond to the gene ontology group of ribosome (∼240 aa). Plant genes are encoded by half the number of exons and also contain fewer domains than animal proteins on average. Interestingly, endosymbiotic proteins that migrated to the plant nucleus became larger than their cyanobacterial orthologs. We thus conclude that plants have proteins larger than bacteria but smaller than animals or fungi. Compared to the average of eukaryotic species, plants have ∼34% more but ∼20% smaller proteins. This suggests that photosynthetic organisms are unique and deserve therefore special attention with regard to the evolutionary forces acting on their genomes and proteomes.
['Obed Ramírez-Sánchez', 'Paulino Pérez-Rodríguez', 'Luis Delaye', 'Axel Tiessen']
Plant Proteins Are Smaller Because They Are Encoded by Fewer Exons than Animal Proteins
965,846
Automatic white-box test generation is a challenging problem. Many existing tools rely on complex code analyses and heuristics. As a result, structural features of an input program may impact tool effectiveness in ways that tool users and designers may not expect or understand. We develop a technique that uses structural program metrics to predict the test coverage achieved by three automatic test generation tools. We use coverage and structural metrics extracted from 11 software projects to train several decision tree classifiers. Our experiments show that these classifiers can predict high or low coverage with success rates of 82% to 94%.
['Brett Daniel', 'Marat Boshernitsan']
Predicting Effectiveness of Automatic Testing Tools
263,603
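A hedged sketch of the paper's setup: train a decision tree on structural metrics to predict whether a tool achieves high coverage. The metric names and the tiny dataset are illustrative; the real study extracts metrics and coverage from 11 projects.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# rows: program units; columns: e.g. [LOC, cyclomatic complexity, #branches]
X = np.array([[120, 4, 10], [300, 12, 40], [80, 2, 6],
              [500, 25, 90], [150, 6, 14], [420, 18, 70]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = high coverage achieved, 0 = low

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```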
This paper presents a hybrid Support Vector Machine (SVM) and Particle Swarm Optimization (PSO) model for predicting alpha-particle-emitting contamination on the internal surfaces of decommissioned channels. Six measuring parameters (channel diameter, channel length, distance to the radioactive source, radioactive strength, wind speed and flux) and one ionizing value have been obtained via experiments. These parameters show complex linear and nonlinear relationships to the measured results. The model uses PSO to optimize the SVM parameters. The comparison of the computational results of the hybrid approach with standard BP networks confirms its clear advantage for dealing with this complex nonlinear prediction.
['Mingzhe Liu', 'Xianguo Tuo', 'Jun Ren', 'Zhe Li', 'Lei Wang', 'Jianbo Yang']
A PSO-SVM based model for alpha particle activity prediction inside decommissioned channels
587,613
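A minimal sketch of the hybrid idea, assuming the PSO searches over the SVM hyperparameters (C, gamma) and scores each particle by cross-validation; the swarm settings and synthetic data are illustrative, and SVR stands in for whatever SVM variant the authors used.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 6))          # stand-ins for the six measuring parameters
y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=60)   # stand-in ionizing value

def fitness(p):                        # p = (log10 C, log10 gamma)
    model = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

pos = rng.uniform(-2, 2, size=(10, 2))         # 10 particles in hyperparameter space
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
for _ in range(20):
    g = pbest[pbest_f.argmax()]                # global best particle
    r1, r2 = rng.uniform(size=(2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, -3.0, 3.0)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("best (log10 C, log10 gamma):", pbest[pbest_f.argmax()])
```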
We study a real-world distribution problem arising in the automotive field in which cars, trucks, and other vehicles have to be loaded onto auto-carriers and then delivered to dealers. The solution of the problem involves both the computation of the routing of the auto-carriers along the road network and the determination of a feasible loading for each carrier. We solve the problem by means of an iterated local search algorithm that makes use of several inner local search strategies for the routing part and mathematical modeling and enumeration techniques for the loading part. Extensive computational results on real-world instances show that good savings on the total cost can be obtained within small computational efforts.
["Mauro Dell'Amico", 'Simone Falavigna', 'Manuel Iori']
Optimization of a Real-World Auto-Carrier Transportation Problem
236,117
Distance estimation is of great importance for localization and a variety of applications in wireless sensor networks. In this paper, we develop a simple and efficient method for estimating distances between any pair of neighboring nodes in static wireless sensor networks based on their local connectivity information, namely the numbers of their common one-hop neighbors and non-common one-hop neighbors. The proposed method involves two steps: estimating an intermediate parameter through a Maximum-Likelihood Estimator (MLE) and then mapping this estimate to the associated distance estimate. In the first instance, we present the method assuming that signal transmission satisfies the ideal unit disk model, but we then extend it to the more realistic log-normal shadowing model. Finally, simulation results show that localization algorithms using the distance estimates produced by this method can deliver superior performance in most cases in comparison with the corresponding connectivity-based localization algorithms.
['Baoqi Huang', 'Changbin Yu', 'Brian D. O. Anderson', 'Guoqiang Mao']
Connectivity-Based Distance Estimation in Wireless Sensor Networks
425,534
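A hedged sketch of the unit-disk intuition behind this method: the expected number of common one-hop neighbors of two nodes at distance d is proportional to the overlap area of two radius-r disks, so an observed count can be mapped back to a distance by inverting that area function. The paper's actual estimator is an MLE over both common and non-common neighbor counts; density, radius, and counts below are illustrative.

```python
import math
from scipy.optimize import brentq

def lens_area(d, r):
    """Overlap area of two disks of radius r whose centers are d apart."""
    if d >= 2 * r:
        return 0.0
    return 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)

def estimate_distance(n_common, density, r):
    """Invert E[common neighbors] = density * lens_area(d, r) for d."""
    target = n_common / density
    return brentq(lambda d: lens_area(d, r) - target, 1e-9, 2 * r - 1e-9)

# e.g. 12 common neighbors, 0.05 nodes per unit area, radio range 50 units
print(f"estimated distance: {estimate_distance(12, 0.05, 50.0):.1f}")
```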
User involvement in ontology matching using an online active learning approach
['Booma Sowkarthiga Balasubramani', 'Aynaz Taheri', 'Isabel F. Cruz']
User involvement in ontology matching using an online active learning approach
739,703
Statistical Methodology for Industrial Problems
['Karen Kafadar']
Statistical Methodology for Industrial Problems
705,424
Functional verification is widely recognized as the bottleneck of the hardware design cycle. With the ever-growing demand for greater performance and faster time to market, coupled with the exponential growth in hardware size, verification has become increasingly difficult. Although formal methods such as model checking and theorem proving have resulted in noticeable progress, these approaches apply only to the verification of relatively small design blocks or to very focused verification goals. Current industry practice is to use separate, automatic, random stimuli generators for processor- and multiprocessor-level verification. The generated stimuli, usually in the form of test programs, trigger architecture and microarchitecture events defined by a verification plan. MAC-based algorithms are well suited for the test program generation domain because they postpone heuristic decisions until after consideration of all architectural and testing-knowledge constraints. Genesys-Pro is currently the main test generation tool for functional verification of IBM processors, including several complex processors. We've found that the new language considerably reduces the effort needed to define and maintain knowledge specific to an implementation and verification plan.
['Allon Adir', 'Eli Almog', 'Laurent Fournier', 'Eitan Marcus', 'Michal Rimon', 'Michael Vinov', 'Avi Ziv']
Genesys-Pro: innovations in test program generation for functional processor verification
184,755
Online failure prediction for large-scale software systems is a challenging task. One reason is the complex structure of many, partially interdependent, hardware and software components. State-of-the-art approaches use separate prediction models for parameters of interest or a monolithic prediction model which includes different parameters of all components. However, they have problems when dealing with evolving systems. In this paper, we propose our preliminary research work on online failure prediction targeting large-scale component-based software systems. For the prediction, three complementary types of models are used: (i) an architectural model captures relevant properties of hardware and software components as well as dependencies among them; (ii) for each component, a prediction model captures the current state of the component and predicts independent component failures in the future; (iii) a system-level prediction model represents the current state of the system and, using the component-level prediction models and information on dependencies, allows failures to be predicted and the impacts of architectural system changes to be analyzed for proactive failure management.
['Teerat Pitakrat', 'André van Hoorn', 'Lars Grunske']
Increasing Dependability of Component-Based Software Systems by Online Failure Prediction (Short Paper)
418,040
We consider scheduling problems in the master-slave model. In this model, each job has to be processed sequentially in three stages. In the first stage, a preprocessing task runs on a master machine; in the second stage, a slave task runs on a dedicated slave machine; and, in the last stage, a postprocessing task again runs on a master machine, possibly different from the master machine in the first stage. It has been shown that the problem of minimizing the makespan or the sum of completion times is NP-hard in the strong sense even if preemption is allowed. In this paper, we design efficient approximation algorithms to minimize the sum of completion times in various settings. These are the first general results for the minsum problem in the master-slave model. We also show that these algorithms generate schedules with small makespan as well.
['Joseph Y.-T. Leung', 'Hairong Zhao']
Minimizing sum of completion times and makespan in master-slave systems
456,819
Efficient enumeration of optimal and approximate solutions of a scheduling problem.
['Sergey Sevastyanov', 'Bertrand M. T. Lin']
Efficient enumeration of optimal and approximate solutions of a scheduling problem.
773,758
This paper decomposes a large-scale learning problem into multiple limited-scale pairs of training subsets and cross-validation (CV) subsets. Each training subset consists of its own class together with the most neighboring samples from the other categories. This naturally gives rise to modular multilayer perceptrons (MLPs). If the final decision region of an MLP is open, its real-valued outputs must be corrected. Following fuzzy set theory, a correction coefficient related to the class mean and covariance is added to each MLP output. In addition, weight-increment correction factors are introduced to address the sample-imbalance problems in the training subsets. Results for letter recognition show that the above methods are quite effective.
['Gao Daqi', 'Yang Yunfan']
Fuzzily Modular Multilayer Perceptron Classifiers for Large-Scale Learning Problems
22,423
Over the last two decades, fixed-coefficient FIR filters were generally optimized by minimizing the number of adders required to implement the multiplier block in the transposed direct form filter structure. In this paper, an optimization method for the structural adders in the transposed tapped delay line is proposed. Although additional registers are required, an optimal trade-off can be made such that the overall combinational logic is reduced. For a majority of taps, the delay through the structural adder is shortened, except for the last tap. The one full-adder delay increase for the last optimized tap is tolerable as it does not fall in the critical path in most cases. The criterion under which area reduction is possible is analytically derived, and an area reduction of up to 4.5% for the structural adder block of three benchmark filters is estimated theoretically. The saving is more prominent as the number of taps grows. Actual synthesis results obtained by Synopsys Design Compiler with 0.18 µm TSMC CMOS libraries show a total area reduction of up to 13.13% when combined with common subexpression elimination. In all examples, up to 11.96% of the total area saved was due to the reduction of structural adder costs by our proposed method.
['Mathias Faust', 'Chip Hong Chang']
Optimization of structural adders in fixed coefficient transposed direct form FIR filters
96,002
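A minimal software model of the transposed direct form structure this paper optimizes: each input sample is multiplied by all coefficients at once, and the products flow through the tapped delay line of structural adders and registers. This sketch models functionality only, not the area/delay trade-offs the paper studies.

```python
def transposed_fir(samples, coeffs):
    """Functional model of a transposed direct form FIR (assumes >= 2 taps)."""
    regs = [0.0] * (len(coeffs) - 1)        # registers of the tapped delay line
    out = []
    for x in samples:
        products = [c * x for c in coeffs]  # the multiplier block: all taps at once
        y = products[0] + regs[0]
        # structural adders: each register absorbs the next product downstream;
        # hardware updates all registers concurrently, so read pre-update values
        for i in range(len(regs) - 1):
            regs[i] = products[i + 1] + regs[i + 1]
        regs[-1] = products[-1]
        out.append(y)
    return out

print(transposed_fir([1, 0, 0, 0], [0.25, 0.5, 0.25]))  # impulse -> coefficients
```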
This paper exploits the most recent developments in sparsity approximation and Compressed Sensing (CS) to efficiently perform localization in wireless networks. Based on the spatial sparsity of the mobile devices' distribution, a Bayesian Compressed Sensing (BCS) scheme has been put forward to perform accurate localization. Location estimation is carried out at a network central unit (CU), thus significantly alleviating the burden on mobile devices. Since the CU can observe correlated signals from different mobile devices, the proposed method utilizes the common structure of the received measurements in order to jointly estimate the locations precisely. Moreover, when the number of mobile devices changes, we increase or decrease the measurement number adaptively depending on "error bars" along with preceding reconstruction processes. Simulation shows that the proposed method, i.e. Adaptive Multi-task BCS Localization (AMBL), results in better accuracy in terms of mean localization error compared with traditional localization schemes.
['Yuan Zhang', 'Zhifeng Zhao', 'Honggang Zhang']
Adaptive Bayesian Compressed Sensing based localization in wireless networks
909,090
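A hedged sketch of the CS formulation described here: device locations form a sparse occupancy vector over a discretized grid, the central unit collects random projections, and a sparse solver recovers the occupied cells. Orthogonal Matching Pursuit stands in for the paper's Bayesian solver, and all sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
G, K, M = 100, 3, 30     # grid cells, active devices (sparsity), measurements

theta = np.zeros(G)
theta[rng.choice(G, K, replace=False)] = 1.0   # sparse device occupancy vector
Phi = rng.normal(size=(M, G)) / np.sqrt(M)     # random measurement matrix
ymeas = Phi @ theta + 0.01 * rng.normal(size=M)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
omp.fit(Phi, ymeas)
print("recovered cells:", np.flatnonzero(omp.coef_))
print("true cells:     ", np.flatnonzero(theta))
```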
Argument order as an expectation trigger in Korean.
['Hongoak Yun', 'Uphong Hong', 'Yunju Nam', 'Hyunjung Kim']
Argument order as an expectation trigger in Korean.
749,921
Relatively little work on cloud shadow detection has been published and many of these papers deal with restricted geometries. Here, arbitrary viewing and illumination conditions are considered. A means is provided to extend more restricted treatments of cloud shadow detection and removal to the general case.
['James J. Simpson', 'Zhonghai Jin', 'James R. Stitt']
Cloud shadow detection under arbitrary viewing and illumination conditions
170,691
Novel Online Multi-Divisive Hierarchical Clustering for On-body Sensor Data.
['Ibrahim N. Musa', 'GyeongMin Yi', 'Dong Gyu Lee', 'Myeong-Chan Cho', 'Jang-Whan Bae', 'Keun Ho Ryu']
Novel Online Multi-Divisive Hierarchical Clustering for On-body Sensor Data.
886,957
Balanced Scoring Method for Multiple-mark Questions
['Darya Tarasowa', 'Sören Auer']
Balanced Scoring Method for Multiple-mark Questions
747,285
Although traditionally used as a gesture recognition device, the Kinect has been recently leveraged for user entry control. In this context, a user admission decision is typically based on biometrics such as face, speech, gait and gestures. Despite being a relatively new biometric, gestures have been shown to be a promising authentication modality. These results have been achieved using a single Kinect camera. This paper aims to investigate the potential performance and robustness gains in gesture-based user authentication using multiple Kinects. We study the impact of multiple viewpoints on a dataset of 40 users that contains notable degradations from user memory and personal effects (multiple types of bags and outerwear). We found that two additional viewpoints can provide as much as 26 -- 43% average relative improvement in the Equal Error Rate (EER) for user authentication, and as much as 16 -- 68% average relative improvement in the Correct Classification Error (CCE) compared to using a single centered Kinect camera.
['Jonathan Wu', 'Janusz Konrad', 'Prakash Ishwar']
The Value of Multiple Viewpoints in Gesture-Based User Authentication
540,399
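A minimal sketch of how the reported Equal Error Rate (EER) is computed from genuine and impostor similarity scores: sweep a threshold and report the operating point where the false accept and false reject rates coincide. The score arrays are illustrative.

```python
import numpy as np

def eer(genuine, impostor):
    """Sweep thresholds; return the operating point where FAR ~= FRR."""
    best_gap, best_rate = 1.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

genuine = np.array([0.90, 0.80, 0.85, 0.70, 0.95])   # same-user match scores
impostor = np.array([0.30, 0.50, 0.60, 0.75, 0.20])  # cross-user match scores
print(f"EER ~ {eer(genuine, impostor):.2f}")
```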
Over recent decades there has been a growing interest in the question of whether computer programs are capable of genuinely creative activity. Although this notion can be explored as a purely philosophical debate, an alternative perspective is to consider what aspects of the behaviour of a program might be noted or measured in order to arrive at an empirically supported judgement that creativity has occurred. We sketch out, in general abstract terms, what goes on when a potentially creative program is constructed and run, and list some of the relationships (for example, between input and output) which might contribute to a decision about creativity. Specifically, we list a number of criteria which might indicate interesting properties of a program's behaviour, from the perspective of possible creativity. We go on to review some ways in which these criteria have been applied to actual implementations, and some possible improvements to this way of assessing creativity.
['Graeme Ritchie']
Some Empirical Criteria for Attributing Creativity to a Computer Program
524,678
Manual pocket depth probing has been widely used as a retrospective diagnosis method in periodontics. However, numerous studies have questioned its ability to accurately measure the anatomic pocket depth. In this paper, an ultrasonic periodontal probing method is described, which involves using a hollow water-filled probe to focus a narrow beam of ultrasound energy into and out of the periodontal pocket, followed by automatic processing of pulse-echo signals to obtain the periodontal pocket depth. The signal processing algorithm consists of three steps: peak detection/characterization, peak classification, and peak identification. A dynamic wavelet fingerprint (DWFP) technique is first applied to detect suspected scatterers in the A-scan signal and generate a two-dimensional black-and-white pattern to characterize the local transient signal corresponding to each scatterer. These DWFP patterns are then classified by a two-dimensional FFT procedure and mapped to an inclination index curve. The location of the pocket bottom is identified as the third broad peak in the inclination index curve. The algorithm is tested on full-mouth probing data from two sequential visits of 14 patients. Its performance is evaluated by comparing ultrasonic probing results with those of full-mouth manual probing at the same sites, which is taken as the "gold standard."
['Jidong Hou', 'S. Timothy Rose', 'Mark K. Hinders']
Ultrasonic periodontal probing based on the dynamic wavelet fingerprint
405,074
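A hedged sketch of the first step of this pipeline, peak detection in a pulse-echo A-scan; scipy's find_peaks stands in for the DWFP technique, and the synthetic echo train is illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0.0, 1.0, 2000)                       # time axis (arbitrary units)
echoes = sum(a * np.exp(-((t - c) / 0.01) ** 2)       # three Gaussian echoes
             for c, a in [(0.2, 1.0), (0.45, 0.6), (0.7, 0.35)])
signal = echoes + 0.02 * np.random.default_rng(0).normal(size=t.size)

peaks, _ = find_peaks(signal, height=0.2, distance=100)
print("echo arrival times:", t[peaks])  # these would feed the classification step
```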
Network Role Analysis in the Study of Food Webs: An Application of Regular Role Coloration
['Jeffrey C. Johnson', 'Stephen P. Borgatti', 'Joseph J. Luczkovich', 'Martin G. Everett']
Network Role Analysis in the Study of Food Webs: An Application of Regular Role Coloration
100,938
Wait a minute! A fast, Cross-VM attack on AES.
['Gorka Irazoqui Apecechea', 'Mehmet Sinan Inci', 'Thomas Eisenbarth', 'Berk Sunar']
Wait a minute! A fast, Cross-VM attack on AES.
791,177