abstract: string (lengths 8–10.1k) · authors: string (lengths 9–1.96k) · title: string (lengths 6–367) · __index_level_0__: int64 (13–1,000k)
In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to use codes to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single piece of content, the methodology developed can be applied to more general settings with multiple content items. The results also offer insightful guidance for the design of storage systems in data centers and content distribution networks.
['Longbo Huang', 'Sameer Pawar', 'Hao Zhang', 'Kannan Ramchandran']
Codes can reduce queueing delay in data centers
483,353
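As a back-of-the-envelope illustration of the replication-versus-coding trade-off (a toy model, not the paper's BoS scheduler or its queueing analysis), one can compare the fork-join download latency of n-fold replication against an (n, k) MDS code via Monte Carlo order statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 4, 2, 100_000

# Replication: the full file is fetched from n servers in parallel;
# latency is the minimum of n i.i.d. Exp(1) service times.
rep = rng.exponential(1.0, size=(trials, n)).min(axis=1)

# (n, k) MDS code: each server holds a 1/k-size chunk (so service is
# k times faster); latency is the k-th order statistic of n draws.
cod = np.sort(rng.exponential(1.0 / k, size=(trials, n)), axis=1)[:, k - 1]

print(f"mean latency  replication: {rep.mean():.3f}  coding: {cod.mean():.3f}")
```

Note that in this static model replication can still win; the up-to-17% gain reported in the abstract arises from queueing dynamics (requests waiting behind busy servers) that this sketch deliberately omits.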
In this paper, an adaptive window-length estimator for channel estimation in direct-sequence (DS) code-division multiple-access (CDMA) receivers is presented. The optimum window length is derived based on autocovariance analysis of the estimated channel coefficients. As the propagation channel changes over time, an adaptive algorithm for updating the window length is proposed through Doppler and signal-to-noise ratio (SNR) estimation. The proposed system improves performance by adaptively selecting and continuously updating the window lengths over a wide range of velocities (5 to 400 km/h).
['Agustian Taufiq Asyhari', 'Ser Wah Oh', 'Kwok Hung Li', 'Kah Chan Teh']
Adaptive Window Length Estimation for Channel Estimation in CDMA Receivers
79,867
We consider quadratic stabilizability and ℌ∞ disturbance attenuation of switched systems which are composed of a finite set of linear time-invariant subsystems. The situation is that none of the subsystems is quadratically stable with a certain ℌ∞ disturbance attenuation level, but a convex combination of the subsystems achieves such performance. We then design a state-dependent switching signal (state feedback) and an output-dependent switching signal (output feedback) such that the entire switched system is quadratically stable with the same ℌ∞ disturbance attenuation level. In the case of state feedback, when the number of subsystems is two, we show that the existence of the desired convex combination of subsystems is not only sufficient but also necessary for quadratic stabilizability with ℌ∞ disturbance attenuation of the switched system.
['Guisheng Zhai']
Quadratic stabilizability and ℌ∞ disturbance attenuation of switched linear systems via state and output feedback
73,803
In practical time-frequency (T-F) analysis of nonstationary signals with the smoothed pseudo Wigner-Ville distribution (SPWV), an important issue is the choice of the smoothing window. To address this problem, a fast sub-optimal technique is proposed, relating the window length to the approximate slope of the frequency modulation. The results obtained by using this automated procedure are comparable to results given by supervised decompositions and are more (locally) adapted. The developed method has been used in the processing of radar returns. In order to objectively assess the performance of the transforms, several measures can be used. For signals with unknown modulation laws, cost functions are available, including measures based on the entropy function. Their application to the evaluation of the obtained T-F distributions is demonstrated.
['Artur Loza', 'Nishan Canagarajah', 'D.R. Bull']
Adaptive SPWV distribution with adjustable volume 2-D separable kernel
941,353
Designing an intelligent situated agent is a difficult task because the designer must see the problem from the agent's viewpoint, considering all its sensors, actuators, and computation systems. In this paper, we introduce a bio-inspired hybridization of reinforcement learning, cooperative co-evolution, and a cultural-inspired memetic algorithm for the automatic development of behavior-based agents. Reinforcement learning is responsible for the individual-level adaptation. Cooperative co-evolution performs at the population level and provides basic decision-making modules for the reinforcement-learning procedure. The culture-based memetic algorithm, which is a new computational interpretation of the meme metaphor, increases the lifetime performance of agents by sharing learning experiences between all agents in the society. In this paper, the design problem is decomposed into two different parts: 1) developing a repertoire of behavior modules and 2) organizing them in the agent's architecture. Our proposed cooperative co-evolutionary approach solves the first problem by evolving behavior modules in their separate genetic pools. We address the problem of relating the fitness of the agent to the fitness of behavior modules by proposing two fitness sharing mechanisms, namely uniform and value-based fitness sharing mechanisms. The organization of behavior modules in the architecture is determined by our structure learning method. A mathematical formulation is provided that shows how to decompose the value of the structure into simpler components. These values are estimated during learning and are used to find the organization of behavior modules during the agent's lifetime. To accelerate the learning process, we introduce a culture-based method based on our new interpretation of the meme metaphor. Our proposed memetic algorithm is a mechanism for sharing learned structures among agents in the society. Lifetime performance of the agent, which is quite important for real-world applications, increases considerably when the memetic algorithm is in action. Finally, we apply our methods to two benchmark problems: an abstract problem and a decentralized multirobot object-lifting task, and we achieve human-competitive architecture designs.
['Amir Massoud Farahmand', 'Majid Nili Ahmadabadi', 'Caro Lucas', 'Babak Nadjar Araabi']
Interaction of Culture-Based Learning and Cooperative Co-Evolution and its Application to Automatic Behavior-Based System Design
51,495
The simplest example of an infinite Burnside group arises in the class of automaton groups. However, there is no known example of such a group generated by a reversible Mealy automaton. It has been proved that, for a connected automaton of size at most 3, or when the automaton is not bireversible, the generated group cannot be Burnside infinite. In this paper, we extend these results to automata with larger state sets, proving that, if a connected reversible automaton has a prime number of states, it cannot generate an infinite Burnside group.
['Thibault Godin', 'Ines Klimann']
Connected reversible Mealy automata of prime size cannot generate infinite Burnside groups
713,889
Unlike traditional two-dimensional texture images, the depth images of three-dimensional (3D) video systems are significantly sparse under certain transform bases, which makes it possible for compressive sensing to represent depth information efficiently. Therefore, in this paper, a novel depth image coding scheme is proposed based on a block compressive sensing method. At the encoder, in view of the characteristics of depth images, the entropy of the pixels in each block is employed to represent the sparsity of the depth signal. Then, according to the differing sparsity in the pixel domain, measurements can be adaptively allocated to each block for higher compression efficiency. At the decoder, the sparse transform is combined to achieve compressive sensing reconstruction. Experimental results show that, at the same sampling rate, the proposed scheme obtains higher PSNR values and better subjective quality of the rendered virtual views, compared with a method using a uniform sampling rate.
['Huihui Bai', 'Mengmeng Zhang', 'Meiqin Liu', 'Anhong Wang', 'Yao Zhao']
Depth Image Coding Using Entropy-Based Adaptive Measurement Allocation
401,393
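The entropy-driven allocation step described above can be sketched as follows (a minimal interpretation, assuming the per-block Shannon entropy of the intensity histogram as the sparsity proxy and proportional rounding; the block size, bin count, and floor `m_min` are illustrative choices, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of the pixel-intensity histogram of a block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def allocate_measurements(img, bsize=16, total=2048, m_min=8):
    """Split img into bsize x bsize blocks and divide `total` measurements
    among them in proportion to each block's entropy (floor of m_min)."""
    H, W = img.shape
    blocks = [img[i:i + bsize, j:j + bsize]
              for i in range(0, H, bsize) for j in range(0, W, bsize)]
    ent = np.array([block_entropy(b) for b in blocks])
    share = ent / ent.sum() if ent.sum() > 0 else np.full(len(blocks), 1 / len(blocks))
    m = np.maximum(m_min, np.round(share * total).astype(int))
    return blocks, m

# Toy depth image: a flat background (low entropy) with a textured patch.
img = np.full((64, 64), 100.0)
img[16:48, 16:48] += rng.integers(0, 80, size=(32, 32))
blocks, m = allocate_measurements(img)
print("measurements per block:\n", m.reshape(4, 4))
```

Flat blocks collapse to the floor allocation, while the textured (high-entropy) blocks absorb most of the measurement budget.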
In an extended image sequence of an outdoor scene, one observes changes in color induced by variations in the spectral composition of daylight. This paper proposes a model for these temporal color changes and explores its use for the analysis of outdoor scenes from time-lapse video data. We show that the time-varying changes in direct sunlight and ambient skylight can be recovered with this model, and that an image sequence can be decomposed into two corresponding components. The decomposition provides access to both radiometric and geometric information about a scene, and we demonstrate how this can be exploited for a variety of visual tasks, including color-constancy, background subtraction, shadow detection, scene reconstruction, and camera geo-location.
['Kalyan Sunkavalli', 'Fabiano Romeiro', 'Wojciech Matusik', 'Todd E. Zickler', 'Hanspeter Pfister']
What do color changes reveal about an outdoor scene?
145,288
This paper proposes a new template model for the simulation of casting defects, which are classified according to shape into three main types: single defects with circular or elliptical shape, shrinkage defects with stochastic discontinuities, and cavity or sponge shrinkage defects. For effective simulation, different nesting stencil plates are designed to reflect the characteristics of different casting defects, including intensity, orientation, size, and shape. The proposed approach also uses geometric diffusion to produce simulated defects with effective shading and contrast against their background. To evaluate the effectiveness of the proposed approach, the simulated casting defects are superposed on real radioscopic images of casting pieces and compared with real defects by extensive visual inspection. In addition, to verify the similarity of the simulated and real defects, we used our defect inspection algorithm to recognize both real and simulated defects in the same image. The experimental results show that the proposed defect simulation approach can produce a large range of simulated casting defects, which can be utilized as sample images to tune the parameters of casting inspection algorithms.
['Qian Huang', 'Yuan Wu', 'John Baruch', 'Ping Jiang', 'Yonghong Peng']
A Template Model for Defect Simulation for Evaluating Nondestructive Testing in X-Radiography
544,879
In this paper, we study the convergence of dynamic fusion algorithms that can be modeled as Linear Time-Varying (LTV) systems with (sub-)stochastic system matrices. Instead of computing the joint spectral radius, we partition the entire set of system matrices into slices, whose lengths characterize the stability (convergence) of the underlying LTV system. We relate the lengths of the slices to the rate of the information flow within the network, and show that fusion is achieved if the unbounded slice lengths grow slower than an explicit exponential rate. As a motivating application, we provide a distributed algorithm to track the positions of an arbitrary number of agents moving in a bounded region of interest. At each iteration, agents update their position estimates as a convex combination of the states of their neighbors if they are able to find enough neighbors, and do not update otherwise. We abstract the corresponding position tracking algorithm as an LTV system, and introduce the notion of slices to provide sufficient conditions under which the system asymptotically converges to the true positions of the agents. We demonstrate the effectiveness of our approach through simulations.
['Sam Safavi', 'Usman A. Khan']
On the convergence of time-varying fusion algorithms: Application to localization in dynamic networks
974,721
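The role of sub-stochastic system matrices in the convergence argument can be illustrated with a minimal LTV averaging sketch (an assumption-laden toy, not the paper's algorithm: random row-stochastic mixing, plus a periodic "anchor contact" that makes one row strictly sub-stochastic, standing in for the slice condition):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 300

# Error dynamics e_{t+1} = A_t e_t.  Each A_t is nonnegative with row
# sums <= 1 (sub-stochastic).  Every 5 steps one agent "hears an anchor",
# making its row sum strictly < 1; this leakage drives the error to zero.
e = rng.normal(size=n)
norms = [np.abs(e).max()]
for t in range(T):
    A = rng.dirichlet(np.ones(n), size=n)   # random row-stochastic mixing
    if t % 5 == 0:                          # periodic anchor contact
        A[(t // 5) % n] *= 0.5              # that row's sum drops to 0.5
    e = A @ e
    norms.append(np.abs(e).max())

print(f"initial error {norms[0]:.3f} -> final error {norms[-1]:.2e}")
```

The infinity norm never increases under a sub-stochastic update, and the periodic strict contraction makes it decay, mirroring the paper's sufficient condition on how often contracting slices must occur.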
This paper presents a novel BigEAR big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user engages in social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the mood of the wearer from various activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant to psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to learn how their conversations affect emotional and social well-being. With state-of-the-art methods, psychologists and their teams have to listen to the audio recordings and make these inferences by subjective evaluation, which is not only time-consuming and costly but also demands manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
['Harishchandra Dubey', 'Matthias R. Mehl', 'Kunal Mankodiya']
BigEAR: Inferring the Ambient and Emotional Correlates from Smartphone-Based Acoustic Big Data
818,447
Over the last decade, sampling-based planners like the Probabilistic Roadmap Method have proved successful in solving complex motion planning problems. We give a reachability-based analysis for these planners which leads to a better understanding of the success of the approach and suggests enhancements of the techniques. This also enables us to study the effect of using new local planners.
['Roland Geraerts', 'Mark H. Overmars']
Reachability Analysis of Sampling Based Planners
339,281
We describe a markerless camera tracking system for augmented reality that operates in environments which contain one or more planes. This is a common special case, which we show significantly simplifies tracking. The result is a practical, reliable, vision-based tracker. Furthermore, the tracked plane imposes a natural reference frame, so that the alignment of the real and virtual coordinate systems is rather simpler than would be the case with a general structure-and-motion system. Multiple planes can be tracked, and additional data such as 2D point tracks are easily incorporated.
['Gilles Simon', 'Andrew W. Fitzgibbon', 'Andrew Zisserman']
Markerless tracking using planar structures in the scene
148,250
We study non-orthogonal spectrum sharing to determine under what circumstances operators can gain by such sharing. To model the spectrum sharing, we use the multiple-input single-output (MISO) interference channel (IC), assuming that the operators transmit in the same band. For the baseline scenario of no sharing, we use the MISO broadcast channel (BC), assuming that the operators transmit in disjoint bands. For both the IC and BC, we give achievable (lower) and upper bounds on the maximum sum-rate. While these bounds are well known, we also propose a new fast algorithm for finding a lower bound on the sum-rate of the BC using linear beamforming. We use the bounds to numerically evaluate the potential gain of non-orthogonal spectrum sharing. In this study we assume that the operators efficiently utilize all their spatial degrees of freedom. We will see that the gains from spectrum sharing under these circumstances are limited.
['Johannes Lindblom', 'Erik G. Larsson']
Does non-orthogonal spectrum sharing in the same cell improve the sum-rate of wireless operators?
224,248
The main result of this paper is a generic composition theorem for low error two-query probabilistically checkable proofs (PCPs). Prior to this work, composition of PCPs was well-understood only in the constant error regime. Existing composition methods in the low error regime were non-modular (i.e., very much tailored to the specific PCPs that were being composed), resulting in complicated constructions of PCPs. Furthermore, until recently, composition in the low error regime suffered from incurring an extra 'consistency' query, resulting in PCPs that are not 'two-query' and hence much less useful for hardness-of-approximation reductions. In a recent breakthrough, Moshkovitz and Raz [In Proc. 49th IEEE Symp. on Foundations of Comp. Science (FOCS), 2008] constructed almost linear-sized low-error 2-query PCPs for every language in NP. Indeed, the main technical component of their construction is a novel composition of certain specific PCPs. We give a modular and simpler proof of their result by repeatedly applying the new composition theorem to known PCP components. To facilitate the new modular composition, we introduce a new variant of PCP, which we call a "decodable PCP (dPCP)". A dPCP is an encoding of an NP witness that is both locally checkable and locally decodable. The dPCP verifier, in addition to verifying the validity of the given proof like a standard PCP verifier, also locally decodes the original NP witness. Our composition is generic in the sense that it works regardless of the way the component PCPs are constructed.
['Irit Dinur', 'Prahladh Harsha']
Composition of Low-Error 2-Query PCPs Using Decodable PCPs
127,980
Multicultural Adaptive Systems.
['Hannu Jaakkola', 'Bernhard Thalheim']
Multicultural Adaptive Systems.
783,055
The Effect of Disrupted Attention on Encoding in Young Children.
['Karrie E. Godwin', 'Anna V. Fisher']
The Effect of Disrupted Attention on Encoding in Young Children.
780,076
The work of the British Aerospace Dependable Computing Systems Centre includes the development of formal techniques for use in defining and tracing requirements for software systems at the system architecture level. A basic repertoire of techniques proposed so far includes the graphical representation of timing requirements allied to model-oriented specifications of functionality. This paper gives an overview of these techniques and reports on a small study of their application conducted by British Aerospace Defence. The study uses a realistic example of an avionics system: the pilot data entry system for a waypoint database. The example is described in some technical detail. Formally analysing a timing requirement for the rate of data entry yields local timing requirements for the cockpit equipment. Conclusions assess the value of these techniques as perceived by BAe systems developers and propose further work on providing tool support.
['Leonor Barroca', 'John S. Fitzgerald', 'L. Spencer']
The architectural specification of an avionic subsystem
121,601
As a new addition to the recursive least squares (RLS) family of filters, the state space recursive least squares (SSRLS) filter can achieve desirable performance by overcoming some limitations of the standard RLS filter. However, when the system is contaminated by non-Gaussian noise, the performance of SSRLS degrades. The main reason is that SSRLS is developed under the well-known minimum mean square error (MMSE) criterion, which is not well suited to non-Gaussian situations. To address this issue, in this paper, we propose a new state space based linear filter, called the state space least p-power (SSLP) filter, which is derived under the least mean p-power error (LMP) criterion instead of the MMSE. With a proper p value, the SSLP can outperform the SSRLS substantially, especially in non-Gaussian noise. Two illustrative examples are presented to show the satisfactory results of the new algorithm.
['Xi Liu', 'Chen B', 'Jiuwen Cao', 'Bin Xu', 'Haiquan Zhao']
State space least p-power filter
964,895
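The LMP criterion underlying the SSLP filter can be sketched in a plain (non-state-space) system-identification setting, a deliberate simplification of the paper's filter; the step size, dimension, true weights, and Student-t noise model are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def lmp_identify(p, mu, n_iter=5000):
    """Identify w_true from noisy samples with the least mean p-power update
       w <- w + mu * p * |e|^(p-1) * sign(e) * x   (p = 2 recovers LMS)."""
    w_true = np.array([0.5, -1.0, 0.8, 0.3])
    w = np.zeros(4)
    for _ in range(n_iter):
        x = rng.normal(size=4)
        noise = rng.standard_t(df=1.5)          # heavy-tailed (impulsive) noise
        d = w_true @ x + 0.1 * noise
        e = d - w @ x
        w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x
    return np.linalg.norm(w - w_true)

err_lms = lmp_identify(p=2.0, mu=0.01)
err_lmp = lmp_identify(p=1.2, mu=0.01)
print(f"weight error  p=2 (LMS): {err_lms:.3f}   p=1.2 (LMP): {err_lmp:.3f}")
```

With p < 2, the update magnitude grows only as |e|^(p-1), so impulsive outliers perturb the weights far less than under the MMSE (p = 2) gradient, which is the intuition behind the SSLP's robustness claim.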
Two notions of "search" can be used to enhance the software engineering process -- the notion of searching for optimal architectures/designs using AI-motivated optimization algorithms, and the notion of searching for reusable components using query-driven search engines. To date these possibilities have largely been explored separately within different communities. In this paper we suggest there is a synergy between the two approaches, and that a hybrid approach which integrates their strengths could be more useful and powerful than either approach individually. After first characterizing the two approaches we discuss some of the opportunities and challenges involved in their synergetic integration.
['Colin Atkinson', 'Marcus Kessel', 'Marcus Schumacher']
On the Synergy between Search-Based and Search-Driven Software Engineering
78,338
This paper is motivated by the fact that mixed integer nonlinear programming is an important and difficult area for which there is a need for developing new methods and software for solving large-scale problems. Moreover, both fundamental building blocks, namely mixed integer linear programming and nonlinear programming, have seen considerable and steady progress in recent years. Wishing to exploit expertise in these areas as well as on previous work in mixed integer nonlinear programming, this work represents the first step in an ongoing and ambitious project within an open-source environment. COIN-OR is our chosen environment for the development of the optimization software. A class of hybrid algorithms, of which branch-and-bound and polyhedral outer approximation are the two extreme cases, are proposed and implemented. Computational results that demonstrate the effectiveness of this framework are reported. Both the library of mixed integer nonlinear problems that exhibit convex continuous relaxations, on which the experiments are carried out, and a version of the software used are publicly available.
['Pierre Bonami', 'Lorenz T. Biegler', 'Andrew R. Conn', 'Gérard Cornuéjols', 'Ignacio E. Grossmann', 'Carl D. Laird', 'Jon Lee', 'Andrea Lodi', 'François Margot', 'Nicolas W. Sawaya', 'Andreas Wächter']
An algorithmic framework for convex mixed integer nonlinear programs
524,423
A Mobile Ad hoc Network (MANET) is receiving great attention from different communities (e.g. military and civil applications) thanks to its self-configuration and self-maintenance potential. Securing a MANET is a critical matter, as it is vulnerable to different attacks and is characterised by no clear line of defence. Since any security solution relies on a particular trust model, there are different types of trust models that could suit MANETs. This paper presents a design of a security architecture based on a hybrid trust model. It consists of a set of servers (a Central Authority Server (CAS), Threshold Authority Servers (TASs) and Delegated Authority Servers (DASs)) emulating certification authorities. Our security architecture caters for improving service availability and utilisation.
['Salaheddin Darwish', 'Simon J. E. Taylor', 'Gheorghita Ghinea']
Security Server-Based Architecture for Mobile Ad Hoc Networks
326,107
Modern live cell fluorescence microscopy imaging systems, used abundantly for studying intra-cellular processes in vivo, generate vast amounts of noisy image data that cannot be processed efficiently and accurately by means of manual or current computerized techniques. We propose an improved tracking method, built within a Bayesian probabilistic framework, which better exploits temporal information and prior knowledge. Experiments on simulated and real fluorescence microscopy image data acquired for microtubule dynamics studies show that the technique is more robust to noise, photobleaching, and object interaction than common tracking methods and yields results that are in good agreement with expert cell biologists.
['Ihor Smal', 'Katharina Draegestein', 'Niels Galjart', 'Wiro J. Niessen', 'Erik Meijering']
Rao-blackwellized marginal particle filtering for multiple object tracking in molecular bioimaging
409,018
In this paper, we propose a new cooperative communication protocol, which achieves high bandwidth efficiency while guaranteeing full diversity order. The proposed scheme considers relay selection via the available partial channel state information (CSI) at the source and the relays. More precisely, the source determines when it needs to cooperate with one relay only among arbitrary N relays and which relay to cooperate with in case of cooperation, i.e., "When to cooperate?" and "Whom to cooperate with?". In case of cooperation, the source employs the optimal relay, which has the maximum instantaneous scaled harmonic mean function of its source-relay and relay-destination channels' gains. For the symmetric scenario, we prove that full diversity is guaranteed and that a significant increase of the bandwidth efficiency is achieved. Furthermore, we show the tradeoff between the achievable bandwidth efficiency and the corresponding error rate. Finally, the obtained analytical results are verified through computer simulations.
['Ahmed S. Ibrahim', 'Ahmed K. Sadek', 'Weifeng Su', 'K.J.R. Liu']
SPC12-5: Relay Selection in Multi-Node Cooperative Communications: When to Cooperate and Whom to Cooperate with?
431,149
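The relay-selection rule can be sketched as follows (using the plain harmonic mean of the two hop gains under Rayleigh fading; the paper's metric is a scaled harmonic mean weighting the two hops differently, and the "when to cooperate" decision from the partial CSI is omitted here):

```python
import numpy as np

rng = np.random.default_rng(4)

def harmonic_mean(a, b):
    """(Unscaled) harmonic mean of the two hop gains; the paper uses a
    scaled variant that weights the source-relay and relay-destination
    hops differently."""
    return 2 * a * b / (a + b)

# Rayleigh fading: channel power gains are exponentially distributed.
n_relays = 5
g_sr = rng.exponential(1.0, n_relays)   # source -> relay gains |h_sr|^2
g_rd = rng.exponential(1.0, n_relays)   # relay -> destination gains |h_rd|^2

metric = harmonic_mean(g_sr, g_rd)
best = int(np.argmax(metric))
print(f"per-relay metric: {np.round(metric, 3)}  ->  choose relay {best}")
```

The harmonic mean is dominated by the weaker hop (it always lies between min(a, b) and 2·min(a, b)), which is why it is a natural proxy for the end-to-end quality of a two-hop relay path.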
Findings from a student survey at a Swedish upper secondary school class concerning the use of mobile phones for school work are presented in this paper. A previous study indicated that a majority of the students did not regard the mobile phone as an appropriate tool for school work at school; however, 56% of the students stated that they used the mobile phone for school work at home every week (Haglind, 2013). In relation to the previous study, this paper explores the students' perception of the mobile phone as a tool for school work in school and their use of it for learning at home. The results indicate that the mobile phone can be described as a boundary object between the students' social worlds of home and school. The results also show that the students use their mobile phones for school-work-related tasks when the task is suitable for the mobile phone format.
['Torbjörn Ott', 'Therése Haglind', 'Berner Lindström']
Students’ Use of Mobile Phones for School Work
140,612
Session types provide a means to prescribe the communication behavior between concurrent message-passing processes. However, in a distributed setting, some processes may be written in languages that do not support static typing of sessions or may be compromised by a malicious intruder, violating invariants of the session types. In such a setting, dynamically monitoring communication between processes becomes a necessity for identifying undesirable actions. In this paper, we show how to dynamically monitor communication to enforce adherence to session types in a higher-order setting. We present a system of blame assignment in the case when the monitor detects an undesirable action and an alarm is raised. We prove that dynamic monitoring does not change system behavior for well-typed processes, and that one of an indicated set of possible culprits must have been compromised in case of an alarm.
['Limin Jia', 'Hannah Gommerstadt', 'Frank Pfenning']
Monitors and blame assignment for higher-order session types
626,695
SaTellite: Hypermedia Navigation by Affinity.
['Xavier Pintado', 'Dennis Tsichritzis']
SaTellite: Hypermedia Navigation by Affinity.
550,691
The similarity measure plays a significant role in intensity-based image registration. Mutual information (MI) is now widely used as an efficient similarity measure for multimodal image registration. MI reflects the quantitative aspects of information, as it considers the probabilities of the voxels. However, different voxels contribute with distinct effectiveness towards the registration objective, which may be independent of their probability of occurrence. Therefore, both intensity distributions and effectiveness are essential to characterise a voxel. In this study, a novel similarity measure is proposed that integrates the effectiveness of each voxel with the intensity distributions to compute an enhanced MI from the joint histogram of the two images. Penalised spline interpolation is incorporated into the joint histogram of the similarity measure, where each grid point is penalised with a weighting factor to avoid local extrema and to achieve better registration accuracy than existing methods, with efficient computational runtime. To demonstrate the proposed method, the authors use a challenging medical image dataset consisting of pre- and post-operative brain magnetic resonance imaging. The registration accuracy achieved on the dataset improves clinical diagnosis and the detection of tumour growth in the post-operative image.
['Smita Pradhan', 'Dipti Patra']
Enhanced mutual information based medical image registration
708,480
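The baseline quantity being enhanced, MI estimated from a joint intensity histogram, can be sketched as follows (standard MI only, without the paper's voxel-effectiveness weighting or penalised-spline interpolation; the bin count and test images are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def mutual_information(a, b, bins=32):
    """Mutual information (bits) of two equally-shaped images, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image b
    nz = pxy > 0                            # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# An image is maximally informative about itself; adding noise lowers MI.
img = rng.normal(size=(64, 64))
noisy = img + 0.5 * rng.normal(size=(64, 64))
print(f"MI(img, img)   = {mutual_information(img, img):.3f} bits")
print(f"MI(img, noisy) = {mutual_information(img, noisy):.3f} bits")
```

In registration, this score is evaluated over candidate alignments of the two images and maximised; the paper's contribution modifies the joint-histogram construction that this estimator starts from.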
Network Topology Management Optimization of Wireless Sensor Network (WSN)
['C. K. Ng', 'Chun-Ho Wu', 'W. H. Ip', 'J. Zhang', 'George T. S. Ho', 'C. Y. Chan']
Network Topology Management Optimization of Wireless Sensor Network (WSN)
844,057
In 1995, IBM Global Services began implementing a business model that included support for the growth and development of communities of practice focused on the competencies of the organization. This paper describes our experience working with these communities over a five-year period, concentrating specifically on how the communities evolved. We present an evolution model based on observing over 60 communities, and we discuss the evolution in terms of people and organization behavior, supporting processes, and enabling technology factors. Also described are specific scenarios of communities within IBM Global Services at various stages of evolution.
['Patricia Gongla', 'Christine Rizzuto']
Evolving communities of practice: IBM global services experience
531,311
It is well known that the Analytic Hierarchy Process (AHP) of Saaty is one of the most powerful approaches for decision aid in solving multi-criteria decision making (MCDM) problems. Several methods for computing weights in AHP are analyzed. Based on the least squares approach, three methods for calculating weights are proposed: the least sum of squared errors criterion, the least sum of absolute errors criterion, and the least maximum absolute error criterion. The new least squares method is translated into a linear system, while the minimax method and the absolute deviation method are translated into linear programs. The proposed methods can also be applied to ranking estimation in incomplete AHP, where it is important to estimate weights of alternatives from incomplete comparison data. The computation methods and results are illustrated through numerical examples.
['Shang Gao', 'Zaiyue Zhang', 'Cungen Cao']
Calculating Weights Methods in Complete Matrices and Incomplete Matrices
161,558
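The first of the three criteria, minimising the sum of squared errors of a_ij·w_j − w_i subject to the weights summing to one, can be sketched as follows (the penalty-row trick for the normalisation constraint is an implementation convenience assumed here, not the paper's formulation):

```python
import numpy as np

def ahp_ls_weights(A):
    """Least-squares AHP weights: minimise sum_{i != j} (a_ij * w_j - w_i)^2
    subject to sum(w) = 1, solved as an overdetermined linear system."""
    n = A.shape[0]
    rows = []
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.zeros(n)
                r[j] = A[i, j]      # a_ij * w_j ...
                r[i] -= 1.0         # ... minus w_i should be ~0
                rows.append(r)
    M = np.vstack(rows + [1e6 * np.ones(n)])   # heavy penalty: sum(w) = 1
    rhs = np.zeros(len(M))
    rhs[-1] = 1e6
    w, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return w

# A perfectly consistent comparison matrix built from w = (0.6, 0.3, 0.1),
# i.e. a_ij = w_i / w_j, so the least-squares solution recovers w exactly.
w_true = np.array([0.6, 0.3, 0.1])
A = w_true[:, None] / w_true[None, :]
print(np.round(ahp_ls_weights(A), 4))
```

For an inconsistent (or incomplete, with missing pairs simply dropped from the row list) comparison matrix, the same solve returns the weights minimising the squared-error criterion rather than an exact reproduction.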
In this paper, we propose a mutual framework that combines two state-of-the-art visual object tracking algorithms. Each tracker benefits from the other's advantages, leading to an efficient visual tracking approach. Many state-of-the-art trackers perform poorly under rain, fog or occlusion in real-world scenarios; often, objects are lost after several frames, yielding only short-term tracking capability. In this paper, we focus on long-term tracking, preserving real-time capability and very accurate positioning of tracked objects. The proposed framework is capable of tracking arbitrary objects, leading to decreased labeling effort and improved positioning of bounding boxes. This is especially interesting for applications such as semi-automatic labeling. The benefit of our proposed framework is demonstrated by comparing it with the related algorithms on our own sequences as well as a well-known, publicly available dataset.
['Florian Baumann', 'Enes Dayangac', 'Josep Aulinas', 'Matthias Zobel']
MedianStruck for long-term tracking applications
979,254
Gebhardt et al. (2014) presented the Monitoring Activity Data for the Mexican REDD+ program (MAD-MEX), an automatic nation-wide land cover monitoring system for the Mexican REDD+ MRV. Though MAD-MEX represents a valuable first effort toward establishing a national reference emissions level for the implementation of REDD+ in Mexico, in this paper, we argue that this land cover system has important limitations that may prevent it from becoming operational for REDD+ MRV. Specifically, we show that (1) the accuracy assessment of MAD-MEX land cover maps is optimistically biased; (2) the ability of MAD-MEX to monitor land cover change, including deforestation and forest degradation, is poor; and (3) the use of an entirely automatic classification approach, such as that followed by MAD-MEX, is highly problematic in the case of a large and heterogeneous country like Mexico. We discuss these limitations and call into question the ability of a land cover monitoring system such as MAD-MEX both to elaborate a national reference emissions level and to monitor future forest cover change as part of a REDD+ MRV system. We provide some insights with the aim of improving the development of nation-wide land cover monitoring systems in Mexico and elsewhere.
['Jean-François Mas', 'Stéphane Couturier', 'Jaime Paneque-Gálvez', 'Margaret Skutsch', 'Azucena Pérez-Vega', 'Miguel Angel Castillo-Santiago', 'Gerardo Bocco']
Comment on Gebhardt et al. MAD-MEX: Automatic Wall-to-Wall Land Cover Monitoring for the Mexican REDD-MRV Program Using All Landsat Data. Remote Sens. 2014, 6, 3923–3943
836,353
Revisiting Known-Item Retrieval in Degraded Document Collections.
['Jason J. Soo', 'Ophir Frieder']
Revisiting Known-Item Retrieval in Degraded Document Collections.
985,785
This paper proposes a novel frame synchronization scheme for convolutionally encoded data packets. Rather than placing sync bits in a separate header, the sync bits are placed in a mid-amble and encoded as part of the data sequence, using the error correction encoder to resolve time ambiguities. This technique requires fewer bits for frame synchronization. The performance improvement over conventional synchronization techniques is explored via simulation. The applicability of the scheme for synchronization of turbo codes is also discussed.
['M. Mostofa', 'K. Howlader', 'Brian D. Woerner']
Frame synchronization of convolutionally encoded sequences for packet transmission
360,246
A multi-core processor is widely used to achieve both high performance and low energy consumption. However, verification of a multi-core processor is more difficult than that of a single-core processor, because a multi-core processor has large and complex circuits, and special mechanisms such as cache coherency further increase complexity. In general, the design flow of a processor includes steps such as functional verification with a high-level language, cycle-accurate verification by RTL simulation, and timing analysis that considers gate delay, wiring delay and other factors. In particular, co-simulation frameworks for single-core processors have been proposed to reduce the time spent on cycle-accurate verification with RTL simulation. However, if the conventional framework is applied to verify a multi-core processor, it causes three problems: failure of the co-simulation framework due to a mismatch of load and store operations, failure of system call emulation due to the cache coherency mechanism, and the need for task scheduling when executing multi-threaded programs. These problems seriously complicate verification of a multi-core processor and dramatically increase simulation time. Therefore, this paper proposes a rapid verification framework that supports execution of multi-threaded programs on multi-core processors. The proposed method makes it possible to verify both homogeneous and heterogeneous multi-core processors with a cache coherency mechanism, and to execute multi-threaded programs without full system simulation. The proposed framework extends a conventional co-simulation framework for a single-core processor and comprises three extensions: bypassing loaded values from the verified processor to the virtual processor, a cache access mechanism for system call emulation, and an internal task scheduler. As evaluation results, our framework verifies a two-core processor correctly. Furthermore, the proposed method reduces the number of execution cycles by up to 71% and by 46% on average compared with full system simulation.
['Kouki Kayamuro', 'Takahiro Sasaki', 'Yuki Fukazawa', 'Toshio Kondo']
A Rapid Verification Framework for Developing Multi-core Processor
992,190
Recognizing user-defined moves serves a large number of applications including sport monitoring, virtual reality or natural user interfaces (NUI). However, many of the efficient human move recognition methods are still limited to specific situations, such as straightforward NUI gestures or everyday human actions. In particular, most methods depend on a prior segmentation of recordings to both train and recognize moves. This segmentation step is generally performed manually or based on heuristics such as neutral poses or short pauses, limiting the range of applications. Besides, speed is generally not considered as a criterion to distinguish moves. We present an approach composed of a simplified move training phase that requires minimal user intervention, together with a novel online method to robustly recognize moves online from unsegmented data without requiring any transitional pauses or neutral poses, and additionally considering human move speed. Trained gestures are automatically segmented in real time by a curvature-based method that detects small pauses during a training session. A set of most discriminant key poses between different moves is also extracted in real time, optimizing the number of key poses. All together, this semi-supervised learning approach only requires continuous move performances from the user with small pauses. Key pose transitions and moves execution speeds are used as input to a novel human move recognition algorithm that recognizes unsegmented moves online, achieving high robustness and very low latency in our experiments, while also effective in distinguishing moves that differ only in speed.
['Thales Vieira', 'Romain Faugeroux', 'Dimas Martínez', 'Thomas Lewiner']
Online human moves recognition through discriminative key poses and speed-aware action graphs
953,134
The prediction of the protein tertiary structure from solely its residue sequence (the so-called Protein Folding Problem) is one of the most challenging problems in Structural Bioinformatics. We focus on the protein residue contact map. When this map is assigned, it is possible to reconstruct the 3D structure of the protein backbone. The general problem of recovering a set of 3D coordinates consistent with some given contact map is known as a unit-disk-graph realization problem and has recently been proven to be NP-Hard. In this paper we describe a heuristic method (COMAR) that is able to reconstruct with unprecedented speed (3-15 seconds) a 3D model that exactly matches the target contact map of a protein. Working with a non-redundant set of 1760 proteins, we find that the efficiency of finding a 3D model very close to the protein's native structure depends on the threshold value adopted to compute the protein residue contact map. Contact maps whose threshold values range from 10 to 18 Ångströms allow reconstructing 3D models that are very similar to the proteins' native structures.
['Marco Vassura', 'Luciano Margara', 'P. Di Lena', 'Filippo Medri', 'Piero Fariselli', 'Rita Casadio']
Reconstruction of 3D Structures From Protein Contact Maps
521,735
Void-Handling Techniques for Routing Protocols in Underwater Sensor Networks: Survey and Challenges
['Seyed Mohammad Ghoreyshi', 'Alireza Shahrabi', 'Tuleen Boutaleb']
Void-Handling Techniques for Routing Protocols in Underwater Sensor Networks: Survey and Challenges
997,950
In this paper, a multi-objective multi-period supply chain design and planning problem is introduced. The problem seeks to minimise logistic costs and maximise service level in a three-echelon multi-product supply chain considering back orders. The layers of the chain include suppliers, manufacturers and distribution centres. The components of logistic costs are discussed and modelled, while service level is interpreted as a low level of back orders and a short delivery time of products to customers. The problem is modelled using multi-objective mixed integer mathematical programming. Several constraints arising from real-world conditions are also considered in the proposed model. As the objective functions, i.e., logistic costs and satisfaction levels, are conflicting, an a posteriori multi-objective mathematical approach, called efficient epsilon-constraint, is proposed to generate several non-dominated solutions on the Pareto front of the problem. An illustrative numerical example is solved using the proposed approach in ...
['Ashkan Hafezalkotob', 'Kaveh Khalili-Damghani']
Development of a multi-period model to minimise logistic costs and maximise service level in a three-echelon multi-product supply chain considering back orders
546,752
A Declarative Approach for Computing Ordinal Conditional Functions Using Constraint Logic Programming
['Christoph Beierle', 'Gabriele Kern-Isberner', 'Karl Södler']
A Declarative Approach for Computing Ordinal Conditional Functions Using Constraint Logic Programming
616,308
We address the problem of direction of arrival (DOA) finding for uniform linear arrays (ULAs). We derive a new and very simple method for estimating the DOA of a single source based on the covariance matrix of the received signal. The new method is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed spatially and temporally white. The new method is derived in closed form and, over a wide practical SNR range, exhibits exactly the same performance as the popular root-MUSIC algorithm, a powerful DOA estimation technique for ULA configurations. Therefore, the new estimator offers a way for rapid and very easy DOA evaluation, making it very attractive for practical implementation as compared to the root-MUSIC algorithm, which relies on the heavy operation of eigendecomposition.
['Faouzi Bellili', 'Sofiene Affes', 'Alex Stephenne']
Second-order moment-based direction finding of a single source for ULA systems
495,649
A model of reputation is presented in which agents share and aggregate their opinions, and observe the way in which their opinions affect the opinions of others. A method is proposed that supports the deliberative process of combining opinions into a group's reputation. The reliability of agents as opinion givers is measured in terms of the extent to which their opinions differ from the group reputation. These reliability measures are used to form an a priori reputation estimate given the individual opinions of a set of independent agents.
['John K. Debenham', 'Carles Sierra']
Reputation as Aggregated Opinions
415,439
This work is devoted to the study of dispersed spectrum cognitive radio (CR) systems over independent and nonidentically distributed (i.n.i.d.) generalized fading channels. More specifically, this is performed in terms of the high-order statistics (HOS) of the channel capacity over $\eta{-}\mu$ fading channels. A generic analytic expression is derived for the corresponding $n$ th statistical moment, which is subsequently employed for deducing exact closed-form expressions for the first four moments. Using these expressions, important statistical metrics, such as the amount of dispersion, amount of fading, skewness, and kurtosis, are derived in closed form and can be efficiently used in providing insights on the performance of dispersed CR systems. The obtained numerical results reveal interesting outcomes that could be useful for the channel selection, either for sharing or aggregation in heterogeneous networks, which is the core structure of future wireless communication systems.
['Theodoros A. Tsiftsis', 'Fotis Foukalas', 'George K. Karagiannidis', 'Tamer Khattab']
On the Higher Order Statistics of the Channel Capacity in Dispersed Spectrum Cognitive Radio Systems Over Generalized Fading Channels
726,424
Multi-lingual Evaluation of a Natural Language Generation System
['Athanasios Karasimos', 'Amy Isard']
Multi-lingual Evaluation of a Natural Language Generation System
801,181
Testing for small delay defects is critical to guarantee that the manufactured silicon is timing-related defect free and to reduce quality loss associated with delay defects. Commercial solutions available for testing of small delay defects result in very high pattern count and run time. In this paper, we present two effective approaches for generating timing-aware transition fault patterns that target small delay defects. We identify a subset of transition faults that should be targeted by the timing-aware ATPG; while for the rest of the faults, classic non-timing-aware transition fault patterns can be generated. Experimental results for several industrial benchmarks show that the proposed approaches result in up to 75% reduction in test pattern count compared to existing timing-aware ATPG approaches.
['Sandeep Kumar Goel', 'Narendra Devta-Prasanna', 'Ritesh P. Turakhia']
Effective and Efficient Test Pattern Generation for Small Delay Defect
334,524
One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 13. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel sigma machine gives an improvement factor close to 64 over the single-processor CRAY-2.
['Vickie E. Lynch', 'B.A. Carreras', 'John B. Drake', 'J.N. Leboeuf', 'P. Liewer']
Performance of a plasma fluid code on the Intel parallel computers
353,145
Energy Management in Plug-in Hybrid Electric Vehicles: Recent Progress and a Connected Vehicles Perspective
['Clara Marina Martinez', 'X. Hu', 'Dongpu Cao', 'Efstathios Velenis', 'Bo Gao', 'Matthias Wellers']
Energy Management in Plug-in Hybrid Electric Vehicles: Recent Progress and a Connected Vehicles Perspective
829,325
We provide a comprehensive framework for semantic GSM artifacts, discuss in detail its properties, and present main software engineering architectures it is able to capture. The distinguishing aspect of our framework is that it allows for expressing both the data and the lifecycle schema of GSM artifacts in terms of an ontology, i.e., a shared and formalized conceptualization of the domain of interest. To guide the modeling of data and lifecycle we provide an upper ontology, which is specialized in each artifact with specific lifecycle elements, relations, and business objects. The framework thus obtained allows to achieve several advantages. On the one hand, it makes the specification of conditions on data and artifact status attribute fully declarative and enables semantic reasoning over them. On the other, it fosters the monitoring of artifacts and the interoperation and cooperation among different artifact systems. To fully achieve such an interoperation, we enrich our framework by enabling the linkage of the ontology to autonomous database systems through the use of mappings. We then discuss two scenarios of practical interest that show how mappings can be used in the presence of multiple systems. For one of these scenarios we also describe a concrete instantiation of the framework and its application to a real-world use case in the energy domain, investigated in the context of the EU project ACSI.
['Riccardo De Masellis', 'Domenico Lembo', 'Marco Montali', 'Dmitry Solomakhin']
Semantic Enrichment of GSM-Based Artifact-Centric Models
1,089
Identifying Dyscalculia Symptoms Related to Magnocellular Reasoning Using Smartphones.
['Greger Siem Knudsen', 'Ankica Babic']
Identifying Dyscalculia Symptoms Related to Magnocellular Reasoning Using Smartphones.
857,752
Partitioning Templates for RDF
['Rebeca Schroeder', 'Carmem S. Hara']
Partitioning Templates for RDF
664,411
Motivation: Synthetic lethal interactions represent pairs of genes whose individual mutations are not lethal, while the double mutation of both genes does incur lethality. Several studies have shown a correlation between functional similarity of genes and their distances in networks based on synthetic lethal interactions. However, there is a lack of algorithms for predicting gene function from synthetic lethality interaction networks. Results: In this article, we present a novel technique called kernelROD for gene function prediction from synthetic lethal interaction networks based on kernel machines. We apply our novel algorithm to Gene Ontology functional annotation prediction in yeast. Our experiments show that our method leads to improved gene function prediction compared with state-of-the-art competitors and that combining genetic and congruence networks leads to a further improvement in prediction accuracy. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Christoph Lippert', 'Zoubin Ghahramani', 'Karsten M. Borgwardt']
Gene function prediction from synthetic lethality networks via ranking on demand
21,376
Web Service technologies offer a successful way to achieve interoperability among applications. It is now important to address how to model systems based on service functionality and how to add extra-functional properties to them. For this reason, we first propose a versatile and simple UML profile based on the Service Component Architecture specification for modeling services; secondly, a new UML profile is proposed in order to model and reuse extra-functional properties in the named models. Besides, the property profile provides enough information to enable property code and description generation at a later stage.
['Guadalupe Ortiz', 'Juan Hernández']
Toward UML Profiles for Web Services and their Extra-Functional Properties
168,647
Business process modelling: From informal to formal process representations.
['Andreas Oberweis', 'Petra Elgass', 'Helmut Krcmar']
Business process modelling: From informal to formal process representations.
773,591
We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.
['Karl D.D. Willis', 'Andrew D. Wilson']
InfraStructs: fabricating information inside physical objects for imaging in the terahertz region
118,264
A simulation-based analysis of analog and mixed analog/digital circuits at the printed circuit board level requires a lot of information about the involved components. Entering solely the data sheet information into a component library normally disregards structural knowledge that is well known to the circuit designer. Therefore, the expert knowledge necessary for an optimum use of analysis tools is no longer available. This paper presents an approach to overcome this problem. A concept to efficiently represent expert knowledge and classification information is introduced, and a classification of circuits which conserves the structural information is presented.
['D. Wagenblasst', 'Wolfgang Thronicke']
An approach for classification of integrated circuits by a knowledge conserving library concept
529,569
Requirements validation is a crucial process to determine whether client-stakeholders' needs and expectations of a product are sufficiently correct and complete. Various requirements validation techniques have been used to evaluate the correctness and quality of requirements, but most of these techniques are tedious, expensive and time consuming. Accordingly, most project members are reluctant to invest their time and efforts in the requirements validation process. Moreover, automated tool support that promotes effective collaboration between the client-stakeholders and the engineers is still lacking. In this paper, we describe a novel approach that combines prototyping and test-based requirements techniques to improve the requirements validation process and promote better communication and collaboration between requirements engineers and client-stakeholders. To justify the potential of this prototype tool, we also present three types of evaluation conducted on the prototype tool: a usability survey, a 3-tool comparison analysis and expert reviews.
['Massila Kamalrudin', 'M. Nor Aiza', 'John C. Grundy', 'John G. Hosking', 'Mark Robinson']
Automatic acceptance test case generation from essential use cases
594,993
President Bush called for the construction of a permanently manned lunar base. All serious plans for such a base require the use of lunar soil as shielding material against the Sun's radiation. Plans rely on the use of a large bulldozer-like vehicle to be driven by an astronaut, either locally or under teleoperation control. Brooks and Flynn (89) proposed an alternate approach to a single large and complex robot based on many small totally autonomous robots which trade off time to achieve the task with lowered complexity and cost of the system. In this paper the authors describe an experimental system they are building with 20 small bulldozers, which work without explicit coordination or communication, but nevertheless cooperate to achieve tasks that will be useful in building a manned lunar base. In particular the authors believe such tasks as digging out trenches in which the habitation units will be placed, stockpiling a supply of loose lunar soil to cover the habitation units, and actually covering them when delivered, can all be carried out by such small bulldozers.
['Rodney A. Brooks', 'Pattie Maes', 'Maja J. Matarić', 'Grinell More']
Lunar base construction robots
251,815
An Efficient Approach for Discovering Interesting Patterns from Biomedical Data.
['Raj Singh', 'Vikas Yadav']
An Efficient Approach for Discovering Interesting Patterns from Biomedical Data.
771,290
A simulator named "TLSIM" is described in this paper. TLSIM simulates GaAs circuits considering the interconnects as transmission lines. The simulator provides a simple way of updating signals at circuit nodes. The simulation method is applicable to general graph topologies for interconnects. The behavior of signal flow at transmission line junctions and at the active circuit interfaces is captured by the application of Kirchhoff's current equations. Coupling among conductors and loss in transmission lines are discussed. Results on run time and accuracy are presented for GaAs direct coupled FET logic (DCFL) circuits.
['J.S. Barkatullah', 'S. Chowdhury']
A transmission line simulator for GaAs integrated circuits
152,676
This paper presents an electronic system that extracts the periodicity of a sound. It uses three analogue VLSI building blocks: a silicon cochlea, two inner-hair-cell circuits and two spiking neuron chips. The silicon cochlea consists of a cascade of filters. Because of the delay between two outputs from the silicon cochlea, spike trains created at these outputs are synchronous only for a narrow range of periodicities. In contrast to traditional bandpass filters, where an increase in selectivity has to be traded off against a decrease in response time, the proposed system responds quickly, independent of selectivity.
['André van Schaik']
An Analog VLSI Model of Periodicity Extraction
210,157
Test coverage is sometimes used to measure how thoroughly software is tested and developers and vendors sometimes use it to indicate their confidence in the readiness of their software. This survey studies and compares 17 coverage-based testing tools primarily focusing on, but not restricted to, coverage measurement. We also survey features such as program prioritization for testing, assistance in debugging, automatic generation of test cases and customization of test reports. Such features make tools more useful and practical, especially for large-scale, commercial software applications. Our initial motivations were both to understand the available test coverage tools and to compare them to a tool that we have developed, called eXVantage (a tool suite that includes code coverage testing, debugging, performance profiling and reporting). Our study shows that each tool has some unique features tailored to its application domains. The readers may use this study to help pick the right coverage testing tools for their needs and environment. This paper is also valuable to those who are new to the practice and the art of software coverage testing, as well as those who want to understand the gap between industry and academia.
['Qian Yang', 'J. Jenny Li', 'David M. Weiss']
A Survey of Coverage-Based Testing Tools
332,709
This paper describes a novel sound source separation method for a robot that needs to cope with dynamically changing noises in the real world. The sound source separation method, Geometric Source Separation (GSS), is promising because it has high separation performance and requires low computational cost. One of the most important factors in GSS performance is the step-size parameter used to update the separation matrix, which is generally used for extracting a target sound source. A fixed, empirically obtained value is commonly used as the step-size parameter. However, in the real world, the surrounding environment changes dynamically. Thus, conventional GSS with a fixed step-size parameter sometimes yields poor separation results or divergence of the separation matrix. Another important factor is the weight parameter, which adjusts the balance between geometric errors and separation errors and also affects performance. If this parameter is set to a small value, GSS becomes similar to a Blind Source Separation (BSS) method, whose output signal may contain errors due to indefinite source amplitudes and orders. In contrast, if this parameter is set to a large value, GSS becomes similar to a delay-and-sum beamforming method, which does not have high separation performance. GSS gives good performance when the parameters are tuned to optimum values, which change according to the environment. We propose two effective methods that are applicable to general BSS algorithms. One is an adaptive step-size parameter control method, by which the step-size and weight parameters are automatically set to optimum values and adapt to environmental changes. The other is an optima-controlled recursive average method for correlation matrix estimation, which improves the estimation of the separation matrix and achieves high separation performance. We evaluated the proposed GSS algorithm with an 8-ch microphone array embedded in Honda ASIMO. Experimental results showed that the proposed method improved sound source separation even in dynamically changing environments.
['Hirofumi Nakajima', 'Kazuhiro Nakadai', 'Yuuji Hasegawa', 'Hiroshi Tsujino']
High performance sound source separation adaptable to environmental changes for robot audition
168,200
Increasingly, consulting firms are employed by client organizations to participate in the implementation of enterprise systems projects. Such consultant-assisted information systems projects differ from internal and outsourced IS projects in two important respects. First, the joint project team consists of members from client and consulting organizations that may have conflicting goals and incompatible work practices. Second, close collaboration between the client and consulting organizations is required throughout the course of the project. Consequently, coordination is more complex for consultant-assisted projects and is critical for project success. Drawing from coordination and agency theories and the trust literature, we developed a research model to investigate how interorganizational coordination could help build relationships based on trust and goal congruence and achieve higher project performance. Hypotheses derived from the model were tested using data collected from 324 projects. The results provide strong support for the model. Interorganizational coordination was found to have the largest overall significant effect on performance. However, its effect was achieved indirectly by building trust and goal congruence and by reducing technical and requirements uncertainty. The positive effects of trust and goal congruence on project performance demonstrate the importance of managing the client-consultant relationship in such projects. Project uncertainty, including both technical and requirements uncertainty, was found to negatively affect goal congruence and trust, as expected. This study represents a step toward the development of a new theory on the role of interorganizational coordination.
['Matthew J. Liberatore', 'Wenhong Luo']
Coordination in Consultant-Assisted IS Projects: An Agency Theory Perspective
413,074
We study a new type of proof system, where an unbounded prover and a polynomial time verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our setting is that there are no longer "good" or "malicious" provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0,c] determined by the transcript of their interaction; the Prover wishes to maximize his expected reward; and his reward is maximized only if the Verifier correctly learns f(x). Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin's single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts.
['Pablo Daniel Azar', 'Silvio Micali']
Rational proofs
670,360
Biases in Social Commerce Users' Rational Risk Considerations.
['Samira Farivar', 'Yufei Yuan', 'Ofir Turel']
Biases in Social Commerce Users' Rational Risk Considerations.
954,557
In 2004, Zhu and Ma proposed a new and efficient authentication scheme claiming to provide anonymity for wireless environments. Two years later, Lee et al. revealed several previously unpublished flaws in Zhu-Ma's authentication scheme and proposed a fix. More recently in 2008, Wu et al. pointed out that Lee et al.'s proposed fix fails to preserve anonymity as claimed and then proposed yet another fix to address the problem. In this paper, we use Wu et al.'s scheme as a case study and demonstrate that due to an inherent design flaw in Zhu-Ma's scheme, the latter and its successors are unlikely to provide anonymity. We hope that by identifying this design flaw, similar structural mistakes can be avoided in future designs.
['Peng Zeng', 'Zhenfu Cao', 'Kim-Kwang Raymond Choo', 'Shengbao Wang']
On the anonymity of some authentication schemes for wireless communications
265,712
Comparing feature detectors: A bias in the repeatability criteria
['Ives Rey-Otero', 'Mauricio Delbracio', 'Jean-Michel Morel']
Comparing feature detectors: A bias in the repeatability criteria
670,268
Improving Read Throughput of Deduplicated Cloud Storage using Frequent Pattern-Based Prefetching Technique
['Prabavathy Balasundaram', 'Chitra Babu', 'Subha Devi M']
Improving Read Throughput of Deduplicated Cloud Storage using Frequent Pattern-Based Prefetching Technique
695,018
This paper proposes a new method for the design and analysis of multi-objective unconstrained binary quadratic programming (mUBQP) instances, commonly used for testing discrete multi-objective evolutionary algorithms (MOEAs). These instances are usually generated considering the sparsity of the matrices and the correlation between objectives but randomly selecting the values for the matrix cells. Our hypothesis is that going beyond the use of objective correlations by considering different types of variables interactions in the generation of the instances can help to obtain more diverse problem benchmarks, comprising harder instances. We propose a parametric approach in which small building blocks of deceptive functions are planted into the matrices that define the mUBQP. The algorithm for creating the new instances is presented, and the difficulty of the functions is tested using variants of a decomposition-based MOEA. Our experimental results confirm that the instances generated by planting deceptive blocks require more function evaluations to be solved than instances generated using other methods.
['Murilo Zangari', 'Roberto Santana', 'Alexander Mendiburu', 'Aurora Pozo']
On the Design of Hard mUBQP Instances
844,646
Unsupervised Matching of Visual Landmarks for Robotic Homing using Fourier-Mellin Transform
['Alessandro Rizzi', 'Danilo Duina', 'Stefano Inelli', 'Riccardo Cassinis']
Unsupervised Matching of Visual Landmarks for Robotic Homing using Fourier-Mellin Transform
545,456
For an ideal I ⊆ ℝ[x] given by a set of generators, a new semidefinite characterization of its real radical I(V_ℝ(I)) is presented, provided it is zero-dimensional (even if I is not). Moreover, we propose an algorithm using numerical linear algebra and semidefinite optimization techniques, to compute all (finitely many) points of the real variety V_ℝ(I) as well as a set of generators of the real radical ideal. The latter is obtained in the form of a border or Gröbner basis. The algorithm is based on moment relaxations and, in contrast to other existing methods, it exploits the real algebraic nature of the problem right from the beginning and avoids the computation of complex components.
['Jean B. Lasserre', 'Monique Laurent', 'Philipp Rostalski']
Semidefinite Characterization and Computation of Zero-Dimensional Real Radical Ideals
390,388
In recent years, cognition map techniques for human insights have played a significant part in complex or ill-structured problem solving. There is increasing interest in computational methods, rather than hand-drawing methods, to build a cognition graph for insight generation. In this paper, a systematic approach called Temporal-IdeaGraph is proposed to build a directed cognition graph based on event sequences. Firstly, an algorithm for frequent sequence mining is employed to capture sequential patterns, and a method is then designed to remove duplicate patterns. Secondly, relevant patterns are merged and visualized into a directed cognition graph. An algorithm is further proposed to identify bridge events and bridge patterns that can trigger humans' deeper insights for better decision making. Finally, two real case studies validate the effectiveness of the proposed approach.
['Wei Wang', 'Chen Zhang', 'Hao Wang', 'Yang Gao', 'Yuanman Zheng']
A cognition graph approach for insights generation from event sequences
997,886
The paper presents a proof-theoretic semantics account of contextual domain restriction for quantified sentences in a fragment of English. First, the technique is exemplified in the more familiar first-order logic, and in its restricted quantification variant. Then, a proof-theoretic semantics for the NL fragment is reviewed, and extended to handling contextual domain restriction. The paper addresses both the descriptive facet of the problem, deriving meaning relative to a context, as well as the fundamental aspect, defining explicitly a context (suitable for quantifier domain restriction), and specifying what it is about such a context that brings about the variation of meaning due to it. The paper argues for the following principle: The context incorporation principle (CIP): For every quantified sentence S depending on a context c, there exists a sentence S', the meaning of which is independent of c, s.t. the contextually restricted meaning of S is equal to the  meaning of S'. Thus, the effect of a context can always be *internalized*. The current model-theoretic accounts of contextual domain restriction do not satisfy CIP, in that they imply intersection of some extension with an *arbitrary* subset of the domain, that need not be the denotation of any NL-expression.
['Nissim Francez']
A proof-theoretic semantics for contextual domain restriction
207,270
The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives of such adversaries, as well as the algorithmic problem of accounting for rational, objective-driven adversaries. In particular, we demonstrate severe shortcomings of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in an adversarial setting using a sparse regularizer along with an evasion model. Our approach is the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion. We show that our algorithmic approach significantly outperforms state-of-the-art alternatives.
['Bo Li', 'Yevgeniy Vorobeychik']
Feature Cross-Substitution in Adversarial Classification
102,340
switchBox: an R package for k–Top Scoring Pairs classifier development
['Bahman Afsari', 'Elana J. Fertig', 'Donald Geman', 'Luigi Marchionni']
switchBox: an R package for k–Top Scoring Pairs classifier development
16,302
A software developer's working process could benefit from the support of an active help system that is able to recommend applicable and useful integrated development environment (IDE) commands. While previous work focused on prediction methods that can identify what developers will eventually discover autonomously, and without taking into account the characteristics of their working tasks, we want to build a system that recommends only commands that lead to better work performance. Since we cannot expect that developers are willing to invest a significant effort to use our recommender system (RS), we are developing a context-aware multi-criteria RS based on implicit feedback. We have already created and evaluated context and user models. We also acquired a data set with more than 100,000 command executions. Currently, we are developing an RS algorithm for predicting the scores of performance and effort expectancy and a developer's intention to use a specific command. We are also developing a user interface that has to be persuasive, effective, and efficient. To date, a user interface for an IDE command RS has not been developed.
['Marko Gasparic']
Context-Based IDE Command Recommender System
871,677
Mobile ad hoc networking has been an active research area for several years. How to stimulate cooperation among selfish mobile nodes, however, is not well addressed yet. In this paper, we propose Sprite, a simple, cheat-proof, credit-based system for stimulating cooperation among selfish nodes in mobile ad hoc networks. Our system provides incentive for mobile nodes to cooperate and report actions honestly. Compared with previous approaches, our system does not require any tamper-proof hardware at any node. Furthermore, we present a formal model of our system and prove its properties. Evaluations of a prototype implementation show that the overhead of our system is small. Simulations and analysis show that mobile nodes can cooperate and forward each other's messages, unless the resource of each node is extremely low.
['Sheng Zhong', 'Jiang Chen', 'Yang Richard Yang']
Sprite: a simple, cheat-proof, credit-based system for mobile ad-hoc networks
44,870
The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents the implementation of the Hopfield Neural Network (HNN) parallel architecture on a SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most others require O(N) multiplications and O(N) additions.
['Wassim Mansour', 'Rafic A. Ayoubi', 'Haissam Ziade', 'Raoul Velazco', 'W. El Falou']
An optimal implementation on FPGA of a hopfield neural network
274,771
Aggregatable Pseudorandom Functions and Connections to Learning.
['Aloni Cohen', 'Shafi Goldwasser', 'Vinod Vaikuntanathan']
Aggregatable Pseudorandom Functions and Connections to Learning.
767,779
Asset allocation is an important decision problem in financial planning. In this paper, we study the multistage dynamic asset allocation problem in which an investor is allowed to reallocate its wealth among a set of assets over finite discrete decision points, and the stochastic return rates of the assets follow a Markov chain with nonstationary transition probabilities. The objective is to maximize the utility of the wealth at the end of the planning horizon, where the utility of the wealth follows a general piecewise linear and concave function. Transaction costs are considered. We formulate the problem with a dynamic stochastic network model, which has the potential to introduce a computationally tractable tool to deal with the dynamic asset allocation problem with a large number of assets and a long planning horizon.
['Haiqing Song', 'Huei Chuen Huang', 'Ning Shi', 'K. K. Lai']
A Dynamic Stochastic Network Model for Asset Allocation Problem
238,235
This paper describes a system for capturing images of books with a handheld 3D stereo camera, which performs dewarping to produce images that are flattened. A Fuji film consumer grade 3D camera provides a highly mobile and low-cost 3D capture device. Applying standard computer vision algorithms, camera calibration is performed, the captured images are stereo rectified, and the depth information is computed by block matching. Due to technical limitations, the resulting point cloud has defects such as splotches and noise, which make it hard to recover the precise 3D locations of the points on the book pages. We address this problem by computing curve profiles of the depth map and using them to build a cylinder model of the pages. We then employ meshes to facilitate the flattening and rendering of the cylinder model in virtual space. We have implemented a prototype of the system and report on a preliminary evaluation based on measuring the straightness of resulting text lines.
['Michael Patrick Cutter', 'Patrick Chiu']
Capture and Dewarping of Page Spreads with a Handheld Compact 3D Camera
371,257
Modeling Software Architecture Process with a Decision-Making Approach
['Gilberto Pedraza-Garcia', 'Hernán Astudillo', 'Dario Correal']
Modeling Software Architecture Process with a Decision-Making Approach
882,953
An examination of high-performance computing export control policy in the 1990s
['Seymour E. Goodman', 'Peter Wolcott', 'Grey E. Burkhart']
An examination of high-performance computing export control policy in the 1990s
95,074
In this paper, we propose a general framework for fusing bottom-up segmentation with top-down object behavior classification over an image sequence. This approach is beneficial for both tasks, since it enables them to cooperate so that knowledge relevant to each can aid in the resolution of the other, thus enhancing the final result. In particular, classification offers dynamic probabilistic priors to guide segmentation, while segmentation supplies its results to classification, ensuring that they are consistent both with prior knowledge and with new image information. We demonstrate the effectiveness of our framework via a particular implementation for a hand gesture recognition application. The prior models are learned from training data using principal components analysis and they adapt dynamically to the content of new images. Our experimental results illustrate the robustness of our joint approach to segmentation and behavior classification in challenging conditions involving occlusions of the target object before a complex background.
['Laura Gui', 'Jean-Philippe Thiran', 'Nikos Paragios']
Joint Object Segmentation and Behavior Classification in Image Sequences
303,530
Valuable Group Trajectory Pattern Mining Directed by Adaptable Value Measuring Model
['Xinyu Huang', 'Tengjiao Wang', 'Shun Li', 'Wei Chen']
Valuable Group Trajectory Pattern Mining Directed by Adaptable Value Measuring Model
841,133
Full waveform inversion (FWI) is an emerging subsurface imaging technique, used to locate oil and gas reservoirs. The key challenges that hinder its adoption by industry are both algorithmic and computational in nature, including storage, communication, and processing of large-scale data structures, which impose cardinal impediments upon computational scalability. In this work we present a complete matrix-free algorithmic formulation of a 3D elastic time-domain spectral element solver for both the forward and adjoint wave-fields as part of a greater cloud-based FWI framework. We discuss computational optimisation (SIMD vectorisation, use of Many Integrated Core architectures, etc.) and present scaling results for two HPC systems, namely an IBM Blue Gene/Q and an Intel-based system equipped with Xeon Phi coprocessors.
['Stephen Moore', 'Devi Sudheer Chunduri', 'Sergiy Zhuk', 'Tigran T. Tchrakian', 'Ewout van den Berg', 'Albert Akhriev', 'Alberto Costa Nogueira', 'Andrew Rawlinson', 'Lior Horesh']
Semi-discrete Matrix-Free Formulation of 3D Elastic Full Waveform Inversion Modeling
620,897
We determine the number of nilpotent matrices of order n over F_q that are self-adjoint for a given nondegenerate symmetric bilinear form, and in particular find the number of symmetric nilpotent matrices.
['Ae Andries Brouwer', 'Rod Gow', 'John Sheekey']
Counting Symmetric Nilpotent Matrices
312,931
Security is an important concern in cloud computing nowadays. RSA is one of the most popular asymmetric encryption algorithms that are widely used in internet based applications for its public key strategy advantage over symmetric encryption algorithms. However, RSA encryption algorithm is very compute intensive, which would affect the speed and power efficiency of the encountered applications. Racetrack Memory (RM) is a newly introduced promising technology in future storage and memory system, which is perfect to be used in memory intensive scenarios because of its high data density. However, novel designs should be applied to exploit the advantages of RM while avoiding the adverse impact of its sequential access mechanism. In this paper, we present an in-memory Booth multiplier based on racetrack memory to alleviate this problem. As the building block of our multiplier, a racetrack memory based adder is proposed, which saves 56.3% power compared with the state-of-the-art magnetic adder. Integrated with the storage element, our proposed multiplier shows great efficiency in area, power and scalability.
['Tao Luo', 'Wei Zhang', 'Bingsheng He', 'Douglas L. Maskell']
A racetrack memory based in-memory booth multiplier for cryptography application
664,447
Radio-frequency identification (RFID) technology is the enabler for applications like the future internet of things (IoT), where security plays an important role. When integrating security to RFID tags, not only the cryptographic algorithms need to be secure but also their implementation. In this work we present differential power analysis (DPA) and differential electromagnetic analysis (DEMA) attacks on a security-enabled RFID tag. The attacks are conducted on both an ASIC-chip version and on an FPGA-prototype version of the tag. The design of the ASIC version equals that of commercial RFID tags and has analog and digital part integrated on a single chip. Target of the attacks is an implementation of the Advanced Encryption Standard (AES) with 128-bit key length and DPA countermeasures. The countermeasures are shuffling of operations and insertion of dummy rounds. Our results illustrate that the effort for successfully attacking the ASIC chip in a real-world scenario is only 4.5 times higher than for the FPGA prototype in a laboratory environment. This let us come to the conclusion that the effort for attacking contactless devices like RFID tags is only slightly higher than that for contact-based devices. The results further underline that the design of countermeasures like the insertion of dummy rounds has to be done with great care, since the detection of patterns in power or electromagnetic traces can be used to significantly lower the attacking effort.
['Thomas Korak', 'Thomas Plos', 'Michael Hutter']
Attacking an AES-Enabled NFC tag: implications from design to a real-world scenario
190,820
The ileocecal valve (ICV) is a common source of false-positive (FP) detections in CT colonography (CTC) computer-aided detection (CAD) of polyps. In this paper, we propose an automatic method to identify ICV CAD regions to reduce FPs. The ICV is a particularly challenging structure to detect due to its variable, polyp-mimicking morphology. However, the vast majority of ICVs have a visible orifice, which appears as a 3D concave region. Our method identifies the orifice concave region using a partial differential equation (PDE) based on 3D curvature and geometric constraints. These orifice features, combined with intensity and shape features generated in a Bayesian framework, comprise a set of compact features fed into an Adaboost classifier to produce a final classification of a region being ICV or non-ICV. Experimental results on a multi-center tagged CTC dataset demonstrate the success of the method in detecting ICV regions and reducing FPs in CAD.
['Xujiong Ye', 'Gregory G. Slabaugh']
Concavity analysis for reduction of ileocecal valve false positives in CTC
207,618
Selection of Antenna Array Configuration for Polarimetric Direction Finding in Correlated Signal Environments
['Stephan Haefner', 'Martin Kaeske', 'Reiner S. Thomae', 'Uwe Trautwein', 'Alexis Paolo Garcia Ariza']
Selection of Antenna Array Configuration for Polarimetric Direction Finding in Correlated Signal Environments
629,805
We study a recently proposed algorithm, called the sequential parametric approximation method, that finds the solution of a differentiable nonconvex optimization problem by solving a sequence of differentiable convex approximations of the original one. We also show the global convergence of this method under weaker assumptions than those made in the literature. The optimization method is applied to the design of robust truss structures. The optimal structure of the model considered minimizes the total amount of material under mechanical equilibrium, displacement, and stress constraints. Finally, robust designs are found by considering load perturbations.
['Alfredo Canelas', 'Miguel Carrasco', 'Julio López']
Application of the sequential parametric convex approximation method to the design of robust trusses
874,116
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%-15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model's lack of fit.
['Virpi Junttila', 'Tuomo Kauranne', 'Andrew O. Finley', 'John B. Bradford']
Linear Models for Airborne-Laser-Scanning-Based Operational Forest Inventory With Small Field Sample Size and Highly Correlated LiDAR Data
670,442
This paper addresses the problem of cross-layer design of cyber-physical systems to cope with interactions between the cyber layer and the physical layer in a dynamic environment. We propose a bi-directional middleware that allows the optimal utilization of the common resources for the benefit of either or both the layers in order to obtain overall system performance. This has been implemented over Etherware, a prior developed separation-based middleware for networked control systems as the bridge between the layers. Our implementation employs a Resource Manager module to handle common resources between the layers on Etherware. A case study of network connectivity preservation in vehicular formation control illustrates how this approach applies to particular situations of interest.
['W. Ko', 'P. R. Kumar']
Cross-layer design for cyber-physical systems of coordinated networked vehicles over bi-directional middleware
856,165
Lung segmentation is often performed as a preprocessing step on chest Computed Tomography (CT) images because it is important for identifying lung diseases in clinical evaluation. Hence, research on lung segmentation has received much attention. In this paper, we propose a new lung segmentation method based on an improved graph cuts algorithm from the energy function. First, the lung CT image is modeled with Gaussian mixture models (GMMs), and the optimized distribution parameters can be obtained with the expectation maximization (EM) algorithm. With these parameters, we can construct the improved regional penalty item in the graph cuts energy function. Second, considering the image edge information, the Sobel operator is adopted to detect and extract the lung image edges, and the lung image edge information is used to improve the boundary penalty item of the graph cuts energy function. Finally, the improved energy function of the graph cuts algorithm is obtained, then the corresponding graph is created, and the lung is segmented with the minimum cut theory. The experiments demonstrate that the proposed method is very accurate and efficient for lung segmentation.
['Shuangfeng Dai', 'Ke Lu', 'Jiyang Dong']
Lung segmentation with improved graph cuts on chest CT images
810,560
We present a library of reusable, abstract, low granularity components for the development of novel interaction techniques. Based on the InTml language and through an iterative process, we have designed 7 selection and 5 travel techniques from [5] as dataflows of reusable components. The result is a compact set of 30 components that represent interactive content and useful behavior for interaction. We added a library of 20 components for device handling, in order to create complete, portable applications. By design, we achieved a 68% of component reusability, measured as the number of components used in more than one technique, over the total number of used components. As a reusability test, we used this library to describe some interaction techniques in [1], a task that required only 2% of new components.
['Pablo Figueroa', 'David Castro']
A reusable library of 3D interaction techniques
244,554
While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either in addition to or instead of ratings and implicit usage data. For some domains, such as movies, the relationship between content and usage data has seen thorough investigation already, but for many other domains, such as books, news, scientific articles, and Web pages we still do not know if and how these data sources should be combined to provide the best recommendation performance. The CBRecSys 2014 workshop aimed to address this by providing a dedicated venue for papers on all aspects of content-based recommender systems.
['Toine Bogers', 'Marijn Koolen', 'Iván Cantador']
Report on RecSys 2014: Workshop on New Trends in Content-Based Recommender Systems
609,769
This paper presents a novel distributed estimation algorithm based on the concept of moving horizon estimation. Under weak observability conditions we prove convergence of the state estimates computed by any sensor to the correct state even when constraints on noise are taken into account in the estimation process. Simulation examples are provided in order to show the main features of the proposed method.
['Marcello Farina', 'Giancarlo Ferrari-Trecate', 'Riccardo Scattolini']
A moving horizon scheme for distributed state estimation
288,122
Several computational methods have been developed to predict RNA-binding sites in protein, but its inverse problem (i.e., predicting protein-binding sites in RNA) has received much less attention. Furthermore, most methods that predict RNA-binding sites in protein do not consider interaction partners of a protein. This paper presents a web server called PRIdictor (Protein–RNA Interaction predictor) which predicts mutual binding sites in RNA and protein at the nucleotide- and residue-level resolutions from their sequences. PRIdictor can be used as a web-based application or web service at http://bclab.inha.ac.kr/pridictor .
['Narankhuu Tuvshinjargal', 'Wook Lee', 'Byungkyu Park', 'Kyungsook Han']
PRIdictor: Protein-RNA Interaction predictor.
560,372