Memory cores (especially SRAM cores) used on a system chip usually come from a memory compiler. Commercial memory compilers have their limitations: a large memory may need to be implemented with multiple small memories if generated by a memory compiler. In this paper we introduce a testability-driven memory optimizer and wrapper generator that generates BISTed embedded memories by using a commercial memory compiler. We describe one of its key components, called MORE (for Memory Optimization and REconfiguration). The approach is cost-effective for designing embedded memories. By configuring small memory cores into the large one specified by the user and providing the BIST circuits, MORE allows the user to combine the commercial memory compiler and our memory BIST compiler into a cost-effective testability-driven memory generator. The resulting memory has a shorter test time, since the small memory cores can be tested in parallel, subject to power and geometry constraints. As an example, the test time of a typical 256K×32 memory generated by MORE is reduced by about 75%.
['Rei-Fu Huang', 'Li-Ming Denq', 'Cheng-Wen Wu', 'Jin-Fu Li']
A testability-driven optimizer and wrapper generator for embedded memories
52,796
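A quick sanity check of the quoted 75% figure, under assumptions not stated in the abstract: a test whose length is linear in the number of cells (as for March-type algorithms) and a reconfiguration into four equal banks tested concurrently.

```latex
% Hypothetical arithmetic: linear-length test over N cells, k = 4 banks in parallel.
T_{\text{serial}} = c\,N, \qquad
T_{\text{parallel}} = \frac{c\,N}{k} = \frac{c\,N}{4}, \qquad
\frac{T_{\text{serial}} - T_{\text{parallel}}}{T_{\text{serial}}} = 1 - \frac{1}{4} = 75\%.
```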
In this edition of GetMobile, we focus on three papers from MobiSys 2014 that embody the goal of enabling new services and using mobile systems in unique ways.
['Shyamnath Gollakota', 'Robin Kravets']
Uniquely Mobile
709,274
While training participants to assemble a 3D wooden burr puzzle, we compared results of training in a stereoscopic, head tracked virtual assembly environment utilizing haptic devices and data gloves with real world training. While virtual training took participants about three times longer, the group that used the virtual environment was able to assemble the physical test puzzle about three times faster than the group trained with the physical puzzle. We present several possible cognitive explanations for these results and our plans for future exploration of the factors that improve the effectiveness of virtual process training over real world experience.
['Mike Oren', 'Patrick E. Carlson', 'Stephen B. Gilbert', 'Judy M. Vance']
Puzzle assembly training: Real world vs. virtual environment
509,797
Protecting the register value and its data buses is crucial to reliable computing in high-performance microprocessors due to the increasing susceptibility of CMOS circuitry to soft errors induced by high-energy particle strikes. Since the register file is in the critical path of the processor pipeline, any reliable design that increases either the pressure on the register file or the register file access latency is not desirable. In this paper, we propose to exploit narrow-width register values, which represent the majority of the generated values, for duplicating a copy of the value within the same data item, called in-register duplication (IRD), eliminating the requirement of additional copy registers. The datapath pipeline is augmented to efficiently incorporate parity encoding and parity checking such that error recovery is seamlessly supported in IRD and the parity checking is overlapped with the execution stage to avoid increasing the critical path. Our experimental evaluation using the SPEC CINT2000 benchmark suite shows that IRD provides superior read-with-duplicate (RWD) and error detection/recovery rates under heavy error injection as compared to previous reliability schemes.
['Jie S. Hu', 'Shuai Wang', 'Sotirios G. Ziavras']
In-Register Duplication: Exploiting Narrow-Width Value for Improving Register File Reliability
13,392
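A minimal software sketch of the idea described above. The widths (64-bit register, 32-bit narrow values) and the single parity bit per value are illustrative assumptions; the paper's actual pipeline encoding is more involved.

```python
REG_BITS = 64                    # assumed register width (illustrative)
HALF = REG_BITS // 2             # "narrow" = fits in the low 32 bits
MASK = (1 << HALF) - 1

def parity(v: int) -> int:
    """Even-parity bit of v."""
    return bin(v).count("1") & 1

def write_reg(value: int) -> dict:
    """Store a value; a narrow value is duplicated into the upper half of
    the same register, so no extra copy register is needed."""
    narrow = (value >> HALF) == 0
    reg = ((value << HALF) | value) if narrow else value
    return {"reg": reg, "narrow": narrow, "par": parity(value)}

def read_reg(r: dict) -> int:
    """Read-with-duplicate: cross-check the two copies; on mismatch, the
    stored parity bit identifies the uncorrupted copy (single-bit errors)."""
    if not r["narrow"]:
        return r["reg"]              # wide values are left unprotected here
    lo, hi = r["reg"] & MASK, r["reg"] >> HALF
    if lo == hi:
        return lo
    return lo if parity(lo) == r["par"] else hi

r = write_reg(0xBEEF)
r["reg"] ^= 1 << 3                   # inject a soft error into the low copy
assert read_reg(r) == 0xBEEF         # value recovered from its duplicate
```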
Data loss or leakage occurs in many organizations, frequently with significant impacts, both in terms of incident-handling costs and of damage to the organization's reputation. In this paper, the author considers information leakage related to portable storage (for example, your laptop hard disk) and what might best mitigate that. This article briefly considers some recent incidents, describes practical mitigation steps, and looks at how we might plan, in advance, for handling such events.
['Stephen Farrell']
Portable Storage and Data Loss
420,416
People's interests are dynamically evolving, often affected by external factors such as trends promoted by the media or adopted by their friends. In this work, we model interest evolution through dynamic interest cascades: we consider a scenario where a user's interests may be affected by (a) the interests of other users in her social circle, as well as (b) suggestions she receives from a recommender system. In the latter case, we model user reactions through either attraction or aversion towards past suggestions. We study this interest evolution process, and the utility accrued by recommendations, as a function of the system's recommendation strategy. We show that, in steady state, the optimal strategy can be computed as the solution of a semi-definite program (SDP). Using datasets of user ratings, we provide evidence for the existence of aversion and attraction in real-life data, and show that our optimal strategy can lead to significantly improved recommendations over systems that ignore aversion and attraction.
['Wei Lu', 'Stratis Ioannidis', 'Smriti Bhagat', 'Laks V. S. Lakshmanan']
Optimal recommendations under attraction, aversion, and social influence
301,985
Visualization of Rule Behaviour in Active Databases.
['Tomas Fors']
Visualization of Rule Behaviour in Active Databases.
546,925
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator–prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; there are local bifurcations such as saddle-node and Hopf bifurcations; optimal harvesting is also investigated. Numerical simulations are provided to illustrate each result.
['Israel Tankam', 'Plaire Tchinda Mouofo', 'Abdoulaye Mendy', 'Mountaga Lam', 'Jean Jules Tewa', 'Samuel Bowong']
Local Bifurcations and Optimal Theory in a Delayed Predator–Prey Model with Threshold Prey Harvesting
564,610
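A minimal numerical sketch of such a model. The specific equations below (logistic prey growth, Holling type III response x^2/(a + x^2), a delay in the predator's numerical response, a piecewise-linear threshold harvesting term) and all parameter values are illustrative assumptions, not the paper's exact system.

```python
import numpy as np

# Illustrative parameters; the paper's exact equations and values may differ.
r, K = 1.0, 10.0              # prey growth rate and carrying capacity
c, a = 1.0, 4.0               # Holling type III response: c*x^2 / (a + x^2)
e, m = 0.5, 0.3               # conversion efficiency, predator mortality
tau = 1.5                     # delay in the predator's numerical response
T1, T2, hmax = 4.0, 6.0, 0.8  # piecewise-linear threshold harvesting

def harvest(x):
    """Threshold policy: no harvest below T1, linear ramp on [T1, T2],
    constant maximal harvesting above T2."""
    if x < T1: return 0.0
    if x > T2: return hmax
    return hmax * (x - T1) / (T2 - T1)

dt, n = 0.01, 20000
lag = int(tau / dt)
x = np.full(n, 6.0)           # constant history on [-tau, 0]
y = np.full(n, 1.0)
for k in range(lag, n - 1):
    fx  = c * x[k]**2 / (a + x[k]**2)            # current response
    fxd = c * x[k-lag]**2 / (a + x[k-lag]**2)    # delayed response
    x[k+1] = x[k] + dt * (r*x[k]*(1 - x[k]/K) - fx*y[k] - harvest(x[k]))
    y[k+1] = y[k] + dt * (e*fxd*y[k-lag] - m*y[k])
print(x[-1], y[-1])  # whether this settles or oscillates depends on tau
```

Sweeping tau in such a simulation is a standard way to observe the Hopf bifurcation numerically.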
Organizations apply information security risk assessment (ISRA) methodologies to systematically and comprehensively identify information assets and related security risks. We review the ISRA literature and identify three key deficiencies in current methodologies that stem from their traditional accountancy-based perspective and a limited view of organizational "assets". In response, we propose a novel rich description method (RDM) that adopts a less formal and more holistic view of information and knowledge assets that exist in modern work environments. We report on an in-depth case study to explore the potential for improved asset identification enabled by the RDM compared to traditional ISRAs. The comparison shows how the RDM addresses the three key deficiencies of current ISRAs by providing: 1) a finer level of granularity for identifying assets, 2) a broader coverage of assets that reflects the informal aspects of business practices, and 3) the identification of critical knowledge assets.
['Piya Shedden', 'Atif Ahmad', 'Wally Smith', 'Heidi Tscherning', 'Rens Scheepers']
Asset Identification in Information Security Risk Assessment: A Business Practice Approach
924,760
IP Multimedia Subsystem (IMS) is considered the main solution for next-generation multimedia communication. By examining potential security threats to IMS, we put forward a set of candidate security policies and consider their Quality of Protection (QoP) based on the strength of the security mechanism. In order to provide secure service to users, it is insufficient to take only the security benefits into consideration. An adequate quantitative analysis of the impact of IMS security policies on performance is necessary and significant. We present a novel study of IMS performance using Queuing Petri Nets (QPN) to predict system performance metrics, i.e., signaling delay and server utilization, which are then used to evaluate the impact of different security mechanisms on system performance quantitatively. With the multi-view security partition introduced, multi-level security service can be provided to diverse users and applications for the best tradeoff between security requirements and system performance.
['Kai Wang', 'Chuang Lin', 'Fangqin Liu']
Quality of Protection with Performance Analysis in IP Multimedia Subsystem
270,588
A model of small airship's propelling
['Milan Adamek', 'Martin Pospisilik', 'Petr Neumann', 'Rui Miguel Soares Silva']
A model of small airship's propelling
595,596
Information-Centric Networking (ICN) reconsiders the host-centric Internet paradigm with a view to information or content-based identifiers and where multicast data delivery is the norm. However, Wi-Fi, the predominant means of local wireless connectivity today, but also 3G and 4G technologies, are known to suffer from poor multicast performance. In this work, we consider exploiting content awareness, which is inherent in ICN architectures, to improve wireless multicast delivery by means of relaying. In particular, given that different types of content have different performance requirements, we provide a multi-objective optimization formulation for the problem of activating appropriate subsets of users as relays and deciding on their transmission rates, optimizing for different criteria, such as reliability, performance, and energy cost on a per-content item basis. Based on that, we propose a heuristic algorithm to select relay-rate assignments, showing it to outperform standard wireless multicast transmission strategies and also to be feasible to operate on top of resource-constrained off-the-shelf wireless equipment. Finally, we demonstrate how our scheme could be utilized for multicasting scalable video with improved Quality of Experience.
['Pantelis A. Frangoudis', 'George C. Polyzos', 'Gerardo Rubino']
Relay-based multipoint content delivery for wireless users in an information-centric network
809,221
This paper considers a network of agents described by an undirected graph that seek to solve a convex optimization problem with separable objective function and coupling equality and inequality constraints. Both the objective function and the inequality constraints are Lipschitz continuous. We assume that the constraints are compatible with the network topology in the sense that, if the state of an agent is involved in the evaluation of any given local constraint, this agent is able to fully evaluate it with the information provided by its neighbors. Building on the saddle-point dynamics of an augmented Lagrangian function, we develop provably correct distributed continuous-time coordination algorithms that allow each agent to find their component of the optimal solution vector along with the optimal Lagrange multipliers for the equality constraints in which the agent is involved. Our technical approach combines notions and tools from nonsmooth analysis, set-valued and projected dynamical systems, viability theory and convex programming.
['Simon K. Niederlander', 'Jorge Cortes']
Distributed coordination for separable convex optimization with coupling constraints
654,531
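For orientation, the unprojected textbook form of saddle-point dynamics for min f(x) subject to Ax = b and g(x) <= 0, with Lagrangian L(x, lambda, mu) = f(x) + lambda^T(Ax - b) + mu^T g(x); the paper works with an augmented Lagrangian and projected, set-valued refinements of these flows.

```latex
% Generic (unaugmented, unprojected) saddle-point dynamics.
\dot{x} = -\nabla_x L(x,\lambda,\mu), \qquad
\dot{\lambda} = \nabla_\lambda L(x,\lambda,\mu) = Ax - b, \qquad
\dot{\mu} = \bigl[\, g(x) \,\bigr]^{+}_{\mu},
```

where the projection $[\cdot]^+_\mu$ keeps the inequality multipliers nonnegative, $\mu \ge 0$.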
Research on Semantic Web Services Composing System Based on Multi-Agent
['Junwei Luo', 'Huimin Luo']
Research on Semantic Web Services Composing System Based on Multi-Agent
618,142
In the context of 3D video systems, depth information could be used to render a scene from additional viewpoints. Although there have been many recent advances in this area, including the introduction of the Microsoft Kinect sensor, the robust acquisition of such information continues to be a challenge. This article reviews three depth-sensing approaches for 3DTV. The authors discuss several approaches for acquiring depth information and provide a comparative analysis of their characteristics.
['Sebastian Schwarz', 'Roger Olsson', 'Mårten Sjöström']
Depth Sensing for 3DTV: A Survey
428,478
Image mosaicing is a technique widely used for extending the field of view of industrial, medical, outdoor or indoor scenes. However, image registration can be very challenging, e.g. due to large texture variability, illumination changes, image blur and camera perspective changes. In this paper, a total variational optical flow approach is investigated to estimate dense point correspondences between image pairs. An edge preserving Riesz wavelet scale-space combined with a novel TV-regularizer is proposed for preserving motion discontinuities along the edges of weak textures and for handling strong in-plane rotations present in image sequences. An anisotropic weighted median filtering is implemented for minimizing outliers. Quantitative evaluation of the method on the Middlebury image database and simulated sequences with known ground truth demonstrates high accuracy of the proposed method in comparison with other state-of-the-art methods, including a robust graph-cut method and a patch matching approach. Qualitative results on video-sequences of difficult real scenes demonstrate the robustness of the proposed method.

Highlights:
- Mosaicing of images with strong texture and illumination variability.
- Fast, robust and accurate optical flow computation.
- TV-L1 method on a second order Riesz wavelet basis for preserving texture discontinuities.
- Novel anisotropic TV-regularizer.
- Accurate results for both the Middlebury data set and various complicated scenes.
['Sharib Ali', 'Christian Daul', 'Ernest Galbrun', 'François Guillemin', 'Walter Blondel']
Anisotropic motion estimation on edge preserving Riesz wavelets for robust video mosaicing
568,803
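For reference, the classical TV-L1 optical flow energy that such methods minimize; the paper replaces the isotropic TV term below with its anisotropic, Riesz-wavelet-domain regularizer.

```latex
% Classical TV-L1 energy over the flow u = (u_1, u_2); lambda weighs the data term.
E(\mathbf{u}) = \int_{\Omega}
  \lambda \,\bigl| I_1(\mathbf{x} + \mathbf{u}(\mathbf{x})) - I_0(\mathbf{x}) \bigr|
  + \bigl| \nabla u_1(\mathbf{x}) \bigr| + \bigl| \nabla u_2(\mathbf{x}) \bigr|
  \;\mathrm{d}\mathbf{x}
```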
CrowdSR: A Crowd Enabled System for Semantic Recovering of Web Tables
['Huaxi Liu', 'Ning Wang', 'Xiangran Ren']
CrowdSR: A Crowd Enabled System for Semantic Recovering of Web Tables
795,046
Performance analysis of a tactile sensor
['David M. Siegel', 'Steven M. Drucker', 'Iñaki Garabieta']
Performance analysis of a tactile sensor
165,628
An upper bound on the feedback capacity of unifilar finite-state channels (FSCs) is derived. A new technique, called the $Q$-context mapping, is based on a construction of a directed graph that is used for a sequential quantization of the receiver's output sequences to a finite set of contexts. For any choice of $Q$-graph, the feedback capacity is bounded by a single-letter expression, $C_{\mathrm{fb}} \leq \sup I(X,S;Y|Q)$, where the supremum is over $p(x|s,q)$ and the distribution of $(S,Q)$ is their stationary distribution. It is shown that the bound is tight for all unifilar FSCs where feedback capacity is known: channels where the state is a function of the outputs, the trapdoor channel, Ising channels, the no-consecutive-ones input-constrained erasure channel, and the memoryless channel. Its efficiency is also demonstrated by deriving a new capacity result for the dicode erasure channel; the upper bound is obtained directly from the above-mentioned expression and its tightness is concluded with a general sufficient condition on the optimality of the upper bound. This sufficient condition is based on a fixed point principle of the BCJR equation and, indeed, is formulated as a simple lower bound on the feedback capacity of unifilar FSCs for arbitrary $Q$-graphs. This upper bound indicates that a single-letter expression might exist for the capacity of finite-state channels with or without feedback, based on a construction of an auxiliary random variable with a specified structure, such as the $Q$-graph, and not with an i.i.d. distribution. The upper bound also serves as a non-trivial bound on the capacity of channels without feedback, a problem that is still open.
['Oron Sabag', 'Haim H. Permuter', 'Henry D. Pfister']
A Single-Letter Upper Bound on the Feedback Capacity of Unifilar Finite-State Channels
957,379
We investigate pointing at graphical targets of arbitrary shapes. We first describe a previously proposed probabilistic Fitts' law model [7] which, unlike previous models that only account for rectangular targets, has the potential to handle arbitrary shapes. Three methods of defining the centers of arbitrarily shaped targets for use within the model are developed. We compare these methods of defining target centers, and validate the model using a pointing experiment in which the targets take on various shapes. Results show that the model can accurately account for the varying target shapes. We discuss the implications of our results to interface design.
['Tovi Grossman', 'Nicholas Kong', 'Ravin Balakrishnan']
Modeling pointing at targets of arbitrary shapes
195,897
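For context, the standard Shannon formulation of Fitts' law that such models extend; the cited probabilistic model generalizes the fixed width W so that arbitrarily shaped targets (via a suitable definition of target center) can be handled.

```latex
% Movement time to a target of width W at distance (amplitude) A;
% a and b are empirically fitted constants.
MT = a + b \log_2\!\left( \frac{A}{W} + 1 \right)
```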
Phonological Encoding of Sentence Production
['Caitlin Hilliard', 'Katrina Furth', 'T. Florian Jaeger']
Phonological Encoding of Sentence Production
977,296
This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on/off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved.
['Vinicius Petrucci', 'Enrique V. Carrera', 'Orlando Loques', 'Julius C. B. Leite', 'Daniel Mossé']
Optimized Management of Power and Performance for Virtualized Heterogeneous Server Clusters
36,301
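A toy version of the kind of mixed integer program the abstract describes, using PuLP. The server names, power figures, capacities, and switching penalty below are hypothetical, and the paper's model additionally covers VM relocation and workload balancing.

```python
# A toy power/performance MIP; all names and coefficients are hypothetical.
import pulp

servers = {"s1": (100, 220), "s2": (80, 160), "s3": (120, 300)}  # (capacity, watts)
demand = 150
was_on = {"s1": 1, "s2": 0, "s3": 1}       # previous configuration
switch_cost = 30                           # penalty for toggling a server

prob = pulp.LpProblem("power_mgmt", pulp.LpMinimize)
on = {s: pulp.LpVariable(f"on_{s}", cat="Binary") for s in servers}
tog = {s: pulp.LpVariable(f"tog_{s}", cat="Binary") for s in servers}

# Objective: power draw of active servers plus switching penalties.
prob += pulp.lpSum(servers[s][1] * on[s] + switch_cost * tog[s] for s in servers)
# Performance guarantee: enough aggregate capacity for the workload.
prob += pulp.lpSum(servers[s][0] * on[s] for s in servers) >= demand
# Linearized |on - was_on| so that turning servers on/off is charged.
for s in servers:
    prob += tog[s] >= on[s] - was_on[s]
    prob += tog[s] >= was_on[s] - on[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(on[s].value()) for s in servers}, pulp.value(prob.objective))
```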
We present a practical system which can provide a textured full-body avatar within 3 s. It uses sixteen RGB-depth (RGB-D) cameras, ten of which are arranged to capture the body, while six target the important head region. The configuration of the multiple cameras is formulated as a constraint-based minimum set space-covering problem, which is approximately solved by a heuristic algorithm. The camera layout determined can cover the full-body surface of an adult, with geometric errors of less than 5 mm. After arranging the cameras, they are calibrated using a mannequin before scanning real humans. The 16 RGB-D images are all captured within 1 s, which both avoids the need for the subject to attempt to remain still for an uncomfortable period, and helps to keep pose changes between different cameras small. All scans are combined and processed to reconstruct the photorealistic textured mesh in 2 s. During both system calibration and working capture of a real subject, the high-quality RGB information is exploited to assist geometric reconstruction and texture stitching optimization.
['Shuai Lin', 'Yin Chen', 'Yu-Kun Lai', 'Ralph Robert Martin', 'Zhi-Quan Cheng']
Fast capture of textured full-body avatar with RGB-D cameras
726,916
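The camera-layout step above is formulated as a minimum set space-covering problem solved heuristically. A natural sketch, which may differ from the paper's constraint-based heuristic, is greedy set cover over hypothetical visibility sets.

```python
def greedy_camera_cover(candidates, surface):
    """Greedy set cover: repeatedly pick the candidate camera pose that sees
    the most still-uncovered surface samples. `candidates` maps a pose id to
    the set of surface sample ids it sees (hypothetical visibility data)."""
    uncovered, chosen = set(surface), []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("remaining surface cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Toy visibility sets for five candidate poses over 8 surface samples.
cams = {"front": {1, 2, 3}, "back": {4, 5, 6}, "left": {2, 4, 7},
        "right": {3, 5, 8}, "top": {7, 8}}
print(greedy_camera_cover(cams, surface=range(1, 9)))
```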
Scalars, arrays, and records, together with associated operations and syntax, have been introduced as special cases of relations into the relational programming system, relix. This permits all of these data types, as well as relations, to be stored persistently. The requirement in most languages that array elements and record fields can be assigned to leads in this case to the general implementation of QT-selectors as l-expressions, with, in particular, systematic interpretations of assignment to projections and selections of relations. The authors discuss the principles and the implementation of this extension to the relational algebra. They take advantage of the very specialized syntax of array access to build a tuned access method, using B-trees and Z-order. The performance results show the advantage of this implementation over the slower implementation required for general QT-selectors.
['T. H. Merrett', 'Normand Laliberte']
Including scalars in a programming language based on the relational algebra
117,323
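A sketch of the Z-order (Morton) encoding mentioned above, which lets a one-dimensional ordered structure such as a B-tree index 2-D array elements while preserving locality; the system's actual key layout is not specified in the abstract.

```python
def z_encode(i: int, j: int, bits: int = 16) -> int:
    """Interleave the bits of (i, j) into one Morton (Z-order) key, so a
    1-D ordered index such as a B-tree preserves 2-D locality."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b + 1)
        key |= ((j >> b) & 1) << (2 * b)
    return key

def z_decode(key: int, bits: int = 16):
    i = j = 0
    for b in range(bits):
        i |= ((key >> (2 * b + 1)) & 1) << b
        j |= ((key >> (2 * b)) & 1) << b
    return i, j

# Array element A[i][j] is stored under z_encode(i, j): nearby elements tend
# to get nearby keys, which keeps range scans over the B-tree efficient.
assert z_decode(z_encode(5, 9)) == (5, 9)
```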
A Successful OSS Adaptation and Integration in an e-Learning Platform: TEC Digital
['Mario Chacon-Rivas', 'Cesar Garita']
A Successful OSS Adaptation and Integration in an e-Learning Platform: TEC Digital
627,875
Selection of the Coordination Strategy in the Network MIMO Downlink
['Sebastian Stern', 'Robert F. H. Fischer']
Selection of the Coordination Strategy in the Network MIMO Downlink
623,038
Suppose $D$ is a bounded, connected, open set in $\mathbb{R}^n$ and $f$ is a smooth function on $\mathbb{R}^n$ with support in $\overline{D}$. We study the recovery of $f$ from the mean values of $f$ over spheres centered on a part or the whole boundary of $D$. For strictly convex $\overline{D}$, we prove uniqueness when the centers are restricted to an open subset of the boundary. We provide an inversion algorithm (with proof) when the mean values are known for all spheres centered on the boundary of $D$, with radii in the interval $[0, \mathrm{diam}(D)/2]$. We also give an inversion formula when $D$ is a ball in $\mathbb{R}^n$, $n \geq 3$ and odd, and the mean values are known for all spheres centered on the boundary.
['David Finch', 'Rakesh', 'Sarah K. Patch']
Determining a Function from Its Mean Values Over a Family of Spheres
317,038
In this paper, we present a method for data classification with application to car/non-car objects. We first developed a sample-based car/non-car maximal-mutual-information low-dimensional subspace. We then trained a support vector machine (SVM) in this subspace for the detection of cars. Using publicly available standard training and testing data sets, we demonstrated that our car detector gives very competitive performance.
['Jianzhong Fang', 'Guoping Qiu']
Car/Non-Car Classification in an Informative Sample Subspace
408,639
We provide an updated version of the program hex-ecs originally presented in Comput. Phys. Commun. 185 (2014) 2903–2912. The original version used an iterative method preconditioned by the incomplete LU factorization (ILU), which, though very stable and predictable, requires a large amount of working memory. In the new version we implemented a "separated electrons" (or "Kronecker product approximation", KPA) preconditioner as suggested by Bar-On et al., Appl. Num. Math. 33 (2000) 95–104. This preconditioner has much lower memory requirements, though in return it requires more iterations to reach converged results. By careful choice between the ILU and KPA preconditioners one is able to extend the computational feasibility to larger calculations.

Secondly, we added the option to run the KPA preconditioner on an OpenCL device (e.g. a GPU). GPUs generally have better memory access times, which particularly speeds up the sparse matrix multiplication.

New version program summary

Program title: hex-ecs
Catalogue identifier: AETI_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETI_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: MIT License
No. of lines in distributed program, including test data, etc.: 73693
No. of bytes in distributed program, including test data, etc.: 520475
Distribution format: tar.gz
Programming language: C++11.
Computer: Any recent CPU, preferably 64-bit. Computationally intensive parts can be run on a GPU (tested on AMD Tahiti and NVidia TitanX models).
Operating system: Tested on Windows 10 and various Linux distributions.
RAM: Depends on the problem solved and the particular setup; the KPA test run uses approx. 300 MiB.
Classification: 2.4.
Catalogue identifier of previous version: AETI_v1_0
Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 2903
External routines: GSL [1], UMFPACK [2], BLAS and LAPACK (ideally threaded OpenBLAS [3]).
Does the new version supersede the previous version?: Yes
Nature of problem: Solution of the two-particle Schrodinger equation in a central field.
Solution method: The two-electron states are expanded into angular momentum eigenstates, which gives rise to the coupled bi-radial equations. The bi-radially dependent solution is then represented in a B-spline product basis, which transforms the set of equations into a large matrix equation in this basis. The boundary condition is of Dirichlet type, thanks to the use of the exterior complex scaling method, which extends the coordinates into the complex plane. The matrix equation is then solved by the preconditioned conjugate orthogonal conjugate gradient method (PCOCG) [4].
Reasons for new version: The original program has been updated to achieve better performance. Also, some external dependencies have been removed (HDF5, FFTW3), which simplifies deployment.
Summary of revisions: We implemented a new preconditioner introduced in [5], both for general CPUs and for an arbitrary OpenCL device (e.g. a GPU) conforming to the OpenCL 2.0 specification. Furthermore, many other minor improvements have been made, particularly with the intention of reducing the memory requirements. With appropriate switches the program now does not precompute the matrices used and only calculates their elements on the fly. This is aided also by the vectorized B-spline evaluation function, which can now make use of AVX instructions when a single B-spline is being evaluated at several points. The accompanying tools hex-db and hex-dwba [6] have also been updated to use the shared code base.
Running time: KPA test run: approx. 2 minutes on Intel i7-4790K (4 threads)

References:
[1] Galassi M. et al., GNU Scientific Library: Reference Manual, Network Theory Ltd., 2003.
[2] Davis T. A., Algorithm 832: UMFPACK, an unsymmetric-pattern multifrontal method, ACM Trans. Math. Softw. 30 (2004) 196–199.
[3] Xianyi Z. et al., Model-driven Level 3 BLAS Performance Optimization on Loongson 3A Processor, 2012 IEEE 18th International Conference on Parallel and Distributed Systems (ICPADS), 17–19 Dec. 2012.
[4] van der Vorst H. A., Melissen J. B. M., A Petrov–Galerkin type method for solving Ax = b, where A is symmetric complex, IEEE Trans. Magn. 26 (1990) 706–708.
[5] Bar-On et al., Parallel solution of the multidimensional Helmholtz/Schroedinger equation using high order methods, Appl. Num. Math. 33 (2000) 95–104.
[6] Benda J., Houfek K., Collisions of electrons with hydrogen atoms I. Package outline and high energy code, Comput. Phys. Commun. 185 (2014) 2893–2902.
['Jakub Benda', 'Karel Houfek']
New version of hex-ecs, the B-spline implementation of exterior complex scaling method for solution of electron–hydrogen scattering
711,567
Introduction: The rapid scale-up of HIV care and treatment in resource-limited countries requires concurrent, rapid development of health information systems to support quality service delivery. Mozambique, a country with an 11.5% prevalence of HIV, has developed nation-wide patient monitoring systems (PMS) with standardized reporting tools, utilized by all HIV treatment providers in paper or electronic form. Evaluation of the initial implementation of PMS can inform and strengthen future development as the country moves towards a harmonized, sustainable health information system.

Objective: This assessment was conducted in order to 1) characterize data collection and reporting processes and PMS resources available and 2) provide evidence-based recommendations for harmonization and sustainability of PMS.

Methods: This baseline assessment of PMS was conducted with eight non-governmental organizations that supported the Ministry of Health to provide 90% of HIV care and treatment in Mozambique. The study team conducted structured and semi-structured surveys at 18 health facilities located in all 11 provinces. Seventy-nine staff were interviewed. Deductive a priori analytic categories guided analysis.

Results: Health facilities have implemented paper and electronic monitoring systems with varying success. Where in use, robust electronic PMS facilitate facility-level reporting of required indicators; improve ability to identify patients lost to follow-up; and support facility and patient management. Challenges to implementation of monitoring systems include a lack of national guidelines and norms for patient level HIS, variable system implementation and functionality, and limited human and infrastructure resources to maximize system functionality and information use.

Conclusions: This initial assessment supports the need for national guidelines to harmonize, expand, and strengthen HIV-related health information systems. Recommendations may benefit other countries with similar epidemiologic and resource-constrained environments seeking to improve PMS implementation.
['Mindy Hochgesang', 'Sophia Zamudio-Haas', 'Lissa Moran', 'Leopoldo Nhampossa', 'Laura Packel', 'Hannah Leslie', 'Janise Richards', 'Starley B. Shade']
Scaling-up health information systems to improve HIV treatment: An assessment of initial patient monitoring systems in Mozambique
945,798
At signalized intersections, the decision-making process of each individual driver is a very complex process that involves many factors. In this article, a fuzzy cellular automata (FCA) model, which incorporates traditional cellular automata (CA) and fuzzy logic (FL), is developed to simulate the decision-making process and estimate the effect of driving behavior on traffic performance. Different from existing models and applications, the proposed FCA model utilizes fuzzy inference systems (FISs) and membership functions to simulate the cognition system of individual drivers. Four FISs are defined, one for each decision-making process: car-following, lane-changing, amber-running, and right-turn filtering. A field observation study is conducted to calibrate membership functions of input factors and model parameters, and to validate the proposed FCA model. Simulation experiments of a two-lane system show that the proposed FCA model is able to replicate decision-making processes and estimate the effect on overall traffic performance.
['Chen Chai', 'Yiik Diew Wong']
Fuzzy Cellular Automata Model for Signalized Intersections
549,911
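A minimal sketch of one fuzzy inference step of the kind described above for car-following. The triangular membership functions, the two-rule base, and the output values below are illustrative assumptions, not the calibrated functions from the field study.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c: return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def car_following_decel(gap_m, closing_speed_ms):
    """Two illustrative rules: IF gap small AND closing speed high THEN brake
    hard; IF gap large THEN coast. Mamdani min for AND, weighted-average
    defuzzification over representative outputs (m/s^2)."""
    gap_small  = tri(gap_m, 0, 5, 15)
    closing_hi = tri(closing_speed_ms, 2, 8, 14)
    gap_large  = tri(gap_m, 10, 30, 60)
    strengths = [min(gap_small, closing_hi), gap_large]  # rule firing strengths
    outputs   = [-6.0, 0.0]                              # brake hard / coast
    total = sum(strengths)
    return sum(s * o for s, o in zip(strengths, outputs)) / total if total else 0.0

print(car_following_decel(gap_m=4.0, closing_speed_ms=9.0))  # strong braking
```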
This paper presents a topology reconstruction method to explore better trade-off points between search and access load balancing performance in unstructured peer-to-peer (P2P) file sharing networks. The proposed topology reconstruction method changes a network topology in a dynamic, autonomous, and decentralized manner. The topology reconstruction is based on local threshold-based rules that use query trails, which stand for information on previous successful search paths. A power-law network is used as the initial network in simulations. The simulation results show that, depending on the setting of the threshold values, compared to the case without topology reconstruction, the proposed method can explore better trade-off points between search and storage access load balancing performance.
['Kei Ohnishi', 'S. Nagamatsu', 'Yuji Oie']
Performance Trade-off Exploration by Query-Trail-Mediated Topology Reconstruction in Unstructured P2P Networks
535,907
The Shape of Digital Transformation: A Systematic Literature Review
['Emily Henriette', 'Mondher Feki', 'Imed Boughzala']
The Shape of Digital Transformation: A Systematic Literature Review
632,649
In the underlay cognitive radio network, the secondary users are allowed to transmit concurrently with the primary users, which shows great potential to relieve the spectrum scarcity problem. However, it raises significant security issues at the physical layer due to the spectrum sharing approach. Different from most present research, in this paper we study the secure communication problem between secondary users where the primary receiver is a curious passive adversary, i.e., an eavesdropper. Specifically, we consider the worst but common case where the primary receiver can decode the messages from its primary transmitter successfully before trying to decode the secondary user's confidential messages. To protect these messages from being eavesdropped, we employ a full-duplex secondary receiver who can broadcast jamming signals to degrade the eavesdropper's channel quality while guaranteeing the quality of service (QoS) for primary transmission. Following this framework, we formulate a secrecy capacity maximization problem and obtain the optimal power allocation scheme to allocate power between the secondary transmitter and the full-duplex secondary receiver. Simulations are presented to show the superiority of the optimal scheme over baseline algorithms.
['Qian Xu', 'Pinyi Ren', 'Qinghe Du', 'Li Sun']
Secure Secondary Communications with Curious Primary Users in Cognitive Underlay Networks
830,611
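For reference, the standard secrecy-rate expression for a Gaussian wiretap setting, which a secrecy capacity maximization of this kind builds on; jamming from the full-duplex receiver acts by lowering the eavesdropper's effective SNR. The paper's exact formulation, including the primary QoS constraint, is more detailed.

```latex
% Standard Gaussian wiretap secrecy rate: the gap between the legitimate
% link's rate and the eavesdropper's, floored at zero ([z]^+ = max(z, 0)).
C_s = \Bigl[ \log_2\!\bigl(1 + \mathrm{SNR}_d\bigr)
           - \log_2\!\bigl(1 + \mathrm{SNR}_e\bigr) \Bigr]^{+}
```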
In this paper, we investigate the application of adaptive ensemble models of Extreme Learning Machines (ELMs) to the problem of one-step ahead prediction in (non)stationary time series. We verify that the method works on stationary time series and test the adaptivity of the ensemble model on a nonstationary time series. In the experiments, we show that the adaptive ensemble model achieves a test error comparable to the best methods, while keeping adaptivity. Moreover, it has low computational cost.
['Mark van Heeswijk', 'Yoan Miche', 'Tiina Lindh-Knuutila', 'Peter A. J. Hilbers', 'Timo Honkela', 'Erkki Oja', 'Amaury Lendasse']
Adaptive Ensemble Models of Extreme Learning Machines for Time Series Prediction
405,141
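The core of a single ELM, which the paper combines into an adaptive ensemble: a random hidden layer with output weights solved by least squares. The ensemble's adaptive re-weighting is the paper's contribution and is not reproduced in this sketch; the toy time series is synthetic.

```python
import numpy as np

class ELM:
    """Basic Extreme Learning Machine: random hidden layer, output weights
    solved by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random feature map
        self.beta = np.linalg.pinv(H) @ y     # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# One-step-ahead prediction on a toy noisy sine series with 5 lagged inputs.
rng = np.random.default_rng(1)
s = np.sin(np.arange(400) * 0.1) + 0.05 * rng.normal(size=400)
X = np.column_stack([s[i:-5 + i] for i in range(5)])
y = s[5:]
model = ELM().fit(X[:300], y[:300])
print(np.mean((model.predict(X[300:]) - y[300:]) ** 2))  # test MSE
```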
A set of p-valued logic gates (primitives) is called universal if an arbitrary p-valued logic function can be realized by a logic circuit built up from a finite number of gates belonging to this set. In this paper, we consider the problem of determining the number of universal single-gate libraries of p-valued reversible logic gates with two inputs and two outputs, under the assumption that constant signals can be applied to an arbitrary number of inputs. We have proved some properties of such gates and established that over 97% of ternary gates are universal.
['Pawel Kerntopf', 'Marek A. Perkowski', 'Mozammel H. A. Khan']
On universality of general reversible multiple-valued logic gates
476,267
In this paper, we propose a peptide folding prediction method which discovers contrast patterns to differentiate and predict peptide folding classes. A contrast pattern is defined as a set of sequentially associated amino acids which frequently appear in one type of folding but are significantly infrequent in other folding classes. Our hypothesis is that each type of peptide folding has its unique interaction patterns among peptide residues (amino acids). The role of contrast patterns is to act as signatures or features for prediction of a peptide's folding type. For this purpose, we propose a two-phase peptide folding prediction framework, where the first stage is to discover contrast patterns from different types of contrast datasets, followed by a learning process which uses all discovered patterns as features to build a supervised classifier for folding prediction. Experimental results on two benchmark protein datasets indicate that the proposed framework can outperform simple secondary structure prediction based approaches for peptide folding prediction.
['Chinar Shah', 'Xing-Quan Zhu', 'Taghi M. Khoshgoftaar', 'Justin Beyer']
Contrast Pattern Mining with Gap Constraints for Peptide Folding Prediction
235,285
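A minimal sketch of the selection criterion at the heart of contrast pattern mining with gap constraints. The support thresholds, the gap bound, and the candidate patterns below are illustrative; the paper's candidate generation is more sophisticated.

```python
import re

def support(pattern, sequences, max_gap=3):
    """Fraction of sequences containing the amino-acid pattern as a
    subsequence with at most max_gap residues between consecutive symbols."""
    # e.g. pattern "LAG" -> regex L.{0,3}A.{0,3}G
    rx = re.compile((".{0,%d}" % max_gap).join(map(re.escape, pattern)))
    return sum(bool(rx.search(s)) for s in sequences) / len(sequences)

def contrast_patterns(cands, pos, neg, min_pos=0.4, max_neg=0.1):
    """Keep candidates frequent in the target folding class (pos) but
    significantly infrequent in the other classes (neg)."""
    return [p for p in cands
            if support(p, pos) >= min_pos and support(p, neg) <= max_neg]

# Toy example with hypothetical folding classes.
fold_a = ["MLAGKV", "TLAAGE", "GLANGY", "PHLAGT"]
fold_b = ["MKKV", "TTEE", "GGYY", "PPHT"]
print(contrast_patterns(["LAG", "KK"], fold_a, fold_b))   # -> ['LAG']
```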
Purpose: The purpose of this paper is to study the empirical relationship between specialization, diversification and rate of survival in the digital publishing industry. The sample includes all publishing companies in Italy that produce electronic content and distribute it through internet platforms.

Design/methodology/approach: The first part of the paper discusses the pros and cons of specialization against diversification, and applies the related economic theories to the digital publishing industry. The empirical work regarding the factors that affect firm survival is reviewed. The second part is empirical and analyzes the diversification strategies of 2,838 Italian digital editors between 1995 and 2014, and the impact of diversification on the probability of survival.

Findings: On the whole, digital publishing companies that are also active in traditional print activities have been constantly declining. However, those who combine print and digital activities or operate other mass media businesses have a higher probability of surviving in the market. These findings hold controlling for firm size and market structure, before and after the economic crisis exploded in 2009, in different geographical areas and by different legal forms of publishing companies.

Research limitations/implications: As the industry often presents country-specific characteristics, the econometric analysis should also be integrated with case studies that highlight particular survival conditions.

Practical implications: The study provides mass media scholars as well as practitioners with detailed information on the digital publishing trends in the medium term.

Originality/value: This research is significant because, in the period under review, many digital native entrepreneurs with scarce experience entered the industry, targeted digital native consumers/readers and challenged traditional and established media conglomerates.
['Andrea Mangani', 'Elisa Tarrini']
Who survives a recession? Specialization against diversification in the digital publishing industry
981,210
The S transform is a useful linear time-frequency distribution with a progressive resolution. Since it is linear, it filters efficiently in a time-frequency domain by multiplying a mask function. Several different inverse algorithms exist, which may result in different filtering effects. The conventional inverse S transform (IST) proposed by Stockwell is efficient but suffers from time leakage during filtering. The recent algorithm proposed by Schimmel and Gallart has better time localization during filtering but suffers from a reconstruction error and the frequency leakage during filtering. In this paper, two new IST algorithms are proposed that have better time-frequency localization in filtering than the previous two methods.
['Soo-Chang Pei', 'Pai-Wei Wang']
Novel Inverse S Transform With Equalization Filter
363,661
Social Network Analysis of Learning Teams During Emergency Events.
['Jafar Hamra', 'Liaquat Hossain', 'Christine Owen']
Social Network Analysis of Learning Teams During Emergency Events.
631,347
The Web-based reflective tutorial dialogue system (W-ReTuDiS) is a system for personalized learning of historical text comprehension on the Web. The system offers a two-level open interface: a tutor level and a learner level. At the tutor level, the tutor manages the learner model and makes decisions concerning the appropriate activity and dialogue strategy for the learner according to his learner model, which is based on the diagnostic results. At the learner level, the learner participates in the construction of his learner model through dialogue activities, which promote reflective learning. The dialogue generator module, which is activated by the diagnostic results, plans the appropriate sequence of dialogue parts using the dialogue-parts library and constructs a personalized tutorial dialogue. The system promotes learners' personalized reflection to accomplish the learning goals, helps learners become aware of their reasoning, and leads them towards scientific thought.
['Maria Grigoriadou', 'Grammatiki Tsaganou', 'Theodora Cavoura']
Dialogue-based personalized reflective learning
305,345
With the proliferation of social media, consumers' cognitive costs during information-seeking can become non-trivial during an online shopping session. We propose a dynamic structural model of limited consumer search that combines an optimal stopping framework with an individual-level choice model. We estimate the parameters of the model using a dataset of approximately 1 million online search sessions resulting in bookings in 2117 U.S. hotels. The model allows us to estimate the monetary value of the search costs incurred by users of product search engines in a social media context. On average, searching an extra page on a search engine costs consumers $39.15, and examining an additional offer within the same page has a cost of $6.24. A good recommendation saves consumers, on average, $9.38, whereas a bad one costs $18.54. Our policy experiment strongly supports this finding by showing that the quality of ranking can have a significant impact on consumers' search efforts, and customized ranking recommendations tend to polarize the distribution of consumer search intensity. Our model-fit comparison demonstrates that the dynamic search model provides the highest overall predictive power compared to the baseline static models. Our dynamic model indicates that consumers have lower price sensitivity than a static model would have predicted, implying that consumers pay a lot of attention to nonprice factors during an online hotel search.
['Anindya Ghose', 'Panagiotis G. Ipeirotis', 'Beibei Li']
Search Less, Find More? Examining Limited Consumer Search with Social Media and Product Search Engines
176,013
In recent years Computer-Assisted Language Learning (CALL) systems have been widely used in foreign language education. Some systems use automatic speech recognition (ASR) technologies to detect pronunciation errors and estimate the proficiency level of individual students. When speech recording is done in a CALL classroom, however, utterances of a student are always recorded with those of the others in the same class. The latter utterances are just background noise, and the performance of automatic pronunciation assessment is degraded especially when a student is surrounded with very active students. To solve this problem, we apply a noise reduction technique, Stereo-based Piecewise Linear Compensation for Environments (SPLICE), and the compensated feature sequences are input to a Goodness Of Pronunciation (GOP) assessment system. Results show that SPLICE-based noise reduction works very well as a means to improve the assessment performance in a noisy classroom.
['Yi Luan', 'Masayuki Suzuki', 'Yutaka Yamauchi', 'Nobuaki Minematsu', 'Shuhei Kato', 'Keikichi Hirose']
Performance improvement of automatic pronunciation assessment in a noisy classroom
342,582
This paper presents a new form of Kalman filter, the sigmaRho filter, useful for operational implementation in applications where stability and throughput requirements stress traditional implementations. The new mechanization has the benefits of square root filters in both promoting stability and reducing dynamic range of propagated terms. State standard deviations and correlation coefficients are propagated rather than covariance square root elements, and these physically meaningful statistics are used to adapt the filtering for further ensuring reliable performance. Finally, all propagated variables can be scaled to predictable dynamic range so that fixed point procedures can be implemented for embedded applications. A sample problem from communications signal processing is presented that includes nonlinear state dynamics, extreme time-variation, and extreme range of system eigenvalues. The sigmaRho implementation is successfully applied at sample rates approaching 100 MHz to decode binary digital data from a 1.5-GHz carrier.
['Mohinder S. Grewal', 'James Kain']
Kalman Filter Implementation With Improved Numerical Properties
412,778
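The factorization that, as far as the abstract indicates, underlies the sigmaRho form: covariance entries expressed through standard deviations and correlation coefficients, which are the quantities the filter propagates. The notation below is mine, not the paper's.

```latex
% Each covariance entry factored into standard deviations and a correlation
% coefficient; the filter propagates sigma_i and rho_ij directly.
P_{ij} = \sigma_i \, \sigma_j \, \rho_{ij}, \qquad
\sigma_i = \sqrt{P_{ii}}, \qquad \rho_{ii} = 1, \qquad |\rho_{ij}| \le 1.
```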
We present a measure called collidability measure for obstacle avoidance control of redundant manipulators. Considering moving directions of manipulator links, the collidability measure is defined as the inverse of sum of predicted collision distances between links and obstacles. This measure is suitable for obstacle avoidance control since directions of moving links are as important as distances to obstacles. For dynamic redundancy resolution, null space control is utilized to avoid obstacles by minimizing the collidability measure. Also, by clarifying decomposition in the joint acceleration level, we present a simple dynamic control law with bounded joint torques which guarantees tracking of a given end-effector trajectory and improves a kinematic cost function such as collidability measure. Simulation results are presented to illustrate the effectiveness of the proposed algorithm.
['Su Il Choi', 'Byung Kook Kim']
Obstacle avoidance control for redundant manipulators using collidability measure
64,888
We present a new software tool for teaching logic based on natural deduction. Its proof system is formalized in the proof assistant Isabelle such that its definition is very precise. Soundness of the formalization has been proved in Isabelle. The tool is open source software developed in TypeScript / JavaScript and can thus be used directly in a browser without any further installation. Although developed for undergraduate computer science students who are used to studying and programming concrete computer code in a programming language, we consider the approach relevant for a broader audience and for other proof systems as well.
['Jørgen Villadsen', 'Alexander Birch Jensen', 'Anders Schlichtkrull']
NaDeA: A Natural Deduction Assistant with a Formalization in Isabelle
630,299
An Axiomatic Characterization of the Reliability Polynomial.
['Richard P. McLean', 'Douglas H. Blair']
An Axiomatic Characterization of the Reliability Polynomial.
760,179
Distributed Phishing Attacks.
['Markus Jakobsson', 'Adam L. Young']
Distributed Phishing Attacks.
776,480
An IP mobility support protocol that enables personal and terminal mobility for IP-based applications is put forward. This protocol does not require new network entities or support from network service providers. It comprises an innovative IP-to-IP address mapping module at the network layer and a user agent that interacts with a directory service server and correspondent nodes. It does not require a permanent IP address or a home server. It neither uses tunnelling on mobile nodes nor alters the route path of IP packets. In this paper, we describe our implementation and present our experimental results. Experiments show that this protocol works for UDP and TCP connections without affecting the throughput of the mobile node on a wireless LAN. Related works are also discussed and quantitatively compared. As an example, this protocol provides seamless execution for applications like VoIP and video conferencing on mobile nodes that roam across wireless networks.
['Bu-Sung Lee', 'Teck Meng Lim', 'Chai Kiat Yeo', 'Quang Vinh Le']
A Mobility Scheme for Personal and Terminal Mobility
444,227
ExplicitPRISMSymm: Symmetry Reduction Technique for Explicit Models in PRISM
['Reema Patel', 'Kevin Patel', 'Dhiren R. Patel']
ExplicitPRISMSymm: Symmetry Reduction Technique for Explicit Models in PRISM
648,254
Traffic analysis is the best known approach to uncover relationships amongst users of anonymous communication systems, such as mix networks. Surprisingly, all previously published techniques require very specific user behavior to break the anonymity provided by mixes. At the same time, it is also well known that none of the considered user models reflects realistic behavior which casts some doubt on previous work with respect to real-life scenarios. We first present a user behavior model that, to the best of our knowledge, is the least restrictive scheme considered so far. Second, we develop the Perfect Matching Disclosure Attack, an efficient attack based on graph theory that operates without any assumption on user behavior. The attack is highly effective when de-anonymizing mixing rounds because it considers all users in a round at once, rather than single users iteratively. Furthermore, the extracted sender-receiver relationships can be used to enhance user profile estimations. We extensively study the effectiveness and efficiency of our attack and previous work when de-anonymizing users communicating through a threshold mix. Empirical results show the advantage of our proposal. We also show how the attack can be refined and adapted to different scenarios including pool mixes, and how precision can be traded in for speed, which might be desirable in certain cases.
['Carmela Troncoso', 'Benedikt Gierlichs', 'Bart Preneel', 'Ingrid Verbauwhede']
Perfect Matching Disclosure Attacks
375,153
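A sketch of the attack's computational core as the abstract describes it: treating one mix round as a maximum-likelihood assignment of senders to receivers, i.e. a maximum-weight perfect matching on log-probabilities, computed here with the Hungarian algorithm. The profile matrix is made up for illustration; in the attack it is estimated from observed traffic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical user-profile estimates: p[i, j] = estimated probability that
# sender i sends to receiver j in this round (rows sum to 1).
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])

# The most likely joint sender-to-receiver assignment maximizes sum(log p),
# a maximum-weight perfect matching; minimize -log p with the Hungarian
# algorithm to de-anonymize the whole round at once.
senders, receivers = linear_sum_assignment(-np.log(p))
print(list(zip(senders, receivers)))   # -> [(0, 0), (1, 1), (2, 2)]
```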
Through a series of game-theoretical models, this study systematically examines decision making in cross-functional teams. It provides a framework for the design of an organization-specific decision-making process and for the alignment of a team's microdecision with the “optimal” decision that maximizes the firm's payoff. This study finds that even without changing the team leader, firms could change and even dictate the team's microdecision outcome via adjusting the team member's seniority, empowering team members with veto power or involving a supervisor as a threat to overrule the team decision. This finding implies that to reposition products in the marketplace, structuring cross-functional teams’ microdecision-making processes is essential.
['Zhijian Cui']
Decision making in cross-functional teams: The role of decision power
73,667
Fairground: Thrill Laboratory was a series of live events that augmented the experience of amusement rides. A wearable telemetry system captured video, audio, heart-rate and acceleration data, streaming them live to spectator interfaces and a watching audience. In this paper, we present a study of this event, which draws on video recordings and post-event interviews, and which highlights the experiences of riders, spectators and ride operators. Our study shows how the telemetry system transformed riders into performers, spectators into an audience, and how the role of ride operator began to include aspects of orchestration, with the relationship between all three roles also transformed. Critically, the introduction of a telemetry system seems to have had the potential to re-connect riders/performers back to operators/orchestrators and spectators/audience, re-introducing a closer relationship that used to be available with smaller rides. Introducing telemetry to a real-world situation also creates significant complexity, which we illustrate by focussing on a moment of perceived crisis.
['Holger Schnädelbach', 'Stefan Rennick Egglestone', 'Stuart Reeves', 'Steve Benford', 'Brendan Walker', 'Michael Wright']
Performing thrill: designing telemetry systems and spectator interfaces for amusement rides
467,821
Traffic load is often unevenly distributed among the access points in enterprise WLANs. Such load imbalance results in sub-optimal network throughput, unfair bandwidth allocation among users, and unsatisfactory user quality of experience. We have collected real traces from over 12,000 WiFi users at Shanghai Jiao Tong University. Through intensive data analysis, we find that the social behavior of users (e.g., leaving together) may cause a significant AP load imbalance problem. We also observe from the traces that users with similar application usage have the potential to leave together. Inspired by those observations, we propose a social-aware AP selection scheme (S3), which can actively learn the sociality information among users, trained with their historical application profiles, and elegantly assign users to different APs based on the obtained knowledge. Trace-driven simulation results show that S3 is feasible and can achieve better balancing performance when compared to state-of-the-art balance algorithms.
['Guangtao Xue', 'Yanmin Zhu', 'Zhenxian Hu', 'Hongzi Zhu', 'Chaoqun Yue', 'Jiadi Yu']
Characterizing sociality for user-friendly steady load balancing in enterprise WLANs
566,847
Conducting the Wizard-of-Oz Experiment.
['Melita Hajdinjak', 'France Mihelic']
Conducting the Wizard-of-Oz Experiment.
767,520
Along with the appearance of new optimization and control problems, novel paradigms emerge. A large number of them are based on behavioral ecology, where population dynamics play an important role. One of the best-known models of population dynamics is the replicator equation, whose applications in optimization and control have increased in recent years. This fact motivates the study of the replicator dynamics' properties that are related to the implementation of this method for solving optimization and control problems. This paper addresses implementation issues of the replicator equation in engineering problems. We show by means of Lyapunov theory that the replicator dynamics model is robust under perturbations that make the state leave the simplex (among other reasons, this phenomenon can emerge due to numerical errors of the solver employed to obtain the replicator dynamics' response). A refinement of these results is obtained by introducing a novel robust dynamical system inspired by the replicator equation that allows one to control and optimize plants under arbitrary initial conditions on the positive orthant. Finally, we characterize stability bounds of the replicator dynamics model in problems that involve N strategies subject to time delays. We illustrate our results via simulations.
['German D. Obando', 'Jorge I. Poveda', 'Nicanor Quijano']
Replicator dynamics under perturbations and time delays
834,433
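The replicator equation referred to above, in its standard form on the probability simplex; the notation is the conventional one, since the abstract does not fix it.

```latex
% The share x_i of strategy i grows iff its payoff f_i(x) exceeds the
% population average; the simplex \Delta^{N-1} is invariant under the flow.
\dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right), \qquad
\bar{f}(x) = \sum_{j=1}^{N} x_j f_j(x), \qquad x \in \Delta^{N-1}.
```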
The issue of seeking efficient and effective methods for classifying unstructured text in large document corpora has received much attention in recent years. Traditional document representation like bag-of-words encodes documents as feature vectors, which usually leads to sparse feature spaces with large dimensionality, thus making it hard to achieve high classification accuracies. This paper addresses the problem of classifying unstructured documents on the Web. A classification approach is proposed that utilizes traditional feature reduction techniques along with a collaborative filtering method for augmenting document feature spaces. The method produces feature spaces with an order of magnitude less features compared with a baseline bag-of-words feature selection method. Experiments on both real-world data and benchmark corpus indicate that our approach improves classification accuracy over the traditional methods for both support vector machines and AdaBoost classifiers.
['Yang Song', 'Ding Zhou', 'Jian Huang', 'Isaac G. Councill', 'Hongyuan Zha', 'C.L. Giles']
Boosting the Feature Space: Text Classification for Unstructured Data on the Web
382,609
Transportation project selection is one of the most important planning activities encountered by a government, especially in a developing city. In this paper, we explore the potential of applying the analytic network process (ANP) to evaluate transportation projects in Ningbo, China. ANP differs from traditional hierarchical analysis tools in that it allows feedback and interdependence among various decision levels and criteria. Compared with the conventional transportation evaluation methods, our model has incorporated a much wider range of long-term and short-term factors, which are classified into benefits, opportunities, costs, and risks. Tactical and operational issues are taken into consideration. The evaluation framework is comprehensive and flexible, and shows great potential for helping decision-makers and others concerned with the transportation decision-making process.
['Jennifer Shang', 'Youxu Cai Tjader', 'Yi-zhong Ding']
A unified framework for multicriteria evaluation of transportation projects
447,996
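For readers unfamiliar with ANP's machinery, the sketch below shows the priority-vector computation ANP/AHP builds on: the principal eigenvector of a reciprocal pairwise-comparison matrix yields criterion weights (ANP additionally arranges such vectors into a supermatrix to capture the feedback and interdependence the abstract mentions). The matrix values are hypothetical, not from the Ningbo study.

```python
import numpy as np

M = np.array([[1.0, 3.0, 0.5],
              [1/3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])    # reciprocal pairwise comparisons of 3 criteria

vals, vecs = np.linalg.eig(M)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                    # normalized weights (e.g., benefits/costs/risks)
print(w)
```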
Indoor positioning is considered an enabler for a variety of applications, and the demand for indoor positioning services has accelerated because people spend most of their time in indoor environments. Meanwhile, the powerful camera integrated into smartphones makes them an efficient platform for navigation and positioning. However, high-accuracy indoor positioning with a smartphone faces two constraints: (1) the limited computational and memory resources of the smartphone; and (2) users moving through large buildings. To address these issues, this paper uses TC-OFDM to calculate coarse positioning information, including horizontal and altitude components, to assist smartphone camera-based positioning. Moreover, a unified representation model of image features under a variety of scenarios, named FAST-SURF, is established for computing the fine location. Finally, an optimized marginalized particle filter is proposed for fusing the positioning information from TC-OFDM and images. The experimental results show that the wide-area location detection accuracy is 0.823 m (1σ) horizontally and 0.5 m vertically. Compared to WiFi-based and iBeacon-based positioning methods, our method is powerful while being easy to deploy and optimize.
['Jichao Jiao', 'Zhongliang Deng', 'Lianming Xu', 'Fei Li']
A Hybrid of Smartphone Camera and Basestation Wide-area Indoor Positioning Method
716,266
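A toy bootstrap particle filter fusing a coarse fix with a finer one, in one dimension for brevity; the paper's marginalized filter is more elaborate, and all noise levels and measurements below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0, 50, N)        # prior over position (metres)
weights = np.ones(N) / N

def update(particles, weights, z, sigma):
    """Reweight particles by a Gaussian likelihood around measurement z."""
    w = weights * np.exp(-0.5 * ((particles - z) / sigma) ** 2)
    return w / w.sum()

weights = update(particles, weights, z=20.0, sigma=3.0)   # coarse TC-OFDM-like fix
weights = update(particles, weights, z=18.5, sigma=0.8)   # finer camera-like fix

# Resample and report the fused estimate.
idx = rng.choice(N, N, p=weights)
particles = particles[idx] + rng.normal(0, 0.2, N)        # jitter against degeneracy
print(particles.mean())
```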
Automatic 3D model acquisition and 3D tracking of simple objects under motion using a single camera is often difficult due to the sparsity of information from which to establish the model. We have developed an automatic scheme that first computes a simple pointillistic Euclidean model of the object and then enriches this model using hyper-patches. These hyper-patches contain information on both the orientation and intensity pattern variation of roughly planar patches on an object. This information allows both the spatial and intensity distortions of the projected patch to be modelled accurately under 3D object motion. We show that hyper-patches not only can be computed automatically during model acquisition from a monocular image sequence, but that they are also extremely appropriate for the task of visual tracking.
['Charles Wiles', 'Atsuto Maki', 'Natsuko Matsuda', 'Mutsumi Watanabe']
Hyper-patches for 3D model acquisition and tracking
334,127
Our purpose in this study was to develop a computer-aided diagnosis (CAD) scheme for distinguishing between benign and malignant breast masses in dynamic contrast material-enhanced magnetic resonance imaging (DCE-MRI). Our database consisted of 90 DCE-MRI examinations, each of which contained four sequential phase images; this database included 28 benign masses and 62 malignant masses. In our CAD scheme, we first determined 11 objective features of masses by taking into account the image features and the dynamic changes in signal intensity that experienced radiologists commonly use for describing masses in DCE-MRI. Quadratic discriminant analysis (QDA) was employed to distinguish between benign and malignant masses. As the input of the QDA, a combination of four objective features was determined among the 11 objective features according to a stepwise method. These objective features were as follows: (i) the change in signal intensity from 2 to 5 min; (ii) the change in signal intensity from 0 to 2 min; (iii) the irregularity of the shape; and (iv) the smoothness of the margin. Using this approach, the classification accuracy, sensitivity, and specificity were shown to be 85.6 % (77 of 90), 87.1 % (54 of 62), and 82.1 % (23 of 28), respectively. Furthermore, the positive and negative predictive values were 91.5 % (54 of 59) and 74.2 % (23 of 31), respectively. Our CAD scheme therefore exhibits high classification accuracy and is useful in the differential diagnosis of masses in DCE-MRI images.
['Emi Honda', 'Ryohei Nakayama', 'Hitoshi Koyama', 'Akiyoshi Yamashita']
Computer-Aided Diagnosis Scheme for Distinguishing Between Benign and Malignant Masses in Breast DCE-MRI
573,115
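A minimal QDA classifier over the four listed features, assuming synthetic data (the study's measurements are not reproduced here); `reg_param` is added only to keep the toy covariances well-conditioned.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Columns: dSI 2-5 min, dSI 0-2 min, shape irregularity, margin smoothness
X = np.array([[0.10, 0.80, 0.70, 0.20],
              [0.05, 0.90, 0.80, 0.10],
              [0.12, 0.70, 0.60, 0.30],
              [0.30, 0.40, 0.20, 0.80],
              [0.25, 0.50, 0.30, 0.90],
              [0.28, 0.45, 0.25, 0.85]])
y = np.array([1, 1, 1, 0, 0, 0])        # 1 = malignant, 0 = benign (synthetic labels)

qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X, y)
print(qda.predict([[0.08, 0.85, 0.75, 0.15]]))
print(qda.predict_proba([[0.08, 0.85, 0.75, 0.15]]))
```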
This paper presents results from three experiments which investigate the evolution of referential communication in embodied dynamical agents. Agents, interacting with only simple sensors and motors, are evolved in a task which requires one agent to communicate the locations of spatially distant targets to another agent. The results from these experiments demonstrate a variety of successful communication strategies, providing a first step towards understanding the emergence of referential communication in terms of coordinated behavioral interactions.
['Paul L. Williams', 'Randall D. Beer', 'Michael Gasser']
Evolving Referential Communication in Embodied Dynamical Agents
294,632
Special Issue on Mobile Social Networking and computing in Proximity (MSNP)
['Yufeng Wang', 'Qun Jin', 'Athanasios V. Vasilakos']
Special Issue on Mobile Social Networking and computing in Proximity (MSNP)
588,841
With more and more machines achieving petascale capabilities, the focus is shifting towards the next big barrier, exascale computing and its possibilities and challenges. There is a common agreement that using machines on this level will definitively require co-design of systems and applications, and corresponding actions on different levels of software, hardware, and the infrastructure. Defining the vision of exascale computing for the community as providing capabilities on levels of performance at extreme scales, and identifying the role and mission of the involved experts from computer science has laid the basis for further discussions. By reflecting on the current state of petascale machines and technologies and identifying known bottlenecks and pitfalls looming ahead, this workshop derived the concrete barriers on the road towards exascale and presented some ideas on how to overcome them, as well as raising open issues to be addressed in future leading-edge research on this topic.
['Arndt Bode', 'Adolfy Hoisie', 'Dieter Kranzlmüller', 'Wolfgang E. Nagel']
Co-Design of Systems and Applications for Exascale (Dagstuhl Perspectives Workshop 12212)
611,740
Future interactive television applications are the upcoming front-end to interactive radio, television, video rental, home shopping, multimedia communication, and information retrieval. The challenge of an interactive television system is not that the medium is digital, nor the ability to browse media resources, nor the reception of multiple kinds of media. Users do not need more information than they have today. The challenge is to provide information which corresponds to users' interests and needs. This paper describes the vision of a multimedia iTV system whose appearance is personally adapted for each user. The concept of personal iTV systems is presented by explaining the task of personalization and the technical fundamentals of intelligent assistance.
['Hartmut Wittig', 'Carsten Griwodz']
Intelligent media agents in interactive television systems
355,534
This paper addresses the problem of image change detection (ICD) based on Markov random field (MRF) models. MRF has long been recognized as an accurate model to describe a variety of image characteristics. Here, we use the MRF to model both noiseless images obtained from the actual scene and change images (CIs), the sites of which indicate changes between a pair of observed images. The optimum ICD algorithm under the maximum a posteriori (MAP) criterion is developed under this model. Examples are presented for illustration and performance evaluation.
['Teerasit Kasetkasem', 'Pramod K. Varshney']
An image change detection algorithm based on Markov random field models
18,242
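The abstract's MAP formulation can be approximated with iterated conditional modes (ICM) over an Ising-style prior; the sketch below uses that common approximation (not the paper's exact algorithm) with invented parameters.

```python
import numpy as np

def icm_change_map(diff, beta=1.5, sigma=1.0, iters=5):
    """diff: absolute difference image; returns a binary change image (CI)."""
    ci = (diff > diff.mean()).astype(int)          # initial guess
    for _ in range(iters):
        for i in range(1, diff.shape[0] - 1):
            for j in range(1, diff.shape[1] - 1):
                nb = ci[i-1, j] + ci[i+1, j] + ci[i, j-1] + ci[i, j+1]
                best, best_e = ci[i, j], np.inf
                for label in (0, 1):
                    mu = 2.0 * sigma if label else 0.0     # changed pixels: larger diffs
                    e_data = (diff[i, j] - mu) ** 2 / (2 * sigma ** 2)
                    e_prior = beta * (nb if label == 0 else 4 - nb)  # neighbor disagreement
                    if e_data + e_prior < best_e:
                        best, best_e = label, e_data + e_prior
                ci[i, j] = best
    return ci

rng = np.random.default_rng(1)
img = rng.normal(0, 1, (32, 32))
img[10:20, 10:20] += 3.0                           # synthetic change region
print(icm_change_map(np.abs(img)).sum())
```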
This paper presents an analysis of snake locomotion that explains how non-uniform viscous ground friction conditions enable snake robots to locomote forward on a planar surface. The explanation is based on a simple mapping from link velocities normal to the direction of motion into propulsive forces in the direction of motion. From this analysis, a controller for a snake robot is proposed. A Poincaré map is employed to prove that all state variables of the snake robot, except for the position in the forward direction, trace out an exponentially stable periodic orbit.
['Pål Liljebäck', 'Kristin Ytterstad Pettersen', 'Øyvind Stavdahl', 'Jan Tommy Gravdahl']
Stability analysis of snake robot locomotion based on Poincaré maps
400,292
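A numerical illustration of the Poincaré-map stability test the paper formalizes: iterate to a fixed point of the map and check that the estimated Jacobian's eigenvalues lie inside the unit circle. The dynamics here are a generic forced damped oscillator, not the snake-robot model.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi                               # forcing period

def f(t, x):
    return [x[1], -x[0] - 0.2 * x[1] + 0.5 * np.cos(t)]

def poincare(x0):
    """Sample the flow once per period: the Poincaré map P."""
    sol = solve_ivp(f, (0, T), x0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

x = np.array([0.0, 0.0])
for _ in range(50):                         # iterate towards the fixed point x* = P(x*)
    x = poincare(x)

eps = 1e-6                                  # finite-difference Jacobian of P at x*
J = np.column_stack([(poincare(x + eps * e) - poincare(x)) / eps
                     for e in np.eye(2)])
print(np.abs(np.linalg.eigvals(J)))         # all < 1 => exponentially stable orbit
```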
‘Selfie’, the Oxford Dictionary’s word of the year in 2013, has been gaining popularity as a global phenomenon and its usage is growing with technological advancements in front-facing cameras and photo-editing software. Earlier studies hold a lopsided view, either criticising selfies as ‘vain’ and ‘narcissist’ or appreciating them as ‘feel good’ for ‘positive identity formation’. The current study takes a fresh look at the act and explores the reasons and motivators of young college students in India as they take selfies, and traces the usage pattern and its likely relationship with the motivators of selfie-taking. Qualitative data were gathered through a focused group discussion conducted among graduate students with an average age of 23.5 years who volunteered to participate in the discussion. Results show that male and female students have varying reasons for taking selfies and that it is often an act of fun and an assertion of one’s right to ‘self-depiction’. Selfies have a life-cycle which ends aft...
['Reena Shah', 'Ruchi Tewari']
Demystifying ‘selfie’: a rampant social media activity
857,767
Information Warfare: Tactics.
['Gerald L. Kovacich', 'Andrew Jones', 'Perry Luzwick']
Information Warfare: Tactics.
772,207
The rate of introduction of new technology into safety critical domains continues to increase. Improvements in evaluation methods are needed to keep pace with the rapid development of these technologies. A significant challenge in improving evaluation is developing efficient methods for collecting and characterizing knowledge of the domain and context of the work being performed. Traditional methods of incorporating domain and context knowledge into an evaluation rely upon expert user testing, but these methods are expensive and resource intensive. This paper will describe three new methods for evaluating the applicability of a user interface within a safety-critical domain (specifically aerospace work domains), and consider how these methods may be incorporated into current evaluation processes.
['Michael Feary', 'Dorrit Billman', 'Xiuli Chen', 'Andrew Howes', 'Richard L. Lewis', 'Lance Sherry', 'Satinder P. Singh']
Linking context to evaluation in the design of safety critical interfaces
577,646
With the continuing technological trend of ever cheaper and larger memory, most data sets in database servers will soon be able to reside in main memory. In this configuration, the performance bottleneck is likely to be the gap between the processing speed of the CPU and the memory access latency. Previous work has shown that database applications have large instruction and data footprints and hence do not use processor caches effectively. In this paper we propose Call Graph Prefetching (CGP), a hardware technique that analyzes the call graph of a database system and prefetches instructions from the function that is deemed likely to be called next. CGP capitalizes on the highly predictable function call sequences that are typical of database systems. We evaluate the performance of CGP on sets of Wisconsin and TPC-H queries, as well as on CPU-2000 benchmarks. For most CPU-2000 applications the number of I-cache misses was very small even without any prefetching, obviating the need for CGP. Our database experiments show that CGP reduces the I-cache misses by 83% and can improve the performance of a database system by 30% over a baseline system that uses the OM tool to layout the code so as to improve I-cache performance. CGP also achieved 7% higher performance than OM with next-N-line prefetching on database applications.
['Murali Annavaram', 'Jignesh M. Patel', 'Edward S. Davidson']
Call graph prefetching for database applications
267,988
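A toy software model of the CGP idea — record observed call edges and "prefetch" the callee most frequently taken from the current function. The real mechanism is implemented in hardware, and all function names below are illustrative.

```python
from collections import Counter, defaultdict

call_graph = defaultdict(Counter)

def record_call(caller, callee):
    call_graph[caller][callee] += 1

def predict_next(caller):
    """Return the most likely next function to prefetch, if any history exists."""
    counts = call_graph[caller]
    return counts.most_common(1)[0][0] if counts else None

# Trace typical of a database system's highly repetitive call sequences.
for _ in range(10):
    record_call("exec_query", "scan_table")
    record_call("scan_table", "fetch_page")
record_call("exec_query", "sort_rows")

print(predict_next("exec_query"))   # -> 'scan_table'
```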
Objective: Modelling the associations from high-throughput experimental molecular data has provided unprecedented insights into biological pathways and signalling mechanisms. Graphical models and networks have especially proven to be useful abstractions in this regard. Ad hoc thresholds are often used in conjunction with structure learning algorithms to determine significant associations. The present study overcomes this limitation by proposing a statistically motivated approach for identifying significant associations in a network. Methods and materials: A new method that identifies significant associations in graphical models by estimating the threshold minimising the L1 norm between the cumulative distribution function (CDF) of the observed edge confidences and that of its asymptotic counterpart is proposed. The effectiveness of the proposed method is demonstrated on popular synthetic data sets as well as publicly available experimental molecular data corresponding to gene and protein expression profiles. Results: The improved performance of the proposed approach is demonstrated across the synthetic data sets using sensitivity, specificity and accuracy as performance metrics. The results are also demonstrated across varying sample sizes and three different structure learning algorithms with widely varying assumptions. In all cases, the proposed approach has specificity and accuracy close to 1, while sensitivity increases linearly in the logarithm of the sample size. The estimated threshold systematically outperforms common ad hoc ones in terms of sensitivity while maintaining comparable levels of specificity and accuracy. Networks from experimental data sets are reconstructed accurately with respect to the results from the original papers. Conclusion: Current studies use structure learning algorithms in conjunction with ad hoc thresholds for identifying significant associations in graphical abstractions of biological pathways and signalling mechanisms. Such an ad hoc choice can have pronounced effect on attributing biological significance to the associations in the resulting network and possible downstream analysis. The statistically motivated approach presented in this study has been shown to outperform ad hoc thresholds and is expected to alleviate spurious conclusions of significant associations in such graphical abstractions.
['Marco Scutari', 'Radhakrishnan Nagarajan']
Identifying significant edges in graphical models of molecular networks
504,183
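A sketch of the threshold estimation as we read it from the abstract: pick the constant CDF level t minimizing the L1 distance to the empirical CDF of edge confidences, then threshold at the corresponding quantile. The confidence values are synthetic.

```python
import numpy as np

def significance_threshold(confidences):
    """Estimate t* minimizing the L1 distance between the empirical CDF of the
    edge confidences and the ideal step CDF F(x) = t on [0, 1)."""
    c = np.sort(confidences)
    grid = np.linspace(0, 1, 1001)
    emp = np.searchsorted(c, grid, side="right") / len(c)   # empirical CDF
    ts = np.linspace(0, 1, 501)
    l1 = [np.abs(emp - t).mean() for t in ts]               # uniform-grid L1 integral
    t_star = ts[int(np.argmin(l1))]
    return np.quantile(c, t_star)    # confidence cutoff: keep edges above this

conf = np.random.default_rng(2).beta(0.5, 0.5, 500)  # synthetic edge confidences
cut = significance_threshold(conf)
print(cut, (conf > cut).sum(), "significant edges")
```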
As the network grows, the centralized SDN (Software-Defined Networking) architecture causes a scalability problem. To handle this problem, multiple controllers have been used in many related works. However, if the load is concentrated on a certain controller, incoming messages to the controller can be blocked while other controllers are not busy. This problem becomes especially critical for handover requests in mobile networks, because users will perceive severe QoS deterioration when these requests are blocked. Therefore, this paper presents a mobility-aware load distribution scheme for scalable SDN-based mobile networks. In the proposed scheme, incoming handover request messages can be differentiated from other messages and admitted with priority. In addition, these messages can be migrated to other controllers when the controller has no remaining capacity, to prevent them from being blocked. Analytical results confirm that the proposed scheme has lower blocking probability and higher resource utilization ratio than conventional schemes, without much additional signaling load as offered load increases.
['Yeunwoong Kyung', 'Youngjun Kim', 'Kiwon Hong', 'Hyungoo Choi', 'Mingyu Joo', 'Jinwoo Park']
Mobility-aware load distribution scheme for scalable SDN-based mobile networks
884,382
Recent developments in graphics processing unit (GPU) technology have invigorated an interest in using GPUs for accelerating the simulation of SystemC models. SystemC is extensively used for design space exploration, and early performance analysis of hardware systems. SystemC's reference implementation of the simulation kernel supports a single-threaded simulation kernel. However, modern computing platforms offer substantially more compute power by means of multiple central processing units, and multiple co-processors such as GPUs. This has piqued an interest in parallelizing SystemC simulations. Of these, several efforts focus on utilizing the massive parallelism offered by GPUs as an alternate computing platform. In this paper, we present a summary of these recent research efforts that propose using GPUs for accelerating SystemC simulation.
['Mahesh Nanjundappa', 'Anirudh M. Kaushik', 'Hiren D. Patel', 'Sandeep K. Shukla']
Accelerating SystemC simulations using GPUs
285,237
A fast macroblock mode selection algorithm based on dynamic multi-thresholds is proposed to improve the encoding speed of multiview video, with insignificant degradation in rate-distortion (RD) performance. The macroblock modes are divided into four classes after statistically analyzing macroblock mode selection results. Three thresholds are adopted based on the large RD cost gaps between the macroblock mode classes. An approximate computation method and a dynamic updating method for the three thresholds are proposed to implement the fast algorithm. Simulation results demonstrate that the proposed fast algorithm improves the encoding speed by 1.92-7.07 times in comparison with JMVM, while hardly influencing the RD performance.
['Zongju Peng', 'Gangyi Jiang', 'Mei Yu']
A fast multiview video coding algorithm based on dynamic multi-threshold
431,219
In a massive open online course (MOOC), a single programming or digital hardware design exercise may yield thousands of student solutions that vary in many ways, some superficial and some fundamental. Understanding large-scale variation in student solutions is a hard but important problem. For teachers, this variation can be a source of pedagogically valuable examples and expose corner cases not yet covered by autograding. For students, the variation in a large class means that other students may have struggled along a similar solution path, hit the same bugs, and can offer hints based on that earned expertise. We developed three systems to take advantage of the solution variation in large classes, using program analysis and learnersourcing. All three systems have been evaluated using data or live deployments in on-campus or edX courses with thousands of students.
['Elena L. Glassman', 'Robert C. Miller']
Leveraging Learners for Teaching Programming and Hardware Design at Scale
688,736
Emotions influence everyday decisions. When people make decisions about movies to watch, songs to listen or even about more serious issues such as health, they perform a cognitive process that estimates which of various alternative choices would yield the most positive consequences. Indeed, this process in not totally rational because it is influenced, directly or in a subtle way by personality traits and emotions. In this paper we propose the idea of defining an affective user profile, which can act as a computational model of personality and emotions, included in a general, affective-aware, recommendation framework.
['Marco Polignano']
A framework for emotion-aware recommender systems supporting decision making
683,199
A method for solving obstacle avoidance for a redundant robot is proposed in the present paper. Extra degrees of freedom (DOF) of a redundant robot are effective for realizing an objective position and orientation of its end effector (referred to hereafter as "pose") while the robot is avoiding obstacles. The path should be planned so that the robot can avoid obstacles and realize the desired goal pose. The models of six elementary types of obstacles are assumed, taking account of real environments. The path planning method proposed herein is divided into three procedures as follows: 1) solving inverse kinematics by an analytical method for avoiding six elementary types of obstacles, 2) solving inverse kinematics by a semi-analytical method for realizing a goal pose, and 3) generating a path from a start pose to the goal one. A computational simulator for a redundant robot to avoid arbitrary obstacles based on these procedures is developed.
['Seiji Aoyagi', 'Kazuya Tashiro', 'Mamoru Minami', 'Masaharu Takano']
Development of redundant robot simulator for Avoiding arbitrary obstacles based on semi-analytical method of solving inverse kinematics
265,051
This research applies knowledge management principles to examine knowledge transfer in the social marketing of the human papillomavirus (HPV) vaccination program for girls and young women. Using focus group research we develop a framework to define the domain of health-based decision making in young women and develop an understanding of the constructs in knowledge transfer along the health consumer supply-chain. We find these are the role of trust, the absorptive capacity of the receiver, the medium of the knowledge object and the authority of the figure providing that knowledge. These findings have implications for budgetary support of and accountability for public health knowledge transfer mechanisms.
['Suzanne Zyngier', "Clare D'Souza", 'Priscilla Robinson', 'Morgan Schlotterlein']
Knowledge Transfer: Examining a Public Vaccination Initiative in a Digital Age
310,966
Join Spaces Determined by Lattices.
['Violeta Leoreanu Fotea', 'Ivo G. Rosenberg']
Join Spaces Determined by Lattices.
807,576
On Cryptographic Applications of Matrices Acting on Finite Commutative Groups and Rings.
['S. M. Dehnavi', 'A. Mahmoodi Rishakani', 'M. R. Mirzaee Shamsabad']
On Cryptographic Applications of Matrices Acting on Finite Commutative Groups and Rings.
751,959
Rate adaptation (RA) has been traditionally used to achieve high goodput. In this work, we design RA for 802.11n NICs from an energy-efficiency perspective. We show that current MIMO RA algorithms are not energy efficient for NICs despite ensuring high throughput. The fundamental problem is that the high-throughput setting is not equivalent to the energy-efficient one. Marginal throughput gain may be realized at high energy cost. We then propose EERA and EERA+, two energy-based RA schemes that trade off goodput for energy savings at NICs. EERA applies multidimensional ternary search and simultaneous pruning to speed up its runtime convergence in single-client operations, and uses fair airtime sharing to handle multiple-client operations. EERA+ further searches for multiple, staged rates to yield more energy savings over EERA. Our experiments have confirmed their effectiveness in various scenarios.
['Chi-Yu Li', 'Chunyi Peng', 'Peng Cheng', 'Songwu Lu', 'Xinbing Wang', 'Fengyuan Ren', 'Tao Wang']
An Energy Efficiency Perspective on Rate Adaptation for 802.11n NIC
727,487
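A one-dimensional ternary search over a unimodal energy-efficiency curve, the building block that a multidimensional search of the kind EERA applies would repeat per dimension. The efficiency function below is a synthetic placeholder, not a NIC model.

```python
def ternary_search(f, lo, hi, tol=1e-3):
    """Maximize a unimodal f on [lo, hi] by shrinking the interval by thirds."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

def efficiency(r):
    # bits-per-joule proxy: goodput saturates while power grows with rate
    return (100 * r / (r + 20)) / (1 + 0.05 * r)

print(ternary_search(efficiency, 1, 150))   # peak near r = 20 for this placeholder
```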
This paper presents an analytical model for the behavior of dataflow graphs with data-dependent control flow and discusses its suitability to the generation of efficient software and hardware implementations of digital signal processing (DSP) applications. In the model, the number of tokens produced or consumed by each actor is given as a symbolic function of the Boolean values in the system; in addition, it may vary cyclically to permit more memory-efficient multirate implementations. The model can be used to extend the ability of block-diagram-oriented systems for DSP design, such as Ptolemy [1], to produce efficient hardware and software implementations; this permits the hardware-software codesign techniques of [2] to be efficiently targeted at a wider class of problems, those involving some asynchronous behavior, for example.
['Joseph Buck']
A dynamic dataflow model suitable for efficient mixed hardware and software implementations of DSP applications
34,593
Focusing on the fact that the collection of independent tasks to be scheduled onto the grid is typically large, a conception of task partition is proposed to group tasks exclusively according to the machine that gives the earliest completion time. As a result, several tasks in different task partitions can be scheduled at the same time, which reduces the range of task searching and completely eliminates the re-assignment of tasks. Furthermore, a Task Partition-Based Heuristic (TPBH) is presented with sufferage as the first heuristic and minimum completion time as the second. Simulation results confirm that this dual-heuristic scheduling strategy can reduce both makespan and runtime; the larger the task set, the better the algorithm performs.
['Ding Ding', 'Siwei Luo', 'Zhan Gao']
A Dual Heuristic Scheduling Strategy Based on Task Partition in Grid Environments
397,053
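A compact sketch of the two heuristics named above, applied to a synthetic estimated-completion-time matrix: sufferage first, minimum completion time as the tiebreaker. TPBH's task-partition bookkeeping is not modelled.

```python
import numpy as np

def schedule(ect):
    """ect[t, m]: estimated completion time of task t on machine m."""
    n_tasks, n_machines = ect.shape
    ready = np.zeros(n_machines)            # machine ready times
    unassigned = set(range(n_tasks))
    order = []
    while unassigned:
        best = None                         # (key, task, machine), maximizing key
        for t in unassigned:
            ct = ect[t] + ready             # completion time on each machine
            m1, m2 = np.argsort(ct)[:2]
            key = (ct[m2] - ct[m1], -ct[m1])  # sufferage first, then MCT
            if best is None or key > best[0]:
                best = (key, t, m1)
        _, t, m = best
        ready[m] += ect[t, m]
        unassigned.remove(t)
        order.append((t, m))
    return order, ready.max()               # schedule and makespan

ect = np.random.default_rng(3).uniform(1, 10, (6, 3))
print(schedule(ect))
```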
In this paper we examine certain problems related to the use of diffusion approximations for the approximate modelling of computer systems. In particular we develop a model which allows us to handle waiting times and batch arrivals: these results are a new approach to the use of diffusion approximations. We also examine the effect of the distribution of holding times at boundaries: this question had remained open in earlier studies. We show that the stationary distributions associated with these diffusion models depend only on the average residence time of the process on the boundaries and not on the complete distribution function. This result justifies the use of exponential holding times as had been done in an earlier study.
['Erol Gelenbe']
Probabilistic models of computer systems
696,028
Location Fingerprinting (LF) is a promising localization technique that enables many commercial and emergency location-based services (LBS). While significant efforts have been invested in enhancing LF using advanced machine learning methods, the configuration effort required to deploy a LF system remains a significant issue. In this paper, a practical LF system is proposed which employs Gaussian Processes (GP) to significantly reduce the required database density. The GP solution is enhanced with a tracking algorithm which can easily incorporate floor plan constraints. The proposed system was prototyped with Android mobile phones in an enterprise environment. It is shown that with the proposed system an accuracy required for most commercial LBS applications can be achieved with a significantly reduced configuration effort.
['Marzieh Dashti', 'Simon Yiu', 'Siamak Yousefi', 'Fernando Perez-Cruz', 'Holger Claussen']
RSSI Localization with Gaussian Processes and Tracking
652,575
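A minimal GP fingerprinting sketch: fit a GP to sparse RSSI survey points, then localize a new reading by grid search over the predictive likelihood. The kernel, access-point model, and data are assumptions, and the paper's tracking algorithm and floor-plan constraints are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
survey_xy = rng.uniform(0, 20, (30, 2))                   # sparse fingerprint sites
ap = np.array([5.0, 12.0])                                # hypothetical access point
rssi = (-40 - 20 * np.log10(np.linalg.norm(survey_xy - ap, axis=1) + 1)
        + rng.normal(0, 1, 30))                           # log-distance path loss + noise

gp = GaussianProcessRegressor(RBF(5.0) + WhiteKernel(1.0)).fit(survey_xy, rssi)

# Localize: pick the grid point whose predicted RSSI best explains the reading.
gx, gy = np.meshgrid(np.linspace(0, 20, 40), np.linspace(0, 20, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mu, sd = gp.predict(grid, return_std=True)
z = -55.0                                                  # new RSSI observation
ll = -0.5 * ((z - mu) / sd) ** 2 - np.log(sd)              # Gaussian log-likelihood
print(grid[np.argmax(ll)])
```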
Recent OLAP systems, usually called HOLAP systems, are often developed using both NSM and DSM storage in a single database system for real-time big data analytics. In HOLAP systems, the choice between the two storage types usually depends on the type of SQL query (e.g., insert, delete, or update). In short, there is potential to further improve query processing time by focusing on the characteristics of the data that an issued query handles. In this paper, we propose a method for optimizing query processing in a HOLAP system that considers four types of data characteristics, such as those of data extracted by a correlated subquery or by a join operation over tables of different sizes, together with an appropriate index construction.
['Takamitsu Shioi', 'Kenji Hatano']
Query processing optimization using disk-based row-store and column-store
698,196
This paper presents a new synthesis approach for dedicated systems. The aim of the synthesis scheme is to achieve an automatic exploration of VLIW processor architectures from a pure C description of the input system. The innovation is that unit allocation must manage the fact that a function may be realized either by dedicated functional units or by a set of lower-level, efficiently controlled functional units. For example, execution of a square root function can be accomplished in two ways: either by a dedicated functional unit or by an oriented software implementation of Newton's iterations. The aim is to find the best global trade-off among all the candidate architectures. To illustrate this synthesis scheme, we give an example drawn from a sonar application.
['Michel Auguin', 'F. Boeri', 'C. Carrière']
Automatic exploration of VLIW processor architectures from a designer's experience based specification
412,500
Cluster ensemble has become an important extension to traditional clustering algorithms, yet the cluster ensemble problem is very challenging due to the inherent difficulty in resolving the label correspondence problem. We adapted the integrated K-means - Laplacian clustering approach to solve the cluster ensemble problem by exploiting both the attribute information embedded in the cluster labels and the pairwise relations among the objects. The optimal solution of the proposed approach requires computing the pseudo inverse of the normalized Laplacian matrix and the eigenvalue decomposition of a large matrix, which can be computationally burdensome for large scale document datasets. We devised an effective algebraic transformation method for efficiently carrying out the aforementioned computations and proposed an integrated K-means - Laplacian cluster ensemble approach (IKLCEA). Experimental results with benchmark document datasets demonstrate that IKLCEA outperforms other cluster ensemble techniques on most cases. In addition, IKLCEA is computationally efficient and can be readily employed in large scale document applications.
['Sen Xu', 'Kung-Sik Chan', 'Jun Gao', 'Xiufang Xu', 'Xianfeng Li', 'Xiaopeng Hua', 'Jing An']
An integrated K-means - Laplacian cluster ensemble approach for document datasets
821,514
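An illustrative co-association ensemble that captures the "pairwise relations" ingredient of the approach above: accumulate how often independent k-means runs co-cluster each pair of objects, then cut the resulting affinity with spectral clustering. IKLCEA's algebraic transformation and its integration of label-attribute information are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n = len(X)
co = np.zeros((n, n))
for seed in range(10):                                   # ensemble of base clusterings
    labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
    co += (labels[:, None] == labels[None, :])
co /= 10                                                 # pairwise co-association matrix

final = SpectralClustering(n_clusters=3, affinity="precomputed").fit_predict(co)
print(np.bincount(final))
```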
In this paper we propose a Fault Detection and Isolation (FDI) filter design method for Spark Injection Engines. Starting from a detailed nonlinear mean-value representation of the engine, a LPV approximation is obtained based on a judicious convex interpolation of a family of linearized models. A LPV-FDI filter based on a bank of Luenberger observers is synthesized by ensuring guaranteed levels of disturbance rejection and fault detection and isolation. The resulting diagnostic setup is parameter-dependent and uses a set of engine parameters, assumed measurable on-line, as a scheduling vector. The effectiveness of the LPV-FDI framework is illustrated by numerical examples where the diagnostic capabilities of the proposed FDI architecture are proved.
['G. Gagliardi', 'Alessandro Casavola', 'Domenico Famularo']
A bank of observers based LPV Fault Detection and Isolation method for Spark Injection Engines
89,581
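A discrete-time Luenberger observer producing a fault-detection residual, the basic block behind the observer bank described above; the plant matrices, gain, and injected sensor fault are a toy example, not the engine model from the paper.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])        # observer gain (A - L C is stable here)

x = np.array([[1.0], [0.0]])        # true plant state
xh = np.zeros((2, 1))               # observer state
for k in range(60):
    u = np.array([[1.0]])
    fault = np.array([[0.5]]) if k >= 30 else np.array([[0.0]])  # sensor fault at k=30
    y = C @ x + fault
    r = y - C @ xh                  # residual: near zero until the fault appears
    xh = A @ xh + B @ u + L @ r     # observer update
    x = A @ x + B @ u               # plant update
    if k % 10 == 0:
        print(k, float(r[0, 0]))
```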
Optimal utilization of power is a major concern for HPC and one of the focus points on the path towards exascale; approaches range from chip-level to facility-wide solutions. In order to evaluate the implications of these approaches and their impact on future system design, we need to understand their interaction with applications as well as their performance impact. In this work we describe the GREMLIN framework, a general framework to emulate system changes on existing platforms by resource restriction or event injection. We use this framework to understand the behavior of applications executed on power-limited systems and to evaluate a solution for one of the problems resulting from operating under a power limit: the translation of manufacturing variability into heterogeneous performance, as observed in power-limited HPC environments. We show that in a power-limited environment manufacturing variability is a key source of performance imbalance and thus non-optimal execution. We propose a Power Balancer for redistribution of unused power and show performance gains of up to 1.5% at small to medium node counts.
['Matthias Maiterth', 'Martin Schulz', 'Dieter Kranzlmüller', 'Barry Rountree']
Power Balancing in an Emulated Exascale Environment
857,872
This paper focuses on the factors that influence collaborative learning in distance education. Distance education has been around for many years and the use of collaborative learning techniques in distance education is becoming increasingly popular. Several studies have demonstrated the superiority of collaborative learning over traditional modes of learning and it has been identified as a potential solution to some of the weaknesses of traditional distance education courses. There are a rapidly growing number of technologies in use today and educators and practitioners face an increasingly difficult challenge to successfully implement collaborative learning in distance education; precipitated not only from technical advances but also from wider social and organisational concerns. To the best of our knowledge, this study is the first to investigate the factors that influence collaborative learning in distance education, by eliciting the opinions of an expert panel using a Delphi survey. The aim was to produce an integrated list of the most important implementation factors and to investigate the role that technology is perceived to contribute. The findings identified 17 of the most important factors; these factors cover a range of themes including course rationale and design, instructor characteristics, training, group dynamics, the development of a learning community and technology. The potential of technology, however, does not seem to be fully realised and newer technologies such as multi-user environments would seem to be of limited use in practice according to the expert panel.
["Susan O'Neill", 'Murray Scott', 'Kieran Conboy']
A Delphi study on collaborative learning in distance education: The faculty perspective
363,070
This paper presents a new and efficient scheme to decompress a set of deterministic test vectors for circuits with scan. The scheme is based on the reseeding of a Multiple Polynomial Linear Feedback Shift Register (MP-LFSR) but uses variable-length seeds to improve the encoding efficiency of test vectors with a wide variation in their number of specified bits. The paper analyzes the effectiveness of this novel approach both theoretically and through extensive experiments. A modular design of the decompression hardware re-uses the same LFSR used for pseudo-random vector generation and scan registers to minimize the area overhead.
['Nadime Zacharia', 'Janusz Rajski', 'Jerzy Tyszer']
Decompression of test data using variable-length seed LFSRs
298,808
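A Fibonacci LFSR with a selectable feedback polynomial plus per-vector reseeding — the two ingredients behind MP-LFSR-based decompression. Variable-length seed handling and the scan-chain integration are not modelled; seeds and tap sets are arbitrary examples.

```python
def lfsr(seed, taps, nbits, steps):
    """Yield `steps` output bits from an n-bit LFSR with the given tap positions."""
    state = seed & ((1 << nbits) - 1)
    for _ in range(steps):
        yield state & 1
        fb = 0
        for t in taps:                      # XOR of tapped bits = feedback polynomial
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))

# Two different polynomials over the same 8-bit register (the MP-LFSR idea),
# reseeded per test vector:
print(list(lfsr(seed=0xB4, taps=(0, 2, 3, 4), nbits=8, steps=16)))
print(list(lfsr(seed=0x1D, taps=(0, 4, 5, 6), nbits=8, steps=16)))
```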
The overcrowding and the heterogeneity of participants' profiles in a Massive Open Online Course (MOOC) are among the main causes of its high dropout rate. International reports and research point to personalized learning as an important way to improve learning in any educational context. Information and communication technologies help to implement adaptive techniques in education through online courses. The specific characteristics of MOOCs point to the need to implement adaptive methodologies in order to increase completion rates. This work presents a statistical analysis to find out in what respects the condition of adaptivity, defined by the construct, is a preference of MOOC users, depending on three factors of the user profile. These factors are: profile (gender, age, geographical location and academic level), previous experience or knowledge of the topic of the MOOC, and motivation to start the MOOC.
['Dolores Lerís', 'María Luisa Sein-Echaluce', 'Miguel Hernández', 'Ángel Fidalgo-Blanco']
Relation between adaptive learning actions and profiles of MOOCs users
953,626
This poster describes CHiMPS, a toolflow that aims to provide software developers with a way to program hybrid CPU-FPGA platforms using familiar tools, languages, and techniques. CHiMPS starts with C and produces a specialized spatial dataflow architecture that supports coherent caches and the shared-memory programming model. The toolflow is designed to abstract away the complex details of data movement and separate memories on the hybrid platforms, as well as take advantage of memory management and computation techniques unique to reconfigurable hardware. This poster focuses on the memory design for CHiMPS, particularly the use of numerous small caches customized for various phases of program execution. The poster also addresses area vs. performance tradeoffs for various configurations. Applications compiled using CHiMPS show performance improvements of more than 36x on simple compute-intensive kernels, and 4.3x on the difficult-to-parallelize STSWM application without any special optimizations compared to running only on the CPU. The toolflow supports full ANSI-C, and produces hardware that runs on platforms that are expected to be available within one year
['Andrew Putnam', 'Dave Bennett', 'Eric Dellinger', 'Jeff Mason', 'Prasanna Sundararajan']
CHiMPS: a high-level compilation flow for hybrid CPU-FPGA architectures
492,282
We develop a novel collision-free channel assignment and scheduling mechanism (CFCS) for multichannel wireless sensor networks. First, each node is assigned a quiescent channel beforehand to reduce hidden terminals, and it then adjusts its channel according to dynamic traffic. Second, a scalable multichannel scheduling is designed to trade off overhead against fairness. We have run simulations to evaluate the performance of CFCS against other relevant protocols. The results show that our protocol performs markedly better, exploiting multiple channels for parallel transmission and effectively reducing hidden-terminal problems in resource-constrained wireless sensor networks.
['Yuanyuan Zeng', 'Naixue Xiong', 'Tai-hoon Kim']
Channel assignment and scheduling in multichannel wireless sensor networks
399,472
The aim of this paper is to provide a comparison of various algorithms and parameters for building reduced semantic spaces. The effect of dimension reduction, the stability of the representation and the effect of word order are examined in the context of five algorithms for constructing semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations and holographic reduced representations (HRR). The quality of the semantic representation was tested by means of a synonym-finding task using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of semantic representation, but it is hard to find the optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. The effect of encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence-in-context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
['Laurianne Sitbon', 'Peter D. Bruza', 'Christian Werner Prokopp']
EMPIRICAL ANALYSIS OF THE EFFECT OF DIMENSION REDUCTION AND WORD ORDER ON SEMANTIC VECTORS
153,629
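A random-projection sketch of the RP variant compared above, showing that cosine similarity approximately survives the projection (the Johnson-Lindenstrauss lemma motivates why); matrix sizes and the count model are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.poisson(0.3, (1000, 5000)).astype(float)   # sparse-ish co-occurrence counts
k = 300                                            # reduced dimensionality
R = rng.normal(0, 1 / np.sqrt(k), (5000, k))       # Gaussian random projection
Xr = X @ R

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Pairwise similarity before vs. after projection:
print(cos(X[0], X[1]), cos(Xr[0], Xr[1]))
```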
Duty cycling or periodic sleep scheduling of RF transceivers of nodes in a wireless ad hoc or sensor network can significantly reduce energy consumption. This paper sheds light on the fundamental limits of the end-to-end data delivery latency and the per-flow throughput in a wireless network with multiple interfering flows, in the presence of "coordinated" duty cycling. We propose green wave sleep scheduling (GWSS) - inspired by synchronized traffic lights - for scheduling sleep-wake slots and routing data in a duty cycling wireless network, whose performance can approach the aforementioned limits. Particularly, we derive a general latency lower bound and show that GWSS is latency optimal on various structured topologies, such as the line, grid and the tree, at low traffic load. For an arbitrary network, finding a solution to the delay-efficient sleep scheduling problem is NP-hard. But for the 2D grid topology, we show that a non-interfering construction of GWSS is optimal in the sense of scaling laws of latency and capacity. Finally, using results from percolation theory, we extend GWSS to random wireless networks, where nodes are placed in a square area according to the Poisson point process. Aided by strong numerical evidence for a new conjecture on percolation on a semi-directed lattice that we propose, we demonstrate the latency optimality of GWSS on a random extended network, i.e., for an area-n random network with unit-density-Poisson distributed nodes, and a node-active (duty-cycling) rate p, GWSS can achieve a per-flow throughput scaling of T(n, p) = Ω(p/√n) bits/sec and latency D(n, p) scaling of O(√n) + O(1/p) hops/packet/flow.
['Saikat Guha', 'Prithwish Basu', 'Chi-Kin Chau', 'Richard J. Gibbens']
Green Wave Sleep Scheduling: Optimizing Latency and Throughput in Duty Cycling Wireless Networks
176,463
Body rotation under free fall along a desired trajectory can be found in many applications such as sports, entertainment, and manufacturing. An appropriately designed body path could lower the forces at the joints during inversion and thus minimizing potential injury. This paper presents a method of developing dynamic models that characterize the interaction between the body of a live object undergoing inversion and the mechanical system driving the rotation. The method offers an effective means to analyze the sensitivity of the design and operational parameters on the body rotation. The models have been validated experimentally. The simulated and experimental results offer significant insights to the joint forces and a means to improve the body dynamics. While the results have immediate application in inverting live birds for poultry meat processing, we expect the model will provide a basis for analyzing body rotational dynamics in other applications such as gymnastics and roller coasters.
['Kok-Meng Lee', 'Chris Shumway']
Dynamic modeling of the body inversion for automated transfer of live birds
70,703
Citation analysis of documents retrieved from the Medline database (at the Web of Knowledge) has been possible only on a case-by-case basis. A technique is presented here for citation analysis in batch mode using both Medical Subject Headings (MeSH) at the Web of Knowledge and the Science Citation Index at the Web of Science (WoS). This freeware routine is applied to the case of "Brugada Syndrome," a specific disease and field of research (since 1992). The journals containing these publications, for example, are attributed to WoS categories other than "cardiac and cardiovascular systems", perhaps because of the possibility of genetic testing for this syndrome in the clinic. With this routine, all the instruments available for citation analysis can now be used on the basis of MeSH terms. Other options for crossing between Medline, WoS, and Scopus are also reviewed.
['Loet Leydesdorff', 'Tobias Opthof']
Citation analysis with medical subject Headings (MeSH) using the Web of Knowledge: A new routine
169,792