This paper elaborates a generalized approach to the modeling of human and humanoid motion. Instead of the usual inductive approach that starts from the analysis of different situations of real motion (like bipedal gait and running; playing tennis, soccer, or volleyball; gymnastics on the floor or using some gymnastic apparatus) and tries to make a generalization, the deductive approach considered begins by formulating a completely general problem and deriving different real situations as special cases. The paper first explains the general methodology. The concept and the software realization are verified by comparing the results with the ones obtained by using "classical" software for one particular well-known problem: biped walk. The applicability and potentials of the proposed method are demonstrated by simulation using a selected example. The simulated motion includes a landing on one foot (after a jump), the impact, a dynamically balanced single-support phase, and overturning (falling down) when the balance is lost. It is shown that the same methodology and the same software can cover all these phases.
['Veljko Potkonjak', 'Miomir Vukobratovic', 'Kalman Babkovic', 'Branislav Borovac']
GENERAL MODEL OF DYNAMICS OF HUMAN AND HUMANOID MOTION: FEASIBILITY, POTENTIALS AND VERIFICATION
24,249
The analysis of fluorescently tagged proteins in live cells from multi-channel microscopy image sequences requires a registration to a reference frame to decouple the movement and deformation of cells from the movement of proteins. We have developed an intensity-based approach to register 2D and 3D multi-channel microscopy image sequences. This approach directly exploits the image intensities. We have compared the results of our approach with results based on segmented images. Also, we have performed a comparison between a direct registration scheme to the reference frame and an incremental scheme taking into account results from preceding time steps. We have validated our approach based on 3D synthetic images of a simulated cell with known deformation which has been calculated based on an analytic solution of the Navier equation given certain boundary conditions. We have also successfully applied our approach to 2D and 3D real microscopy image sequences.
['Il-Han Kim', 'S. Yang', 'P. Le Baccon', 'Edith Heard', 'Yi Chun M Chen', 'David L. Spector', 'Christoph Kappel', 'Roland Eils', 'Karl Rohr']
NON-RIGID TEMPORAL REGISTRATION OF 2D AND 3D MULTI-CHANNEL MICROSCOPY IMAGE SEQUENCES OF HUMAN CELLS
79,844
Communication through OS files is often necessary when information manipulated and stored in Sharp APL files must be used by non-APL routines. For example, one may use APL for number crunching and have the output processed by SAS or FORTRAN to plot the data. The growing demand for procedures that would make data accessible from APL to other systems motivated the author to take action. TSIOPAK was developed to provide a standard interface between Sharp APL and the MVS operating system, facilitating the accessibility of data created through APL and other languages. In addition, the reader will find that, starting from the section titled The TSIOPAK File System, the format of each section looks very much like the Sharp APL Reference Manual. This is not a coincidence: what the author has tried to achieve with this work is a file system functionally identical to Sharp's using OS files, and his efforts have gone as far as trying to replicate the format of Sharp's manual. However, parts of the Sharp APL File System have not been implemented in TSIOPAK due to a lack of time and resources. Hence, this paper is a proposal for a new file system which would make APL-generated data generally available.
['Carlos G. Leon']
TSIOPAK—a proposal for a new Sharp APL file system
417,277
Congestion Control (CC) algorithms are essential to quickly restore the network performance back to stable whenever congestion occurs. A majority of the existing CC algorithms are implemented at the transport layer, mostly coupled with TCP. Over the past three decades, CC algorithms have incrementally evolved, resulting in many extensions of TCP. A thorough evaluation of a new TCP extension is a huge task. Hence, the Internet Congestion Control Research Group (ICCRG) has proposed a common TCP evaluation suite that helps researchers to gain an initial insight into the working of their proposed TCP extension. This paper presents an implementation of the TCP evaluation suite in ns-3, that automates the simulation setup, topology creation, traffic generation, execution, and results collection. We also describe the internals of our implementation and demonstrate its usage for evaluating the performance of five TCP extensions available in ns-3, by automatically setting up the following simulation scenarios: (i) single and multiple bottleneck topologies, (ii) varying bottleneck bandwidth, (iii) varying bottleneck RTT and (iv) varying the number of long flows.
['Dharmendra Kumar Mishra', 'Pranav Vankar', 'Mohit P. Tahiliani']
TCP Evaluation Suite for ns-3
739,570
We present a systematization of finite many-valued logics using the method of tableaux.
['Walter Alexandre Carnielli']
Systematization of finite many-valued logics through the method of tableaux
259,879
This paper aims at bringing social robots closer to naive users. A Natural Programming System that allows the end-user to give instructions to a Social Robot has been developed. The instructions result in a sequence of actions and conditions that can be executed while verbal editing of the sequence itself continues. A Dialogue Manager System (DMS) has been developed in a Social Robot. The dialog is described in a VoiceXML structure, where a set of information slots is defined. These slots are related to the attributes necessary for constructing the sequence at execution time. The robot can make specific requests on encountering unfilled slots. Temporal aspects of dialog, such as the barge-in property, mixed initiative, and speech intonation control, are also considered. Dialog flow is based on Dialog Acts. The dialog specification has also been extended for multimodality management. The presented DMS has been used as part of a Natural Programming System but can also be used for other multimodal human-robot interactive skills.
['Javier F. Gorostiza', 'Miguel Angel Salichs']
Natural Programming of a Social Robot by Dialogs
629,846
In 1988, Chvatal and Sbihi (J Combin Theory Ser B 44(2) (1988), 154–176) proved a decomposition theorem for claw-free perfect graphs. They showed that claw-free perfect graphs either have a clique-cutset or come from two basic classes of graphs called elementary and peculiar graphs. In 1999, Maffray and Reed (J Combin Theory Ser B 75(1) (1999), 134–156) successfully described how elementary graphs can be built from line-graphs of bipartite graphs using local augmentation. However, gluing two claw-free perfect graphs on a clique does not necessarily produce claw-free graphs. In this article, we give a complete structural description of claw-free perfect graphs. We also give a construction for all perfect circular interval graphs.
['Maria Chudnovsky', 'Matthieu Plumettaz']
The Structure of Claw-Free Perfect Graphs
296,931
In this paper, we present an algorithm using distance-based two-layer fuzzy sliding mode control (DBTLF-SMC) to achieve a prespecified trajectory for a class of nonlinear systems. The tracking trajectory is composed of a set of sequentially operated, piecewise continuous sliding surfaces which the system's state can follow to the equilibrium in the phase plane. The algorithm, using the boundary layer technique, is capable of handling the reaching phase (RP) of the trajectory and the chattering effects inherent to VSC easily and effectively. By using the distance-based two-layer fuzzy sliding mode controller, the state follows the sliding surfaces in turn, parametric uncertainties and disturbances can be reduced effectively, and the tracking error response is smaller than that of distance-based single-layer fuzzy sliding mode control. It is shown that the stability of the control system is guaranteed in the Lyapunov sense. Finally, an inverted pendulum control problem subject to external disturbance is simulated to demonstrate the validity of the proposed algorithm.
['Wen-Shyong Yu']
Design of Distance-Based Two-Layer Fuzzy Sliding Mode Control
137,356
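The boundary layer technique mentioned in the abstract above can be illustrated on a generic example. The sketch below applies a sliding mode controller with a saturation function (instead of a discontinuous sign function) to a double integrator; the surface slope `lam`, gain `k`, and layer width `phi` are illustrative assumptions, and this is not the paper's two-layer fuzzy controller.

```python
import numpy as np

def simulate_smc(x0, v0, lam=2.0, k=5.0, phi=0.1, dt=0.001, steps=5000):
    """Boundary-layer sliding mode control on a double integrator (x'' = u).

    Generic illustration only: sat(s/phi) replaces sign(s) inside a layer
    of width phi to suppress chattering. Parameters are hypothetical.
    """
    x, v = x0, v0
    for _ in range(steps):
        s = v + lam * x                       # sliding surface s = v + lam*x
        sat = np.clip(s / phi, -1.0, 1.0)     # smooth switching inside the layer
        u = -lam * v - k * sat                # drive s to zero, then x decays
        v += u * dt                           # explicit Euler integration
        x += v * dt
    return x, v
```

Once the state reaches the layer around s = 0, the dynamics reduce to first-order exponential decay of x, which is why the reaching phase and the chattering issue can be treated separately.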
Scientific applications exhibit good spatial and temporal memory access locality. It is possible to hide memory latency for the level 3 cache, and to reduce contention between multiple cores sharing a single level 3 cache, by using a prefetch cache that identifies data streams which can be profitably prefetched and decouples the cache line size mismatch between the L3 cache and the level 1 data cache. In this work, a design space exploration is presented that helped shape the design of the BlueGene/L supercomputer memory subsystem. The prefetch cache consists of a small number of 128 line buffers that speculatively prefetch data from the L3 cache. Since applications present some sequential access pattern, this prefetching scheme increases the likelihood that a request from the level 1 data cache is present in the prefetch cache. Since most compute-intensive applications contain a small number of data streams, it is sufficient for the prefetch cache to have a small number of line buffers to track and detect the data streams. This paper focuses on the evaluation of stream detection mechanisms and the influence of varying the replacement policies for stream prefetch caches.
['José R. Brunheroto', 'Valentina Salapura', 'Fernando F. Redigolo', 'Dirk Hoenicke', 'Alan Gara']
Data cache prefetching design space exploration for BlueGene/L supercomputer
186,382
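The sequential-stream detection idea described in the abstract above can be sketched as a toy model: a miss at line A becomes a candidate stream, a subsequent access at A+1 confirms it, and confirmed streams run ahead by a fixed prefetch depth. The confirmation rule and `depth` parameter are illustrative assumptions, not the BlueGene/L hardware design.

```python
class StreamPrefetcher:
    """Toy sequential-stream detector for a prefetch cache (hypothetical)."""

    def __init__(self, depth=2):
        self.depth = depth
        self.candidates = set()   # miss addresses awaiting confirmation
        self.prefetched = set()   # lines fetched speculatively

    def access(self, line):
        if line in self.prefetched:
            # hit in the prefetch cache: keep the stream running ahead
            self.prefetched.discard(line)
            for i in range(1, self.depth + 1):
                self.prefetched.add(line + i)
            return "prefetch-hit"
        if line - 1 in self.candidates:
            # two consecutive lines seen: a sequential stream is detected
            self.candidates.discard(line - 1)
            for i in range(1, self.depth + 1):
                self.prefetched.add(line + i)
            return "stream-start"
        self.candidates.add(line)
        return "miss"
```

A strictly sequential access pattern pays one miss plus one confirmation, after which every access hits in the prefetch cache, which is the behavior the paper's design space exploration evaluates.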
Background: Recent advances in proteomics technologies such as SELDI-TOF mass spectrometry have shown promise in the detection of early stage cancers. However, dimensionality reduction and classification are considerable challenges in statistical machine learning. We therefore propose a novel approach for dimensionality reduction and tested it using published high-resolution SELDI-TOF data for ovarian cancer. Results: We propose a method based on statistical moments to reduce feature dimensions. After refining and t-testing, the SELDI-TOF data are divided into several intervals. Four statistical moments (mean, variance, skewness and kurtosis) are calculated for each interval and are used as representative variables. The high dimensionality of the data can thus be rapidly reduced. To improve efficiency and classification performance, the data are further used in kernel PLS models. The method achieved average sensitivity of 0.9950, specificity of 0.9916, accuracy of 0.9935 and a correlation coefficient of 0.9869 for 100 five-fold cross validations. Furthermore, only one control was misclassified in leave-one-out cross validation. Conclusion: The proposed method is suitable for analyzing high-throughput proteomics data.
['Kailin Tang', 'Tonghua Li', 'Wenwei Xiong', 'Kai Chen']
Ovarian cancer classification based on dimensionality reduction for SELDI-TOF data
19,095
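The interval-moments reduction described in the abstract above (mean, variance, skewness, kurtosis per interval) can be sketched directly; the interval count and the exact moment definitions below are assumptions, not the authors' published settings.

```python
import numpy as np

def moment_features(spectrum, n_intervals=100):
    """Reduce a 1-D spectrum to 4 statistical moments per interval.

    Sketch of the dimensionality-reduction step only; the kernel PLS
    classifier from the abstract is not reproduced here.
    """
    segments = np.array_split(np.asarray(spectrum, dtype=float), n_intervals)
    feats = []
    for seg in segments:
        mu = seg.mean()
        var = seg.var()
        sd = np.sqrt(var)
        if sd == 0:
            skew, kurt = 0.0, 0.0        # constant interval: moments undefined
        else:
            z = (seg - mu) / sd
            skew = np.mean(z ** 3)
            kurt = np.mean(z ** 4) - 3.0  # excess kurtosis
        feats.extend([mu, var, skew, kurt])
    return np.array(feats)
```

A spectrum with tens of thousands of m/z points collapses to 4 × n_intervals features, which is the "rapid reduction" the abstract refers to.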
['Stephen M. Pizer', 'John M. Gauch', 'James M. Coggins', 'Timothy J. Cullip', 'Robin E. Fredericksen', 'Victoria Interrante']
Multiscale, Geometric Image Descriptions for Interactive Object Definition
338,304
Initial deployment and subsequent dynamic reconfiguration of a software system is difficult because of the interplay of many interdependent factors, including cost, time, application state, and system resources. As the size and complexity of software systems increase, procedures (manual or automated) that assume a static software architecture and environment are becoming untenable. We have developed a novel technique for carrying out the deployment and reconfiguration planning processes that leverages recent advances in the field of temporal planning. We describe a tool called Planit, which manages the deployment and reconfiguration of a software system utilizing a temporal planner. Given a model of the structure of a software system, the network upon which the system should be hosted, and a goal configuration, Planit will use the temporal planner to devise possible deployments of the system. Given information about changes in the state of the system, network and a revised goal, Planit will use the temporal planner to devise possible reconfigurations of the system. We present the results of a case study in which Planit is applied to a system consisting of various components that communicate across an application-level overlay network.
['Naveed Arshad', 'Dennis Heimbigner', 'Alexander L. Wolf']
Deployment and dynamic reconfiguration planning for distributed software systems
27,472
Many organizations today store large streams of transactional data in real time. This data can often show important changes in trends over time. In many commercial applications, it may be valuable to provide the user with an understanding of the nature of changes occurring over time in the data stream. In this paper, we discuss the process of analysing the significant changes and trends in data streams in a way which is understandable, intuitive and user-friendly.
['Charu C. Aggarwal']
An intuitive framework for understanding changes in evolving data streams
359,555
A common hypothesis is that students will more deeply understand dynamic systems and other complex phenomena if they construct computational models of them. Attempts to demonstrate the advantages of model construction have been stymied by the long time required for students to acquire skill in model construction. In order to make model construction a feasible vehicle for science instruction, the Dragoon system combined three simplifications: (1) a simple notation for models of dynamic systems, (2) a step-based tutoring system, and (3) problems that described the model to be constructed as well as the system represented by the model. In order to test whether these simplifications reduced the time for learning how to construct models while preserving the benefits of model construction over baseline instruction, three classroom studies were conducted. All studies were experiments, in that they compared classes using Dragoon to classes learning the same material without Dragoon. However, as classroom studies, they could not tightly control all sources of variation. The first study produced null results, but it compared learning across just one class period. The second study in 4 high school science classes showed that instruction based on Dragoon cost only one extra class period (about 50 min) out of 4 class periods and was more effective than the same content taught without Dragoon. A third study in 3 more high school science classes, where 2 Dragoon classes and 1 non-Dragoon class met for the same number of class periods, showed that Dragoon was more effective than the same content taught without Dragoon. The effect sizes were moderately large on both an open response test (d = 1.00) and a concept mapping task (d = 0.49). 
Thus, it appears that our efforts have simplified model construction to the point that it can be used in science instruction with no additional class time needed, and yet it still seems to be more effective than the same instruction done without model construction.
['Kurt VanLehn', 'Gregory K. W. K. Chung', 'Sachin Grover', 'Ayesha Madni', 'Jon Wetzel']
Learning Science by Constructing Models: Can Dragoon Increase Learning without Increasing the Time Required?
678,882
We consider the processes of achieving alignment in coordinated inter-organizational networks through a case study of a system development project in ARC Transistance, a network of European automobile clubs that cooperate to provide pan-European service. The theoretical contribution of the paper is, first, an extended strategic alignment model for inter-organizational networks that distinguishes between integration of IS with business strategy and infrastructure, and what we label ‘accordance’ between the strategies and infrastructures of the network and the member firms. Second, we propose that for a network organization, network and member strategies might be complementary as well as tightly coupled. We similarly argue that IS architectures for networks should strive for being ‘business strategy-neutral’ to more easily accommodate the diversity of members. Finally, we discuss how the process of developing a network information system can be a driver towards network alignment, but how the lack of effective governance structures makes alignment harder to achieve.
['Bernhard R. Katzy', 'Gordon Sung', 'Kevin Crowston']
Alignment in an inter-organisational network: the case of ARC transistance
807,879
In this paper, a new framework for evaluating a variety of computer vision systems and components is introduced. This framework is particularly well suited for domains such as classification or recognition systems, where blind application of the i.i.d. assumption would reduce an evaluation's accuracy. With few exceptions, most previous work on vision system evaluation does not include confidence intervals, since they are difficult to calculate and often come with strict requirements. We show how a set of previously overlooked replicate statistics tools can be used to obtain tighter confidence intervals for evaluation estimates while simultaneously reducing the amount of data and computation required to reach such sound evaluative conclusions. In the included application of the new methodology, the well-known FERET face recognition system evaluation is extended to incorporate standard errors and confidence intervals.
['Ross J. Micheals', 'Terrance E. Boult']
Efficient evaluation of classification and recognition systems
8,819
We propose a distributed control scheme for cyclic formations of multi-agent systems using relative position measurements in local coordinate frames. It is assumed that agents cannot communicate with each other and do not have access to global position information. For the case of three and four agents with desired formation defined as a regular polygon, we prove that under the proposed control, starting from almost any initial condition, agents converge to the desired configuration. Moreover, it is shown that the control is robust to the failure of any single agent. From Monte Carlo analysis, a conjecture is proposed to extend the results to any number of agents.
['Kaveh Fathian', 'Dmitrii Rachinskii', 'Tyler H. Summers', 'Nicholas R. Gans']
Distributed control of cyclic formations with local relative position measurements
973,300
Gives an overview of alternative publishing models to replace the academic journal. Reviews the roles of the journal in the academic enterprise. Describes the structure and operation of a proposed new academic publishing model called the Deconstructed Journal.
['John W T Smith']
The deconstructed journal
435,121
Real-time embedded systems are increasingly being built using commercial-off-the-shelf (COTS) components such as mass-produced peripherals and buses to reduce costs, time-to-market, and increase performance. Unfortunately, COTS-interconnect systems do not usually guarantee timeliness, and might experience severe timing degradation in the presence of high-bandwidth I/O peripherals. Moreover, peripherals do not implement any internal priority-based scheduling mechanism, hence, sharing a device can result in data of high priority tasks being delayed by data of low priority tasks. To address these problems, we designed a real-time I/O management system comprised of 1) real-time bridges with I/O virtualization capabilities, and 2) a peripheral scheduler. The proposed framework is used to transparently put the I/O subsystem of a COTS-based embedded system under the discipline of real-time scheduling, minimizing the timing unpredictability due to the peripherals sharing the bus. We also discuss computing the maximum delay due to buffered I/O data transactions as well as determining the buffer size needed to avoid data loss. Finally, we demonstrate experimentally that our prototype real-time I/O management system successfully exports multiple virtual devices for a single physical device and prioritizes I/O traffic, guaranteeing its timeliness.
['Emiliano Betti', 'Stanley Bak', 'Rodolfo Pellizzoni', 'Marco Caccamo', 'Lui Sha']
Real-Time I/O Management System with COTS Peripherals
221,844
With the increasing permeation of data into all dimensions of our information society, data is progressively becoming the basis for many products and services. It is hence becoming more and more vital to identify the means and methods for exploiting the value of this data. In this paper we provide our definition of the Data Value Network, where we specifically cater for non-tangible data products. We also propose a Demand and Supply Distribution Model with the aim of providing insight on how an entity can participate in the global data market by producing a data product, as well as a concrete implementation through the Demand and Supply as a Service. Through our contributions we project our vision of generating a new Economic Data Ecosystem that has the Web of Data at its core.
['Judie Attard', 'Fabrizio Orlandi', 'Sören Auer']
Data Value Networks: Enabling a New Data Ecosystem
994,010
Cinematic virtual reality (VR) aims to provide immersive visual experiences of real-world scenes on head-mounted displays. Current cinematic VR systems employ omnidirectional stereo videos from a fixed position, and therefore do not address head-motion parallax, which is an important cue for depth perception. We propose a new 3D video representation, referred to as depth augmented stereo panorama (DASP), to address this issue. DASP is developed considering data capture, postproduction, streaming, and rendering stages of the VR pipeline. The capabilities of this representation are evaluated by comparing the generated viewports with those from known 3D models. Results indicate that DASP can successfully create stereo and induce head-motion parallax in a predefined operating range.
['Jayant Thatte', 'Jean-Baptiste Boin', 'Haricharan Lakshman', 'Bernd Girod']
Depth augmented stereo panorama for cinematic virtual reality with head-motion parallax
883,051
The automated matching of mug-shot photographs with sketches drawn using eyewitness descriptions of criminals is a problem that has received much attention in recent years. However, most algorithms have been evaluated either on small datasets or using sketches that closely resemble the corresponding photos. In this paper, a method which extracts Multi-scale Local Binary Pattern (MLBP) descriptors from overlapping patches of log-Gabor-filtered images is used to obtain cross-modality templates for each photo and sketch. The Spearman Rank-Order Correlation Coefficient (SROCC) is then used for template matching. Log-Gabor filtering and MLBP provide global and local texture information, respectively, whose combination is shown to be beneficial for face photo-sketch recognition. Experimental results with a large database show that the proposed approach outperforms state-of-the-art methods, with a Rank-1 retrieval rate of 81.4%. Fusion with the intra-modality approach Eigenpatches improves the Rank-1 rate to 85.5%.
['Christian Galea', 'Reuben A. Farrugia']
Face photo-sketch recognition using local and global texture descriptors
961,255
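The Spearman Rank-Order Correlation Coefficient (SROCC) used for template matching in the abstract above is simply the Pearson correlation of ranks. The sketch below is a minimal scipy-free version with average ranks for ties; the cross-modality templates themselves (log-Gabor filtering plus MLBP) are not reproduced here.

```python
import numpy as np

def rankdata_avg(x):
    """Average ranks (1-based); tied values share their mean rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for val in np.unique(x):          # simple (O(n^2) worst case) tie handling
        mask = x == val
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(u, v):
    """Spearman rank-order correlation between two feature vectors."""
    return np.corrcoef(rankdata_avg(u), rankdata_avg(v))[0, 1]
```

For matching, each sketch template would be scored against every photo template with `spearman`, and the gallery entry with the highest score reported at Rank 1.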
Multi-resolution analysis (MRA) has been successfully used in image processing with the recent emergence of applications to texture classification. Several studies have investigated the discriminating power of wavelet-based features in various applications such as image compression, image denoising, and classification of natural textures. Recently, the curvelet and contourlet transforms have emerged as new multi-resolution analysis tools to deal with non-linear singularities present in the image. This article explores and proposes a texture based classification of remotely sensed multispectral images using features derived from the wavelet, curvelet and contourlet transforms. These features characterize the textural properties of the images and are used to train the classifier to recognize each texture class. Using these MRA based feature descriptors class separability is defined in feature space. The results are compared with Grey Level Co-occurrence Matrix (GLCM) based statistical features.
['Rizwan Ahmed Ansari', 'Krishna Mohan Buddhiraju']
Textural classification based on wavelet, curvelet and contourlet features
931,854
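Wavelet-based texture features of the kind described in the abstract above are typically per-subband energies. The sketch below computes them with a hand-rolled 2-D Haar decomposition in pure NumPy (an assumption: the paper does not state which wavelet it uses, and its curvelet/contourlet features and classifier are not reproduced).

```python
import numpy as np

def haar_energy_features(img, levels=2):
    """Per-subband mean-squared-energy features from a 2-D Haar decomposition.

    Requires image sides divisible by 2**levels. Returns 3 detail energies
    per level (LH, HL, HH) plus the final approximation energy.
    """
    a = np.asarray(img, dtype=float)
    feats = []
    for _ in range(levels):
        # one Haar step: average/difference along columns, then rows
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
        ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
        lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
        hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
        hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
        for band in (lh, hl, hh):
            feats.append(np.mean(band ** 2))  # detail subband energy
        a = ll                                 # recurse on the approximation
    feats.append(np.mean(a ** 2))              # coarsest approximation energy
    return np.array(feats)
```

Each texture class produces a characteristic energy signature across subbands, and those signatures are what the classifier separates in feature space.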
We present our state of the art multilingual text summarizer capable of single as well as multi-document text summarization. The algorithm is based on repeated application of TextRank on a sentence similarity graph, a bag of words model for sentence similarity and a number of linguistic pre- and post-processing steps using standard NLP tools. We submitted this algorithm for two different tasks of the MultiLing 2015 summarization challenge: Multilingual Single-document Summarization and Multilingual Multi-document Summarization.
['Stefan Thomas', 'Christian Beutenmüller', 'Xose de la Puente', 'Robert Remus', 'Stefan Bordag']
ExB Text Summarizer
614,659
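The TextRank-over-a-similarity-graph pipeline described in the abstract above can be sketched in a few lines: build a bag-of-words similarity matrix, run the PageRank power iteration on it, and keep the top-ranked sentences. The tokenizer, similarity normalization, and damping factor below are generic assumptions, not ExB's implementation.

```python
import numpy as np

def textrank_summary(sentences, k=2, d=0.85, iters=50):
    """Pick the k highest-TextRank sentences (hypothetical sketch)."""
    bags = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and bags[i] and bags[j]:
                # word-overlap similarity, length-normalized as in TextRank
                W[i, j] = len(bags[i] & bags[j]) / (
                    np.log(len(bags[i]) + 1) + np.log(len(bags[j]) + 1) + 1e-9)
    row = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row, out=np.zeros_like(W), where=row > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):                  # PageRank power iteration
        r = (1 - d) / n + d * (P.T @ r)
    top = sorted(np.argsort(r)[::-1][:k])   # keep original sentence order
    return [sentences[i] for i in top]
```

Multi-document summarization fits the same scheme by pooling sentences from all documents into one graph before ranking.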
Official statistics such as demographics, environment, health, social economy and education from national and subnational sources are a rich and important source of information for many important aspects of life, and deserve wider use and acknowledgement in education. Educators and their students would be able to get informed and at the same time participate in increasing the knowledge of how life is lived and can be improved. Much of this statistical information can be accessed on the Internet. This produces what is often called information overload, and people are increasingly faced with the problems of filtering and interpreting enormous quantities of information. We know that official statistics are used as a more or less important background for decisions, especially in public planning and policy making. In education, however, official statistics are much less recognized and used than they ought to be, and among the informed public they are used even less. Web-enabled GeoAnalytics is a technique that can help illustrate complex statistical data that are hard for the eye to uncover or even impossible to perceive or interpret. In this paper, we introduce novel "storytelling" means for the author (educator) to 1) select any spatiotemporal and multidimensional national or sub-national statistical data, 2) explore and discern trends and patterns, 3) orchestrate and describe metadata, 4) collaborate with colleagues to confirm findings, and 5) finally publish essential gained insight and knowledge embedded as a dynamic visualization "Vislet" in blogs or wikis with associated metadata. The author can guide the reader in the directions of both context and discovery while at the same time following the analyst's way of logical reasoning. We are moving away from a clear distinction between authors and readers, affecting the process through which knowledge is created and the traditional models which support editorial work. Value no longer relies solely on the content but also on the ability to access this information. Audiences are increasingly gathered around Web-enabled technologies, and this distribution channel is, more than ever, in control of the information value chain.
['Mikael Jern']
Educating Students in Official Statistics using Embedded Geovisual Analytics Storytelling Methods
609,903
Bell-shaped vibratory angular rate gyro (abbreviated as BVG) is a new type of Coriolis vibratory gyro that was inspired by Chinese traditional clocks. Its resonator is based on a variable-thickness, axisymmetric, multicurved surface shell whose characteristics directly influence the performance of the BVG. Compared with tuning-fork, vibrating-beam, shell and comb structures, the BVG structure not only bears high overload and high impact, but also has a higher frequency than the same-sized hemispherical resonator gyroscope (HRG) and the traditional cylinder vibratory gyroscope, which helps overcome disturbances from the exterior environment. It can be widely applied in high-dynamic, low-precision angular rate measurement occasions. The main work is as follows: the paper analyzes the structure and basic principle of the BVG and investigates the bell-shaped resonator's mathematical model. Reasonable structural parameters are obtained from finite element analysis and an intelligent platform. Current solid vibration gyro theory is used to analyze the structural characteristics and principles of the BVG. The bell-shaped resonator is simplified as a paraboloid-of-revolution mechanical model with a fixed closed end and a free open end, and its natural frequencies and vibration modes are obtained from the theory of elasticity. The structural parameters are determined with the orthogonal method through research on the resonator's structural parameters, and modal, stress and impact analyses are carried out with the chosen parameters. Finally, a turntable experiment verifies the gyro effect of the BVG.
['Zhong Su', 'Mengyin Fu', 'Qing Li', 'Ning Liu', 'Hong Liu']
Research on Bell-Shaped Vibratory Angular Rate Gyro's Character of Resonator
178,955
High-level modeling for operational amplifiers (opamps) has been previously carried out successfully using models generated by published automated model generation approaches. Furthermore, high-level fault modeling (HLFM) has been shown to work reasonably well using manually designed fault models. However, no evidence shows that published automated model generation approaches based on opamps have been used in HLFM. This paper describes HLFM for analog circuits using an adaptive self-tuning algorithm called multiple model generation system using delta. The generation algorithms and simulation models were written in MATLAB and the hardware description language VHDL-AMS, respectively. The properties of these self-tuning algorithms were investigated by modeling complementary metal-oxide-semiconductor opamps, and comparing simulations using the HLFM against those of the original SPICE (Simulation Program with Integrated Circuit Emphasis) circuit using transient analysis. Results show that the models can handle both linear and nonlinear fault situations with better accuracy than previously published HLFMs.
['Likun Xia', 'Ian M. Bell', 'A.J. Wilkinson']
Automated Model Generation Algorithm for High-Level Fault Modeling
392,767
This is a reply to the comment by H.-C. Wu and C.-T. Sun on the paper "A Computational Evolutionary Approach to Evolving Game Strategies and Cooperation." Key design problems, limitations and potential applications are discussed.
['Francisco Azuaje']
Reply to comments on 'A computational evolutionary approach to evolving game strategy and Cooperation'
333,386
Cost-effective and accurate fault simulation of very large digital designs on engineering workstations is proposed. The hierarchical approach reduces memory requirements drastically by storing the structure of common repeated subcircuits only once. The approach allows flexible multilevel simulation. The simulation algorithms are at the switch level so that general MOS digital designs with bidirectional signal flow can be handled, and both stuck-at and transistor faults are treated accurately. The fault simulation algorithms have been implemented as a prototype that was used to determine the fault grade of a model of the Motorola 68000 microprocessor on SUN Microsystems workstations.
['Daniel G. Saab', 'Robert B. Mueller-Thuns', 'David Blaauw', 'Joseph T. Rahmeh', 'Jacob A. Abraham']
Fault grading of large digital systems
232,307
In computer communication networks, routing is often accomplished by maintaining copies of the network topology and dynamic performance characteristics in various network nodes. The present paper describes an algorithm that allows complete flexibility in the placement of the topology information. In particular, we assume that an arbitrary subset of network nodes are capable of maintaining the topology. In this environment, protocols are defined to allow automatic updates to flow between these more capable nodes. In addition, protocols are defined to allow less capable nodes to report their topology data to the major nodes, and acquire route information from them.
['Jeffrey M. Jaffe', 'Adrian Segall']
Automatic Update of Replicated Topology Databases
36,887
To date, the best algorithms for performing placement on Field-Programmable Gate Arrays (FPGAs) are based on Simulated Annealing (SA). Unfortunately, these algorithms are not scalable due to the long convergence time of the latter. With an aim towards developing a scalable FPGA placer we present an analytic placement method based on a near-linear net model, called star+. The star+ model is a variant of the well-known star model and is continuously differentiable - a requirement of analytic methods that rely on the existence of first- and second-order derivatives. Most importantly, with the star+ model incremental changes in cost resulting from block movement can be computed in O(1) time, regardless of the size of the net. This makes it possible to construct time-efficient solution methods based on conjugate gradient and successive over-relaxation for solving the resulting non-linear equation system. When compared to VPR, the current state-of-the-art placer based on SA, our analytic method is able to obtain an 8-9% reduction in critical-path delay while achieving a speedup of nearly 5x when VPR is run in its fast mode.
['Ming Xu', 'Gary William Grewal', 'Shawki Areibi']
StarPlace: A new analytic method for FPGA placement
411,921
Heuristic search algorithms are popular Artificial Intelligence methods for solving the shortest-path problem. This research contributes new heuristic search algorithms that are either faster or scale up to larger problems than existing algorithms. Our contributions apply to both online and offline tasks. For online tasks, existing real-time heuristic search algorithms learn better informed heuristic values and sometimes eventually converge to a shortest path by repeatedly executing the action leading to a successor state with a minimum cost-to-goal estimate. In contrast, we claim that real-time heuristic search converges faster to a shortest path when it always selects an action leading to a state with a minimum f-value (i.e., a minimum estimate of the cost of a shortest path from start to goal via the state), just like in the offline A* search algorithm. We support this claim by implementing this new non-trivial action-selection rule in FALCONS and by showing empirically that FALCONS significantly reduces the number of actions to convergence of a state-of-the-art real-time search algorithm. For offline tasks, we scale up two best-first search approaches. First, a greedy variant of A* called WA* is known (1) to consume less memory to find solutions of equal cost when it is diversified (i.e., when it performs expansions in parallel), as in KWA*; and (2) to solve larger problems when it is committed (i.e., when it chooses the state to expand next among a fixed-size subset of the set of generated but unexpanded states), as in MSC-WA*. We claim that WA* solves even larger problems when it is enhanced with both diversity and commitment. We support this claim with our MSC-KWA* algorithm. Second, it is known that breadth-first search solves larger problems when it prunes unpromising states, resulting in the beam search algorithm. We claim that beam search quickly solves even larger problems when it is enhanced with backtracking based on limited discrepancy search. We support this claim with our BULB algorithm. We demonstrate the improved scaling of MSC-KWA* and BULB empirically in three standard benchmark domains. Finally, we apply anytime variants of BULB to the multiple sequence alignment problem in biology.
['David Furcy', 'Sven Koenig']
Speeding up the convergence of online heuristic search and scaling up offline heuristic search
81,678
Salutogenesis is now accepted as a part of the contemporary model of disease: an individual is not only affected by pathogenic factors in the environment, but those that promote well-being or salutogenesis. Given that "environment" extends to include the built environment, promotion of salutogenesis has become part of the architectural brief for contemporary healthcare facilities, drawing on an increasing evidence-base. Salutogenesis is inextricably linked with the notion of person-environment "fit". MyRoom is a proposal for an integrated architectural and pervasive computing model, which enhances psychosocial congruence by using real-time data indicative of the individual's physical status to enable the environment of his/her room (colour, light, temperature) to adapt on an on-going basis in response to bio-signals. This work is part of the PRTLI-IV funded programme NEMBES, investigating the use of embedded technologies in the built environment. Different care contexts require variations in the model, and iterative prototyping investigating use in different contexts will progressively lead to the development of a fully-integrated adaptive salutogenic single-room prototype.
['Cathy Dalton', 'Kevin McCartney']
Salutogenesis: A new paradigm for pervasive computing in healthcare environments?
186,645
Estimating accurately important nodes for routing in modern and future networks is a key process with numerous benefits. Towards this goal, in this paper we propose Hyperbolic Traffic Load Centrality (HTLC), as a novel alternative to the Traffic Load Centrality (TLC) metric, used for ranking nodes with respect to their importance in the routing operation. HTLC is based on network embedding in hyperbolic space, while assuming paths paved by greedy routing over hyperbolic coordinates, which requires less computational effort than shortest path routing (in terms of hop distances) used for TLC. Greedy routing in hyperbolic space also yields paths with lengths very close to the shortest ones for the social networks of interest bearing the scale-free property. Through analysis and simulation, we demonstrate that HTLC requires significantly lower computational time than TLC, and despite being more suitable for greedy routing constraints over hyperbolic space, it nevertheless achieves a close approximation of TLC for networks with scale-free properties when assuming shortest path routing. Thus, it can substitute TLC when analyzing very large network topologies.
['Eleni Stai', 'Konstantinos Sotiropoulos', 'Vasileios Karyotis', 'Symeon Papavassiliou']
Hyperbolic Traffic Load Centrality for large-scale complex communications networks
835,770
This paper presents a VLSI design of a Tomlinson-Harashima (TH) precoder for multi-user MIMO (MU-MIMO) systems. The TH precoder consists of LQ decomposition (LQD), interference cancellation (IC), and weight coefficient multiplication (WCM) units. The LQ decomposition unit is based on an application-specific instruction-set processor (ASIP) architecture with floating-point arithmetic for high-accuracy operations. In the IC and WCM units with fixed-point arithmetic, the proposed architecture uses an arrayed pipeline structure to shorten the circuit critical path delay. The implementation results show that the proposed architecture reduces circuit area and power consumption by 11% and 15%, respectively.
['Kosuke Shimazaki', 'Shingo Yoshizawa', 'Yasuyuki Hatakawa', 'Tomoko Matsumoto', 'Satoshi Konishi', 'Yoshikazu Miyanaga']
A VLSI Design of a Tomlinson-Harashima Precoder for MU-MIMO Systems Using Arrayed Pipelined Processing
319,801
This paper studies the Galerkin finite element approximation of time-fractional Navier–Stokes equations. The discretization in space is done by the mixed finite element method. The time Caputo-fractional derivative is discretized by a finite difference method. The stability and convergence properties related to the time discretization are discussed and theoretically proven. Under certain conditions on the solution and the initial value, we give error estimates for both semidiscrete and fully discrete schemes. Finally, a numerical example is presented to demonstrate the effectiveness of our numerical methods.
['Xiaocui Li', 'Xiaoyuan Yang', 'Yinghan Zhang']
Error Estimates of Mixed Finite Element Methods for Time-Fractional Navier–Stokes Equations
861,610
The concept of website quality comprises many criteria: a quality-of-service perspective, a user perspective, a content perspective, or indeed a usability perspective. This research conducts tests to measure the quality of the e-government websites of five Asian countries via online web diagnostic tools. We propose a methodology for determining and evaluating the best e-government sites based on many criteria of website quality. The approach has been implemented using the analytic hierarchy process (AHP); the proposed model uses AHP pairwise comparisons and the measure scale to generate the weights for the criteria, which guarantees a fairer weighting of criteria. Applying the AHP approach to website evaluation has resulted in significant acceleration of implementation, raised the overall effectiveness with respect to the underlying methodology, and ultimately enabled a more efficient procedure. The result of this study confirms that Asian e-government websites neglect performance and quality criteria.
['P. D. D. Dominic', 'Handaru Jati', 'Ganesan Kannabiran']
Performance evaluation on quality of Asian e-government websites – an AHP approach
181,172
This paper presents a novel method for diffuse texture extraction from a set of multiview images. We address the problem of specularities removal by pixel value minimization across multiple automatically aligned input images. Our method is based on the fact that the presence of specular reflection only increases the captured pixel value. Moreover, we propose an algorithm for estimation of material region in the image by optimization on the GPU. Previous methods for diffuse component separation from multiple images require a complex hardware setup. In contrast to that, our method is highly usable because only a mobile phone is needed to reconstruct diffuse texture in an environment with arbitrary lighting. Moreover, our method is fully automatic and besides capturing of images from multiple viewpoints it does not require any user intervention. Many fields can benefit from our method, particularly material reconstruction, image processing, and digital content creation.
['Peter Kán', 'Hannes Kaufmann']
Mobile Multiview Diffuse Texture Extraction
625,505
This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates, in a single platform, the capabilities needed for research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, a 4-DOF torso, a 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. In addition, an adjustable body sustains the dual-arm system, providing an extended workspace, and mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8 kg and weigh 5.5 kg, thus achieving a payload-to-weight ratio of 1.45. The paper provides an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.
['Johannes Lemburg', 'José de Gea Fernández', 'Markus Eich', 'Dennis Mronga', 'Peter Kampmann', 'Andreas Vogt', 'Achint Aggarwal', 'Yuping Shi', 'Frank Kirchner']
AILA - design of an autonomous mobile dual-arm robot
108,140
BPMN Decision Footprint: Towards Decision Harmony Along BI Process
['Riadh Ghlala', 'Zahra Kodia Aouina', 'Lamjed Ben Said']
BPMN Decision Footprint: Towards Decision Harmony Along BI Process
891,189
The central question in my talk is how existing knowledge, in the form of available labeled datasets, can be (re-)used for solving a new (and possibly) unrelated image classification task. This brings together two of my recent research directions, which I'll discuss both. First, I'll present some recent works in zero-shot learning, where we use ImageNet objects and semantic embeddings for various classification tasks. Second, I'll present our work on active-learning. To re-use existing knowledge we propose to use zero-shot classifiers as prior information to guide the learning process by linking the new task to the existing labels. The work discussed in this talk has been published at ACM MM, CVPR, ECCV, and ICCV.
['Thomas Mensink']
Learning to Reuse Visual Knowledge
906,438
As digital images are increasing exponentially, it is very attractive to develop more effective machine learning frameworks for automatic image annotation. In order to address the most prominent issues (huge inter-concept visual similarity and huge intra-concept visual diversity) more effectively, an inter-related non-parametric Bayesian classifier training framework to support multi-label image annotation is developed. For this purpose, an image is viewed as a bag, and its instances are the over-segmented regions within it, found automatically with an adopted Otsu's-method segmentation algorithm. Here the firefly algorithm (FA) is utilised to enhance Otsu's method towards finding optimal multilevel thresholds using the maximum intra-cluster variance. FA has a high convergence speed and a low computation cost compared with some evolutionary algorithms. By generating blobs (the extracted features for segmented regions), the concepts learned by the classifier tend to relate textually to the words that occur most often in the data and visually to the segments that are easiest to recognise. This allows a word to be assigned to each object (localised labelling). Extensive experiments on Corel benchmark image datasets validate the effectiveness of the proposed solution to the multi-label image annotation and label ranking problem.
['Saad M. Darwish']
Combining firefly algorithm and Bayesian classifier: new direction for automatic multilabel image annotation
743,333
In recent papers we have introduced Mobile Synchronizing Petri Nets, a new model for mobility based on coloured Petri nets. It allows the description of systems composed of a collection of (possibly mobile) hardware devices and mobile agents, both modelled in a homogeneous way and abstracting from middleware details. Our basic model introduced a colour to describe localities, but still lacked appropriate primitives to deal with security, and in fact it was equivalent to P/T nets. Then, we introduced the primitives to cope with security: a new colour for identifiers, basically corresponding to the natural numbers, that are created by means of a special transition. This mechanism allows us to deal with authentication issues. In this paper we discuss the expressiveness of the extended model with the authentication primitives. More specifically, we study several instances of the classical reachability and coverability problems. Finally, we also study a more abstract version of the mechanism to create identifiers, using abstract names, close to those in the π-calculus or the Ambient Calculus. We have proved that both models are strictly in between P/T nets and Turing machines.
['Fernando Rosa-Velardo', 'David de Frutos-Escrig', 'Olga Marroquín-Alonso']
On the Expressiveness of Mobile Synchronizing Petri Nets
121,070
Some structural properties of a general Petri net (PN) are considered. The paper endeavors to improve the link between PNs, the theory of matrix analysis, and linear inequalities. Necessary and/or sufficient conditions for consistency, conservativeness, boundedness, and repetitiveness are given in terms of certain determinants of the incidence matrix of the net. Besides, when the incidence matrix is square, theorems are derived in terms of the modified-incidence-matrix eigenvalues and, thus, inherently reduce the state-space explosion problem. Examples are worked out to illustrate the results.
['Rachid Bouyekhf', 'Abdellah El Moudni']
On the analysis of some structural properties of Petri nets
296,179
The achievable rate of a wideband multi-input single-output channel with multi-carrier transmission is studied with limited feedback of channel state information (CSI). The set of sub-channel vectors are assumed to be jointly quantized and relayed back to the transmitter. Given a fixed feedback rate, the performance of an optimal joint quantization scheme can be characterized by the rate-distortion bound. The distortion metric is the average loss in capacity (forward rate) relative to the capacity with perfect channel state information at the transmitter and receiver. The corresponding rate distortion function gives the forward capacity as a function of feedback rate, and is determined explicitly by casting the minimization of mutual information in the rate-distortion problem as an optimal control problem. Numerical results show that when the feedback rate is relatively small, the rate-distortion bound significantly outperforms separate quantization of the state information of each sub-channel.
['Mingguang Xu', 'Dongning Guo', 'Michael L. Honig']
Limited feedback for multi-carrier beamforming: A rate-distortion approach
544,819
It is known that discretization of a continuous deconvolution problem can alleviate the ill-posedness of the problem. The currently used circulant matrix model, however, does not play such a role. Moreover, the approximation of deconvolution problems by the circulant matrix model is rational only if the size of the kernel function is very small. We propose an aperiodic model of deconvolution. For discrete and finite deconvolution problems the new model is an exact one. In the general case, the new model can lead to a nonsingular system of equations that has a lower condition number than the circulant one, and the related computations in the deconvolution can be done efficiently by means of the DFT technique, as is the case for circulant matrices. The rationality of the new model holds without regard to the size of the kernel and the image. The use of the aperiodic model is illustrated by gradient-based algorithms.
['Zou Mou-Yan', 'Rolf Unbehauen']
On the computational model of a kind of deconvolution problem
104,681
Recent research has focused on designing new architectures and protocols for pure optical grooming without resorting to fast optical switching. This is achieved in three steps: (1) the circuit is configured in the form of a path or a tree; (2) optical devices like couplers/splitters are used to allow a wavelength to aggregate traffic from multiple transmitters and/or receivers through one of the following means: point to point, point to multi-point, multi-point to point, and multi-point to multi-point; (3) an arbitration mechanism is provided to avoid contention among end users of the circuit. In this paper, we focus on the design of mesh networks that aggregate traffic at the path level. We develop shared and mixed protection algorithms for guaranteed survival from single link failures in the context of dynamic traffic for mesh networks. Based on our simulations on random graphs, we conclude that light-trails, which enable multi-point to multi-point aggregation, perform multiple orders of magnitude better than lightpaths, which allow only point to point aggregation.
['Srivatsan Balasubramanian', 'Arun K. Somani']
Dynamic Survivable Network Design for Path Level Traffic Grooming in WDM Optical Networks
199,568
The weighted entropy $H^{\rm w}_\phi (X)=H^{\rm w}_\phi (f)$ of a random variable $X$ with values $x$ and a probability-mass/density function $f$ is defined as the mean value ${\mathbb E} I^{\rm w}_\phi(X)$ of the weighted information $I^{\rm w}_\phi (x)=-\phi (x)\log\,f(x)$. Here $x\mapsto\phi (x)\in{\mathbb R}$ is a given weight function (WF) indicating a 'value' of outcome $x$. For an $n$-component random vector ${\mathbf{X}}_0^{n-1}=(X_0,\ldots ,X_{n-1})$ produced by a random process ${\mathbf{X}}=(X_i,i\in{\mathbb Z})$, the weighted information $I^{\rm w}_{\phi_n}({\mathbf x}_0^{n-1})$ and weighted entropy $H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$ are defined similarly, with an WF $\phi_n({\mathbf x}_0^{n-1})$. Two types of WFs $\phi_n$ are considered, based on additive and a multiplicative forms ($\phi_n({\mathbf x}_0^{n-1})=\sum\limits_{i=0}^{n-1}{\varphi} (x_i)$ and $\phi_n({\mathbf x}_0^{n-1})=\prod\limits_{i=0}^{n-1}{\varphi} (x_i)$, respectively). The focus is upon ${\it rates}$ of the weighted entropy and information, regarded as parameters related to ${\mathbf{X}}$. We show that, in the context of ergodicity, a natural scale for an asymptotically additive/multiplicative WF is $\frac{1}{n^2}H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$ and $\frac{1}{n}\log\;H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$, respectively. This gives rise to ${\it primary}$ ${\it rates}$. The next-order terms can also be identified, leading to ${\it secondary}$ ${\it rates}$. We also consider emerging generalisations of the Shannon-McMillan-Breiman theorem.
['Yuri Suhov', 'Izabella Stuhl']
Weighted information and entropy rates
968,673
Over the past decade, many data centers have been constructed around the world due to the explosive growth of data volume and type. The cost and energy consumption have become the most important challenges of building those data centers. Data centers today use commodity computers and switches instead of high-end servers and interconnections for cost-effectiveness. In this paper, we propose a new type of interconnection networks called Exchanged Cube-Connected Cycles (ExCCC) . The ExCCC network is an extension of Exchanged Hypercube (EH) network by replacing each node with a cycle. The EH network is based on link removal from a Hypercube network, which makes the EH network more cost-effective as it scales up. After analyzing the topological properties of ExCCC , we employ commodity switches to construct a new class of data center network models, namely ExCCC-DCN , by leveraging the advantages of the ExCCC architecture. The analysis and experimental results demonstrate that the proposed ExCCC-DCN models significantly outperform four state-of-the-art data center network models in terms of the total cost, power consumption, scalability, and other static characteristics. It achieves the goals of low cost, low energy consumption, high network throughput, and high scalability simultaneously.
['Zhen Zhang', 'Yuhui Deng', 'Geyong Min', 'Junjie Xie', 'Shuqiang Huang']
ExCCC-DCN: A Highly Scalable, Cost-Effective and Energy-Efficient Data Center Structure
889,880
The most well known search techniques are perhaps the PageRank and HITS algorithms. In this paper we argue that these algorithms miss an important dimension, the temporal dimension. Quality pages in the past may not be quality pages now or in the future. These techniques favor older pages because these pages have many in-links accumulated over time. New pages, which may be of high quality, have few or no in-links and are left behind. Research publication search has the same problem. If we use the PageRank or HITS algorithm, those older or classic papers will be ranked high due to the large number of citations that they received in the past. This paper studies the temporal dimension of search in the context of research publication. A number of methods are proposed to deal with the problem based on analyzing the behavior history and the source of each publication. These methods are evaluated empirically. Our results show that they are highly effective.
['Philip S. Yu', 'Xin Li', 'Bing Liu']
Adding the Temporal Dimension to Search: A Case Study in Publication Search
245,326
Profiling techniques have greatly advanced in recent years. Extensive amounts of dynamic information can be collected (e.g., control flow, address and data values, data, and control dependences), and sophisticated dynamic analysis techniques can be employed to assist in improving the performance and reliability of software. In this chapter we describe a novel representation called whole execution traces that can hold a vast amount of dynamic information in a form that provides easy access to this information during dynamic analysis. We demonstrate the use of this representation in locating faulty code in programs through dynamic-slicing- and dynamic-matching-based analysis of dynamic information generated by failing runs of faulty programs.
['Xiangyu Zhang', 'Neelam Gupta', 'Rajiv Gupta']
Whole Execution Traces and Their Use in Debugging
802,088
In this paper, we consider the performance analysis of a decode-and-forward (DF) cooperative relaying (CR) scheme over bursty impulsive noise channel. As compared to existing literature, here, we generalize the performance analysis to the multi-relay scenario with and without considering error propagation from the relays. For this scheme, we evaluate the bit error rate (BER) performance in the presence of Rayleigh fading with a maximum a posteriori (MAP) receiver. From the obtained results it is seen that, similar to single relay scheme, the proposed MAP receiver attains the lower bound derived for multi-relay DF CR scheme also, and performs significantly better than the conventional schemes developed for additive white Gaussian noise (AWGN) channel and memoryless impulsive noise channel. Moreover, the performance improvement of optimal MAP receiver over the memoryless receivers is substantial with increasing the number of relays.
['Md. Sahabul Alam', 'Fabrice Labeau']
Effect of Bursty Impulsive Noise on the Performance of Multi-Relay DF Cooperative Relaying Scheme
838,454
In this paper we propose to use maximal entropy random walk on a graph for tampering localization in digital image forensics. Our approach serves as an additional post-processing step after conventional sliding-window analysis with a forensic detector. Strong localization property of this random walk will highlight important regions and attenuate the background - even for noisy response maps. Our evaluation shows that the proposed method can significantly outperform both the commonly used threshold-based decision, and the recently proposed optimization-based approach with a Markovian prior.
['Pawel Korus', 'Jiwu Huang']
Improved Tampering Localization in Digital Image Forensics Based on Maximal Entropy Random Walk
590,617
A fully integrated passive UHF RFID tag chip compatible with the ISO18000-6B protocol is presented. In order to save die area, an all-CMOS-transistor regulator is adopted. An NMOS-based traditional Dickson rectifier is employed with the advantage of circuit simplicity. An ultra-low power ring oscillator with temperature and process compensation is designed to guarantee the accuracy of the clock, which has a direct impact on the accuracy of the backscatter link frequency (BLF). By utilizing several low power design approaches, a low power baseband processor is achieved. The whole tag chip is fabricated in TSMC 0.18 μm CMOS technology and the chip size is 730 μm × 605 μm. Measurement results show that the total power consumption of the tag is about 7 μW with a sensitivity of −13.1 dBm and an operational range of about 7.5 m under 4 W effective isotropic radiated power (EIRP), and the accuracy of the BLF is within ±10%. With the long operational range, the tag is suitable for many commercial applications, such as supply-chain and logistics management as well as location sensing systems.
['Liangbo Xie', 'Jiaxin Liu', 'Yao Wang', 'Chuan Yin', 'Guangjun Wen']
Design and implementation of a passive UHF RFID tag with a temperature and process compensation oscillator
627,757
Capacity restoration in a decentralized assembly system with supply disruption risks
['Guo Li', 'Lin Li', 'Ying Zhou', 'Xu Guan']
Capacity restoration in a decentralized assembly system with supply disruption risks
836,371
In this paper, we present a highly scalable algorithm for structurally clustering webpages for extraction. We show that, using only the URLs of the webpages and simple content features, it is possible to cluster webpages effectively and efficiently. At the heart of our techniques is a principled framework, based on the principles of information theory, that allows us to effectively leverage the URLs, and combine them with content and structural properties. Using an extensive evaluation over several large full websites, we demonstrate the effectiveness of our techniques, at a scale unattainable by previous techniques.
['Lorenzo Blanco', 'Nilesh N. Dalvi', 'Ashwin Machanavajjhala']
Highly efficient algorithms for structural clustering of large websites
194,415
A binary code is called a superimposed cover-free $(s,\ell)$-code if the code is identified by the incidence matrix of a family of finite sets in which no intersection of $\ell$ sets is covered by the union of $s$ others. A binary code is called a superimposed list-decoding $s_L$-code if the code is identified by the incidence matrix of a family of finite sets in which the union of any $s$ sets can cover not more than $L-1$ other sets of the family. For $L=\ell=1$, both of the definitions coincide and the corresponding binary code is called a superimposed $s$-code. Our aim is to obtain new lower and upper bounds on the rate of the given codes. In particular, we derive lower bounds on the rates of a superimposed cover-free $(s,\ell)$-code and list-decoding $s_L$-code based on the ensemble of constant weight binary codes. Also, we establish an upper bound on the rate of superimposed list-decoding $s_L$-code.
["Arkadii G. D'yachkov", 'Nikita Polyanskii', 'Vladislav Yu. Shchukin', 'I. V. Vorobyev']
Bounds on the rate of disjunctive codes (in Russian)
756,932
The Terrain Guarding Problem (TGP), which is known to be NP-complete, asks for a smallest set of guard locations on a terrain $T$ such that every point on $T$ is visible to a guard. Here, we study this problem on 1.5D orthogonal terrains where the edges are bound to be horizontal or vertical. We propose a 2-approximation algorithm that runs in O($n \log m$) time, where $n$ and $m$ are the sizes of the input and output, respectively. This is an improvement over the previous best algorithm, which is a 2-approximation with O($n^2$) running time.
['Yangdi Lyu', 'Alper Üngör']
A Fast 2-Approximation Algorithm for Guarding Orthogonal Terrains
727,789
There exist multiple activity recognition solutions offering good results under controlled conditions. However, little attention has been given to the development of functional systems operating in realistic settings. In that vein, this work aims at presenting the complete process for the design, implementation and evaluation of a real-time activity recognition system. The proposed recognition system consists of three wearable inertial sensors used to register the user body motion, and a mobile application to collect and process the sensory data for the recognition of the user activity. The system not only shows good recognition capabilities after offline evaluation but also after analysis at runtime. In view of the obtained results, this system may serve for the recognition of some of the most frequent daily physical activities.
['Oresti Banos', 'Miguel Damas', 'Alberto Guillén', 'Luis Javier Herrera', 'Héctor Pomares', 'Ignacio Rojas', 'Claudia Villalonga', 'Sungyoung Lee']
On the Development of a Real-Time Multi-sensor Activity Recognition System
623,855
An Answer Set Programming Framework for Reasoning About Truthfulness of Statements by Agents.
['Tran Cao Son', 'Enrico Pontelli', 'Michael Gelfond', 'Marcello Balduccini']
An Answer Set Programming Framework for Reasoning About Truthfulness of Statements by Agents.
990,106
With the rapid growth of Web services in the past decade, QoS-aware Web service recommendation is becoming more and more critical. Since collecting Web service QoS information requires much time and effort, and is sometimes even impractical, service QoS values are often missing. There has been some work on predicting missing QoS values using traditional collaborative filtering methods based on a static user-service model. However, QoS values are highly related to the invocation context (e.g., QoS values vary over time). By incorporating time as a third, dynamic dimension, we present a Temporal QoS-aware Web Service Recommendation Framework to predict missing QoS values under various temporal contexts. Further, we formalize this problem as a generalized tensor factorization model and propose a Non-negative Tensor Factorization (NTF) algorithm that can deal with the triadic relations of the user-service-time model. Extensive experiments are conducted on our real-world Web service QoS dataset collected on PlanetLab, which comprises service invocation response-time and throughput values from 343 users on 5817 Web services at 32 time periods. The comprehensive experimental analysis shows that our approach achieves better prediction accuracy than other approaches.
['Wancai Zhang', 'Hailong Sun', 'Xudong Liu', 'Xiaohui Guo']
Temporal QoS-aware web service recommendation via non-negative tensor factorization
12,940
As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.
['Luming Zhang', 'Mingli Song', 'Qi Zhao', 'Xiao Liu', 'Jiajun Bu', 'Chun Chen']
Probabilistic Graphlet Transfer for Photo Cropping
249,486
The goal of this study was to determine whether virtual reality graded exposure therapy (VRGET) was equally efficacious, more efficacious, or less efficacious than imaginal exposure therapy in the treatment of fear of flying. Thirty participants (age = 39.8 ± 9.7) with a confirmed DSM-IV diagnosis of specific phobia, fear of flying, were randomly assigned to one of three groups: VRGET with no physiological feedback (VRGETno), VRGET with physiological feedback (VRGETpm), or systematic desensitization with imaginal exposure therapy (IET). Eight sessions were conducted once a week. During each session, physiology was measured to give an objective measurement of improvement over the course of exposure therapy. In addition, self-report questionnaires, subjective ratings of anxiety (SUDs), and behavioral observations (included here as flying behavior before beginning treatment and at a three-month posttreatment followup) were included. In the analysis of results, the chi-square test of behavioral observations at the three-month posttreatment followup revealed a statistically significant difference in flying behavior between the groups [χ²(4) = 19.41, p < 0.001]. Only one participant (10%) who received IET, eight of the ten participants (80%) who received VRGETno, and ten of the ten participants (100%) who received VRGETpm reported an ability to fly without medication or alcohol at the three-month followup. Although this study included small sample sizes for the three groups, the results showed VRGET was more effective than IET in the treatment of fear of flying. They also suggest that physiological feedback may add to the efficacy of VR treatment.
['Brenda K. Wiederhold', 'Dong Pyo Jang', 'Richard G. Gevirtz', 'Sun I. Kim', 'In Y. Kim', 'Mark D. Wiederhold']
The treatment of fear of flying: a controlled study of imaginal and virtual reality graded exposure therapy
155,669
Downwards passes on binary trees are essentially functions which pass information down a tree, from the root towards the leaves. Under certain conditions, a downwards pass is both ‘efficient’ (computable in a functional style in parallel time proportional to the depth of the tree) and ‘manipulable’ (enjoying a number of distributivity properties useful in program construction); we call a downwards pass satisfying these conditions a downwards accumulation. In this paper, we show that these conditions in fact yield a stronger conclusion: the accumulation can be computed in parallel time proportional to the logarithm of the depth of the tree, on a CREW PRAM machine.
['Jeremy Gibbons']
Computing Downwards Accumulations on Trees Quickly
410,283
Frugal Traffic Monitoring with Autonomous Participatory Sensing.
['Vladimir Coric', 'Nemanja Djuric', 'Slobodan Vucetic']
Frugal Traffic Monitoring with Autonomous Participatory Sensing.
748,919
We study the consensus-halving problem of dividing an object into two portions, such that each of $n$ agents has equal valuation for the two portions. The $\epsilon$-approximate consensus-halving problem allows each agent to have an $\epsilon$ discrepancy on the values of the portions. We prove that computing $\epsilon$-approximate consensus-halving solution using $n$ cuts is in PPA, and is PPAD-hard, where $\epsilon$ is some positive constant; the problem remains PPAD-hard when we allow a constant number of additional cuts. It is NP-hard to decide whether a solution with $n-1$ cuts exists for the problem. As a corollary of our results, we obtain that the approximate computational version of the Continuous Necklace Splitting Problem is PPAD-hard when the number of portions $t$ is two.
['Aris Filos-Ratsikas', 'Søren Kristoffer Stiil Frederiksen', 'Paul W. Goldberg', 'Jie Zhang']
Hardness Results for Consensus-Halving
891,944
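The existence argument behind consensus halving for a single agent is the intermediate value theorem: as a cut point moves right, the value of the left piece grows continuously from zero to the total, so it crosses the halfway mark somewhere. As a rough, hypothetical illustration of finding an ε-approximate one-agent cut by bisection (this is not the paper's PPA/PPAD machinery, which concerns n agents and n cuts; the valuation density used here is invented):

```python
def approx_halving_cut(density, eps=1e-6, samples=1000):
    """Find x in [0, 1] so that the agent's value of [0, x] is within
    eps of half the total, for a nonnegative valuation density.
    Uses midpoint Riemann sums plus bisection (illustrative only)."""
    def value(a, b, n=samples):
        h = (b - a) / n
        return sum(density(a + (i + 0.5) * h) for i in range(n)) * h

    total = value(0.0, 1.0)
    lo, hi = 0.0, 1.0
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2
        # The left-piece value is monotone in the cut position,
        # so ordinary bisection converges to the halving point.
        if value(0.0, mid) < total / 2:
            lo = mid
        else:
            hi = mid
        if abs(value(0.0, (lo + hi) / 2) - total / 2) <= eps:
            break
    return (lo + hi) / 2

# Density 2x has cumulative value x^2, so the halving cut is sqrt(1/2).
cut = approx_halving_cut(lambda x: 2 * x)
```

The hardness results in the abstract show that nothing this simple can work once n agents must all be satisfied simultaneously with n cuts.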
The paper presents a systematic method for synthesizing asynchronous circuits from event-based specifications with conflicts on output signals. It describes a set of semantic-preserving transformations performed at the Petri net level, which introduce auxiliary signal transitions implemented by internally analogue components, Mutual Exclusion (ME) elements. The logic for primary outputs can therefore be realized free from hazards and external meta-stability. The technique draws upon the use of standard logic components and two-input MEs, available in a typical design library.
['Jordi Cortadella', 'Luciano Lavagno', 'Peter Vanbekbergen', 'Alex Yakovlev']
Designing asynchronous circuits from behavioural specifications with internal conflicts
21,623
In this paper, we propose a method to compose a classifier by non-linear discriminant analysis using the kernel method combined with kernel feature selection, for holistic recognition of historical hand-written strings. Through experiments using the historical hand-written string database HCD2, we show that our approach can obtain high recognition accuracy comparable to that of individual character recognition.
['Ryo Inoue', 'Hidehisa Nakayama', 'Nei Kato']
Historical Hand-Written String Recognition by Non-linear Discriminant Analysis using Kernel Feature Selection
301,157
Detecting communities that recur over time is a challenging problem due to the potential sparsity of encounter events at an individual scale and inherent uncertainty in human behavior. Existing methods for community detection in mobile human encounter networks ignore the presence of temporal patterns that lead to periodic components in the network. Daily and weekly routine are prevalent in human behavior and can serve as rich context for applications that rely on person-to-person encounters, such as mobile routing protocols and intelligent digital personal assistants. In this article, we present the design, implementation, and evaluation of an approach to decentralized periodic community detection that is robust to uncertainty and computationally efficient. This alternative approach has a novel periodicity detection method inspired by a neural synchrony measure used in the field of neurophysiology. We evaluate our approach and investigate human periodic encounter patterns using empirical datasets of inferred and direct-sensed encounters.
['Matthew Williams', 'Roger Marcus Whitaker', 'Stuart Michael Allen']
There and Back Again: Detecting Regularity in Human Encounter Communities
879,245
This paper presents an experimental functional database language Fudal which is a further development of our group's work on persistent functional database languages. In this latest work we consider how unknown or partially known information can be treated in the functional context. The language we have implemented, Fudal, includes certainty and possibility operators. We outline the problems that are caused by the use of null values and truth functional logic in conventional database languages, and show how these problems can be overcome by defining the semantics of queries of a database containing partial information in terms of its 'completions'. If D is a database containing partial information then a completion of D is a database which is consistent with D and contains no partial information. We demonstrate that, even when a database has a large number of completions, sensible queries can be constructed using certainty and possibility operators. Finally we show how these operators can be implemented and discuss the use of Fudal in practical contexts.
['David R. Sutton', 'Peter J. H. King']
Incomplete Information and the Functional Data Model
498,826
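The completion semantics above is easy to mimic outside Fudal. The sketch below is a hypothetical Python analogue (the relation, field names, and domain are invented for illustration, and this is not Fudal syntax): a query is *certain* if it holds in every completion of the partial database, and *possible* if it holds in at least one.

```python
from itertools import product

# Hypothetical mini-database: None marks unknown (partial) information.
people = [
    {"name": "ann", "dept": "sales"},
    {"name": "bob", "dept": None},  # bob's department is unknown
]
DOMAIN = ["sales", "hr"]  # finite domain for the unknown field

def completions(db):
    """Yield every database consistent with db in which all
    unknown fields are replaced by concrete domain values."""
    unknowns = [(i, "dept") for i, r in enumerate(db) if r["dept"] is None]
    for values in product(DOMAIN, repeat=len(unknowns)):
        filled = [dict(r) for r in db]
        for (i, field), v in zip(unknowns, values):
            filled[i][field] = v
        yield filled

def certainly(db, query):
    """True iff the query holds in ALL completions."""
    return all(query(c) for c in completions(db))

def possibly(db, query):
    """True iff the query holds in AT LEAST ONE completion."""
    return any(query(c) for c in completions(db))

someone_in_hr = lambda c: any(r["dept"] == "hr" for r in c)
# "someone is in hr" is possible (bob might be) but not certain.
```

Enumerating completions explicitly is exponential in the number of nulls; the point of the certainty and possibility operators is to let the language answer such queries without materializing every completion.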
Evidence indicates that requesting video clips on demand accounts for a dramatic increase in data traffic over cellular networks. Caching part of popular videos in the storage of small-cell base stations (SBS) in cellular networks is an efficient method to reduce transmission latency and mitigate redundant transmissions. In this paper, we propose a commercial caching system consisting of a video retailer (VR) and multiple network service providers (NSPs). Each NSP leases its SBSs, at some price, to the VR for the purpose of making profits, and the VR, after storing popular videos in the rented SBSs, can provide better local video services to mobile users, thereby gaining more profit. We conceive this system within the framework of a Stackelberg game by treating the SBSs as a specific type of resource. Then, we establish the profit models for both the NSPs and the VR based on stochastic geometry. We further investigate the Stackelberg equilibrium by solving the optimization problems in two cases, i.e., whether or not the VR has a budget plan for renting the SBSs. Numerical results are provided for quantifying the proposed framework by showing its efficiency in pricing and resource allocation.
['Jun Li', 'Jinshan Sun', 'Yuwen Qian', 'Feng Shu', 'Ming Xiao', 'Wei Xiang']
A Commercial Video-Caching System for Small-Cell Cellular Networks Using Game Theory
822,022
The ability to predict computer crimes has become increasingly important. The paper describes a method for discovering the preferences of computer criminals. This method involves sequential clustering based on the variance of clusters discovered in higher order clustering. These discovered preferences can be used for the direct protection of computer systems against ongoing attacks or for the construction of simulations of future attacks.
['Donald E. Brown', 'Louise E. Gunderson']
Using clustering to discover the preferences of computer criminals
26,539
This paper introduces reasoning about lawful behavior as an important computational thinking skill and provides examples from a novel introductory programming curriculum using Microsoft's Kodu Game Lab. We present an analysis of assessment data showing that rising 5th and 6th graders can understand the lawfulness of Kodu programs. We also discuss some misconceptions students may develop about Kodu, their causes, and potential remedies.
['David S. Touretzky', 'Christina Gardner-McCune', 'Ashish Aggarwal']
Teaching "Lawfulness" With Kodu
640,782
Fast Coding Unit Size Decision in HEVC Intra Coding
['Tao Fan', 'Guozhong Wang', 'Xiwu Shang']
Fast Coding Unit Size Decision in HEVC Intra Coding
834,712
Simplified broadband beamformers can be constructed by sharing a single tapped-delay-line within a narrowband subarray. This paper discusses the use of fractional delay filters to perform steering in the digital domain. For the narrowband subarrays, an optimisation approach is proposed to maintain an off-broadside look-direction constraint as well as possible across a given frequency range. We demonstrate the advantage this approach has for generating beamformers with an accurate off-broadside look direction compared to a benchmark.
['Abdullah Alshammary', 'Stephan Weiss']
Low-cost and accurate broadband beamforming based on narrowband sub-arrays
879,431
Does CPOE actually disrupt physicians-nurses communications?
['Sylvia Pelayo', 'Françoise Anceaux', 'Janine Rogalski', 'Marie-Catherine Beuscart-Zéphir']
Does CPOE actually disrupt physicians-nurses communications?
796,660
A Feasibility Study of Remote Inverse Manufacturing.
['Nozomu Mishima', 'Ooki Jun', 'Yuta Kadowaki', 'Kenta Torihara', 'Kiyoshi Hirose', 'Mitsutaka Matsumoto']
A Feasibility Study of Remote Inverse Manufacturing.
766,923
During the last 5 years, research on Human Activity Recognition (HAR) has reported on systems showing good overall recognition performance. As a consequence, HAR has been considered as a potential technology for e-health systems. Here, we propose a machine learning based HAR classifier. We also provide a full experimental description that contains the HAR wearable devices setup and a public domain dataset comprising 165,633 samples. We consider 5 activity classes, gathered from 4 subjects wearing accelerometers mounted on their waist, left thigh, right arm, and right ankle. As basic input features to our classifier we use 12 attributes derived from a time window of 150ms. Finally, the classifier uses a committee AdaBoost that combines ten Decision Trees. The observed classifier accuracy is 99.4%.
['Wallace Ugulino', 'Débora Cardador', 'Katia Vega', 'Eduardo Velloso', 'Ruy Luiz Milidiú', 'Hugo Fuks']
Wearable computing: accelerometers' data classification of body postures and movements
574,887
The Liénard equation is of high importance from both the mathematical and physical points of view. However, the question of the integrability of this equation has not yet been completely answered. Here we provide a new criterion for the integrability of the Liénard equation using an approach based on nonlocal transformations. We also obtain some of the previously known criteria for the integrability of the Liénard equation as a straightforward consequence of applying our approach. We illustrate our results with several new examples of integrable Liénard equations.
['Nikolai A. Kudryashov', 'Dmitry I. Sinelshchikov']
On the criteria for integrability of the Liénard equation
690,597
Distributed Information Systems Development: A Framework for Understanding & Managing.
['Jolita Ralyté', 'Xavier Lamielle', 'Nicolas Arni-Bloch', 'Michel Léonard']
Distributed Information Systems Development: A Framework for Understanding & Managing.
125,871
Packet switched networks achieve significant resource savings due to statistical multiplexing. In this work we explore statistical multiplexing gains in single and multi-hop networks. To this end, we analyze performance metrics such as delay bounds for a through flow comparing different results from the stochastic network calculus. We distinguish different multiplexing gains that stem from independence assumptions between flows at a single hop as well as flows at consecutive hops of a network path. Further, we show corresponding numerical results. In addition to deriving the benefits of various statistical multiplexing models on performance bounds, we contribute insights into the scaling of end-to-end delay bounds in the number of hops n of a network path under statistical independence.
['Amr Rizk', 'Markus Fidler']
Leveraging statistical multiplexing gains in single- and multi-hop networks
179,983
Hyperbolic partial differential equations (PDE) are a very powerful mathematical tool to describe complex dynamic processes in science and engineering. Very often, hyperbolic PDE can be derived directly from first principles in physics, such as the conservation of mass, momentum and energy. These principles are universally accepted to be valid and can be used for the simulation of a very wide class of different problems, ranging from astrophysics (rotating gas clouds, the merger of a binary neutron star system into a black hole and the associated generation and propagation of gravitational waves) over geophysics (generation and propagation of seismic waves after an earthquake, landslides, avalanches, tidal waves, storm surges, flooding and morphodynamics of rivers) to engineering (turbulent flows over aircraft and the associated noise generation and propagation, rotating flows in turbo-machinery, multi-phase liquid-gas flows in internal combustion engines) and computational biology (blood flow in the human cardio-vascular system). It is of course impossible to cover all of the above topics in a single special issue. However, all these apparently different applications have a common mathematical description in the form of nonlinear hyperbolic systems of partial differential equations, possibly also containing higher-order derivative terms, non-conservative products and nonlinear (potentially stiff) source terms. From the mathematical point of view, the major difficulties in these systems arise from the inherent nonlinearities and the formation of non-smooth solution features such as shock waves. The construction of robust and accurate numerical methods for this type of problem is, even after decades of successful research, an ongoing quest. This quest is fueled by recent advances in the development of novel methods with promising additional properties, such as high spatial order on unstructured meshes, algorithmic simplicity for modern multi-core architectures, and automatic mesh and/or trial-function adaptation. The final goal is to construct methods that efficiently produce reliable results for this type of problem. This special issue is dedicated to recent advances in numerical methods for such nonlinear systems of hyperbolic PDE and tries to cover a wide spectrum of different problems and numerical approaches.
['Michael Dumbser', 'Gregor Gassner', 'Christian Rohde', 'Sabine Roller']
Preface to the special issue Recent Advances in Numerical Methods for Hyperbolic Partial Differential Equations
650,135
This paper looks at a case study in the commercial procurement of an IT system to support learners on short educational courses. It compares the use case model created before the system was built with the use case model after the system was delivered. The original use case model was created through the application of a requirements pattern language designed to be employed during the procurement phase of an IT system. The final use case model was reverse engineered from the working application. The objective was to discover how accurately the original model represented the final application to provide a measure of the potential usefulness of the pattern language during procurement.
['Peter Merrick', 'Patrick D. M. Barrow']
Testing the predictive ability of a requirements pattern language
280,175
This paper presents a power electronic converter used to redistribute power among the phases in unbalanced power systems, designed according to the degree of unbalance involved. A bidirectional converter is chosen for this purpose, whose modeling is presented in the dq0 system. This solution can be considered part of a unified control system, where conventional active power filters may be solely responsible for compensating harmonics and/or the tuning of passive filters becomes easier, with a consequent reduction in the costs involved in a decentralized approach. The adopted control strategy is implemented on the digital signal processor TMS320F2812, and experimental results obtained from a prototype rated at 17.86 kVA are discussed, considering that the converter is placed at the secondary side of a transformer supplying three distinct single-phase loads. It is shown that the converter is able to balance the currents in the transformer phases, thus leading to the suppression of the neutral current.
['Aniel Silva de Morais', 'Fernando Lessa Tofoli', 'Ivo Barbi']
Modeling, Digital Control, and Implementation of a Three-Phase Four-Wire Power Converter Used as a Power Redistribution Device
706,477
This paper critically evaluates the regress argument for infinitism. The dialectic is essentially this. Peter Klein argues that only an infinitist can, without being dogmatic, enhance the credibility of a questioned non-evident proposition. In response, I demonstrate that a foundationalist can do this equally well. Furthermore, I explain how foundationalism can provide for infinite chains of justification. I conclude that the regress argument for infinitism should not convince us.
['John Turri']
On the regress argument for infinitism
186,364
This work explores the unique characteristics of flash memory in serving as a cache layer for disks. The experiments show that the proposed management scheme can save up to 20% of energy consumption while reducing the read response time by two thirds and the write response time by five sixths compared with their counterparts. The estimated lifetime of the flash-memory cache is significantly improved as well.
['Jen-Wei Hsieh', 'Tei-Wei Kuo', 'Po-Liang Wu', 'Yu-Chung Huang']
Energy-efficient and performance-enhanced disks using flash-memory cache
273,399
Real-life cyber physical systems, such as automotive vehicles, building automation systems, and groups of unmanned vehicles are monitored and controlled by networked control systems (NCS). The overall system dynamics emerges from the interaction among physical dynamics, computational dynamics, and communication networks. Network uncertainties such as time-varying delay and packet loss cause significant challenges. This paper proposes a passive control architecture for designing NCS that are insensitive to network uncertainties. We describe the architecture for a system consisting of a robotic manipulator controlled by a digital controller over a wireless network and show that the system is stable even in the presence of time-varying delays. Experimental results demonstrate the advantages of the passivity-based architecture with respect to stability and performance and show that the system is insensitive to network uncertainties.
['Nicholas Kottenstette', 'Joseph F. Hall', 'Xenofon D. Koutsoukos', 'Janos Sztipanovits', 'Panos J. Antsaklis']
Design of Networked Control Systems Using Passivity
78,140
Java 8 introduced the notion of streams, a new data structure with support for multi-core processors. When the sum method is called on a stream of floating-point numbers, the summation is computed at high speed by applying MapReduce, which distributes the computation across cores. However, since floating-point calculation incurs rounding error, a simple adaptation of this method cannot determine the result uniquely. In this study, we therefore develop a summation program that can be applied to a stream with MapReduce. Our method computes at high speed while keeping the result correctly rounded.
['Naoshi Sakamoto']
Parallel online exact sum for Java8
881,042
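Correct rounding under arbitrary partitioning is exactly what naive floating-point addition lacks: regrouping the terms changes the rounded result. As a single-threaded illustration of the underlying problem (this is Kahan compensated summation, a simpler relative of exact-sum techniques, and not the paper's algorithm):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: tracks the rounding error
    lost at each addition and feeds it back into the next one."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - comp
        t = total + y
        comp = (t - total) - y  # what was rounded away in total + y
        total = t
    return total

# Summing many small terms after one large term is a classic failure
# case for naive left-to-right addition: each +1.0 is rounded away.
data = [1e16] + [1.0] * 1000
naive = sum(data)          # loses all 1000 small contributions
compensated = kahan_sum(data)
```

A MapReduce-style sum faces the same effect per partition, which is why a reproducible parallel sum needs error-tracking machinery rather than plain `+`.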
Advances in technology have provided many tools that have inspired new instructional models. Learners and instructors experience a diverse environment where everyone can participate from anywhere in the world and share the same learning platforms. Although we already have manuals, tutorials, and MOOCs that can be useful for people who want to learn computer music languages, musical interaction is not offered by these solutions. In this paper we present an instructional model for computer music and live coding based on a cooperative live coding environment where participants can teach and learn through distributed pair programming. We also discuss the fundamental ideas and the tool used in this work during the first experiments.
['Antonio Deusany de Carvalho']
Cooperative live coding as an instructional model
582,760
Current best practice for identifying malicious activity in a network is to deploy network intrusion detection systems. Anomaly detection approaches hold out more promise, as they can detect new types of intrusions because these new intrusions, by assumption, will deviate from "normal" behavior. But these methods generally suffer from several major drawbacks: computing the anomaly model itself is a time-consuming and processor-heavy task. To address these limitations, we propose a mobile-agent-based model for intrusion detection, called MAFIDS, which includes new metrics derived from emergent indicators of agent synergy and a proposed event correlation engine. We detail the implementation of our model, showing its capability to detect the SYN flooding attack in a short time with a lower false alarm rate, by comparing it to SNORT.
['Farah Barika Katata', 'Nabil El Kadhi', 'Khaled Ghedira']
Distributed Agent Architecture for Intrusion Detection Based on New Metrics
172,779
Sfinks is a shift-register-based stream cipher designed for hardware implementation and submitted to the eSTREAM project. In this paper, we analyse the initialisation process of Sfinks. We demonstrate a slid property of the loaded state of the Sfinks cipher, whereby multiple key-IV pairs may produce phase-shifted keystream sequences. The state update functions of both the initialisation process and keystream generation, as well as the padding pattern, affect the generation of slid pairs.
['Ali Alhamdan', 'Harry Bartlett', 'Ed Dawson', 'Leonie Simpson', 'Kenneth Koon-Ho Wong']
Slide attacks on the Sfinks stream cipher
664,189
The multiple signal classification (MUSIC) algorithm based on spatial time-frequency distribution (STFD) has been investigated for direction of arrival (DOA) estimation of closely-spaced sources. However, the limitations of the bilinear time-frequency based MUSIC (TF-MUSIC) algorithm lie in that it suffers from heavy implementation complexity, and its performance strongly depends on appropriate selection of auto-term locations of the sources in the time-frequency (TF) domain for the formulation of a group of STFD matrices, which is practically difficult, especially when the sources are spectrally overlapped. In order to relax these limitations, this paper aims to develop a novel DOA estimation algorithm. Specifically, we build a MUSIC algorithm based on the spatial short-time Fourier transform (STFT), which effectively reduces implementation cost. More importantly, we propose an efficient method to precisely select single-source auto-term locations for constructing the STFD matrices of each source. In addition to low complexity, the main advantage of the proposed STFT-MUSIC algorithm compared to some existing ones is that it can better deal with closely-spaced sources whose spectral contents are highly overlapped in the TF domain. Highlights: A short-time Fourier transform (STFT) based MUSIC algorithm is proposed. The proposed STFT-MUSIC algorithm has a low implementation complexity. A selection method of single-source time-frequency (TF) points in the case of a complex-valued mixing matrix is proposed. The STFT-MUSIC algorithm can be implemented with a small number of sensors in underdetermined cases. The STFT-MUSIC algorithm is especially suitable for closely-spaced and spectrally-overlapped sources.
['Haijian Zhang', 'Guoan Bi', 'Yunlong Cai', 'Sirajudeen Gulam Razul', 'Chong Meng Samson See']
DOA estimation of closely-spaced and spectrally-overlapped sources using a STFT-based MUSIC algorithm
649,486
Joint Spatial-Depth Feature Pooling for RGB-D Object Classification
['Hong Pan', 'Søren Ingvor Olsen', 'Yaping Zhu']
Joint Spatial-Depth Feature Pooling for RGB-D Object Classification
631,449
Some Randomness Experiments on TRIVIUM.
['Subhabrata Samajder', 'Palash Sarkar']
Some Randomness Experiments on TRIVIUM.
845,595
It is clear that writing software for parallel architectures is a non-trivial process. This has encouraged much research in an effort to provide tools to assist parallel software development. However, while these tools may cater for architecture-specific problems, they do little for the concept of parallel software engineering, as the end product is usually neither scalable nor portable. The introduction of a level of abstraction in the expression of parallel algorithms can elevate the reasoning process above architectural constraints and assist the production of more flexible code. This paper outlines an object-oriented parallel algorithm development paradigm based on a task and channel notation, and examines the utilisation of Java™ technologies in the development of a distributed Java™ virtual machine architecture on which algorithms expressed in this notation may be executed dynamically.
['Paul Sage', 'Peter Milligan', 'Ahmed Bouridane']
Dynamic code management on a Java multicomputer
536,157
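The task-and-channel notation referenced above maps naturally onto concurrent tasks connected by message queues. A minimal Python sketch of the pattern (illustrative only; the paper targets a distributed Java virtual machine, and the task and channel names here are invented):

```python
import queue
import threading

def producer(out_ch, n):
    """Task that sends n squares down its outgoing channel."""
    for i in range(n):
        out_ch.put(i * i)
    out_ch.put(None)  # sentinel marking the end of the channel

def consumer(in_ch, results):
    """Task that accumulates values from its incoming channel."""
    while (item := in_ch.get()) is not None:
        results.append(item)

channel = queue.Queue()  # the channel connecting the two tasks
collected = []
tasks = [
    threading.Thread(target=producer, args=(channel, 5)),
    threading.Thread(target=consumer, args=(channel, collected)),
]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
# collected now holds [0, 1, 4, 9, 16]
```

Because each task touches shared state only through its channel, the same two task bodies could in principle be placed on different machines with the queue replaced by a network transport, which is the kind of flexibility the abstraction is meant to buy.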
Chronic Obstructive Pulmonary Disease (COPD) and asthma each represent a large proportion of the global disease burden; COPD is the third leading cause of death worldwide and asthma is one of the most prevalent chronic diseases, afflicting over 300 million people. Much of this burden is concentrated in the developing world, where patients lack access to physicians trained in the diagnosis of pulmonary disease. As a result, these patients experience high rates of underdiagnosis and misdiagnosis. To address this need, we present a mobile platform capable of screening for Asthma and COPD. Our solution is based on a mobile smart phone and consists of an electronic stethoscope, a peak flow meter application, and a patient questionnaire. This data is combined with a machine learning algorithm to identify patients with asthma and COPD. To test and validate the design, we collected data from 119 healthy and sick participants using our custom mobile application and ran the analysis on a PC computer. For comparison, all subjects were examined by an experienced pulmonologist using a full pulmonary testing laboratory. Employing a two-stage logistic regression model, our algorithms were first able to identify patients with either asthma or COPD from the general population, yielding an ROC curve with an AUC of 0.95. Then, after identifying these patients, our algorithm was able to distinguish between patients with asthma and patients with COPD, yielding an ROC curve with AUC of 0.97. This work represents an important milestone towards creating a self-contained mobile phone-based platform that can be used for screening and diagnosis of pulmonary disease in many parts of the world.
['Daniel Chamberlain', 'Rahul Kodgule', 'Richard Ribon Fletcher']
A mobile platform for automated screening of asthma and chronic obstructive pulmonary disease
909,967
Track-and-hold (TH) circuits in the front end of high-speed high-resolution analog-to-digital converters (ADCs) typically limit ADC performance at high input signal frequencies. This paper develops mathematical models for THs implemented in both bipolar and MOS technologies. The models are derived by analyzing the sampling instant error and reveal that the nonlinear behavior is dependent on the input signal and its derivatives. A digital post compensation method is then presented with its coefficients estimated using an energy-free method in a background calibration configuration. Simulation results on a nonlinear TH model show that the proposed method achieves a significant improvement in the spurious free dynamic range (SFDR). The method is also applied to a commercially available ADC to demonstrate its effectiveness.
['Kun Shi', 'Arthur J. Redfern']
Digital compensation of sampling instant errors in the track-and-hold portion of an ADC
434,501
A humanoid robot that can go up and down stairs, crawl underneath obstacles or simply walk around requires reliable perceptual capabilities for obtaining accurate and useful information about its surroundings. In this work we present a system for generating three-dimensional (3D) environment maps from data taken by stereo vision. At the core is a method for precise segmentation of range data into planar segments based on the algorithm of scan-line grouping extended to cope with the noise dynamics of stereo vision. In off-line experiments we demonstrate that our extensions achieve a more precise segmentation. When compared to a previously developed patch-let method, we obtain a richer segmentation with a higher accuracy while also requiring far less computations. From the obtained segmentation we then build a 3D environment map using occupancy grid and floor height maps. The resulting representation classifies areas into one of six different types while also providing object height information. We apply our perception method for the navigation of the humanoid robot QRIO and present experiments of the robot stepping through narrow space, walking up and down stairs and crawling underneath a table.
['Jens-Steffen Gutmann', 'Masaki Fukuchi', 'Masahiro Fujita']
3D Perception and Environment Map Generation for Humanoid Robot Navigation
452,441
Stochastic process algebras combine a high-level system description in terms of interacting components, with a rigorous low-level mathematical model in terms of a stochastic process. These have proved to be valuable modelling formalisms, particularly in the areas of performance modelling and systems biology. However, they do suffer from the problem of state space explosion. Currently, the underlying stochastic process is generally derived via the small step operational semantics of the process algebra and relies on a syntactical representation of the states of the process. In this paper, we propose a numerical representation schema based on a counting abstraction. This automatically detects symmetries within the state space based on replicated components, and produces a compact state space. Moreover, as we demonstrate, it is amenable to other interpretations and thus other forms of computational analysis, enriching the set of qualitative and quantitative measures that can be derived from a model.
['Jie Ding', 'Jane Hillston']
Numerically Representing Stochastic Process Algebra Models
198,845
Communication between processors has long been the bottleneck of distributed network computing. However, recent progress in switch-based high-speed local area networks (LANs) may be changing this situation. Asynchronous transfer mode (ATM) is one of the most widely-accepted and emerging high-speed network standards which can potentially satisfy the communication needs of distributed network computing. We investigate distributed network computing over local ATM networks. We first study the performance characteristics involving end-to-end communication in an environment that includes several types of workstations interconnected via a Fore Systems' ASX-100 ATM switch. We then compare the communication performance of four different application programming interfaces (APIs). The four APIs were Fore Systems' ATM API, the BSD socket programming interface, Sun's remote procedure call (RPC), and the parallel virtual machine (PVM) message passing library. Each API represents distributed programming at a different communication protocol layer. We evaluated two popular distributed applications, parallel matrix multiplication and parallel partial differential equations, over the local ATM network. The experimental results show that network computing is promising over local ATM networks, provided that the higher level protocols, device drivers, and network interfaces are improved.
['Mengjou Lin', 'Jenwei Hsieh', 'David Hung-Chang Du', 'Joseph P. Thomas', 'James MacDonald']
Distributed network computing over local ATM networks
107,681