abstract | authors | title | __index_level_0__
---|---|---|---|
In this paper we introduce organizations and roles in Shoham and Tennenholtz' artificial social systems, using a normative system. We model how real agents determine the behavior of organizations by playing roles in the organization, and how the organization controls the behavior of agents playing a role in it. We consider the design of an organization in terms of roles and the assignment of agents to roles, and the evolution of organizations. We do not present a complete formalization of the computational problems, but we illustrate our approach by examples. | ['Guido Boella', 'Leendert W. N. van der Torre'] | Organizations in artificial social systems | 855,367 |
A novel system has been developed to acquire digital multispectral ultraviolet (UV) induced visible fluorescence images of paintings. We present here the image processing needed to understand and further process the acquired multispectral UV fluorescence images. | ['Anna Pelagotti', 'Luca Pezzati', 'Alessandro Piva', 'A. Del Mastio'] | Multispectral UV fluorescence analysis of painted surfaces | 541,748 |
Convolutional Neural Networks (CNNs) have been widely adopted for many imaging applications. For image aesthetics prediction, state-of-the-art algorithms train CNNs on a recently-published large-scale dataset, AVA. However, the distribution of the aesthetic scores on this dataset is extremely unbalanced, which limits the prediction capability of existing methods. We overcome such limitation by using weighted CNNs. We train a regression model that improves the prediction accuracy of the aesthetic scores over state-of-the-art algorithms. In addition, we propose a novel histogram prediction model that not only predicts the aesthetic score, but also estimates the difficulty of performing aesthetics assessment for an input image. We further show an image enhancement application where we obtain an aesthetically pleasing crop of an input image using our regression model. | ['Bin Jin', 'Maria V. Ortiz Segovia', 'Sabine Süsstrunk'] | Image aesthetic predictors based on weighted CNNs | 878,345 |
A data-aided feedforward algorithm has been proposed by Kuo and Fitz (see IEEE Trans. Commun., vol.45, p.1412-26, 1997) for carrier frequency estimation in M-ary phase-shift keying (PSK) transmissions over frequency-flat Rayleigh fading channels. Its accuracy is very good but the estimation range may be limited under certain operating conditions. Also, its application requires a knowledge of the Doppler bandwidth. We show that the estimation range can be greatly extended without sacrificing the estimation accuracy and a simple technique is indicated to measure the Doppler bandwidth. This allows the algorithm to operate in an adaptive manner in a time-varying environment. | ['Michele Morelli', 'Umberto Mengali', 'Giorgio Matteo Vitetta'] | Further results in carrier frequency estimation for transmissions over flat fading channels | 80,432 |
The medium, the message and the memory | ['Sue Greener'] | The medium, the message and the memory | 726,792 |
It is shown that fine motion plans in the LMT framework developed by T. Lozano-Perez, M. Mason and R. Taylor (1984) are computable, and an algorithm for computing them by reducing fine motion planning to an algebraic decision problem is presented. Fine-motion planning involves planning a successful motion of a robot at the fine scale of assembly operations, where control and sensor uncertainty are significant. It is shown that, as long as the envelope of trajectories generated by the control system can be described algebraically, there is an effective procedure for deciding if a successful n-step plan exists. The proposed method makes use of recognizable sets as subgoals for multistep planning. These sets are finitely parameterizable, and it is shown that they are the only sets that need be considered as subgoals. Unfortunately, if the full generality of the LMT framework is used, finding a fine-motion plan can take time double exponential in the number of plan steps. | ['John F. Canny'] | On computability of fine motion plans | 11,530 |
The SEBS model (the Surface Energy Balance System), based on the land surface energy balance equation, is used to estimate sensible heat flux and latent heat flux from remotely sensed data. In this paper, the SEBS model is validated with two sets of data collected in two field experiments: on a winter-wheat field in Shunyi county of Beijing (116°26'E–117°E; 40°N–40°21'N) and on bare soil in Changping county of Beijing (116°26'E–116°28'E; 40°10'N–40°12'N), China. Sensible and latent heat fluxes measured by the eddy correlation method are compared with the SEBS estimates. The results show: (1) the diurnal variation of sensible and latent heat flux estimated by SEBS basically agreed with that measured by the eddy correlation method both on the winter-wheat field and on bare soil, but the performance on the winter-wheat field is better than on bare soil, and the performance for sensible flux is better than that for latent flux. (2) Both on the winter-wheat field and on bare soil, the precision of the sensible heat flux estimated by SEBS is higher than that of the latent heat flux, while the SEBS model performs better on the winter-wheat field than on bare soil. (3) The sensitivity of SEBS to each parameter differs: the SEBS model is most sensitive to the available energy, up to 0.3, while it is also sensitive to the surface-air temperature difference and the aerodynamic resistance, up to 0.1 and 0.09 respectively. | ['Defa Mao', 'Shaomin Liu', 'Jiemin Wang', 'Zhongbo Su', 'Shiqi Yang', 'Xuehong Zhang'] | Validation of the SEBS model | 539,394 |
Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's Equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU. | ['Tareq M. Malas', 'Julian Hornich', 'Georg Hager', 'Hatem Ltaief', 'Christoph Pflaum', 'David E. Keyes'] | Optimization of an Electromagnetics Code with Multicore Wavefront Diamond Blocking and Multi-dimensional Intra-Tile Parallelization | 570,007 |
A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which the object, such as a graph, can be described or observed. From observations that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but that more general properties, such as the causal likeliness of a graph as a measure (as opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on these integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure's range of applications and demonstrating the weaknesses of computable measures of complexity. | ['Hector Zenil', 'Narsis A. Kiani'] | Low Algorithmic Complexity Entropy-deceiving Graphs | 880,202 |
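As a toy illustration of the description-dependence argument in the row above: two lossless descriptions of the same small graph can carry quite different Shannon Entropy values. The graph, the two descriptions and the numbers below are illustrative choices of ours, not the authors' constructions; a minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits per symbol) of the empirical distribution of seq."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A star graph K_{1,4}: node 0 joined to nodes 1..4.
edges = {(0, 1), (0, 2), (0, 3), (0, 4)}
nodes = range(5)

# Description 1: the upper triangle of the adjacency matrix as a bit sequence.
bits = [1 if (i, j) in edges else 0 for i in nodes for j in nodes if i < j]

# Description 2: the degree sequence, which reconstructs this particular
# graph (up to isomorphism) just as losslessly.
degrees = [sum(1 for e in edges if v in e) for v in nodes]

print(shannon_entropy(bits))     # ~0.971 bits: 4 ones among 10 entries
print(shannon_entropy(degrees))  # ~0.722 bits: one 4 and four 1s
```

Same graph, two faithful descriptions, two Entropy values: the measure depends on the chosen encoding, which is the paper's point.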
The hierarchical Dirichlet process (HDP) is a Bayesian nonparametric model that can be used to model mixed-membership data with a potentially infinite number of components. It has been applied widely in probabilistic topic modeling, where the data are documents and the components are distributions of terms that reflect recurring patterns (or “topics”) in the collection. Given a document collection, posterior inference is used to determine the number of topics needed and to characterize their distributions. One limitation of HDP analysis is that existing posterior inference algorithms require multiple passes through all the data—these algorithms are intractable for very large scale applications. We propose an online variational inference algorithm for the HDP, an algorithm that is easily applicable to massive and streaming data. Our algorithm is significantly faster than traditional inference algorithms for the HDP, and lets us analyze much larger data sets. We illustrate the approach on two large collections of text, showing improved performance over online LDA, the finite counterpart to the HDP topic model. | ['Chong Wang', 'John William Paisley', 'David M. Blei'] | Online Variational Inference for the Hierarchical Dirichlet Process | 565,325 |
Modern battlefields are characterized by increasing deployment of ad hoc communications among allied entities. These networks can be seen as a complex multi-layer ad hoc network, where each layer may be an independently acting soldiers' group, a group of drones, helicopters, vehicles and so on. Building a backbone network for these environments, which will guarantee efficient communication among all nodes (i.e., network-wide broadcasting) is of fundamental significance for the dissemination of information. In this article we generalize the concept of connected dominating sets for multi-layer networks and use them as the network backbone. We propose efficient methods to identify nodes that are efficient cross-layer spreaders along with a distributed algorithm to build the connected dominating set. Due to the lack of competing methods in the literature, we compare the proposed methods against some baseline methods and investigate the performance of all algorithms for a variety of multi-layer network topologies, illustrating their advantages and disadvantages; the result of the evaluation identifies the clPCI method of recognizing efficient cross-layer spreaders as the champion method. | ['Dimitrios Papakostas', 'Pavlos Basaras', 'Dimitrios Katsaros', 'Leandros Tassiulas'] | Backbone formation in military multi-layer ad hoc networks using complex network concepts | 965,325 |
The tactile detectability of sinusoidal and square-wave virtual texture gratings was measured and analyzed. Using a three-interval one-up three-down adaptive tracking procedure, detection thresholds for virtual gratings were estimated using a custom-designed high position-resolution 3-degrees-of-freedom force-feedback haptic device. Two types of gratings were used, defined by sinusoidal and square waveforms, with spatial wavelengths of 0.2 to 25.6 mm. The results indicated that the participants demonstrated a higher sensitivity (i.e., lower detection threshold) to square-wave gratings than to sinusoidal ones at all the wavelengths tested. When the square-wave gratings were represented by the explicative Fourier series, it became apparent that the detectability of the square-wave gratings could be determined by that of the sinusoidal gratings at the corresponding fundamental frequencies. This was true for any square-wave grating as long as the detection threshold for the fundamental component was below those of the harmonic components. | ['Steven A. Cholewiak', 'Hong Z. Tan'] | Frequency Analysis of the Detectability of Virtual Haptic Gratings | 34,841 |
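The Fourier-series explanation above can be checked numerically: the fundamental component of a unit-amplitude square wave has amplitude 4/π ≈ 1.27, consistent with square-wave gratings being easier to detect than sinusoidal gratings of the same amplitude. A minimal sketch of that calculation (our own, arbitrary units):

```python
import numpy as np

wavelength = 1.0                                        # spatial wavelength, arbitrary units
x = np.linspace(0, wavelength, 4096, endpoint=False)
square = np.sign(np.sin(2 * np.pi * x / wavelength))    # unit-amplitude square wave

# Amplitude of the fundamental component from the first DFT coefficient.
c1 = 2 * np.abs(np.fft.rfft(square)[1]) / len(x)
print(c1, 4 / np.pi)   # both ~1.273: fundamental is 4/pi times the step amplitude
```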
A jury of experts is often convened to decide between two states of Nature relevant to a managerial decision. For example, a legal jury decides between “innocent” and “guilty”, while an economic jury decides between “high” and “low” growth when there is an investment decision. Usually the jurors vary in their abilities to determine the actual state. When the jurors make their collective decision by sequential majority voting, the order of voting in terms of juror ability can affect the optimal probability Q of reaching a correct verdict. We show that when the jury has size three, Q is maximized if the juror of median ability votes first. When voting in this order, sequential voting can close more than 50% of the gap (in terms of Q) between simultaneous voting and the verdict that would be reached without voting if the jurors’ private information were made public. Our results have implications for larger juries, where we answer an age-old question by showing that voting by seniority (decreasing ability order) is significantly better than by anti-seniority (increasing ability order). | ['Steve Alpern', 'Bo Chen'] | The importance of voting order for jury decisions by sequential majority voting | 632,362 |
Relativity and Contrast Enhancement. | ['Amir Kolaman', 'Amir Egozi', 'Hugo Guterman', 'B. L. Coleman'] | Relativity and Contrast Enhancement. | 736,636 |
This paper describes the participation of the UNIBA team in Task 13 of SemEval-2015 on Multilingual All-Words Sense Disambiguation and Entity Linking. We propose an algorithm able to disambiguate both word senses and named entities by combining the simple Lesk approach with information coming from both a distributional semantic model and the usage frequency of meanings. The results for both English and Italian show satisfactory performance. | ['Pierpaolo Basile', 'Annalina Caputo', 'Giovanni Semeraro'] | UNIBA: Combining Distributional Semantic Models and Sense Distribution for Multilingual All-Words Sense Disambiguation and Entity Linking | 444,878 |
The economic crisis and its consequences require new solutions for the media industry, especially Internet-oriented ones. This paper presents models for the implementation of social media by Romanian media companies as a strategic objective for their journalistic products. The paper focuses on the manner and solutions chosen by the media companies to implement social media and user-generated content with the purpose of drawing an online audience. Management methods and practices have evolved differently through the implementation of the new media platforms; accordingly, the solutions picked by the Romanian media companies differ. This paper gives a perspective on the implementation of social media in media companies. | ['Georgeta Drulâ'] | Strategy of social media in the media companies | 14,211 |
Optimal feature distribution and feature selection are of paramount importance for reliable fault diagnosis in induction motors. This paper proposes a hybrid feature selection model with a novel discriminant feature distribution analysis-based feature evaluation method. The hybrid feature selection employs a genetic algorithm- (GA-) based filter analysis to select optimal features and a k-NN average classification accuracy-based wrapper analysis approach that selects the most optimal features. The proposed feature selection model is applied through an offline process, where a high-dimensional hybrid feature vector is extracted from acquired acoustic emission (AE) signals, which represents a discriminative fault signature. The feature selection determines the optimal features for different types and sizes of single and combined bearing faults under different speed conditions. The effectiveness of the proposed feature selection scheme is verified through an online process that diagnoses faults in an unknown AE fault signal by extracting only the selected features and using the k-NN classification algorithm to classify the fault condition manifested in the unknown signal. The classification performance of the proposed approach is compared with those of existing state-of-the-art average distance-based approaches. Our experimental results indicate that the proposed approach outperforms the existing methods with regard to classification accuracy. | ['Rashedul Islam', 'Sheraz Ali Khan', 'Jong-Myon Kim'] | Discriminant Feature Distribution Analysis-Based Hybrid Feature Selection for Online Bearing Fault Diagnosis in Induction Motors | 594,777 |
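A minimal sketch of just the wrapper stage described above: rank candidate feature subsets by k-NN cross-validated accuracy. Synthetic features stand in for the paper's AE-derived ones, and the candidate pool is assumed to be the output of the GA-based filter stage, which is not reproduced here:

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the AE-derived feature matrix (not the paper's data).
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

def knn_cv_accuracy(feature_idx):
    """Wrapper criterion: mean k-NN accuracy over 5-fold cross-validation."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, list(feature_idx)], y, cv=5).mean()

# Pretend the GA filter stage already narrowed the pool to these candidates.
candidate_pool = [0, 2, 3, 5, 7, 9]

# Exhaustively score all 3-feature subsets of the pool and keep the best.
best = max(combinations(candidate_pool, 3), key=knn_cv_accuracy)
print("selected features:", best, "accuracy:", round(knn_cv_accuracy(best), 3))
```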
Nondeterministic soliton automata with a single external vertex. | ['Miklós Krész'] | Nondeterministic soliton automata with a single external vertex. | 800,547 |
Wireless Sensor Networks (WSNs) technologies have been successfully applied to a great variety of outdoor scenarios but, in practical terms, little effort has been applied to indoor environments, and even less in the field of industrial applications. This paper presents an intelligent hybrid WSN application for an indoor and industrial scenario, with the aim of improving and increasing the levels of human safety and avoiding denial-of-service (DoS) attacks. Since it operates over a wireless network, an adversary can always perform a DoS attack by jamming the radio channel with a strong signal. The main contribution of our work is to use a hybrid approach that handles the problem of jamming. The proposed solution improves security by protecting against DoS and returns near-optimal solutions. The paper shows the viability of our approach in terms of performance, scalability, modularity and safety. | ['Nejla Rouissi', 'Hamza Gharsellaoui', 'Sadok Bouamama'] | A hybrid DS-FH-THSS approach anti-jamming in Wireless Sensor Networks | 846,352 |
For a physical system whose operating state is monitored by various sensors, one of the crucial steps involved in fault monitoring and diagnosis process is to validate the sensor values. A sensor value can be validated by observing redundant measurement values. When numerous sensors are installed at different locations in a system and if there exist certain relationships among the measured parameters, the redundancies of the sensors can be viewed as embedded throughout the system. In this paper, a technique is proposed that can systematically explore such embedded redundancies of the sensors in a system and utilize them in quickly validating sensor values. The technique is based on causal relations and their interrelations within sensor redundancy graphs (SRG's) as defined in this paper. Any sensor in an SRG can potentially benefit from any other sensor involved in the same SRG in validation. A validity level is defined and used to express the strength of the validity of a sensor value as supported by varying degrees of evidence. The validation results also yield valuable clues to the systems' fault diagnosis knowledge-based systems on the occurrences of system faults and their locations. | ['S. C. Lee'] | Sensor value validation based on systematic exploration of the sensor redundancy for fault diagnosis KBS | 291,577 |
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications. | ['Jayashree Kalpathy-Cramer', 'Musaddiq J. Awan', 'Steven Bedrick', 'Coen Rasch', 'David I. Rosenthal', 'Clifton D. Fuller'] | Development of a Software for Quantitative Evaluation Radiotherapy Target and Organ-at-Risk Segmentation Comparison | 222,926 |
In the evolution of Generative and Developmental Systems (GDS), the choice of where along the ontogenic trajectory to stop development in order to measure fitness can have a profound effect upon the emergent solutions. After illustrating the complexities of ontogenic fitness trajectories, we introduce a GDS encoding without an a priori fixed developmental duration, which instead slowly increases the duration over the course of evolution. Applied to a soft robotic locomotion task, we demonstrate how this approach can not only retain the well known advantages of developmental encodings, but also be more efficient and arrive at more parsimonious solutions than approaches with static developmental time frames. | ['John Rieffel'] | Heterochronic scaling of developmental durations in evolved soft robots | 183,404 |
We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links. If the relay selection is based on link quality measurements, the number of links to update grows quadratically with the number of nodes, and measurements need to be updated frequently when nodes are mobile. In this paper, we consider a location-based relay selection scheme where link qualities are estimated from node positions; in the scenario of a node-based location system such as GPS, the location-based approach reduces signaling overhead, which in this case only grows linearly with the number of nodes. This paper studies these two relay selection approaches and investigates how they are affected with varying information update interval, node mobility, location inaccuracy, and inaccurate propagation model parameters. Our results show that location-based relay selection performs better than SNR-based relay selection at typical levels of location error when medium-scale fading can be neglected or accurately predicted. | ['Jimmy Jessen Nielsen', 'Tatiana Kozlova Madsen', 'Hans-Peter Schwefel'] | On the benefits of location-based relay selection in mobile wireless networks | 574,261 |
Background: Array-based comparative genomic hybridization (CGH) is a commonly-used approach to detect DNA copy number variation in whole genome-wide screens. Several statistical methods have been proposed to define genomic segments with different copy numbers in cancer tumors. However, most tumors are heterogeneous and show variation in DNA copy numbers across tumor cells. The challenge is to reveal the copy number profiles of the subpopulations in a tumor and to estimate the percentage of each subpopulation. | ['Kai Wang', 'Jian Li', 'Shengting Li', 'Lars Bolund', 'Carsten Wiuf'] | Estimation of tumor heterogeneity using CGH array data | 485,418 |
Web Accessibility for Visually Impaired People: Requirements and Design Issues. | ['Mexhid Ferati', 'Bahtijar Vogel', 'Arianit Kurti', 'Bujar Raufi', 'David Salvador Astals'] | Web Accessibility for Visually Impaired People: Requirements and Design Issues. | 989,152 |
The proliferation of electronic health records, driven by advances in technology and legislative measures, is stimulating interest in the analysis of passively collected administrative and clinical data. Observational data present exciting challenges and opportunities to researchers interested in comparing the effectiveness of different treatment regimes and, as personalized medicine requires, estimating how effectiveness varies among subgroups. In this study, we provide new motivation for the local control approach to the analysis of large observational datasets in which patients are first clustered in pretreatment covariate space and treatment comparisons are made within subgroups of similar patients. The motivation for such an analysis is that the resulting local treatment effect estimates make inherently fair comparisons even when treatment cohorts suffer variation in balance (treatment choice fraction) across pretreatment covariate space. We use an example of Simpson's paradox to show that estimates of the overall average treatment effect, which marginalize over covariate space, can be misleading. Thus, we provide an alternative definition that uses a single, shared marginal distribution to define overall treatment comparisons that are inherently fair given the observed covariates. However, we also argue that overall treatment comparisons should no longer be the focus of comparative effectiveness research; the possibility that treatment effectiveness does vary across patient subpopulations must not be left unexplored. In the spirit of the now ubiquitous concept of personalized medicine, estimating heterogeneous treatment effects in clinically relevant subgroups will allow for, within the limits of the available data, fair treatment comparisons that are more relevant to individual patients. | ['Kenneth K. Lopiano', 'Robert L. Obenchain', 'S. Stanley Young'] | Fair treatment comparisons in observational research | 126,201 |
A commodity is shared between some individuals: there is an initial allocation; some selection procedure is used to choose an alternative allocation; and individuals decide between keeping the initial allocation or shifting to the alternative allocation. The selection procedures are supposed to involve an element of randomness in order to reflect uncertainty about economic, social and political processes. It is shown that for every allocation x there exists a number μ(x) ∈ [0,1] such that, if the number of individuals tends to infinity, then the probability that a proportion of the population smaller (resp. larger) than μ(x) prefers an allocation chosen by the selection procedure converges to 1 (resp. 0). The index μ(x) yields a complete order on the set of Pareto-optimal allocations. Illustrations and interpretations of the selection procedures are provided. | ['Mich Tvede', 'Hervé Crès'] | Ordering Pareto-Optima through Majority Voting | 622,494 |
This paper presents a scheme to combine memory and power management for achieving better energy reduction. Our method periodically adjusts the size of physical memory and the timeout value to shut down a hard disk for reducing the average power consumption. We use Pareto distributions to model the distributions of idle time. The parameters of the distributions are adjusted at run-time for calculating the corresponding timeout value of the disk power management. The memory size is changed based on the inclusion property to predict the number of disk accesses at different memory sizes. Experimental results show more than 50% energy savings compared to a 2-competitive fixed-timeout method. | ['Le Cai', 'Yung-Hsiang Lu'] | Joint Power Management of Memory and Disk | 389,999 |
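One half of the scheme above, choosing a disk timeout from a Pareto model of idle times, can be sketched in a few lines: fit the Pareto parameters by maximum likelihood, then pick the timeout minimizing expected energy under a simple power model. The power numbers, the energy bookkeeping and the grid search are our own illustrative assumptions; only the fit-and-adjust idea comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for measured disk idle periods (seconds): classical Pareto,
# scale 1 s, shape 2.5. At run time these would come from traces.
idle = (rng.pareto(2.5, 5000) + 1.0) * 1.0

# Maximum-likelihood Pareto fit: x_m = min(x), alpha = n / sum(ln(x / x_m)).
x_m = idle.min()
alpha = len(idle) / np.log(idle / x_m).sum()

# Illustrative power model (assumed numbers, not taken from the paper).
P_ACTIVE, P_SLEEP, E_WAKE = 2.0, 0.2, 6.0      # W, W, J

def expected_energy(timeout, samples):
    """Mean energy per idle period when spinning down after `timeout` seconds."""
    spin = np.minimum(samples, timeout) * P_ACTIVE
    sleep = np.maximum(samples - timeout, 0.0) * P_SLEEP
    wake = np.where(samples > timeout, E_WAKE, 0.0)
    return float((spin + sleep + wake).mean())

# Evaluate candidate timeouts against draws from the fitted model.
model = (rng.pareto(alpha, 100_000) + 1.0) * x_m
timeouts = np.linspace(0.0, 20.0, 201)
best = min(timeouts, key=lambda t: expected_energy(t, model))
print(f"fitted alpha={alpha:.2f}, x_m={x_m:.2f}, chosen timeout={best:.1f} s")
```

Re-running the fit periodically, as the abstract describes, would let the timeout track a changing workload.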
Since its demise in 1990, the technological-industrial policy for manufacturing computers in Brazil during the 1970s and 1980s has generally been regarded as a capital sin. People see the policy as having emerged from a spurious alliance between leftist and nationalist academics, bureaucrats, and the military and believe it provided nothing other than business opportunities for shrewd entrepreneurs. This article departs from this simplistic view and provides a new perspective on Brazil's computer market reserve policy. | ['Ivan da Costa Marques'] | Brazil's Computer Market Reserve: Democracy, Authoritarianism, and Ruptures | 557,669 |
The Internet is a cooperative and decentralized network built out of millions of participants that store and share large amounts of information with other users. Peer-to-peer systems go hand-in-hand with this huge decentralized network, where each individual node can serve content as well as request it. In this scenario, the analysis, development and testing of distributed search algorithms is a key research avenue. In particular, thematic search algorithms should lead to and benefit from the emergence of semantic communities that are the result of the interaction among participants. As a result, intelligent algorithms for neighbor selection should give rise to a logical network topology reflecting efficient communication patterns. This paper presents a series of algorithms which are specifically aimed at reducing the propagation of queries in the network, by applying a novel approach for learning peers’ interests. These algorithms were constructed in an incremental way so that each new algorithm presents some improvements over the previous ones. Several simulations were completed to analyze the connectivity and query propagation patterns of the emergent logical networks. The results indicate that the algorithms with better behavior are those that induce greater collaboration among peers. | ['Ana Lucía Nicolini', 'Carlos Martín Lorenzetti', 'Ana Gabriela Maguitman', 'Carlos Iván Chesñevar'] | Intelligent algorithms for improving communication patterns in thematic P2P search | 954,957 |
We define a framework for static optimization of sliding window conjunctive queries over infinite streams. When computational resources are sufficient, we propose that the goal of optimization should be to find an execution plan that minimizes resource usage within the available resource constraints. When resources are insufficient, on the other hand, we propose that the goal should be to find an execution plan that sheds some of the input load (by randomly dropping tuples) to keep resource usage within bounds while maximizing the output rate. An intuitive approach to load shedding suggests starting with the plan that would be optimal if resources were sufficient and adding "drop boxes" to this plan. We find this to be often times suboptimal - in many instances the optimal partial answer plan results from adding drop boxes to plans that are not optimal in the unlimited resource case. In view of this, we use our framework to investigate an approach to optimization that unifies the placement of drop boxes and the choice of the query plan from which to drop tuples. The effectiveness of our optimizer is experimentally validated and the results show the promise of this approach. | ['Ahmed M. Ayad', 'Jeffrey F. Naughton'] | Static optimization of conjunctive queries with sliding windows over infinite streams | 460,731 |
Abnormal IDDQ (quiescent VDD supply current) indicates the existence of physical damage in a circuit. Using this phenomenon, a CAD-based fault diagnosis technology has been developed to enhance the manufacturing yield of logic LSI. The method detects the fatal defect fragments among several abnormalities identified with wafer inspection apparatus; it includes a way to separate various leakage faults and to define a diagnosis area encircling the abnormal portions. The proposed technique progressively narrows the faulty area by using logic simulation to extract the logic states of the diagnosis area, and by locating test vectors related to abnormal IDDQ. The fundamental diagnosis approach employs a comparative operation on each circuit element to determine whether the same logic state that produces abnormal IDDQ also exists among the normal logic states. | ['Masaru Sanada'] | Defect Detection from Visual Abnormalities in Manufacturing Process Using IDDQ | 569,178 |
The current dominance of the service-based paradigm reflects the success of specific design and architectural principles embodied in terms like SOA and REST. This paper suggests further principles for the design of services exhibiting long-running transactions (that is, transactions whose characteristic feature is that in the case of failure not all system states can be automatically restored: system compensation is required). The principles are expressed at the level of scope-based compensation and fault handling, and ensure the consistency of data critical to the business logic. They do so by demanding (a) either the commitment of all of the transaction or none of it, and (b) that compensation is assured in case of failure in 'parent' transactions. The notion of scope is captured algebraically (rather than semantically) in order to express design guidelines which ensure that a given transaction satisfies those principles. Transactional processes are constructed by parallel composition of services, and transactions with scopes in a single service are dealt with as a special case. The system semantics is formalised as a transition system (in Z) and the principles are expressed as formulae in linear temporal logic over runs of the transition system. That facilitates the model checking (using SAL) of their bounded versions. Two simple examples are used throughout to illustrate definitions and finally to demonstrate the approach. | ['Xi Liu', 'Shaofa Yang', 'Jeff W. Sanders'] | Compensation by design | 24,768 |
The Merrifield-Simmons index of a graph is defined as the total number of its independent sets, including the empty set. Denote by G(n,k) the set of connected graphs with n vertices and k cut vertices. In this paper, we characterize the graphs with the maximum and minimum Merrifield-Simmons index, respectively, among all graphs in G(n,k) for all possible k values. | ['Hongbo Hua', 'Shenggui Zhang'] | Graphs with given number of cut vertices and extremal Merrifield-Simmons index | 151,766 |
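Since the Merrifield-Simmons index is defined as the count of independent sets (including the empty set), it can be computed directly for small graphs. A minimal brute-force sketch on a toy path graph, ours rather than the paper's extremal constructions, and exponential in the number of vertices:

```python
from itertools import combinations

def merrifield_simmons(n, edges):
    """Count independent sets (including the empty set) of a graph on
    vertices 0..n-1 by checking every vertex subset; O(2^n), small n only."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                count += 1
    return count

# Path P4 (0-1-2-3): independent sets are {}, the 4 singletons,
# and {0,2}, {0,3}, {1,3} -- eight in total (the Fibonacci number F(6)).
print(merrifield_simmons(4, [(0, 1), (1, 2), (2, 3)]))  # -> 8
```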
Perceptually distinguishing between the Mandarin alveolar nasal coda [n] and the velar [ŋ] is difficult for Japanese natives learning Chinese as a second language (CSL). Discovering relations between acoustic cues and perceptual responses is important for studying CSL acquisition and computer-aided pronunciation teaching. In order to investigate the influence of nasal coda length on nasal perception by Chinese and Japanese listeners, two studies were conducted. One is a statistical comparison of Mandarin nasal coda durations. The other is an identification experiment in which subjects perceive stimuli with gradually shortened nasal codas. Results reveal that the difference between the durations of [n] and [ŋ] is non-significant. Furthermore, nasal coda length hardly affects Chinese subjects' identification of the nasal type, but it has a relatively great impact on the Japanese. Slightly more correct responses are obtained when Japanese listeners identify stimuli with longer codas, and stimuli with shorter endings are more likely to be identified as non-nasals. | ['Xijing Luo', 'Jinsong Zhang', 'Zuyan Wang', 'Hang Wang'] | Coda's duration on perception of mandarin syllables with alveolar/velar nasal endings by Japanese CSL learners | 591,979 |
Adopting aspect-oriented technologies for software development requires revisiting the entire traditional software lifecycle in order to identify and represent occurrences of crosscutting during software requirements engineering and design, and to determine how concerns are composed. In this work, we propose sets of quality measurements to be associated with the activities of aspect-oriented software development (AOSD). The intended goal of the measurements is to assist stakeholders with quantitative evidences to better map or iterate system modules at different activities in the development process and to better set the design decisions for the analyzed requirements. | ['Mohamad Kassab', 'Olga Ormandjieva', 'Constantinos Constantinides'] | Providing quality measurement for aspect-oriented software development | 215,874 |
Advanced instruments in a variety of scientific domains are collecting massive amounts of data that must be postprocessed and organized to support research activities. Astronomers have been pioneers in the use of databases to host sky survey data. Increasing data volumes from more powerful telescopes pose enormous challenges to state-of-the-art database systems and data-loading techniques. In this paper we present SkyLoader, our novel framework for data loading that is being used to populate a multi-table, multi-terabyte database repository for the Palomar-Quest sky survey. SkyLoader consists of an efficient algorithm for bulk loading, an effective data structure to support data integrity, optimized parallelism, and guidelines for system tuning. Performance studies show the positive effects of these techniques, with load time for a 40-gigabyte data set reduced from over 20 hours to less than 3 hours. Our framework offers a promising approach for loading other large and complex scientific databases. | ['Y. Dora Cai', 'Ruth Aydt', 'Robert J. Brunner'] | Optimized Data Loading for a Multi-Terabyte Sky Survey Repository | 8,846 |
In a three-node wireless relay network, two nodes, BS1 and BS2, exchange information through a relay node, RL. Supposing time division duplex is used, physical network coding (PNC) uses two time slots for the information exchange instead of the four time slots needed by the conventional method. In the first time slot, both BS1 and BS2 transmit simultaneously to RL. The relay node RL performs a PNC mapping based on the received signal and broadcasts the mapped signal back to BS1 and BS2 simultaneously during the second time slot. The nodes BS1 and BS2 are able to decode their desired information based on the received mapped signal and the signals they transmitted during the first time slot. In this paper, we analyze the average BER of the information exchanged between the two nodes in Rayleigh fading environments. We also derive the average BER of the mapped signal at the relay during the first time slot. With the derived BER of the mapped signal at RL, we propose to use power control at BS1 and BS2 to minimize the instantaneous BER of the mapped signal at RL. The proposed technique improves the BER of the desired information decoded at the two nodes. The solution turns out to be channel-inversion-based power control at both BS1 and BS2. The proposed power control technique improves both the average BER of the mapped signal at RL and that of the desired information at BS1 and BS2. | ['Edward Chu Yeow Peh', 'Ying-Chang Liang', 'Yong Liang Guan'] | Power control for physical-layer network coding in fading environments | 508,592 |
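The channel-inversion solution the authors arrive at has a very small core: each end node scales its transmit power by the inverse of its own channel power gain so both signals arrive at the relay at a common target level. The target power, the peak-power cap and the Rayleigh simulation below are our own illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
TARGET = 1.0          # desired received power at the relay RL
P_MAX = 10.0          # per-node peak power constraint (assumed)

# Rayleigh fading: channel power gain |h|^2 is exponential with unit mean.
g1 = rng.exponential(1.0, N)          # gain BS1 -> RL
g2 = rng.exponential(1.0, N)          # gain BS2 -> RL

# Channel inversion: transmit power p_i = TARGET / g_i, capped at P_MAX.
p1 = np.minimum(TARGET / g1, P_MAX)
p2 = np.minimum(TARGET / g2, P_MAX)

# Slots where both signals reach the target power, so the PNC mapping at RL
# sees the intended superimposed constellation.
aligned = (p1 * g1 >= TARGET - 1e-9) & (p2 * g2 >= TARGET - 1e-9)
print(f"mean power: {p1.mean():.2f} W, aligned slots: {aligned.mean():.2%}")
```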
Significantly, e-governance has advanced from the global level to in-country national, state, and local levels of government. Despite its importance, little evidence exists in Africa, and in Nigeria in particular, on its effect on the manifest capacity of local government. The paper provides a framework for understanding the role of e-governance in enhancing local government capacity in Nigeria, and its many challenges and opportunities. The paper reviews various government initiatives on e-governance between 1999 and 2010. Questionnaires were also administered to officers of local government councils in southwestern Nigeria with a view to assessing local government capacity in ICT acquisition and deployment, and the efforts towards transforming local government into e-governance, as a tool for achieving transparent and effective governance. Findings revealed that the deployment of information and communication technologies and their application to the administration of local government has had little impact. Likewise, there were notable integration problems due to a lack of political will on the part of the government. The paper concludes that efficient local governance requires an effective, efficient and responsive public administration system, and that a good deal of the promise of democratic governance can be more readily accomplished not by the mere adoption of ICT but also through appropriate adaptation. | ['Michael O. Adeyeye', 'Oba A. T. Aladesanmi'] | Re-inventing local government capacity in Nigeria: The e-governance imperative | 516,140 |
In this study, we propose active bone-conducted sound sensing for estimating a joint angle of a finger and simultaneous use as a haptic interface. For estimating the joint angle, an unnoticeable vibration is input to the finger, and a perceptible vibration is additionally input to the finger for providing haptic feedback. The joint angle is estimated by switching the estimation model depending on the haptic feedback, and the average error of the estimation is within about seven degrees. | ['Yuya Okawa', 'Kentaro Takemura'] | Haptic-enabled Active Bone-Conducted Sound Sensing | 604,215 |
We propose an analog integrated circuit to compute the motion field of a time-varying image by means of a multiple-constraint method. The chip converts optical input to an electrical form with an array of on-chip image sensors. A resistive network is used to smooth the input image. The spatial and temporal derivatives of the image are used to compute the optical-flow constraint. An array of motion cells enforce the optical-flow constraints, and two nonlinear resistive networks enforce the smoothness constraint over the optical-flow field. In order to preserve object boundaries, the smoothness constraint must be adjusted according to discontinuities in the optical-flow field and edges in the image. This is achieved by the nonlinear nature of the resistors and by adjusting the conductance of the resistors according to spatial gradients of image intensity. A 32×32 optical-flow based motion field detection chip is fabricated using a 0.5 μm CMOS process. Measurement results show that the proposed IC can compute the optical-flow field in a scene efficiently and correctly so as to facilitate segmentation of moving objects in image sequences. | ['Ming-Han Lei', 'Tzi-Dar Chiueh'] | An analog motion field detection chip for image segmentation | 401,986 |
An Assessment of Experimental Protocols for Tracing Changes in Word Semantics Relative to Accuracy and Reliability. | ['Johannes Hellrich', 'Udo Hahn'] | An Assessment of Experimental Protocols for Tracing Changes in Word Semantics Relative to Accuracy and Reliability. | 879,656 |
We propose a non-linear concurrent revision control for centralised management of 3D assets and a novel approach to mesh differencing. Large models are decomposed into individual scene graph (SG) nodes through an asset import library and become versioned as collections of polymorphic documents in a NoSQL database (DB). Well-known operations such as 2- and 3-way diff and merging are supported via a custom DB front-end. By not relying on the knowledge of user edits, we make sure our system works with a range of editing software. We demonstrate the feasibility of our proposal on concurrent 3D editing and conflict resolution. | ['Jozef Doboš', 'Anthony Steed'] | Revision Control Framework for 3D Assets | 619,742 |
Dengue is a viral disease transmitted by the aedes aegypti mosquito. In Paraguay — South America, health authorities carry out entomological surveillance activities in order to monitor the vector density in endemic and non-endemic areas through techniques based on the use of traditional indices. Currently there are numerous practical, efficient and economical methods and indicators to determine the populations of the aedes aegypti mosquito, such as larvitraps and ovitraps. The regionalized information obtained from the sampling procedures can be combined with environmental, demographic or epidemiological information in order to obtain detailed models that have the ability to monitor and simulate the behavior of the vector and therefore predict a possible outbreak of dengue. This paper presents the design and implementation of a predictive model to identify outbreaks of dengue vector infestation and the representation of its spread in a geographical information system. The model is implemented as a simulator of the evolutionary process of vector ecology, composed of a set of sub-models that seek to estimate the rate of development, mortality, reproduction and spread of the dengue vector exposed to simulations of climatic variations, where the initial population is generated from data obtained from geographically referenced larvitraps, in order to generate enough alphanumeric and geographical information to contribute to the early detection of potential disease outbreaks. | ['Maximiliano Baez Gonzalez', 'Guillermo Gonzalez Rodas'] | Predictive model of dengue focus applied to geographic information systems | 571,600 |
This paper discusses the application of speech alignment, image processing, and language understanding technologies to build efficient interfaces into large digital oral history archives, as exemplified by a thousand hour HistoryMakers corpus. Browsing, querying, and navigation features are discussed. | ['Howard D. Wactlar', 'Julieanna Richardson', 'Michael G. Christel'] | Facilitating access to large digital oral history archives through informedia technologies | 184,699 |
We consider the dissipative coupling between a stochastic Lattice Boltzmann (LB) fluid and a particle-based Molecular Dynamics (MD) system, as it was first introduced by Ahlrichs and Dünweg (J. Chem. Phys. 111 (1999) 8225). The fluid velocity at the position of a particle is determined by interpolation, such that a Stokes friction force gives rise to an exchange of momentum between the particle and the surrounding fluid nodes. For efficiency reasons, the LB time step is chosen as a multiple of the MD time step, such that the MD system is updated more frequently than the LB fluid. In this situation, there are different ways to implement the coupling: Either the fluid velocity at the surrounding nodes is only updated every LB time step, or it is updated every MD step. It is demonstrated that the latter choice, which enforces momentum conservation on a significantly shorter time scale, is clearly superior in terms of stability and accuracy, and nevertheless only marginally slower in terms of execution speed. The second variant is therefore the recommended implementation. | ['Nikita Tretyakov', 'Burkhard Dünweg'] | An improved dissipative coupling scheme for a system of Molecular Dynamics particles interacting with a Lattice Boltzmann fluid | 906,344 |
Block matching motion estimation algorithms are useful in many video applications such as the block-based video coding scheme employed in MPEG1/2. A single-chip implementation of a motion estimator (ME) for high quality video compression domains has been the goal of many ongoing research projects. There are several complementary directions along which we can reduce hardware complexity, for example, (1) reduction of search points, and (2) simplification of criterion functions. The last category is what this paper focuses on. We study the algorithmic and architectural potentials of the pixel difference classification (PDC) method and propose a generalisation called multi-level PDC (MPDC). The goal is to examine different hardware-complexity vs performance trade-offs. Moreover, we identify a subset of MPDC, the bit-truncation (BT) method which has the most potential for hardware saving. Experimental results show that it offers attractive trade-offs. Under fixed bit rate constraints, it gives picture quality degradation of less than 0.5 dB, which is non-perceivable, for up to 6-bit truncation. BT results in no complicated data or control flows. Hence the consequent hardware reduction is straightforward. The estimated overall encoder hardware saving ranges from 12% to 35% for 6-bit truncation. | ['Yin Chan', 'Sun-Yuan Kung'] | Multi-level pixel difference classification methods | 486,950 |
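A minimal sketch of the bit-truncation (BT) criterion described above: block matching with a sum of absolute differences (SAD) where each 8-bit pixel keeps only its top 8 − 6 = 2 bits, so the absolute-difference hardware shrinks accordingly. The synthetic frame and search window below are our own stand-ins for real video:

```python
import numpy as np

rng = np.random.default_rng(2)

def sad(block, ref):
    """Full-precision sum of absolute differences."""
    return np.abs(block.astype(int) - ref.astype(int)).sum()

def sad_truncated(block, ref, drop_bits=6):
    """Bit-truncation criterion: compare only the top (8 - drop_bits) bits."""
    return np.abs((block >> drop_bits).astype(int)
                  - (ref >> drop_bits).astype(int)).sum()

# A synthetic 8-bit frame; the 16x16 current block sits at offset (3, 5)
# inside an 8x8 search window anchored at (16, 16).
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = frame[19:35, 21:37]

candidates = [(dy, dx) for dy in range(8) for dx in range(8)]
ref = lambda d: frame[16 + d[0]:32 + d[0], 16 + d[1]:32 + d[1]]

best_full = min(candidates, key=lambda d: sad(cur, ref(d)))
best_bt = min(candidates, key=lambda d: sad_truncated(cur, ref(d)))
print("full-SAD match:", best_full, " 6-bit-truncated match:", best_bt)
```

Both criteria recover the true displacement (3, 5) here, which mirrors the paper's finding that aggressive truncation barely perturbs the chosen motion vectors.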
High-performance libraries, the performance-critical building blocks for high-level applications, will assume greater importance on modern processors as they become more complex and diverse. However, automatic library generators are still immature, forcing library developers to manually tune library to meet their performance objectives. We are developing a new script-controlled compilation framework to help domain experts reduce much of the tedious and error-prone nature of manual tuning, by enabling them to leverage their expertise and reuse past optimization experiences. We focus on demonstrating improved performance and productivity obtained through using our framework to tune BLAS3 routines on three GPU platforms: up to 5.4x speedups over the CUBLAS achieved on NVIDIA GeForce 9800, 2.8x on GTX285, and 3.4x on Fermi Tesla C2050. Our results highlight the potential benefits of exploiting domain expertise and the relations between different routines (in terms of their algorithms and data structures). | ['Huimin Cui', 'Lei Wang', 'Jingling Xue', 'Yang Yang', 'Xiaobing Feng'] | Automatic Library Generation for BLAS3 on GPUs | 73,493 |
We study the stability of the origin for the dynamical system ẋ(t) = u(t)Ax(t) + (1 − u(t))Bx(t), where A and B are two 2×2 real matrices with eigenvalues having strictly negative real part, x ∈ ℝ², and u(·): [0, ∞) → [0, 1] is a completely random measurable function. More precisely, we find a (coordinate-invariant) necessary and sufficient condition on A and B for the origin to be asymptotically stable for each function u(·). This bidimensional problem assumes particular interest since linear systems of higher dimensions can be reduced to our situation. Two unpublished examples in the (more difficult) case in which both matrices have real eigenvalues are analyzed in detail. | ['Ugo Boscain'] | Stability of planar switched systems: the linear single input case | 141,788 |
The Learning to Rank (L2R) research field has experienced a fast paced growth over the last few years, with a wide variety of benchmark datasets and baselines available for experimentation. We here investigate the main assumption behind this field, which is that, the use of sophisticated L2R algorithms and models, produce significant gains over more traditional and simple information retrieval approaches. Our experimental results in the LETOR benchmarks surprisingly indicate that many L2R algorithms, when put up against the best individual features of each dataset, may not produce statistically significant differences, even if the absolute gains may seem large. We also find that most of the reported baselines are statistically tied, with no clear winner. | ['Guilherme de Castro Mendes Gomes', 'Vitor Campos de Oliveira', 'Jussara M. Almeida', 'Marcos André Gonçalves'] | Is Learning to Rank Worth it? A Statistical Analysis of Learning to Rank Methods in the LETOR Benchmarks | 598,079 |
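The statistical machinery behind the "statistically tied" claim above is a paired significance test over per-query scores. A minimal sketch with synthetic NDCG values standing in for LETOR evaluation output (all numbers are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Per-query NDCG@10 for an L2R model and for the best single feature
# (synthetic stand-ins for per-query LETOR evaluation output).
ndcg_l2r = np.clip(rng.normal(0.46, 0.15, 100), 0, 1)
ndcg_best_feature = np.clip(ndcg_l2r - rng.normal(0.01, 0.12, 100), 0, 1)

# Paired t-test over queries: is the mean difference significant?
t, p = stats.ttest_rel(ndcg_l2r, ndcg_best_feature)
gain = ndcg_l2r.mean() - ndcg_best_feature.mean()
print(f"absolute gain={gain:.3f}, t={t:.2f}, p={p:.3f}")
# A seemingly large absolute gain can still come with p > 0.05,
# i.e., the two systems are statistically tied.
```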
The detection and classification of white blood cells (WBCs, also known as leukocytes) is a hot issue because of its important applications in disease diagnosis. Nowadays the morphological analysis of blood cells is performed manually by skilled operators, which results in drawbacks such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Although there have been many papers studying the detection of WBCs or the classification of WBCs independently, few papers consider them together. This paper proposes an automatic detection and classification system for WBCs from peripheral blood images. It first proposes an algorithm to detect WBCs in microscope images based on a simple relation between the R and B color components and morphological operations. Then a granularity feature (the pairwise rotation-invariant co-occurrence local binary pattern, PRICoLBP) and an SVM are applied to first classify eosinophils and basophils apart from the other WBCs. Lastly, convolutional neural networks are used to automatically extract high-level features from WBCs, and a random forest is applied to these features to recognize the other three kinds of WBCs: neutrophils, monocytes and lymphocytes. Detection experiments on the Cellavision and ALL-IDB databases show that the proposed detection method almost always outperforms the iterative threshold method while costing less time, and classification experiments show that the proposed classification method is almost always more accurate than several other methods. | ['Jianwei Zhao', 'Minshu Zhang', 'Zhenghua Zhou', 'Jianjun Chu', 'Feilong Cao'] | Automatic detection and classification of leukocytes using convolutional neural networks | 938,943 |
We present a framework for audio background modeling of complex and unstructured audio environments. The determination of background audio is important for understanding and predicting the ambient context surrounding an agent, both human and machine. Our method extends the online adaptive Gaussian Mixture model technique to model variations in the background audio. We propose a method for learning the initial background model using a semi-supervised learning approach. This information is then integrated into the online background determination process, providing us with a more complete background model. We show that we can utilize both labeled and unlabeled data to improve audio classification performance. By incorporating prediction models in the determination process, we can improve the background detection performance even further. Experimental results on real data sets demonstrate the effectiveness of our proposed method. | ['Selina Chu', 'Shrikanth Narayanan', 'C.-C. Jay Kuo'] | A semi-supervised learning approach to online audio background detection | 105,433 |
Combinatorial Properties of Full-Flag Johnson Graphs | ['Irving Dai'] | Combinatorial Properties of Full-Flag Johnson Graphs | 710,608 |
Currently, decision support systems in dynamic and complex environments involve the use of visual data mining technology for interactive data analysis and visualization. This paper presents a new architecture for designing such systems. The envisaged architecture is based on a multi-agent system to improve coordination and communication between the different system modules and to generate the appropriate solution for a specific problem. In this work, we have applied the proposed architecture to develop a visual intelligent clinical decision support system for the fight against nosocomial infections. The developed prototype was evaluated to show the applicability of the new architecture. | ['Hamdi Ellouzi', 'Hela Ltifi', 'Mounir Ben Ayed'] | New Multi-Agent architecture of visual Intelligent Decision Support Systems application in the medical field | 861,233 |
We present a practical technique for pointing and selection using a combination of eye gaze and keyboard triggers. EyePoint uses a two-step progressive refinement process fluidly stitched together in a look-press-look-release action, which makes it possible to compensate for the accuracy limitations of the current state-of-the-art eye gaze trackers. While research in gaze-based pointing has traditionally focused on disabled users, EyePoint makes gaze-based pointing effective and simple enough for even able-bodied users to use for their everyday computing tasks. As the cost of eye gaze tracking devices decreases, it will become possible for such gaze-based techniques to be used as a viable alternative for users who choose not to use a mouse depending on their abilities, tasks and preferences. | ['Manu Kumar', 'Andreas Paepcke', 'Terry Winograd'] | EyePoint: practical pointing and selection using gaze and keyboard | 296,626 |
Traditional databases have limited scalability with respect to data as well as the number of clients. Column-oriented databases overcome this limitation while minimising cost, but they only ensure single-row atomic transactions and do not support snapshot isolation. This paper presents strong snapshot isolation (SI) and atomicity for multi-row distributed transactions in HBase. This HBase snapshot isolation uses a novel approach and handles distributed transactions at the individual client end. It is also designed to be scalable across large distributed databases in terms of data distribution. Extensive experiments have been performed to verify that atomicity is preserved for distributed transactions in various environments. Experimental results show that the proposed methodology better preserves atomicity and snapshot isolation for multi-row transactions in column-oriented HDDBs. | ['Dharavath Ramesh', 'Chiranjeev Kumar', 'Amit Kumar Jain'] | Preserving atomicity and isolation for multi-row transactions in column-oriented heterogeneous distributed databases | 936,621 |
This paper presents a model of diverse programs that assumes there is a common set of potential software faults that are more or less likely to exist in a specific program version. Testing is modeled as a specific ordering of the removal of faults from each program version. Different models of testing are examined where common and diverse test strategies are used for the diverse program versions. Under certain assumptions, theory suggests that a common test strategy could leave the proportion of common faults unchanged, while diverse test strategies are likely to reduce the proportion of common faults. A review of the available empirical evidence gives some support to the assumptions made in the fault-based model. We also consider how the proportion of common faults can be related to the expected reliability improvement. | ['Peter G. Bishop'] | Modeling the Impact of Testing on Diverse Programs | 570,743 |
In this paper, we develop passive network tomography techniques for inferring link-level anomalies like excessive loss rates and delay from path-level measurements. Our approach involves placing a few passive monitoring devices on strategic links within the network, and then passively monitoring the performance of network paths that pass through those links. In order to keep the monitoring infrastructure and communication costs low, we focus on minimizing (1) the number of passive probe devices deployed, and (2) the set of monitored paths. For mesh topologies, we show that the above two minimization problems are NP-hard, and consequently, devise polynomial-time greedy algorithms that achieve a logarithmic approximation factor, which is the best possible for any algorithm. We also consider tree topologies typical of Enterprise networks, and show that while similar NP-hardness results hold, constant factor approximation algorithms are possible for such topologies. | ['Shipra Agrawal', 'K. V. M. Naidu', 'Rajeev Rastogi'] | Diagnosing Link-Level Anomalies Using Passive Probes | 488,577 |
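
The logarithmic-factor greedy algorithm the authors mention is, at its core, greedy set cover (here in hitting-set form): repeatedly probe the link that hits the most still-uncovered paths. Representing paths as sets of link ids is a simplification for illustration.

```python
def greedy_probe_placement(paths):
    """Greedy O(log n)-approximate hitting set: choose the fewest links so
    that every monitored path contains at least one probed link.

    paths: iterable of sets of link ids (a simplified model of the input).
    """
    uncovered = [set(p) for p in paths]
    probes = set()
    while uncovered:
        # count how many uncovered paths each candidate link would hit
        gain = {}
        for p in uncovered:
            for link in p:
                gain[link] = gain.get(link, 0) + 1
        best = max(gain, key=gain.get)       # link hitting the most paths
        probes.add(best)
        uncovered = [p for p in uncovered if best not in p]
    return probes

# usage: three paths through a toy mesh
print(greedy_probe_placement([{"e1", "e2"}, {"e2", "e3"}, {"e4"}]))
# -> e.g. {'e2', 'e4'}
```
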
In this article, we develop a novel approach to detecting the movement changes of an autonomous land vehicle. Movement changes involve two pan values, namely, the pan angle and the pan distance between the vehicle and the navigation line. Both of these values have to be taken into account while making movement calibration. The approach comprises a learning phase and an operation phase. In the former phase, a navigation map is generated by taking four images based on the preset maximum and minimum tolerant pan angles of the vehicle, while it is located on the preset maximum and minimum tolerant pan distances on the side parallel to the navigation line. In the latter phase, images are taken at regular time intervals while the vehicle follows the navigation line, and the two pan values are calculated through interpolation based on the navigation map. To increase the performance of the proposed system, two refinement approaches are also presented. Experimental results are given to reveal the superiority of the proposed approach. | ['Chi-Fang Lin', 'Kuen‐Han Hsieh'] | Straight-line motion control for autonomous land vehicles using 2D image processing techniques | 206,230 |
Tabling in logic programming has been used to eliminate redundant computation and to stop infinite loops. In this paper we add a third use of tabling, i.e., to make infinite computation possible for probabilistic logic programs. Using PRISM, a logic-based probabilistic modeling language with a tabling mechanism, we generalize prefix probability computation for PCFGs to probabilistic logic programs. Given a top-goal, we search for all SLD proofs by tabled search regardless of whether they contain loops or not. We then convert them to a set of linear probability equations and solve them by matrix operation. The solution gives us the probability of the top-goal, which, in nature, is an infinite sum of probabilities. Our generalized approach to prefix probability computation through tabling opens a way to logic-based probabilistic modeling of cyclic dependencies. | ['Taisuke Sato', 'Philipp J. Meyer'] | Tabling for infinite probability computation | 644,297 |
Recent advances in social network analysis methodologies for large (millions of nodes and billions of edges) and dynamic (evolving at different rates) networks have focused on leveraging new high performance architectures, parallel/distributed tools and novel data structures. However, there has been less focus on designing scalable and efficient algorithms to handle the challenges of dynamism in large-scale networks. In our previous work, we presented an overarching anytime anywhere framework for designing parallel and distributed social network analysis algorithms that are scalable to large network sizes and can handle dynamism. A key contribution of our work is to leverage the anytime and anywhere properties of graph analysis problems to design algorithms that can efficiently handle network dynamism by reusing partial results, and by reducing re-computations. In this paper, we present an algorithm for closeness centrality analysis that can handle changes in the network in the form of edge deletions. Using both theoretical analysis and experimental evaluations, we examine the performance of our algorithm with different network sizes and dynamism rates. | ['Eunice E. Santos', 'John Korah', 'Vairavan Murugappan', 'Suresh Subramanian'] | Efficient Anytime Anywhere Algorithms for Closeness Centrality in Large and Dynamic Graphs | 852,082 |
This paper proposes a simple paradigm for constructing heuristics for the static assignment of parallel programs onto asynchronous, distributed memory, multiprocessor architectures. The proposed paradigm involves capturing the dominant computation and communication components of an application and using this relatively simpler program representation to determine an assignment. Thus, the mapping problem is reduced from its most general form to a simpler form which often has optimal solutions. | ['Ajay Mohindra', 'Sudhakar Yalamanchili'] | Dominant representations: a paradigm for mapping parallel computations | 381,133 |
Conventional communication methods employ a wide variety of signaling techniques that essentially map a bit sequence to a real-valued sequence (which is a representation of a point in the signal constellation). The real-valued sequence is in turn transmitted over a communications channel. However, communication techniques for the purpose of multimedia steganography or data hiding have to transmit the real-valued sequence corresponding to a point in the signal constellation superimposed on the original content (without affecting the fidelity of the original content noticeably). In this paper, we explore practical solutions for signaling methods for multimedia steganography. Data hiding is seen as a sophisticated signaling technique using a periodic signal constellation. We propose such a signaling method and present both theoretical and simulated evaluations of its performance in an additive noise scenario. The problem of optimal choice of the parameters for the proposed technique is also explored, and solutions are presented. | ['Mahalingam Ramkumar', 'Ali N. Akansu'] | Signaling methods for multimedia steganography | 243,311 |
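
One concrete way to realize signaling over a periodic constellation superimposed on host content is dithered quantization (QIM-style embedding); the sketch below is in that spirit and is not necessarily the authors' exact scheme. The two offset lattices play the role of a periodic constellation, and `delta` controls the distortion/robustness trade-off.

```python
import numpy as np

def qim_embed(x, bits, delta=1.0):
    """Embed one bit per host sample with dithered scalar quantization.

    Bit 0 maps samples onto the lattice {k*delta}; bit 1 onto
    {k*delta + delta/2} -- a periodic signal constellation.
    """
    d = np.where(np.asarray(bits, dtype=float) > 0, delta / 2.0, 0.0)
    return np.round((x - d) / delta) * delta + d

def qim_extract(y, delta=1.0):
    """Decode by choosing the nearer of the two offset lattices per sample."""
    y = np.asarray(y, dtype=float)
    e0 = np.abs(y - np.round(y / delta) * delta)
    y1 = y - delta / 2.0
    e1 = np.abs(y1 - np.round(y1 / delta) * delta)
    return (e1 < e0).astype(int)

host = np.random.randn(8)
bits = np.random.randint(0, 2, 8)
noisy = qim_embed(host, bits, delta=0.8) + 0.05 * np.random.randn(8)
recovered = qim_extract(noisy, delta=0.8)
print((recovered == bits).mean())   # ~1.0 while noise stays well below delta/4
```
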
Implementation of a cellular neural network–based segmentation algorithm on the bio-inspired vision system | ['Fethullah Karabiber', 'Giuseppe Grassi', 'Pietro Vecchio', 'Sabri Arik', 'M. Erhan Yalcin'] | Implementation of a cellular neural network–based segmentation algorithm on the bio-inspired vision system | 481,512 |
Together with elderly people, a system for the notification of medication intake and measuring the blood pressure has been developed. This system consists of a hardware part, displaying different visual signals to remind users, and a software part to configure the hardware accordingly. Those visual signals are displayed with the help of a lamp, which can be placed in the living room or at any other desired place. The colors of those signals can be adapted with the help of an application, where the time and medication can be configured to be displayed. The developed software was created following a user centered design process together with four elderly people to get to know their needs and to identify the key requirements for such a system. | ['René Baranyi', 'Sascha Rainer', 'Stefan Schlossarek', 'Nadja Lederer', 'Thomas Grechenig'] | Visual Health Reminder: A Reminder for Medication Intake and Measuring Blood Pressure to Support Elderly People | 954,675 |
The goal of our research is to develop a 'clean interface' between design and fabrication facilities for the production of custom machined parts. The research accomplishments are summarised: (1) creation of a new format for data exchange, Numerical Control Markup Language (NCML), (2) development of a prototype system to illustrate how NCML can be effectively used to conduct e-commerce for custom machined parts and (3) testing of the methodology with a number of parts obtained from three different sources: the Design Repository located at Drexel University, Manufacturing Quote, Inc., a commercial company that is in the business of matching buyers and sellers of custom machined parts and Stone Machine Co., a local machine shop that is typical of the type of fabrication facility which would use NCML. | ['Robert B. Jerard', 'Ok-Hyun Ryou'] | NCML: a data exchange format for internet-based machining | 358,139 |
Wireless sensor networks are deployed to monitor real-world phenomena, and are seeing growing demand in commerce and industry. These networks can benefit from time-synchronized clocks on distributed nodes. The precision of time synchronization depends on eliminating, or reliably estimating, the errors associated with synchronization message delays. This paper examines an approach to time-synchronizing motes using onboard radio-controlled clocks. The advantage is the minimisation of non-deterministic sources of error in time synchronization amongst receivers. This approach, using an out-of-band, dedicated time source, aims to achieve network-wide, scalable, topology-independent, fast-convergent and less application-dependent solutions. | ['Waqas Ikram', 'Ivan Stoianov', 'Nina F. Thornhill'] | Towards a radio-controlled time synchronized wireless sensor network: A work in-progress paper | 308,835 |
Although the somatosensory homunculus is a classically used description of the way somatosensory inputs are processed in the brain, the actual contributions of primary (SI) and secondary (SII) somatosensory cortices to the spatial coding of touch remain poorly understood. We studied adaptation of the fMRI BOLD response in the somatosensory cortex by delivering pairs of vibrotactile stimuli to the finger tips of the index and middle fingers. The first stimulus (adaptor) was delivered either to the index or to the middle finger of the right or left hand, and the second stimulus (test) was always administered to the left index finger. The overall BOLD response evoked by the stimulation was primarily contralateral in SI and was more bilateral in SII. However, our fMRI adaptation approach also revealed that both somatosensory cortices were sensitive to ipsilateral as well as to contralateral inputs. SI and SII adapted more after subsequent stimulation of homologous as compared with nonhomologous fingers, showing a distinction between different fingers. Most importantly, for both somatosensory cortices, this finger-specific adaptation occurred irrespective of whether the tactile stimulus was delivered to the same or to different hands. This result implies integration of contralateral and ipsilateral somatosensory inputs in SI as well as in SII. Our findings suggest that SI is more than a simple relay for sensory information and that both SI and SII contribute to the spatial coding of touch by discriminating between body parts (fingers) and by integrating the somatosensory input from the two sides of the body (hands). | ['Luigi Tamè', 'Christoph Braun', 'Angelika Lingnau', 'Jens Schwarzbach', 'Gianpaolo Demarchi', 'Yiwen Li Hegner', 'Alessandro Farnè', 'Francesco Pavani'] | The contribution of primary and secondary somatosensory cortices to the representation of body parts and body sides: An fmri adaptation study | 364,948 |
A deadline-aware scheduling scheme for the lambda grid system is proposed to support a huge computer grid system based on an advanced photonic network technology. The assignment of wavelengths to jobs in order to efficiently carry various services is critical in lambda grid networks. Such services have different requirements such as the job completion deadlines and wavelength assignment must consider the job deadlines. The conventional job scheduling approach assigns a lot of time-slots to a call within a short period in order to finish the job as quickly as possible. This raises the blocking probability of short deadline calls. Our proposal assigns wavelengths in lambda grid networks so as to meet QoS (quality of service) guarantees. The proposed scheme assigns time-slots to a call over time according to its deadline, which allows it to increase the system performance in handling short deadline calls, for example, lowering their blocking probability. Computer simulations show that the proposed scheme can reduce the blocking probability by a factor of 100 compared with the conventional scheme under the low load condition in which the ratio of long deadline calls is high. The proposed scheduling scheme can realize more efficient lambda grid networks. | ['Hiroyuki Miyagi', 'Masahiro Hayashitani', 'Daisuke Ishii', 'Yutaka Arakawa', 'Naoaki Yamanaka'] | A Deadline-Aware Scheduling Scheme for Wavelength Assignment in λ Grid Networks | 298,427 |
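
The core scheduling idea, assigning a call's time-slots spread over its deadline window rather than as early as possible, fits in a few lines. The even-spacing policy below is an illustrative stand-in for the paper's assignment rule, not its exact algorithm.

```python
def spread_slots(needed, deadline_slots):
    """Spread `needed` time-slots evenly across `deadline_slots` available
    slots instead of grabbing the earliest ones (ASAP), leaving early
    capacity free for short-deadline calls.

    Returns the slot indices assigned to the call (illustrative policy only).
    """
    if needed > deadline_slots:
        raise ValueError("deadline cannot be met")
    step = deadline_slots / needed
    return [min(int(round(i * step)), deadline_slots - 1) for i in range(needed)]

print(spread_slots(4, 12))  # -> [0, 3, 6, 9] rather than ASAP's [0, 1, 2, 3]
```
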
Purpose: Resource scheduling is the study of how to effectively measure, evaluate, analyze, and dispatch resources in order to meet the demands of corresponding tasks. Aiming at the problem of resource scheduling in the private cloud environment, the purpose of this paper is to propose a resource scheduling approach from an efficiency priority point of view. Design/methodology/approach: To measure the computational efficiencies for the resource nodes in a private cloud environment, the data envelopment analysis (DEA) approach is incorporated and a suitable DEA model is proposed. Then, based on the efficiency scores calculated by the proposed DEA model for the resource nodes, the 0-1 programming technique is introduced to build a simple resource scheduling model. Findings: The proposed DEA model not only has the ability of ranking all the decision-making units into different positions but also can handle non-discretionary inputs and undesirable outputs when evaluating the resource nodes. Furthermore, the resource scheduling model can generate for the calculation tasks an optimal resource scheduling scheme that has the highest total computational efficiency. Research limitations/implications: The proposed method may also be used in studies of resource scheduling in the environments of public clouds and hybrid clouds. Practical implications: The proposed approach can achieve the goal of resource scheduling in private cloud computing platforms by attaining the highest total computational efficiency, which is very significant in practice. Originality/value: This paper uses an efficiency priority point of view to solve the problem of resource scheduling in private cloud environments. | ['Junfei Chu', 'Jie Wu', 'Qingyuan Zhu', 'Jiasen Sun'] | Resource scheduling in a private cloud environment: an efficiency priority perspective | 929,335 |
We consider a notion of relative homology (and cohomology) for surfaces with two types of boundaries. Using this tool, we study a generalization of Kitaev's code based on surfaces with mixed boundaries. This construction includes both Bravyi and Kitaev's and Freedman and Meyer's extension of Kitaev's toric code. We argue that our generalization offers a denser storage of quantum information. In a planar architecture, we obtain a three-fold overhead reduction over the standard architecture consisting of a punctured square lattice. | ['Nicolas Delfosse', 'Pavithran Iyer', 'David Poulin'] | Generalized surface codes and packing of logical qubits | 835,018 |
Thesauri used in online databases, an analytic guide | ['Bert R. Boyce'] | Thesauri used in online databases, an analytic guide | 317,048 |
We present a hybrid approach to sentence simplification which combines deep semantics and monolingual machine translation to derive simple sentences from complex ones. The approach differs from previous work in two main ways. First, it is semantic based in that it takes as input a deep semantic representation rather than e.g., a sentence or a parse tree. Second, it combines a simplification model for splitting and deletion with a monolingual translation model for phrase substitution and reordering. When compared against current state of the art methods, our model yields significantly simpler output that is both grammatical and meaning preserving. | ['Shashi Narayan', 'Claire Gardent'] | Hybrid Simplification using Deep Semantics and Machine Translation | 612,734 |
Due to their highly dynamic network topology, Vehicular Ad Hoc Networks (VANETs) suffer from frequent link breakage and low packet delivery rates, which pose challenges for routing protocol design. To address this issue, we propose a Fuzzy Logic Routing Based on Forwarding (FLRBF) optimization scheme that relies on receiving nodes to optimize the forwarding of broadcast packets. We first calculate and record the distance factor and time delay from a source node to the destination node. Using this information, we define a forwarding probability value for each node via the proposed fuzzy logic system based on a high-priority routing algorithm. Guided by the defined forwarding probability values, nodes set timers and forward packets so as to balance broadcast efficiency, network throughput and average end-to-end delay. Finally, we conduct simulations to verify the performance of FLRBF. Results demonstrate that the proposed protocol performs well in terms of packet delivery ratio, end-to-end delay and overhead. | ['Zhifang Miao', 'Xuelian Cai', 'Quyuan Luo', 'Weiwei Dong'] | A FLRBF scheme for optimization of forwarding broadcast packets in vehicular ad hoc networks | 968,011 |
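
A toy version of the receiver-side decision clarifies the pipeline: fuzzy memberships over per-node inputs are combined into a forwarding probability, which then sets a suppression timer. Membership shapes, rule weights, and the timer constant are all illustrative assumptions rather than the paper's tuned system.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def forwarding_probability(dist_gain, delay):
    """Toy two-input fuzzy system: a receiver making good progress toward the
    destination (high dist_gain in [0,1]) under a low normalized delay
    (delay in [0,1]) gets a high forwarding probability."""
    good_progress = tri(dist_gain, 0.3, 1.0, 1.7)   # saturates near 1
    low_delay = tri(delay, -0.7, 0.0, 0.7)
    # two rules, weighted-average (Sugeno-style) defuzzification
    rules = [(min(good_progress, low_delay), 0.9),          # both good -> forward
             (max(1 - good_progress, 1 - low_delay), 0.2)]  # otherwise -> rarely
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

def forwarding_timer(prob, max_wait=0.05):
    """Higher probability -> shorter timer, so the best receiver rebroadcasts
    first and suppresses the others."""
    return max_wait * (1.0 - prob)

p = forwarding_probability(dist_gain=0.8, delay=0.2)
print(p, forwarding_timer(p))   # -> ~0.70, ~0.015 s
```
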
With the paradigm of aspect orientation, a developer is able to separate the code of so-called cross-cutting concerns from the rest of the program logic. This possibility is useful for formal specifications, too. For example, security aspects can be separated from the rest of the specification. Another use case for aspect orientation in specifications is the extension of specifications without touching the original ones. The definition of formal semantics for UML profiles without changing the original UML specification is an example for this application. This paper describes the implementation of the aspect oriented approach in Abstract State Machines. We introduce an aspect language with its syntax and formal semantics. It allows for specifying pointcuts where an original specification is augmented with aspect specification. Besides the general overview of this language extension, some ASM specific features of the realization are depicted in detail. | ['Marcel Dausend', 'Alexander Raschke'] | Introducing Aspect---Oriented Specification for Abstract State Machines | 577,237 |
In this letter, a broadband coupler is presented that makes use of the half mode substrate integrated waveguide (HMSIW) technique in a printed circuit board process. The coupler is realized by a parallel HMSIW line which couples energy via the magnetic field. Compared with a microstrip coupler and a conventional HMSIW coupler, it has lower loss and better electromagnetic compatibility owing to the closed field structure. Compared with an SIW coupler, it has smaller size and lower cost owing to the half TE10 mode. The coupler is simulated and measured at 8-12 GHz. Measured results show good agreement with simulation. | ['Haiyan Jin', 'Li Jian', 'Guangjun Wen'] | A Novel Coupler Based on HMSIW | 53,623 |
Resource-based Lexical Approach to Tweet-Norm task. | ['Juan Manuel Cotelo Moya', 'Fermín L. Cruz', 'José A. Troyano'] | Resource-based Lexical Approach to Tweet-Norm task. | 802,163 |
In the last years, multi-objective evolutionary algorithms (MOEA) have been applied to different software engineering problems where many conflicting objectives have to be optimized simultaneously. In theory, evolutionary algorithms feature a nice property for runtime optimization as they can provide a solution in any execution time. In practice, based on a Darwinian-inspired natural selection, these evolutionary algorithms produce many deadborn solutions whose computation results in a waste of computational resources: natural selection is naturally slow. In this paper, we reconsider this founding analogy to accelerate convergence of MOEA, by looking at modern biology studies: artificial selection has been used to achieve an anticipated specific purpose instead of only relying on crossover and natural selection (e.g., Muller et al. [18] on artificial mutation of fruit flies with X-rays). Putting aside the analogy with natural selection, the present paper proposes a hyper-heuristic for MOEA algorithms named Sputnik that uses artificial selective mutation to improve the convergence speed of MOEA. Sputnik leverages the past history of mutation efficiency to select the most relevant mutations to perform. We evaluate Sputnik on a cloud-reasoning engine, which drives on-demand provisioning while considering conflicting performance and cost objectives. We have conducted experiments to highlight the significant performance improvement of Sputnik in terms of resolution time. | ['Donia El Kateb', 'Francois Fouquet', 'Johann Bourcier', 'Yves Le Traon'] | Artificial Mutation inspired Hyper-heuristic for Runtime Usage of Multi-objective Algorithms | 565,185 |
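
The mechanism behind artificial selective mutation, crediting mutation operators by their past efficiency and sampling them accordingly, can be sketched independently of any particular MOEA. The names, the credit-decay rule, and the toy operators below are illustrative, not Sputnik's actual design.

```python
import random

class SelectiveMutation:
    """Hyper-heuristic sketch: keep a credit score per mutation operator and
    sample operators in proportion to their past success at producing
    improving offspring."""

    def __init__(self, operators, decay=0.9, floor=0.05):
        self.operators = list(operators)
        self.credit = {op.__name__: 1.0 for op in self.operators}
        self.decay, self.floor = decay, floor

    def pick(self):
        weights = [max(self.credit[op.__name__], self.floor) for op in self.operators]
        return random.choices(self.operators, weights=weights, k=1)[0]

    def reward(self, op, improved):
        """Exponentially-decayed success average per operator."""
        c = self.credit[op.__name__]
        self.credit[op.__name__] = self.decay * c + (1 - self.decay) * float(improved)

# illustrative operators over a real-valued genome
def gaussian_tweak(g):
    return [x + random.gauss(0, 0.1) for x in g]

def reset_gene(g):
    g = g[:]
    g[random.randrange(len(g))] = random.random()
    return g

hh = SelectiveMutation([gaussian_tweak, reset_gene])
genome = [random.random() for _ in range(5)]
op = hh.pick()
child = op(genome)
hh.reward(op, improved=sum(child) < sum(genome))  # stand-in for a dominance test
```
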
Multimedia object caching, by which the same multimedia object can be adapted to diverse mobile appliances through the technique of transcoding, is an important technology for improving the scalability of Web services, especially in the environment of mobile networks. In this paper, we address the cache replacement problem for multimedia object caching by exploring the aggregate effect of caching multiple versions of the same multimedia object. First, we present an optimal solution for calculating the minimal access cost of caching multiple versions of the same multimedia object. Second, based on this solution, we propose an effective cache replacement algorithm for multimedia object caching. Finally, we evaluate the performance of the proposed solution with a set of simulation experiments for various performance metrics over a wide range of system parameters. | ['Keqiu Li', 'Takashi Nanya', 'Wenyu Qu'] | A Minimal Access Cost-Based Multimedia Object Replacement Algorithm | 81,162 |
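
The aggregate-effect computation can be illustrated with a small cost model: a request for version i is served from the cheapest of the origin server or a transcoding from some cached version j. The exhaustive search over version subsets below is a simplification of the paper's optimal solution and replacement algorithm, adequate because each object has only a handful of versions.

```python
import itertools

def access_cost(cached, rates, server_cost, transcode):
    """Aggregate access cost of holding `cached` versions of one object.

    rates[i]: request rate of version i; server_cost[i]: cost of a miss;
    transcode[j][i]: cost of deriving version i from cached version j
    (float('inf') if impossible, 0 on the diagonal). Illustrative model.
    """
    total = 0.0
    for i, r in enumerate(rates):
        local = min((transcode[j][i] for j in cached), default=float("inf"))
        total += r * min(server_cost[i], local)
    return total

def best_version_set(budget, sizes, rates, server_cost, transcode):
    """Exhaustive minimal-cost version set under a cache-size budget."""
    n = len(rates)
    best_cost, best_set = float("inf"), frozenset()
    for k in range(n + 1):
        for s in itertools.combinations(range(n), k):
            if sum(sizes[j] for j in s) <= budget:
                cost = access_cost(s, rates, server_cost, transcode)
                if cost < best_cost:
                    best_cost, best_set = cost, frozenset(s)
    return best_cost, best_set
```
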
Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we will describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. This describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualizing uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. We shall also describe new open-source implementations of the new specifications, which include both clients and servers. | ['Jonathan Blower', 'Joan Masó', 'Daniel Díaz', 'Charles Roberts', 'Guy H. Griffiths', 'Jane P. Lewis', 'Xiaoyu Yang', 'Xavier Pons'] | Communicating Thematic Data Quality with Web Map Services | 19,886 |
Computerization and Controversy: Value Conflicts and Social Choices | ['Helen Margetts'] | Computerization and Controversy: Value Conflicts and Social Choices | 609,793 |
This paper deals with the problem of summarization and visualization of communication patterns in a large scale corporate social network. The solution to the problem can have significant impact in understanding large scale social network dynamics. There are three key aspects to our approach. First we propose a ring based network representation scheme - the insight is that visual displays of temporal dynamics of large scale social networks can be accomplished without using graph based layout mechanisms. Second, we detect three specific network activity patterns - periodicity, isolated and widespread patterns at multiple time scales. For each pattern we develop specific visualizations within the overall ring based framework. Finally we develop an activity pattern ranking scheme and a visualization that enables us to summarize key social network activities in a single snapshot. We have validated our approach by using the large Enron corpus - we have excellent activity detection results, and very good preliminary user study results for the visualization. | ['Preetha Appan', 'Hari Sundaram', 'Belle L. Tseng'] | Summarization and visualization of communication patterns in a large-scale social network | 968,604 |
Open systems are part of a paradigm shift from algorithmic to interactive computation. Multiagent systems in nature that exhibit emergent behavior and stigmergy offer inspiration for research in open systems and enabling technologies for collaboration. This contribution distinguishes two types of interaction, directly via messages, and indirectly via persistent observable state changes. Models of collaboration are incomplete if they fail to explicitly represent indirect interaction; a richer set of system behaviors is possible when computational entities interact indirectly, including via analog media, such as the real world, than when interaction is exclusively direct. Indirect interaction is therefore a precondition for certain emergent behaviors. | ['David Keil', 'Dina Q. Goldin'] | Modeling indirect interaction in open computational systems | 311,346 |
Recently, Mobile Ad-hoc Networks (MANETs) have continued to attract attention for their potential use in several fields. Most of the work has been done in simulation, because a simulator can give a quick and inexpensive understanding of protocols and algorithms. However, experimentation in the real world is very important to verify the simulation results and to revise the models implemented in the simulator. In this paper, we present the implementation and analysis of our testbed considering the Link Quality Window Size (LQWS) parameter for the Optimized Link State Routing (OLSR) protocol. We investigate the effect of mobility on the throughput of a MANET. The mobile nodes move toward the destination at a regular speed. When the mobile nodes arrive at the corner, they stop for about three seconds. In our experiments, we consider two cases: only one node is moving (mobile node) and two nodes (intermediate nodes) are moving at the same time. We assess the performance of our testbed in terms of throughput, round trip time, jitter and packet loss. From our experiments, we found that the throughput of TCP was improved by reducing the LQWS. | ['Makoto Ikeda', 'Leonard Barolli', 'Masahiro Hiyama', 'Giuseppe De Marco', 'Tao Yang', 'Arjan Durresi'] | Performance Evaluation of Link Quality Extension in Multihop Wireless Mobile Ad-hoc Networks | 17,442 |
Currently, cloud data centers make use of server virtualization techniques that consolidate computational resources to provide various services and improve resource utilization. However, the pattern of resource usage in some management operations for virtualized systems, such as live migration and snapshot recording, is different from that in the case of a traditional data center in which the virtualization technology is not used. Therefore, understanding the system behavior during the execution of a large number of management operations is very important for the reliable management of a virtualized cloud data center. With this understanding, we studied the characteristics of management operation performance by executing many operations simultaneously in an experimental virtual server system. The experimental results revealed some notable characteristics, including interference between operations with virtual machines (VMs) hosted on different physical servers and the asymmetric nature of live migration. On the basis of these findings, we state that degradation of operation performance can be mitigated by orchestrating operations under a proper control policy. We confirm the validity of these suggestions by carrying out a case study. | ['Shinji Kikuchi', 'Yasuhide Matsumoto'] | What will happen if cloud management operations burst out | 115,551 |
Utility functions provide a natural and advantageous way for achieving self-adaptation in distributed systems. We implemented a realistic prototype elevator group control system (EGCS) that demonstrates how utility functions can continually optimize the use of elevators in a dynamic environment. A global manager allocates elevators across the whole system based on throughput obtained from the monitors of the floors. We present empirical data that demonstrate the effectiveness of our utility-function scheme in handling realistic, fluctuating workloads on a test PC. | ['Hao Wu', 'Qingping Tan'] | Utility-Function-Based Self-Adaptation in Elevator Group Control System | 324,735 |
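
A minimal sketch of utility-driven dispatch: the global manager scores each elevator's projected state with a utility function and assigns the hall call to the maximizer. The negative-expected-wait utility and its timing constants are invented for illustration; they are not the paper's utility definition.

```python
def assign_call(call_floor, elevators, utility):
    """Give the hall call to the elevator whose state maximizes the utility."""
    return max(elevators, key=lambda e: utility(e, call_floor))

def neg_expected_wait(elevator, call_floor, floor_time=2.0, stop_time=8.0):
    """Illustrative utility: fewer floors to travel and fewer queued stops
    mean less expected waiting, hence higher utility."""
    travel = abs(elevator["floor"] - call_floor) * floor_time
    queued = len(elevator["stops"]) * stop_time
    return -(travel + queued)

cars = [{"id": 0, "floor": 1, "stops": [3, 7]},
        {"id": 1, "floor": 5, "stops": []}]
print(assign_call(6, cars, neg_expected_wait)["id"])   # -> 1 (idle, nearby car)
```
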
The probabilistic data association (PDA) algorithm is proposed as a quasi-optimal solution for synchronous overloaded CDMA detection. Overloaded CDMA is useful in applications in which the band is a limited resource and more users are to be conveyed on the same channel. The PDA algorithm, already successfully applied to underloaded CDMA, is extended to the more difficult overloaded CDMA problem. PDA uses dynamic soft updates for the a posteriori probabilities and is derived under the Gaussian assumption for the superposition of multiuser interference and Gaussian noise. A discussion on the separability of the two-class problem suggests how a dynamic PDA (D-PDA) may solve the problem more effectively in comparison to a fixed-order PDA (F-PDA). Performance results are reported in this paper and show that D-PDA can provide good decoding with limited computational complexity, with the bit error rate going to zero as the signal-to-noise ratio increases. | ['Gianmarco Romano', 'Francesco Palmieri', 'Peter Willett'] | Soft iterative decoding for overloaded CDMA | 475,811 |
Guest Editorial: Reconfigurable Signal Processing Systems | ['Wayne Burleson', 'Naresh R. Shanbhag'] | Guest Editorial: Reconfigurable Signal Processing Systems | 351,311 |
We study the relative importance of online word of mouth and advertising on firm performance over time since product introduction. The current research separates the volume of consumer-generated online word of mouth (OWOM) from its valence, which has three dimensions: attribute, emotion, and recommendation oriented. Firm-initiated advertising content is also classified as attribute or emotion advertising. We also shed light on the role played by advertising content in generating the different types of OWOM conversations. We use a dynamic hierarchical linear model (DHLM) for our analysis. The proposed model is compared with a dynamic linear model, a vector autoregressive/system of equations model, and a generalized Bass model. Our estimation accounts for potential endogeneity in the key measures. Among the different OWOM measures, only the valence of recommendation OWOM is found to have a direct impact on sales; i.e., not all OWOM is the same. This impact increases over time. In contrast, the impact of attribute advertising and emotion advertising decreases over time. Also, consistent with prior research, we observe that rational messages (i.e., attribute-oriented advertising) wear out a bit faster than emotion-oriented advertising. Moreover, the volume of OWOM does not have a significant impact on sales. This suggests that, in our data, “what people say” is more important than “how much people say.” Next, we find that recommendation OWOM valence is driven primarily by the valence of attribute OWOM when the product is new and driven by the valence of emotion OWOM when the product is more mature. Our brand-level results help us classify brands as consumer driven or firm driven, depending on the relative importance of the OWOM and advertising measures, respectively. | ['Shyam Gopinath', 'Jacquelyn S. Thomas', 'Lakshman Krishnamurthi'] | Investigating the Relationship Between the Content of Online Word of Mouth, Advertising, and Brand Performance | 327,412 |
Data driven architectures have significant potential in the design of high performance ASICs. By exploiting the inherent parallelism in the application, these architectures can maximize pipelining. The key consideration involved with the design of a data driven ASIC is ensuring that throughput is maximized while a relatively low area is maintained. Optimal throughput can be realized by ensuring that all operands arrive simultaneously at their corresponding operator node. If this condition is achieved, the underlying data flow graph is said to be balanced. If the initial data flow graph is unbalanced, buffers must be inserted to prevent the clogging of the pipeline along the shorter paths. A novel algorithm for the assignment of buffers in a data flow graph is proposed. The method can also be applied to achieve wave-pipelining in digital systems under certain restrictions. The algorithm uses a new application of the retiming technique; the number of buffers here is shown to be equal to the minimum number of buffers achieved by integer programming techniques. We also discuss an extension of this algorithm which can further reduce the number of buffers by altering the DFG without affecting functionality or performance. The time complexities of the proposed algorithms are O(V × E) and O(V² log V), respectively, a considerable improvement over the existing strategies. Also proposed is a novel buffer distribution algorithm that exploits a unique feature of data driven operation. This procedure maximizes throughput by inserting substantially fewer buffers than other techniques. Experimental results show that the proposed algorithms outperform the existing methods. | ['Mitrajit Chatterjee', 'Savita Banerjee', 'Dhiraj K. Pradhan'] | Buffer assignment algorithms on data driven ASICs | 92,220 |
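
The balancing condition itself reduces to longest-path arithmetic on the DAG: after computing the longest delay L(v) from the sources, edge (u, v) needs L(v) - L(u) - w(u, v) buffers so that all operand paths arrive together. The sketch below uses that classical rule as a simplified stand-in for the paper's retiming-based formulation.

```python
from collections import defaultdict, deque

def balance_buffers(nodes, edges):
    """Insert buffers so every path into each node has equal registered length.

    edges: list of (u, v, w) with w the latency of edge (u, v) in a DAG.
    Returns {(u, v): buffers} via buffers(u, v) = L(v) - L(u) - w.
    """
    adj, indeg = defaultdict(list), {n: 0 for n in nodes}
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # topological order, then longest path from the sources
    L = {n: 0 for n in nodes}
    q = deque(n for n in nodes if indeg[n] == 0)
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            L[v] = max(L[v], L[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return {(u, v): L[v] - L[u] - w for u, v, w in edges}

# diamond DFG: the short a->d edge needs two buffers to match the a->b->c->d path
print(balance_buffers("abcd", [("a","b",1), ("b","c",1), ("c","d",1), ("a","d",1)]))
# -> {('a','b'): 0, ('b','c'): 0, ('c','d'): 0, ('a','d'): 2}
```
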
We develop an anisotropic perfectly matched layer (PML) method for solving the time harmonic electromagnetic scattering problems in which the PML coordinate stretching is performed only in one direction outside a cuboid domain. The PML parameters such as the thickness of the layer and the absorbing medium property are determined through sharp a posteriori error estimates. Combined with the adaptive finite element method, the proposed adaptive anisotropic PML method provides a complete numerical strategy to solve the scattering problem in the framework of FEM which produces automatically a coarse mesh size away from the fixed domain and thus makes the total computational costs insensitive to the choice of the thickness of the PML layer. Numerical experiments are included to illustrate the competitive behavior of the proposed adaptive method. | ['Zhiming Chen', 'Tao Cui', 'Linbo Zhang'] | An adaptive anisotropic perfectly matched layer method for 3-D time harmonic electromagnetic scattering problems | 392,488 |
HEDD: the human epigenetic drug database | ['Yunfeng Qi', 'Dadong Wang', 'Daying Wang', 'Taicheng Jin', 'Liping Yang', 'Hui Wu', 'Yaoyao Li', 'Jing Zhao', 'Fengping Du', 'Mingxia Song', 'Renjun Wang'] | HEDD: the human epigenetic drug database | 962,751 |
This paper focuses on producing fast and accurate co-segmentation of a pair of images that is scalable and able to exploit multimodal features. We present a general solution for this purpose and specifically propose a noniterative and fully unsupervised method using pointwise color and regional covariance features for image co-segmentation. The scalability and generality of our method are mainly attributable to the superpixel-level irregular graph formulation and multi-feature joint clustering. Through a unified similarity metric, the contributions of multiple features are finally embodied in the co-segmentation energy function. Experiments on a common dataset validate the superior scalability of our method over state-of-the-art alternatives and its capability of generating comparable or even better labeling accuracy at the same time. We also find that multi-feature co-segmentation usually produces better labeling accuracy than using the color feature alone. | ['Shijie Zhang', 'Wei Feng', 'Liang Wan', 'Jiawan Zhang', 'Jianmin Jiang'] | Scalable image co-segmentation using color and covariance features | 196,205 |
This paper presents a path-tracking control method for a snake-like robot using screw drive mechanism. The operators are required to command only one unit in the head, then commands for the rest of the units are automatically calculated to track the path of the preceding units. Although the velocity commands for exact path tracking can be calculated using past command signals due to the omni-directional property of the robot, a simpler control law without using past signals is adopted from the implementation viewpoint. Asymptotic tracking error is investigated based on a Lyapunov approach for the case of a constant curvature. Furthermore, the effectiveness of the proposed method is investigated by computer simulations and laboratory experiments. | ['Hiroaki Fukushima', 'Motoyasu Tanaka', 'Tetsushi Kamegawa', 'Fumitoshi Matsuno'] | Path-tracking control of a snake-like robot using screw drive mechanism | 483,978 |
Reuse can provide major long-term benefits for software development within an organisation, in terms of cost savings and reliability. However, these benefits can only be achieved if the organisation is able to devote sufficient resources to establishing and (re)using a source of components. This paper describes an approach to establishing a source of reusable components and retrieving them during the development process, which has been designed to promote reuse while minimising the resources required of the host organisation. In order to obtain these benefits, the DesignMatcher approach is based on the principle that the developers' understanding of the problem and of possible solutions evolves during the design and development process. DesignMatcher works by making it easy to add design and code fragments to a design store, and offering unobtrusive suggestions for possible matching components as the new system is designed. | ['Peter Hornsby', 'Ian Newman'] | Migrating to reuse: the DesignMatcher approach | 386,509 |
We define the set of meandering curves and we partition this set into three classes, corresponding to the cases where neither, one or both extremities of the curve are covered by its arcs. We present enumerative results for each one of these classes and we associate these results with enumerating sequences for other known meandering curves. | ['A. Panayotopoulos', 'Panayiotis M. Vlamos'] | Partitioning the Meandering Curves | 414,944 |
In this work we summarise our approaches to improving software quality in a commercial organisation. We present different aspects of software quality that span the organisation and are not limited to software development alone. We present the approaches at the design, tool, and process levels. We also present the future directions of our research. All the presented approaches add up to a small-step way of tackling software quality issues in a company. | ['Jakub Rudzki', 'Tarja Systä'] | Small Steps Approach to Tackling Software Quality in a Commercial Setting | 411,122 |
Motivation: The proliferation of public data repositories creates a need for meta-analysis methods to efficiently evaluate, integrate and validate related datasets produced by independent groups. A t-based approach has been proposed to integrate effect size from multiple studies by modeling both intra- and between-study variation. Recently, a non-parametric ‘rank product’ method, which is derived based on biological reasoning of fold-change criteria, has been applied to directly combine multiple datasets into one meta study. Fisher's Inverse χ2 method, which only depends on P-values from individual analyses of each dataset, has been used in a couple of medical studies. While these methods address the question from different angles, it is not clear how they compare with each other. Results: We comparatively evaluate the three methods: t-based hierarchical modeling, rank products and Fisher's Inverse χ2 test with P-values from either the t-based or the rank product method. A simulation study shows that the rank product method, in general, has higher sensitivity and selectivity than the t-based method in both individual and meta-analysis, especially in the setting of small sample size and/or large between-study variation. Not surprisingly, Fisher's χ2 method highly depends on the method used in the individual analysis. Application to real datasets demonstrates that meta-analysis achieves more reliable identification than an individual analysis, and rank products are more robust in gene ranking, which leads to a much higher reproducibility among independent studies. Though t-based meta-analysis greatly improves over the individual analysis, it suffers from a potentially large amount of false positives when P-values serve as threshold. We conclude that careful meta-analysis is a powerful tool for integrating multiple array studies. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. | ['Fangxin Hong', 'Rainer Breitling'] | A comparison of meta-analysis methods for detecting differentially expressed genes in microarray experiments | 303,108 |
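
Both non-parametric combination methods compared above are short to implement. The sketch below computes per-gene rank products from fold-change matrices and Fisher's inverse chi-square combination of per-study P-values; the one-sided (up-regulation) ranking and the toy data are illustrative simplifications.

```python
import numpy as np
from scipy.stats import chi2

def rank_product(fold_changes):
    """Rank product across studies: fold_changes is (n_studies, n_genes).
    Genes are ranked within each study (rank 1 = most up-regulated) and the
    per-gene geometric mean of ranks is returned (smaller = more consistent).
    """
    fc = np.asarray(fold_changes, dtype=float)
    ranks = np.argsort(np.argsort(-fc, axis=1), axis=1) + 1   # 1 = largest FC
    return np.exp(np.mean(np.log(ranks), axis=0))

def fisher_combined(pvalues):
    """Fisher's inverse chi-square combination of per-study P-values:
    X = -2 * sum(log p_i) ~ chi2 with 2k degrees of freedom under H0."""
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.log(p).sum(axis=0)
    return chi2.sf(stat, df=2 * p.shape[0])

fc = np.array([[2.1, 0.4, 1.0], [1.8, 0.6, 1.1]])   # 2 studies x 3 genes
print(rank_product(fc))                              # gene 0: smallest rank product
print(fisher_combined(np.array([[0.01, 0.80], [0.03, 0.65]])))
```
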
In mathematics after some centuries of development the semantical situation is very clean. This may not be surprising, as the subject attracts people who enjoy clarity, generality, and neatness. On the one hand we have our concepts of mathematical objects (numbers, relations, functions, sets), and on the other we have various formal means of expression. The mathematical expressions are generated for the most part in a very regular manner, and every effort is made to supply all expressions with denotations. (This is not always so easy to do. The theory of distributions, for example, provided a non-obvious construction of denotations for expressions of an operational calculus. The derivative operator was well serviced, but one still cannot multiply two distributions.) | ['Dana S. Scott'] | Mathematical concepts in programming language semantics | 95,057 |
For overhead transmission line monitoring, wireless sensor networks offer a low-cost solution to connect sensors on towers with the control center. However, these networks cannot meet stringent quality of service (QoS) requirements, in terms of packet delivery ratio and delay. Also, it is necessary to ensure robustness such that data can be delivered when a tower fails. In view of the QoS and robustness requirements, wide area network (WAN) connections, such as cellular and satellite network are needed, on top of wireless sensor networks. Different WAN connections have different characteristics in terms of availability, performance, and cost. We have proposed a novel scheme, called optimal placement for QoS and robustness (OPQR), which uses the canonical genetic algorithm to determine the numbers, locations, and types of WAN connections to be deployed to minimize cost while satisfying the QoS and robustness requirements. Evaluation results confirm that OPQR can indeed fulfil the desired requirements at minimum cost, and it is a very useful tool in cost-efficient communication network planning for transmission line monitoring. Specifically, OPQR can maintain cost below USD50 per day for a transmission line that has 80 towers spanning across 32 km, while maintaining the packet delay below 100 ms, packet delivery ratio above 99.99%, and each flow has two node-disjoint paths to the control center. | ['Peng-Yong Kong', 'Chih-Wen Liu', 'Joe-Air Jiang'] | Cost-Efficient Placement of Communication Connections for Transmission Line Monitoring | 957,427 |
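
The optimization loop of OPQR-style placement can be sketched as a canonical genetic algorithm over genomes that assign each tower site a WAN connection type or none; the caller supplies a fitness that subtracts a large penalty when the QoS and robustness checks fail. Population size, rates, and operators below are generic defaults, not the paper's settings.

```python
import random

def ga_place_connections(sites, conn_types, fitness, pop=40, gens=200,
                         pc=0.8, pm=0.02):
    """Canonical GA sketch for WAN-connection placement.

    A genome maps each tower site to a connection type (or None); `fitness`
    should return -cost minus a large penalty for QoS/robustness violations.
    """
    genes = list(conn_types) + [None]
    def rand_ind():
        return [random.choice(genes) for _ in range(len(sites))]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        nxt = scored[:2]                                 # elitism
        while len(nxt) < pop:
            a, b = random.sample(scored[:pop // 2], 2)   # truncation selection
            if random.random() < pc:                     # one-point crossover
                cut = random.randrange(1, len(sites))
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [random.choice(genes) if random.random() < pm else g
                     for g in child]                     # per-gene mutation
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# usage sketch: cost-only fitness with a placeholder QoS penalty (assumed names)
def fitness(ind, unit_cost={"cellular": 3.0, "satellite": 9.0}):
    cost = sum(unit_cost[g] for g in ind if g is not None)
    penalty = 1e6 if all(g is None for g in ind) else 0.0   # toy QoS check
    return -(cost + penalty)

best = ga_place_connections(range(20), ["cellular", "satellite"], fitness)
```
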