Dataset schema: abstract (string, 7 to 10.1k chars) · authors (string, 9 to 1.96k chars) · title (string, 6 to 367 chars) · __index_level_0__ (int64, 5 to 1,000k)
In this paper, the capabilities of hybrid polarimetric synthetic aperture radar are investigated to estimate soil moisture on bare and vegetated agricultural soils. A new methodology based on a compact polarimetric decomposition, together with a surface component inversion, is developed to retrieve surface soil moisture. A model-based compact decomposition technique is applied to obtain the surface scattering component under the assumption of a randomly oriented vegetation volume. After vegetation removal, the surface scattering component is inverted for soil moisture (under vegetation) by comparison with a surface component modeled by two physics-based scattering models: the integral equation method (IEM) and the extended Bragg model (X-Bragg). The developed algorithm, based on a two-layer (random volume over ground) scattering model, is applied on a time series of hybrid polarimetric C-band RISAT-1 right circular transmit linear receive data acquired from April to October 2014 over the Wallerfing test site in Lower Bavaria, Germany. The retrieved soil moisture is validated against in situ frequency-domain reflectometry measurements. Including the entire growing season (all acquired dates) and all crop types, the estimated soil moisture values indicate an overall rmse of 7 vol.% using the X-Bragg model and 10 vol.% using the IEM model. The proposed hybrid polarimetric soil-moisture inversion algorithm works well for bare soils (rmse = 3.1–8.9 vol.%) with inversion rates of around 30–70%. The inversion rate for vegetation-covered soils ranges from 5% to 40%, including all phenological stages of the crops and different soil moisture conditions.
['G. G. Ponnurangam', 'Thomas Jagdhuber', 'Irena Hajnsek', 'Y. S. Rao']
Soil Moisture Estimation Using Hybrid Polarimetric SAR Data of RISAT-1
675,913
Should deep neural nets have ears? The role of auditory features in deep learning approaches.
['Angel Mario Castro Martinez', 'Niko Moritz', 'Bernd T. Meyer']
Should deep neural nets have ears? The role of auditory features in deep learning approaches.
772,738
High peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals results in inefficient operation of nonlinear devices used in OFDM systems, and high peak interference-to-carrier ratio (PICR) of received signals degrades the bit-error rate (BER) performance of the system. The joint design problem for OFDM systems considering both PAPR and PICR is investigated in this paper. We formulate a joint constrained and a joint weighted PAPR-PICR reduction problem, so that both PAPR and PICR are reduced and the performance of the system is improved. Algorithms are also developed to solve the joint PAPR and PICR design problems. Simulation results are presented to demonstrate the efficacy of the proposed algorithms in reducing PAPR and PICR.
['Kewei Yuan', 'Zhiwei Mao']
Joint PAPR and PICR Design in OFDM Systems
78,123
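To ground the PAPR metric in the abstract above, here is a minimal sketch of how the PAPR of a single OFDM symbol is commonly computed; the oversampled IFFT approximates the continuous-time peak. It illustrates the definition only, not the authors' joint PAPR-PICR reduction algorithms.

```python
import numpy as np

def papr_db(freq_symbols: np.ndarray, oversample: int = 4) -> float:
    """Peak-to-average power ratio (in dB) of one OFDM symbol.

    freq_symbols holds the complex PSK/QAM values on the subcarriers.
    Zero-padding the IFFT samples the waveform more finely, which
    approximates the continuous-time peak.
    """
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n] = freq_symbols
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Example: a random QPSK symbol on 64 subcarriers.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```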
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.
['Coralia Cartis', 'Nicholas I. M. Gould', 'Philippe L. Toint']
Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results
382,220
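For reference, the local cubic regularisation model that ARC approximately minimises at each iteration can be written as follows, in standard notation; the weight sigma_k is adapted at run time and stands in for the unknown local Lipschitz constant of the Hessian.

```latex
% Local cubic model approximately minimised at ARC iteration k:
\[
  m_k(s) \;=\; f(x_k) \;+\; s^{\top} g_k \;+\; \tfrac{1}{2}\, s^{\top} B_k\, s
         \;+\; \tfrac{1}{3}\, \sigma_k \lVert s \rVert^{3},
  \qquad g_k = \nabla f(x_k),
\]
% where B_k is the Hessian or an approximation to it, and \sigma_k > 0
% is the adaptively estimated regularisation weight.
```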
Organizational decision support systems (ODSSs) provide the organization with a powerful vehicle to represent to its members the circumstances of the organization through the models embedded in the ODSS. The paper proposes an organizational exchange modeling (OEM) paradigm. The paper discusses the importance of exchange to organizational processes, provides a graphical model to represent exchange features, and relates the proposed model to several other models and frameworks.
['John R. Landry']
An organizational exchange model: theory and implementation
487,223
Evaluating the quality of railway passenger service is an important way to understand that quality. This paper first constructs an evaluation indicator system for the quality of railway passenger service on the foundation of the related literature, and constructs the multiple-attribute evaluation matrix and indicator weight vector based on linguistic variables. The linguistic variables are then transformed into triangular fuzzy numbers, and the priority relation based on the “common rule” evaluation is constructed using the improved PROMETHEE-II method. Then the fuzzy positive flux, fuzzy negative flux and fuzzy net flux are calculated. According to the triangular fuzzy number expected value of the net flux, the ranking of the projects is confirmed. Finally, the evaluation model is applied to the passenger service quality of three railway stations of the Ji’nan Railway Administration. The application instance shows that this method is easy to use and can be applied widely.
['Peide Liu', 'Zhongliang Guan']
Evaluation Research on the Quality of the Railway Passenger Service Based on the Linguistic Variables and the Improved PROMETHEE-II Method
270,572
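To make the flux computations concrete, below is a minimal crisp PROMETHEE-II sketch with the "usual" preference criterion (preference 1 whenever one alternative beats another on a criterion). The paper's triangular-fuzzy arithmetic and expected-value defuzzification are omitted, so treat this as an illustration of the flux mechanics only.

```python
import numpy as np

def promethee_net_flux(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Net flux of each alternative under the 'usual' preference criterion.

    scores: (n_alternatives, n_criteria) performance table.
    weights: criterion weights summing to 1.
    """
    n = scores.shape[0]
    # pref[a, b, c] = 1 if alternative a beats b on criterion c.
    pref = (scores[:, None, :] > scores[None, :, :]).astype(float)
    pi = pref @ weights                  # aggregated preference pi(a, b)
    positive = pi.sum(axis=1) / (n - 1)  # leaving (positive) flux
    negative = pi.sum(axis=0) / (n - 1)  # entering (negative) flux
    return positive - negative

# Example: three stations rated on four equally weighted criteria.
table = np.array([[9, 8, 7, 9],
                  [6, 9, 7, 7],
                  [8, 6, 8, 6]], dtype=float)
print(promethee_net_flux(table, np.full(4, 0.25)))
```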
The article focuses on two related issues: authorship distinction and the analysis of characters’ voices in fiction. It deals with the case of Elisabeth Wolff and Agatha Deken, two women writers from the Netherlands who collaboratively published several epistolary novels at the end of the 18th century. First, the task division between the two authors will be analysed based on their usage of words and their frequencies. Next, any stylistic differences between the characters (letter writers) will be dealt with. The focus lies on Wolff’s and Deken’s first joint novel, Sara Burgerhart (1782). As to the authorship, nothing showed a clear task division, which implies that Deken’s and Wolff’s writing styles are very much alike. This confirms findings of other scholars, who found that collaborating authors jointly produce a style that is distinguishable from both authors’ personal styles. As to stylistic differences in the voices of the characters in Sara Burgerhart, it was found that only a couple of the letter writers are clearly distinguishable compared with the main characters in the novel. I experimented with two possible tools to zoom in on the exact differences between those characters, but the methods are still too subjective for my taste. In the follow-up research, I will look further than words and their frequencies as building blocks of literary style.
['Karina van Dalen-Oskam']
Epistolary voices. The case of Elisabeth Wolff and Agatha Deken
299,432
Multi-label classification with many classes has recently drawn a lot of attention. Existing methods address this problem by performing linear label space transformation to reduce the dimension of the label space, and then conducting independent regression for each reduced label dimension. These methods, however, do not capture nonlinear correlations of the multiple labels and may lead to significant information loss in the process of label space reduction. In this paper, we first propose to exploit kernel canonical correlation analysis (KCCA) to capture nonlinear label correlation information and perform nonlinear label space reduction. Then we develop a novel label space reduction method that explicitly combines linear and nonlinear label space transformations based on CCA and KCCA, respectively, to address multi-label classification with many classes. The proposed method is a feature-aware label transformation method that promotes the label predictability in the transformed label space from the input features. We conduct experiments on a number of multi-label classification datasets. The proposed approach demonstrates good performance compared to a number of state-of-the-art label dimension reduction methods.
['Xin Li', 'Yuhong Guo']
Multi-label classification with feature-aware non-linear label space transformation
619,773
Herein, we propose a method for utilizing an unmanned aerial vehicle (UAV) swarm when communication infrastructure is disabled due to war or a natural disaster such as an earthquake. The method involves using the UAV swarm to perform a quick search, to locate the target by selecting main locations within the search area at which the target may be located, and to manipulate the swarm to establish a continuous communication network to the base station. The UAVs first conduct a search of the whole area at high altitude to plan overall movement paths. They then select N main locations within the searched area at which the target may be located, and divide into N groups to perform a close search. The group that locates the target during the search moves toward the base station to transmit the information about the target, and the groups that did not locate the target also return to the base station. The groups that return to the base station receive information about the target and then move to positions between the target and the base station to maintain connectivity between the target and the base station. The proposed method was validated and its performance was evaluated using NS-2 simulation.
['Hyo Hyun Choi', 'Su Hyun Nam', 'Taeshik Shon']
Two Tier Search Scheme Using Micro UAV Swarm
622,707
Spectral mesh processing is an idea that was proposed at the beginning of the 1990s to port the "signal processing toolbox" to the setting of 3D mesh models. Recent advances in both computing power and numerical software make it possible to fully implement this vision. In the classical context of sound and image processing, Fourier analysis was a cornerstone in the development of a wide spectrum of techniques, such as filtering and recognition, to name but a few. In this course, attendees learn how to transfer the underlying concepts to the setting of mesh models, how to implement the "spectral mesh processing" toolbox, and how to use it for real applications, including filtering, shape matching, remeshing, segmentation, and parameterization, among others.
['Bruno Lévy', 'Hao Zhang']
Spectral mesh processing
244,614
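A minimal sketch of the central idea taught in the course: treat the eigenvectors of the graph Laplacian as a Fourier basis for the mesh and filter in that basis. Dense linear algebra and the combinatorial (umbrella) Laplacian are used here for clarity; practical implementations rely on sparse eigensolvers and geometry-aware weights.

```python
import numpy as np

def spectral_smooth(vertices: np.ndarray, edges, keep: int) -> np.ndarray:
    """Low-pass filter vertex positions with the combinatorial graph Laplacian.

    vertices: (n, 3) array; edges: iterable of (i, j) index pairs, each once.
    Keeping only the `keep` lowest-frequency Laplacian eigenvectors is the
    mesh analogue of truncating a Fourier series.
    """
    n = len(vertices)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    # Eigenvectors of L generalise the Fourier basis to the mesh graph.
    _, basis = np.linalg.eigh(L)
    low = basis[:, :keep]
    # Project the coordinate functions onto the low-frequency subspace.
    return low @ (low.T @ vertices)
```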
Indications exist in the literature that a screw’s pullout behaviour is influenced by, among other factors, the relative deformability of the screw and its hosting material. In addition, it is known that the stress field developed in the vicinity of an orthopaedic implant significantly influences bone remodelling. In this context an experimentally validated finite element model of a screw and its hosting material was employed for the study of the pullout phenomenon. The results indicated that the stress distribution within the screw’s hosting material is strongly influenced by the ratio of the screw’s elastic modulus over the respective one of its hosting material. In addition, it is concluded that an optimum value of this ratio exists for which the stresses are more uniformly distributed along the length of the screw, thereby improving the pullout behaviour, and therefore the overall mechanical response, of the ‘screw-hosting material’ complex.
['Panagiotis Chatzistergos', 'Charidimos E. Spyrou', 'Evangelos A. Magnissalis', 'S.K. Kourkoulis']
Dependence of the pullout behaviour of pedicle screws on the screw-hosting material relative deformability
515,883
Additional supporting information may be found in the online version of this article: http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%291096-987X
['Frans T.I. Marx', 'Johan H.L. Jordaan', 'G. Lachmann', 'Hermanus C.M. Vosloo']
A Molecular modeling study of the changes of some steric properties of the precatalysts during the olefin metathesis reaction
137,850
We introduce a new paradigm of collaborative computing called the Ubiquitous Collaborative Activity Virtual Environment (UCAVE). UCAVEs are portable immersive virtual environments that leverage mobile communication platforms, motion trackers and displays to facilitate ad-hoc virtual collaboration. We discuss design criteria and research challenges for UCAVEs, as well as a prototype hardware configuration that enables UCAVE interactions using modern smart phones and head mounted displays.
['Aryabrata Basu', 'Andrew Raij', 'Kyle Johnsen']
Ubiquitous collaborative activity virtual environments
278,565
This paper proposes a method to transmit Multiple Input Multiple Output (MIMO) radio signals over an optical fiber link shared with a 10 Gbps Ethernet link. Single radio signal transmission has been reported before; however, transmission of multiple radio signals has not been discussed. This paper evaluates the bit error rate (BER) performance of the on-off keying (OOK) signal by means of theoretical analysis and simulation. It shows that the truncated normal distribution gives better probability density functions than the normal distribution for estimating the BER performance of OOK.
['Kazuma Nishiyasu', 'Yuya Kaneko', 'Takeshi Higashino', 'Minoru Okada']
BER performance analysis of OOK signal transmission over fiber with MIMO radio signals
722,088
We have previously developed a mobile robot system which uses scale invariant visual landmarks to localize and simultaneously build a 3D map of the environment. In this paper, we look at global localization, also known as the kidnapped robot problem, where the robot localizes itself globally, without any prior location estimate. This is achieved by matching distinctive landmarks in the current frame to a database map. A Hough transform approach and a random sample consensus (RANSAC) approach for global localization are compared, showing that RANSAC is much more efficient. Moreover, robust global localization can be achieved by matching a small sub-map of the local region built from multiple frames.
['Stephen Se', 'David G. Lowe', 'James J. Little']
Global localization using distinctive visual features
318,050
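To illustrate why the RANSAC alternative is efficient, here is a generic RANSAC sketch that estimates a 2D rigid pose from putative landmark matches: a minimal sample of two matched pairs fixes a hypothesis, and the hypothesis with the most inliers wins. The paper works with SIFT-style 3D landmarks, so this is an illustration of the estimator pattern, not the authors' pipeline.

```python
import numpy as np

def ransac_pose(map_pts, obs_pts, iters=200, tol=0.25, rng=None):
    """Estimate a 2D rigid transform (R, t) from matched landmark pairs.

    map_pts, obs_pts: (n, 2) arrays of putative matches (map vs. frame).
    Two pairs determine rotation and translation; inliers are matches
    whose transformed map point lands within `tol` of the observation.
    """
    rng = rng or np.random.default_rng()
    best, best_inliers = None, 0
    n = len(map_pts)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        da, db = map_pts[j] - map_pts[i], obs_pts[j] - obs_pts[i]
        ang = np.arctan2(db[1], db[0]) - np.arctan2(da[1], da[0])
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])
        t = obs_pts[i] - R @ map_pts[i]
        residuals = np.linalg.norm(map_pts @ R.T + t - obs_pts, axis=1)
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers
```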
The future power system is expected to be characterized by increased penetration of intermittent sources. Random and rapid fluctuations in demand together with intermittency in generation impose new challenges for power balancing in the existing system. Conventional techniques of balancing by large central or dispersed generation might not be sufficient for this future scenario. One of the effective methods to cope with this scenario is to enable demand response. This paper proposes a dynamic voltage regulation based demand response technique to be applied in low voltage (LV) distribution feeders. An adaptive dynamic model has been developed to determine the composite voltage dependency of an aggregated load on the feeder level. Following the demand dispatch or control signal, the optimum voltage setting at the LV substation is determined based on the voltage dependency of the load. Furthermore, a new technique has been proposed to estimate the voltage at the consumer point of connection (POC) to ensure operation within voltage limits. Finally, the effectiveness of the proposed method is analyzed comprehensively with reference to three different scenarios on an LV feeder (the Borup feeder) owned by the Danish electricity distribution company SEAS-NVE.
['Bishnu Prasad Bhattarai', 'Birgitte Bak-Jensen', 'Pukar Mahat', 'Jayakrishnan Radhakrishna Pillai']
Voltage controlled dynamic demand response
923,708
Today's Internet provides a global data delivery service to millions of end users, and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
['Mohit Lad', 'Daniel Massey', 'Lixia Zhang']
Visualizing Internet Routing Changes
327,699
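The weighting step that Link-Rank visualizes is simple to state in code: count, for each directed link, how many routes traverse it. A sketch of that idea (not the tool's implementation):

```python
from collections import Counter

def link_weights(routes) -> Counter:
    """Weight of each directed link = number of routes carried over it.

    routes: iterable of paths, each path a sequence of router/AS ids.
    Comparing successive snapshots of these weights is the essence of
    the Link-Rank view of routing changes.
    """
    weights = Counter()
    for path in routes:
        for a, b in zip(path, path[1:]):
            weights[(a, b)] += 1
    return weights

# Example: three routes over a small topology.
print(link_weights([("A", "B", "C"), ("A", "B", "D"), ("E", "B", "C")]))
```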
The importance of fields of knowledge like Biology, Psychology, and the Social Sciences as sources of inspiration for Computational Intelligence has been increasing in recent years, deeply influencing Evolutionary Computation and its applications and inspiring the development of algorithms and methodologies like evolutionary programming and particle swarm optimization. However, the proliferation of biologically-inspired algorithms and solutions indicates the actual focus of researchers and, consequently, Philosophy is still viewed as a sort of obscure and enigmatic knowledge, despite the power of generalization and the systematic nature of philosophical investigative methods like dialectics. This work proposes an evolutionary class of algorithms based on materialist dialectics, namely the Objective Dialectical Method, to be used in search and optimization problems. To validate our proposal we developed simulations using several benchmark functions. The generated results were evaluated in minimization problems concerning how near the results are to the minimum value and how many iterations were used until the estimated minimum value reached a specific threshold value set as a determined precision. This work showed that the proposed dialectical algorithm has good performance in global optimization.
['Wellington P. dos Santos', 'Francisco Marcos de Assis']
Optimization based on dialectics
108,904
Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs.
['Elad Hazan', 'Satyen Kale']
Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs.
773,142
This study examines the working memory systems involved in human wayfinding. In the learning phase, 24 participants learned two routes in a novel photorealistic virtual environment displayed on a 220° screen while they were disrupted by a visual, a spatial, a verbal, or—in a control group—no secondary task. In the following wayfinding phase, the participants had to find and to “virtually walk” the two routes again. During this wayfinding phase, a number of dependent measures were recorded. This research shows that encoding wayfinding knowledge interfered with the verbal and with the spatial secondary task. These interferences were even stronger than the interference of wayfinding knowledge with the visual secondary task. These findings are consistent with a dual-coding approach of wayfinding knowledge.
['T Meilinger', 'Markus Knauff', 'Hh Bülthoff']
Working Memory in Wayfinding—A Dual Task Experiment in a Virtual City
302,927
This paper presents a distributed and concurrent multiple access-based collision avoidance MAC scheme to increase spatial reuse and hence the overall throughput of wireless networks. We make use of location information and power capture techniques to realize this protocol. While other wireless MAC protocols have used power capture technique in a general manner to reject the interfering signals during the entire reception of a packet, we exploit the additional capability of power capture to "lock-on" to the intended packet during a period known as "capture time" to admit other parallel transmissions. Simulation results show that our protocol can outperform the IEEE 802.11 throughput performance by a factor of 2 or more when there are short and medium range transmissions within a given area.
['Jaya Shankar Pathmasuntharam', 'Amitabha Das', 'Anil K. Gupta']
SYNCHRONIZED AND CONCURRENT ENABLING OF NEIGHBORHOOD TRANSMISSION (SCENT) — A MAC PROTOCOL FOR CONCURRENT TRANSMISSION IN WIRELESS AD-HOC NETWORKS
191,925
This paper presents the results from research work done in the field of reconfigurable architectures and systems. Dynamic and partial reconfiguration has mainly been investigated as a way to configure functionalities in hardware on-demand, controlled either by the user or by the system itself. This paper presents work that was aimed at applying hardware reconfiguration even for run-time adaptation of functional implementation in order to enable self-optimization of power and performance according to the run-time specific requirements of the application.
['Katarina Paulsson', 'Michael Hübner', 'Jürgen Becker']
Exploitation of dynamic and partial hardware reconfiguration for on-line power/performance optimization
294,286
GAML: a Parallel Implementation of Lazy ML
['Luc Maranget']
GAML: a Parallel Implementation of Lazy ML
93,111
Epigenomic k-mer dictionaries: shedding light on how sequence composition influences in vivo nucleosome positioning
['Raffaele Giancarlo', 'Simona E. Rombo', 'Filippo Utro']
Epigenomic k-mer dictionaries: shedding light on how sequence composition influences in vivo nucleosome positioning
685,741
Although object-oriented languages can improve programming practices, their characteristics may introduce new problems for software engineers. One important problem is the presence of implicit control flow caused by exception handling and polymorphism. Implicit control flow causes complex interactions, and can thus complicate software-engineering tasks. To address this problem, we present a systematic and structured approach, for supporting these tasks, based on the static and dynamic analyses of constructs that cause implicit control flow. Our approach provides software engineers with information for supporting and guiding development and maintenance tasks. We also present empirical results to illustrate the potential usefulness of our approach. Our studies show that, for the subjects considered, complex implicit control flow is always present and is generally not adequately exercised.
['Saurabh Sinha', 'Alessandro Orso', 'Mary Jean Harrold']
Automated Support for Development, Maintenance, and Testing in the Presence of Implicit Control Flow
297,123
An algebraic soft-decision decoder for Hermitian codes is presented. We apply Koetter and Vardy's soft-decision decoding framework, now well established for Reed-Solomon codes, to Hermitian codes. First we provide an algebraic foundation for soft-decision decoding. Then we present an interpolation algorithm to find the Q-polynomial that plays a key role in the decoding. With some simulation results, we compare the performance of the algebraic soft-decision decoders for Hermitian codes and Reed-Solomon codes, favorable to the former.
['Kwankyu Lee', "Michael E. O'Sullivan"]
Algebraic Soft-Decision Decoding of Hermitian Codes
248,600
Backdoors are a powerful tool to obtain efficient algorithms for hard problems. Recently, two new notions of backdoors to planning were introduced. However, for one of the new notions (i.e., variable-deletion) only hardness results are known so far. In this work we improve the situation by defining a new type of variable-deletion backdoors based on the extended causal graph of a planning instance. For this notion of backdoors several fixed-parameter tractable algorithms are identified. Furthermore, we explore the capabilities of polynomial time preprocessing, i.e., we check whether there exists a polynomial kernel. Our results also show the close connection between planning and verification problems such as Vector Addition System with States (VASS).
['Martin Kronegger', 'Sebastian Ordyniak', 'Andreas Pfandler']
Variable-deletion backdoors to planning
582,718
Dung's abstract argumentation framework consists of a set of interacting arguments and a series of semantics for evaluating them. Those semantics partition the powerset of the set of arguments into two classes: extensions and non-extensions. In order to reason with a specific semantics, one needs to take a credulous or skeptical approach, i.e. an argument is eventually accepted if it is accepted in one or all extensions, respectively. In our previous work [1], we proposed a novel semantics, called counting semantics, which allows for a more fine-grained assessment of arguments by counting the number of their respective attackers and defenders based on the argument graph and argument game. In this paper, we continue our previous work by presenting some supplementaries about how to choose the damping factor for the counting semantics, and what relationships it bears to some existing approaches, such as Dung's classical semantics and generic gradual valuations. Lastly, an axiomatic perspective on the ranking semantics induced by our counting semantics is presented.
['Fuan Pu', 'Jian Luo', 'Guiming Luo']
Some Supplementaries to the Counting Semantics for Abstract Argumentation
602,385
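A sketch of the counting idea: odd-length attack walks ending at an argument count against it, even-length walks (defenders) count for it, and a damping factor discounts longer walks. The normalisation used in the published counting semantics may differ, so this is an illustrative rendering only.

```python
import numpy as np

def counting_scores(attack: np.ndarray, damping: float = 0.9, depth: int = 50):
    """Damped attacker/defender count on an attack graph (illustrative).

    attack[i, j] = 1 iff argument i attacks argument j.  Walks of odd
    length ending at an argument are attackers (penalty), even-length
    walks are defenders (bonus); longer walks are damped geometrically.
    """
    n = attack.shape[0]
    scores = np.zeros(n)
    walks = np.ones(n)              # length-0 walks: the argument itself
    sign, weight = 1.0, 1.0
    for _ in range(depth):
        scores += sign * weight * walks
        walks = attack.T @ walks    # extend every walk by one attacker
        sign, weight = -sign, weight * damping
    return scores

# Example: b attacks a, c attacks b (so c defends a); a outranks b.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])
print(counting_scores(A))
```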
To serve a large number of enterprise customers, an enterprise supplier needs to set up many custom B2B stores, each personalized with special promotions and dedicated contracts. The custom store needs to empower business buyers to shop in the following fashions: 1) buyers visit the custom store, make direct decisions on product selection based on entitled pricing, and check out orders; 2) buyers use their own company's procurement software, perform catalog shopping on one or more remote custom stores, return an order quote locally for approval, and subsequently send approved orders to the supplier for fulfillment; 3) buyers shop with their own company's procurement software, using a catalog pre-loaded with product and pricing information extracted from the supplier's custom stores, and then place purchase orders. This paper describes a practical implementation experience on how these shopping paradigms can be accommodated effectively and efficiently using a unified commerce server architecture. It is explained how one master catalog subsystem can be personalized to accommodate B2B direct catalog shopping, B2B procurement with remote catalog punchout, B2B procurement with local catalog, and entitled catalog extraction. A detailed discussion on how to build the commerce server in a 2-tier architecture is illustrated by the handling of various different punchout protocols in one common implementation. The study also reports on the issues and efficiency considerations of different approaches for real-time entitled pricing lookup vs. batch pricing extraction.
['Trieu C. Chieu', 'Florian Pinel', 'Jih-Shyr Yih']
Unified commerce server architecture for large number of enterprise stores
454,659
Fourteen popular scoring functions, i.e., X-Score, DrugScore, five scoring functions in the Sybyl software (D-Score, PMF-Score, G-Score, ChemScore, and F-Score), four scoring functions in the Cerius2 software (LigScore, PLP, PMF, and LUDI), two scoring functions in the GOLD program (GoldScore and ChemScore), and HINT, were tested on the refined set of the PDBbind database, a set of 800 diverse protein-ligand complexes with high-resolution crystal structures and experimentally determined K_i or K_d values. The focus of our study was to assess the ability of these scoring functions to predict binding affinities based on the experimentally determined high-resolution crystal structures of proteins in complex with their ligands. The quantitative correlation between the binding scores produced by each scoring function and the known binding constants of the 800 complexes was computed. X-Score, DrugScore, Sybyl::ChemScore, and Cerius2::PLP provided better correlations than the other scoring functions with standard deviations of 1.8-2.0 log units. These four scoring functions were also found to be robust enough to carry out computation directly on unaltered crystal structures. To examine how well scoring functions predict the binding affinities for ligands bound to the same target protein, the performance of these 14 scoring functions were evaluated on three subsets of protein-ligand complexes from the test set: HIV-1 protease complexes (82 entries), trypsin complexes (45 entries), and carbonic anhydrase II complexes (40 entries). Although the results for the HIV-1 protease subset are less than desirable, several scoring functions are able to satisfactorily predict the binding affinities for the trypsin and the carbonic anhydrase II subsets with standard deviation as low as 1.0 log unit (corresponding to 1.3-1.4 kcal/mol at room temperature). Our results demonstrate the strengths as well as the weaknesses of current scoring functions for binding affinity prediction.
['Renxiao Wang', 'Yipin Lu', 'Xueliang Fang', 'Shaomeng Wang']
An extensive test of 14 scoring functions using the PDBbind refined set of 800 protein-ligand complexes
431,336
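The evaluation protocol implied above reduces, per scoring function, to correlating predicted scores with measured binding constants and reporting the spread of residuals in log units. A sketch under that reading; the exact regression conventions of the study are assumptions here.

```python
import numpy as np

def evaluate_scoring_function(predicted, measured_log_k):
    """Correlate predicted binding scores with measured log K_i/K_d values.

    Returns the Pearson correlation and the standard deviation (in log
    units) of residuals after a linear fit, two common figures of merit
    for comparing scoring functions.
    """
    predicted = np.asarray(predicted, float)
    measured = np.asarray(measured_log_k, float)
    r = np.corrcoef(predicted, measured)[0, 1]
    slope, intercept = np.polyfit(predicted, measured, 1)
    residuals = measured - (slope * predicted + intercept)
    sd = residuals.std(ddof=2)   # two fitted parameters
    return r, sd
```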
As mobile wireless networks increase in popularity and pervasiveness, using the Internet ubiquitously is no longer a dream. The future Wireless Internet is expected to consist of different types of wireless networks with different coverage ranges. The integration of Wireless Metropolitan Area Network (WMAN) and Wireless Local Area Network (WLAN) technologies seems to be a feasible option for better and cheaper wireless coverage extension. This hybrid network can take advantage of the benefits of both technologies to provide ubiquitous services with high quality of service (QoS). In order to improve bandwidth utilization and reduce interference in heterogeneous WMAN/WLAN networks, this work applies genetic algorithm-based bandwidth allocation techniques along with space division multiple access (SDMA) to manage the precious bandwidth resource in the heterogeneous wireless networks. The simulation results show that our scheme is effective in terms of performance metrics, including dropping probability of handoff calls, new call blocking probability, and bandwidth utilization.
['Chenn-Jung Huang', 'Hong-Xin Chen', 'I-Fan Chen', 'Kai-Wen Hu']
A Location-Aware Resource Management Scheme for Ubiquitous Wireless Networks
405,926
Cloud technology is moving towards multi-cloud environments with the inclusion of various devices. Cloud and IoT integration, resulting in so-called edge cloud and fog computing, has started. This requires the combination of data centre technologies with much more constrained devices, while still using virtualised solutions to deal with scalability, flexibility and multi-tenancy concerns. Lightweight virtualisation solutions do exist for this architectural setting with smaller, but still virtualised, devices to provide application and platform technology as services. Containerisation is a solution component for lightweight virtualisation. Containers are furthermore relevant for cloud platform concerns dealt with by Platform-as-a-Service (PaaS) clouds, like application packaging and orchestration. We demonstrate an architecture for edge cloud PaaS. For edge clouds, application and service orchestration can help to manage and orchestrate applications through containers. In this way, computation can be brought to the edge of the cloud, rather than data from the Internet-of-Things (IoT) to the cloud. We show that edge cloud requirements such as cost-efficiency, low power consumption, and robustness can be met by implementing container and cluster technology on small single-board devices like Raspberry Pis. This architecture can facilitate applications through distributed multi-cloud platforms built from a range of nodes from data centres to small devices, which we refer to as edge cloud. We illustrate key concepts of an edge cloud PaaS and refer to experimental and conceptual work to make that case.
['Claus Pahl', 'Sven Helmer', 'Lorenzo Miori', 'Julian Sanin', 'Brian Lee']
A Container-Based Edge Cloud PaaS Architecture Based on Raspberry Pi Clusters
910,338
We consider Alamouti encoding that draws symbols from M-ary phase-shift keying (M-PSK) and develop a new differential modulation scheme that attains full rate for any constellation order. In contrast to past work, the proposed scheme guarantees that the encoded matrix maintains the characteristics of the initial codebook and, at the same time, attains full rate so that all possible sequences of space-time matrices become valid. The latter property is exploited to develop a polynomial-complexity maximum-likelihood noncoherent sequence decoder whose order is solely determined by the number of receive antennas. We show that the proposed scheme is superior to contemporary alternatives in terms of encoding rate, decoding complexity, and performance.
['Panos P. Markopoulos', 'George N. Karystinos']
Novel full-rate noncoherent alamouti encoding that allows polynomial-complexity optimal decoding
278,502
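For context, the underlying Alamouti space-time block code over which the differential scheme operates; rows are antennas, columns are symbol periods, and the orthogonality property is what enables simple decoding:

```latex
\[
  \mathbf{S} \;=\;
  \begin{pmatrix}
    s_1 & -s_2^{\ast} \\
    s_2 & \phantom{-}s_1^{\ast}
  \end{pmatrix},
  \qquad
  \mathbf{S}^{H}\mathbf{S}
  \;=\; \bigl(\lvert s_1\rvert^{2} + \lvert s_2\rvert^{2}\bigr)\,\mathbf{I}_2 ,
\]
% where s_1, s_2 are drawn from the M-PSK constellation.
```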
In this paper, the importance of including small image features at the initial levels of a progressive second generation video coding scheme is presented. It is shown that a number of meaningful small features called details should be coded, even at very low data bit-rates, in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting and coding visual details in a video sequence using morphological techniques. Its application in the framework of a multiresolution segmentation-based coding algorithm yields better results than pure segmentation techniques at higher compression ratios, if the selection step fits some main subjective requirements. Details are extracted and coded separately from the region structure and included in the reconstructed images in a later stage. Taking the local background of a given detail into account for its perceptual selection breaks the concept of "partition" in the segmentation scheme. As long as details are not considered as adjacent regions but isolated features spread over the image, "detail coding" can be seen as one step towards the so-called feature-based video coding techniques.
['Josep R. Casas', 'Luis Torres']
Coding of details in very low bit-rate video systems
168,803
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using a cascaded classifier trained using AdaBoost and adapted to IR eye images. Experiments show good performance in challenging sequences with test subjects showing large head movements and under significant lighting variations.
['D. Witzner Hansen', 'Randi Satria', 'J. Sorensen', 'R. Hammoud']
Improved Likelihood Function in Particle-based IR Eye Tracking
419,803
Scheduling Skeleton-Based Grid Applications Using PEPA and NWS
['Anne Benoit', 'Murray Cole', 'Stephen Gilmore', 'Jane Hillston']
Scheduling Skeleton-Based Grid Applications Using PEPA and NWS
446,366
The latency of direct networks is modeled, taking into account both switch and wire delays. A simple closed-form expression for contention in buffered, direct networks is derived and found to agree closely with simulations. The model includes the effects of packet size and communication locality. Network analysis under various constraints and under different workload parameters reveals that performance is highly sensitive to these constraints and workloads. A two-dimensional network is shown to have the lowest latency only when switch delays and network contention are ignored; three- or four-dimensional networks are favored otherwise. If communication locality exists, two-dimensional networks regain their advantage. Communication locality decreases both the base network latency and the network bandwidth requirements of applications. It is shown that a much larger fraction of the resulting performance improvement arises from the reduction in bandwidth requirements than from the decrease in latency.
['Anant Agarwal']
Limits on interconnection network performance
375,991
This study explores use of the social network site Facebook for online political discussion. Online political discussion has been criticized for isolating disagreeing persons from engaging in discussion and for having an atmosphere of uncivil discussion behavior. Analysis reveals the participation of disagreeing parties within the discussion with the large majority of posters (73 percent) expressing support for the stated position of the Facebook group, and a minority of posters (17 percent) expressing opposition to the position of the group. Despite the presence of uncivil discussion posting within the Facebook group, the large majority of discussion participation (75 percent) is devoid of flaming. Results of this study provide important groundwork and raise new questions for study of online political discussion as it occurs in the emergent Internet technologies of social network sites.
['Matthew J. Kushin', 'Kelin Kitchener']
Getting political on social network sites: Exploring online political discourse on Facebook
193,132
Complexity, Early View (Online Version of Record published before inclusion in an issue)
['Pedro P. B. de Oliveira', 'Eurico L. P. Ruivo', 'Wander L. Costa', 'Fábio Tokio Miki', 'Victor V. Trafaniuc']
Advances in the study of elementary cellular automata regular language complexity
351,538
Recent studies have applied different approaches for summarizing software artifacts, and yet very few efforts have been made in summarizing the source code fragments available on the web. This paper investigates the feasibility of generating code fragment summaries by using supervised learning algorithms. We hire a crowd of ten individuals from the same workplace to extract source code features on a corpus of 127 code fragments retrieved from the Eclipse and NetBeans official frequently asked questions (FAQs). Human annotators suggest summary lines. Our machine learning algorithms produce better results with a precision of 82% and perform statistically better than existing code fragment classifiers. Evaluation of the algorithms on several statistical measures endorses our result. This result is promising and suggests that mechanisms such as data-driven crowd enlistment can improve the efficacy of existing code fragment classifiers.
['Najam Nazar', 'He Jiang', 'Guojun Gao', 'Zhang T', 'Xiaochen Li', 'Zhilei Ren']
Source code fragment summarization with small-scale crowdsourcing based features
641,532
We propose and implement a set of efficient on-line algorithms for a router to sample the passing packets and identify multi-attribute high-volume traffic aggregates. Besides the obvious applications in traffic engineering and measurement, we describe its application in defending against certain classes of DoS attacks. Our contributions include three novel algorithms. The reservoir sampling algorithm employs a biased sampling strategy that favors packets from high-volume aggregates. Based on the samples, two efficient algorithms are proposed to identify single-attribute aggregates and multi-attribute aggregates, respectively. We implement the algorithms on a Linux router and demonstrate that the router can effectively filter out malicious packets under DoS attacks.
['Yong Tang', 'Shigang Chen']
Online identification of multi-attribute high-volume traffic aggregates through sampling
274,747
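One plausible way to realize a reservoir biased toward high-volume aggregates is weighted reservoir sampling with the running aggregate count as the weight. The sketch below uses the Efraimidis-Spirakis priority trick; the paper's exact biasing strategy may differ.

```python
import heapq
import random
from collections import Counter

def biased_reservoir(packets, k, aggregate=lambda p: p):
    """Keep a k-packet sample biased toward high-volume aggregates.

    Weighted reservoir sampling: each packet draws priority u**(1/w)
    and the k largest priorities survive.  Using the running count of
    the packet's aggregate as the weight w favours heavy aggregates;
    this is one plausible bias, not necessarily the paper's strategy.
    """
    counts = Counter()
    heap = []                          # min-heap of (priority, seq, packet)
    for seq, pkt in enumerate(packets):
        counts[aggregate(pkt)] += 1
        w = counts[aggregate(pkt)]
        priority = random.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (priority, seq, pkt))
        elif priority > heap[0][0]:
            heapq.heapreplace(heap, (priority, seq, pkt))
    return [pkt for _, _, pkt in heap]
```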
This paper presents the theoretical underpinning of a model for symbolically representing probabilistic transition systems, an extension of labelled transition systems for the modelling of general (discrete as well as continuous or singular) probability spaces. These transition systems are particularly suited for modelling softly timed systems, real-time systems in which the time constraints are of random nature. For continuous probability spaces these transition systems are infinite by nature. Stochastic automata represent their behaviour in a finite way. This paper presents the model of stochastic automata, their semantics in terms of probabilistic transition systems, and studies several notions of bisimulation. Furthermore, the relationship of stochastic automata to generalised semi-Markov processes is established.
["Pedro R. D'Argenio", 'Joost-Pieter Katoen']
A theory of stochastic systems part I: Stochastic automata
363,322
Test Expectancy Effects on Metacomprehension, Self-regulation, and Learning.
['Thomas D. Griffin', 'Jennifer Wiley', 'Keith W. Thiede']
Test Expectancy Effects on Metacomprehension, Self-regulation, and Learning.
787,277
Two fundamentally different techniques for compressing stereopairs are discussed. The first technique, called disparity-compensated transform-domain predictive coding, attempts to minimize the mean-square error between the original stereopair and the compressed stereopair. The second technique, called mixed-resolution coding, is a psychophysically justified technique that exploits known facts about human stereovision to code stereopairs in a subjectively acceptable manner. A method for assessing the quality of compressed stereopairs is also presented. It involves measuring the ability of an observer to perceive depth in coded stereopairs. It was found that observers generally perceived objects to be further away in compressed stereopairs than they did in originals. It is proved that the rate distortion limit for coding stereopairs cannot in general be achieved by a coder that first codes and decodes the right picture sequence independently of the left picture sequence, and then codes and decodes the left picture sequence given the decoded right picture sequence.
['Michael G. Perkins']
Data compression of stereopairs
385,120
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MRI is a new contrast agent- and stress-free imaging technique for the assessment of myocardial ischemia at rest. The precise registration among the cardiac phases in this cine type acquisition is essential for automating the analysis of images of this technique, since it can potentially lead to better specificity of ischemia detection. However, inconsistency in myocardial intensity patterns and the changes in myocardial shape due to the heart's motion lead to low registration performance for state-of-the-art methods. This low accuracy can be explained by the lack of distinguishable features in CP-BOLD and inappropriate metric definitions in current intensity-based registration frameworks. In this paper, the sparse representations, which are defined by a discriminative dictionary learning approach for source and target images, are used to improve myocardial registration. This method combines appearance with Gabor and HOG features in a dictionary learning framework to sparsely represent features in a low dimensional space. The sum of squared differences of these distinctive sparse representations is used to define a similarity term in the registration framework. The proposed descriptor is validated on a challenging dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic condition across 10 canines.
['Ilkay Oksuz', 'Anirban Mukhopadhyay', 'Marco Bevilacqua', 'Rohan Dharmakumar', 'Sotirios A. Tsaftaris']
Dictionary Learning Based Image Descriptor for Myocardial Registration of CP-BOLD MR
590,888
Twin support vector machine is a machine learning algorithm developed from the standard support vector machine. The performance of twin support vector machine is often better than support vector machine on datasets that have cross regions. The recently proposed wavelet twin support vector machine introduces the wavelet kernel function into twin support vector machine, making it possible to combine wavelet analysis techniques with twin support vector machine. Wavelet twin support vector machine not only expands the range of kernel function selection, but also greatly improves the generalization ability of twin support vector machine. However, like twin support vector machine, wavelet twin support vector machine cannot deal with the parameter selection problem well. Unsuitable parameters reduce the classification capability of the algorithm. In order to solve the parameter selection problem in wavelet twin support vector machine, in this paper we use the glowworm swarm optimization method to optimize the parameters of wavelet twin support vector machine and propose wavelet twin support vector machine based on glowworm swarm optimization. This method takes the parameters of wavelet twin support vector machine as the position information of glowworms, regards the function that calculates the wavelet twin support vector machine classification accuracy as the objective function, and runs the glowworm swarm optimization algorithm to update the glowworms. The optimal parameters are the position information of the glowworms obtained when the glowworm swarm optimization algorithm stops. This determines the parameters of wavelet twin support vector machine automatically before the training process, avoiding the difficulty of parameter selection. Reasonable parameters promote the performance of wavelet twin support vector machine and improve the accuracy. The experimental results on benchmark datasets indicate that the proposed approach is efficient and has high classification accuracy.
['Shifei Ding', 'Yuexuan An', 'Xiekai Zhang', 'Fulin Wu', 'Yu Xue']
Wavelet twin support vector machines based on glowworm swarm optimization
937,341
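A simplified glowworm-style search loop for the parameter tuning described above. The objective would be, for instance, cross-validated accuracy of the wavelet twin SVM as a function of its parameters (names here are illustrative); the adaptive neighbourhood range of full GSO is omitted, so treat this as a structural sketch.

```python
import numpy as np

def gso_maximize(objective, bounds, n_worms=20, iters=60,
                 rho=0.4, gamma=0.6, step=0.03, rng=None):
    """Simplified glowworm swarm optimization over a box of parameters.

    objective: maps a parameter vector to a fitness to maximise.
    Each glowworm's luciferin tracks its fitness; worms drift toward
    randomly chosen brighter neighbours, clipped to the search box.
    """
    rng = rng or np.random.default_rng()
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    pos = rng.uniform(lo, hi, size=(n_worms, len(bounds)))
    luciferin = np.zeros(n_worms)
    for _ in range(iters):
        fitness = np.array([objective(p) for p in pos])
        luciferin = (1 - rho) * luciferin + gamma * fitness
        for i in range(n_worms):
            brighter = np.flatnonzero(luciferin > luciferin[i])
            if brighter.size:
                j = rng.choice(brighter)
                d = pos[j] - pos[i]
                pos[i] = np.clip(
                    pos[i] + step * d / (np.linalg.norm(d) + 1e-12), lo, hi)
    best = max(range(n_worms), key=lambda i: objective(pos[i]))
    return pos[best]

# Example: tune two hypothetical parameters (penalty C, kernel width a).
print(gso_maximize(lambda p: -(p[0] - 1) ** 2 - (p[1] - 2) ** 2,
                   bounds=[(0, 4), (0, 4)]))
```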
Sparse networks of fixed or mobile wireless devices, in which most of the time there does not exist a complete path from a source to a destination, are often referred to as delay tolerant networks. The store-carry-and-forward principle, according to which messages can be stored at mobile nodes moving around the network area before being forwarded to the destination, allows the transmission of messages in such systems. In this paper, we use an analytical framework to study delay tolerant networks, based on asCSL model checking techniques. In particular we focus on the case in which fixed sensors exhibit on-off behavior to overcome battery capacity limitations.
['Michele Garetto', 'Marco Gribaudo']
Model Checking Techniques for the Performance Analysis of Delay Tolerant Networks with On-off Behavior
141,199
Conventional type systems specify interfaces in terms of values and domains. We present a light-weight formalism that captures the temporal aspects of software component interfaces. Specifically, we use an automata-based language to capture both input assumptions about the order in which the methods of a component are called, and output guarantees about the order in which the component calls external methods. The formalism supports automatic compatibility checks between interface models, and thus constitutes a type system for component interaction. Unlike traditional uses of automata, our formalism is based on an optimistic approach to composition, and on an alternating approach to design refinement. According to the optimistic approach, two components are compatible if there is some environment that can make them work together. According to the alternating approach, one interface refines another if it has weaker input assumptions, and stronger output guarantees. We show that these notions have game-theoretic foundations that lead to efficient algorithms for checking compatibility and refinement.
['Luca de Alfaro', 'Thomas A. Henzinger']
Interface automata
663,429
The application of emerging technologies such as the Internet of Things (IoT) and cloud computing has increased the popularity of smart homes, along with which large volumes of heterogeneous data have been generated by home entities. The representation, management and application of the continuously increasing amounts of heterogeneous data in the smart home data space have been critical challenges to the further development of the smart home industry. To this end, a scheme for ontology-based data semantic management and application is proposed in this paper. Based on a smart home system model abstracted from the perspective of implementing users’ household operations, a general domain ontology model is designed by defining the correlative concepts, and a logical data semantic fusion model is designed accordingly. Subsequently, to achieve high-efficiency ontology data query and update in the implementation of the data semantic fusion model, a relational-database-based ontology data decomposition storage method is developed by thoroughly investigating existing storage modes, and the performance is demonstrated using a group of elaborated ontology data query and update operations. Comprehensively utilizing the stated achievements, ontology-based semantic reasoning with a specially designed semantic matching rule is also studied in this work in an attempt to provide accurate and personalized home services, and the efficiency is demonstrated through experiments conducted on the developed testing system for user behavior reasoning.
['Ming Tao', 'Kaoru Ota', 'Mianxiong Dong']
Ontology-based data semantic management and application in IoT- and cloud-enabled smart homes
939,460
This paper is devoted to the following decremental problem. Initially, a graph and a distinguished subset of vertices, called the initial group, are given. This group is connected by an initial tree. The decremental part of the input is given by an on-line sequence of withdrawals of vertices of the initial group, removed on-line one after another. The goal is to keep each successive group connected by a tree, satisfying a quality constraint: the maximum distance (called the diameter) in each constructed tree must be kept in a given range compared to the best possible one. Under this quality constraint, our objective is to minimize the number of critical stages of the sequence of constructed trees. We call “critical” a stage where the current tree is rebuilt. We propose a strategy leading to at most O(log i) critical stages (i is the number of removed members). We also prove that there exist situations where Ω(log i) critical stages are necessary for any algorithm to maintain the quality constraint. Our strategy is thus worst-case optimal in order of magnitude.
['Nicolas Thibault', 'Christian Laforest']
An optimal rebuilding strategy for a decremental tree problem
301,165
Semantic computing and enterprise Linked Data have recently gained traction in enterprises. Although the concept of Enterprise Knowledge Graphs (EKGs) has meanwhile received some attention, a formal conceptual framework for designing such graphs has not yet been developed. By EKG we refer to a semantic network of concepts, properties, individuals and links representing and referencing foundational and domain knowledge relevant for an enterprise. Through the efforts reported in this paper, we aim to bridge the gap between the increasing need for EKGs and the lack of formal methods for realising them. We present a thorough study of the key concepts of knowledge graph design along with an analysis of the advantages and disadvantages of various design decisions. In particular, we distinguish between two polar approaches towards data fusion, i.e., the unified and the federated approach, describe their benefits and point out shortcomings.
['Galkin Mv', 'Sören Auer', 'Hak Lae Kim', 'Simon Scerri']
Integration Strategies for Enterprise Knowledge Graphs
693,784
Technologies and learning platform approaches are changing every day; one of the reasons is that organizations, enterprises and institutions are growing and producing more knowledge, while workers are becoming knowledge workers and need to adapt to the fast change and sharing of information. From past experience, it has been noted that strategies and pedagogical processes are tasks that can be created, enriched and boosted by the actors who participate in learning and training processes: course managers, teachers and students. The challenge posed to the different actors involved also accelerates the changes that have been happening in education and training, empowering a society based on knowledge. Thus, a new platform has been developed, Learning Roadmap Studio, which draws on eLearning 2.0 concepts and tends to promote more efficient learning and training. For teachers and course managers, it enables the creation, editing and deployment of learning roadmaps, which can be edited by students in order to communicate and share their knowledge among the learning community. For the students, the learning roadmap aims at promoting self-study and supervised study, endowing the pupil with the capabilities to find the relevant information and to capture the concepts in the study materials. The outcome is a stimulating learning process together with an organized management of those materials. The intention is not to create new learning management systems. Instead, the platform is presented as an application that enables the creation and editing of learning processes and strategies, giving primary relevance to teachers, instead of focusing on tools, features and contents.
['Miguel Oliveira', 'J. Artur Serrano']
Learning Roadmap Studio: eLearning 2.0 Based Web Platform for Deploying and Editing Learning Roadmaps
290,476
In this paper we study efficient triangular grid-based sensor deployment planning for coverage when sensor placements are perturbed by random errors around their corresponding grid vertices, where the random errors are modeled by uniform displacements inside error disks of a given finite radius. The average coverage percentage of the sensing field is derived as a function of the length of the grid tiles d, and the radius of the random error disks, R. Our expressions for the average coverage percentage are computed numerically and verified by Monte-Carlo simulations. The analytical methods can be used with other types of grid-based deployment with little modification, such as square grid-based deployment. One appealing feature of grid-based deployment that we observe is that the sensing coverage is rather resilient to random errors. Based on this observation and the quantitative results from our analysis, we discuss several approaches to efficient grid-based deployment planning for coverage and illustrate these through numerical examples.
['Glen Takahara', 'Kenan Xu', 'Hossam S. Hassanein']
Efficient Coverage Planning for Grid-Based Wireless Sensor Networks
499,630
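A Monte-Carlo sketch matching the stated setup: sensors near triangular-grid vertices with tile length d, displaced uniformly inside error disks of radius R, and coverage measured against random field points. Field size, sensing radius and sample counts are arbitrary choices for illustration.

```python
import numpy as np

def coverage_fraction(d, R_err, r_sense, trials=200, field=10.0, rng=None):
    """Monte-Carlo estimate of average coverage for a triangular grid.

    Sensors sit near the vertices of a triangular grid with tile length
    d, each displaced uniformly inside an error disk of radius R_err,
    and sense within radius r_sense.  Returns the mean fraction of
    random field points covered by at least one sensor.
    """
    rng = rng or np.random.default_rng()
    # Triangular-grid vertices covering the square field (with a margin).
    xs = np.arange(-d, field + d, d)
    ys = np.arange(-d, field + d, d * np.sqrt(3) / 2)
    grid = np.array([(x + (k % 2) * d / 2, y)
                     for k, y in enumerate(ys) for x in xs])
    covered = 0.0
    pts_per_trial = 500
    for _ in range(trials):
        theta = rng.uniform(0, 2 * np.pi, len(grid))
        rad = R_err * np.sqrt(rng.uniform(0, 1, len(grid)))  # uniform in disk
        sensors = grid + np.c_[rad * np.cos(theta), rad * np.sin(theta)]
        pts = rng.uniform(0, field, size=(pts_per_trial, 2))
        dist2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
        covered += (dist2.min(axis=1) <= r_sense ** 2).mean()
    return covered / trials

print(coverage_fraction(d=1.0, R_err=0.3, r_sense=0.6))
```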
Motivation: Signaling pathways are dynamic events that take place over a given period of time. In order to identify these pathways, expression data over time are required. Dynamic Bayesian network (DBN) is an important approach for predicting the gene regulatory networks from time course expression data. However, two fundamental problems greatly reduce the effectiveness of current DBN methods. The first problem is the relatively low accuracy of prediction, and the second is the excessive computational time. Results: In this paper, we present a DBN-based approach with increased accuracy and reduced computational time compared with existing DBN methods. Unlike previous methods, our approach limits potential regulators to those genes with either earlier or simultaneous expression changes (up- or down-regulation) in relation to their target genes. This allows us to limit the number of potential regulators and consequently reduce the search space. Furthermore, we use the time difference between the initial change in the expression of a given regulator gene and its potential target gene to estimate the transcriptional time lag between these two genes. This method of time lag estimation increases the accuracy of predicting gene regulatory networks. Our approach is evaluated using time-series expression data measured during the yeast cell cycle. The results demonstrate that this approach can predict regulatory networks with significantly improved accuracy and reduced computational time compared with existing DBN approaches. Availability: The programs described in this paper can be obtained from the corresponding author upon request. Contact: [email protected]
['Min Zou', 'Suzanne D. Conzen']
A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data
191,115
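A toy sketch of the pruning idea described above: keep only candidate regulators whose first expression change is no later than the target's, and take the gap between the two change points as the transcriptional time lag. The threshold, data layout and names here are illustrative assumptions, not the authors' implementation.
```python
def first_change(series, threshold=1.0):
    """Index of the first time point whose absolute change from the
    initial expression level exceeds the threshold, or None."""
    base = series[0]
    for t, v in enumerate(series):
        if abs(v - base) > threshold:
            return t
    return None

def candidate_regulators(expr, target, threshold=1.0):
    """Keep only genes whose first up-/down-regulation occurs no later
    than the target's, and estimate the transcriptional time lag as the
    difference of the two change points."""
    t_target = first_change(expr[target], threshold)
    out = {}
    for gene, series in expr.items():
        if gene == target:
            continue
        t_gene = first_change(series, threshold)
        if t_gene is not None and t_target is not None and t_gene <= t_target:
            out[gene] = t_target - t_gene  # estimated time lag
    return out

expr = {"g1": [0, 0.2, 2.1, 2.3], "g2": [0, 1.6, 1.8, 1.9], "target": [0, 0.1, 0.3, 1.8]}
print(candidate_regulators(expr, "target"))  # {'g1': 1, 'g2': 2}
```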
We study a natural process for allocating m balls into n bins that are organized as the vertices of an undirected graph G. Balls arrive one at a time. When a ball arrives, it first chooses a vertex u in G uniformly at random. Then the ball performs a local search in G starting from u until it reaches a vertex with locally minimum load, where the ball is finally placed. Then the next ball arrives and this procedure is repeated. For the case m = n, we give an upper bound for the maximum load on graphs with bounded degrees. We also propose the study of the cover time of this process, which is defined as the smallest m so that every bin has at least one ball allocated to it. We establish an upper bound for the cover time on graphs with bounded degrees. Our bounds for the maximum load and the cover time are tight when the graph is vertex transitive or sufficiently homogeneous. We also give upper bounds for the maximum load when m > n.
['Karl Bringmann', 'Thomas Sauerwald', 'Alexandre Stauffer', 'He Sun']
Balls into bins via local search: Cover time and maximum load
495,937
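The allocation process in the abstract above is easy to simulate; a minimal sketch follows. The abstract does not pin down tie-breaking among lower-loaded neighbors, so this sketch picks one uniformly at random.
```python
import random

def allocate(graph, m, rng=random.Random(0)):
    """Allocate m balls into the bins (vertices) of an undirected graph.
    Each ball picks a uniform random vertex, then walks to a neighbor of
    strictly smaller load until it sits at a local load minimum."""
    load = {v: 0 for v in graph}
    vertices = list(graph)
    for _ in range(m):
        u = rng.choice(vertices)
        while True:
            better = [w for w in graph[u] if load[w] < load[u]]
            if not better:
                break  # u is a local load minimum
            u = rng.choice(better)
        load[u] += 1
    return load

# Illustrative run on a cycle of n = 8 vertices with m = n balls.
n = 8
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
loads = allocate(cycle, n)
print(max(loads.values()))  # maximum load, small w.h.p. per the abstract
```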
A significant aspect of cloud computing is the scheduling of a large number of real-time concurrent workflow instances. Most existing scheduling algorithms are designed for a single complex workflow instance. This study examined instance-intensive workflows bounded by SLA constraints, including user-defined deadlines. The scheduling method for these workflows with dynamic workloads should be able to handle changing conditions and maximize the utilization rate of the cloud resources. The study proposes an adaptive two-stage deadline-constrained scheduling (ATSDS) strategy that considers run-time circumstances of workflows in the cloud environment. The stages are workflow fragmentation and resource allocation. In the first stage, the workflows are dynamically fragmented according to cloud run-time circumstances (number of Virtual Machines (VMs) and average available bandwidth). In the second stage, using the workflow deadline and the capacity of the VMs, the workflow fragments created are allocated to the VMs to be executed. The simulation results show improvements in terms of workflow completion time, number of messages exchanged, percentage of workflows that meet the deadline and VM usage cost compared to other approaches.
['Reihaneh Khorsand', 'Faramarz Safi-Esfahani', 'Naser Nematbakhsh', 'Mehran Mohsenzade']
ATSDS: adaptive two-stage deadline-constrained workflow scheduling considering run-time circumstances in cloud computing environments
949,196
Temporal logic has been successfully used for modeling and analyzing the behavior of reactive and concurrent systems. One shortcoming of (standard) temporal logic is that it is inadequate for real-time applications, because it only deals with qualitative timing properties. This is overcome by metric temporal logics which offer a uniform logical framework in which both qualitative and quantitative timing properties can be expressed by making use of a parameterized operator of relative temporal realization. In this paper we deal with completeness issues for basic systems of metric temporal logic; despite their relevance, such issues have been ignored or only partially addressed in the literature. We view metric temporal logics as two-sorted formalisms having formulae ranging over time instants and parameters ranging over an (ordered) abelian group of temporal displacements. We first provide an axiomatization of the pure metric fragment of the logic, and prove its soundness and completeness. Then, we show how to obtain the metric temporal logic of linear orders by adding an ordering over displacements. Finally, we consider general metric temporal logic allowing quantification over algebraic variables and free mixing of algebraic formulae and temporal propositional symbols.
['Angelo Montanari', 'de Rijke']
Two-sorted metric temporal logics
997,854
Making a strategic decision is one of the important matters, especially in a military organization such as the Indonesian Air Force (IDAF). There are many cases which need an accurate and quick decision in the IDAF, such as selecting the most representative weaponry systems to replace the old ones. In this paper we propose the use of the A3S information-inferencing fusion method for selecting a new light transport aircraft from four prospective aircraft candidates. The result of the selection will be used by the decision maker to select the best light transport aircraft candidate as the new weaponry system. We use original data from the four aircraft candidates. After computation using the A3S method, we found that aircraft “A” is the most probable candidate to replace the old aircraft “X”, with a Degree of Certainty (DoC) of 27.8%.
['Arwin Datumaya Wahyudi Sumari', 'Adang Suwandi Ahmad', 'Aciek Ida Wuryandari', 'Jaka Sembiring']
Strategic decision making based on A3S information-inferencing fusion method
78,864
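The abstract does not spell out the A3S computation itself, so the following is only a generic score-fusion placeholder that yields a Degree-of-Certainty-style ranking over candidates; the criteria, scores and function name are hypothetical.
```python
def degree_of_certainty(candidates):
    """Fuse per-criterion scores into one normalized figure per
    candidate so that all DoC values sum to 100%. Purely illustrative:
    this is not the actual A3S inferencing-fusion computation."""
    totals = {name: sum(scores) for name, scores in candidates.items()}
    grand = sum(totals.values())
    return {name: round(100.0 * t / grand, 1) for name, t in totals.items()}

# Hypothetical per-criterion scores (payload, range, cost, availability).
candidates = {"A": [8, 9, 7, 9], "B": [7, 7, 8, 7], "C": [6, 8, 6, 7], "D": [7, 6, 7, 6]}
print(degree_of_certainty(candidates))  # highest DoC wins the selection
```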
In open multi-agent systems (MAS), interaction between agents is uncertain. Trust plays an important role in deciding with whom to interact and how to interact. Current trust models have solved the problem of how to quantify, deduce and evaluate trust, but challenges remain. This paper presents the FIRE+ trust model, which uses the utility loss of issues to evaluate trust values. With a matrix representation, it efficiently increases the rating information. By introducing D-S theory and WMA, it can also handle uncertain information and dishonest witnesses. Meanwhile, our approach introduces the concept of information-amount to derive each issue's weight so as to obtain the overall trust value of the target agent. It also takes witness reliability into account. Experiments show that FIRE+ is more effective than other models.
['Ping-Ping Lu', 'Bin Li', 'Mao-Lin Xing', 'Liang Li']
D-S Theory-based Trust Model FIRE^+ in Multi-agent Systems
268,537
Medical imaging applications use images coming from different sources, such as magnetic resonance imaging (MRI), computed tomography (CT) and positron emission tomography (PET), to generate 3D data. Starting from these volumetric data, applications reconstruct 3D models of anatomical structures which can be manipulated and analyzed. In this paper we present a new approach for the visualization of and interaction with volumetric datasets in a fully immersive environment. It allows the reconstructed models to be handled directly within the virtual scene; in particular, a technique is described for outlining the Volume Of Interest (VOI) in a three-dimensional dataset for visual interactive inspection and manipulation of the organ of interest.
['G. De Pietro', 'Luigi Gallo', 'Ivana Marra', 'Carmela Vanzanella']
A New Approach For Handling 3D Medical Data In An Immersive Environment
174,353
Continuous phase modulation schemes, such as Gaussian minimum-shift keying (GMSK), are frequently used with limiter-discriminator (LD) detectors. This paper studies how side information derived from the signal envelope can enhance the performance of a Viterbi algorithm (VA)-based receiver operating on the LD output of a GMSK scheme. By considering the joint probability density function of envelope and frequency, different approximations yield different novel metrics for the VA, using the three variables (envelope, its derivative, and frequency error) in different combinations. Simulation results confirm that such an envelope-aided VA gives significant performance gains, and that envelope information complements the frequency information output by the LD detector in frequency-selective fading channels.
['Ramon Sanchez-Perez', 'Subbarayan Pasupathy', 'Francisco Javier Casajús-Quirós']
Envelope-aided Viterbi receivers for GMSK signals with limiter-discriminator detection
64,407
This paper characterizes the dynamic deformation of soft fingertips applied to multi-fingered object manipulations of humanoid robots and analyzes its significance for multi-fingered operations. To this end, we present a dynamic model for soft-fingered object manipulations. For analyzing the control performance of soft-fingered manipulating tasks, we present a dynamic manipulation control scheme that considers the deformation effects of soft fingertips. The simulation results validate the influence of the dynamic deformation of soft fingertips during manipulation in a two-fingered operation, and will enable us to recognize its significance for precise manipulating tasks.
['Byoung-Ho Kim', 'Shinichi Hirai']
Characterizing the dynamic deformation of soft fingertips and analyzing its significance for multi-fingered operations
253,368
Using BlueJ to teach Java (poster session)
['Dianne Hagan']
Using BlueJ to teach Java (poster session)
478,766
Congestion management is likely to become a critical issue in interconnection networks, as increasing power consumption and cost concerns lead to improvements in the efficiency of network resources. In previous configurations, networks were usually oversized and underutilized. In a smaller network, however, contention is more likely to occur and blocked packets cause head-of-line (HoL) blocking among the rest of the packets, spreading congestion quickly. The best-known solution to HoL blocking is Virtual Output Queues (VOQs). However, the cost of implementing VOQs increases quadratically with the number of output ports in the network, making it impractical. The situation is aggravated when several priorities and/or Quality of Service (QoS) levels must be supported. Therefore, a more scalable and cost-effective solution is required to reduce or eliminate HoL blocking. In this paper, we present a family of methodologies, referred to as Destination-Based Buffer Management (DBBM), to reduce/eliminate the HoL blocking effect on interconnection networks. DBBM efficiently uses the resources (mainly memory queues) of the network. These methodologies are comprehensively evaluated in terms of throughput, scalability, and fairness. Results show that using the DBBM strategy, with a reduced number of queues at each switch, it is possible to achieve roughly the same throughput as the VOQ mechanism. Moreover, all of the proposed strategies are designed in such a way that they can be used in any switch architecture. We compare DBBM with RECN, a sophisticated mechanism that eliminates HoL blocking in congestion situations. Our mechanism is able to achieve almost the same performance with very low logic requirements (in contrast with RECN).
['T. Nachiondo', 'Jose Flich', 'José Duato']
Buffer Management Strategies to Reduce HoL Blocking
473,275
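The destination-based queue mapping at the heart of DBBM can be sketched as follows; the modulo mapping is one simple member of the family, and the class and method names are illustrative, not from the paper.
```python
from collections import deque

class DBBMSwitchPort:
    """Input port holding a fixed set of queues; packets are mapped to
    a queue by destination, so a blocked destination only stalls the
    queue it maps to instead of the whole port, reducing HoL blocking."""
    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, packet, dest):
        # Destination-based mapping: far fewer queues than destinations.
        self.queues[dest % len(self.queues)].append(packet)

    def schedulable_heads(self, blocked_dests):
        """Heads of queues whose destinations are not currently blocked;
        these can be forwarded this cycle."""
        return [q[0] for q in self.queues
                if q and q[0][1] not in blocked_dests]

port = DBBMSwitchPort(num_queues=4)
for dest in [0, 5, 9, 2, 6]:
    port.enqueue((f"pkt->{dest}", dest), dest)
print(port.schedulable_heads(blocked_dests={5}))  # pkt->0 and pkt->2
```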
Service-oriented computing, cloud computing, and web services composition mark cornerstones of a paradigm shift in software systems engineering. The general new idea is to use as much as possible services that are made available by others, mostly disseminated via the web. In this paper, we present an abstract model for clouds as federations of services together with a specification of semantics and quality characteristics. For the services as such we adopt the abstract model of abstract state services, which is based on views on some hidden database layer that are equipped with service operations. For the semantics we adopt types for in- and output, pre- and post-conditions, and a description of functionality within an operations ontology. In addition, quality characteristics capture performance, costs, availability, etc. On the basis of this model of clouds, users may conduct a (web) search for usable services, extract service components, and recompose these components. The quality characteristics can be used to optimise the selection of usable services.
['Hui Ma', 'Klaus-Dieter Schewe', 'Qing Wang']
An abstract model for service provision, search and composition
364,473
With the advent of cloud and virtualization technologies and the integration of various computer communication technologies, today's computing environments can provide virtualized high-quality services. Network traffic has also continuously increased, with remarkable growth. Software defined networking/network function virtualization (SDN/NFV) enhances infrastructure agility, so network operators and service providers are able to program their own network functions on a vendor-independent hardware substrate. However, in order for SDN/NFV to realize a profit, it must provide new resource sharing and monitoring procedures among the regionally distributed and virtualized computers. In this paper, we propose an NFV monitoring architecture with a practical measuring framework for network performance measurement. We also propose an end-to-end connectivity support platform across whole SDN/NFV networks, an issue that has not been fully addressed.
['Hyuncheol Kim', 'Seunghyun Yoon', 'Hongseok Jeon', 'Wonhyuk Lee', 'Seungae Kang']
Service platform and monitoring architecture for network function virtualization (NFV)
892,674
Integrating behavioural and cognitive psychology: A modern categorization theoretical approach.
['Darren J. Edwards']
Integrating behavioural and cognitive psychology: A modern categorization theoretical approach.
570,443
Consistency of preferences is related to rationality, which is associated with the transitivity property. Many properties suggested to model transitivity of preferences are inappropriate for reciprocal preference relations. In this paper, a functional equation is put forward to model the “cardinal consistency in the strength of preferences” of reciprocal preference relations. We show that under the assumptions of continuity and monotonicity properties, the set of representable uninorm operators is characterized as the solution to this functional equation. Cardinal consistency with the conjunctive representable cross ratio uninorm is equivalent to Tanino's multiplicative transitivity property. Because any two representable uninorms are order isomorphic, we conclude that multiplicative transitivity is the most appropriate property for modeling cardinal consistency of reciprocal preference relations. Results toward the characterization of this uninorm consistency property based on a restricted set of (n-1) preference values, which can be used in practical cases to construct perfect consistent preference relations, are also presented.
['Francisco Chiclana', 'Enrique Herrera-Viedma', 'Sergio Alonso', 'Francisco Herrera']
Cardinal Consistency of Reciprocal Preference Relations: A Characterization of Multiplicative Transitivity
7,903
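The abstract's closing remark, constructing a perfectly consistent relation from (n-1) preference values, admits a compact sketch under Tanino's multiplicative transitivity; the chain-of-adjacent-values input encoding below is an assumption for illustration.
```python
def complete_relation(chain):
    """Build a fully multiplicatively-transitive reciprocal preference
    relation from the n-1 values p[i][i+1] given in `chain`, using
    Tanino's condition p_ik/(1-p_ik) = p_ij/(1-p_ij) * p_jk/(1-p_jk).
    Values must lie strictly in (0, 1)."""
    n = len(chain) + 1
    p = [[0.5] * n for _ in range(n)]  # diagonal: indifference
    odds = lambda x: x / (1.0 - x)
    for i in range(n):
        for k in range(i + 1, n):
            o = 1.0
            for j in range(i, k):  # chain the odds along i -> i+1 -> ... -> k
                o *= odds(chain[j])
            p[i][k] = o / (1.0 + o)
            p[k][i] = 1.0 - p[i][k]  # reciprocity
    return p

p = complete_relation([0.6, 0.7])
print(round(p[0][2], 3))  # 0.778: odds 1.5 * 2.333... = 3.5 -> 3.5/4.5
```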
Toolglass™ widgets are new user interface tools that can appear, as though on a transparent sheet of glass, between an application and a traditional cursor. They can be positioned with one hand while the other positions the cursor. The widgets provide a rich and concise vocabulary for operating on application objects. These widgets may incorporate visual filters, called Magic Lens™ filters, that modify the presentation of application objects to reveal hidden information, to enhance data of interest, or to suppress distracting information. Together, these tools form a see-through interface that offers many advantages over traditional controls. They provide a new style of interaction that better exploits the user’s everyday skills. They can reduce steps, cursor motion, and errors. Many widgets can be provided in a user interface, by designers and by users, without requiring dedicated screen space. In addition, lenses provide rich context-dependent feedback and the ability to view details and context simultaneously. Our widgets and lenses can be combined to form operation and viewing macros, and can be used over multiple applications. CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - interaction techniques; H.5.2 [Information Interfaces and Presentation]: User Interfaces - interaction styles; I.3.3 [Computer Graphics]: Picture/Image Generation - viewing algorithms; I.3.4 [Computer Graphics]: Graphics Utilities - graphics editors
['Eric A. Bier', 'Maureen C. Stone', 'Kenneth A. Pier', 'William Buxton', 'Tony DeRose']
Toolglass and magic lenses: the see-through interface
330,425
This paper describes a new architecture and implementation of an adaptive streaming system (e.g., television over IP, video on demand) based on cross-layer interactions. At the center of the proposed architecture is the meet in the middle concept involving both bottom-up and top-down cross layer interactions. Each streaming session is entirely controlled at the RTP layer where we maintain a rich context that centralizes the collection of (i) instantaneous network conditions measured at the underlying layers (i.e.: link, network, and transport layers) and (ii) user- and terminal-triggered events that impose new real-time QoS adaptation strategies. Thus, each active multimedia session is tied to a broad range of parameters, which enable it to coordinate the QoS adaptation throughout the protocol layers and thus eliminating the overhead and preventing counter-productiveness among separate mechanisms implemented at different layers. The MPEG-21 framework is used to provide a common support for implementing and managing the end-to-end QoS of audio/video streams. Performance evaluations using peak signal to noise ratio (PSNR) and structural similarity index (SSIM) objective video quality metrics show the benefits of using the proposed Meet In the Middle cross-layer design compared to traditional media delivery approaches.
['Ismail Djama', 'Toufik Ahmed', 'Abdelhamid Nafaa', 'Raouf Boutaba']
Meet In the Middle Cross-Layer Adaptation for Audiovisual Content Delivery
385,806
This paper presents a methodology for creating 3D visualization from discrete event simulation of electronics assembly processes. The methodology connects discrete event simulation directly to 3D animation with its novel methods of converting discrete simulation results into animation events that trigger 3D animation. It also develops a 3D animation framework for the visualization of discrete simulation results. This framework supports the reuse of both existing 3D animation objects and behaviour components, and allows the rapid development of new 3D animation objects by users with no special knowledge of computer graphics. The methodology has been implemented with software component technology. Examples are presented to demonstrate the efficiency of the proposed methodology.
['Yongmin Zhong']
A Virtual Environment for Visualization of Electronics Assembly Processes
498,475
Compact polarimetric (CP) synthetic aperture radar (SAR) is a potential tool for operationally monitoring oil slicks and oil platforms because of its large-coverage swath and abundant polarimetric scattering information. In this study, we use C-band RADARSAT-2 quad-polarization (quad-pol) SAR data to simulate CP covariance matrix elements and then construct pseudo quad-pol scattering coefficients by utilizing two different CP reconstruction algorithms. We develop an unsupervised classification method to discriminate oil slicks and platforms from clean ocean waters, using the relative phase, a logical scalar threshold that separates odd and even scattering events. The relative phases are estimated with the reconstructed co- and cross-polarization backscatters; they are positive over clean ocean surfaces, where odd scattering is dominant, but negative for oil platforms and oil slick-covered areas associated with even scattering.
['Biao Zhang', 'Xiaofeng Li', 'William Perrie', 'Oscar Garcia-Pineda']
Marine oil slick and platform detection by compact polarimetric synthetic aperture radar
931,654
In this paper, we present a new method for analysis of musical structure that captures local prediction and global repetition properties of audio signals in one information processing framework. The method is motivated by a recent work in music perception where machine features were shown to correspond to human judgments of familiarity and emotional force when listening to music. Using a notion of information rate in a model-based framework, we develop a measure of mutual information between past and present in a time signal and show that it consists of two factors: a prediction property related to data statistics within an individual block of signal features, and a repetition property based on differences in model likelihood across blocks. The first factor, when applied to spectral representation of audio signals, is known as spectral anticipation, and the second factor is known as recurrence analysis. We present algorithms for estimation of these measures and create a visualization that displays their temporal structure in musical recordings. Considering these features as a measure of the amount of information processing that a listening system performs on a signal, information rate is used to detect interest points in music. Several musical works with different performances are analyzed in this paper, and their structure and interest points are displayed and discussed. Extensions of this approach towards a general framework of characterizing machine listening experience are suggested.
['Shlomo Dubnov']
Unified View of Prediction and Repetition Structure in Audio Signals With Application to Interest Point Detection
195,528
A general architecture for distributed third-generation surveillance systems is discussed. In particular an approach for selecting the optimal distribution of intelligence (task allocation) is presented. The introduction of recognition tasks which can cause the interruption of the processing and transmission flow is discussed. Experimental results over a simulated system illustrate the presented approach for optimal distribution of intelligence.
['Franco Oberti', 'Giancarlo Ferrari', 'Carlo S. Regazzoni']
Recognition driven burst transmissions in distributed third generation surveillance systems
240,192
We present a novel processor architecture designed specifically for use in low-power wireless sensor-network nodes. Our sensor network asynchronous processor (SNAP/LE) is based on an asynchronous data-driven 16-bit RISC core with an extremely low-power idle state, and a wakeup response latency on the order of tens of nanoseconds. The processor instruction set is optimized for sensor-network applications, with support for event scheduling, pseudo-random number generation, bitfield operations, and radio/sensor interfaces. SNAP/LE has a hardware event queue and event coprocessors, which allow the processor to avoid the overhead of operating system software (such as task schedulers and external interrupt servicing), while still providing a straightforward programming interface to the designer. The processor can meet performance levels required for data monitoring applications while executing instructions with tens of picojoules of energy.We evaluate the energy consumption of SNAP/LE with several applications representative of the workload found in data-gathering wireless sensor networks. We compare our architecture and software against existing platforms for sensor networks, quantifying both the software and hardware benefits of our approach.
['Virantha Ekanayake', 'Clinton W. Kelly', 'Rajit Manohar']
An ultra low-power processor for sensor networks
445,188
This article presents a comprehensive path-planning method for lunar and planetary exploration rovers. In this method, two new elements are introduced as evaluation indices for path planning: 1) one determined by the rover design and 2) one derived from the target environment. These are defined as the rover's internal and external elements, respectively. In this article, the rover's locomotion mechanism and insolation (i.e., shadow) conditions were considered to be the two elements that ensure the rover's safety and energy, and the influences of these elements on path planning were described. To examine the influence of the locomotion mechanism on path planning, experiments were performed using track and wheel mechanisms, and the motion behaviors were modeled. The planned paths of the tracked and wheeled rovers were then simulated based on their motion behaviors. The influence of the insolation condition was considered through path-planning simulations conducted using various lunar latitudes and times. The simulation results showed that the internal element can be used as an evaluation index to plan a safe path that corresponds to the traveling performance of the rover's locomotion mechanism. The path derived for the tracked rover was found to be straighter than that derived for the wheeled rover. The simulation results also showed that path planning using the external element as an additional index enhances the power generated by solar panels under various insolation conditions. This path-planning method was found to have a large impact on the amount of power generated in the morning/evening and at high-latitude regions relative to the daytime and low-latitude regions on the moon. These simulation results suggest the effectiveness of the proposed path-planning method.
['Masataku Sutoh', 'Masatsugu Otsuki', 'Sachiko Wakabayashi', 'Takeshi Hoshino', 'Tatsuaki Hashimoto']
The Right Path: Comprehensive Path Planning for Lunar Exploration Rovers
159,335
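One way to read the two evaluation indices above is as blended edge costs in a shortest-path search. The sketch below runs Dijkstra over a grid whose step cost mixes an internal term (locomotion risk) and an external term (shadow penalty); the cost maps, weights and blending rule are illustrative assumptions, not the paper's calibrated models.
```python
import heapq

def plan_path(cost_risk, cost_shadow, start, goal, alpha=0.5):
    """Dijkstra over a grid; each step pays a base cost of 1 plus a
    blend of locomotion risk (internal element) and shadow penalty
    (external element) at the destination cell."""
    rows, cols = len(cost_risk), len(cost_risk[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = alpha * cost_risk[nr][nc] + (1 - alpha) * cost_shadow[nr][nc]
                nd = d + 1.0 + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

risk = [[0, 0, 5], [0, 2, 5], [0, 0, 0]]    # hypothetical slope/slip risk
shade = [[0, 3, 0], [0, 3, 0], [0, 0, 0]]   # hypothetical shadow penalty
print(plan_path(risk, shade, (0, 0), (2, 2)))  # detours around costly cells
```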
With the multi-fold development in communication technology, there has been rapid growth in the consumption of multimedia services using mobile devices. In particular, much research has been done to provide multimedia services to mobile device users using a 3G/WLAN dual mode. All users download contents from content providers through cellular networks and can share their contents with their neighbors. This may lead to a situation where the cost of receiving multimedia contents is reduced owing to contents sharing. In other words, users effectively reduce the cost of receiving contents as they share some contents with neighbors. However, we believe that selfish users (or users with limited resources) are not willing to cooperate or share their contents with other users. In this paper, we propose an incentive scheme that gives the benefit of contents sharing to users in a 3G/WLAN dual-mode network. There are two types of users: premium users and ordinary users. Premium users can get high quality contents at the expense of spending their own money and resources of power and bandwidth, while ordinary users receive plain quality contents for free or by paying a small amount of money. Our incentive mechanism has the content provider offer each premium user a discounted price for downloading high quality contents. Through our approach, each premium user will receive an incentive of a discounted price for receiving high quality contents in proportion to the contribution to the content provider. By doing so, the content provider can increase its total utility too. Our simulation results confirm that our proposed incentive scheme performs well.
['Heesu Im', 'Yugyung Lee', 'Saewoong Bahk']
Incentive-Driven Content Distribution in Wireless Multimedia Service Networks
255,154
Modal logic has a good claim to being the logic of choice for describing the reactive behaviour of systems modelled as coalgebras. Logics with modal operators obtained from so-called predicate liftings have been shown to be invariant under behavioural equivalence. Expressivity results stating that, conversely, logically indistinguishable states are behaviourally equivalent depend on the existence of separating sets of predicate liftings for the signature functor at hand. Here, we provide a classification result for predicate liftings which leads to an easy criterion for the existence of such separating sets, and we give simple examples of functors that fail to admit expressive normal or monotone modal logics, respectively, or in fact an expressive (unary) modal logic at all. We then move on to polyadic modal logic, where modal operators may take more than one argument formula. We show that every accessible functor admits an expressive polyadic modal logic. Moreover, expressive polyadic modal logics are, unlike unary modal logics, compositional.
['Lutz Schröder']
Expressivity of coalgebraic modal logic: the limits and beyond
502,065
Secure computation consists of protocols for secure arithmetic: secret values are added and multiplied securely by networked processors. The striking feature of secure computation is that security is maintained even in the presence of an adversary who corrupts a quorum of the processors and who exercises full, malicious control over them. One of the fundamental primitives at the heart of secure computation is secret-sharing. Typically, the required secret-sharing techniques build on Shamir's scheme, which can be viewed as a cryptographic twist on the Reed-Solomon error correcting code. In this work we further the connections between secure computation and error correcting codes. We demonstrate that threshold secure computation in the secure channels model can be based on arbitrary codes. For a network of size n, we then show a reduction in communication for secure computation amounting to a multiplicative logarithmic factor (in n) compared to classical methods for small, e.g., constant size fields, while tolerating $t < (1/2 - \epsilon)n$ players to be corrupted, where $\epsilon > 0$ can be arbitrarily small. For large networks this implies considerable savings in communication. Our results hold in the broadcast/negligible error model of Rabin and Ben-Or, and complement results from CRYPTO 2006 for the zero-error model of Ben-Or, Goldwasser and Wigderson (BGW). Our general theory can be extended so as to encompass those results from CRYPTO 2006 as well. We also present a new method for constructing high information rate ramp schemes based on arbitrary codes, and in particular we give a new construction based on algebraic geometry codes.
['Hao Chen', 'Ronald Cramer', 'Shafi Goldwasser', 'Robbert de Haan', 'Vinod Vaikuntanathan']
Secure Computation from Random Error Correcting Codes
186,807
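For readers unfamiliar with the primitive the abstract generalizes: a minimal sketch of Shamir secret sharing over a prime field (the Reed-Solomon view mentioned above), not the paper's arbitrary-code construction; the prime and parameters are illustrative.
```python
import random

PRIME = 2**31 - 1  # a Mersenne prime; any prime larger than the secret works

def share(secret, t, n, rng=random.Random(1)):
    """Split `secret` into n shares with threshold t: any t shares
    reconstruct it, fewer reveal nothing. Shares are points on a random
    degree-(t-1) polynomial whose constant term is the secret."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # 123456789 from any 3 of the 5 shares
```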
We analyze a time-fenced planning system where both expediting and canceling are allowed inside the time fence, but only with a penalty. Previous research has allowed only for the case of expediting inside the time fence and has overlooked the opportunity for additional improvement by also allowing for cancelations. Some researchers also have found that for traditional time-fenced models, the choice of the more complex stochastic linear programming approach versus the simpler deterministic approach is not justified. We formulate both the deterministic and stochastic problems as dynamic programs and develop analytic bounds that limit the search space (and reduce the complexity) of the stochastic approach. We run extensive simulations and numerical experiments to understand better the benefit of adding cancelation and to compare the performance of the stochastic model with the more common deterministic model when they are employed as heuristics in a rolling-horizon setting. Across all experiments, we find that allowing expediting (but not canceling) lowered costs by 11.3% using the deterministic approach, but costs were reduced by 27.8% if both expediting and canceling are allowed. We find that the benefit of using the stochastic model versus the deterministic model varies widely across demand distributions and levels of recourse—the ratio of stochastic average costs to deterministic average costs ranged from 43.3% to 78.5%.
['Gregory D. DeYong', 'Kyle Cattani']
Fenced in? Stochastic and deterministic planning models in a time-fenced, rolling-horizon scheduling system
689,520
Multiple-input multiple-output (MIMO) relays can improve cell coverage and data throughput for wireless networks. The key challenge for the success of MIMO relay networks is effectively managing the intersymbol interference (ISI) and multiantenna interference (MAI) in multipath channels. In this paper, equalize-and-forward (EF) relaying strategies are employed to mitigate the interference by jointly optimizing equalizer weights and power allocation for dual-hop MIMO relay networks. Two scenarios with different channel state information (CSI) knowledge are investigated: 1) full CSI at the relays and 2) only backward CSI at the relays and the destination. By considering CSI availability and using the minimum-mean-square-error (MMSE) criterion, iterative algorithms are proposed for the joint design of equalizer weights and power allocation to resolve the interference problem. We then extend the design to a more general case, in which the direct link between the source and the destination is taken into account. Furthermore, two relay selection algorithms based on allocated power and MSE performance are investigated for the two scenarios, which attain a performance that is comparable to that of cases with brute-force search or without relay selection. The design framework can capture the impact of the available CSI at the relays and the destination on the performance of MIMO multirelay networks with multipath receptions.
['Keshav Singh', 'Meng-Lin Ku', 'Jia-Chin Lin']
Joint Power Allocation, Equalization, and Relay Selection for MIMO Relay Networks With Multipath Receptions
854,659
We describe a wavelet-based series expansion for wide-sense stationary processes. The expansion coefficients are uncorrelated random variables, a property similar to that of a Karhunen-Loeve (KL) expansion. Unlike the KL expansion, however, the wavelet-based expansion does not require the solution of the eigen equation and does not require that the process be time-limited. This expansion also has advantages over Fourier series, which is often used as an approximation to the KL expansion, in that it completely eliminates correlation and that the computations for its coefficients are more stable over large time intervals. The basis functions of this expansion can be obtained easily from wavelets of the Lemarié-Meyer (1990) type and the power spectral density of the process. Finally, the expansion can be extended to some nonstationary processes, such as those with wide-sense stationary increments.
['Jun Zhang', 'Gilbert Walter']
A wavelet-based KL-like expansion for wide-sense stationary random processes
391,243
A fully submersible force transducer system for use with isolated heart cells has been implemented using microelectromechanical systems (MEMS) technology. By using integrated circuit fabrication techniques to make mechanical as well as electrical components, the entire low-mass transducer is only a few cubic millimeters in size and is of higher fidelity (≈100 nN and 13.3 kHz in solution) than previously available. When chemically activated, demembranated single cells attached to the device contract and slightly deform a strain gauge whose signal is converted to an amplified electrical output. When integrated with a video microscope, the system is capable of optical determination of contractile protein striation periodicity and simultaneous measurement of heart cell forces in the 100-nN to 50-µN range. The average measured maximal force was F_max = 5.77 ± 2.38 µN. Normalizing for the cell's cross-sectional area, F_max/area was 14.7 ± 7.7 mN/mm². Oscillatory stiffness data at frequencies up to 1 kHz has also been recorded from relaxed and contracted cells. This novel MEMS force transducer system permits higher fidelity measurements from cardiac myocytes than available from standard macro-sized transducers.
['Gisela Lin', 'R. E. Palmer', 'Kristofer S. J. Pister', 'Kenneth P. Roos']
Miniature heart cell force transducer system implemented in MEMS technology
12,194
Inverse Eigenvalue Problem for Real Five-Diagonal Matrices with Proportional Relation
['Mingxing Tian', 'Zhibin Li']
Inverse Eigenvalue Problem for Real Five-Diagonal Matrices with Proportional Relation
644,883
Philippe Flajolet, mathematician and computer scientist extraordinaire, the father of analytic combinatorics, suddenly passed away on 22 March 2011, in the prime of his career. He is celebrated for opening new lines of research in the analysis of algorithms, developing powerful new methods, and solving difficult open problems. His research contributions will have an impact for generations, and his approach to research, based on curiosity, discriminating taste, broad knowledge and interests, intellectual integrity, and a genuine sense of camaraderie, will serve as an inspiration to those who knew him, for years to come.
['Bruno Salvy', 'Bob Sedgewick', 'Michèle Soria', 'Wojciech Szpankowski', 'Brigitte Vallée']
Philippe Flajolet, 1 December 1948 - 22 March 2011
442,951
Unconditional security through quantum uncertainty
['Pramode K. Verma']
Unconditional security through quantum uncertainty
886,291
Static always-on wireless sensor networks (WSNs) are affected by the energy sink-hole problem, where sensors nearer a central gathering node, called the sink, suffer from significant depletion of their battery power (or energy). It has been shown through analysis and simulation that it is impossible to guarantee uniform energy depletion of all the sensors in static uniformly distributed always-on WSNs with constant data reporting to the sink when the sensors use their nominal communication range to transmit data to the sink. We prove that the energy sink-hole problem can be solved provided that the sensors adjust their communication ranges. This solution, however, imposes a severe restriction on the size of a sensor field. To overcome this limitation, we propose a sensor deployment strategy based on energy heterogeneity with a goal that all the sensors deplete their energy at the same time. Simulation results show that such a deployment strategy helps achieve this goal. To solve the energy sink-hole problem for homogeneous WSNs, we propose a localized energy-aware-Voronoi-diagram-based data forwarding (EVEN) protocol. EVEN combines sink mobility with a new concept, called energy-aware Voronoi diagram. Through simulations, we show that EVEN outperforms similar greedy geographical data forwarding protocols and has performance that is comparable to that of an existing data collection protocol that uses a joint mobility and routing strategy. Precisely, we find that EVEN yields an improvement of more than 430 percent in terms of network lifetime.
['Habib M. Ammari', 'Sajal K. Das']
Promoting Heterogeneity, Mobility, and Energy-Aware Voronoi Diagram in Wireless Sensor Networks
200,478
In-Cell Projected Capacitive Touch Panel Technology
['Yasuhiro Sugita', 'Kazutoshi Kida', 'Shinji Yamagishi']
In-Cell Projected Capacitive Touch Panel Technology
366,583
Explicit Substitutions for Objects and Functions
['Delia Kesner', 'Pablo E. Martínez López']
Explicit Substitutions for Objects and Functions
552,297
This paper is a thought experiment in finding structures for the elicitation of requirements on top of structures for retrieving requirements. We transfer from structures used by the German sociologist Niklas Luhmann, who applied a wooden slip box to collect notes on processed literature. Each slip contains bibliographic information and Luhmann's thoughts on the referenced content. In this paper we propose to use Luhmann's slip box approach for requirements engineering, by enhancing it to a multi-user approach suited to sorting requirements in a requirements repository.
['Andreas Faatz', 'Birgit Zimmermann', 'Eicke Godehardt']
Luhmann's Slip Box -- What can we Learn from the Device for Knowledge Representation in Requirements Engineering?
122,854
With more and more peer-to-peer (P2P) applications being utilized, P2P traffic accounts for the majority of Internet traffic, leading to network congestion. Reducing the redundant propagation of messages is an effective approach to this problem in unstructured P2P networks. In this paper, we first define a novel message structure which contains the information of the message propagation path, and then propose three operations on message transmission paths (inheritance, supplement and collection), based on which a node can purposefully forward a message to nodes that have not yet received it, using the information from past received messages and the spatial and temporal characteristics of node activities, thus alleviating the bandwidth consumption caused by flooding-based propagation approaches. The simulation results show that our strategy can effectively reduce the number of redundant messages without lowering the message coverage ratio.
['Xianfu Meng', 'Dongxu Liu']
A traffic-efficient message forwarding approach in unstructured P2P networks
263,129
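A simplified sketch of the path-carrying idea above: the message header records which nodes are already covered, and a relaying node supplements that set and forwards only to neighbors outside it. This compresses the paper's inheritance/supplement/collection operations into a single step and is illustrative only.
```python
from collections import deque

def flood_with_paths(graph, source):
    """Each message header carries the set of nodes known to hold the
    message; a node forwards only to neighbors absent from that set
    and extends the set before relaying."""
    sent = 0
    delivered = {source}
    queue = deque([(source, {source})])
    while queue:
        node, covered = queue.popleft()
        targets = [n for n in graph[node] if n not in covered]
        covered = covered | set(targets)  # supplement before relaying
        for nbr in targets:
            sent += 1
            delivered.add(nbr)
            queue.append((nbr, set(covered)))
    return sent, delivered

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(flood_with_paths(graph, 0))  # fewer messages than blind flooding
```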
Journal of the Association for Information Science and Technology. Early View (Online Version of Record published before inclusion in an issue).
['Janette Lehmann', 'Carlos Castillo', 'Mounia Lalmas', 'Ricardo A. Baeza-Yates']
Story-focused reading in online news and its potential for user engagement
815,095
On the cybernetics of fixed points.
['Louis H. Kauffman']
On the cybernetics of fixed points.
744,433
Terrain rendering is widely used in industry and research. GIS software packages as well as navigation systems make use of terrain rendering to visualize terrain information. Recent trends in research show that scientific terrain visualization is shifting more and more toward an interactive analysis tool. This allows domain-specific users to perform visual analysis tasks within an interactive visual environment. Visual analysis tools are software packages acting as a toolbox and providing functionality to support the work of domain-specific users, such as data exploration, data analysis and data presentation. Such software packages still suffer from limitations such as restricted or imprecise data and problems with handling large data. These challenges will also be at the core of research in scientific terrain visualization in the near future. In this paper we describe some open challenges for scientific terrain visualization in the acquisition, processing and rendering of terrain-related geospatial information, as well as new methods which could be used to address these challenges.
['Matthias Thöny', 'Markus Billeter', 'Renato Pajarola']
Vision paper: the future of scientific terrain visualization
682,941
Real-time traffic speed is indispensable for many ITS applications, such as traffic-aware route planning and eco-driving advisory systems. Existing traffic speed estimation solutions assume vehicles travel along roads at constant speed. However, this assumption does not hold due to traffic dynamicity and can potentially lead to inaccurate estimation in the real world. In this paper, we propose a novel in-network traffic speed estimation approach using infrastructure-free vehicular networks. The proposed solution utilizes a macroscopic traffic flow model to estimate the traffic condition. The selected model relies only on vehicle density, which is less likely to be affected by traffic dynamicity. In addition, we also demonstrate an application of the proposed solution in real-time route planning applications. Extensive evaluations using both traffic-trace-based large-scale simulation and a testbed-based implementation have been performed. The results show that our solution outperforms some existing ones in terms of accuracy and efficiency in traffic-aware route planning applications.
['Zongjian He', 'Buyang Cao', 'Yan Liu']
Accurate real-time traffic speed estimation using infrastructure-free vehicular networks
256,514
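The abstract leaves the macroscopic model unnamed; Greenshields' linear speed-density relation is a common density-only choice and serves as a stand-in here. All parameter values and function names are illustrative assumptions.
```python
def greenshields_speed(density, free_flow_speed=30.0, jam_density=0.15):
    """Greenshields macroscopic model: speed falls linearly from the
    free-flow speed (m/s) to zero at jam density (vehicles per metre).
    Density alone determines the estimate, which is why it is robust
    to short-term traffic dynamics."""
    density = min(max(density, 0.0), jam_density)
    return free_flow_speed * (1.0 - density / jam_density)

def density_from_beacons(vehicle_count, road_length_m):
    """Estimate density from the number of distinct vehicles heard via
    infrastructure-free vehicle-to-vehicle beacons on a road segment."""
    return vehicle_count / road_length_m

rho = density_from_beacons(vehicle_count=18, road_length_m=200.0)
print(round(greenshields_speed(rho), 2))  # 12.0 m/s at rho = 0.09 veh/m
```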
Wireless sensor networks consist of a large number of battery-powered sensor nodes that communicate via radio and jointly acquire and process measurements. An important research aspect in this field is their programming. Within our acoowee project, we are specifying a programming language based on UML2 activity diagrams. We are developing a framework that makes it possible to program the behavior of individual Sun SPOTs and of the network graphically using activities. After a transformation into an exchange format, the activity is executed by an interpreter running on the Sun SPOTs. Individual sensor nodes fail over time because of empty batteries. In the worst case, an activity can then no longer be executed in the sensor network, even though it would still be possible with dynamic adaptation. In this article we present the fundamentals of the acoowee project and show, by means of examples, how we intend to enable dynamic adaptation through mobility and reprogramming of sensor nodes as well as allocation and modification of activities. Our goal is to integrate dynamic adaptation into acoowee.
['Gerhard Fuchs', 'Reinhard German']
Possibilities for Dynamic Adaptation of Sensor Networks Using the Example of the acoowee Project
498,484
Large scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The state-of-the-art for solving such problems is reduced-space quasi-Newton sequential quadratic programming (SQP) methods. These take full advantage of existing PDE solver technology and parallelize well. However, their algorithmic scalability is questionable; for certain problem classes they can be very slow to converge. In this paper we propose a full-space Newton-Krylov SQP method that uses the reduced-space quasi-Newton method as a preconditioner. The new method is fully parallelizable; exploits the structure of and available parallel algorithms for the PDE forward problem; and is quadratically convergent close to a local minimum. We restrict our attention to boundary value problems and we solve a model optimal flow control problem, with both Stokes and Navier-Stokes equations as constraints. Algorithmic comparisons, scalability results, and parallel performance on a Cray T3E-900 are presented. On the model problems solved, the new method is a factor of 5-10 faster than reduced space quasi-Newton SQP, and is scalable provided a good forward preconditioner is available.
['George Biros', 'Omar Ghattas']
Parallel Newton-Krylov Methods for PDE-Constrained Optimization
523,373
Cloud Computing is a successful paradigm for deploying scalable and highly available web applications at low cost. In real-life scenarios, applications are expected to be both scalable and consistent. Data partitioning is a commonly used technique for improving scalability, but traditional horizontal partitioning techniques are not capable of tracking the data access patterns of web applications. The development of novel, scalable workload-driven data partitioning is therefore a requirement for improving scalability. This paper proposes a novel workload-aware approach, with scalable workload-driven data partitioning based on the data access patterns of web applications for transaction processing. It is specially designed to scale out using NoSQL data stores. In contrast to existing static approaches, this approach offers higher throughput, lower response time, and a smaller number of distributed transactions. Further, implementation and validation of the scalable workload-driven partitioning scheme are carried out through experimentation over cloud data stores such as Hadoop HBase and Amazon SimpleDB, using the industry-standard TPC-C benchmark. Analytical and experimental results show that scalable workload-driven data partitioning outperforms schema-level and graph partitioning in terms of throughput, response time and distributed transactions.
['Swati Ahirrao', 'Rajesh Ingle']
Scalable transactions in cloud data stores
550,336
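A greedy sketch of workload-driven partitioning in the spirit of the abstract above: mine tuple co-access from the transaction log, then place each tuple where its co-access weight is highest, breaking ties toward the least-loaded partition. This is a generic heuristic, not the paper's algorithm; all names are illustrative.
```python
from collections import defaultdict
from itertools import combinations

def workload_driven_partitions(transactions, num_partitions):
    """Place each tuple in the partition holding the tuples it is most
    often co-accessed with, so transactions tend to touch a single
    partition (fewer distributed transactions)."""
    weight = defaultdict(int)
    for txn in transactions:
        for a, b in combinations(sorted(set(txn)), 2):
            weight[(a, b)] += 1
    tuples = sorted({t for txn in transactions for t in txn})
    assign, load = {}, [0] * num_partitions
    for t in tuples:
        score = [0] * num_partitions
        for other, p in assign.items():
            a, b = min(t, other), max(t, other)
            score[p] += weight.get((a, b), 0)
        best = max(range(num_partitions), key=lambda p: (score[p], -load[p]))
        assign[t] = best
        load[best] += 1
    return assign

log = [["cust:1", "ord:11"], ["cust:1", "ord:12"], ["cust:2", "ord:21"]]
print(workload_driven_partitions(log, 2))  # each customer with its orders
```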
Fault diagnosis plays an important role in the operation of modern robotic systems. A number of researchers have proposed fault diagnosis architectures for robotic manipulators using the model-based analytical redundancy approach. One of the key issues in the design of such fault diagnosis schemes is the effect of modeling uncertainties on their performance. This paper investigates the problem of fault diagnosis in rigid-link robotic manipulators with modeling uncertainties. A learning architecture with sigmoidal neural networks is used to monitor the robotic system for off-nominal behavior due to faults. The robustness, sensitivity, missed detection and stability properties of the fault diagnosis scheme are rigorously established. Simulation examples are presented to illustrate the ability of the neural network based robust fault diagnosis scheme to detect and accommodate faults in a two-link robotic manipulator.
['Arun T. Vemuri', 'Marios M. Polycarpou']
A methodology for fault diagnosis in robotic systems using neural networks
274,525
This paper describes techniques to facilitate communication and learning among people who are participating in a distributed role-play. The distributed and asynchronous setting poses new problems for collaborative role-playing. We present three techniques for a facilitator to guide participants through a story, allowing them to work as a group even though they access the role-play in a distributed and asynchronous manner. The three techniques presented are: synchronizing points, providing written instructions, and peer pressure. We describe the techniques and present our insights from the evaluation.
['Johan Lundin', 'Farshad Taghizadeh']
Techniques for synchronizing distributed participants in a net-scenario
259,504