Columns: abstract (string, length 8–10.1k) · authors (string, length 9–1.96k) · title (string, length 6–367) · __index_level_0__ (int64, 13–1,000k)
In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method that eliminates the impact of temperature on frequency acquisition. A new sensitive structure with double QCMs is proposed: one is kept in contact with the environment, whereas the other is not exposed to the atmosphere. A thermally conductive silicone pad between each crystal and a refrigeration device maintains a uniform temperature condition. A differential frequency method is described in detail and applied to calibrate the frequency characteristics of the QCM at a dew point of −3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature of both was changed simultaneously. Results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point was reached exhibit strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
['Ningning Lin', 'Xiaofeng Meng', 'Jing Nie']
Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method
938,972
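The differential idea in the abstract can be sketched in a few lines: two crystals share one temperature, but only the exposed one sees mass loading from dew, so subtracting the sealed reference cancels the common temperature term. The linear temperature model and all coefficients below are illustrative assumptions, not the paper's calibration data.

```python
# Sketch of the double-QCM differential-frequency method. The temperature
# model and coefficient values are invented for illustration only.

def qcm_frequency(f0, temp, mass_shift, temp_coeff=-0.5):
    """Crystal frequency (Hz) under a linear temperature model plus mass loading."""
    return f0 + temp_coeff * temp + mass_shift

def differential_shift(temp, mass_shift, f0=10_000_000.0):
    """Sealed reference minus exposed crystal: the common temperature term cancels."""
    f_exposed = qcm_frequency(f0, temp, mass_shift)
    f_sealed = qcm_frequency(f0, temp, 0.0)
    return f_sealed - f_exposed

# The differential readout depends only on the dew-induced mass shift,
# whatever the (shared) temperature of the two crystals happens to be.
print(differential_shift(temp=25.0, mass_shift=-120.0))   # 120.0
print(differential_shift(temp=-3.75, mass_shift=-120.0))  # 120.0
```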
The construction of the Beijing–Tianjin high-speed railway line started in July 2005. The railway was completed and opened to traffic in August 2008, just before the Beijing Olympic Games. The 113 km railway, costing 13.3 billion yuan (US $1.5 billion), is the country's first high-standard passenger rail line. It is also regarded as the pilot project of a massive high-speed rail network in China. The train shuttles passengers between the two cities in just half an hour, 45 minutes shorter than the usual travel time. The train is designed to run at 200 km/h but can reach speeds of 350 km/h. The existing railway is under pressure, handling 25.55 million passengers each year. The new passenger rail line is expected to handle 32 million passengers from 2008 and 54 million passengers in 2015. Both Tianjin and Beijing have been suffering from ground subsidence due to groundwater extraction. This study focuses on mapping the impact of such subsidence on the Beijing–Tianjin high-speed railway using differential radar in...
['Linlin Ge', 'Xiaojing Li', 'Hsing-Chung Chang', 'Alex Hay-Man Ng', 'Kui Zhang', 'Zhe Hu']
Impact of ground subsidence on the Beijing–Tianjin high-speed railway as mapped by radar interferometry
7,327
This article discusses the role that copyright plays in free-access movements, and vice versa. Legal doctrine has pointed to some of them, for example Open Access, as a new rule that needs to be built into copyright. How can movements seeking free access to copyrighted works influence or transform the current regulation, statements, and understanding of the scope of copyright? Moreover, the question must also be answered from the viewpoint of copyright itself: what can it do for the fight for access, such as Open Access; improve, strengthen, undermine, or weaken it? This gives rise to disputes between competing interests, making it necessary to understand the extent to which copyright can really be influenced, or even integrated, by the movements for free access to copyrighted works. The article also focuses on access to information and the struggle for the independence of universal access, despite the reluctance of governments and industries that seek to control access and to turn not only information but also access itself into a consumer product.
['Angela Kretschmann']
Copyright and Movements of Access: Business is Business, but Friends are Friends
358,236
Is 'not bad' good enough? Aspects of unknown voices' likability.
['Benjamin Weiss', 'Felix Burkhardt']
Is 'not bad' good enough? Aspects of unknown voices' likability.
777,948
Route tracing allows authorities to determine the route traversed by a vehicle, which is valuable in crime investigations. In VANETs (Vehicular Ad hoc NETworks), a conditional anonymity scheme is normally used to meet the conflicting demands of the government (authentication) and users (privacy). Such schemes provide a mechanism to revoke the anonymity of individual beacons sent by vehicles. However, using this mechanism to trace the route of a vehicle violates the privacy of many other vehicles. In this paper, we provide a simple route tracing mechanism that reveals only the route of the targeted vehicle. The proposed method can easily be added to any existing VANET solution and also includes a way to prevent authorities from abusing this new function.
['Sangjin Kim', 'Heekuck Oh']
A Simple Privacy Preserving Route Tracing Mechanism for VANET
429,473
Some recent works in machine learning and computer vision involve the solution of a bi-level optimization problem. Here the solution of a parameterized lower-level problem binds variables that appear in the objective of an upper-level problem. The lower-level problem typically appears as an argmin or argmax optimization problem. Many techniques have been proposed to solve bi-level optimization problems, including gradient descent, which is popular with current end-to-end learning approaches. In this technical report we collect some results on differentiating argmin and argmax optimization problems with and without constraints and provide some insightful motivating examples.
['Stephen Gould', 'Basura Fernando', 'Anoop Cherian', 'Peter Anderson', 'Rodrigo Santa Cruz', 'Edison Guo']
On Differentiating Parameterized Argmin and Argmax Problems with Application to Bi-level Optimization
862,136
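The report's central result can be illustrated with a worked one-dimensional instance: for y*(x) = argmin_y f(x, y), implicit differentiation of the optimality condition f_y(x, y*) = 0 gives dy*/dx = −f_xy / f_yy. The particular f below is our own choice, picked so the closed form is easy to check.

```python
# Worked example of differentiating an argmin, using f(x, y) = (y - x**2)**2.
# Then f_y = 2*(y - x**2), f_yy = 2, f_xy = -4x, and the minimizer is y* = x**2.

def f_xy(x, y):
    # mixed partial d2f/dxdy of f(x, y) = (y - x**2)**2
    return -4.0 * x

def f_yy(x, y):
    # second partial d2f/dy2 of the same f
    return 2.0

def argmin_grad(x):
    """dy*/dx via the implicit-function formula -f_xy / f_yy at y = y*(x)."""
    y_star = x**2  # closed-form minimizer of (y - x**2)**2
    return -f_xy(x, y_star) / f_yy(x, y_star)

# Closed form: y*(x) = x**2, so dy*/dx = 2x; the formula agrees.
print(argmin_grad(3.0))  # 6.0
```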
This paper studies discrete-time adaptive failure compensation control of systems with uncertain actuator failures, using an indirect adaptive control method. A discrete-time model of a continuous-time linear system with actuator failures is derived and its key features are clarified. A new discrete-time adaptive actuator failure compensation control scheme is developed, which consists of a total parametrization of the system with parameter and failure uncertainties, a stable adaptive parameter estimation algorithm, and an on-line design procedure for feedback control. This work represents a new design of direct adaptive compensation of uncertain actuator failures, using an indirect adaptive control method. Such an adaptive design ensures desired closed-loop system stability and asymptotic tracking properties despite uncertain actuator failures. Simulation results are presented to show the desired adaptive actuator failure compensation performance.
['Chang Tan', 'Ruiyun Qi', 'Gang Tao']
A discrete-time parameter estimation based adaptive actuator failure compensation control scheme
365,578
Many applications require protection of secret or sensitive information, from sensor nodes and embedded applications to large distributed systems. The confidentiality of data can be protected by encryption using symmetric-key ciphers, and the integrity of the data can be ensured by using a cryptographic hash function to calculate a "digital fingerprint." In this paper, we propose reconfigurable FPGA hardware components that enable rapid deployment of cryptographic and other algorithms. The novelty of our hardware components is in their general-purpose design which enables easy mappings of algorithms to allow customizations of data protection for different usage scenarios. Since we utilize only a small part of an FPGA chip, our design can be readily integrated with other processing needs of a mobile device, a sensor node or a System-on-Chip. Important block ciphers like the Advanced Encryption Standard (AES) as well as advanced cryptographic hash algorithms like Whirlpool map well onto our general-purpose components. Our solution facilitates easy hardware implementation of customizable encryption and hashing solutions, with area and speed performance comparable to custom FPGA implementations targeted at a specific cipher or hash algorithm. We achieve the best efficiency in Mbps/slice for Whirlpool. Furthermore, the components that we have proposed can be used for many other applications - not just for implementing block ciphers and cryptographic hash functions.
['Jakub Szefer', 'Yu-Yuan Chen', 'Ruby B. Lee']
General-purpose FPGA platform for efficient encryption and hashing
478,763
The generalized Turán number ex(G, H) of two graphs G and H is the maximum number of edges in a subgraph of G not containing H. When G is the complete graph K_m on m vertices, the value of ex(K_m, H) is (1 − 1/(χ(H) − 1) + o(1))·(m choose 2), where o(1) → 0 as m → ∞, by the Erdős–Stone–Simonovits theorem. In this paper we give an analogous result for triangle-free graphs H and pseudo-random graphs G. Our concept of pseudo-randomness is inspired by the jumbled graphs introduced by Thomason [A. Thomason, Pseudorandom graphs, in: Random Graphs '85, Poznań, 1985, North-Holland, Amsterdam, 1987, pp. 307–331. MR 89d:05158]. A graph G is (q, β)-bi-jumbled if |e_G(X, Y) − q|X||Y|| ≤ β·√(|X||Y|) for all sets of vertices X and Y. Our main result states that for every δ > 0 there exists γ > 0 so that the following holds: any large enough m-vertex, (q, γq^(Δ+1/2)m)-bi-jumbled graph G satisfies ex(G, H) ≤ (1 − 1/(χ(H) − 1) + δ)|E(G)|.
['Yoshiharu Kohayakawa', 'Vojtěch Rödl', 'Mathias Schacht', 'Papa Sissokho', 'Jozef Skokan']
Turán's theorem for pseudo-random graphs
3,876
This paper presents methods for integrating prior knowledge into a first-order Single-Input Single-Output (SISO) Interval Type-2 Takagi-Sugeno-Kang (TSK) Fuzzy Logic System (IT2FLS) for function approximation under noisy conditions. First, sufficient conditions on the antecedent and consequent parameters of the IT2FLS are given to ensure that three kinds of prior knowledge (monotonicity, symmetry, and special points) can be embedded into the IT2FLS. Then, three optimization algorithms (the constrained least-squares algorithm, the active-set algorithm, and a hybrid learning algorithm) are used to design the IT2FLS. The effectiveness of the three algorithms and comparisons of their performance are demonstrated through simulation examples.
['Tiechao Wang', 'Jianqiang Yi']
Design of interval type-2 fuzzy logic systems using prior knowledge via optimization algorithms
481,745
XPath plays a prominent role as an XML navigational language due to several factors, including its ability to express queries of interest, its close connection to yardstick database query languages (e.g., first-order logic), and the low complexity of query evaluation for many fragments. Another common database model---graph databases---also requires a heavy use of navigation in queries; yet it largely adopts a different approach to querying, relying on reachability patterns expressed with regular constraints. Our goal here is to investigate the behavior and applicability of XPath-like languages for querying graph databases, concentrating on their expressiveness and complexity of query evaluation. We are particularly interested in a model of graph data that combines navigation through graphs with querying data held in the nodes, such as, for example, in a social network scenario. As navigational languages, we use analogs of core and regular XPath and augment them with various tests on data values. We relate these languages to first-order logic, its transitive closure extensions, and finite-variable fragments thereof, proving several capture results. In addition, we describe their relative expressive power. We then show that they behave very well computationally: they have a low-degree polynomial combined complexity, which becomes linear for several fragments. Furthermore, we introduce new types of tests for XPath languages that let them capture first-order logic with data comparisons and prove that the low complexity bounds continue to apply to such extended languages. Therefore, XPath-like languages seem to be very well-suited to query graphs.
['Leonid Libkin', 'Wim Martens', 'Domagoj Vrgoč']
Querying graph databases with XPath
93,969
This paper proposes a new multi-Bernoulli filter called the Adaptive Labeled Multi-Bernoulli filter. It combines the relative strengths of the known δ-Generalized Labeled Multi-Bernoulli and Labeled Multi-Bernoulli (LMB) filters. The proposed filter provides more precise target tracking in critical situations, where the LMB filter loses information through the approximation error in its update step. In non-critical situations it inherits the LMB filter's advantage of reduced computational complexity by using the LMB approximation.
['Andreas Danzer', 'Stephan Reuter', 'Klaus Dietmayer']
The Adaptive Labeled Multi-Bernoulli Filter
878,479
A basic premise of model driven development (MDD) is to capture all important design information in a set of formal or semi-formal models which are then automatically kept consistent by tools. The concept however is still relatively immature and there is little by way of empirically validated guidelines. In this paper we report on the use of MDD on a significant real-world project over several years. Our research found the MDD approach to be deficient in terms of modelling architectural design rules. Furthermore, the current body of literature does not offer a satisfactory solution as to how architectural design rules should be modelled. As a result developers have to rely on time-consuming and error-prone manual practices to keep a system consistent with its architecture. To realise the full benefits of MDD it is important to find ways of formalizing architectural design rules which then allow automatic enforcement of the architecture on the system model. Without this, architectural enforcement will remain a bottleneck in large MDD projects.
['Anders Mattsson', 'Björn Lundell', 'Brian Lings', 'Brian Fitzgerald']
Linking Model-Driven Development and Software Architecture: A Case Study
92,474
A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
['S. van Engeland', 'Peter R. Snoeren', 'H.J. Huisman', 'Carla Boetes', 'Nico Karssemeijer']
Volumetric breast density estimation from full-field digital mammograms
280,485
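The two-tissue model in the abstract reduces to inverting an exponential attenuation law: a pixel of total thickness H is a mix of fat and dense tissue, and the brightest "all fat" pixel serves as the reference. A minimal sketch follows; the attenuation coefficients are illustrative placeholders, not the paper's kVp- and thickness-dependent empirical values.

```python
import math

# Two-tissue attenuation sketch: pixel / fat_reference = exp(-(mu_d - mu_f) * h_dense),
# so the dense-tissue thickness follows from a log ratio. Coefficients are
# assumed values for illustration only.

MU_FAT = 0.046    # assumed effective linear attenuation of fat (1/mm)
MU_DENSE = 0.080  # assumed effective linear attenuation of dense tissue (1/mm)

def dense_thickness(pixel, fat_reference, breast_thickness_mm):
    """Dense-tissue thickness (mm) mapping to one pixel, clamped to physical range."""
    h = math.log(fat_reference / pixel) / (MU_DENSE - MU_FAT)
    return min(max(h, 0.0), breast_thickness_mm)

# A pixel half as bright as the all-fat reference:
h = dense_thickness(pixel=0.5, fat_reference=1.0, breast_thickness_mm=50.0)
print(round(h, 2))  # 20.39
```

Summing this per-pixel thickness over the breast area (scaled by pixel size) would give the volumetric estimate the abstract validates against MRI.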
Aloha and Carrier Sense Multiple Access (CSMA) are two representative random-access protocols. Despite their simplicity in concept, the performance analysis of Aloha and CSMA networks has long been known to be notoriously difficult. Numerous models and analytical approaches have been proposed in the past four decades. Yet how to integrate them into a coherent theory remains an open challenge. Toward this end, a unified analytical framework was recently proposed, based on which a comprehensive study of the throughput, delay, and stability performance of Aloha networks was presented. In this paper, the framework is further extended to CSMA networks. The analysis shows that both CSMA and Aloha have the same bi-stable property, and the performance of both networks critically depends on the selection of backoff parameters. Different from Aloha, however, substantial gains can be achieved in CSMA networks by reducing the mini-slot length a and the collision-detection time x. The maximum throughput with CSMA is derived as an explicit function of a and x, and is shown to be higher than that with Aloha if a < e^(1/e) − 1 ≈ 0.445. With a small mini-slot length a, CSMA networks are also found to be more robust than Aloha networks thanks to larger stable regions of backoff parameters. To demonstrate how to properly tune the backoff parameters to stabilize the network, the complete stable region of the initial transmission probability q_0 is characterized, and illustrated via the example of p-persistent CSMA with the cutoff phase K = 0. The optimal values of q_0 to maximize the network throughput and to minimize the first and second moments of access delay are also obtained, which shed important light on practical network control and optimization.
['Lin Dai']
Toward a Coherent Theory of CSMA and Aloha
46,979
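For context on the Aloha side of the comparison, the classical textbook baseline (not the paper's full bi-stable model) is slotted Aloha with aggregate Poisson attempt rate G per slot: throughput S(G) = G·e^(−G), maximized at G = 1 where S = 1/e ≈ 0.368 packets/slot.

```python
import math

# Classical slotted-Aloha throughput curve: a packet succeeds only if no
# other attempt lands in the same slot, giving S(G) = G * exp(-G).

def aloha_throughput(G):
    """Throughput (packets/slot) for aggregate attempt rate G (attempts/slot)."""
    return G * math.exp(-G)

# Sweep G over (0, 3]; the maximum sits at G = 1, the familiar 1/e ceiling.
best = max(aloha_throughput(g / 1000.0) for g in range(1, 3001))
print(round(best, 3))                   # 0.368
print(round(aloha_throughput(1.0), 3))  # 0.368
```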
In this paper, we describe a software architecture supporting code generation from within Ptolemy II. Ptolemy II is a component-based design tool intended for embedded and real-time system design. The infrastructure described here provides a platform for experimentation with synthesis of embedded software or hardware designs from high-level, Java-based specifications. A specification consists of a set of interconnected actors whose functionality is defined in Java. The provided infrastructure parses the existing Java code for the actors and presents a simple API, using the Visitor pattern, for transforming the abstract syntax tree and writing code transformers and back-end code generators. A code transformer is described that performs generic optimizations, such as specialization of polymorphic data types, and domain-specific optimizations, such as static buffer allocation for communication between dataflow actors. The back end resynthesizes the transformed Java code. We describe a simple example where the generated Java code runs on a PalmOS handheld computer, and a more elaborate example where significant performance improvements are achieved by transformations.
['Jeff Tsay', 'Christopher Hylands', 'Edward A. Lee']
A code generation framework for Java component-based designs
408,462
Wireless charging provides a convenient alternative for renewing nodes' energy in wireless sensor networks. Due to physical limitations, previous works have only considered recharging a single node at a time, which limits efficiency and scalability. Recent advances in multi-hop wireless charging are gaining momentum and provide fundamental support to address this problem. However, existing single-node charging designs do not consider, and cannot take advantage of, such opportunities. In this paper, we propose a new framework to enable multi-hop wireless charging using resonant repeaters. First, we present a realistic model that accounts for detailed physical factors to calculate charging efficiencies. Second, to balance energy efficiency and data latency, we propose a hybrid data gathering strategy that combines static and mobile data gathering to overcome their respective drawbacks, and we provide theoretical analysis. Then, we formulate the multi-hop recharge schedule as a bi-objective NP-hard optimization problem. We propose a two-step approximation algorithm that first finds the minimum charging cost and then calculates the charging vehicles' moving costs, with bounded approximation ratios. Finally, upon discovering more room to reduce the total system cost, we develop a post-optimization algorithm that iteratively adds more stopping locations for charging vehicles to further improve the results while ensuring that none of the nodes depletes its battery energy. Our extensive simulations show that the proposed algorithms handle dynamic energy demands effectively, cover at least three times as many nodes, and reduce service interruption time by an order of magnitude compared to the single-node charging scheme.
['Cong Wang', 'J. Li', 'Fan Ye', 'Yuanyuan Yang']
A Novel Framework of Multi-Hop Wireless Charging for Sensor Networks Using Resonant Repeaters
729,262
Biogeography-Based optimisation for data clustering
['Abdelaziz I. Hammouri', 'Salwani Abdullah']
Biogeography-Based optimisation for data clustering
734,929
Epipolar Consistency in Fluoroscopy for Image-Based Tracking.
['André Aichert', 'Jian Wang', 'Roman Schaffert', 'Arnd Dörfler', 'Joachim Hornegger', 'Andreas K. Maier']
Epipolar Consistency in Fluoroscopy for Image-Based Tracking.
696,664
A generic tactical model is developed considering third party price policies for the optimization of coordinated and centralized multi-product Supply Chains (SCs). To allow a more realistic assessment of these policies in each marketing situation, different price approximation models to estimate these policies are proposed, which are based on the demand elasticity theory, and result in different model implementations (LP, NLP, and MINLP). The consequences of using the proposed models on the SCs coordination, regarding not only their practical impact on the tactical decisions, but also the additional mathematical difficulties to be solved, are verified through a case study in which the coordination of a production–distribution SC and its energy generation SC is analyzed. The results show how the selection of the price approximation model affects the tactical decisions. The average price approximation leads to the worst decisions with a significant difference in the real total cost in comparison with the best piecewise approximation.
['Kefah Hjaila', 'José M. Laínez-Aguirre', 'Miguel Zamarripa', 'Luis Puigjaner', 'Antonio Espuña']
Optimal integration of third-parties in a coordinated supply chain management environment
588,141
The reliable identification of human activities in video, for example whether a person is walking, clapping, or waving, is extremely important for video interpretation. Since different people may perform the same action over different numbers of frames, matching two different sequences of the same action is not a trivial task. In this paper we discuss a new technique for matching video sequences of different lengths. The proposed technique is based on frequency-domain analysis of feature data. Experiments show a high recognition accuracy of 95.4% on 8 different human actions, outperforming two baseline methods of comparison.
['Jessica JunLin Wang', 'Sameer Singh']
Spatial feature based recognition of human dynamics in video sequences
544,861
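The appeal of frequency-domain features for this task is that a spectrum can be truncated to a fixed number of coefficients regardless of sequence length. A minimal illustration of that idea follows; it is our own simplification, not the paper's actual descriptor: keep the magnitudes of the first few DFT coefficients of a 1-D feature track and compare those fixed-size signatures.

```python
import cmath

# Fixed-size spectral signature for variable-length sequences: first k DFT
# magnitudes, normalized by sequence length. Purely illustrative.

def dft_magnitudes(seq, k=4):
    """First k DFT magnitudes of a 1-D feature sequence, length-normalized."""
    n = len(seq)
    mags = []
    for f in range(k):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t, x in enumerate(seq))
        mags.append(abs(coeff) / n)
    return mags

def distance(a, b, k=4):
    """Euclidean distance between the fixed-size signatures of two sequences."""
    fa, fb = dft_magnitudes(a, k), dft_magnitudes(b, k)
    return sum((x - y) ** 2 for x, y in zip(fa, fb)) ** 0.5

# The same periodic "motion" sampled over 40 and 24 frames compares closely,
# while a flat sequence of the same length does not:
slow = [float(i % 4) for i in range(40)]
fast = [float(i % 4) for i in range(24)]
print(distance(slow, fast) < distance(slow, [0.0] * 40))  # True
```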
Decision support systems often rely on a mathematical decision model allowing the comparison of alternatives and the selection of a proper solution.
['Nawal Benabbou']
Possible Optimality and Preference Elicitation for Decision Making
688,056
This paper summarizes a methodology for reliability prediction of new products where field data are sparse and the allowed number and length of experiments are limited. The methodology relies on estimating a set where the unknown parameters are most likely to be found, calculating an upper bound for the reliability metric of interest conditioned on the parameters residing in the estimated set, and tightening the bounds via design of experiments. Models of failure propagation, failure acceleration, system operations, and time/cycles to failure at various levels of fidelity, as well as expert-elicited information, may be incorporated to enhance the accuracy of the predictions. The application of the model is illustrated through numerical studies.
['Payman Sadegh', 'Adrian Thompson', 'Xiaodong Luo', 'Young Park', 'Tobias Sienel']
A methodology for predicting service life and design of reliability experiments
430,656
This paper describes the opportunities offered by executing distributed simulations in a grid environment and discusses the research challenges that must be addressed before these opportunities can be fully exploited. It presents a conceptual framework for the next generation of Grid-based distributed simulations and describes SOHR [4], a Service Oriented HLA RTI framework that implements the RTI entirely using Grid services following the Grid-oriented approach.
['Stephen John Turner']
Distributed Simulation on the Grid: Opportunities and Challenges
445,434
We present a stereo-based obstacle avoidance system for mobile vehicles. The system operates in three steps. First, it models the surface geometry of the supporting surface and removes the supporting surface from the scene. Next, it segments the remaining stereo disparities into connected components in image and disparity space. Finally, it projects the resulting connected components onto the supporting surface and plans a path around them. One interesting aspect of this system is that it can detect both positive and "negative" obstacles (e.g. stairways) in its path. The algorithms we have developed have been implemented on a mobile robot equipped with a real-time stereo system. We present experimental results on indoor environments with planar supporting surfaces that show the algorithms to be both fast and robust.
['D. Burschkal', 'Sang‐Ho Lee', 'Greg Hager']
Stereo-based obstacle avoidance in indoor environments with active sensor re-calibration
441,754
To digitize the ultra-wideband (UWB) signal at its Nyquist rate, a frequency-channelized receiver for UWB radio based on hybrid filter banks (HFB) is presented. Among the challenges of such receivers are the uncertainties of the analog analysis filters and the slow convergence speed. To overcome these problems, a channelized receiver operating slightly above the critical sampling rate is presented. The proposed receiver, which is designed for use in transmitted reference (TR) systems, combines the synthesis filters and the matched filter so that the joint response of the analysis filter and the propagation channel can be estimated independently in each subband. When the input noise is colored or a narrowband interference (NBI) is present, the weighting in each subband can be adaptively adjusted so as to approximate the noise-whitening and matched-filtering operation for near-optimal detection. The adaptive performance of the proposed receiver is slightly better than an ideal full-band receiver when the input noise is white, and significantly better when an NBI is present. The effects of the automatic gain controller (AGC) and the analog-to-digital converter (ADC) are also considered. The proposed receiver with a 3-bit ADC appears sufficient to achieve performance comparable to an infinite-resolution channelized receiver, even in the presence of large NBI.
['Lei Feng', 'Won Namgoong']
An oversampled channelized UWB receiver with transmitted reference modulation
223,034
Software test environments (STEs) provide a means of automating the test process and integrating testing tools to support required testing capabilities across the test process. Specifically, STEs may support test planning, test management, test measurement, test failure analysis, test development and test execution. The software architecture of an STE describes the allocation of the environment's functions to specific implementation structures. An STE's architecture can facilitate or impede modifications such as changes to processing algorithms, data representation or functionality. Performance and reusability are also subject to architecturally imposed constraints. Evaluation of an STE's architecture can provide insight into modifiability, extensibility, portability and reusability of the STE. This paper proposes a reference architecture for STEs. Its analytical value is demonstrated by using SAAM (Software Architectural Analysis Method) to compare three software test environments: PROTest II (PROLOG Test Environment, Version II), TAOS (Testing with Analysis and Oracle Support), and CITE (CONVEX Integrated Test Environment).
['Nancy S. Eickelmann', 'Debra J. Richardson']
An evaluation of software test environment architectures
98,724
Multi-source Information Fusion Using Measure Representations
['Ronald R. Yager']
Multi-source Information Fusion Using Measure Representations
825,860
Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels.
['Di Xu', 'Colin Doutre', 'Panos Nasiopoulos']
Correction of Clipped Pixels in Color Images
126,225
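The core move in the abstract, re-estimating a clipped channel from an unclipped channel of the same pixel plus the chroma of the surrounding unclipped region, can be sketched very simply. The function below is our own toy version of that step (single pixel, one clipped channel, a precomputed regional R/G ratio); the paper's pipeline additionally segments regions and smooths boundaries.

```python
# Toy chroma-based clipping correction: if R is clipped but G is trustworthy,
# rebuild R from G and the R/G ratio of the surrounding unclipped area.
# Threshold and values are illustrative.

CLIP = 255

def correct_pixel(pixel, region_ratio_rg):
    """Restore a clipped R channel of an (R, G, B) pixel.

    region_ratio_rg: average R/G over nearby unclipped pixels of similar chroma.
    """
    r, g, b = pixel
    if r >= CLIP and g < CLIP:       # R clipped, G still below the clip level
        r = region_ratio_rg * g      # may exceed 255: extended dynamic range
    return (r, g, b)

# The surrounding unclipped area says R runs ~1.6x G; this pixel's G is 200.
print(correct_pixel((255, 200, 90), region_ratio_rg=1.6))  # (320.0, 200, 90)
```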
Publicly verifiable secret sharing is a kind of special verifiable secret sharing, in which the shares of a secret can be verified by everyone, not only shareholders. Because of the property of public verifiability, it plays an important role in key-escrow, electronic voting, and so on. In this paper, we discuss a problem of how to publicly verifiably expand a member without changing old shares in a secret sharing scheme and present such a scheme. In the presented scheme, a new member can join a secret sharing scheme based on discrete logarithms to share the secret with the help of old shareholders. Furthermore, everyone besides the new member can verify the validity of the new member's share and any old shareholder doesn't need to change her old share. That means it is very convenient for key management.
['Jia Yu', 'Fanyu Kong', 'Rong Hao', 'Xuliang Li']
How to Publicly Verifiably Expand a Member without Changing Old Shares in a Secret Sharing Scheme
358,682
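The public verifiability that the abstract relies on comes from discrete-logarithm commitments: anyone can check a share against published values without learning the secret. The sketch below is a minimal Feldman-style verification with toy parameters, shown only to make that mechanism concrete; the paper's member-expansion protocol involves additional steps beyond this.

```python
# Feldman-style publicly verifiable shares (toy parameters, not secure sizes).
# Dealer publishes one commitment per coefficient of the sharing polynomial;
# anyone can then verify any share without learning the secret.

P = 2039  # safe prime: P = 2*Q + 1
Q = 1019  # prime order of the subgroup generated by G
G = 4     # 2**2 is a quadratic residue mod P, hence has order Q

def commitments(coeffs):
    """Published once by the dealer: G**a mod P for each polynomial coefficient a."""
    return [pow(G, a, P) for a in coeffs]

def share(coeffs, x):
    """Shareholder x's secret share: the polynomial evaluated at x, mod Q."""
    return sum(a * pow(x, i, Q) for i, a in enumerate(coeffs)) % Q

def publicly_verify(x, s, comms):
    """Anyone can run this: G**s must equal the product of commitments
    raised to the powers x**i, i.e. G to the polynomial value at x."""
    rhs = 1
    for i, c in enumerate(comms):
        rhs = rhs * pow(c, pow(x, i, Q), P) % P
    return pow(G, s, P) == rhs

poly = [123, 77]            # secret = 123, threshold-2 sharing polynomial
comms = commitments(poly)
print(publicly_verify(5, share(poly, 5), comms))      # True
print(publicly_verify(5, share(poly, 5) + 1, comms))  # False: tampered share
```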
This paper presents a fast and efficient sequential learning method for RBF networks that can perform classification directly for multi-category cancer diagnosis problems based on microarray data. The recently developed algorithm, referred to as Fast Growing And Pruning-RBF (FGAP-RBF), can perform incremental learning on future data directly; no retraining on all previous data is needed. This characteristic reduces the learning complexity, improves learning efficiency, and is greatly favored in the real implementation of a gene-expression-based cancer diagnosis system. We have evaluated the FGAP-RBF algorithm on a benchmark multi-category cancer diagnosis problem based on microarray data, namely the GCM dataset. The results indicate that, compared with the results available in the literature, FGAP-RBF produces higher classification accuracy with reduced training time and implementation complexity.
['Runxuan Zhang', 'Narasimhan Sundararajan', 'Guang-Bin Huang', 'P. Saratchandran']
An Efficient Sequential RBF Network for Gene Expression-Based Multi-category classification
370,658
A recently introduced power-combining scheme for a Class-E amplifier is, for the first time, experimentally validated in this paper. A small value choke of 2.2 nH was used to substitute for the massive dc-feed inductance required in the classic Class-E circuit. The power-combining amplifier presented, which operates from a 3.2-V dc supply voltage, is shown to be able to deliver a 24-dBm output power and a 9.5-dB gain, with 64% drain efficiency and 57% power-added efficiency at 2.4 GHz. The power amplifier exhibits a 350-MHz bandwidth within which a drain efficiency that is better than 60% and an output power that is higher than 22 dBm were measured. In addition, by adopting three-harmonic termination strategy, excellent second- and third-harmonic suppression levels of 50 and 46 dBc, respectively, were obtained. The complete design cycle from analysis through fabrication to characterization is explained.
['Mury Thian', 'Vincent Fusco', 'P. Gardner']
Power-Combining Class-E Amplifier With Finite Choke
221,614
Service-Oriented Architectures are responsible for mapping relevant business processes to the corresponding services that, together, add value to the final user. Such architectures should meet key dependability requirements, including high availability and high reliability. The objective of this work is to describe a software infrastructure, called Archmeds, that mediates the communication between a web service's clients and the web service itself in order to implement fault tolerance techniques that use redundancy to build dependable behavior out of undependable existing services. The Archmeds infrastructure was designed to be remotely accessible via web services technology, so that it can be easily reused in the implementation of different web-services-based service-oriented architectures. The proposed solution was validated using web-services-based applications implemented for the BioCORE biodiversity project. The results show that Archmeds is able to mediate requests to web services, ensuring fault tolerance of their responses under failure scenarios.
['Eduardo Machado Gonçalves', 'Cecília M. F. Rubira']
Archmeds: An Infrastructure for Dependable Service-Oriented Architectures
340,515
Fuzzy set covering was introduced as an extended counterpart of crisp machine learning methods using a separate-and-conquer approach to concept learning. This approach follows a general-to-specific search through a space of partially ordered conjunctive descriptions. The search path followed depends to a large extent on the evaluation function used. This paper investigates the effect of the evaluation function on the quality of induced rules and the number of conjunctions examined.
['Ian Cloete', 'J. van Zyl']
Evaluation function guided search for fuzzy set covering
214,492
Learning and making decisions in a complex uncertain multiagent environment like RoboCup Soccer Simulation 3D is a non-trivial task. In this paper, a probabilistic approach to handle such uncertainty in RoboCup 3D is proposed, specifically a Naive Bayes classifier. Although its conditional independence assumption is not always fulfilled, it has proved to be successful in a whole range of applications. Typically, the Naive Bayes model assumes discrete attributes, but in RoboCup 3D the attributes are continuous. In the literature, Naive Bayes has been adapted to handle continuous attributes mainly by using Gaussian distributions or by discretizing the domain, both of which present certain disadvantages. In the former, the probability density of attributes is not always well fitted by a normal distribution. In the latter, there can be loss of information. Instead of discretizing, the use of a Fuzzy Naive Bayes classifier is proposed in which attributes do not take a single value, but a set of values with a certain membership degree. Gaussian and Fuzzy Naive Bayes classifiers are implemented for the pass evaluation skill of 3D agents. The classifiers are trained with different numbers of training examples and attributes. Each generated classifier is tested in a scenario with three teammates and four opponents. Additionally, the Gaussian and Fuzzy approaches are compared against a random pass selector. Finally, it is shown that the Fuzzy Naive Bayes approach offers very promising results in the RoboCup 3D domain.
['Carlos Bustamante', 'Leonardo Garrido', 'Rogelio Soto']
Comparing fuzzy naive bayes and gaussian naive bayes for decision making in robocup 3D
862,826
There are many situations where policy makers would like to induce firms to make a major discrete conversion in production technology to help the environment. This paper examines how heterogeneity in the operating condition of firms' plant and equipment, which cannot be observed by policy makers, can affect the choice between incentives to encourage conversion to a cleaner technology. By relating different conditions of firms' plant and equipment to production costs, extent of environmental damage, and cost of conversion to a cleaner technology, we show when a perfectly discriminating incentive to encourage conversion is not feasible. In addition, we show that firms with plant and equipment in better condition will convert their technology to mitigate their environmental damage, and firms with plant and equipment in poorer condition will not. This and a series of additional results lead to conditions under which an administratively simple uniform lump-sum incentive to switch to cleaner technology is preferable to one based on output. These results and conditions extend to cases where there are network externalities in conversion, and where there is strategic timing in firms' choice of when to convert.
['Maurice D. Levi', 'Barrie R. Nault']
Converting Technology to Mitigate Environmental Damage
115,777
In the past few years, wireless local area networks (WLANs) have been widely deployed, where the most important standard is IEEE 802.11. To support high data rate applications in the next generation of WLANs, a common approach is to aggregate multiple upper-layer packets into one large frame in the MAC layer. However, with the increase of frame size, the frame error rate will also increase in error-prone wireless environments. Therefore, the existing frame retransmission scheme may not be efficient, since the entire frame will be retransmitted. In this paper, we study a retransmission scheme that is suitable for the aggregation-based MAC protocol, in which only the packets that encounter transmission errors will be retransmitted. The main contribution of our study is to develop an analytical model for evaluating the saturated throughput performance of the MAC protocol. Extensive simulation and analytical results show that our model is highly accurate and the proposed retransmission scheme can significantly improve the throughput performance in error-prone wireless channels.
['Kejie Lu', 'Yi Qian']
Performance Analysis of A Retransmission Scheme for High-Data-Rate MAC Protocol in Wireless LANs
407,810
We introduce a new approach to three important problems in ray tracing: antialiasing, distributed light sources, and fuzzy reflections of lights and other surfaces. For antialiasing, our approach combines the quality of supersampling with the advantages of adaptive supersampling. In adaptive supersampling, the decision to partition a ray is taken in image-space, which means that small or thin objects may be missed entirely. This is particularly problematic in animation, where the intensity of such objects may appear to vary. Our approach is based on considering pyramidal rays (pyrays) formed by the viewpoint and the pixel. We test the proximity of a pyray to the boundary of an object, and if it is close (or marginal), the pyray splits into 4 sub-pyrays; this continues recursively with each marginal sub-pyray until the estimated change in pixel intensity is sufficiently small. The same idea also solves the problem of soft shadows from distributed light sources, which can be calculated to any required precision. Our approach also enables a method of defocusing reflected pyrays, thereby producing realistic fuzzy reflections of light sources and other objects. An interesting byproduct of our method is a substantial speedup over regular supersampling even when all pixels are supersampled. Our algorithm was implemented on polygonal and circular objects, and produced images comparable in quality to stochastic sampling, but with greatly reduced run times.
['Jon Genetti', 'Dan Gordon', 'Glen Williams']
Adaptive Supersampling in Object Space Using Pyramidal Rays
465,045
To account for the randomness of propagation channels and interference levels in hierarchical spectrum sharing, a novel approach to multihop routing is introduced for cognitive random access networks, whereby packets are randomly routed according to outage probabilities. Leveraging channel and interference level statistics, the resultant cross-layer optimization framework provides optimal routes, transmission probabilities, and transmit-powers, thus enabling cognizant adaptation of routing, medium access, and physical layer parameters to the propagation environment. The associated optimization problem is non-convex, and hence hard to solve in general. Nevertheless, a successive convex approximation approach is adopted to efficiently find a Karush-Kuhn-Tucker solution. Augmented Lagrangian and primal decomposition methods are employed to develop a distributed algorithm, which also lends itself to online implementation. Enticingly, the fresh look advocated here permeates benefits also to conventional multihop wireless networks in the presence of channel uncertainty.
["Emiliano Dall'Anese", 'Georgios B. Giannakis']
Statistical Routing for Multihop Wireless Cognitive Networks
347,164
Multiple access interference and the near-far effect cause the performance of the conventional single-user detector in DS/CDMA systems to degrade. Due to the high complexity of the optimum multiuser detector, suboptimal multiuser detectors with less complexity and reasonable performance have received considerable attention. In this paper we apply computational intelligence techniques, including a proposed modified genetic algorithm and our analysis of a multilayer perceptron neural network, for multiuser detection of DS/CDMA signals. We compare their performance from different points of view, such as computational complexity, implementation, and error rate. We also compare their performance with the other detectors used in CDMA.
['Mahrokh G. Shayesteh', 'Mohammad Bagher Menhaj', 'Hamidreza Amindavar']
Computational intelligence techniques for multiuser detection of DS/CDMA signals
485,285
Background: The graph-theoretical analysis of molecular networks has a long tradition in chemoinformatics. As demonstrated frequently, a well designed format to encode chemical structures and structure-related information of organic compounds is the Molfile format. But when it comes to using modern programming languages for statistical data analysis in Bio- and Chemoinformatics, R, one of the most powerful free languages, lacks tools to process Molfile data collections and import molecular network data into R.
['Martin Grabner', 'Kurt Varmuza', 'Matthias Dehmer']
RMol: a toolset for transforming SD/Molfile structure information into R objects
155,084
Auto-Configuration of ACL Policy in Case of Topology Change in Hybrid SDN
['Rashid Amin', 'Nadir Shah', 'Babar Shah', 'Omar Alfandi']
Auto-Configuration of ACL Policy in Case of Topology Change in Hybrid SDN
967,538
Measuring the causal effects of online advertising (adfx) on user behavior is important to the health of the WWW publishing industry. In this paper, using three controlled experiments, we show that observational data frequently lead to incorrect estimates of adfx. The reason, which we label "activity bias," comes from the surprising amount of time-based correlation between the myriad activities that users undertake online. In Experiment 1, users who are exposed to an ad on a given day are much more likely to engage in brand-relevant search queries as compared to their recent history for reasons that had nothing to do with the advertisement. In Experiment 2, we show that activity bias occurs for page views across diverse websites. In Experiment 3, we track account sign-ups at a competitor's (of the advertiser) website and find that many more people sign up on the day they saw an advertisement than on other days, but that the true "competitive effect" was minimal. In all three experiments, exposure to a campaign signals doing "more of everything" in a given period of time, making it difficult to find a suitable "matched control" using prior behavior. In such cases, the "match" is fundamentally different from the exposed group, and we show how and why observational methods lead to a massive overestimate of adfx in such circumstances.
['Randall A. Lewis', 'Justin M. Rao', 'David Reiley']
Here, there, and everywhere: correlated online behaviors can lead to overestimates of the effects of advertising
505,668
We propose and study a class of transmit beamforming techniques for systems with multiple transmit and multiple receive antennas with a per-antenna transmit power constraint. The per-antenna transmit power constraint is more realistic than the widely used total (across all transmit antennas) power constraint, since in practice each transmit antenna is driven by a separate power amplifier with a maximum power rating. Under the per-antenna power constraint, from an implementation perspective, it becomes desirable to vary only the phases (as opposed to both power and phase variation) of the signals departing from the transmit antennas. We name this class of techniques generalized co-phasing and formulate an optimization problem to calculate the transmit antenna phases. Furthermore, we propose five heuristic algorithms to solve the optimization problem. All the proposed algorithms except one are optimal for the case of two transmit antennas and an arbitrary number of receive antennas. For an arbitrary number of transmit and receive antennas, simulations indicate that the proposed algorithms perform very close to the optimal solution calculated through an exhaustive search of all possible transmit phases.
['Jungwon Lee', 'Rohit U. Nabar', 'Jihwan P. Choi', 'Hui-Ling Lou']
Generalized co-phasing for multiple transmit and receive antennas
315,582
Background: Fluorescent and luminescent reporter genes have become popular tools for the real-time monitoring of gene expression in living cells. However, mathematical models are necessary for extracting biologically meaningful quantities from the primary data.
['Hidde de Jong', 'Caroline Ranquet', 'Delphine Ropers', 'Corinne Pinel', 'Johannes Geiselmann']
Experimental and computational validation of models of fluorescent and luminescent reporter genes in bacteria.
501,276
It is shown that the paradoxes of quantum theory can be traced to an assumption that observers can know which physical degrees of freedom are causally responsible for each of their experiences. The untenability of this assumption is demonstrated, and a quantum theory without it is proposed. Removing this assumption from quantum theory sheds new light on the nature of semantics, and perhaps also on the nature of freedom.
['Chris Fields']
A whole box of Pandoras: Systems, boundaries and free will in quantum theory
342,102
Outdoor Air Quality Inference from Single Image
['Zheng Zhang', 'Huadong Ma', 'Huiyuan Fu', 'Xinpeng Wang']
Outdoor Air Quality Inference from Single Image
357,508
In two-phase cooperative multicast communications, the unbalanced outage probabilities of the cell-center and cell-edge users are the main factor that caps the energy efficiency of the system. In this paper, we consider the unbalanced outage probability and propose a probability-based relay selection and power control method to improve energy efficiency, in which each user can autonomously decide whether to participate in the relay transmission. In particular, we obtain the optimal solution that minimizes the user power consumption. In addition, since our method works in a distributed manner, it does not require any feedback either between the BS and users, or among the users. This saves the extra energy consumption caused by feedback. Simulation results demonstrate that the proposed method can reduce the user energy consumption by up to 54%.
['Liying Li', 'Guodong Zhao', 'Wuyu Shi', 'Zhi Chen', 'Qi Zhang']
Autonomous relaying scheme for energy-efficient cooperative multicast communications
889,408
“To Listen, Share, and to Be Relevant” - Learning Netiquette by Reflective Practice
['Halvdan Haugsbakken']
“To Listen, Share, and to Be Relevant” - Learning Netiquette by Reflective Practice
876,407
OSAM*.KBMS: an object-oriented knowledge base management system for supporting advanced applications
['Stanley Y. W. Su', 'Herman Lam', 'Srinivasa Eddula', 'Javier Arroyo', 'Neeta Prasad', 'Ronghao Zhuang']
OSAM*.KBMS: an object-oriented knowledge base management system for supporting advanced applications
203,606
New paradigms in industrial robotics no longer require physical separation between robotic manipulators and humans. Moreover, in order to optimize production, humans and robots are expected to collaborate to some extent. In this scenario, involving a shared environment between humans and robots, common motion generation algorithms might turn out to be inadequate for this purpose.
['Andrea Maria Zanchettin', 'Nicola Maria Ceriani', 'Paolo Rocco', 'Hao Ding', 'Bjorn Matthias']
Safety in Human-Robot Collaborative Manufacturing Environments: Metrics and Control
699,026
In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.
['Jesse Levinson', 'Jake Askeland', 'Jan Becker', 'Jennifer Dolson', 'David Held', 'Sören Kammel', 'J. Zico Kolter', 'Dirk Langer', 'Oliver Pink', 'Vaughan R. Pratt', 'Michael Sokolsky', 'Ganymed Stanek', 'David Stavens', 'Alex Teichman', 'Moritz Werling', 'Sebastian Thrun']
Towards fully autonomous driving: Systems and algorithms
212,829
This paper presents an approach to efficient multirobot mapping and exploration which exploits a market architecture in order to maximize information gain while minimizing incurred costs. The system is reliable and robust in that it can accommodate the dynamic introduction and loss of team members, in addition to being able to withstand communication interruptions and failures. Results showing the capabilities of our system on a team of exploring autonomous robots are given.
['Robert Zlot', 'Anthony Stentz', 'M.B. Dias', 'Scott Thayer']
Multi-robot exploration controlled by a market economy
176,427
We propose an efficient and deterministic algorithm for computing the one-dimensional dilation and erosion (max and min) sliding window filters. For a p-element sliding window, our algorithm computes the 1D filter using 1.5 + o(1) comparisons per sample point. Our algorithm constitutes a deterministic improvement over the best previously known such algorithm, independently developed by van Herk (1992) and by Gil and Werman (1993) (the HGW algorithm). Also, the results presented in this paper constitute an improvement over the Gevorkian et al. (1997) (GAA) variant of the HGW algorithm. The improvement over the GAA variant is also in the computation model. The GAA algorithm makes the assumption that the input is independently and identically distributed (the i.i.d. assumption), whereas our main result is deterministic. We also deal with the problem of computing the dilation and erosion filters simultaneously, as required, e.g., for computing the unbiased morphological edge. In the case of i.i.d. inputs, we show that this simultaneous computation can be done more efficiently than separately computing each. We then turn to the opening filter, defined as the application of the min filter to the max filter, and give an efficient algorithm for its computation. Specifically, this algorithm is only slightly slower than the computation of just the max filter. The improved algorithms are readily generalized to two dimensions (for a rectangular window), as well as to any higher finite dimension (for a hyperbox window), with the number of comparisons per window remaining constant. For the sake of concreteness, we also make a few comments on implementation considerations in a contemporary programming language.
['Joseph Gil', 'Ron Kimmel']
Efficient dilation, erosion, opening, and closing algorithms
233,621
Effective processing of source data matched to appropriate visualisation can greatly enhance the user’s ability to explore and comprehend complex information. While this is a fundamental problem for many domains, in medical applications it is particularly important. Non-invasive scanning technologies, such as MRI, have greatly enhanced our ability to ‘image’ the internal body; however, the resultant visualisation is often difficult to comprehend due to both inadequacies in the scanning process and sub-optimal approaches to visualisation and data representation. These factors impose significant cognitive load on the user, requiring skill, experience, and intense concentration to accurately comprehend detail and, in less experienced users, to understand the structures present. Our broader research aims to identify whether 3D representations of MRI data sets offer a more intuitive means of viewing the data and thereby enable easier understanding and comprehension of the scanned body region. As part of this research we have constructed a 3D MRI viewing application, raaMediVol, which utilises recent developments in 3D computer graphics hardware to present an interactive environment that enables the user to view both traditional 2D slice representations and an enhanced 3D volumetric form that is freely explorable and configurable, both on traditional 2D computer desktop displays and within Immersive Projection Technologies (IPTs)[1]. Initial evaluation of the two representational paradigms has been undertaken through the comparative assessment of experienced clinicians’ performance in diagnosing a range of soft tissue pathologies within the shoulder, displayed in both traditional 2D slice and evolved 3D volumetric representational form. An overview of the application, its technical operation, and the results of the evaluation trials are presented.
['Rob Aspin', 'Matt Smith', 'M. A. Nazar', 'Charles Hutchinson', 'Len Funk']
MediVol: A Practical Initial Evaluation of Refined, 3D, Interactive Volumetric Representations of Soft Tissue Pathologies in Virtual Environments
377,070
We consider geographically distributed data centers forming a collectively managed cloud computing system, hosting multiple Service Oriented Architecture (SOA) based context aware applications, each subject to Service Level Agreements (SLA). The Service Level Agreements for each context aware application require the response time of a certain percentile of the input requests to be less than a specified value for a profit to be charged by the cloud provider. We present a novel approach of data-oriented dynamic service-request allocation with gi-FIFO scheduling, in each of the geographically distributed data centers, to globally increase the profit charged by the cloud computing system. Our evaluation shows that our dynamic scheme far outperforms the commonly deployed static allocation with either First in First Out (FIFO) or Weighted Round Robin (WRR) scheduling.
['Keerthana Boloor', 'Rada Chirkova', 'Yannis Viniotis', 'Tiia J. Salo']
Dynamic Request Allocation and Scheduling for Context Aware Applications Subject to a Percentile Response Time SLA in a Distributed Cloud
283,466
We developed Hi-sap, a Web server system that ensures the security in a server and has high performance when processing dynamic content. In existing servers, server embedded programs cannot be used safely in large-scale environments like a shared hosting service. These problems occur because server processes run under the privilege of an identical user. For example, server embedded interpreters are commonly used to improve performance in processing dynamic content, like Weblogs and wikis. However, other customers that share the same server can steal, delete, and tamper with data files of Weblogs and wikis. To solve these problems, we designed a new Web server system, Hi-sap. In the system, Web objects that are stored in a server are divided into partitions. Server processes run under the privilege of different users in every partition. We implemented Hi-sap on a Linux OS and tested the effectiveness of the system. Experimental results show that Hi-sap has high performance and scalability.
['Daisuke Hara', 'Yasuichi Nakayama']
Secure and high-performance Web server system for shared hosting service
514,810
The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task.
['Jack Grinband', 'Judith Savitskaya', 'Tor D. Wager', 'Tobias Teichert', 'Vincent P. Ferrera', 'Joy Hirsch']
The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood.
72,406
FLEXIBLE ADVANCE-RESERVATION (FAR) FOR CLOUDS
['José Luis Lucas', 'Carmen Carrión', 'Blanca Caminero']
FLEXIBLE ADVANCE-RESERVATION (FAR) FOR CLOUDS
737,654
We present a new procedure for testing satisfiability (over the reals) of a conjunction of polynomial equations. There are three possible return values for our procedure: it either returns a model for the input formula, or it says that the input is unsatisfiable, or it fails because the applicability condition for the procedure, called the eigen-condition, is violated. For the class of constraints where the eigen-condition holds, our procedure is a decision procedure. We describe satisfiability-preserving transformations that can potentially convert problems into a form where eigen-condition holds. We experimentally evaluate the procedure on constraints generated by template-based verification and synthesis procedures.
['Ashish Tiwari', 'Patrick Lincoln']
A search-based procedure for nonlinear real arithmetic
714,516
Data-driven clock gating reduces the total power consumption of VLSI chips by 20%. There, flip-flops are grouped and share a common clock signal. Finding the optimal clusters is the key to maximizing the power savings. Clustering by the minimal cost perfect graph matching algorithm (MCPM) proposed in other works is not optimal. We show that the optimal clustering problem is NP-hard, and study the quality of the MCPM heuristic, showing by experiments that it falls 5% above the optimal solution.
['Shmuel Wimer']
On optimal flip-flop grouping for VLSI power minimization
98,687
Complex "system-of-systems" architectures are subject to a myriad of issues arising from the dynamic inter-operability these systems are intended to provide. Many of these issues can be addressed or avoided by considering the messaging interactions between system nodes prior to and during the construction of the component systems. Standardization of messages and interfaces is an ideal way to provide a consistent, vendor agnostic vehicle for interaction and interoperability of systems in this class of complex architectures.
['Michael A. Corsello']
System-of-Systems Architectural Considerations for Complex Environments and Evolving Requirements
161,541
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE prediction models have two main limitations: (1) insufficient consideration of the factors influencing QoE, and (2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted as A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users' acceptability and pleasantness in various mobile video usage scenarios. Statistical nonlinear regression analysis has been used to build the models with a group of influencing factors as independent predictors, which include encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective Video Quality Assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery strategies.
['Wei Song', 'Dian Tjondronegoro']
Acceptability-Based QoE Models for Mobile Video
6,841
Web 2.0 and online social networking websites today heavily affect most online activities, and their effect on tourism is clearly important. This paper aims at verifying the impact that online social networks (OSNs) have on the popularity of tourism websites. Two OSNs were considered: Facebook and Twitter. The pattern of visits to a sample of Italian tourism websites was analysed, and the relationship between total visits and those having the OSNs as referrals was measured. The analysis shows a clear correlation and confirms the starting hypothesis. Consequences and implications of these outcomes are discussed.
['Roberta Saramella Milano', 'Rodolfo Baggio', 'Robert Piattelli']
The effects of online social media on tourism websites
559,962
Gathering evidence about whether a search result is relevant is a core concern in the evaluation and improvement of information retrieval systems. Two common sources of evidence for establishing relevance are judgements from trained assessors and logs of online user behavior. However, both are limited; it is hard for a trained assessor to know exactly what users want to find, and user behavior only provides an implicit and ambiguous signal. In this paper, we aim to address these limitations by collecting explicit feedback on web search results from users in situ as they search. When users return to the search result page via the browser back button after having clicked on a result, we ask them to provide a binary thumbs up or thumbs down judgment and text feedback. We collect in situ feedback from a large commercial search engine, and compare this feedback with the judgments provided by trained assessors. We find that in situ feedback differs significantly from traditional relevance judgments, and that it suggests a different interpretation of behavior signals, with the dwell time threshold between negative and positive in situ feedback being 87 seconds, longer than the more common heuristic of 30 seconds. Using text feedback from users, we discuss why user feedback may differ from editorial judgments.
['Jin Young Kim', 'Jaime Teevan', 'Nick Craswell']
Explicit In Situ User Feedback for Web Search Results
830,214
This study explored the suitability of the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectrometer (MODIS), obtained for six sugar management zones over nine years (2002-2010), to forecast sugarcane yield on an annual and zonal basis. To take into account the characteristics of the sugarcane crop management (15-month cycle for a ratoon, accompanied by continuous harvest in Western Kenya), the temporal series of NDVI was normalized through an original weighting method that considered the growth period of the sugarcane crop (wNDVI), and correlated it with historical yield datasets. Results when using wNDVI were consistent with historical yield and significant at P-value = 0.001, while results when using traditional annual NDVI integrated over the calendar year were not significant. This correlation between yield and wNDVI is mainly driven by the spatial dimension of the data set (R² = 0.53, when all years are aggregated together), rather than by the temporal dimension of the data set (R² = 0.1, when all zones are aggregated). A test on 2012 yield estimation with this model realized an RMSE of less than 5 t·ha⁻¹. Despite progress in the methodology through the weighted
['Betty Mulianga', 'Agnès Bégué', 'Margareth Simoes', 'Pierre Todoroff']
Forecasting Regional Sugarcane Yield Based on Time Integral and Spatial Aggregation of MODIS NDVI
273,943
In the hybrid fiber/coax (HFC) architecture, several hundred subscribers in a CATV (community antenna TV) network may cause serious collisions. In this paper, we propose a new network architecture in which an intelligent node (IN) stands in for a group of subscribers to request the demanded resources. The IN is able to reduce both the collision probability and the collision-resolution period. Simulation results show that the proposed architecture outperforms the standard architecture in terms of throughput, buffer delay, and fairness.
['Shiann-Tsong Sheu', 'Meng-Hong Chen']
A new network architecture with intelligent node (IN) to enhance IEEE 802.14 HFC networks
182
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (\(R=0.98\) for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining \(R=0.73\) compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to \(R=0.93\). Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple \(\hbox {p}K_{\text {a}}\) correction improved agreement with experiment from \(R=0.54\) to \(R=0.66\), despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
['Tyler Luchko', 'Nikolay Blinov', 'Garrett C. Limon', 'Kevin P. Joyce', 'Andriy Kovalenko']
SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling
868,356
This paper presents an unconventional tri-rotor unmanned aerial vehicle (UAV) design that only employs three brushless motors with three fixed pitch propellers for propulsion and flight control. No additional mechanics for dynamically tilting motor(s), found on existing tricopters, are used. The dynamic model of the proposed system is developed. Then a control strategy to control the stability and manoeuvring of the UAV is presented. The control strategy is achieved by only manipulating the rotational speeds of the propellers at each rotor. Two of the rotors rotate in the same direction while the third rotates in the opposite direction. The control methodology is novel compared to other systems that require either coaxial rotors or an extra servo motor to control the yaw of the UAV. Results show that the proposed design achieved stable flight with minimal position-attitude cross control effect. The fixed nature of the rotors in the proposed design, reduced mechanical requirements and cost compared to exis...
['Belal H. Sababha', "Hamzeh M. Al Zu'bi", 'Osamah Rawashdeh']
A rotor-tilt-free tricopter UAV: design, modelling, and stability control
716,921
Layer-Based Approach for Image Pair Fusion
['Chang-Hwan Son', 'Xiao-Ping Zhang']
Layer-Based Approach for Image Pair Fusion
714,972
In the field of computer graphics, simulation of physical phenomena is of great interest. We focus on the optical effects of soap bubbles. Soap bubbles have fascinating coloration and interesting physical properties, which makes them useful in entertainment applications such as movies and games. Soap bubbles change shape under surface tension and external forces, and their surface thickness changes accordingly. Since the thickness of a soap bubble is several hundred nanometers, interference of light occurs. This paper proposes a fast rendering method for soap bubbles that takes into account light interference and dynamics. In our method, the reflectivities of the thin film that causes the light interference are calculated in advance and stored as textures. This makes it possible to render deformable soap bubbles in real time.
['Kei Iwasaki', 'Keichi Matsuzawa', 'Tomoyuki Nishita']
Real-time rendering of soap bubbles taking into account light interference
53,747
"Air Haptics" is a visuo-haptic Augmented Reality (AR) system which gives us a sense of interacting with AR objects and characters like pinching and pulling, without any haptic devices, only using an effect of visuo-haptic interaction.
['Yuki Ban', 'Takuji Narumi', 'Tomohiro Tanikawa', 'Michitaka Hirose']
Air haptics: displaying feeling of contact with AR object using visuo-haptic interaction
224,206
Understanding recombination is a central problem in population genetics. In this paper, we address an established problem in computational biology: computing lower bounds on the minimum number of historical recombinations needed to generate a set of sequences [11,13,9,1,2,15]. In particular, we propose a new recombination lower bound: the forest bound. We show that the forest bound can be formulated as the minimum perfect phylogenetic forest problem, a natural extension of the classic binary perfect phylogeny problem, which may be of interest in its own right. We then show that the forest bound is provably higher than the optimal haplotype bound [13], a very good lower bound in practice [15]. We prove that, like several other lower bounds [2], computing the forest bound is NP-hard. Finally, we describe an integer linear programming (ILP) formulation that computes the forest bound exactly for a certain range of data. Simulation results show that the forest bound may be useful in computing lower bounds for low-quality data.
['Yufeng Wu', 'Dan Gusfield']
A new recombination lower bound and the minimum perfect phylogenetic forest problem
63,794
Video-on-Demand (VoD) services have achieved great success recently. Most such streaming systems are P2P-CDN hybrid systems. To ensure reliable performance, the most efficient way is to subject those VoD streaming networks to large-scale, realistic performance evaluations. Our previous Shadow Stream system is a production Internet live streaming network with performance evaluation as a built-in capability. In this paper, we extend the same idea to VoD services. There are significant differences between live streaming and VoD, so Shadow Stream cannot be used directly in the VoD context. First, clients in a P2P VoD service are not synchronized in viewing progress; second, VoD involves interactive operations (e.g., pause and drag); third, users' differing play points also make it difficult to replace departed real clients. In this paper, we solve all of the above challenges. We implement Shadow VoD and demonstrate its benefits through extensive evaluations.
['Hanzi Mao', 'Chen Tian', 'Jingdong Sun', 'Junhua Yan', 'Weimin Wu', 'Benxiong Huang']
Shadow VoD: Performance Evaluation as a Capability in Production P2P-CDN Hybrid VoD Networks
49,287
The use case concept is a tool for capturing the requirements of a system. A single use case describes a subset of a system's functionality in terms of the interactions between the system and a set of users or actors. A use case is initiated by a particular user, and serves the purpose of delivering some meaningful unit of work, service, or value to the initiator. When capturing requirements, a use case views the system as a black box. Due to their popularity, the concept of use cases has been abused to some extent, and been applied to specifying the "requirements" of all sorts of things, such as those of a subsystem of the system architecture. Cockburn (1997) acknowledges 18 different definitions of use cases. This has created a great deal of confusion and a need for clear definitions. Just what is a "use case"? We seek to answer that question, by providing a specification of a use case and its related concepts using the Z formalism.
['Greg Butler', 'Peter Grogono', 'Ferhat Khendek']
A Z specification of use cases: a preliminary report
21,088
In this paper, a framework for designing a low-error and power-efficient two's-complement fixed-width Booth multiplier, which receives two n-bit numbers and produces an n-bit product, is proposed. The four-step design methodology of the framework yields an improved error-compensation bias. This bias can be mapped to a simple low-error fixed-width Booth multiplier with a small penalty in power consumption. For the benchmark of 8×8 multipliers, simulation results show a reduction of 82.04% in average error compared to the direct-truncated fixed-width Booth multiplier. Moreover, power consumption can be reduced by 40.68% compared to the full-precision Booth multiplier design.
['Min An Song', 'Lan-Da Van', 'Chih-Chyau Yang', 'Shih-Chieh Chiu', 'Sy-Yen Kuo']
A framework for the design of error-aware power-efficient fixed-width Booth multipliers
73,127
This paper uses active learning to solve the problem of mining signal temporal requirements of cyber-physical systems, or simply the requirement mining problem. By utilizing the robustness degree, we formulate the requirement mining problem as an optimization problem. We then propose a new active learning algorithm called Gaussian Process Adaptive Confidence Bound (GP-ACB) to help solve the optimization problem. We show theoretically that the GP-ACB algorithm has a lower regret bound, and thus a faster convergence rate, than some existing active learning algorithms, such as GP-UCB. We finally illustrate and apply our requirement mining algorithm in two case studies: the Ackley function and a real-world automotive power steering model. Our results demonstrate that there is a principled and efficient way of extracting requirements for complex cyber-physical systems.
['G. Chen', 'Zachary Sabato', 'Zhaodan Kong']
Active learning based requirement mining for cyber-physical systems
975,689
The first four papers in this session focus on wireless sensing applications. They can be powered by harvesting energy from RF, piezoelectric, photovoltaic and thermoelectric energy sources. Many innovations focus on optimization of cold-start conditions in no-battery or dead-battery situations. Several techniques are used to minimize the number and size of external components such as inductors and capacitors. Maximum-Power-Point Tracking (MPPT) remains an important feature that needs optimization. The last four papers in the session focus on the fast-growing market of wireless charging for cell phones and wearables. The emphasis here is to comply with all the available standards (A4WP, Qi and PMA) with a single solution.
['Anton Bakker', 'Yuan Gao']
Session 21 overview: Harvesting and wireless power
654,230
The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The internal representation is distributed throughout the brain in two forms: (1) firmware embedded in synaptic connections and axon-dendrite circuitry, and (2) dynamic state-variables encoded in the firing rates of neurons in computational loops in the spinal cord, midbrain, subcortical nuclei, and arrays of cortical columns. The model assumes that clusters and arrays of neurons are capable of computing logical predicates, smooth arithmetic functions, and matrix transformations over a space defined by large input vectors and arrays. Feedback from output to input of these neural computational units enables them to function as finite-state automata (FSA), Markov decision processes (MDP), or delay lines in processing signals and generating strings and grammars. Thus, clusters of neurons are capable of parsing and generating language, decomposing tasks, generating plans, and executing scripts. In the cortex, neurons are arranged in arrays of cortical columns that interact in tight loops with their underlying subcortical nuclei. It is hypothesized that these circuits compute sophisticated mathematical and logical functions that maintain and use complex abstract data structures. It is proposed that cortical hypercolumns together with their underlying thalamic nuclei can be modeled as a cortical computational unit (CCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it and use it for perception, cognition, and sensory-motor behavior. In sensory processing areas of the brain, CCU processes enable focus of attention, segmentation, grouping, and classification.
Pointers stored in CCU frames define relationships that link pixels and signals to objects and events in situations and episodes. CCU frame pointers also link objects and events to class prototypes and overlay them with meaning and emotional values. In behavior generating areas of the brain, CCU processes make decisions, set goals and priorities, generate plans, and control behavior. In general, CCU pointers are used to define rules, grammars, procedures, plans, and behaviors. CCU pointers also define abstract data structures analogous to lists, frames, objects, classes, rules, plans, and semantic nets. It is suggested that it may be possible to reverse engineer the human brain at the CCU level of fidelity using next-generation massively parallel computer hardware and software.
['James S. Albus']
A model of computation and representation in the brain
52,200
Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Secondly, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Thirdly, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler-based approach. 
This result shows that the proposed crawler has the ability to discover WMSs hidden in JavaScript code. The second experiment uses 4842 seed URLs updated daily. The crawler found a total of 17,874 available WMSs, of which 11,901 were LCWMSs. Our approach discovered a greater number of services than those found using previous approaches. It indicates that the proposed crawler has a large advantage in discovering LCWMSs from the surface web and from JavaScript code. Furthermore, a simple case study demonstrates that the designed LCWMS search engine represents an important step towards realizing land cover information integration for global mapping and monitoring purposes.
['Dongyang Hou', 'Jun Chen', 'Hao Wu']
Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules
832,910
ArchFeature: A Modeling Environment Integrating Features into Product Line Architecture.
['Gharib Gharibi', 'Yongjie Zheng']
ArchFeature: A Modeling Environment Integrating Features into Product Line Architecture.
986,313
The interest in Bluetooth technology has stimulated much research on algorithms for topology creation and control of networks comprising large numbers of Bluetooth devices. In particular, the issue of scatternet formation has been addressed in a number of papers in the technical literature. This paper is an extension of the work presented in [14, 15]. We present a complete description of what we believe to be a promising scatternet formation protocol, BlueNet, which was first proposed in [15]. Some modifications and enhancements are made to improve the connectivity of the resulting scatternets. Metrics are chosen to evaluate the performance of the resulting scatternets, such as reliability, routing efficiency, piconet density, and information-carrying capacity. Based on these metrics, performance is then compared among scatternet samples generated by BlueNet and two other representative multi-hop scatternet formation protocols, BlueTrees [16] and LSBS [1]. Finally, the conclusion discusses the compared scatternet formation protocols.
['Zhifang Wang', 'Robert J. Thomas', 'Zygmunt J. Haas']
Performance comparison of Bluetooth scatternet formation protocols for multi-hop networks
27,869
Network Transmission of 3D Mesh Data Using Progressive Representation
['Krzysztof Skabek', 'Łukasz Ząbik']
Network Transmission of 3D Mesh Data Using Progressive Representation
618,121
Current neuroprosthetic systems based on electrophysiological recording have an extended, yet finite working lifetime. Some posited lifetime-extension solutions involve improving device biocompatibility or suppressing host immune responses. Our objective was to test an alternative solution comprised of applying a voltage pulse to a microelectrode site, herein termed "rejuvenation." Previously, investigators have reported preliminary electrophysiological results by utilizing a similar voltage pulse. In this study we sought to further explore this phenomenon via two methods: 1) electrophysiology; 2) an equivalent circuit model applied to impedance spectroscopy data. The experiments were conducted via chronically implanted silicon-substrate iridium microelectrode arrays in the rat cortex. Rejuvenation voltages resulted in increased unit recording signal-to-noise ratios (10% ± 2%), with a maximal increase of 195% from 3.74 to 11.02. Rejuvenation also reduced the electrode site impedances at 1 kHz (67% ± 2%). Neither the impedance nor recording properties of the electrodes changed on neighboring microelectrode sites that were not rejuvenated. In the equivalent circuit model, we found a transient increase in conductivity, the majority of which corresponded to a decrease in the tissue resistance component (44% ± 7%). These findings suggest that rejuvenation may be an intervention strategy to prolong the functional lifetime of chronically implanted microelectrodes.
['Kevin J. Otto', 'Matthew D. Johnson', 'Daryl R. Kipke']
Voltage pulses change neural interface properties and improve unit recordings with chronically implanted microelectrodes
89,409
A Flexible Infrastructure for Large Monolingual Corpora.
['Uwe Quasthoff', 'Christian Wolff']
A Flexible Infrastructure for Large Monolingual Corpora.
795,126
The EATCS Council meeting took place over lunch on the 12th and the 13th of July 2016 during ICALP 2016 in Rome. This piece provides a brief and informal report on the discussions that took place at the meeting for the benefit of our members.
['Luca Aceto']
Report on The EATCS Council Meeting
799,166
Conventions and etiquettes evolve or are developed for technologically mediated modes of communication. The Internet is now over a decade old, and the use of its predecessor networks and multiuser computer systems for computer-mediated communication (CMC) dates back to the 1960s. However, although the Internet is increasingly a feature of the global popular culture as a concept, and in the construction of its various associated images, its systematic use for CMC has only recently begun to extend beyond the sub-cultural. Although at its deeper social and psychological levels the use and effects of CMC present a fertile new field of investigation for communication researchers, in many of its general aspects conventions for courteous and effective networking behaviour have long been codified and widely documented. The present paper identifies some of the main electronic and print sources of netiquette advice and provides a digest of their consensus.
['George McMurdo']
Netiquettes for networkers
403,265
Cognitive Effective Modeling Using Tablets.
['Jeannette Stark', 'Martin Burwitz', 'Richard Braun', 'Werner Esswein']
Cognitive Effective Modeling Using Tablets.
748,415
The family of FIR digital filters with maximally flat magnitude and group delay response is considered. The filters were proposed by Baher (1982), who furnished them with an analytic procedure for derivation of their transfer function. The contributions of this paper are the following. A simplified formula is presented for the transfer function of the filters. The equivalence of the novel formula with a formula that is derived from Baher's analytical procedure is proved using a modern method for automatic proof of identities involving binomial coefficients. The universality of Baher's filters is then established by proving that they include linear-phase filters, generalized half-band filters, and fractional delay systems. In this way, several classes of maximally flat filters are unified under a single formula. The generating function of the filters is also derived. This enables us to develop multiplierless cellular array structures for exact realization of a subset of the filters. The subset that enjoys such multiplierless realizations includes linear-phase filters, some nonsymmetric filters, and generalized halfband filters. A procedure for designing the cellular array structures is also presented.
['Saed Samadi', 'Akinori Nishihara', 'Hiroshi Iwakura']
Universal maximally flat lowpass FIR systems
16,984
The nature and amount of information needed for learning a natural language, and the underlying mechanisms involved in this process, are the subject of much debate: is it possible to learn a language from usage data only, or some sort of innate knowledge and/or bias is needed to boost the process? This is a topic of interest to (psycho)linguists who study human language acquisition, as well as computational linguists who develop the knowledge sources necessary for largescale natural language processing systems. Children are a source of inspiration for any such study of language learnability. They learn language with ease, and their acquired knowledge of language is flexible and robust.
['Afra Alishahi']
Computational Modeling of Human Language Acquisition
194,217
In this paper, we describe the structure of a multiprocessor control system for a gas sensing array, inspired by the IEEE 1451 standard. The system (Smart Transducer Interface Module (STIM)) features a simplified Transducer Independent Interface (TII) based on 3-wire RS232 asynchronous communication and is conceived as a cluster of identical monosensor subsystems and a central Controller. After a brief illustration of the gas sensor array, an overview of the modular system architecture, the basic monosensor modules, and the simplified TII is given. Finally, we illustrate the characterisation of the system performance and the experimental results obtained with the proposed gas sensing array.
['Lucia Bissi', 'A. Scorzoni', 'P. Placidi', 'Luca Marrocchi', 'Michele Cicioni', 'L. Roselli', 'S. Zampolli', 'Luca Masini', 'I. Elmi', 'G.C. Cardinali']
A smart gas sensor for environmental monitoring, compliant with the IEEE 1451 standard and featuring a simplified transducer interface
308,744
Online bookstores have thrived and changed consumer behavior in recent years. However, most customers go to online bookstores only when they have specific targets. One reason is that current web interfaces are usually too complex and cluttered for users to browse. In addition, current visualization interfaces only display results associated with a single attribute, thus requiring users to interact intensively to find their targets. Inspired by the user experience (UX) in physical bookstores, we present Bookwall, an online bookstore interface comprising two components, Category Map and Wall View, which enables users to find their targets more efficiently and releases them from the burden of complicated operations. Specifically, the category map produces a map with a "natural" map-like look, providing an overview of the clusters and neighborhood of book categories. The wall view displays query results satisfying dual query attributes simultaneously. The results show that Bookwall provides users with a favourable alternative visualization.
['Hsin-I Chen', 'Wei-Ting Lin', 'Bing-Yu Chen']
Bookwall: Visualizing books online based on user experience in physical bookstores
725,623
Monitoring enlargement of the foveal avascular zone (FAZ) enables physicians to track the progression of diabetic retinopathy (DR). At present, it is difficult to discern the FAZ area and to measure its enlargement objectively using digital fundus images. A semi-automated approach for determining the FAZ from color images has been developed. Here, a binary map of retinal blood vessels is computer-generated from the digital fundus image to determine vessel ends and pathologies surrounding the FAZ for area analysis. The proposed method achieves accuracies from 66.67% to 98.69%, compared to accuracies of 18.13% to 95.07% obtained by manual segmentation of FAZ regions from digital fundus images.
['M. H. Ahmad Fadzil', 'Lila Iznita Izhar', 'Hanung Adi Nugroho']
Determination of foveal avascular zone in diabetic retinopathy digital fundus images
13,569
The crossing number of a graph G is the minimum number of pairwise intersections of edges in a drawing of G. Motivated by the recent work [Faria, L., Figueiredo, C.M.H. de, Sykora, O., Vrt'o, I.: An improved upper bound on the crossing number of the hypercube. J. Graph Theory 59, 145-161 (2008)], we give an upper bound of the crossing number of n-dimensional bubble-sort graph Bn.
['Baigong Zheng', 'Yuansheng Yang', 'Xirong Xu']
An upper bound for the crossing number of bubble-sort graph Bn
627,428
Motivation: Circular RNAs (circRNAs) are an abundant class of highly stable RNAs that can affect gene regulation by binding and preventing microRNAs (miRNAs) from regulating their messenger RNA (mRNA) targets. Mammals have thousands of circRNAs with predicted miRNA binding sites, but only two circRNAs have been verified as actual miRNA sponges. As it is unclear whether these thousands of predicted miRNA binding sites are functional, we investigated whether miRNA seed sites within human circRNAs are under selective pressure. Results: Using SNP data from the 1000 Genomes Project, we found a significant decrease in SNP density at miRNA seed sites compared with flanking sequences and random sites. This decrease was similar to that of miRNA seed sites in 3' untranslated regions, suggesting that many of the predicted miRNA binding sites in circRNAs are functional and under similar selective pressure as miRNA binding sites in mRNAs. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Laurent F. Thomas', 'Pål Sætrom']
Circular RNAs are depleted of polymorphisms at microRNA binding sites.
373,696
Industrial systems are more and more dominated by software. In addition, this software is present across several compute domains, from decentralized edge to centralized datacenters and clouds. While the productivity of application lifecycle management has tremendously improved in the area of cloud computing, there is no homogeneous and seamless environment to build, deploy and operate software across these domains. This leads to separation, inefficient processes, and duplicate efforts to provide software running across different layers. This poster introduces the concept of Continuous Computing, which provides a seamless computing environment for multi-domain applications, supporting the mobility of workloads across compute domains, from cloud to edge. The chosen top down approach is based on transferring established, de-facto standard cloud computing technologies to the resource-constrained compute environments in the edge. The objective is to support multi-domain applications by enabling workload mobility, i.e. the ability to locate and relocate application components within and across compute domains. We present a functional reference model for Continuous Computing and validate existing cloud technologies with respect to the match of their capabilities.
['Harald Mueller', 'Spyridon V. Gogouvitis', 'Houssam Haitof', 'Andreas Seitz', 'Bernd Bruegge']
Poster Abstract: Continuous Computing from Cloud to Edge
955,034
In this paper, we will explore the use of autonomous agent-based systems to counter asymmetric threats from non-state sponsored terror organizations.
['Gregory O. Gibson', 'Paul Hyden']
Using Autonomous Agent-Based Systems to Counter Asymmetric Threats from Non-State Sponsored Terror Organizations.
768,856
Modeling and Controlling Friendliness for An Interactive Museum Robot
['Chien-Ming Huang', 'Takamasa Iio', 'Satoru Satake', 'Takayuki Kanda']
Modeling and Controlling Friendliness for An Interactive Museum Robot
675,832
This research invokes two theoretical perspectives—the equalization hypothesis and the SIDE model—to examine the impact of individuals' sex on group members' use of anonymous, computer-mediated collaborative technologies. Data from 127 individuals in 22 enduring task groups indicate that the strategies employed differentially by men and women correspond with inferred motivations: men are more likely to seek ways to make computer-mediated interactions more like a face-to-face interaction with women, whereas women are more likely to employ strategies that maintain the reduced social cues of computer-mediated communication and afford them greater potential influence in mixed-sex interactions. The integration of theories previously regarded as oppositional, and the empirical support of hypotheses derived from these perspectives, suggest a richer, more complex view of technological support of group work at a time when collaborative technologies are increasingly important, given shifts toward more dispersed, gl...
['Andrew J. Flanagin', 'Vanessa Tiyaamornwong', "Joan O'Connor", 'David R. Seibold']
Computer-Mediated Group Work: The Interaction of Sex and Anonymity
265,382
With the feature of convenience and low cost, remote healthcare monitoring (RHM) has been extensively used in modern disease management to improve the quality of life. Due to the privacy of health data, it is of great importance to implement RHM based on a secure and dependable network. However, the network connectivity of existing RHM systems is unreliable in disaster area because of the unforeseeable damage to the communication infrastructure. To design a secure RHM system in disaster area, this paper presents a Secure VANET-Assisted Remote Healthcare Monitoring System (SVC) by utilizing the unique “store-carry-forward” transmission mode of vehicular ad hoc network (VANET). To improve the network performance, the VANET in SVC is designed to be a two-level network consisting of two kinds of vehicles. Specially, an innovative two-level key management model by mixing certificate-based cryptography and ID-based cryptography is customized to manage the trust of vehicles. In addition, the strong privacy of the health information including context privacy is taken into account in our scheme by combining searchable public-key encryption and broadcast techniques. Finally, comprehensive security and performance analysis demonstrate the scheme is secure and efficient.
['Xuefeng Liu', 'Hanyu Quan', 'Yuqing Zhang', 'Qianqian Zhao', 'Ling Liu']
SVC : Secure VANET-Assisted Remote Healthcare Monitoring System in Disaster Area
827,920