FileName: string, length 17–17
Abstract: string, length 163–6.01k
Title: string, length 12–421
S0045790616300337
The Internet of Things (IoT) has many applications, including the Smart Grid (SG). IoT enables smooth and efficient utilization of the SG, which is currently regarded as one of the most prominent realizations of IoT. IP-based communication technologies are used to set up the SG communication network, but they are challenged by the huge volume of delay-sensitive data and control information exchanged between consumers and utility providers, as well as by numerous security attacks arising from the resource constraints of smart meters. The various schemes proposed to address these problems are unsuitable because of their high communication and computation overhead and latency. In this paper, we propose a hybrid Diffie–Hellman based lightweight authentication scheme that uses AES and RSA for session key generation. To ensure message integrity, the advantages of hash-based message authentication codes are exploited. The scheme provides mutual authentication, thwarts replay and man-in-the-middle attacks, and achieves message integrity, while reducing the overall communication and computation overhead.
A lightweight message authentication scheme for Smart Grid communications in power sector
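As an illustration of the HMAC-based integrity step mentioned in the abstract above, the following is a minimal sketch assuming a 256-bit session key has already been agreed (e.g., via the hybrid Diffie–Hellman exchange); the key, message fields, and function names are hypothetical and not taken from the paper.

```python
import hmac
import hashlib
import os

def make_tag(session_key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a smart-meter message."""
    return hmac.new(session_key, message, hashlib.sha256).digest()

def verify_tag(session_key: bytes, message: bytes, tag: bytes) -> bool:
    """Verify a tag using a constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(make_tag(session_key, message), tag)

# Hypothetical usage: session_key would come from the DH/AES/RSA key agreement.
session_key = os.urandom(32)
reading = b"meter_id=42;kWh=3.174;ts=1700000000"
tag = make_tag(session_key, reading)
assert verify_tag(session_key, reading, tag)              # accepted
assert not verify_tag(session_key, reading + b"x", tag)   # tampering detected
```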
S0045790616300349
The electricity generation crisis in Egypt calls for studying the possibility of using renewable energy as a way to solve this problem. Renewable energy also reduces the pollution resulting from traditional methods of electricity generation. The main objective of this paper is the dynamic study of a stand-alone wind turbine based self-excited induction generator (SEIG) under nonlinear resistive loads at a fixed pitch angle and different wind speeds. The approach is based on the dynamic equations of the SEIG, the turbine, and the nonlinear resistive loads, implemented in MATLAB/SIMULINK. The dynamic study of the isolated wind turbine based SEIG under nonlinear resistive loads indicates that the system is reliable, dependable, and satisfactory. The results suggest that the system can be used as a supporting source for the unified network, as a step toward solving the electricity generation crisis in Egypt, while also providing clean energy.
Study of wind turbine based self-excited induction generator under nonlinear resistive loads as a step to solve the Egypt electricity crisis
S0045790616300350
Due to limited network bandwidth, a noise-robust, low bit-rate compression scheme for Mel frequency cepstral coefficients (MFCCs) is desired for distributed speech recognition (DSR) services. In this paper, we present an efficient MFCC compression method based on weighted least squares (W-LS) polynomial approximation, which exploits the high correlation across consecutive MFCC frames. Polynomial coefficients are quantized with a tree structured vector quantization (TSVQ) based scheme. Recognition experiments are conducted on the noisy Aurora-2 database under both clean and multi-condition training modes. The results show that the proposed W-LS encoder slightly exceeds the ETSI advanced front-end (ETSI-AFE) baseline system for bit-rates ranging from 1400 bps to 1925 bps under the clean training mode, while only a negligible degradation is observed under the multi-condition training mode (around 0.6% and 0.2% at 1400 bps and 1925 bps, respectively). Furthermore, the proposed encoder generally outperforms the ETSI-AFE source encoder at 4400 bps under clean training and provides similar performance, at 1925 bps, under multi-condition training.
An efficient low bit-rate compression scheme of acoustic features for distributed speech recognition
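A rough sketch of the core idea above, fitting a low-order polynomial per cepstral coefficient over a block of consecutive MFCC frames with weighted least squares; the block length, polynomial order, and weights are illustrative assumptions, and the TSVQ quantization stage is omitted.

```python
import numpy as np

def wls_encode_block(mfcc_block, order=3, weights=None):
    """Fit one polynomial per cepstral coefficient across a block of frames.

    mfcc_block: (n_frames, n_ceps) array of consecutive MFCC frames.
    Returns an (order + 1, n_ceps) array of polynomial coefficients.
    """
    n_frames, n_ceps = mfcc_block.shape
    t = np.linspace(-1.0, 1.0, n_frames)            # normalized frame index
    if weights is None:
        weights = np.ones(n_frames)                 # plain LS if unweighted
    # np.polyfit supports weighted least squares through the `w` argument.
    return np.stack([np.polyfit(t, mfcc_block[:, c], order, w=weights)
                     for c in range(n_ceps)], axis=1)

def wls_decode_block(coeffs, n_frames):
    """Reconstruct the MFCC block from the polynomial coefficients."""
    t = np.linspace(-1.0, 1.0, n_frames)
    return np.stack([np.polyval(coeffs[:, c], t)
                     for c in range(coeffs.shape[1])], axis=1)

# Toy example: 10 frames of 13 MFCCs compressed to 4 coefficients each.
block = np.cumsum(np.random.randn(10, 13), axis=0)   # smooth-ish trajectories
recon = wls_decode_block(wls_encode_block(block), n_frames=10)
print("mean absolute reconstruction error:", np.abs(block - recon).mean())
```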
S0045790616300398
Active contours, or snakes, have a wide range of applications in object segmentation; they use an energy-minimizing spline to extract object borders. Classical snakes have several drawbacks, such as sensitivity to the initial contour and convergence to local minima. Many approaches based on active contours have been put forward to address these problems. However, they share the limitation of depending too heavily on the magnitude of the edge gradient while discarding directional information, which can lead to poor convergence toward the object boundary in the presence of strong background edges and cluttered noise. To deal with these issues, we first propose a novel external force, called adaptive edge preserving generalized gradient vector flow based on component-based normalization (CN-AEGGVF), which adaptively adjusts the diffusion process according to the local characteristics of an image and preserves weak edges by adding the image's gradient information. The experimental results show that the new model provides much better results than other approaches in terms of noise robustness, weak edge preservation, and convergence. Secondly, an improved multi-step decision model based on CN-AEGGVF is presented, which adds a new effective weighting function to attenuate the magnitudes of unwanted edges and adopts a narrow-band method to reduce time complexity. The new method is analyzed visually and qualitatively on a natural image dataset. Experimental results and comparisons against other methods show that the proposed method achieves better segmentation accuracy than the comparative approaches.
A novel snake model using new multi-step decision model for complex image segmentation
S0045790616300416
Accurate surface estimation is a critical step for autonomous robot navigation on rough terrain. In this paper, we present a new method for estimating the surface of an unknown, arbitrarily shaped terrain from range data. The terrain modeling problem is generally formulated as the estimation of a function whose zero-set corresponds to the surface to be reconstructed. A laser range scanner has been built for acquisition of the range data. The range data from the scanner samples the terrain unevenly and is sparser for regions distant from the sensor. The paper formulates the surface estimation problem as a max-margin formulation with a non-stationary kernel function and minimizes the objective function using a sub-gradient method. Unlike other methods, additional geometric ray-based information is used to eliminate unnecessary bumps on the surface and increase precision. The experimental results validate the robustness of the proposed approach.
Kernel based approach for accurate surface estimation
S0045790616300532
Inspired by the observation that a healthcare system usually involves various intelligent technologies from different disciplines, especially metaheuristics and data mining, this paper provides a brief survey of metaheuristics for healthcare systems and a roadmap for researchers working on metaheuristics and healthcare to develop more efficient and effective healthcare systems. The paper begins with a discussion of changes for healthcare, followed by a brief review of the features of up-to-date technologies for healthcare. Then, a learnable big data analytics framework for healthcare systems is presented, which provides a high-performance solution to the forthcoming challenges of big data. Finally, changes, potentials, open issues, and future trends of metaheuristics for healthcare are addressed.
Metaheuristic Algorithms for Healthcare: Open Issues and Challenges
S0045790616300568
This paper describes the development of an algorithm for detecting and classifying MRI brain slices as normal or abnormal. The proposed technique relies on the prior knowledge that the two hemispheres of a healthy brain are approximately bilaterally symmetric. We use a modified grey level co-occurrence matrix method to analyze and measure asymmetry between the two brain hemispheres, and 21 co-occurrence statistics are used to discriminate the images. The experimental results demonstrate the efficacy of the proposed algorithm in detecting brain abnormalities with high accuracy and low computational time. The dataset used in the experiments comprises 165 patients, 88 of whom have different brain abnormalities while the remainder exhibit no detectable pathology. The algorithm was tested using ten-fold cross-validation with 10 repetitions to prevent the results from depending on the sample order. The maximum accuracy achieved for brain tumor detection was 97.8%, using a Multi-Layer Perceptron Neural Network.
Automated screening of MRI brain scanning using grey level statistics
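The following is a simplified sketch of this kind of pipeline, computing grey level co-occurrence statistics for the two hemispheres of a slice and feeding their asymmetry to a multi-layer perceptron; it uses only the handful of statistics exposed by scikit-image rather than the paper's 21 statistics, and the data, hemisphere split, and parameters are placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]

def hemisphere_asymmetry_features(slice_img):
    """Absolute difference of GLCM statistics between left/right hemispheres."""
    mid = slice_img.shape[1] // 2
    halves = [slice_img[:, :mid], np.fliplr(slice_img[:, mid:])]
    feats = []
    for half in halves:
        glcm = graycomatrix(half, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean() for p in PROPS])
    return np.abs(np.subtract(*feats))      # asymmetry between hemispheres

# Placeholder data: random 8-bit "slices" with dummy normal/abnormal labels.
slices = np.random.randint(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = np.array([0] * 20 + [1] * 20)
X = np.array([hemisphere_asymmetry_features(s) for s in slices])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
print("10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
```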
S0045790616300593
In this paper, an intelligent fuzzy logic control strategy optimized by a genetic algorithm (GA) is proposed for a uniaxial parallel hybrid electric vehicle (PHEV). In the fuzzy controller, the ratio between the motor target torque and the total demanded torque is the first input variable, the state of charge (SOC) of the battery is the second input variable, and the torque distribution coefficient between the engine and the motor is the output variable. The proposed strategy is compared with the electric auxiliary control strategy. The whole vehicle is modeled based on experimental data, and the strategy is verified in the automotive simulation software ADVISOR. The results show that the proposed control strategy can reduce fuel consumption and emissions, ensure balanced charging and discharging of the battery, effectively avoid the engine producing its peak torque, and improve the overall performance of the vehicle.
Intelligent fuzzy energy management research for a uniaxial parallel hybrid electric vehicle
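Below is an illustrative sketch of a two-input, one-output fuzzy torque-split controller of the kind described above, written with the scikit-fuzzy control API; the universes, membership functions, and rules are invented placeholders, and the GA optimisation of the fuzzy parameters is not shown.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Inputs: motor-torque-to-demand ratio and battery SOC; output: torque split.
ratio = ctrl.Antecedent(np.linspace(0, 1, 101), "torque_ratio")
soc = ctrl.Antecedent(np.linspace(0, 1, 101), "soc")
split = ctrl.Consequent(np.linspace(0, 1, 101), "motor_share")

for var in (ratio, soc, split):
    var["low"] = fuzz.trimf(var.universe, [0.0, 0.0, 0.5])
    var["med"] = fuzz.trimf(var.universe, [0.2, 0.5, 0.8])
    var["high"] = fuzz.trimf(var.universe, [0.5, 1.0, 1.0])

rules = [
    ctrl.Rule(soc["high"] & ratio["high"], split["high"]),   # favour the motor
    ctrl.Rule(soc["med"] & ratio["med"], split["med"]),
    ctrl.Rule(soc["low"], split["low"]),                     # protect the battery
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["torque_ratio"] = 0.7
sim.input["soc"] = 0.8
sim.compute()
print("motor torque share:", sim.output["motor_share"])
```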
S0045790616300611
Retina recognition is among the most stable and reliable biometric systems owing to the uniqueness and non-replicable nature of the vascular pattern. On the other hand, the complexity of the vascular pattern in diseased retinas makes the extraction of blood vessels very hard, which strongly affects the recognition rate. The main aim of this paper is to design a robust retinal recognition system with reduced computational complexity and to explore novel retinal features. The paper presents two different approaches to retinal recognition: vascular-based feature extraction with an improved vessel segmentation algorithm, and non-vascular based feature extraction. The vascular-based method uses vessel properties of retinal images and aims to improve the efficiency of the retinal recognition system, whereas the non-vascular based method analyzes non-vessel properties of retinal images in order to reduce time complexity. The proposed system is assessed on two local and three public databases.
Person identification using vascular and non-vascular retinal features
S0045790616300623
The aim of multi-focus image fusion is to combine several images taken by different sensors and with different focuses so as to increase the perception of the scene. Existing methods suffer from undesirable side effects such as blurring and/or blocking artifacts, which decrease the quality of the output image. This paper presents an efficient approach to multi-focus image fusion based on variance and spatial frequency computed in the wavelet domain. The proposed method markedly reduces distortion artifacts and contrast loss, because variance- and spatial-frequency-based fusion significantly enhances the reliability of the feature selection and data fusion procedures. The algorithm also preserves, to a great extent, the information contained in the source images. The experimental results verify the efficiency of the proposed method in terms of output image quality, as well as its lower complexity, in comparison with several recent related works.
Multi-focus image fusion using sharpness criteria for visual sensor networks in wavelet domain
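A bare-bones sketch of wavelet-domain multi-focus fusion in the spirit of the abstract above, using PyWavelets: detail sub-bands take the coefficient with larger magnitude and the approximation comes from the source with the larger variance; the wavelet, decomposition level, and selection rules are simplified assumptions rather than the authors' exact criteria.

```python
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, wavelet="db2"):
    """Fuse two registered greyscale images of the same scene."""
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(float), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(float), wavelet)

    # Detail sub-bands: keep the coefficient with larger magnitude (sharper).
    fuse_detail = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = tuple(fuse_detail(x, y) for x, y in ((ha, hb), (va, vb), (da, db)))

    # Approximation: take it from the image with larger variance, a crude
    # global activity measure standing in for the paper's sharpness criteria.
    approx = ca if ca.var() >= cb.var() else cb
    return pywt.idwt2((approx, details), wavelet)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
fused = fuse_multifocus(a, b)
print(fused.shape)
```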
S0045790616300672
This work presents two low-power Secure Hash Algorithm-3 (SHA-3) designs on a Field Programmable Gate Array (FPGA) using embedded Digital Signal Processing (DSP48E) slices, one for area-constrained environments and the other for high-speed applications. The seven equations of SHA-3 are logically optimized into three- and four-stage pipelined organizations for the compact and high-speed designs, respectively. The maximum parallelism among all the bitwise operations of the different SHA-3 stages is exploited with respect to the 48-bit structure of the DSP slice. Furthermore, a Logical Cascade Structure (LCS) design strategy is proposed in accordance with the DSP slice organization. These optimizations save resources while achieving low power and high performance. Compared with conventional SHA-3 designs, our compact design saves 79.10% of the DSP slices and consumes only one-seventh of the power, while the 1600-bit DSP design provides 23.57 Gbps throughput and consumes only one-fifth of the power.
A low-power SHA-3 designs using embedded digital signal processing slice on FPGA
S0045790616300696
In this paper, we consider the speech enhancement and acoustic noise reduction problem in a moving car through a blind source separation scheme employing two loosely spaced microphones. We propose a new, efficient frequency-domain symmetric adaptive decorrelation (FD-SAD) algorithm that removes punctual noise components from noisy speech signals. The FD-SAD algorithm is combined with the forward blind source separation (FBSS) structure to improve on the performance of its time-domain symmetric adaptive decorrelation (TD-SAD) counterpart. The proposed algorithm shows good tracking behaviour and fast convergence even in very noisy conditions with loosely spaced microphones. Intensive experiments were carried out on the proposed algorithm in terms of the Segmental Signal-to-Noise Ratio (SegSNR), the System Mismatch (SM), the Segmental Mean Square Error (SegMSE), and the Cepstral Distance (CD) criteria. Comparisons with state-of-the-art algorithms highlight the excellent performance of the proposed algorithm and show its ability to completely remove correlated noise components from the speech signal, even in very noisy conditions, when controlled by a voice activity detector.
An efficient frequency-domain adaptive forward BSS algorithm for acoustic noise reduction and speech quality enhancement
S0045790616300714
This work develops an LED (Light Emitting Diode) image display system that can display patterns on the spokes of a bicycle wheel. The proposed system of LED lights mounted on the wheel improves safety while producing attractive patterns. The main control board and the LED lighting strips that determine the displayed pattern are developed using embedded system and electrical circuit designs. A mobile application is developed to control the lighting hardware remotely; it communicates with the main control board via a Wi-Fi wireless network interface. The cyclist can change the patterns using the mobile application or by pushing buttons on the main control board before riding. Six patterns are designed for display with this system, and the pattern can be made to change repeatedly at preset intervals. Experimental results reveal that the proposed system performs effectively on the wheel up to a maximum speed of 40 km/h.
LED image display system with mobile APP control
S0045790616300726
The compositional and content attributes of images carry information that enhances the performance of image retrieval. Well-composed images follow the rule of thirds, which divides an image into nine equal parts and places objects or regions of interest at the intersections of the grid lines. An image represents regions and objects that stand in spatial semantic relationships to each other. While the Bag of Features (BoF) representation is commonly used for image retrieval, it lacks spatial information. In this paper, we present two novel image representation methods based on histograms of triangles, which add spatial information to the inverted index of the BoF representation. Histograms of triangles are computed at two levels, by dividing an image into two and into four triangles that are evaluated separately. Extensive experiments and comparisons conducted on two datasets demonstrate that the proposed image representations enhance the performance of image retrieval.
Image retrieval by addition of spatial information based on histograms of triangular regions
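A toy sketch of the two-triangle level of the idea above: the image is split along the main diagonal and per-triangle histograms are concatenated; plain intensity histograms stand in here for the visual-word histograms of the BoF pipeline, and the bin count is arbitrary.

```python
import numpy as np

def triangle_histograms(img, bins=32):
    """Concatenate histograms of the two triangles above/below the diagonal."""
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    upper = cols / max(w - 1, 1) >= rows / max(h - 1, 1)   # above the diagonal
    hists = []
    for mask in (upper, ~upper):
        hist, _ = np.histogram(img[mask], bins=bins, range=(0, 256))
        hists.append(hist / max(hist.sum(), 1))            # normalise per region
    return np.concatenate(hists)

img = np.random.randint(0, 256, size=(48, 64))
print(triangle_histograms(img).shape)   # (64,) = 2 triangles x 32 bins
```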
S0045790616300738
Several methods for collecting psychophysiological data from humans have been developed, including galvanic skin response (GSR), electromyography (EMG), electroencephalography (EEG), and the electrocardiogram (ECG). This paper proposes a feature extraction method for emotion recognition in EEG-based human brain signals. In this research, emotions were elicited from subjects using emotion-related stimuli from the International Affective Picture System (IAPS) database. We selected four kinds of emotional stimuli in the arousal-valence domain. Raw brain signals were preprocessed using independent component analysis (ICA) to remove artifacts. We introduced a feature extraction method using LPP, and implemented a benchmark based on statistical and frequency domain features. The LPP-based results show the highest accuracy when using SVM in the all-selected feature set. The results also provide evidence and suggest a way for further developing a more specialized emotion recognition system using brain signals.
A novel feature extraction method based on late positive potential for emotion recognition in human brain signal patterns
S0045790616300854
In this paper, we consider a novel variational model for image reconstruction based on second-order partial differential equations. The inpainting approach is inspired by anisotropic diffusion-based denoising solutions. A stable variational scheme is proposed first; then, a nonlinear differential model is derived from it by determining the corresponding Euler–Lagrange equation and applying the steepest descent method. A rigorous mathematical treatment of this anisotropic diffusion scheme is provided next. The nonlinear second-order diffusion model is then numerically approximated through a consistent and explicit finite-difference discretization scheme. Successful image inpainting experiments and method comparisons are also provided in this article.
Variational image inpainting technique based on nonlinear second-order diffusions
S0045794914000856
The spatial variation of cell size in a functionally graded cellular structure is achieved using error diffusion to convert a continuous-tone image into binary form. The effects of two control parameters, greyscale value and resolution, on the resulting cell size measures were investigated. Variation in cell edge length was greatest for the Voronoi connection scheme, particularly at certain parameter combinations. Relationships between these parameters and cell size were identified and applied to an example where the target was to control the minimum and maximum cell size. In both cases there was an 8% underestimation of cell area for the target regions.
An error diffusion based method to generate functionally graded cellular structures
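For reference, a compact sketch of standard Floyd–Steinberg error diffusion, the kind of continuous-tone-to-binary conversion the abstract builds on; the downstream step of seeding cells (e.g., a Voronoi connection scheme) from the binary dots is not shown.

```python
import numpy as np

def floyd_steinberg(grey):
    """Convert a greyscale image (values in 0..1) to binary by diffusing quantisation error."""
    img = grey.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            # Spread the error to the yet-unvisited neighbours (7/16, 3/16, 5/16, 1/16).
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth greyscale ramp yields a graded dot density, hence graded cell size.
ramp = np.tile(np.linspace(0.1, 0.9, 128), (64, 1))
dots = floyd_steinberg(ramp)
print("dot fraction:", dots.mean())
```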
S0045794915000255
The consistently linearized eigenproblem (CLE) plays an important role in stability analysis of structures. Solution of the CLE requires computation of the tangent stiffness matrix $\tilde{K}_T$ and of its first derivative with respect to a dimensionless load parameter $\lambda$, denoted as $\dot{\tilde{K}}_T$. In this paper, three approaches to the computation of $\dot{\tilde{K}}_T$ are discussed. They are based on (a) an analytical expression for the derivative of the element tangent stiffness matrix $\tilde{K}_T^e$, (b) a load-based finite difference approximation (LBFDA), and (c) a displacement-based finite difference approximation (DBFDA). The convergence rate, the accuracy, and the computing time of the LBFDA and the DBFDA are compared, using the analytical solution as the benchmark result. The numerical investigation consists of the analysis of a circular arch subjected to a vertical point load at the vertex, and of a thrust-line arch under a uniformly distributed load. The main conclusion drawn from this work is that the DBFDA is superior to the LBFDA.
Assessment of solutions from the consistently linearized eigenproblem by means of finite difference approximations
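A generic numerical sketch of the finite difference idea, approximating the derivative of a tangent stiffness matrix with respect to the load parameter λ by central differences; the matrix function here is a synthetic stand-in, not the structural model of the paper.

```python
import numpy as np

def stiffness(lam):
    """Stand-in tangent stiffness matrix K_T(lambda), for demonstration only."""
    return np.array([[2.0 + lam, -lam],
                     [-lam, 3.0 + lam ** 2]])

def stiffness_derivative_fd(lam, d_lam=1e-5):
    """Central-difference approximation of dK_T/d(lambda)."""
    return (stiffness(lam + d_lam) - stiffness(lam - d_lam)) / (2.0 * d_lam)

lam = 0.5
analytic = np.array([[1.0, -1.0], [-1.0, 2.0 * lam]])       # exact derivative
print(np.abs(stiffness_derivative_fd(lam) - analytic).max())  # ~1e-10
```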
S0045794915000449
A robust finite element procedure for modelling the localised fracture of reinforced concrete beams at elevated temperatures is developed. In this model a reinforced concrete beam is represented as an assembly of 4-node quadrilateral plain concrete, 3-node main reinforcing steel bar, and 2-node bond-link elements. The concrete element is subdivided into layers to account for the temperature distribution over the cross-section of the beam. An extended finite element method (XFEM) has been incorporated into the concrete elements in order to capture the localised cracks within the concrete. The model has been validated against previous fire test results on concrete beams. Nomenclature: regular and enhanced strain–displacement transformation matrices; material constitutive matrix of the plain concrete element; regular, enhanced, and traction-related element internal force vectors and stiffness matrices; tangent stiffness of the traction–separation relation; traction within the cracks; fracture energy of concrete; sign function; vectors of the continuous and discontinuous displacement fields; enhancement function.
An extended finite element model for modelling localised fracture of reinforced concrete beams in fire
S0045794915001418
This paper focuses on the application of structural topology optimisation to the design of steel perforated I-sections, as a first attempt to replace traditional cellular beams and to better understand the mechanisms involved when such sections are subjected to bending and shear actions. An optimum web opening configuration is suggested based on the results of parametric studies. An FE analysis is further employed to determine the performance of the optimised beam in comparison with the conventional, widely used cellular-type beam. It is found that the optimised beam performs better in terms of load-carrying capacity, deformations, and stress intensities. Barriers to the implementation of the topology optimisation technique in the routine design of beam webs are highlighted.
Application of structural topology optimisation to perforated steel beams
S0045794915003053
We extend our existing hp-finite element framework for non-conducting magnetic fluids (Jin et al., 2014) to the treatment of conducting magnetic fluids, including magnetostriction effects, in both two and three dimensions. In particular, we present, to the best of our knowledge, the first computational treatment of magnetostrictive effects in conducting fluids. We propose a consistent linearisation of the coupled system of non-linear equations and solve the resulting discretised equations by means of the Newton–Raphson algorithm. Our treatment allows the simulation of complex flow problems with non-homogeneous permeability and conductivity and, apart from benchmarking against established analytical solutions for problems with homogeneous material parameters, we present a series of simulations of multiphase flows in two and three dimensions to show the predictive capability of the approach as well as the importance of including these effects.
hp-Finite element solution of coupled stationary magnetohydrodynamics problems including magnetostrictive effects
S0045794915003302
An elliptical ring test method is proposed to replace the circular ring test recommended by ASTM and AASHTO for faster and more reliable assessment of the cracking tendency of concrete. Numerical models are also established to simulate stress development and crack initiation/propagation in restrained concrete rings. The cracking age, position, and propagation obtained from the numerical analyses for various rings agree well with experimental results. Elliptical thin rings of certain geometries can shorten the ring test duration, as desired. In thin rings, crack initiation is caused by the external restraint effect, so that a crack occurs at the inner circumference and propagates towards the outer one. In thick rings, crack initiation is mainly due to the self-restraint effect, so that a crack occurs at the outer circumference and propagates towards the inner one. Therefore, thick elliptical concrete rings do not necessarily crack earlier than circular ones, as observed experimentally.
Effects of specimen size on assessment of shrinkage cracking of concrete via elliptical rings: Thin vs. thick
S0045794916300608
Tensile fabric membranes present opportunities for efficient structures, combining the cladding and support structure. Such structures must be doubly curved to resist external loads, but doubly curved surfaces cannot be formed from flat fabric without distorting. Computational methods of patterning are used to find the optimal composition of planar panels to generate the form, but are sensitive to the models and techniques used. This paper presents a detailed discussion of, and insights into, the computational process of patterning. A new patterning method is proposed, which uses a discrete model, advanced flattening methods, dynamic relaxation, and re-meshing to generate accurate cutting patterns. Comparisons are drawn with published methods of patterning to show the suitability of the method.
Patterning of tensile fabric structures with a discrete element model using dynamic relaxation
S0097849313000447
In this work, we propose and evaluate an original solution to 3D face recognition that supports face matching even for probe scans with missing parts. In the proposed approach, distinguishing traits of the face are captured by first extracting 3D keypoints of the scan and then measuring how the face surface changes in the keypoints' neighborhoods using local shape descriptors. In particular, 3D keypoint detection relies on an adaptation to 3D faces of the meshDOG algorithm, which has been shown to be effective for 3D keypoint extraction from generic objects; as 3D local descriptors we use the HOG descriptor and also propose two alternative solutions that build, respectively, on the histogram of orientations and the geometric histogram descriptors. Face similarity is evaluated by comparing local shape descriptors across inlier pairs of matching keypoints between probe and gallery scans. The face recognition accuracy of the approach was first evaluated on the difficult probes included in the new 2D/3D Florence face dataset, recently collected and released at the University of Firenze, and on the Binghamton University 3D facial expression dataset. Then, a comprehensive comparative evaluation was performed on the Bosphorus, Gavab, and UND/FRGC v2.0 databases, where competitive results with respect to existing solutions for 3D face biometrics were obtained.
Matching 3D face scans using interest points and local histogram descriptors
S0097849313000459
Recent hardware technologies have enabled acquisition of 3D point clouds from real-world scenes in real time, and a variety of interactive applications with the 3D world can be developed on top of this new technological scenario. However, a main problem that still remains is that most processing techniques for such 3D point clouds are computationally intensive, requiring optimized approaches, especially when real-time performance is required. As a possible solution, we propose the use of a 3D moving fovea based on a multiresolution technique that processes parts of the acquired scene at multiple levels of resolution. Such an approach can be used to identify objects in point clouds efficiently. Experiments show that the moving fovea yields a sevenfold gain in processing time while keeping 91.6% of the true recognition rate, in comparison with state-of-the-art 3D object recognition methods.
Efficient 3D object recognition using foveated point clouds
S0097849313000460
Recent results in geometry processing have shown that shape segmentation, comparison, and analysis can be successfully addressed through the spectral properties of the Laplace–Beltrami operator, which is involved in the harmonic equation, the Laplacian eigenproblem, the heat diffusion equation, and the definition of spectral distances, such as the bi-harmonic, commute time, and diffusion distances. In this paper, we study the discretization and the main properties of the solutions to these equations on 3D surfaces and their applications to shape analysis. Among the main factors that influence their computation, as well as the corresponding distances, we focus our attention on the choice of different Laplacian matrices, initial boundary conditions, and input shapes. These degrees of freedom motivate our choice to address this study through the executable paper, which allows the user to perform a large set of experiments and select his/her own parameters. Finally, we represent these distances in a unified way and provide a simple procedure to generate new distances on 3D shapes. Graphical abstract: robustness of the biharmonic distance from a source (black) point, computed using the linear FEM mass matrix as weight, with respect to (b) tiny and missing triangles, (c) noise, and (d) holes of an irregularly sampled surface (a) with local shape artifacts.
An interactive analysis of harmonic and diffusion equations on discrete 3D shapes
S0097849313000472
There are hundreds of distinct 3D, CAD, and engineering file formats. As engineering design and analysis have become increasingly digital, the proliferation of file formats has created many problems for data preservation, data exchange, and interoperability. In some situations, physical file objects exist on legacy media and must be identified and interpreted for reuse. In other cases, file objects may have varying representational expressiveness. We introduce the problem of automated file recognition and classification in emerging digital engineering environments, where all design, manufacturing, and production activities are "born digital," with the result that massive quantities and varieties of data objects are created during the product lifecycle. This paper presents an approach to automated identification of engineering file formats that operates independently of any modeling tools and can identify families of related file objects as well as variations among versions. The problem is challenging because no a priori knowledge about the nature of the physical file object can be assumed. Applications of these methods include support for a number of emerging applications in areas such as forensic analysis, data translation, and digital curation and long-term data management.
A flexible and extensible approach to automated CAD/CAM format classification
S0097849313000484
In this paper, we present a new approach to generic 3D shape retrieval based on a mesh partitioning scheme. Our method combines a global mesh description with mesh partition descriptions to represent a 3D shape. The partitioning is useful because it helps us to extract additional information in a more local sense; thus, part descriptions can mitigate the semantic gap imposed by global description methods. We propose to find spatial agglomerations of local features to generate mesh partitions. The definition of a distance function is then stated as an optimization problem that finds the best match between two shape representations. We show that mesh partitions are representative and therefore help to improve effectiveness in retrieval tasks. We present exhaustive experimentation using the SHREC'09 Generic Shape Retrieval Benchmark.
Data-aware 3D partitioning for generic shape retrieval
S0097849313000551
This paper focuses on the problem of 3D shape categorization: given a set of training 3D shapes, a 3D shape recognition system must be able to predict the class label of a test 3D shape. We introduce a novel discriminative approach for recognizing 3D shape categories based on a 3D Spatial Pyramid (3DSP) decomposition. 3D local descriptors are extracted from the 3D shapes and then quantized to build a 3D visual vocabulary for characterizing the shapes. Our approach repeatedly subdivides a cube inscribed in the 3D shape and computes a weighted sum of histograms of visual word occurrences at increasingly fine sub-volumes. Additionally, we integrate this pyramidal representation with different types of kernels, such as the Histogram Intersection Kernel and the extended Gaussian Kernel with the χ² distance. Finally, we perform a thorough evaluation on different publicly available datasets, defining an elaborate experimental setup that can be used to establish further comparisons among 3D shape categorization methods.
Evaluating 3D spatial pyramids for classifying 3D shapes
S0097849313000575
We present an efficient and robust algorithm for landmark transfer on 3D meshes that are approximately isometric. Given one or more custom landmarks placed by the user on a source mesh, our method efficiently computes corresponding landmarks on a family of target meshes. The technique is useful when a user is interested in characterizing and reusing application-specific landmarks on meshes of similar shape (for example, meshes coming from the same class of objects). Consequently, consistency among landmarks is ensured across a set of meshes, regardless of the landmarks' geometric distinctiveness. The main advantage of our method over existing approaches is its low computation time. Unlike existing non-rigid registration techniques, our method detects and uses the minimum number of geometric features necessary to accurately locate the user-defined landmarks and avoids performing an unnecessary full registration. In addition, unlike previous techniques that assume strict consistency of geodesic distances, we adopt histograms of geodesic distance to define feature point coordinates, in order to handle the deviations of isometric deformation. This allows us to accurately locate the landmarks with only a small number of nearby feature points, from which we build what we call a minimal graph. We demonstrate and evaluate the quality of transfer by our algorithm on a number of TOSCA datasets.
Landmark transfer with minimal graph
S0097849313001052
Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing, and rendering with real-world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production-quality renderings of virtual furniture with spatially varying real-world illumination, including occlusions. The framework is based on recent developments in HDR video capture, which enable efficient capture of the full dimensionality of the illumination in large environments exhibiting both complex spatial and angular variations. Graphical abstract: a comparison between traditional image based lighting (left) and our method (right); the lighting complexity enabled by HDR video based scene capture significantly increases the realism and visual interest of the resulting renderings.
Spatially varying image based lighting using HDR-video
S0097849314000648
In this paper, we propose a method for estimating the camera pose for an environment in which the intrinsic camera parameters change dynamically. In video see-through augmented reality (AR) technology, image-based methods for estimating the camera pose are used to superimpose virtual objects onto the real environment. In general, video see-through-based AR cannot change the image magnification that results from a change in the camera's field-of-view because of the difficulty of dealing with changes in the intrinsic camera parameters. To remove this limitation, we propose a novel method for simultaneously estimating the intrinsic and extrinsic camera parameters based on an energy minimization framework. Our method is composed of both online and offline stages. An intrinsic camera parameter change depending on the zoom values is calibrated in the offline stage. Intrinsic and extrinsic camera parameters are then estimated based on the energy minimization framework in the online stage. In our method, two energy terms are added to the conventional marker-based method to estimate the camera parameters: reprojection errors based on the epipolar constraint and the constraint of the continuity of zoom values. By using a novel energy function, our method can accurately estimate intrinsic and extrinsic camera parameters. We confirmed experimentally that the proposed method can achieve accurate camera parameter estimation during camera zooming.
Camera pose estimation under dynamic intrinsic parameter change for augmented reality
S0097849314000661
We present a robust approach for reconstructing the main architectural structure of complex indoor environments given a set of cluttered 3D input range scans. Our method uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data, and automatically extracts the individual rooms that compose the environment by applying a diffusion process on the space partitioning induced by the candidate walls. This diffusion process, which has a natural interpretation in terms of heat propagation, makes our method robust to artifacts and other imperfections that occur in typical scanned data of interiors. For each room, our algorithm reconstructs an accurate polyhedral model by applying methods from robust statistics. We demonstrate the validity of our approach by evaluating it on both synthetic models and real-world 3D scans of indoor environments.
Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts
S0097849314001411
Dehydrated core/shell fruits, such as jujubes, raisins and plums, show very complex buckles and wrinkles on their exocarp. It is a challenging task to model such complicated patterns and their evolution in a virtual environment even for professional animators. This paper presents a unified physically-based approach to simulate the morphological transformation for the core/shell fruits in the dehydration process. A finite element method (FEM), which is based on the multiplicative decomposition of the deformation gradient into an elastic part and a dehydrated part, is adopted to model the morphological evolution. In the method, the dehydration pattern can be conveniently controlled through physically prescribed parameters according to the geometry and material of the real fruits. The effects of the parameters on the final dehydrated surface patterns are investigated and summarized in detail. Experiments on jujubes, wolfberries, raisins and plums are given, which demonstrate the efficacy of the method.
Dehydration of core/shell fruits
S0097849315001119
We present an automatic approach for the reconstruction of parametric 3D building models from indoor point clouds. While recently developed methods in this domain focus on mere local surface reconstructions which enable e.g. efficient visualization, our approach aims for a volumetric, parametric building model that additionally incorporates contextual information such as global wall connectivity. In contrast to pure surface reconstructions, our representation thereby allows more comprehensive use: first, it enables efficient high-level editing operations in terms of e.g. wall removal or room reshaping which always result in a topologically consistent representation. Second, it enables easy taking of measurements like e.g. determining wall thickness or room areas. These properties render our reconstruction method especially beneficial to architects or engineers for planning renovation or retrofitting. Following the idea of previous approaches, the reconstruction task is cast as a labeling problem which is solved by an energy minimization. This global optimization approach allows for the reconstruction of wall elements shared between rooms while simultaneously maintaining plausible connectivity between all wall elements. An automatic prior segmentation of the point clouds into rooms and outside area filters large-scale outliers and yields priors for the definition of labeling costs for the energy minimization. The reconstructed model is further enriched by detected doors and windows. We demonstrate the applicability and reconstruction power of our new approach on a variety of complex real-world datasets requiring little or no parameter adjustment.
Automatic reconstruction of parametric building models from indoor point clouds
S0097849315001387
Estimating surface normals from a single image alone is a challenging problem. Previous work made various simplifications and focused on special cases, such as having directional lighting, known reflectance maps, etc. This is problematic, however, as shape from shading becomes impractical outside the lab. We argue that addressing more realistic settings requires multiple shading cues to be combined as well as generalized to natural illumination. However, this requires coping with an increased complexity of the approach and more parameters to be adjusted. Starting from a novel large-scale dataset for training and analysis, we pursue a discriminative learning approach to shape from shading. Regression forests enable efficient pixel-independent prediction and fast learning. The regression trees are adapted to predicting surface normals by using von Mises–Fisher distributions in the leaves. Spatial regularity of the normals is achieved through a combination of spatial features, including texton as well as novel silhouette features. The proposed silhouette features leverage the occluding contours of the surface and yield scale-invariant context. Their benefits include computational efficiency and good generalization to unseen data. Importantly, they allow estimating the reflectance map robustly, thus addressing the uncalibrated setting. Our method can also be extended to handle perspective projection. Experiments show that our discriminative approach outperforms the state of the art on various synthetic and real-world datasets.
A discriminative approach to perspective shape from shading in uncalibrated illumination
S0097849315001806
Handheld Augmented Reality (HAR) has the potential to introduce Augmented Reality (AR) to large audiences due to the widespread use of suitable handheld devices. However, many current HAR systems are not considered very practical and do not fully answer the needs of users. One of the challenging areas in HAR is in-situ AR content creation, where the correct and accurate positioning of virtual objects in the real world is fundamental. Due to the hardware limitations of handheld devices and possible restrictions in the environment, correct 3D positioning of objects can be difficult to achieve when we are unable to use AR markers or to correctly map the 3D structure of the environment. We present SlidAR, a 3D positioning method for Simultaneous Localization And Mapping (SLAM) based HAR systems. SlidAR utilizes 3D ray-casting and epipolar geometry for virtual object positioning. It does not require a perfect 3D reconstruction of the environment nor any virtual depth cues. We conducted a user experiment to evaluate the efficiency of the SlidAR method against an existing device-centric positioning method that we call HoldAR. Results showed that SlidAR was significantly faster, required significantly less device movement, and also received significantly better subjective evaluations from the test participants. SlidAR also had higher positioning accuracy, although not significantly so.
SlidAR: A 3D positioning method for SLAM-based handheld augmented reality
S0097849316300218
In this paper we present an intuitive tool suitable for 2D artists using touch-enabled pen tablets. An artist-oriented tool should be easy-to-use, real-time, versatile, and locally refinable. Our approach uses an interactive system for 3D character posing from 2D strokes. We employ a closed-form solution for the 2D strokes to 3D skeleton registration problem. We first construct an intermediate 2D stroke representation by extracting local features using meaningful heuristics. Then, we match 2D stroke segments to 3D bones. Finally, 3D bones are carefully realigned with the matched 2D stroke segments while enforcing important constraints such as bone rigidity and depth. Our technique is real-time and has a linear time complexity. It is versatile, as it works with any type of 2D stroke and 3D skeleton input. Finally, thanks to its coarse-to-fine design, it allows users to perform local refinements and thus keep full control over the final results. We demonstrate that our system is suitable for 2D artists using touch-enabled pen tablets by posing 3D characters with heterogeneous topologies (bipeds, quadrupeds, hands) in real-time.
Artist-oriented 3D character posing from 2D strokes
S0097849316300504
Shape registration is fundamental to 3D object acquisition; it is used to fuse scans from multiple views. Existing algorithms mainly utilize geometric information to determine alignment, but this typically results in noticeable misalignment of textures (i.e. surface colors) when using RGB-depth cameras. We address this problem using a novel approach to color-aware registration, which takes both color and geometry into consideration simultaneously. Color information is exploited throughout the pipeline to provide more effective sampling, correspondence and alignment, in particular for surfaces with detailed textures. Our method can furthermore tackle both rigid and non-rigid registration problems (arising, for example, due to small changes in the object during scanning, or camera distortions). We demonstrate that our approach produces significantly better results than previous methods. Graphical abstract: using rigid and non-rigid registration to correct misalignments in geometry and texture. Two input textured surfaces (a) are captured by RGB-D cameras; the camera configuration provides initial alignment (b). Successive rigid (c) and non-rigid (d) steps improve it, giving a final surface with high-quality textures.
Color-aware surface registration
S0098300413001799
Analysis of geophysical borehole data can often be hampered by too much information and noise in the trace, leading to subjective interpretation of layer boundaries. Wavelet analysis of borehole data has provided an effective way of mitigating noise and delineating relevant boundaries. We extend wavelet analysis by providing a complete set of code and functions that objectively block a geophysical trace using a derivative-operator algorithm that searches for inflection points in the bore log. Layer boundaries detected from the operator output are traced back to a zero-width operator so that boundaries are detected consistently and objectively. Layers are then classified by importance, and the analysis is completed by selecting either the total number of layers, a portion of the total number of layers, a minimum layer thickness, or the layers detected at a specified minimum operator width. We demonstrate the effectiveness of the layer-blocking technique by applying it to a case study of alluvial aquifer detection in the Gascoyne River area of Western Australia.
Derivative analysis for layer selection of geophysical borehole logs
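An illustrative sketch of detecting layer boundaries as inflection points of a smoothed bore log using a Gaussian derivative operator of adjustable width; the operator form, threshold, and synthetic log are assumptions rather than the published algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def layer_boundaries(log, operator_width, rel_threshold=0.3):
    """Return sample indices of significant inflection points in a bore log."""
    # First derivative of the trace, smoothed at the given operator width.
    d1 = gaussian_filter1d(log.astype(float), sigma=operator_width, order=1)
    # Inflection points of the log are extrema of its first derivative,
    # i.e. zero crossings of the second derivative.
    d2 = np.diff(d1)
    crossings = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0] + 1
    # Keep only inflections whose response is a significant fraction of the
    # strongest response, discarding noise-induced wiggles.
    return crossings[np.abs(d1[crossings]) > rel_threshold * np.abs(d1).max()]

# Synthetic three-layer log with noise.
depth = np.arange(600)
log = np.where(depth < 200, 1.0, np.where(depth < 400, 3.0, 1.5))
log = log + 0.05 * np.random.randn(depth.size)
print(layer_boundaries(log, operator_width=15))
```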
S0098300413002124
The Google Maps/Earth GIS has been integrated with a microscale meteorological model to improve the system's functionality and ease of use. Almost all components of the model system, including terrain data processing, morphological data generation, meteorological data gathering and initialization, and displaying/visualizing the model results, have been improved by this approach. Unlike a traditional stand-alone model system, this novel system takes advantage of enormous resources for map and image data retrieval/handling, four-dimensional (space and time) data visualization, overlaying, and many other advanced GIS features that the Google Maps/Earth platform has to offer. We have developed modular components for all of the model system controls and data processing programs, which are glued together with the JavaScript language and KML/XML data. We have also developed small modular software using the Google application program interface to convert the model results and intermediate data for visualization and animation. Capabilities such as high-resolution imagery, street view, and 3D buildings in Google Earth/Maps are also used to quickly generate the small-scale vegetation and building morphology data required by microscale meteorological models. The system has also been applied to visualize data from other instruments, such as Doppler wind lidars. Because of the tight integration of the internet-based GIS and a microscale meteorology model, the model system is more versatile, intuitive, and user-friendly than the stand-alone system we had developed before. This kind of system will enhance the user experience and also help researchers to explore new phenomena in fine-scale meteorology.
Integration of Google Maps/Earth with microscale meteorology models and data visualization
S0098300413002185
Image processing software has been developed that allows quantitative analysis of multi- and hyperspectral data from oceanic, coastal, and inland waters. It has been implemented in the Water Colour Simulator WASI, a tool for the simulation and analysis of optical properties and light field parameters of deep and shallow waters. The new module WASI-2D can import atmospherically corrected images from airborne sensors and satellite instruments in various data formats and units, such as remote sensing reflectance or radiance. It can easily be adapted by the user to different sensors and to the optical properties of the studied area. Data analysis is done by inverse modelling using established analytical models. The bio-optical model of the water column accounts for gelbstoff (coloured dissolved organic matter, CDOM), detritus, and mixtures of up to 6 phytoplankton classes and 2 spectrally different types of suspended matter. The reflectance of the sea floor is treated as the sum of up to 6 substrate types. An analytic model of downwelling irradiance allows wavelength-dependent modelling of sun glint and sky glint at the water surface. The provided database covers the spectral range from 350 to 1000 nm in 1 nm intervals; it can easily be exchanged to represent the optical properties of the water constituents, bottom types, and atmosphere of the studied area.
WASI-2D: A software tool for regionally optimized analysis of imaging spectrometer data from deep and shallow waters
S0098300413002719
Data semantics play an extremely significant role in spatial data infrastructures by providing semantic specifications for geospatial data and thereby enabling data sharing and interoperability. By applying composite geospatial processes to these data on the fly, it is possible to produce valuable geoinformation over the web that is directly available and applicable to a wide range of geo-activities of significant importance to the research and industry communities. Cloud computing can enable geospatial processing since it provides, among other things, efficient computing resources offering on-demand processing services. In this context, we provide a design and architectural framework for web applications based on open geospatial standards. Our approach includes, in addition to geospatial processing, data acquisition services that are essential especially when dealing with satellite images and applications in remote sensing and similar fields. As a result, by putting all data and geoprocesses available in the Cloud into a common framework, it is possible to combine the appropriate services to produce a solution for a specific need.
Geospatial services in the Cloud
S0098300413002720
Machine learning algorithms (MLAs) are a powerful group of data-driven inference tools that offer an automated means of recognizing patterns in high-dimensional data. Hence, there is much scope for the application of MLAs to the rapidly increasing volumes of remotely sensed geophysical data for geological mapping problems. We carry out a rigorous comparison of five MLAs: Naive Bayes, k-Nearest Neighbors, Random Forests, Support Vector Machines, and Artificial Neural Networks, in the context of a supervised lithology classification task using widely available and spatially constrained remotely sensed geophysical data. We make a further comparison of MLAs based on their sensitivity to variations in the degree of spatial clustering of training data, and their response to the inclusion of explicit spatial information (spatial coordinates). Our work identifies Random Forests as a good first choice algorithm for the supervised classification of lithology using remotely sensed geophysical data. Random Forests is straightforward to train, computationally efficient, highly stable with respect to variations in classification model parameter values, and as accurate as, or substantially more accurate than the other MLAs trialed. The results of our study indicate that as training data becomes increasingly dispersed across the region under investigation, MLA predictive accuracy improves dramatically. The use of explicit spatial information generates accurate lithology predictions but should be used in conjunction with geophysical data in order to generate geologically plausible predictions. MLAs, such as Random Forests, are valuable tools for generating reliable first-pass predictions for practical geological mapping applications that combine widely available geophysical data.
Geological mapping using remote sensing data: A comparison of five machine learning algorithms, their response to variations in the spatial distribution of training data and the use of explicit spatial information
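A minimal supervised-classification sketch in the spirit of the comparison above, training a Random Forest on synthetic "geophysical" features with and without explicit spatial coordinates; the feature names, labels, and data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Placeholder "geophysical" features plus spatial coordinates.
X_geo = rng.normal(size=(n, 3))                        # e.g. gravity, magnetics, radiometrics
xy = rng.uniform(0, 100, size=(n, 2))                  # easting, northing
y = (X_geo[:, 0] + 0.5 * X_geo[:, 1] > 0).astype(int)  # synthetic lithology label

for name, X in [("geophysics only", X_geo),
                ("geophysics + coordinates", np.hstack([X_geo, xy]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```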
S0098300413002951
The internal orientation of fossil mass occurrences can be exploited as a useful source of information about their primary depositional conditions. A series of studies, using different kinds of fossils, especially those with elongated shapes (e.g., elongated gastropods), deal with their orientation and the subsequent reconstruction of the depositional conditions (e.g., paleocurrents and transport mechanisms). However, disk-shaped fossils such as planispiral cephalopods or gastropods have until now been used only with caution for interpreting paleocurrents. Moreover, most studies deal only with the topmost surface of such mass occurrences, due to its easier accessibility. In this study, a new method for three-dimensional reconstruction of the internal structure of a fossil mass occurrence and subsequent calculation of its spatial shell orientation is established. A 234-million-year-old (Carnian, Triassic) monospecific mass occurrence of the ammonoid Kasimlarceltites krystyni from the Taurus Mountains in Turkey, embedded in limestone, is used for this pilot study. A 150×45×140 mm³ block of the ammonoid-bearing limestone bed was ground into 70 slices, with a spacing of 2 mm between slices. Using a semi-automatic region-growing algorithm of the 3D-visualization software Amira, ammonoids of part of this mass occurrence were segmented and a 3D model reconstructed. Landmarks and trigonometric and vector-based calculations were used to compute the diameter and spatial orientation of each ammonoid. The spatial shell orientation was characterized by the dip and dip direction and the aperture direction of the longitudinal axis, as well as by the dip and azimuth of an imaginary sagittal plane through each ammonoid. The exact spatial shell orientation was determined for a sample of 675 ammonoids, and their statistical orientation analyzed (i.e., NW/SE). The study combines classical orientation analysis with modern 3D-visualization techniques and establishes a novel spatial orientation analysis method that can be adapted to any kind of abundant solid matter.
Computed reconstruction of spatial ammonoid-shell orientation captured from digitized grinding and landmark data
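A possible illustration of the orientation quantities mentioned above (dip and dip direction): for a plane defined by a unit normal vector in an East-North-Up frame, both angles follow from elementary trigonometry. This is a generic, convention-based sketch, not the Amira/landmark workflow used in the study.

```python
# Minimal sketch: dip and dip direction of a plane from its normal vector,
# assuming an East-North-Up coordinate frame (a convention chosen here).
import numpy as np

def dip_and_dip_direction(normal):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # make the normal point upwards
        n = -n
    dip = np.degrees(np.arccos(n[2]))                    # 0 = horizontal, 90 = vertical
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360   # azimuth, clockwise from North
    return dip, dip_dir

print(dip_and_dip_direction([0.0, 0.5, 0.866]))  # ~30 degree dip towards due North
```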
S0098300413002975
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparison of the obtained landslide susceptibility maps of both MCDA techniques with known landslides shows that the AHP outperforms the OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis
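For readers unfamiliar with the AHP weighting step mentioned above, the sketch below shows the standard principal-eigenvector computation of criteria weights from a pairwise comparison matrix. The judgement values are illustrative and not taken from the study.

```python
# Minimal sketch of AHP criteria weighting from a pairwise comparison matrix,
# using the principal-eigenvector method (illustrative judgements for 3 criteria).
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])      # reciprocal pairwise comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # normalised criteria weights

# Consistency check: CI = (lambda_max - n) / (n - 1); random index RI for n=3 is 0.58
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", w, "consistency ratio:", ci / 0.58)
```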
S0098300414000259
In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data. But, because IO is very fast, and because the fine-grained computations could be handled extremely quickly by the mappers, the net was a large performance gain.
Large-scale seismic signal analysis with Hadoop
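The per-pair computation at the heart of such a pipeline is ordinary normalized waveform cross-correlation; a minimal NumPy sketch is given below. The MapReduce/Hadoop orchestration, which is the actual subject of the paper, is not shown.

```python
# Minimal sketch: normalized cross-correlation between two seismogram traces.
import numpy as np

def max_norm_xcorr(a, b):
    """Maximum normalized cross-correlation coefficient over all lags."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.correlate(a, b, mode="full").max())

t = np.linspace(0.0, 10.0, 1000)
trace = np.exp(-t) * np.sin(2 * np.pi * 1.5 * t)
noisy_copy = trace + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(max_norm_xcorr(trace, noisy_copy))   # close to 1 for near-identical waveforms
```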
S0098300414001873
Landslide susceptibility mapping (LSM) is making increasing use of GIS-based spatial analysis in combination with multi-criteria evaluation (MCE) methods. We have developed a new multi-criteria decision analysis (MCDA) method for LSM and applied it to the Izeh River basin in south-western Iran. Our method is based on fuzzy membership functions (FMFs) derived from GIS analysis. It makes use of nine causal landslide factors identified by local landslide experts. Fuzzy set theory was first integrated with an analytical hierarchy process (AHP) in order to use pairwise comparisons to compare LSM criteria for ranking purposes. FMFs were then applied in order to determine the criteria weights to be used in the development of a landslide susceptibility map. Finally, a landslide inventory database was used to validate the LSM map by comparing it with known landslides within the study area. Results indicated that the integration of fuzzy set theory with AHP produced significantly improved accuracies and a high level of reliability in the resulting landslide susceptibility map. Approximately 53% of known landslides within our study area fell within zones classified as having “very high susceptibility”, with a further 31% falling into zones classified as having “high susceptibility”.
A GIS-based extended fuzzy multi-criteria evaluation for landslide susceptibility mapping
S0098300414002532
The depth of valley incision and valley volume are important parameters in understanding the geologic history of early Mars, because they are related to the amount of sediment eroded and the quantity of water needed to create the valley networks (VNs). With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. Previous studies typically use a single window size for extracting the valley features and a single threshold value for removing noise, resulting in finer features such as tributaries not being extracted and in underestimation of valley volume. Inspired by similar algorithms used in LiDAR data analysis to remove above-ground features to obtain bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the window size is progressively increased to extract valleys of different orders. In addition, a slope factor is introduced so that the noise threshold can be automatically adjusted for windows of different sizes. Independently derived VN lines were used to select mask polygons that spatially overlap the VN lines. Volume is calculated as the sum of valley depth within the selected mask multiplied by cell area. Application of the PBTH to a simulated landform (for which the amount of erosion is known) achieved an overall relative accuracy of 96%, in comparison with only 78% for BTH. Application of PBTH to Ma’adim Vallis on Mars not only produced total volume estimates consistent with previous studies, but also revealed the detailed spatial distribution of valley depth. The highly automated PBTH algorithm shows great promise for estimating the volume of VNs on Mars on a global scale, which is important for understanding its early hydrologic cycle.
A progressive black top hat transformation algorithm for estimating valley volumes on Mars
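A minimal sketch of the black top hat idea with a progressively grown window is shown below, using SciPy's greyscale morphology. The window sizes, the form of the slope-dependent threshold, and the grid resolution are illustrative assumptions, not the published PBTH parameters.

```python
# Sketch of a progressive black top hat (PBTH) on a DEM: the closing window is grown
# step by step and a slope-dependent threshold suppresses noise at each scale.
import numpy as np
from scipy import ndimage

def progressive_black_top_hat(dem, windows=(3, 9, 27, 81), slope_factor=0.05):
    depth = np.zeros_like(dem, dtype=float)
    for w in windows:
        bth = ndimage.grey_closing(dem, size=(w, w)) - dem   # black top hat at this scale
        threshold = slope_factor * w                         # larger windows tolerate more relief
        bth[bth < threshold] = 0.0
        depth = np.maximum(depth, bth)                       # keep the deepest estimate per cell
    return depth

dem = np.random.default_rng(1).normal(0, 0.1, size=(200, 200))
dem[:, 95:105] -= 5.0                                        # a synthetic valley
valley_depth = progressive_black_top_hat(dem)
cell_area = 100.0 * 100.0                                    # assumed m^2 per cell
print("estimated valley volume:", valley_depth.sum() * cell_area, "m^3")
```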
S0098300414002660
We present an algorithm developed for GIS applications to produce maps of landslide susceptibility in postglacial and glacial sediments in Sweden. The algorithm operates on detailed topographic and Quaternary deposit data. We compare our algorithm to two similar computational schemes based on a global visibility operator and a shadow-casting algorithm. We find that our algorithm produces more reliable results in the vicinity of stable material than the global visibility algorithm. We also conclude that our algorithm is more computationally efficient than the other two methods, which is important when we want to assess the effects of uncertainty in the data by evaluating many different models. Our method also provides the possibility to take other data into account. We show how different soil types with different geotechnical properties may be modelled. Our algorithm may also take depth information, i.e. the thicknesses of the deposits, into account. We thus propose that our method may be used to provide more refined maps than the overview maps in areas where more detailed geotechnical/geological data have been acquired. The efficiency of our algorithm suggests that it may replace any global visibility operators used in other applications or processing schemes of gridded map data.
A fast and efficient algorithm to map prerequisites of landslides in sensitive clays based on detailed soil and topographical information
S0098300414002829
The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open source libraries to help users to explore spatially-explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.
EverVIEW: A visualization platform for hydrologic and Earth science gridded data
S0098300414002866
X-ray micro-tomography (XMT) is increasingly used for the quantitative analysis of the volumes of features within the 3D images. As with any measurement, there will be error and uncertainty associated with these measurements. In this paper a method for quantifying both the systematic and random components of this error in the measured volume is presented. The systematic error is the offset between the actual and measured volume which is consistent between different measurements and can therefore be eliminated by appropriate calibration. In XMT measurements this is often caused by an inappropriate threshold value. The random error is not associated with any systematic offset in the measured volume and could be caused, for instance, by variations in the location of the specific object relative to the voxel grid. It can be eliminated by repeated measurements. It was found that both the systematic and random components of the error are a strong function of the size of the object measured relative to the voxel size. The relative error in the volume was found to follow approximately a power law relationship with the volume of the object, but with an exponent that implied, unexpectedly, that the relative error was proportional to the radius of the object for small objects, though the exponent did imply that the relative error was approximately proportional to the surface area of the object for larger objects. In an example application involving the size of mineral grains in an ore sample, the uncertainty associated with the random error in the volume is larger than the object itself for objects smaller than about 8 voxels and is greater than 10% for any object smaller than about 260 voxels. A methodology is presented for reducing the random error by combining the results from either multiple scans of the same object or scans of multiple similar objects, with an uncertainty of less than 5% requiring 12 objects of 100 voxels or 600 objects of 4 voxels. As the systematic error in a measurement cannot be eliminated by combining the results from multiple measurements, this paper introduces a procedure for using volume standards to reduce the systematic error, especially for smaller objects where the relative error is larger.
Quantifying and minimising systematic and random errors in X-ray micro-tomography based volume measurements
S0098300415000928
The internal structure and petrophysical property distribution of fault zones are commonly exceedingly complex compared to the surrounding host rock from which they are derived. This in turn produces highly complex fluid flow patterns which affect petroleum migration and trapping as well as reservoir behavior during production and injection. Detailed rendering and forecasting of fluid flow inside fault zones require high-resolution, explicit models of fault zone structure and properties. A fundamental requirement for achieving this is the ability to create volumetric grids in which modeling of fault zone structures and properties can be performed. Answering this need, a method for generating volumetric fault zone grids which can be seamlessly integrated into existing standard reservoir modeling tools is presented. The algorithm has been tested on a wide range of fault configurations of varying complexity, providing flexible modeling grids which in turn can be populated with fault zone structures and properties.
A method for generating volumetric fault zone grids for pillar gridded reservoir models
S0098300415001089
Portable gas analyzers have become a powerful tool for the real-time monitoring of volcanic gas composition over the last decade. Gas analyzers make it possible to retrieve, in real time, the chemical composition of a fumarole system or a plume in an open-conduit volcano via periodic field deployments or at permanent stations. The core of a multicomponent volcanic gas analyzer (MultiGAS) consists of spectroscopic and electrochemical sensors that are used to determine the concentrations of the most abundant volcanic gases (H2O, CO2, SO2, H2S, H2, CO and HCl) in a diluted plume and their mutual molar ratios. Processing such data is often difficult due to the high sensitivity of the sensors to environmental conditions such as humidity, gas concentration, and pressure, as well as to occasional instrumental drift. Analyses therefore require accurate and time-consuming processing by an operator. This paper presents Ratiocalc, a stand-alone program for processing chemical data obtained with the MultiGAS. Ratiocalc has a user-friendly interface that enables volcanologists to process large datasets in a simple and rapid manner, thereby reducing the processing time associated with volcano monitoring and surveying.
Ratiocalc: Software for processing data from multicomponent volcanic gas analyzers
S0098300415001466
Metrics to track seasonal transitions are needed for a wide variety of ecological and climatological applications. Here, a MATLAB© toolkit for calculating spring indices is documented. The spring indices have been widely used in earlier studies to model phenological variability and change through time across a wide range of spatial scales. These indices require only daily minimum and maximum temperature observations (e.g., from meteorological records) as input, along with latitude, and produce a day-of-year value corresponding to the simulated average timing of first leaf and first bloom events among three plant cultivars. Core functions to calculate the spring indices require no external dependencies, and data for running several illustrative test cases are included. Instructions and routines for conducting more sophisticated monitoring and modeling studies using the spring indices are also supplied and documented.
A Matlab© toolbox for calculating spring indices from daily meteorological data
S0098300415300625
Spectral induced polarisation (SIP) measurements capture the low-frequency electrical properties of soils and rocks and provide a non-invasive means to access lithological, hydrogeological, and geochemical properties of the subsurface. The Debye decomposition (DD) approach is now increasingly being used to analyse SIP signatures in terms of relaxation time distributions due to its flexibility regarding the shape of the spectra. Imaging and time-lapse (monitoring) SIP measurements, capturing SIP variations in space and time, respectively, are now more and more conducted and lead to a drastic increase in the number of spectra considered, which prompts the need for robust and reliable DD tools to extract quantitative parameters from such data. We here present an implementation of the DD method for the analysis of a series of SIP data sets which are expected to only smoothly change in terms of spectral behaviour, such as encountered in many time-lapse applications where measurement geometry does not change. The routine is based on a non-linear least-squares inversion scheme with smoothness constraints on the spectral variation and in addition from one spectrum of the series to the next to deal with the inherent ill-posedness and non-uniqueness of the problem. By means of synthetic examples with typical SIP characteristics we elucidate the influence of the number and range of considered relaxation times on the inversion results. The source code of the presented routines is provided under an open source licence as a basis for further applications and developments.
Debye decomposition of time-lapse spectral induced polarisation data
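The sketch below illustrates the core of a single-spectrum Debye decomposition: chargeabilities on a fixed log-spaced relaxation-time grid are fitted to complex resistivity data under a non-negativity constraint. The smoothness and spectrum-to-spectrum regularisation that the paper adds for time-lapse data is omitted, and all parameter values are synthetic.

```python
# Sketch: linear, non-negative Debye decomposition of one synthetic SIP spectrum.
import numpy as np
from scipy.optimize import nnls

freqs = np.logspace(-2, 3, 30)                 # measurement frequencies (Hz)
omega = 2 * np.pi * freqs
taus = np.logspace(-4, 2, 40)                  # fixed relaxation-time grid (s)

def debye_kernel(omega, taus):
    """Columns are single-Debye responses i*w*tau / (1 + i*w*tau)."""
    iwt = 1j * np.outer(omega, taus)
    return iwt / (1.0 + iwt)

rho0 = 100.0                                   # DC resistivity (ohm m), assumed known
m_true = np.zeros_like(taus)
m_true[10], m_true[25] = 0.05, 0.08            # two synthetic Debye terms
K = debye_kernel(omega, taus)
rho_obs = rho0 * (1.0 - K @ m_true)            # "measured" complex resistivity

# Stack real and imaginary parts and solve for non-negative chargeabilities m_k
A = np.vstack([np.real(rho0 * K), np.imag(rho0 * K)])
b = np.concatenate([rho0 - np.real(rho_obs), -np.imag(rho_obs)])
m_est, _ = nnls(A, b)
print("total chargeability (true vs. fit):", m_true.sum(), m_est.sum())
```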
S0098300415300832
This paper presents a fully automatic method for seismic event classification within a sparse regional seismograph network. The method is based on a supervised pattern recognition technique called the Support Vector Machine (SVM). The classification relies on differences in signal energy distribution between natural and artificial seismic sources. We filtered seismic records via 20 narrow band-pass filters and divided them into four phase windows: P, P coda, S, and S coda. We then computed a short-term average (STA) value for each filter channel and phase window. The 80 discrimination parameters served as a training model for the SVM. We calculated station-specific SVM models for 19 on-line seismic stations in Finland. The training data set included 918 positive (earthquake) and 3469 negative (non-earthquake) examples. An independent test period determined the method and rules for integrating station-specific classification results into network results. Finally, we applied the network classification rules to independent evaluation data comprising 5435 fully automatic event determinations, 5404 of which had been manually identified as explosions or noise, and 31 as earthquakes. The SVM method correctly identified 94% of the non-earthquakes and all but one of the earthquakes. The result implies that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of accuracy. The tool helps to reduce the workload and costs of manual seismic analysis by leaving only a small fraction of automatic event determinations, the probable earthquakes, for more detailed seismological analysis. The self-learning approach presented here is flexible and easily adjustable to the requirements of a denser or wider high-frequency network.
Automatic classification of seismic events within a regional seismograph network
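The classification stage described above can be prototyped with a standard SVM implementation; the sketch below uses synthetic stand-ins for the 80 STA features (20 band-pass filters × 4 phase windows) rather than real seismograms, and the kernel settings are illustrative.

```python
# Sketch: SVM classification of seismic events from 80 STA-type features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_eq, n_ex = 300, 1200
X = np.vstack([rng.normal(0.0, 1.0, size=(n_eq, 80)),      # stand-in earthquake features
               rng.normal(0.8, 1.0, size=(n_ex, 80))])     # stand-in explosion/noise features
y = np.array([1] * n_eq + [0] * n_ex)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```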
S0098300416300206
In this study, enhancements to the numerical representation of sluice gates and turbines were made to the hydro-environmental model Environmental Fluid Dynamics Code (EFDC), and applied to the Severn Tidal Power Group Cardiff–Weston Barrage. The extended domain of the EFDC Continental Shelf Model (CSM) allows far-field hydrodynamic impact assessment of the Severn Barrage, pre- and post-enhancement, to demonstrate the importance of accurate hydraulic structure representation. The enhancements were found to significantly affect peak water levels in the Bristol Channel, reducing levels by nearly 1 m in some areas, and even to affect predictions as far afield as the West Coast of Scotland, albeit to a far lesser extent. The model was tested for sensitivity to changes in the discharge coefficient, Cd, used in calculating discharge through sluice gates and turbines. It was found that the performance of the Severn Barrage is not sensitive to changes in the Cd value; this insensitivity is attributed to the continual, rather than instantaneous, discharge across the structure. The EFDC CSM can now be said to predict the impacts of tidal range proposals more accurately, and the investigation of sensitivity to Cd improves confidence in the modelling results, despite the uncertainty in this coefficient.
Impact of representation of hydraulic structures in modelling a Severn barrage
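For context on the discharge coefficient discussed above, sluice gates and sluicing turbines are commonly represented with an orifice-type relation Q = Cd·A·sqrt(2·g·ΔH). The sketch below evaluates this generic formula for a few Cd values; the opening area and head difference are illustrative, not barrage design figures.

```python
# Sketch: orifice-type discharge relation commonly used for sluice gates/turbines.
import math

def sluice_discharge(cd, area_m2, head_diff_m, g=9.81):
    """Discharge (m^3/s) through an opening for a given head difference (m)."""
    return cd * area_m2 * math.sqrt(2.0 * g * abs(head_diff_m))

for cd in (0.8, 1.0, 1.2):
    print(cd, sluice_discharge(cd, area_m2=150.0, head_diff_m=4.0))
```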
S0098300416300498
The static offsets caused by earthquakes are well described by elastostatic models with a discontinuity in the displacement along the fault. A traditional approach to model this discontinuity is to align the numerical mesh with the fault and solve the equations using finite elements. However, this distorted mesh can be difficult to generate and update. We present a new numerical method, inspired by the Immersed Interface Method (Leveque and Li, 1994), for solving the elastostatic equations with embedded discontinuities. This method has been carefully designed so that it can be used on parallel machines on an adapted finite difference grid. We have implemented this method in Gamra, a new code for earth modeling. We demonstrate the correctness of the method with analytic tests, and we demonstrate its practical performance by solving a realistic earthquake model to extremely high precision.
Gamra: Simple meshing for complex earthquakes
S0140366413001229
Cognitive radio refers to an intelligent radio with the capability of sensing the radio environment and dynamically reconfiguring the operating parameters. Recent research has focused on using cognitive radios in ad hoc environments. Spectrum sensing is the most important aspect of successful cognitive radio ad hoc network deployment to overcome spectrum scarcity. Multiple cognitive radio users can cooperate to sense the primary user and improve sensing performance. Cognitive radio ad hoc networks are dynamic in nature and have no central point for data fusion. In this paper, gradient-based fully distributed cooperative spectrum sensing in cognitive radio is proposed for ad hoc networks. The licensed band used for TV transmission is considered the primary user. The gradient field changes with the energy sensed by cognitive radios, and the gradient is calculated based on the components, which include energy sensed by secondary users and received from neighbors. The proposed scheme was evaluated from the perspective of reliable sensing, convergence time, and energy consumption. Simulation results demonstrated the effectiveness of the proposed scheme.
Distributed cooperative spectrum sensing in cognitive radio for ad hoc networks
S0140366414000255
A big portion of Internet traffic nowadays is video. A good understanding of user behavior in online Video-on-Demand (VoD) systems can help us design, configure and manage video content distribution. With the help of a major VoD service provider, we conducted a detailed study of user behavior watching streamed videos over the Internet. We engineered the video player at the client side to collect user behavior reports for over 540 million sessions. In order to isolate the possible effect of session quality of experience (QoE) on user behavior, we use only the sessions with perfect QoE, and leave out those sessions with QoE impairments (such as freezes). Our main finding is that users spend a lot of time browsing: viewing part of one video after another, and only occasionally (around 20% of the time) watching a video to its completion. We consider seek (jump to a new position of the video) as a special form of browsing – repeating partial viewing of the same video. Our analysis leads to a user behavior model in which a user transitions through a random number of short views before a longer view, and repeats the process a random number of times. This model can be considered an extension, and a more detailed alternative to the closed queueing network formulation introduced by Wu et al. (2009) [1]. As an application of our user behavior model, we use it to measure video popularity. We study the similarity of our approach to subjective evaluation and simple view count based metric, and conclude our approach gives results closer to subjective evaluation.
A study of user behavior in online VoD services
S0140366414000449
Software-Defined Networking (SDN) has been widely recognized as a promising way to deploy new services and protocols in future networks. The ability to “program” the network enables applications to create innovative new services inside the network itself. However, current SDN programmability comes with downsides that could hinder its adoption and deployment. First, in order to offer complete control, today’s SDN networks provide low-level API’s on which almost any type of service can be written. Because the starting point is a set of low-level API calls, implementing high-level complex services needed by future network applications becomes a challenging task. Second, the set of emerging SDN technologies that are beginning to appear have little in common with one another, making it difficult to set up a flow that traverses multiple SDN technologies/providers. In this paper we propose a new way to set up SDN networks spanning multiple SDN providers. The key to our approach is a Network Hypervisor service. The Network Hypervisor offers high-level abstractions and APIs that greatly simplify the task of creating complex SDN network services. Moreover, the Network Hypervisor is capable of internetworking various SDN providers together under a single interface/abstraction so that applications can establish end-to-end flows without the need to see, or deal with, the differences between SDN providers.
Network Hypervisors: Enhancing SDN Infrastructure
S0140366414000875
Traditional wireless sensor networks (WSNs) are constrained by the limited battery energy that powers the sensor nodes, which impedes the large-scale deployment of WSNs. Wireless power transfer technology provides a promising way to solve this problem. With such novel technology, recent works propose to use a single mobile charger (MC) traveling through the network field to replenish energy to every sensor node so that none of the nodes will run out of energy. These algorithms work well in small-scale networks. In large-scale networks, however, these algorithms do not work efficiently, especially when the amount of energy the MC can provide is limited. To address this issue, multiple MCs can be used. In this paper, we investigate the minimum MCs problem (MinMCP) for two-dimensional (2D) wireless rechargeable sensor networks (WRSNs), i.e., how to find the minimum number of energy-constrained MCs and design their recharging routes in a 2D WRSN such that each sensor node in the network can operate continuously, assuming that the energy consumption rates of all sensor nodes are identical. By reduction from the Distance Constrained Vehicle Routing Problem (DVRP), we prove that MinMCP is NP-hard. We then propose approximation algorithms for this problem. Finally, we conduct extensive simulations to validate the effectiveness of our algorithms.
Minimizing the number of mobile chargers for large-scale wireless rechargeable sensor networks
S0140366414000887
In wireless communication systems users compete for communication opportunities through a medium access control protocol. Previous research has shown that selfish behavior in medium access games could lead to inefficient and unfair resource allocation. We introduce a new notion of reciprocity in a medium access game and derive the corresponding Nash equilibrium. Further, using mechanism design we show that this type of reciprocity can remove unfair/inefficient equilibrium solutions. The best response learning method for the reciprocity game framework is studied. It demonstrates that the game converges to the unique and stable Nash equilibrium if the nodes have low collision costs or high psychological sensitivity. For symmetric games the converged Nash equilibrium turns out to be the fair strategy.
Reciprocity, fairness and learning in medium access control games
S0140366414000929
This paper examines the problem of rate allocation for multicasting over slow Rayleigh fading channels using network coding. In the proposed model, the network is treated as a collection of Rayleigh fading multiple access channels. For this model, a rate allocation scheme that is based solely on the statistics of the channels is presented. The rate allocation scheme is aimed at minimizing the outage probability. An upper bound is presented for the probability of outage in the fading multiple access channel, and a suboptimal solution based on this bound is given. Finally, a distributed primal–dual gradient algorithm is derived to solve the rate allocation problem.
Topology management and outage optimization for multicasting over slowly fading multiple access networks
S0140366414000930
Femtocells offer many advantages in wireless networks such as improved cell capacity and coverage in indoor areas. As these femtocells can be deployed in an ad hoc manner by different consumers in the same frequency band, the femtocells can interfere with each other. To fully realize the potential of the femtocells, it is necessary to allocate resources to them in such a way that interference is mitigated. We propose a distributed resource allocation algorithm for femtocell networks that is modelled after link-state routing protocols. Resource allocation using Link State Propagation (RALP) consists of a graph formation stage, where individual femtocells build a view of the network, an allocation stage, where every femtocell executes an algorithm to assign OFDMA resources to all the femtocells in the network and local scheduling stage, where a femtocell assigns resources to all user equipments based on their throughput requirements. Our evaluation shows that RALP performs better than existing femtocell resource allocation algorithms with respect to spatial reuse and satisfaction rate of required throughput.
Resource allocation using Link State Propagation in OFDMA femto networks
S0140366414000954
In this paper, we introduce a moving target defense mechanism that defends authenticated clients against Internet service DDoS attacks. Our mechanism employs a group of dynamic, hidden proxies to relay traffic between authenticated clients and servers. By continuously replacing attacked proxies with backup proxies and reassigning (shuffling) the attacked clients onto the new proxies, innocent clients are segregated from malicious insiders through a series of shuffles. To accelerate the process of insider segregation, we designed an efficient greedy algorithm which is proven to have near optimal empirical performance. In addition, the insider quarantine capability of this greedy algorithm is studied and quantified to enable defenders to estimate the resource required to defend against DDoS attacks and meet defined QoS levels under various attack scenarios. Simulations were then performed which confirmed the theoretical results and showed that our mechanism is effective in mitigating the effects of a DDoS attack. The simulations also demonstrated that the overhead introduced by the shuffling procedure is low.
A moving target DDoS defense mechanism
S0140366414000966
Vehicular safety is an emergent application in inter-vehicular communications. As this application is based on fast multi-hop message propagation, including information such as position, direction, and speed, it is crucial for the data exchange system of the vehicular application to be resilient to security attacks. To make vehicular networks viable and acceptable to consumers, we have to design secure protocols that satisfy the requirements of the vehicular safety applications. The contribution of this work is threefold. First, we analyze the vulnerabilities of a representative approach named Fast Multi-hop Algorithm (FMBA) to the position cheating attack. Second, we devise a fast and secure inter-vehicular accident warning protocol which is resilient against the position cheating attack. Finally, an exhaustive simulation study shows the impact of the attack on the protocol FMBA on delaying the transmission of alert messages. Furthermore, we show that our secure solution is effective in mitigating the position cheating attack.
A secure alert messaging system for safe driving
S0140366414002461
A fundamental goal of datacenter networking is to efficiently interconnect a large number of servers in a cost-effective way. Inspired by the commodity servers in today’s data centers that come with dual-port, we consider how to design low-cost, robust, and symmetrical network structures for containerized data centers with dual-port servers and low-end switches. In this paper, we propose a family of such network structure called a DCube, including H-DCube and M-DCube. The DCube consists of one or multiple interconnected sub-networks, each of which is a compound graph made by interconnecting a certain number of basic building blocks by means of a hypercube-like graph. More precisely, the H-DCube and M-DCube utilize the hypercube and 1-möbius cube, respectively, while the M-DCube achieves a considerably higher aggregate bottleneck throughput compared to H-DCube. Mathematical analysis and simulation results show that the DCube exhibits graceful performance degradation as the failure rate of server or switch increases. Moreover, the DCube significantly reduces the required wires and switches compared to the BCube and fat-tree. In addition, the DCube achieves a higher speedup than the BCube does for the one-to-several traffic patterns. The proposed methodologies in this paper can be applied to the compound graph of the basic building block and other hypercube-like graphs, such as Twisted cube, Flip MCube, and fastcube.
DCube: A family of network structures for containerized data centers using dual-port servers
S0140366414002497
With multimedia systems applications in mind, we propose a novel model-based approach to estimate the clock offset between two nodes on the Internet. Unlike current clock-offset schemes in the literature, which are iterative in nature, our scheme aims to obtain a good non-iterative clock-offset estimate in real time (on the order of milliseconds). In our clock-offset estimation approach, the One-Way Delay (OWD) measurements are modeled with a shifted gamma distribution representing the current state of the probing link. By using the QQ-probability plot technique and a linear regression model, we estimate the (shift parameter or) minimum value of the gamma distribution with probability zero. This estimated value represents the clock offset plus network propagation and transmission delay (queuing delay has already been eliminated) for the corresponding receiving path. End nodes exchange their corresponding minimum estimates and obtain an improved final clock-offset estimate that takes the network path asymmetries into account. Based on real experiments, we show that our scheme provides extremely fast clock-offset estimation with lower RMSE and greater stability than NTP and current NTP-like state-of-the-art methodologies in the literature (Jeske and Sampath, 2003; Choi and Yoo, 2005; Adhikari et al., 2003; Tsuru et al., 2002). Moreover, our proposed scheme is non-intrusive (no kernel programming needed), easy to implement, and targeted as part of more complex real-time multimedia distribution protocols requiring fast and reliable OWD estimates.
A new model-based clock-offset approximation over IP networks
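The shift-estimation idea can be illustrated as follows: ordered OWD samples are regressed against theoretical gamma quantiles, and the intercept of that QQ-plot regression estimates the fixed (offset plus propagation/transmission) component. The gamma shape used for the quantiles is estimated here by a simple moment fit, which is an assumption of this sketch rather than the paper's exact procedure.

```python
# Sketch: estimate the shift of a shifted-gamma OWD model via QQ-plot regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_shift, shape, scale = 12.5, 2.0, 1.5                     # ms (synthetic values)
owd = true_shift + rng.gamma(shape, scale, size=500)          # simulated OWD samples

excess = owd - owd.min()
shape_hat = (excess.mean() ** 2) / excess.var()               # rough method-of-moments shape
probs = (np.arange(1, owd.size + 1) - 0.5) / owd.size
q_theory = stats.gamma.ppf(probs, a=shape_hat)                # unit-scale gamma quantiles
slope, intercept = np.polyfit(q_theory, np.sort(owd), 1)      # linear regression on the QQ plot
print("estimated shift (ms):", intercept, "true:", true_shift)
```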
S0140366414002801
The Smart Grid is expected to utilize a wireless infrastructure for power data collection in its Advanced Metering Infrastructure (AMI) applications. One of the options to implement such a network infrastructure is to use a wireless mesh network based on IEEE 802.11s mesh standard. However, IEEE 802.11s standard relies on MAC-based routing and thus requires the availability of MAC addresses of destinations. Due to large size of AMI networks, this creates a broadcast storm problem when such information is to be obtained via Address Resolution Protocol (ARP) broadcast packets. In this paper, we propose a mechanism to significantly alleviate such broadcast storm problem in order to improve the scalability of 802.11s and thus make it better suited for Smart Grid AMI applications. Our contribution is adapting 802.11s standard for addressing ARP broadcast storm problem in a secure and efficient manner. Specifically, we utilize the proactive Path Request (PREQ) packet and Path Reply (PREP) of layer-2 path discovery protocol of 802.11s, namely HWMP, for piggybacking ARP information. In this way, the MAC address resolution is handled during routing tree creation/maintenance and hence the broadcasting of ARP requests by the smart meters (SMs) to learn the MAC address of the data collector (i.e., the gateway/root node) is completely eliminated. Furthermore, since piggybacking the ARP via PREQ may pose vulnerabilities for possible ARP cache poisoning attacks, the data collector also authenticates the messages it sends to SMs by using Elliptic Curve Digital Signature Algorithm (ECDSA). We have extensively analyzed the behavior and overhead of the proposed mechanism using implementation of IEEE 802.11s in ns-3 simulator. The evaluations for both UDP and TCP show that compared to the original ARP broadcast operations, our approach reduces the end-to-end delay significantly without negatively impacting the packet delivery ratio and throughput.
PARP-S: A secure piggybacking-based ARP for IEEE 802.11s-based Smart Grid AMI networks
S0140366414002825
The massive integration of renewable energy sources into the power grid ecosystem with the aim of reducing carbon emissions must cope with their intrinsically intermittent and unpredictable nature. Therefore, the grid must improve its capability of controlling the energy demand by adapting the power consumption curve to match the trend of green energy generation. This could be done by scheduling the activities of deferrable and/or interruptible electrical appliances. However, communicating the users’ needs about the usage of their appliances also leaks sensitive information about their habits and lifestyles, thus raising privacy concerns. This paper proposes a framework to allow the coordination of energy consumption without compromising the privacy of the users: the service requests generated by the domestic appliances are divided into crypto-shares using the Shamir Secret Sharing scheme and collected through an anonymous routing protocol by a set of schedulers, which schedule the requests by operating directly on the shares. We discuss the security guarantees provided by our proposed infrastructure and evaluate its performance, comparing it with the optimal scheduling obtained by means of an Integer Linear Programming formulation.
Privacy-friendly load scheduling of deferrable and interruptible domestic appliances in Smart Grids
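The cryptographic primitive named above, Shamir Secret Sharing, splits each request value into shares such that any t of n shares reconstruct it. A minimal sketch over a prime field follows; the field size and (t, n) parameters are illustrative only.

```python
# Sketch of Shamir's (t, n) secret sharing over a prime field (Python 3.8+ for modular inverse).
import random

P = 2_147_483_647                  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=1234, t=3, n=5)
print(reconstruct(shares[:3]))      # any 3 of the 5 shares recover 1234
```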
S0140366414002989
Smart technologies play a key role in sustainable economic growth. They transform houses, offices, factories, and even cities into autonomic, self-controlled systems, often acting without human intervention and thus sparing people the routine work of collecting and processing information. This paper gives an overview of a novel Wi-Fi technology, currently under development, which aims to organize communication between the various devices used in such applications as smart grids, smart meters, smart houses, smart healthcare systems, and smart industry.
A survey on IEEE 802.11ah: An enabling networking technology for smart cities
S0140366414002990
Vehicular Ad-Hoc Networks (VANETs) will play an important role in Smart Cities and will support the development of not only safety applications, but also car smart video surveillance services. Recent improvements in multimedia over VANETs allow drivers, passengers, and rescue teams to capture, share, and access on-road multimedia services. Vehicles can cooperate with each other to transmit live flows of traffic accidents or disasters and provide drivers, passengers, and rescue teams rich visual information about a monitored area. Since humans will watch the videos, their distribution must be done by considering the provided Quality of Experience (QoE) even in multi-hop, multi-path, and dynamic environments. This article introduces an application framework to handle this kind of services and a routing protocol, the DBD (Distributed Beaconless Dissemination), that enhances the dissemination of live video flows on multimedia highway VANETs. DBD uses a backbone-based approach to create and maintain persistent and high quality routes during the video delivery in opportunistic Vehicle to Vehicle (V2V) scenarios. It also improves the performance of the IEEE 802.11p MAC layer, by solving the Spurious Forwarding (SF) problem, while increasing the packet delivery ratio and reducing the forwarding delay. Performance evaluation results show the benefits of DBD compared to existing works in forwarding videos over VANETs, where main objective and subjective QoE results are measured.
A distributed beaconless routing protocol for real-time video dissemination in multimedia VANETs
S0140366414003004
Mobile network providers face an ever-increasing number of mobile devices requesting similarly increasing amounts of data. In this article, we present a two-step approach to modeling and simulating the amounts of data produced by mobile devices, based on applications that are highly utilized on the network. In the first step, we separate the applications on a mobile device into highly utilized and background ones for the overall population to be modeled. With the identified overall application groups, we employ a four-state Hidden Markov Model to capture the characteristics of the high-utilization applications as aggregates per device; the characteristics of the background applications are matched to four states, dependent on the high-utilization aggregates’ states. Utilizing the Exponential distribution for both, we closely match their original user-based characteristics. The suitability of our model is finally corroborated through simulation-based comparisons of estimates of the bandwidth requirements of individual users; our model’s estimates are typically within ten percent of the original values.
Capacity level modeling of mobile device bandwidth requirements employing high utilization mobile applications
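A minimal generative sketch of the modelling idea follows: a four-state Markov chain whose states emit exponentially distributed data volumes, standing in for the high-utilization aggregate. The transition matrix and mean volumes are placeholders, not the values fitted in the article, and parameter fitting (e.g. Baum-Welch) is not shown.

```python
# Sketch: generate synthetic traffic volumes from a 4-state chain with exponential emissions.
import numpy as np

rng = np.random.default_rng(7)
T = np.array([[0.70, 0.15, 0.10, 0.05],
              [0.20, 0.60, 0.15, 0.05],
              [0.10, 0.20, 0.60, 0.10],
              [0.05, 0.10, 0.25, 0.60]])      # placeholder state transition probabilities
mean_mb = np.array([0.1, 1.0, 5.0, 20.0])     # placeholder mean data volume (MB) per state

def simulate(n_steps, state=0):
    volumes = []
    for _ in range(n_steps):
        volumes.append(rng.exponential(mean_mb[state]))
        state = rng.choice(4, p=T[state])
    return np.array(volumes)

traffic = simulate(10_000)
print("mean volume per interval (MB):", traffic.mean())
```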
S0140366414003120
Multi-tier networks, comprising a macro-cellular network overlaid with low-power, short-range, home base station-like femtocells, provide an economically feasible solution for meeting the unrelenting traffic demands. However, femtocells that use co-channel allocation with macrocells cause cross-tier interference, which eventually degrades system performance. It is for this reason that cross-polarized data transmission is proposed in this paper as a potential approach towards improving the spectral efficiency of cellular systems while at the same time permitting co-channel allocation. Here, two independent information channels occupying the same frequency band can be transmitted over a single link. The paper evaluates a scenario where the femtocell network uses right hand circular polarization (RHCP) and the macrocell network uses left hand circular polarization (LHCP) for signal transmission. The polarizations, being orthogonal to each other due to their sense of rotation, ensure isolation between the networks and enable both of them to use the same spectral resources simultaneously. Analytical and simulation results show that this opens up an easily implementable and remarkable opportunity to increase system capacity in the context of two-tier femto–macro networks. The paper closes by discussing the technical challenges involved in the implementation as well as possible solutions to overcome them.
Orthogonal circular polarized transmission for interference control in femto–macro networks
S0140366414003326
Barrier coverage is one of the most important applications of wireless sensor networks. It is used to detect mobile objects entering the boundary of a sensor network field. Energy efficiency is one of the main concerns in barrier coverage for wireless sensor networks, and its solution can be widely used in sensor barrier applications such as intrusion detection and border security. In this work, we take energy efficiency as the objective of the study on barrier coverage. The cost in the present paper can be any performance measurement and is normally defined as any resource consumed by the sensor barrier. In this paper, the barrier coverage problem is first modeled based on a stochastic coverage graph. Then, a distributed learning automata-based method is proposed to find a near-optimal solution to the stochastic barrier coverage problem, which seeks the minimum number of sensor nodes required to construct a sensor barrier path. To study the performance of the proposed method, computer simulations are conducted. The simulation results show that the proposed algorithm significantly outperforms the greedy-based algorithm and the optimal method in terms of the number of network barrier paths.
Stochastic barrier coverage in wireless sensor networks based on distributed learning automata
S0140366414003648
The widespread use of mobile and high definition video devices is changing Internet traffic, with a significant increase in multimedia content, especially video on demand (VoD) and Internet protocol television (IPTV). However, the success of these services is strongly related to the video quality perceived by the user, also known as quality of experience (QoE). This paper reviews current methodologies used to evaluate the quality of experience of a video streaming service. A typical video assessment diagram is described, and analyses of the subjective, objective, and hybrid approaches are presented. Finally, considering the moving-target scenario of mobile and high definition devices, the text outlines challenges and future research directions that should be considered in the measurement and assessment of the quality of experience for video streaming services.
A concise review of the quality of experience assessment for video streaming
S0140366414003661
Streaming Internet Protocol Television (IPTV) traffic over vehicular ad hoc networks (VANETs) is challenging because of frequent VANET topology changes, paired with the real-time nature of IPTV traffic, which requires high bandwidth and strict service quality. For VANETs to deliver better quality IPTV traffic, the network must satisfy the demanding Quality of Service (QoS) requirements of real-time traffic streaming, such as jitter, bandwidth, delay, and loss. Numerous metrics are defined in the literature for evaluating multimedia streaming traffic QoS, such as the video quality metric, peak signal-to-noise ratio, moving picture quality metric, and many more. However, these metrics rely upon an objective approach to quality evaluation that requires the original signal as a reference and cannot isolate the impact of network impairments on video quality. They therefore fail to provide any mapping between the network QoS parameters and the respective deteriorated quality of the multimedia traffic. Similarly, such procedures are not practically applicable to VANETs, whose network characteristics make it practically impossible to access the reference video sequence. Hence, in this paper, we conduct an experiment to determine the feasibility of delivering a qualitative IPTV service over VANETs. We derive an analytical model to quantify the parameters influencing IPTV QoS, establishing relationships between the variables of each parameter and their impact on the IPTV traffic QoS. Through extensive experiments, we evaluate the IPTV transmission QoS parameters to ensure priority handling for bandwidth allocation and to keep delay and loss at a negligible level.
Network centric QoS performance evaluation of IPTV transmission quality over VANETs
S0140366414003697
Resource allocation is an important issue that has not been well resolved in multi-relay multi-user Decode-and-Forward (DF) OFDM systems with subcarrier-pairing at relays in the presence of heterogeneous flows, i.e., simultaneous real-time (RT) and non-real-time (NRT) traffic. In this paper, we address the issue by first formulating it as a joint optimization problem of relay-user pair selection, subcarrier-pair assignment and power allocation and then solving it through dual decomposition and subgradient methods. Asymptotic optimal iterative algorithms with polynomial complexity are proposed with the objective of maximizing sum-transmission-rate of NRT traffic while providing quality-of-service (QoS) guarantee for RT traffic, in two cases: (i) with total network power constraint and (ii) with individual power constraints. Effective selections of initial dual variables and stepsize for the subgradient method are also presented. Simulation results are provided in multiple scenarios and they show that: (i) the proposed algorithms outperform the existing schemes in terms of providing QoS requirements of RT users and maximizing sum network transmission rates; (ii) algorithms designed under total power constraint exploit the power resource much better than those under individual power constraints.
Resource allocation in multi-relay multi-user OFDM systems for heterogeneous traffic
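To illustrate the dual decomposition and subgradient machinery referenced above on something much simpler than the paper's joint pairing/assignment problem, the sketch below applies a projected subgradient update to a toy sum-rate maximization with a single total power constraint.

```python
# Sketch: projected subgradient ascent on the dual of a toy power-constrained rate problem.
import numpy as np

g = np.array([0.5, 1.0, 2.0, 4.0])        # illustrative channel gains
P_total = 10.0
lam, step = 1.0, 0.1                       # dual variable and base step size

for k in range(200):
    p = np.maximum(0.0, 1.0 / lam - 1.0 / g)          # water-filling powers given lambda
    subgrad = P_total - p.sum()                        # subgradient of the dual function
    lam = max(1e-6, lam - step / (k + 1) * subgrad)    # projected, diminishing-step update

print("powers:", p, "total:", p.sum(), "sum rate:", np.log2(1 + p * g).sum())
```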
S0140366415000328
Efficient message dissemination is of utmost importance to propel the development of useful services and applications in Vehicular ad hoc Networks (VANETs). In this paper, we propose a novel adaptive system that allows each vehicle to automatically adopt the most suitable dissemination scheme in order to fit the warning message delivery policy to each specific situation. Our mechanism uses as input parameters the vehicular density and the topological characteristics of the environment where the vehicles are located, in order to decide which dissemination scheme to use. We compare our proposal with respect to two static dissemination schemes (eMDR and NJL), and three adaptive dissemination systems (UV-CAST, FDPD, and DV-CAST). Simulation results demonstrate that our approach significantly improves upon these solutions, being able to support more efficient warning message dissemination in all situations ranging from low densities with complex maps, to high densities in simple scenarios. In particular, RTAD improves existing approaches in terms of percentage of vehicles informed, while significantly reducing the number of messages sent, thus mitigating broadcast storms.
RTAD: A real-time adaptive dissemination system for VANETs
S0140366415000560
In typical mobile sensing architectures, sensing data are collected from users and stored in centralized servers at third parties, making it difficult to effectively protect users’ privacy. A better way to protect privacy is to upload sensing data on personal data stores, which are owned and controlled by the users, enabling them to supervise and limit personal data disclosure and exercise access control to their data. The problem however remains how data requesters can discover the users who can offer them the data they need. In this paper we suggest a mobile sensing platform that enables data requesters to discover data producers within a specific geographic region and acquire their data. Our platform protects the anonymity of both requesters and producers, while at the same time it enables the incorporation of trust frameworks, incentive mechanisms and privacy-respecting reputation schemes. We also present extensive experimental results that demonstrate the efficiency of our approach in terms of scalability, load balancing and performance.
A platform for privacy protection of data requesters and data providers in mobile sensing
S0140366415000572
Currently, more and more mobile terminals embed a number of sensors and generate massive amounts of data. Effective utilization of such information can enable people to get more personalized services, and can also help service providers sell their products accurately. As this information may contain private information about people, it is typically encrypted before being transmitted to the service providers. This, however, significantly limits the usability of the data due to the difficulty of searching over encrypted data. To address the above issues, in this paper we first leverage the secure kNN technique to propose an efficient and privacy-preserving multi-feature search scheme for mobile sensing. Furthermore, we propose an extended scheme, which can personalize queries based on historical search information and return more accurate results. Through analysis, we prove the security of the proposed scheme with respect to privacy protection of the index and trapdoor and the unlinkability of trapdoors. Via extensive experiments on real-world cloud systems, we validate the performance of the proposed scheme in terms of functionality and computation and communication overhead.
Achieving efficient and privacy-preserving multi-feature search for mobile sensing
S0140366415000584
At present, in the mobile sensing environment, almost all existing secure large data object dissemination algorithms are centralized. The centralized servers publicize the sensing tasks and are also the authorized parties that initiate sensed data dissemination. This paper proposes a novel social role and network coding based secure distributed data dissemination algorithm, referred to as PRXeluge, to overcome the shortcomings of existing centralized data dissemination algorithms. Unlike existing participatory sensing applications, in PRXeluge the service provider just publicizes the sensing tasks and utilizes a conditional proxy re-signature technique to authorize different social roles, such as authorized smartphone users acting as contracted picture reporters, which sense the data and directly disseminate the sensed large data. Furthermore, PRXeluge proposes an XOR (Exclusive-OR) network coding scheme on the basis of the Seluge security framework. To maximize the number of successfully decoded packets, PRXeluge introduces a neighbor node table to determine the optimal coding scheme. Experimental results reveal that the proposed PRXeluge shows better performance in terms of lower data packet transmission and dissemination delay compared to Seluge. Furthermore, it is observed from the experiments that the proposed algorithm is stronger than the centralized scheme and performs fine-grained access control without imposing any additional load on subscriber nodes.
Social role-based secure large data objects dissemination in mobile sensing environment
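The XOR network coding step mentioned above can be illustrated in a few lines: two plaintext packets are combined into one coded packet, and a neighbour holding either original can recover the other. Packet contents here are arbitrary placeholders.

```python
# Sketch: XOR-combine two packets and recover one of them from the coded packet.
def xor_packets(a: bytes, b: bytes) -> bytes:
    if len(a) != len(b):                      # pad the shorter packet with zero bytes
        n = max(len(a), len(b))
        a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"data-chunk-A", b"data-chunk-B"
coded = xor_packets(p1, p2)                   # single coded packet transmitted
recovered_p2 = xor_packets(coded, p1)         # a node that already has p1 recovers p2
print(recovered_p2 == p2)                     # True
```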
S0140366415000729
Mobile phone sensing is a new paradigm which takes advantage of smart phones to collect and analyze data at large scale but with a low cost. Supporting pervasive communications among mobile devices in such a large-scale mobile social network becomes a key challenge for this new mobile sensing system. One possible solution is allowing packet delivery among mobile devices via opportunistic communications during intermittent contacts. However, the lack of rich contact opportunities still causes poor delivery ratios and long delays, especially for large-scale networks. Deployment of additional stationary throwboxes can create a greater number of contact opportunities, thus improving the performance of routing. However, the locations of the deployed throwboxes are critical to such improvement. In this paper, we investigate where to deploy throwboxes in a large-scale throwbox-assisted mobile social DTN. By leveraging the social properties discovered from real-life tracing data, we propose a set of social-based throwbox placement algorithms which smartly pick the location of each throwbox. Extensive simulations are conducted with a real-life wireless tracing dataset and a wide range of existing DTN routing methods. The results confirm the efficiency of the proposed methods.
Social based throwbox placement schemes for large-scale mobile social delay tolerant networks
S0140366415001279
Poor medication adherence is a prevalent medical problem resulting in significant morbidity and mortality, especially for elder adults. In this paper, we propose a Socialized Prompting System (SPS), which combines ubiquitous sensors in the smart home and mobile social networks to improve medication adherence. Ubiquitous sensors benefit the seamless monitoring of medication intake behaviors, while the mobile social networks contribute to social prompting in a community. The mechanisms of medication monitoring with ubiquitous sensors and the collaborative prompting based on mobile social network are presented. The experimental results showed that the medication adherence of the testing subjects has been improved by using the proposed system.
Facilitating medication adherence in elderly care using ubiquitous sensors and mobile social networks
S0140366415001589
With the rapid proliferation of data centers, their energy consumption and greenhouse gas emissions have increased significantly. Some efforts have been made to control and lower the energy consumption of data centers, such as energy-proportional hardware, dynamic provisioning, and virtual machine techniques. However, it is still common that many servers and network resources are underutilized, and idle servers draw a large portion of their peak power. We first build a novel model of virtual network embedding that minimizes energy usage in data centers for both computing and network resources while taking practical factors into consideration. Because the proposed model is NP-hard, we develop a heuristic algorithm for virtual network scheduling and mapping. In doing so, we specifically take into account the expected energy consumption at different times, virtual network operation and future migration costs, and the data center architecture. Our extensive evaluation results show that our algorithm can reduce energy consumption by up to 40% and accept up to 57% more virtual network requests than other existing virtual mapping schemes.
Energy efficient virtual network embedding for green data centers using data center topology and future migration
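To make the energy-aware embedding intuition above concrete, here is a toy first-fit-decreasing style mapping of virtual nodes onto servers that packs demand onto already-active machines so idle servers can remain powered off. It is a sketch under simplifying assumptions (node mapping only, uniform server capacity); the paper's heuristic additionally handles link mapping, time-varying energy consumption and migration costs.

```python
# Toy greedy mapping of virtual nodes onto servers, consolidating load onto
# already-active machines first so that idle servers can stay powered off.
# This only illustrates the energy-aware intuition, not the paper's algorithm.

def greedy_map(vnode_demands, server_capacity, num_servers):
    """Return {vnode: server_index} or None if some demand cannot be placed."""
    load = [0.0] * num_servers
    mapping = {}
    for vnode, demand in sorted(vnode_demands.items(), key=lambda kv: -kv[1]):
        # prefer the busiest server that still fits (first-fit-decreasing style)
        candidates = [s for s in range(num_servers) if load[s] + demand <= server_capacity]
        if not candidates:
            return None
        target = max(candidates, key=lambda s: load[s])
        load[target] += demand
        mapping[vnode] = target
    return mapping

if __name__ == "__main__":
    # v1 and v2 share one server; v3 opens a second one; the third stays idle
    print(greedy_map({"v1": 0.5, "v2": 0.4, "v3": 0.3}, server_capacity=1.0, num_servers=3))
```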
S0140366415002121
Smart grid combines a set of functionalities that can only be achieved through ubiquitous sensing and communication across the electrical grid. The communication infrastructure must be able to cope with an increasing number of traffic types resulting from increased control and monitoring, the penetration of renewable energy sources, and the adoption of electric vehicles. The communication infrastructure must serve as a substrate that supports different traffic requirements such as QoS (i.e., latency, bandwidth and delay) across an integrated communication system. This motivates middleware systems that consider the QoS requirements of different types of traffic in order to allow their prompt delivery in a smart grid system. A heterogeneous communication approach based on adapting the Ubiquitous Sensor Network (USN) layered structure to the smart grid has been proposed by the International Telecommunication Union (ITU). This paper explores the ITU's USN architecture and presents the communication technologies that can be deployed within the USN schematic layers for secure and resilient communication, together with a study of their pros and cons, vulnerabilities and challenges. It also discusses the factors that can affect the selection of communication technologies and suggests possible communication technologies at different USN layers. Furthermore, the paper highlights the USN middleware system as an important mechanism to tackle scalability and interoperability problems as well as to shield the communication complexities and heterogeneity of the smart grid.
Resilient communication for smart grid ubiquitous sensor network: State of the art and prospects for next generation
S0140366415002169
Location-based social networks (LBSNs) have recently attracted a lot of attention due to the number of novel services they can offer. Prior work on the analysis of LBSNs has mainly focused on the social part of these systems. Even though it is important to know how the structure of the social graph of an LBSN differs from that of friendship-based social networks (SNs), it raises the interesting question of what kinds of linkages exist between locations and friendships. The main problem we investigate is identifying such connections between the social and the spatial planes of an LBSN. In particular, in this paper we focus on answering the following general question: “What are the bonds between the social and spatial information in an LBSN and what are the metrics that can reveal them?” In order to tackle this problem, we employ the idea of affiliation networks. Analyzing a dataset from a specific LBSN (Gowalla), we make two main interesting observations: (i) the social network exhibits signs of homophily with regard to the “places/venues” visited by the users, and (ii) the “nature” of the visited venues that are common to users is powerful and informative in revealing the social/spatial linkages. We further show that the “entropy” of a venue can be used to better connect spatial information with the existing social relations. The entropy records the diversity of a venue and requires only the location history of users (it does not need temporal history). Finally, we provide a simple application of our findings for predicting existing friendship relations based on users’ historic spatial information. We show that even with simple unsupervised or supervised learning models we can achieve significant improvement in prediction when we consider features that capture the “nature” of the venue, compared to the case where only apparent properties of the location history are used (e.g., number of common visits).
Socio-spatial affiliation networks
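A small worked example of the venue “entropy” mentioned above, computed here as the Shannon entropy of a venue's check-in distribution across users; the exact estimator used in the paper may differ, and the check-in lists below are invented for illustration.

```python
# Hedged sketch of venue entropy: Shannon entropy of the distribution of a
# venue's check-ins across users.  High entropy -> diverse, "public" venue;
# low entropy -> venue dominated by few users, hence socially more telling.
import math
from collections import Counter

def venue_entropy(checkin_users):
    """checkin_users: list of user ids who checked in at the venue (with repeats)."""
    counts = Counter(checkin_users)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    airport = ["u%d" % i for i in range(100)]   # 100 different visitors
    home = ["alice"] * 50 + ["bob"] * 2         # dominated by one user
    print(venue_entropy(airport))   # high entropy: a diverse, public place
    print(venue_entropy(home))      # low entropy: a private, revealing place
```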
S0140366415002248
Distributed reflective denial of service (DRDoS) attacks, especially those based on UDP reflection and amplification, can generate hundreds of gigabits per second of attack traffic and have become a significant threat to Internet security. In this paper we show that an attacker can make the DRDoS attack even more dangerous. In particular, we describe a new DRDoS attack called store-and-flood DRDoS, or SF-DRDoS, which leverages peer-to-peer (P2P) file-sharing networks. An attacker can store carefully prepared data on reflector nodes before the flooding phase to greatly increase the amplification factor of an attack. In this way, SF-DRDoS is more surreptitious and powerful than traditional DRDoS. We present two prototype SF-DRDoS attacks on two popular Kademlia-based P2P file-sharing networks, Kad and BT-DHT. Experiments in real-world environments showed that this attack can achieve an amplification factor of 2400 on average in Kad, and reach an upper bound of attack bandwidth of 670 Gbps and 10 Tbps for Kad and BT-DHT, respectively. We also propose some candidate defenses to mitigate the SF-DRDoS threat.
SF-DRDoS: The store-and-flood distributed reflective denial of service attack
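The amplification factor discussed above is simply the ratio of bytes reflected toward the victim to bytes sent by the attacker; the tiny sketch below spells out that arithmetic. The request and response sizes are placeholder values chosen only to reproduce the reported average ratio of 2400, not measurements from the paper.

```python
# Back-of-the-envelope amplification-factor arithmetic for a reflective attack.
# The request/response sizes below are made-up placeholders, not measurements.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification = bytes reflected toward the victim per byte sent."""
    return response_bytes / request_bytes

if __name__ == "__main__":
    # e.g. a 100-byte spoofed lookup that triggers 240 kB of stored data in replies
    print(amplification_factor(100, 240_000))   # -> 2400.0
```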
S0140366415002273
Location-aware applications are one of the biggest innovations brought by the smartphone era, and are effectively changing our everyday lives. But we are only starting to grasp the privacy risks associated with constant tracking of our whereabouts. In order to continue using location-based services in the future without compromising our privacy and security, we need new, privacy-friendly applications and protocols. In this paper, we propose a new compact data structure based on Bloom filters, designed to store location information. The spatial Bloom filter (SBF), as we call it, is designed with privacy in mind, and we prove it by presenting two private positioning protocols based on the new primitive. The protocols keep the user’s exact position private, but allow the provider of the service to learn when the user is close to specific points of interest, or inside predefined areas. At the same time, the points and areas of interest remain oblivious to the user. The two proposed protocols are aimed at different scenarios: a two-party setting, in which communication happens directly between the user and the service provider, and a three-party setting, in which the service provider outsources to a third party the communication with the user. A detailed evaluation of the efficiency and security of our solution shows that privacy can be achieved with minimal computational and communication overhead. The potential of spatial Bloom filters in terms of generality, security and compactness makes them ready for deployment, and may open the way for privacy preserving location-aware applications.
Location privacy without mutual trust: The spatial Bloom filter
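As a hedged sketch of the spatial Bloom filter idea above, the Python class below replaces the usual one-bit cells with area labels, so a membership query can report which area of interest (if any) a position falls in. The hashing, sizing and label-resolution rule are assumptions made for illustration and may differ from the construction defined in the paper.

```python
# Minimal sketch of a spatial-Bloom-filter-like structure: each cell stores an
# area label instead of a single bit, so a query can say *which* area of
# interest (if any) a position falls in, with Bloom-style false positives.
import hashlib

class SpatialBloomSketch:
    def __init__(self, size=1024, num_hashes=4):
        self.size, self.num_hashes = size, num_hashes
        self.cells = [0] * size          # 0 means "not covered by any area"

    def _positions(self, element: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{element}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, element: str, area_label: int):
        for pos in self._positions(element):
            self.cells[pos] = max(self.cells[pos], area_label)

    def query(self, element: str):
        labels = [self.cells[pos] for pos in self._positions(element)]
        return min(labels) if all(labels) else None   # None: outside all areas

if __name__ == "__main__":
    sbf = SpatialBloomSketch()
    sbf.add("cell_52.52_13.40", area_label=2)   # a covered map cell, label 2
    print(sbf.query("cell_52.52_13.40"))        # -> 2
    print(sbf.query("cell_48.85_2.35"))         # -> None with high probability
```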
S0140366415002297
In online systems for videos, music or books, users’ behaviors are disclosed to the recommender systems so that the systems can learn their interests. Such disclosure raises a serious public concern about the leak of users’ privacy. Meanwhile, some algorithms have been proposed to obfuscate users’ historical behavior records to protect their privacy, at the cost of degraded recommendation accuracy. It is a common belief that such a tradeoff is inevitable. In this paper, however, we break this pessimistic belief based on the fact that people's interests are not necessarily limited to items which are geared to a certain gender, age, or profession. Based on this idea, we propose a recommendation-friendly privacy-preserving framework by introducing a privacy-preserving module between the recommender system and the user side. For instance, to obfuscate a female user's gender information, the privacy-preserving module adds a set of extra factitious ratings of movies not watched by the given user. These added movies are selected to be those mostly watched by male viewers but still interesting to the given female user. Extensive experiments show that our algorithm not only obfuscates users’ private information, e.g., gender, efficiently, but also maintains or even improves recommendation accuracy.
Can user privacy and recommendation performance be preserved simultaneously?
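One way to picture the obfuscation step described above is a scoring rule that pads a user's profile with unrated items that are both skewed toward the attribute value to hide (e.g., mostly watched by male viewers) and close to the user's taste. The linear score and the toy data below are assumptions for illustration; the paper's selection mechanism is more elaborate.

```python
# Illustrative selection of factitious ratings: rank unrated items by
# (predicted interest * gender skew) and add the top-k to the profile.
# Scoring rule and data are invented, not the paper's method.

def pick_obfuscation_items(user_scores, male_skew, k=3, already_rated=()):
    """user_scores: predicted interest per item; male_skew: fraction of male viewers.
    Returns k unrated items ranked by interest weighted with gender skew."""
    candidates = [
        (item, user_scores[item] * male_skew.get(item, 0.5))
        for item in user_scores if item not in already_rated
    ]
    return [item for item, _ in sorted(candidates, key=lambda kv: -kv[1])[:k]]

if __name__ == "__main__":
    interest = {"action_a": 0.8, "romance_b": 0.9, "scifi_c": 0.7, "war_d": 0.6}
    skew = {"action_a": 0.85, "romance_b": 0.2, "scifi_c": 0.8, "war_d": 0.9}
    print(pick_obfuscation_items(interest, skew, k=2, already_rated={"romance_b"}))
    # -> ['action_a', 'scifi_c']
```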
S0140366415002303
In a question-driven survey, the answers to one question may decide which question is presented next. In this case, encrypting the answers of the participants is not enough to protect their privacy, since the system can still learn them by inspecting the next question the participants request. In this article, we explore the technologies involved in surveys performed through a mobile phone. Participants receive the questions using VoIP technologies and, since their answers affect which questions are presented next, they must protect the selection of the relevant questions. In addition, this paper considers the performance of the proposed encryption technologies on mobile phones. Finally, the answers to the poll must be sent to the server. This paper proposes an eVoting framework to preserve the privacy of the users while sending the answers to the system. Such a scenario involves many different communication channels and technologies. As we will show, the decisions taken in some of the modules force some technologies and decisions in the others.
Private surveys on VoIP
S0140366415002315
High quality online video streaming, both live and on-demand, has become an essential part of many consumers’ lives. The popularity of video streaming, however, places a burden on the underlying network infrastructure, because it must deliver significant amounts of data to users in a time-critical manner. The Video-on-Demand (VoD) distribution paradigm uses an independent unicast flow for each user request. This results in multiple duplicate flows carrying the same video assets, which only exacerbates the burden placed upon the network. In this paper we present OpenCache: a highly configurable, efficient and transparent in-network caching service that aims to improve VoD distribution efficiency by caching video assets as close to the end-user as possible. OpenCache leverages Software Defined Networking technology to benefit last mile environments by improving network utilisation and increasing the Quality of Experience for the end-user. Our evaluation on a pan-European OpenFlow testbed uses adaptive bitrate video to demonstrate that with the use of OpenCache, streaming applications play back higher quality video and experience increased throughput, higher bitrate, and shorter start-up and buffering times.
Using Software Defined Networking to enhance the delivery of Video-on-Demand
S0140366415002327
Wireless sensor networks adopting static data gathering may suffer from unbalanced energy consumption due to non-uniform packet relay. Although mobile data gathering provides a reasonable approach to solving this problem, it inevitably introduces longer data collection latency due to the use of mobile data collectors. Meanwhile, energy harvesting has been considered a promising solution to relieve energy limitations in wireless sensor networks. In this paper, we consider a joint design of these two schemes and propose a novel two-layer heterogeneous architecture for wireless sensor networks, which consists of two types of nodes: sensor nodes, which are static and powered by solar panels, and cluster heads, which have limited mobility and can be wirelessly recharged by power transporters. Based on this network architecture, we present a data gathering scheme, called mobility assisted data gathering with solar irradiance awareness (MADG-SIA), where sensor nodes are clustered around cluster heads that adaptively change their positions according to solar irradiance, and the sensed data are forwarded to the data sink by these cluster heads acting as data aggregation points. We evaluate the performance of the proposed scheme by extensive simulations, and the results show that MADG-SIA provides significant improvement in terms of balancing energy consumption and the amount of data gathered compared to previous work.
Mobility assisted data gathering with solar irradiance awareness in heterogeneous energy replenishable wireless sensor networks
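One plausible, heavily simplified reading of the irradiance-aware repositioning above is a weighted-centroid rule that pulls a cluster head toward poorly lit member nodes so their transmission distances (and energy drain) shrink. The weighting rule below is an assumption made purely for illustration, not the scheme defined in the paper.

```python
# Hypothetical weighted-centroid repositioning of a cluster head: members with
# low solar irradiance get more weight, so the head moves closer to them.
# This weighting rule is an illustrative assumption, not the paper's algorithm.

def reposition_cluster_head(members):
    """members: list of (x, y, irradiance) with irradiance in (0, 1]."""
    weights = [1.0 / max(irr, 1e-3) for _, _, irr in members]
    total = sum(weights)
    x = sum(w * m[0] for w, m in zip(weights, members)) / total
    y = sum(w * m[1] for w, m in zip(weights, members)) / total
    return x, y

if __name__ == "__main__":
    nodes = [(0.0, 0.0, 0.9), (10.0, 0.0, 0.1), (0.0, 10.0, 0.8)]
    print(reposition_cluster_head(nodes))  # pulled toward the poorly lit node at (10, 0)
```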
S0140366415002339
Social exchange theory proposes that social behavior is the result of an exchange process. The purpose of this exchange is to maximize benefits and minimize costs. Online social networks seem to be an ideal platform for social exchange because they provide an opportunity to maintain social relations at a relatively low cost compared to offline relations. This theory has been verified positively many times for offline social interactions, and we decided to examine whether it may also be applied to online social networks. Our research is focused on reciprocity, which is crucial for social exchanges because humans keep score, assign meaning to exchanges, and change their subsequent interactions based on a reciprocity balance. The online social network platform of our choice was Facebook, one of the most successful online social sites, which allows users to interact with their friends and acquaintances. In our study we found strong empirical evidence that an increase in the number of reciprocity messages an actor broadcasts in an online social network increases the reciprocity reactions from his or her audience. This finding allowed for positive verification of the social exchange theory in online communities. Hence, it can be stated that our work contributes to theories of exchange patterns in online social networks.
Social exchange in online social networks. The reciprocity phenomenon on Facebook
S0140366415002340
The stock market is a popular topic on Twitter. The number of tweets concerning a stock varies over days and sometimes exhibits a significant spike. In this paper, we investigate the relationship between Twitter volume spikes and stock options pricing. We start with the underlying assumption of the Black–Scholes model, the most widely used model for stock options pricing, and investigate when this assumption holds for stocks that have Twitter volume spikes. We find that the assumption is less likely to hold in the time period before a Twitter volume spike, and is more likely to hold afterwards. In addition, the volatility of a stock is significantly lower after a Twitter volume spike than before the spike. We also find that implied volatility increases sharply before a Twitter volume spike and decreases quickly afterwards, and that put options tend to be priced higher than call options. Last, we find that right after a Twitter volume spike, options may still be overpriced. Based on the above findings, we propose a put spread selling strategy for stock options trading. Realistic simulation of a portfolio using one year of stock market data demonstrates that, even in a conservative setting, this strategy achieves a 34.3% gain after accounting for commissions and the bid-ask spread, while the S&P 500 only increased 12.8% over the same period.
Twitter volume spikes and stock options pricing
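For readers unfamiliar with the strategy named above, the snippet below computes the expiration payoff of a generic short (credit) put spread: sell a put at a higher strike, buy a put at a lower strike, and keep the net credit if the stock stays above the short strike. The strikes, premium and spot prices are invented numbers, not values from the paper's simulation.

```python
# Generic short (credit) put-spread payoff at expiration; numbers are invented.

def short_put_spread_pnl(spot, short_strike, long_strike, net_credit):
    """Sell a put at short_strike, buy a put at long_strike (< short_strike)."""
    short_put_payoff = -max(short_strike - spot, 0.0)   # obligation from the sold put
    long_put_payoff = max(long_strike - spot, 0.0)      # protection from the bought put
    return net_credit + short_put_payoff + long_put_payoff

if __name__ == "__main__":
    for spot in (90.0, 97.0, 105.0):
        print(spot, short_put_spread_pnl(spot, short_strike=100, long_strike=95, net_credit=1.8))
    # max gain is the 1.8 credit (spot >= 100); max loss is 5 - 1.8 = 3.2 (spot <= 95)
```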
S0140366415002364
Large content providers, such as Google, Yahoo, and Microsoft, aim to connect directly with consumer networks and place content closer to end users. Exchanging traffic directly between end users and content providers can reduce the cost of transit services. However, direct connection to all end users is simply not feasible, and content providers by and large still rely on transit services to reach the majority of end users. We argue that routing policies are an important factor in the selection of ISPs by content providers; therefore, determining which ISP to peer with or use as a transit becomes a key question for content providers. In this paper, we formulate the policy-aware peering problem, in which we determine not only which ISP to connect with, but also the kind of peering agreement to establish. We prove that the policy-aware peering problem is NP-complete and propose a heuristic algorithm to solve it. Further, we perform a large-scale measurement study of the peering characteristics of five large content providers, and evaluate the existing peering connections deployed by the content providers. Our measurement results show that changing the existing peering agreements or adding as few as 3–5 new peering connections can significantly enhance the connection between content providers and end users.
Routing-policy aware peering for large content providers
S0140366415002406
Complex networks facilitate the understanding of natural and man-made processes and are classified based on the concepts they model: biological, technological, social or semantic. The relevant subgraphs in these networks, called network motifs, have been shown to capture core aspects of network functionality and can be used to analyze complex networks based on their topological fingerprint. We propose a novel approach to classifying social networks based on their topological aspects using motifs. As such, we define classifiers for regular, random, small-world and scale-free topologies, and then apply this classification to empirical networks. We then show how our study brings a new perspective on differentiating between online social networks like Facebook, Twitter and Google Plus based on the distribution of network motifs over the fundamental topology classes. Characteristic patterns of motifs are obtained for each of the analyzed online networks and are used to better explain the functional properties behind how people interact online and to define classifiers capable of mapping any online network to a set of topological-communicational properties.
Uncovering the fingerprint of online social networks using a network motif based approach
S0140366415002431
Unified communications has enabled seamless data sharing between multiple devices running on various platforms. Traditionally, organizations use local servers to store data, and employees access the data using desktops with predefined security policies. In the era of unified communications, employees exploit the advantages of smart devices and 4G wireless technology to access the data from anywhere and at any time. Security protocols such as access control that were designed for the traditional setup are not sufficient when integrating mobile devices with an organization’s internal network. Within this context, we exploit the features of smart devices to enhance the security of the traditional access control technique. Dynamic attributes of smart devices, such as unlock failures, application usage, location and proximity of devices, can be used to determine the risk level of an end-user. In this paper, we seamlessly incorporate these dynamic attributes into the conventional access control scheme. The inclusion of dynamic attributes provides an additional layer of security on top of conventional access control. We demonstrate that the efficiency of the proposed algorithm is comparable to that of conventional schemes.
Robust access control framework for mobile cloud computing network
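A hedged sketch of how dynamic device attributes could feed a risk score that sits on top of a conventional access-control check, in the spirit of the abstract above. The attributes, weights, threshold and the linear scoring form are all illustrative assumptions rather than the paper's actual algorithm.

```python
# Hypothetical risk scoring from dynamic device attributes layered on top of a
# conventional (static) access-control decision.  Weights and threshold are
# illustrative assumptions, not values from the paper.

def risk_score(unlock_failures, unknown_location, risky_app_running, trusted_device_nearby):
    score = 0.0
    score += 0.15 * min(unlock_failures, 5)           # repeated failures raise suspicion
    score += 0.35 if unknown_location else 0.0        # outside the user's usual locations
    score += 0.30 if risky_app_running else 0.0       # e.g. a blacklisted application in use
    score -= 0.20 if trusted_device_nearby else 0.0   # proximity of a paired device lowers risk
    return max(0.0, min(1.0, score))

def access_decision(has_static_permission, score, threshold=0.5):
    """Static (role/attribute) check first, then the dynamic risk layer."""
    return has_static_permission and score < threshold

if __name__ == "__main__":
    s = risk_score(unlock_failures=3, unknown_location=True,
                   risky_app_running=False, trusted_device_nearby=True)
    print(s, access_decision(True, s))   # 0.6 -> access denied despite the role permission
```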