null
With the increase in urban population witnessed in all major cities worldwide, efficient urban traffic planning, modeling and control are indispensable. In this paper we propose a novel macro-model for urban networks based on queuing theory and cast in a state-space representation. The model for each junction incorporates a switching mechanism capturing both the nonlinear dynamics of normal traffic conditions and the linear evolution of vehicle queues during junction block-back scenarios. Moreover, while competing models suffer a quadratic increase in computational cost with every added junction, our model exhibits only a linear increase in dimensionality and thus significantly lowers computational demands. The simulation results from the proposed model are validated against a standard micro-modelling simulation package. These results demonstrate, through a Monte Carlo simulation, that given correct model parameters the proposed macro-model can accurately capture the complex vehicle dynamics of an urban region, thus paving the way for an accurate analysis of the network and for the development of efficient urban traffic control strategies.
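The switching mechanism described in this abstract can be illustrated with a toy single-junction queue update; the specific update rule, thresholds and parameter names below are illustrative assumptions, not the paper's actual state-space model:

```python
def queue_update(q, inflow, capacity, q_max, dt=1.0):
    """One step of a switched queue model for a single junction.

    Normal regime: nonlinear outflow limited by junction capacity.
    Block-back regime: the downstream link is full (q >= q_max), so the
    queue grows linearly with the blocked inflow.
    """
    if q >= q_max:                      # block-back: linear growth
        outflow = 0.0
        q_next = q + inflow * dt
    else:                               # normal: outflow saturates at capacity
        outflow = min(q / dt + inflow, capacity)
        q_next = q + (inflow - outflow) * dt
    return max(q_next, 0.0), outflow

# Simulate a junction whose downstream link eventually blocks back.
q = 0.0
history = []
for t in range(10):
    q, out = queue_update(q, inflow=8.0, capacity=5.0, q_max=20.0)
    history.append(q)
```

In a full network model, one such state per junction gives the linear growth in dimensionality mentioned above.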
Junction Modelling;Urban Network;Macro-modelling;Junction Block-Back
WoS
null
With the proliferation of global navigation satellite systems (GNSS) in a limited number of frequency bands, the spectrum currently dedicated to satellite navigation is becoming congested. A possible path to keep modernizing the different GNSS is to find new frequency bands where a satellite radio-navigation service (RNSS) could be provided. This article takes a regulatory approach to identify potential candidate bands, which are then reviewed in detail to assess their usability for a new RNSS deployment. The results of this analysis are: (1) there exist portions of the spectrum where an RNSS allocation exists or could be added, for example by adding another RNSS allocation to the aeronautical radio-navigation service (ARNS) bands; however, such an allocation change is not planned and would require a significant commitment from administrations and long negotiations with the civil aviation community; (2) telecommunication signals of opportunity coming from a hybrid satellite-terrestrial network of emitters could be used. Among these solutions, some would make it possible to combine navigation and telecommunication services, offering a major opportunity for mass-market applications.
GNSS evolution;spectrum management;hybrid systems
WoS
null
With the increase of intelligent devices, ubiquitous computing is spreading to all areas of people's lives. Smart home (or industrial) environments include automation and control devices to save energy, perform tasks, and provide assistance and comfort in order to satisfy specific preferences. This paper focuses on a proposal for a Software Reference Architecture for the development of smart applications and their deployment in smart environments. The motivation for this Reference Architecture and its benefits are also explained. The proposal considers three main processes in the software architecture of these applications: perception, reasoning and acting. This paper centres attention on the definition of the Perception process and provides an example of its implementation and subsequent validation of the proposal. The software presented implements the Perception process of a smart environment for a standard office, retrieving data from the real world and storing it for further reasoning and acting processes. The objectives of this solution include providing comfort for users and saving energy in lighting. Through this verification, it is also shown that developments under this proposal produce major benefits within the software life cycle. (C) 2014 Elsevier B.V. All rights reserved.
Smart environment;Software architecture;Ambient intelligence;Perception
WoS
null
With the increase of intelligent systems based on Multi-Agent Systems (MAS) and the use of Wireless Sensor Networks (WSNs) in context-aware scenarios, information fusion has become an essential part of such systems, where information is distributed among nodes or agents. This paper presents a new MAS specially designed to manage data from WSNs, which was tested in a residential home for the elderly. The proposed MAS architecture is based on virtual organizations and incorporates social behaviors to improve the information fusion processes. The data that the system manages and analyzes correspond to the actual activities of a resident. Data are collected as counts of information events detected by the sensors in a specific time interval, typically one day. We have designed a system that improves the quality of life of dependent people, especially the elderly, by fusing data obtained by multiple sensors with information about their daily activities. The rapid development of systems that extract and store information makes it essential to improve the mechanisms for dealing with the avalanche of context data. In our case, the MAS approach is appropriate because each agent can represent an autonomous entity with different capabilities, offering different services while collaborating with the others. Several tests have been performed to evaluate this platform, and preliminary results and conclusions are presented in this paper. (C) 2014 Elsevier B.V. All rights reserved.
Multi-Agent Systems;Wireless Sensor Networks;Information fusion;Ambient Intelligence
WoS
null
With the increase of massive data, a large number of business applications have begun to seek effective and scalable frameworks for data storage and processing. Against this background, emerging technologies for big data, such as Hadoop-based systems that use the scalable distributed storage system HBase, have become available. Since most business data nowadays are stored in relational databases, and information imprecision and uncertainty widely exist in real-world applications, there is an increasing willingness to manage large-scale fuzzy relational data on the Hadoop-based platform. This paper concentrates on fuzzy information modeling in HBase. In particular, we investigate the formal transformation from the fuzzy relational data model to the HBase model and develop a set of mapping rules to assist in the transformation process. In addition, we present a generic approach to transform the fuzzy relational algebra into the fuzzy HBase manipulation language.
HBase;fuzzy relational data;modeling;mapping
WoS
null
With the increase in the types and numbers of household appliances, the standby energy consumption of electrical appliances has attracted growing concern. To reduce the energy waste of household appliances, a method for the energy-saving control of standby household appliances is presented in this paper. The presented method divides the indoor electrical circuits into two parts: one circuit that can be powered down and another that cannot be turned off. An algorithm that fuses the detected current information and the pyroelectric infrared sensor information is put forward to ensure that the circuit is cut off when the household appliances in the switchable circuit are in standby and nobody is at home. The software and hardware design schemes are given in detail, and the energy-saving control devices for standby appliances are developed. Finally, experiments were conducted; the experimental results show that the developed system is reliable and contributes significantly to energy conservation.
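The fusion rule described in this abstract — cut the switchable circuit only when the measured current indicates standby and the pyroelectric infrared (PIR) sensor indicates nobody is home — can be sketched in outline; the threshold and names are hypothetical, not the paper's design:

```python
STANDBY_CURRENT_A = 0.3   # assumed: below this, appliances are on standby

def should_cut_power(current_a, pir_motion_events):
    """Fuse the current measurement and PIR occupancy readings:
    power down the switchable circuit only when the measured current is
    at standby level AND no motion was detected in the recent window."""
    standby = current_a < STANDBY_CURRENT_A
    occupied = pir_motion_events > 0
    return standby and not occupied

# Appliances idle at 0.1 A, no motion detected -> cut the circuit.
decision = should_cut_power(0.1, pir_motion_events=0)
```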
Smart home;Information fusion;Household appliance;Energy saving
WoS
null
With the increase of urban population, high-rise buildings are widely used. The foundation is the basis of a building, and its quality directly affects the safety of the building. Based on a practical example of the foundation engineering of a high-rise building, this paper establishes a parameterized model of the foundation engineering using BIM technology and simulates the whole process of foundation construction. The quick and complete transmission of construction information was then accomplished through the cooperative work platform of BIM. Afterward, collision detection of reinforcement clearances was performed to catch collision problems between the reinforcements at the joints at an early stage. Compared with the traditional construction management method, it was found that the application of BIM technology can control construction quality more effectively, optimize resource allocation, and ensure the safety of foundation engineering.
BIM;Foundation engineering;Collision detection;Construction simulation;Quality control
WoS
null
With the increasing amount of available data, the need to classify large data volumes is constantly growing. In order to cope with this challenge, neural classifiers should be adapted to large-scale data. We present here a highly scalable extension to the fuzzy Adaptive Resonance Associative Map (ARAM) neural network, which was specially developed for the quick classification of high-dimensional and large data. This extension aims to increase the classification speed by adding an extra layer that clusters learned prototypes into large clusters. This enables the activation of only one or a few clusters, i.e., a small fraction of all prototypes, reducing the classification time significantly. Furthermore, we introduce two methods to adapt this extension to a multi-label classification task.
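The two-level lookup idea — activate one cluster, then search only its prototypes — can be sketched as follows; the distance metric and data structures are illustrative, not the fuzzy ARAM implementation:

```python
def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_index(prototypes, centers):
    """Assign each labeled prototype to its nearest cluster center."""
    index = {i: [] for i in range(len(centers))}
    for proto, label in prototypes:
        c = min(range(len(centers)), key=lambda i: dist(proto, centers[i]))
        index[c].append((proto, label))
    return index

def classify(x, centers, index):
    """Activate only the nearest cluster, then search its prototypes,
    instead of scanning the full prototype set."""
    c = min(range(len(centers)), key=lambda i: dist(x, centers[i]))
    proto, label = min(index[c], key=lambda pl: dist(x, pl[0]))
    return label

centers = [(0.0, 0.0), (10.0, 10.0)]
prototypes = [((0.1, 0.2), "A"), ((0.5, 0.1), "A"), ((9.8, 10.1), "B")]
index = build_index(prototypes, centers)
```

With many prototypes per cluster, each query touches only a small fraction of them, which is the source of the speed-up.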
Neural networks;Adaptive resonance theory;Clustering;Classification;Multi-label classification
WoS
null
With the increasing amount of interconnections between vehicles, the attack surface of internal vehicle networks is rising steeply. Although these networks are shielded against external attacks, they often do not have any internal security to protect against malicious components or adversaries who can breach the network perimeter. To secure the in-vehicle network, all communicating components must be authenticated, and only authorized components should be allowed to send and receive messages. This is achieved through the use of an authentication framework. Cryptography is widely used to authenticate communicating parties and provide secure communication channels (e.g., Internet communication). However, the real-time performance requirements of in-vehicle networks restrict the types of cryptographic algorithms and protocols that may be used. In particular, asymmetric cryptography is computationally infeasible during vehicle operation. In this work, we address the challenges of designing authentication protocols for automotive systems. We present Lightweight Authentication for Secure Automotive Networks (LASAN), a full lifecycle authentication approach. We describe the core LASAN protocols and show how they protect the internal vehicle network while complying with the real-time constraints and low computational resources of this domain. By leveraging the fixed structure of automotive networks, we minimize bandwidth and computation requirements. Unlike previous work, we also explain how this framework can be integrated into all aspects of the automotive product lifecycle, including manufacturing, vehicle maintenance, and software updates. We evaluate LASAN in two different ways: First, we analyze the security properties of the protocols using established protocol verification techniques based on formal methods.
Second, we evaluate the timing requirements of LASAN and compare these to other frameworks using a new highly modular discrete event simulator for in-vehicle networks, which we have developed for this evaluation.
Automotive;security;authentication;authorization;lightweight
WoS
null
With the increasing applications of ionic liquids (ILs), the toxicity of ILs has drawn increasing attention in recent years, especially the influence of different anions and alkyl-chain lengths on the acute toxicity to aquatic organisms. We performed a study on the acute toxicity of 1-alkyl-3-methylimidazolium nitrate ILs ([C(n)mim]NO3 (n = 2, 4, 6, 8, 10, 12)) and 1-hexyl-3-methylimidazolium ILs ([C(6)mim]R (R = Cl-, Br-, BF4-, PF6-)) to zebrafish (Danio rerio). We also evaluated the sensitivity of the investigated animals and the stability of the ILs in water via high-performance liquid chromatography (HPLC, Agilent 1260, Agilent Technologies Inc., USA) to prove the reliability of the present study. The results showed that the test zebrafish (Danio rerio) were sensitive to the reference toxicant and that the investigated ILs were stable in water. The 50% lethal concentration (LC50) was used to represent the acute toxicity to zebrafish (Danio rerio). The present study showed that the most toxic IL to Danio rerio is [C(12)mim]NO3 and the least toxic is [C(2)mim]NO3. The LC50 values for ILs with different anions were similar. Accordingly, we believe that alkyl-chain length has a greater effect than anion type on the acute toxicity to aquatic organisms. Furthermore, the present study can also provide scientific methods for future studies to select and assess ILs.
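For context, an LC50 can be estimated from dose-response data by interpolating mortality against log concentration, a simple stand-in for the probit-style analysis typically used in such studies; the function and data below are hypothetical illustrations, not the study's method or results:

```python
import math

def lc50(concentrations, mortality):
    """Estimate the LC50 by linear interpolation of mortality fraction
    against log10(concentration), between the two doses bracketing 50%."""
    pts = sorted(zip(concentrations, mortality))
    for (c1, m1), (c2, m2) in zip(pts, pts[1:]):
        if m1 <= 0.5 <= m2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (0.5 - m1) * (x2 - x1) / (m2 - m1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the tested range")

# Hypothetical dose-response data: mortality fractions at four doses.
est = lc50([1, 10, 100, 1000], [0.0, 0.2, 0.8, 1.0])
```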
LC50;HPLC;Sensitivity test;Determination of concentration;Green solvents;Organization for Economic Cooperation and Development (OECD)
WoS
null
With the increasing availability of high-resolution images, videos, and 3D models, the demand for scalable large data processing techniques increases. We introduce a method of sparse dictionary learning for edit propagation of large input data. Previous approaches for edit propagation typically employ a global optimization over the whole set of pixels (or vertices), incurring prohibitively high memory and time consumption for large input data. Rather than propagating an edit pixel by pixel, we follow the principle of sparse representation to obtain a representative and compact dictionary and perform edit propagation on the dictionary instead. The sparse dictionary provides an intrinsic basis for the input data, and the coding coefficients capture the linear relationship between all pixels and the dictionary atoms. The learned dictionary is then optimized by a novel scheme, which maximizes the Kullback-Leibler divergence between each atom pair to remove redundant atoms. To enable local edit propagation for images or videos with similar appearance, a dictionary learning strategy is proposed that considers a range constraint to better account for the global distribution of pixels in their feature space. We show several applications of the sparsity-based edit propagation, including video recoloring, theme editing, and seamless cloning, operating on both color and texture features. Our approach can also be applied to computer graphics tasks, such as 3D surface deformation. We demonstrate that with an atom-to-pixel ratio on the order of 0.01%, signifying a substantial reduction in memory consumption, our method still maintains a high degree of visual fidelity.
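The core saving — editing a handful of dictionary atoms instead of every pixel — can be sketched as follows, assuming a simple linear blend of per-atom edits weighted by coding coefficients (an illustration of the principle, not the paper's exact coding scheme):

```python
def propagate(codes, atom_edits):
    """Propagate per-atom edits back to pixels.

    codes[i] is the sparse coding of pixel i over the dictionary: a list
    of (atom_index, coefficient) pairs.  Instead of editing every pixel,
    only the few dictionary atoms are edited; each pixel's edit is the
    coefficient-weighted blend of its atoms' edits.
    """
    pixel_edits = []
    for code in codes:
        total_w = sum(w for _, w in code)
        e = sum(w * atom_edits[a] for a, w in code) / total_w
        pixel_edits.append(e)
    return pixel_edits

# Two atoms: atom 0 is recolored by +1.0, atom 1 is left untouched.
atom_edits = [1.0, 0.0]
codes = [[(0, 1.0)], [(0, 0.5), (1, 0.5)], [(1, 1.0)]]
edits = propagate(codes, atom_edits)
```

Since the number of atoms is a tiny fraction of the number of pixels, the expensive optimization runs only over the dictionary.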
Edit propagation;dictionary learning;video recoloring;image cloning;surface deformation
WoS
null
With the increasing capabilities of computer modeling and simulation technology, the analysis of options for maximizing gains in building energy efficiency can be carried out more efficiently and cost-effectively. Conducting building performance simulations allows for the analysis of the environmental impacts of buildings and the economic viability of green building techniques. Commercial buildings consume nearly one-fifth of all the energy used in the United States, costing more than $200 billion each year. The building envelope plays a key role in determining how much energy is required for the operation of a building. Individual thermal and solar properties of glazing and shading systems only provide information based on static evaluations, so it is very important to assess the efficiency of these systems as a whole assembly under site-specific conditions. This paper presents a case study that used computer simulation tools to evaluate the environmental and economic impacts of different types of glazing-sunshade systems on the overall performance of an office building in Florida. The case study results show how early-stage building performance studies using computer simulation tools help practitioners achieve the goals of reduced energy consumption and increased indoor comfort in an economical manner. (C) 2016 American Society of Civil Engineers.
Simulation;Energy consumption;Indoor comfort;Cost;Commercial buildings
WoS
null
With the increasing complexity of both data structures and computer architectures, the performance of applications needs fine tuning in order to achieve the expected execution time. Performance tuning is traditionally based on the analysis of performance data. The analysis results may not be accurate, depending on the quality of the data and the analysis approaches applied. Therefore, application developers may ask: can we trust the analysis results? This paper introduces our research work on performance optimization of the memory system, with a focus on the cache locality of shared memory and the memory locality of distributed shared memory. The quality of the data analysis is guaranteed by using real performance data acquired at runtime while the application is running, together with well-established data analysis algorithms from the fields of bioinformatics and data mining. We verified the quality of the proposed approaches by optimizing a set of benchmark applications. The experimental results show a significant performance gain.
Code optimization;data analysis;data locality;distributed shared memory;performance tuning
WoS
null
With the increasing concern about the long-term effects of concussive and sub-concussive head accelerations in sport, this research applies two technologies initially developed for team-based sports to snowsports in order to understand the characteristics of snowsport head accelerations. Results indicate that pediatric snowsports participants regularly achieved speeds over 23 km/h; that snowsport head accelerations are rare and, when they do occur, are generally of low magnitude; and that those most at risk were male snowboarders. (C) 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of the School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University
Concussion;head injury;snowsport;snowboarding;skiing
WoS
null
With the increasing concern about the serious global energy crisis and the high energy consumption of high content solid waste (HCSW) treatment, the microbial fuel cell (MFC) has been recognized as a promising resource utilization approach for HCSW stabilization with simultaneous electrical energy recovery. In contrast to conventional HCSW stabilization processes, the MFC has unique advantages such as direct bio-energy conversion in a single step and mild reaction conditions (viz., ambient temperature, normal pressure, and neutral pH). This review introduces the important aspects of electricity generation from HCSW and its stabilization in MFCs, focusing on: (1) MFCs with different fundamentals and configurations designed and constructed to produce electricity from HCSW; (2) the performance of waste degradation and electricity generation; (3) the prospects and deficiencies of MFCs with HCSW as substrates. To date, the major drawback of MFCs fueled by HCSW is their lower power output compared with those using simple substrates. HCSW hydrolysis and decomposition would be a major means of improving the performance of MFCs, and parameter optimization is needed to advance the progress of MFCs with HCSW as fuel. (C) Higher Education Press and Springer-Verlag Berlin Heidelberg 2017
Microbial fuel cell;High content solid wastes;Substrate;Bioremediation;Biosensor
WoS
null
With the increasing demand for ultra-low-latency services in 5G cellular networks, fog and edge computing, which migrate computation from the cloud to the edge of the network, are among the promising solutions. Rather than relying on the distant cloud or additional servers, we propose the Fog Radio Access Network (F-RAN), which leverages the existing infrastructure of the radio access network, such as small cells and macro base stations, to pursue ultra-low latency through the joint computing of multiple F-RAN nodes and near-range communications at the edge. We first formulate an optimization problem to tackle the tradeoff between communication and computing resources in the time domain within a distributed computing scenario, and then propose a cooperative task computing algorithm that simultaneously decides how many F-RAN nodes should be selected, with proper communication resource allocation and computing task assignment. The numerical results show that ultra-low-latency services can be achieved by F-RAN via cooperative task computing.
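The communication/computing tradeoff can be illustrated with a toy latency model in which adding cooperating nodes divides the computing time but adds a fixed per-node communication overhead; the cost terms and constants are assumptions for illustration, not the paper's formulation:

```python
def total_latency(n, task_cycles, cpu_rate, per_node_comm):
    """Toy model: the task is split evenly across n cooperating nodes,
    while each extra node adds a fixed communication overhead."""
    return task_cycles / (n * cpu_rate) + n * per_node_comm

def best_node_count(max_nodes, task_cycles, cpu_rate, per_node_comm):
    """Pick the node count minimizing total latency over 1..max_nodes."""
    return min(range(1, max_nodes + 1),
               key=lambda n: total_latency(n, task_cycles, cpu_rate,
                                           per_node_comm))

n_opt = best_node_count(10, task_cycles=25.0, cpu_rate=1.0,
                        per_node_comm=1.0)
```

Too few nodes leave computing as the bottleneck; too many drown the gain in coordination overhead, which is exactly the tradeoff the proposed algorithm balances.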
Edge computing;fog computing;fog-radio access networks;ultra-low latency;5G cellular networks
WoS
null
With the increasing demand for Machine-to-Machine (M2M) communications and Internet of Things (IoT) services, it is necessary to develop a new network architecture and protocols to support cost-effective, distributed computing systems. Generally, M2M and IoT applications serve a large number of intelligent devices, such as sensors and actuators, which are distributed over large geographical areas. To deploy M2M communication and IoT sensor nodes in a cost-effective manner over a large geographical area, it is necessary to develop a new network architecture that is both cost effective and energy efficient. This paper presents an IEEE 802.11 and IEEE 802.15.4 standards-based heterogeneous network architecture to support M2M communication services over a wide geographical area. For the proposed heterogeneous network, we developed a new cooperative Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) medium access control (MAC) protocol to transmit packets using a shared channel in the 2.4 GHz ISM band. One of the key problems of the IEEE 802.11/802.15.4 heterogeneous network in a dense networking environment is the coexistence problem, in which the two protocols interfere with each other, causing performance degradation. This paper introduces a cooperative MAC protocol that utilizes a new signaling technique known as the Blank Burst (BB) to avoid the coexistence problem. The proposed MAC protocol improves the network QoS of M2M area networks. The developed network architecture offers significant energy efficiency, operational expenditure (OPEX) and capital expenditure (CAPEX) advantages over 3G/4G cellular standards-based wide area networks.
heterogeneous network;IEEE 802.11;IEEE 802.15.4;6LoWPAN;M2M communication;low power network
WoS
null
With the increasing deployment of network systems, network attacks are increasing in both intensity and complexity. Alongside these attacks, many network intrusion detection techniques have been proposed, broadly classified as signature-based, classification-based, or anomaly-based. A deployable network intrusion detection system (NIDS) should be capable of detecting known and unknown attacks in near real time with a very low false positive rate. Supervised approaches to intrusion detection provide good detection accuracy for known attacks, but they cannot detect unknown attacks. Some existing NIDS emphasize unknown attack detection by using unsupervised anomaly detection techniques, but these cannot classify network data as accurately as supervised approaches, and they often neglect other important issues such as real-time detection or minimization of false alarms. To overcome these problems, many hybrid NIDS have been proposed in recent years, aimed at detecting both known and unknown attacks with high detection accuracy. In this literature review of hybrid network intrusion detection systems, we discuss a few of the notable hybrid NIDS proposed in recent years and provide a comparative study of them.
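A hybrid NIDS of the kind surveyed here typically chains a high-precision signature stage with an anomaly stage for unknown attacks; a minimal sketch of that pipeline follows (the signatures, threshold and field names are hypothetical, not drawn from any reviewed system):

```python
KNOWN_SIGNATURES = {"sql_injection_payload", "port_scan_pattern"}

def signature_stage(event):
    """Stage 1: high-precision match against known attack signatures."""
    return event["payload"] in KNOWN_SIGNATURES

def anomaly_stage(event, mean_rate, threshold=3.0):
    """Stage 2: flag traffic whose rate deviates strongly from the
    learned norm, catching attacks with no known signature."""
    return event["rate"] > threshold * mean_rate

def hybrid_detect(event, mean_rate):
    """Run the precise stage first; fall back to the anomaly stage."""
    if signature_stage(event):
        return "known attack"
    if anomaly_stage(event, mean_rate):
        return "possible unknown attack"
    return "benign"

verdict = hybrid_detect({"payload": "normal", "rate": 50.0},
                        mean_rate=10.0)
```

Ordering the stages this way keeps the false-alarm rate low for known attacks while retaining some coverage of novel ones.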
Intrusion detection system;NIDS;Network security
WoS
null
With the increasing emergence of ambient intelligence, sensors and wireless network technologies, robotic assistance has become a very active area of research in autonomous intelligent systems. Robotic systems would be integrated into the environment as physical autonomous entities. These entities will be able to interact independently with the ambient environment and provide services such as assistance to people in homes, offices, buildings and public spaces. Furthermore, robots as cognitive entities will be able to coordinate their activities with other physical or logical entities, to move, to sense and explore the surrounding environment, and to decide and act to meet the situations they may encounter. These cognitive operations will be part of a smart network which can provide, individually or collectively, new features and various support services anywhere and anytime. The aim of this research work is to build a multimodal fusion engine using the Semantic Web. This multimodal system will be applied to a wheelchair with a manipulator arm to help people with disabilities interact with their main tool of movement and their environment. This work focuses on building a multimodal interaction fusion engine to better understand multimodal inputs using the concept of ontology. (C) 2015 The Authors. Published by Elsevier B.V.
Assistance robot;multimodal systems;ontology;SWRL rules;fusion engine
WoS
null
With the increasing incidence and prevalence of antibiotic contamination in animal-derived food and of drug resistance around the world, early and specific detection of antibiotic residues has garnered significant attention. Herein, for the first time, we devise a facile, label-free and portable aptasensor for the quantitative determination of kanamycin (KANA) by electrochemical impedance spectroscopy (EIS), based on the assembly of in-vitro-selected single-stranded DNA (ssDNA) anti-KANA-aptamer-functionalized screen-printed carbon electrodes (SPCEs) used as the transducer. Target detection is based on specific recognition by the KANA aptamer covalently immobilized on the SPCE surface. The surface morphology and electrochemical properties of the aptasensor were characterized using Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), atomic force microscopy (AFM), cyclic voltammetry (CV) and EIS. Under optimized experimental conditions, the devised aptasensor exhibited a dynamic range of 1.2-600 ng mL(-1), with linearity from 1.2 to 75 ng mL(-1) and a limit of detection (LOD) of 0.11 ng mL(-1) (S/N = 3) for KANA. The developed aptasensor is endowed with good selectivity and specificity for KANA without interference from competitive analogues (streptomycin and gentamicin). For practical application, the aptasensor performance was verified in spiked milk samples, and an acceptable recovery of 96.88-100.5% (%RSD = 4.56, n = 3) was obtained. (C) 2017 Elsevier B.V. All rights reserved.
Kanamycin;Impedance;Aptasensor;Screen printed carbon electrodes;Milk;Disposable
WoS
null
With the increasing installed capacity of wind power and the interdependencies among multiple energy sectors, the optimal operation of integrated energy systems (IES) with combined cooling, heating and power (CCHP) is becoming more important. This paper proposes an optimal dispatch strategy for an IES with CCHP and wind power. The natural gas system is modeled, and its security constraints are integrated into the optimal dispatch model. The gas shift factor (GSF(gas)) matrix for the natural gas system is derived to quantify the impact of gas supply and load at each node on the gas flow through the pipelines, so that the pipeline flow equation is linearized. The objective of the optimization model is to minimize the total operation cost of the IES. The model is then transformed into a mixed integer linear programming (MILP) formulation to improve computational efficiency. Numerical case studies demonstrate that the proposed model yields a lower operation cost while facilitating wind power integration. (C) 2016 Elsevier Ltd. All rights reserved.
IES with CCHP;Wind power integration;Mixed integer linear programming;Natural gas system;Optimal dispatch strategy
WoS
null
With the increasing penetration of renewable energy sources and the inclusion of additional features, electrical networks have become more active. Every passing decade has seen new features of the electrical network, and with the emerging Smart Grid, distribution systems become active as well. Grid-tied inverters are a vital component of active distribution systems, acting as interfaces for distributed generators. This paper presents Model Predictive Control (MPC) of a multifunctional inverter used for power quality compensation of non-linear local loads and grid integration of solar photovoltaic generators. The discrete nature of the switches in the power converter is included directly in the MPC optimisation problem. The reference current for the converter control is generated using instantaneous power theory. The control is simulated in MATLAB/SIMULINK.
Distributed Generation;Renewable Energy;Power Quality & Harmonics;Model Predictive Control
WoS
null
With the increasing prevalence of electronic readers (e-readers) for vocational and professional uses, it is important to discover whether there are visual consequences of using these products. There are no studies in the literature quantifying the incidence or severity of eyestrain, nor identifying clinical characteristics that may predispose to these symptoms with e-reader use. The primary objective of this pilot study was to assess the degree of eyestrain associated with e-reader use compared to traditional paper format. The secondary outcomes were to assess the rate of eyestrain associated with e-reader use and to identify any clinical characteristics that may be associated with the development of eyestrain. Forty-four students were randomly assigned to study (e-reader iPad) and control (print) groups. Participant posture, luminosity of the room, and reading distance from the reading device were measured during a 1-h session for both groups. At the end of the session, questionnaires were administered to determine symptoms. Significantly higher rates of eyestrain (p = 0.008) and irritation (p = 0.011) were found in the iPad study group compared to the print control group. The study group was also 4.9 times more likely to report severe eyestrain (95% CI [1.4, 16.9]). No clinical characteristics predisposing to eyestrain could be identified. These findings suggest that reading on e-readers may induce increased levels of irritation and eyestrain. Predisposing factors, etiology, and potential remedial interventions remain to be determined.
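The reported effect size ("4.9 times more likely", 95% CI [1.4, 16.9]) is an odds ratio with a Wald-style confidence interval; with hypothetical 2x2 counts (made up for illustration, not the study's data), it is computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
       a = exposed cases,    b = exposed non-cases,
       c = unexposed cases,  d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: severe eyestrain vs. not, e-reader vs. print.
or_, lo, hi = odds_ratio_ci(10, 12, 3, 19)
```

A CI whose lower bound stays above 1 (as in the study's [1.4, 16.9]) is what makes the association statistically significant.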
Eyestrain;Asthenopia;Computer vision syndrome;Ocular complaints;Electronic readers and tablets
WoS
null
With the increasing proportion of natural gas in power generation, the natural gas network and the electricity network are closely coupled, so planning either system in isolation, ignoring this interdependence, increases the total cost of the combined system. A multi-objective optimization model for combined gas and electricity network planning is therefore presented in this work. Specifically, the objectives of the proposed model are to minimize both the investment cost and the production cost of the combined system while respecting the N-1 network security criterion. Moreover, the stochastic nature of wind power generation is addressed in the proposed model. Consequently, it leads to a mixed integer non-linear, multi-objective, stochastic programming problem. To solve this complex model, the Elitist Non-dominated Sorting Genetic Algorithm II (NSGA-II) is employed to capture the optimal Pareto front, wherein the Primal-Dual Interior-Point (PDIP) method combined with the point estimate method is adopted to evaluate the objective functions. In addition, decision makers can use a fuzzy decision-making approach based on their preferences to select the final optimal solution from the optimal Pareto front. The effectiveness of the proposed model and method is validated on a modified IEEE 24-bus electricity network integrated with a 15-node natural gas system, as well as on a real-world system in Hainan province. (C) 2015 The Authors. Published by Elsevier Ltd.
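The optimal Pareto front that NSGA-II approximates is the set of non-dominated solutions; a minimal dominance check over hypothetical (investment cost, production cost) pairs illustrates the idea:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Extract the non-dominated set, i.e. NSGA-II's first front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (investment cost, production cost) pairs for candidate plans.
plans = [(3, 9), (4, 6), (6, 5), (5, 8), (7, 7)]
front = pareto_front(plans)
```

A decision maker then picks one point from this front, e.g. via the fuzzy preference approach mentioned above.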
Natural gas network expansion planning;Transmission expansion planning;Multi-objective;Primal-Dual Interior-Point method;Point-estimate method
WoS
null
With the increasing scale of construction projects, coupled with their complexity, information management in the construction process becomes more complex and tedious. Moreover, communication among project participants is hindered by the large amount and scattered structure of the information involved in the construction process. Most present studies on construction information management focus on the special needs of particular construction sectors rather than total information integration across general aspects. This paper presents a multidimensional information model that combines Workflow, Work Breakdown Structures (WBS) and a Time factor to manage information based on process control. Unlike previous research, the present work puts emphasis on process control to enhance the communication process. The proposed information model uses the WBS and the Time factor to manage information in the spatial and time dimensions, and controls the process information of each work package during project execution through workflow technology. In order to test the proposed information model, a web-based project management system (WPMS) was developed using a three-layer architecture. The system structure realizes the underlying logic of the information model through the business logic layer and the data access layer. The system has been applied to a construction project of an underground cavern group in a hydropower project in the southwest of China. The application has shown that the system achieves information management based on process control and provides structured information.
Multidimensional information model;information management;construction management;Work Breakdown Structure;Workflow;web-based project management system
WoS
null
With the increasing sizes of digital elevation models (DEMs), there is a growing need to design parallel schemes for existing sequential algorithms that identify and fill depressions in raster DEMs. The Priority-Flood algorithm is the fastest sequential algorithm in the literature for depression identification and filling of raster DEMs, but it has had no parallel implementation since it was proposed approximately a decade ago. A parallel Priority-Flood algorithm based on the fastest sequential variant is proposed in this study. The algorithm partitions a DEM into stripes, processes each stripe using the sequential variant in many rounds, and progressively identifies more slope cells that are misidentified as depression cells in previous rounds. Both Open Multi-Processing (OpenMP)- and Message Passing Interface (MPI)-based implementations are presented. The speed-up ratios of the OpenMP-based implementation over the sequential algorithm are greater than four for all tested DEMs with eight computing threads. The mean speed-up ratio of our MPI-based implementation is greater than eight over TauDEM, which is a widely used MPI-based library for hydrologic information extraction. The speed-up ratios of our MPI-based implementation generally become larger with more computing nodes. This study shows that the Priority-Flood algorithm can be implemented in parallel, which makes it an ideal algorithm for depression identification and filling on both single computers and computer clusters.
Parallel computing;depression identification;depression filling;DEM
WoS
null
With the increasing use of aluminum shape castings in structural applications in the automotive and aerospace industries, assurance of cast product integrity and performance has become critical in both design and manufacturing. In this paper, the latest understanding of the relationship between casting quality and mechanical properties of aluminum castings is summarized. Newly developed technologies for alloy design, melting and melt treatment, casting and heat treatment processes in aluminum casting are reviewed. The mechanical properties of aluminum castings strongly depend upon their microstructure constituents and particularly cast defect population and distribution. To produce quality castings with quantifiable properties, the key is to control multi-scale defects and microstructure in the casting. Robust design and development of high integrity aluminum castings through Integrated Computational Materials Engineering (ICME) approach is also discussed. The most effective way to optimize the processes and achieve the desirable mechanical properties is through the development and exploitation of robust and accurate multi-scale computational models.
aluminum shape castings;casting quality;new technologies;robust design;virtual casting
WoS
null
With the increasing use of databases, there is an abundant opportunity to investigate new watermarking techniques that cater to the requirements of emerging applications. A major challenge that needs to be tackled is to recover crucial information that may be lost accidentally or due to malicious attacks on a database that represents an asset and needs protection. In this paper, we elucidate a scheme for robust watermarking with multiple watermarks that resolves the twin issues of ownership and recovery of information in case of data loss. To resolve ownership conflicts, a watermark is prepared securely and then embedded into secretly selected positions of the database. The other watermark encapsulates granular information on user-specified crucial attributes in a manner such that perturbed or lost data can be conveniently regenerated later. Theoretical analysis proves that the probability of identifying target locations, the false hit rate and the false miss rate are negligible. We have experimentally verified that the proposed technique is robust enough to extract the watermark accurately even after 100% tuple addition or alteration and after 98% tuple deletion. Experiments on information recovery reveal that successful regeneration of tampered/lost data improves with an increase in the number of candidate attributes for embedding the watermark. (C) 2016 Elsevier Ltd. All rights reserved.
Information recovery;Database watermarking;Right protection;Robustness;Tamper detection
WoS
null
With the increasing use of real-time PCR techniques, Leptospira isolation has mostly been abandoned for the diagnosis of human leptospirosis. However, there is great value in collecting Leptospira isolates to better understand the epidemiology of this complex zoonosis and to provide researchers with different isolates. In this study, we have successfully isolated different Leptospira strains from BacT/Alert aerobic blood culture bottles and suggest that this privileged biological material offers an opportunity to isolate leptospires. (C) 2017 Elsevier Inc. All rights reserved.
Leptospirosis;Diagnosis;Blood bottles;Culture
WoS
null
With increasingly frequent intercultural communication, the influence of intercultural pragmatic failures is revealed more and more clearly. Currently, many researchers and language teachers at home and abroad have conducted research on pragmatic failure in verbal conduct, but they often overlook pragmatic failure in nonverbal communication. This paper focuses on the types and causes of nonverbal pragmatic failure in intercultural communication, and then puts forward strategies to avoid or reduce such failures, in order to provide a useful reference for people engaged in intercultural communication.
nonverbal pragmatic failure;intercultural communication;causes
WoS
null
With the continuously growing volume of service requests from customers worldwide, cloud systems are expected to provide services while meeting customer satisfaction. Recently, to achieve better reliability and performance, cloud systems have come to depend largely on geographically distributed data centers. Nevertheless, the dollar cost of service placement incurred by service providers (SPs) differs across regions. Accordingly, it is crucial to design a request dispatching and resource allocation algorithm to maximize net profit. The existing algorithms are either built upon energy-efficient schemes alone, or are oblivious to multi-type requests and customer satisfaction. They cannot be applied to the design of a multi-type-request, customer-satisfaction-aware algorithm with the objective of maximizing net profit. This paper proposes an ant-colony optimization-based algorithm for maximizing the SP's net profit (AMP) on geographically distributed data centers with consideration of customer satisfaction. First, using a model of customer satisfaction, we formulate the utility (or net profit) maximization issue as an optimization problem under the constraints of customer satisfaction and data centers. Second, we analyze the complexity of the optimal request dispatchment problem and rigorously prove that it is NP-complete. Third, to evaluate the proposed algorithm, we have conducted comprehensive simulations and compared it with other state-of-the-art algorithms. We also extend our work to consider the data centers' power usage effectiveness. It has been shown that AMP maximizes SP net profit by dispatching service requests to the proper data centers and generating the appropriate number of virtual machines to meet customer satisfaction.
Moreover, we also demonstrate the effectiveness of our approach when it accommodates the impacts of dynamically arriving heavy workloads, various evaporation rates and consideration of power usage effectiveness. Copyright (C) 2014 John Wiley & Sons, Ltd.
geo-distributed data centers;utility maximization;customer satisfaction model
WoS
null
With the increasingly serious energy crisis and environmental pollution, the short-term economic environmental hydrothermal scheduling (SEEHTS) problem is becoming more and more important in modern electrical power systems. In order to handle the SEEHTS problem efficiently, a parallel multi-objective genetic algorithm (PMOGA) is proposed in this paper. Based on the Fork/Join parallel framework, PMOGA divides the whole population of individuals into several subpopulations which evolve in different cores simultaneously. In this way, PMOGA can avoid the wastage of computational resources and increase population diversity. Moreover, a constraint handling technique is used to handle the complex constraints in SEEHTS, and a selection strategy based on constraint violation is also employed to ensure convergence speed and solution feasibility. The results from a hydrothermal system in different cases indicate that PMOGA can make the most of system resources to significantly improve computing efficiency and solution quality. Moreover, PMOGA has competitive performance in SEEHTS when compared with several other methods reported in the previous literature, providing a new approach for the operation of hydrothermal systems.
parallel computing;economic environmental hydrothermal scheduling;multi-objective optimization;multi-objective genetic algorithm;constraint handling method
WoS
null
With increasingly serious environmental problems, green buildings that save energy and protect the environment have begun to attract people's attention and have gradually become the development trend of future building, giving rise to integrated green building design. How to apply the integrated design theory of green building to concrete architectural design practice is a question that designers must seriously consider. This paper expounds the integrated design concept of green building and its integration methods, starting from the connotation of green building design, and puts forward integrated design strategies for the green building design concept in both theory and practical application, in order to provide a reference for designers and managers.
Green building;Integration;Design strategy
WoS
null
With the installation of synchrophasors widely across the power grid, measurement-based oscillation monitoring algorithms are becoming increasingly useful in identifying the real-time oscillatory modal properties in power systems. When the number of phasor measurement unit (PMU) channels grows, the computational time of many PMU data based algorithms is dominated by the computational burden in processing large-scale dense matrices. In order to overcome this limitation, this paper presents new formulations and computational strategies for speeding up an ambient oscillation monitoring algorithm, namely, stochastic subspace identification (SSI). Based on previous work, two fast singular value decomposition (SVD) approaches are first applied to the SVD evaluation within the SSI algorithm. Next, block structures are exploited so that the large-scale dense matrix computations can be processed in parallel. This helps in memory savings as well as in over-all computational time. Experimental results from three sets of archived data of the western interconnection demonstrate that the new approaches can provide significant speedups while retaining modal estimation accuracy. With proposed fast parallel algorithms, the real-time oscillation monitoring of the large-scale system using hundreds of PMU measurements becomes feasible.
Power system oscillations;stochastic subspace identification;large-scale dense matrix computations;parallel computing;synchrophasors
WoS
null
With the integrated development of construction projects, the complexity of construction management is increasing. Because numerous participants and resources are involved, it is necessary to evaluate organizational performance for construction projects. Considering the objectives and process of construction and the benefits to participants, an index system is established in this paper. Based on the index system, the entropy method is introduced to calculate the weights. Then the analytic hierarchy process (AHP) and grey theory are selected as the methods for evaluating the organization's performance. The research provides a theoretical basis and practical guidance for improving the management level of construction projects so as to achieve higher organizational efficiency.
Organizational performance;Index system;Multilevel grey theory
WoS
null
As the integration density of integrated circuits continues to rise, the feature size of integrated devices has entered the nanometer scale. The single-electron transistor (SET) is well suited as a nanoelectronic device, and hybrid devices combining the SET with nano-MOS devices (SETMOS) are one of the hot current research topics. As a new hybrid device, SETMOS combines the advantages of both: it exhibits the same Coulomb oscillation characteristics as the SET together with the high gain of MOS. As the basic unit circuit of integrated analog signal processing, the filter must keep pace with these developments. Based on the I-V characteristics of a SETMOS hybrid device model, a SETMOS integrator is designed, and its operating conditions, structure, performance, parameters and characteristics are expounded. The transmission performance of the designed integrator is simulated with SPICE, and the conclusions are confirmed by the simulation results.
Single Electron Transistor (SET);SETMOS;Integrator;Transmission Performance;SPICE
WoS
null
With the integration of Spain into the European Higher Education Space (EHES) and the implantation of the new university degrees, different methodological strategies to improve the performance of the teaching-learning process have been adopted. Terms such as competence framework or assessment of students' personal effort appear on the educational scene with the new curricula and degrees. One of the most debated items among college teachers is students' attendance at classes. Must it be compulsory? What are the most appropriate control mechanisms for it? Must teachers include attendance in the assessment of students' personal effort? These are questions that should be debated and discussed. We have analyzed the behavior of one group of students, all studying the "Electronic and Automatic Control Engineering Degree" at the Universitat Politecnica de Valencia (UPV). In their third academic year, students must take the subject "Digital Electronics", on which this work was performed. In the paper, a statistical study relating the scores obtained by students to their attendance at lectures is presented. Likewise, proposals about "school attendance" as a first step towards achieving the competency objectives of the course are made.
Attendance;lectures;students;competences;Higher Education
WoS
null
With the intensification of human activities, fresh water resources are increasingly being exposed to contamination from effluent disposal to land. Thus, there is a greater need to identify the sources and pathways of water contamination to enable the development of better mitigation strategies. To track discharges of domestic effluent into soil and groundwater, 10 synthetic double-stranded DNA (dsDNA) tracers were developed in this study. A laboratory column experiment and field groundwater and soil lysimeter studies were carried out, spiking DNA into oxidation-pond domestic effluent. The selected DNA tracers were compared with a non-reactive bromide (Br) tracer with respect to their relative mass recoveries, speeds of travel and dispersions using the method of temporal moments. In intact stony soil and gravel aquifer media, the dsDNA tracers typically showed earlier breakthrough and less dispersion than the Br tracer, and underwent mass reduction. This suggests that the dsDNA tracers were predominantly transported through the network of larger pores or preferential flow paths. Effluent tracking experiments in soil and groundwater demonstrated that the dsDNA tracers were readily detectable in effluent-contaminated soil and groundwater using quantitative polymerase chain reaction. A DNA tracer spiked into the effluent in a quantity of 36 μg was detected in groundwater 37 m down-gradient at a concentration 3 orders of magnitude above the detection limit. It is anticipated that it could be detected at far greater distances. Our findings suggest that synthetic dsDNA tracers are promising for tracking effluent discharges in soils and groundwater, but further studies are needed to investigate DNA-effluent interaction and the impact of subsurface environmental conditions on DNA attenuation.
With further validation, synthetic dsDNA tracers, especially when multiple DNA tracers are used concurrently, can be an effective new tool to track effluent discharge in soils and groundwater, providing spatial estimation on the presence or absence of contamination sources and pathways. (C) 2017 Elsevier B.V. All rights reserved.
DNA tracer;Water contamination;Effluent discharge;Groundwater;Soil
WoS
null
With the introduction of BIM technology, the traditional construction project management model can be innovated to realize data sharing on a common platform in which many participating units take part, making construction project management more convenient and effective. Drawing on long experience in the research, development and application of BIM in construction management, this paper shares and discusses BIM technology in construction project management, covering construction schedule, cost, quality and safety control, BIM-based construction simulation, green construction BIM applications, and the centralized management of people, machines, materials and methods through BIM.
BIM;information model;modeling;virtual construction;simulation
WoS
null
With the introduction of electrified drive-trains and autonomous driving features, the requirements for electronic systems in vehicles have rapidly become more and more demanding. In order to face these complex requirement scenarios, a paradigm shift from closed-loop-controlled to truly self-deciding and self-learning automata is strongly needed. In this paper, a novel concept for drive-train platforms enabling self-learning capabilities based on sensor integration, micro and power electronics and secure cloud communication directly integrated into the electric motor will be introduced.
Electric vehicle;Electrical drive-train architecture;Smart systems;Power electronics;Self-learning
WoS
null
With the introduction of more and more random factors into power systems, probabilistic load flow (PLF) has become one of the most important tasks for power system planning and operation. Cumulants-based PLF is an effective algorithm to calculate PLF in an analytical way; however, the correlations among the nodal injections at the system level have rarely been studied. A novel parallel cumulants-based PLF method considering nodal correlations is proposed in this paper, which is able to deal with the correlations among all system nodes, and avoids the Jacobian matrix inversion of the traditional cumulants-based PLF as well. In addition, parallel computing is introduced to improve the efficiency of the numerical calculations. The accuracy of the proposed method is validated by numerical tests on the standard IEEE-14 system, comparing with the results from the Correlation Latin hypercube sampling Monte Carlo Simulation (CLMCS) method. The efficiency and parallel performance are proven by tests on the modified IEEE-300, C703 and N1047 systems with distributed generation (DG). Numerical simulations show that the proposed parallel cumulants-based PLF method considering nodal correlations is able to get more accurate results using less computational time and physical memory, and has higher efficiency and better parallel performance than the traditional one.
correlation matrix;Correlation Latin hypercube sampling Monte Carlo Simulation (CLMCS);cumulants;distributed generation (DG);parallel computing;probabilistic load flow (PLF)
WoS
null
With the latest developments in database technologies, it becomes easier to store the medical records of hospital patients from their first day of admission than was previously possible. In Intensive Care Units (ICU), modern medical information systems can record patient events in relational databases every second. Knowledge mining from these huge volumes of medical data is beneficial to both caregivers and patients. Given a set of electronic patient records, a system that effectively assigns the disease labels can facilitate medical database management and also benefit other researchers, e.g., pathologists. In this paper, we have proposed a framework to achieve that goal. Medical chart and note data of a patient are used to extract distinctive features. To encode patient features, we apply a Bag-of-Words encoding method for both chart and note data. We also propose a model that takes into account both global information and local correlations between diseases. Correlated diseases are characterized by a graph structure that is embedded in our sparsity-based framework. Our algorithm captures the disease relevance when labeling disease codes rather than making individual decision with respect to a specific disease. At the same time, the global optimal values are guaranteed by our proposed convex objective function. Extensive experiments have been conducted on a real-world large-scale ICU database. The evaluation results demonstrate that our method improves multi-label classification results by successfully incorporating disease correlations.
ICD code labeling;multi-label learning;sparsity-based regularization;disease correlation embedding
WoS
null
With the main focus on safety, design of structures for vibration serviceability is often overlooked or mismanaged, resulting in some high profile structures failing publicly to perform adequately under human dynamic loading due to walking, running or jumping. A standard tool to inform better design, prove fitness for purpose before entering service and design retrofits is modal testing, a procedure that typically involves acceleration measurements using an array of wired sensors and force generation using a mechanical shaker. A critical but often overlooked aspect is using input (force) to output (response) relationships to enable estimation of modal mass, which is a key parameter directly controlling vibration levels in service. This paper describes the use of wireless inertial measurement units (IMUs), designed for biomechanics motion capture applications, for the modal testing of a 109 m footbridge. IMUs were first used for an output-only vibration survey to identify mode frequencies, shapes and damping ratios, then for simultaneous measurement of body accelerations of a human subject jumping to excite specific vibrations modes and build up bridge deck accelerations at the jumping location. Using the mode shapes and the vertical acceleration data from a suitable body landmark scaled by body mass, thus providing jumping force data, it was possible to create frequency response functions and estimate modal masses. The modal mass estimates for this bridge were checked against estimates obtained using an instrumented hammer and known mass distributions, showing consistency among the experimental estimates. Finally, the method was used in an applied research application on a short span footbridge where the benefits of logistical and operational simplicity afforded by the highly portable and easy to use IMUs proved extremely useful for an efficient evaluation of vibration serviceability, including estimation of modal masses. (C) 2016 The Authors. 
Published by Elsevier Ltd.
Footbridge vibration;Human jumping;Modal mass identification;Wireless sensor
WoS
null
With the major constituent of cell membranes coated on their surface, phosphatidylcholine-coated magnetic nanoparticles (P-MNPs), which have been shown to have low toxicity and good dispersion, are widely used in biomedical applications, such as drug delivery, magnetic resonance imaging and hyperthermia. However, the effects of P-MNPs in vivo with magnetic field exposure have been poorly evaluated. The present study focused on the interference with lipid metabolism by P-MNPs under 0.5 T static magnetic fields (SMFs) in Caenorhabditis elegans (C. elegans). The P-MNPs accumulated in the intestine of nematodes, where fatty acids are accumulated. The content of lipofuscin was decreased in C. elegans exposed to either P-MNPs or SMF, while there was no further decrease in worms with P-MNPs + SMF exposure. However, in the presence of SMF, the lipid content was greatly decreased in C. elegans in the P-MNPs + SMF exposure groups. The constituents of long-chain fatty acids and polyunsaturated fatty acids analyzed by gas chromatography (GC) were significantly decreased in P-MNPs-treated worms under 0.5 T SMF, which was consistent with the mRNA expression of fatty acid metabolism genes including elo-2, let-767 and fat-5. Our results indicated that lipid metabolism was interfered with by P-MNPs via accelerated beta oxidation and changed gene expression in C. elegans under 0.5 T SMF.
Magnetic Nanoparticles;Magnetic Field;Lipid Metabolism;Caenorhabditis elegans
WoS
null
As networks scale rapidly and new network applications emerge frequently, bandwidth supply for today's Internet cannot keep up with the rapidly increasing requirements. Unfortunately, irrational use of network resources makes things worse. Current networks deploy single-next-hop optimized paths for data transmission, but such a "best effort" model leads to imbalanced use of network resources and often causes local congestion. Multi-path routing, on the other hand, can efficiently use the aggregate bandwidth of multiple paths and improve network robustness, security, load balancing and quality of service. As a result, multi-path routing has attracted much attention in the routing and switching research fields, and many important ideas and solutions have been proposed. This paper focuses on implementing parallel transmission of data over multiple next hops, balancing network traffic and reducing congestion. It aims at exploring the key technologies of multi-path communication networks, which can provide feasible academic support for subsequent applications of multi-path communication networking. A novel multi-path algorithm based on node potential in the network is proposed; the algorithm can make full use of network link resources and effectively balance network link resource utilization.
multi-path;loop-free routing;traffic engineering;resource allocation
WoS
null
With the number of satellite sensors and data centers increasing continuously, it is becoming a trend to manage and process massive remote sensing data from multiple distributed sources. However, the combination of multiple satellite data centers for collaborative processing of massive remote sensing (RS) data still faces many challenges. In order to reduce the huge amount of data migration and improve the efficiency of multi-datacenter collaborative processing, this paper presents the infrastructure and services for data management as well as workflow management for massive remote sensing data production. A dynamic data scheduling strategy was employed to reduce the duplication of data requests and data processing. By combining remote sensing spatial metadata repositories and the Gfarm grid file system, unified management of the raw data, intermediate products and final products was achieved in the co-processing. In addition, multi-level task order repositories and workflow templates were used to construct the production workflow automatically. With the help of specific heuristic scheduling rules, the production tasks were executed quickly. Ultimately, the Multi-datacenter Collaborative Process System (MDCPS) was implemented for large-scale remote sensing data production based on effective management of data and workflow. The performance of MDCPS in an experimental environment showed that these strategies can significantly enhance the efficiency of co-processing across multiple data centers.
Multi-datacenter infrastructure;Remote sensing data processing;Distributed computing;Big data computing;Data management;Workflow management
WoS
null
With the ongoing economic development, lifestyle changes and an aging population, diabetes mellitus has become one of the most prevalent chronic diseases in the world. Rhino-orbito-cerebral (ROC) mucormycosis is a rare, acute and angioinvasive fungal infection that can be fatal. Mucormycosis occurs almost exclusively in immunocompromised patients with diabetes mellitus and other types of immunodeficiency and has three subtypes: rhino-maxillary, rhino-orbital and ROC mucormycosis. The present study reports on a case of ROC mucormycosis in a patient with diabetic ketoacidosis. In the present case, the pathogen afflicted all of the above organs, including the left eye, nasal cavity, hard palate and cerebrum.
mucormycosis;type 2 diabetes;angioinvasive fungal infection
WoS
null
With the popular use of high-resolution satellite images, more and more research efforts have been focused on land-use scene classification. In scene classification, effective visual features can significantly boost the final performance. As a typical texture descriptor, the census transform histogram (CENTRIST) has emerged as a very powerful tool due to its effective representation ability. However, the most prominent limitation of CENTRIST is its small spatial support area, which may not necessarily be adept at capturing the key texture characteristics. We propose an extended CENTRIST (eCENTRIST), which is made up of three subschemes in a greater neighborhood scale. The proposed eCENTRIST not only inherits the advantages of CENTRIST but also encodes the more useful information of local structures. Meanwhile, multichannel eCENTRIST, which can capture the interactions from multichannel images, is developed to obtain higher categorization accuracy rates. Experimental results demonstrate that the proposed method can achieve competitive performance when compared to state-of-the-art methods. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
census transform histogram;land-use classification;multichannel descriptor;spectral regression kernel discriminant analysis
WoS
null
With the popularity of the Internet of Things, many resource-constrained devices equipped with sensors and actuators are pervasively deployed to compose smart environments, and Big Data become obtainable for a system to perform further analytics for human-centric purposes. One such human-centric system is a smart home, which analyzes Big Data to recognize contexts and their corresponding preferences for service configuration, thus providing context-aware services. However, since these Big Data are generated in real time and in huge amounts, analytics based on conventional supervised methods are not desirable due to the human effort required. In addition, there are usually multiple inhabitants with multiple combinations of contexts in a home environment, and it is difficult to fully collect all possible context combinations and their corresponding preferences in advance. Therefore, this paper proposes an unsupervised nonparametric analytics method, with a framework for human-centric smart homes, to automatically discover contexts and their corresponding service configurations; the models resulting from the proposed analytics can also be used to determine the preference for a context combination unseen before.
Recognition;Non-parametric Learning Model;Ambient Intelligence;Knowledge Acquisition;Machine Learning;Smart Environment
WoS
null
With the popularization and application of Internet technology, great opportunities and challenges have arisen, and various industries have changed significantly as they enter the Internet age. The computer network is a double-edged sword: while bringing convenience to people, it also carries security risks that seriously affect information security in the Internet age. Therefore, this article explores the main computer network security risks and the security management measures that can lower them, thus providing an important guarantee for computer network security.
Computer;Network Security;Risks;Management Measures
WoS
null
With the prevalence of information and communication technologies, Electronic Health Services (EHS) are commonly used by patients, doctors, and other healthcare professionals to decrease healthcare costs and provide efficient healthcare processes. However, using EHS increases the concerns regarding security, privacy, and integrity of healthcare data. Several solutions have been proposed to address these issues in EHS. In this survey, we categorize and evaluate state-of-the-art electronic health system research based on their architecture, as well as services including access control, emergency access, sharing, searching, and anonymity methods by considering their cryptographic approaches. Our survey differs from previous EHS related surveys in being method-based such that the proposed services are classified based on their methods and compared with other solutions. We provide performance comparisons and state commonly used methods for each category. We also identify relevant open problems and provide future research directions. (C) 2016 Elsevier B.V. All rights reserved.
Electronic health services;Privacy;Security;Cryptography;E-health
WoS
null
With the progress in the degree of integration of analog filters, the realization of filters whose characteristics can be modified by an external control is desired. This paper presents a method for systematically realizing an arbitrary grounded immittance, floating immittance, or voltage transfer function by means of operational transconductance amplifiers (OTAs), acting as variable voltage-controlled current sources (VCCS), and a grounded capacitor (GC). In a signal flow graph, the OTA-GC circuit can be represented only in terms of an integrator block unit or of a graph with mixed voltage and current. On the other hand, the Coates graph can represent the circuit configuration itself, and the circuit characteristics can easily be computed from the graph. By means of this graph, a condition for the graph to directly realize a higher-order function is presented, and a direct configuration method is derived which can realize an arbitrary function with the least number of nodes. When the floating immittance obtained by the direct configuration method is simply replaced with an LC circuit, the device sensitivity cannot be reduced. However, a low-sensitivity configuration becomes possible if the differential input of the OTA is used. Further, a transpose circuit is defined in which the input and the output of the OTA are interchanged. A configuration method for multi-output circuits by means of the transpose circuit is shown. Finally, the validity of the present configuration method is confirmed by experiment.
OTA;THE COATES GRAPH;FILTER THEORY;DEVICE SENSITIVITY
WoS
null
With the progressive role of computers and their users as actors in social networks, computations such as social network analysis (SNA) are gaining attention. This work proposes an approach based on SNA, not solely on social networks as in existing approaches, to estimate a tourist's satisfaction with individual Points of Interest (POIs) and accordingly recommend those POIs (or not) to that tourist or their tour planning system. Moreover, instead of a common unimodal network, a bimodal tourist-reviewer network is modeled, as suggested by the SNA literature, since tourists and POI reviewers act as two distinct classes of entities, with links between them representing their (dis)similarities. Both tourists and reviewers provide personal attributes (such as age), but reviewers additionally provide preferences for specific POIs, whereas tourists provide only preferences for certain types or categories of POIs (say, archeology). Further, an algorithm is developed for grouping into "islands" the reviewers most similar to a certain tourist, given the strength of the corresponding links in the bimodal network. Additionally, a ranking algorithm based on in-degree or authority centrality is adopted to identify the highest-ranked reviewers within the island and recommend their preferred POIs to a given tourist. If there is more than a single POI preferred per reviewer, and more POIs of the highly ranked reviewers remain than requested for recommendation, a similar centrality algorithm is applied over a reviewer-POI network whose links represent that a certain reviewer prefers a certain POI. The evaluation, initially with an exemplary real-life experiment and then extended to a massive online dataset from Foursquare, shows our approach to be feasible in estimating a tourist's satisfaction with individual POIs. Moreover, it is already promising, since incorporating location influence remains future work and might further improve its performance.
Social network analysis;Bimodal graphs;Satisfaction factor on POIs;Collaborative filtering;Recommender systems
WoS
null
With the proliferation of application-specific accelerators, the use of heterogeneous clusters is rapidly increasing. Consisting of processors with different architectures, a heterogeneous cluster aims at providing different performance and cost tradeoffs for different types of workloads. In order to achieve peak performance, software running on a heterogeneous cluster needs to be designed carefully to provide enough flexibility to exploit this variety. We propose a design methodology to modularize complex software applications with data dependencies. Software applications designed in this way have the flexibility to be reconfigured for different hardware platforms to facilitate resource management, and feature high scalability and parallelism. Using a neuromorphic application as a case study, we present the concept of modularization and discuss the management, scheduling and communication of the modules. We also present experimental results demonstrating the improvements and the effects of system scaling on throughput.
Distributed computing;structure based scheduling;heterogeneous computing;pipelining;latency hiding;modularization
WoS
null
With the proliferation of unstructured data, and the inability of relational databases to handle such data in an efficient manner, non-relational databases have gained importance over the last decade. This paper provides a brief description of non-relational database characteristics. It explains the important features of the MongoDB and Oracle NoSQL databases. It provides guidance to decision makers who want to choose between these two databases for their enterprise applications.
non-relational databases;MongoDB;sharding;Oracle NoSQL;unstructured data
WoS
null
With the rapid advancement of technology today, smartphones have become more and more powerful and attract a huge number of users with the new features provided by mobile operating systems such as Android. However, due to its security vulnerabilities, hackers and cybercriminals constantly attack Android mobile devices. Thus, research on effective and efficient mobile threat analysis has become an emerging and important topic in the cybersecurity research area, using various security analysis and evaluation strategies such as static analysis and dynamic analysis. In this paper, we propose a hybrid approach which combines static and dynamic analysis for detecting security threats and attacks in mobile apps. In our approach, we implement the unification of data states and software execution on the critical test path. Our approach has two phases. We first perform static analysis to identify possible attack-critical paths based on the Android API and existing attack patterns; next, we perform dynamic analysis, which follows the path to execute the program in a limited and focused scope and detects the attack possibility by checking the conformance of the detected path with the existing attack patterns. In the second phase of runtime dynamic analysis, dynamic inspection reports the type of attack scenario with respect to the type of confidential data leakage, such as web browser cookies, without accessing any real critical and protected data sources on the mobile device.
Android application analysis;static analysis;dynamic analysis;data path tracing;symbolic execution
WoS
null
With the rapid advancement of wireless technologies, distinct futuristic applications of Wireless Sensor Networks (WSNs) are evolving for both public and private domains. However, wireless sensor nodes face major challenges in terms of power supply, computing capability and bandwidth requirements during data communication. Apart from these, data confidentiality and integrity suffer due to various transmission errors caused by channel noise, channel bandwidth limitations, weak signals, limitations of transmitting and receiving devices, as well as different security attacks. Various studies have been conducted to resolve these issues in an integrated way. Nevertheless, the existing literature does not offer any effective solution. Hence, this paper proposes a unique Elliptic Curve Cryptography scheme along with the Diffie-Hellman key exchange technique to address these issues. The experimental results show that the proposed technique consumes low power and utilizes CPU time effectively by offering better throughput and low cyclomatic complexity. It enhances confidentiality by producing a higher avalanche effect and entropy value than the existing techniques. Its capacity to generate a higher Signal-to-Noise Ratio and a lower percentage of information loss justifies its efficiency in producing higher data integrity compared to the existing schemes in the literature.
Computational power;bandwidth;confidentiality;integrity;transmission errors;security attacks
WoS
null
The rapid advances in computer vision systems and the increasing flexibility of multimedia applications bring major challenges in terms of algorithm design and hardware. One widely used infotainment application in the multimedia portfolio is smart TV control. This paper presents a non-intrusive, stereo-vision-based virtual assistance system for smart TV control, targeting elderly citizens, which switches the TV ON/OFF and triggers volume INCR/DECR events without any hand-held remote control. The system is initialized by waving the hands in view of a stereo-vision camera mounted on the television (TV). Hand palm regions are detected and extracted using motion and a color-space model. In the next successive frames of a video, hand palms are tracked using a color-histogram-based tracker. A virtual 3D plane is then constructed by waving the hand palms in view of the stereo-vision camera and reconstructing 3D points using epipolar geometry and triangulation. KEY PRESS/RELEASE (TV ON/OFF) events are identified by measuring the distance between the projected 3D forefinger tip and the virtual 3D plane, and TV VOLUME INCR/DECR events are identified by waving the hands UP/DOWN in view of the stereo-vision camera.
Motion detection;Mean shift tracking;Stereo correspondence;Stereo vision;Stereo camera calibration;Stereo rectification;Epipolar geometry;Projection matrix;Triangulation;Singular value decomposition (SVD)
WoS
null
With the rapid development of broadband wireless access technology and mobile terminals, the mobile Internet has developed quickly in recent years. However, malicious applications have become one of the key factors threatening the development of the mobile Internet. In order to protect the vital interests of mobile terminal users, mobile malicious applications should be effectively prevented and controlled. This paper analyzes the limitations of current mobile application detection technology and uses symbolic execution on the basis of stream tracing. Malicious code writers usually hide the malicious code execution path and trigger malicious behaviors only in some special circumstances. We apply constraint solving to execution routes with sensitive calls, and ultimately resolve the specific backstage behaviors and trigger conditions. We conducted experiments to evaluate the performance of the proposed method, and the experimental results show that our method works well.
Malware detection;Data analysis optimization;Path optimization;Symbolic computation;Stream tracing
WoS
null
With the rapid development of computer technology, people pay more attention to the security of computer data, and the computer virus has become a chief threat to computer data security. Using an antivirus system that can identify randomly generated computer viruses on the basis of the basic characteristics of computer code, this paper investigates the heuristic scanning technique. This paper proposes a minimum distance classifier and detection model through the analysis of malicious code. This model can identify unknown feature codes of illegal programs and construct a healthy network environment by using a combination of modeling and experimental methods, intercepting the illegal virus program in the installation and operation stages. (C) 2016 Elsevier B.V. All rights reserved.
Security in digital systems;Software engineering;Performance evaluation
WoS
null
With the rapid development of the economy in our country, a large number of high-level specialized talents are needed. Since 2009, universities in our country have expanded their enrollment scope to recruit fresh undergraduates into full-time professional degree postgraduate study. In order to improve the training quality of full-time professional degree postgraduates in control engineering, a series of reforms have been carried out at Northeast Petroleum University; our experience and practice are presented.
Full-time professional degree;Postgraduate education;Professional practice
WoS
null
With the rapid development of the economy, the electrical load is increasing year by year, resulting in low-voltage phenomena on the user side at peak times and seriously affecting residents' lives. Based on an understanding of the causes of low voltage, this paper analyzes the impact on area voltage of regulation measures, and their combinations, comprising distribution transformer on-load voltage regulation, low-voltage reactive power compensation and 10 kV bus voltage regulation. It establishes a dynamic hybrid model characterizing the adjustment process of a multiply-correlated low-voltage distribution network area, and then derives the sensitivity function of the area voltage with respect to each regulation variable, in order to determine, for different low-voltage conditions, the corresponding combination of adjustment measures and their priorities. Taking the constraints on the distribution network area voltage level into account, a multi-objective function is ranked by priority; the objectives include voltage deviation, network loss, and minimum adjustment cost. The paper then integrates the various means of voltage regulation and designs a model-predictive area voltage control strategy based on the multi-objective constrained model, considering safe-operation constraints and target priorities.
Low Voltage;Control Measures;Dynamic Hybrid Model;Control Strategy
WoS
null
With the rapid development of electrical circuits, micro-electromechanical systems (MEMS) and network technology, wireless smart sensor networks (WSSNs) have shown significant potential for replacing existing wired SHM systems due to their cost effectiveness and versatility. A few structural systems have been monitored using WSSNs measuring acceleration, temperature, wind speed and humidity; however, a multi-scale sensing device with the capability to measure displacement had not yet been developed. In a previous paper, a new high-accuracy displacement sensing system was developed, combining a high-resolution analog displacement sensor and a MEMS-based wireless microprocessor platform. The wireless sensor was calibrated in the laboratory to obtain high-precision displacement data from the analog sensor, and its performance was validated by measuring the simulated thermal expansion of a laboratory bridge structure. This paper extends the validation of the developed system to full-scale experiments measuring both the static and dynamic displacement of expansion joints, temperature, and vibration of an in-service highway bridge. A brief visual investigation of the bridges and a comparison between theoretical and measured thermal expansion are also provided. The developed system showed the capability to measure displacement with an accuracy of 0.00027 in.
wireless smart sensor;displacement monitoring;wireless hybrid sensor (WHS) system;full-scale experiment;expansion joint;structural health monitoring
WoS
null
With the rapid development of electrical network construction, there is an urgent need to reasonably plan the AC and DC transmission voltage class sequence in the transmission network. Taking the applicable ranges of the AC and DC transmission modes and their voltage class sequences as the major object of study, this paper proceeds as follows. First, a fast method for establishing AC transmission schemes under a given transmission requirement is proposed, and the applicable ranges of EHV and UHV transmission are summarized after simulations under different transmission scenarios. On this basis, combining the economical transmission distances of the DC voltage class sequence, which have been researched before, suitable AC and DC transmission schemes can be selected for a given transmission scenario. Then, based on the AC and DC transmission characteristics, a comprehensive evaluation index system for AC and DC transmission schemes is established, which considers multiple factors including power transmission characteristics, reliability, economy and so on. Meanwhile, a grey comprehensive optimal selection method for AC and DC transmission schemes is put forward. Finally, for several groups of typical transmission scenarios, the most economical AC and DC transmission schemes are established, and the optimal one is selected comprehensively using the grey comprehensive optimal selection method and the evaluation index system. Through the simulations, the applicable range of the AC and DC transmission voltage class sequence is summarized.
AC and DC transmission;EHV and UHV transmission;applicable range;grey comprehensive evaluation;index system
WoS
null
With the rapid development of embedded systems and Internet of Things technology, embedded systems and the smart devices based on them are collecting huge amounts of data, and the corresponding data processing and application methods have changed greatly: in contrast to the traditional focus of big data and cloud computing, local processing and application have become an important trend. This paper takes the Cortex-A7, with its stronger capabilities, as the main system: a Hadoop distributed computing system is deployed on the embedded system so that it can meet future data processing demands while directly managing resource-constrained sensors, embedded systems and smart devices. With over 20 GB of data successfully deployed in the test, the system is verified to complete most of the functions of a data processing cluster, and it can also manage the connected sensors and embedded system terminals, showing good research and market value.
Embedded systems;Hadoop clusters;Distributed system;Parallel computing
WoS
null
With the rapid development of Internet technology, new methodologies and techniques based on data analysis are increasingly popular for decision-making in many aspects of society and the economy, including the insurance system. This research focuses on the current challenges faced by the senior insurance system of China, such as the acceleration of population aging, the rising prevalence of chronic diseases among the aging population, and the increasing burden on the senior insurance system. Based on the analysis of CLHLS data from Peking University and other data sources, this paper mainly investigates the relationship between social insurance and the health level of the senior population through a multiple linear regression model derived from a Grossman health function. The preliminary conclusions drawn from the study are as follows. Through a comparison of the regression coefficients, it is clear that the three kinds of basic medical care have different effects on the health level of the senior population. The explanatory power of medical insurance for the senior health level is lower than that of other social decisions and social support factors, while the influence of pension insurance is minor. Furthermore, the paper studies the coupling mechanism among pension, medical and nursing insurance, and puts forward the suggestion of merging pension, medical and nursing insurance into one single plan to improve the efficiency of the current social security service system.
Senior Social Insurance;Health Function;System Coupling
WoS
null
With the rapid development of medical imaging technology, computer graphics and visualization technologies, virtual endoscopy has emerged. It mainly includes 2D medical image segmentation, 3D image reconstruction, path planning and virtual roaming. However, path planning for virtual endoscopy has become one of the obstacles in this field due to the high irregularity of the nasopharyngeal anatomical structure. In this study, the nasopharynx, including the meatus nasi, pharyngeal canal, maxillary sinus, frontal sinus, sphenoid sinus and ethmoid sinus, is segmented and 3D-reconstructed using MR images. The key center-path planning algorithm for virtual endoscopy is implemented based on the distance transform. In addition, two improved center-path planning algorithms are proposed: a selection algorithm for branch paths, and an extraction algorithm for complex paths based on human-computer interaction. These two improved algorithms not only allow the traditional path planning algorithm to handle multiple branching structures but also allow the roaming path to start at any point. Our experimental results satisfied the needs of clinical practice.
Image segmentation;3D reconstruction;Path planning;Distance transformation;Virtual roaming
WoS
null
With the rapid development of micro-electromechanical devices, the demand for micro power generation systems has significantly increased in recent years. Traditional chemical batteries have energy densities much lower than hydrocarbon fuels, which makes the internal combustion engine an attractive technological alternative to batteries. The micro rotary internal combustion engine has drawn great attention due to its planar design, which is well suited to fabrication in MEMS. In this paper, a phenomenological model considering heat transfer and mass leakage has been developed to investigate the effects of engine speed, compression ratio, blow-by and heat transfer on the performance of a micro rotary engine, providing guidelines for the preliminary design of rotary engines. The lowest possible miniaturization limits of rotary combustion engines are proposed. (C) 2016 Elsevier Ltd. All rights reserved.
Rotary engine;Heat transfer;Leakage;Miniaturization limits
WoS
null
With the rapid development of mobile communications technologies, social apps (e.g., Line, WeChat) have emerged as important communication tools. Although social apps provide people with additional convenience, overuse of such applications may have negative life effects, such as technostress and distraction. Past research has indicated that personality attributes contribute to compulsive usage. This study explores the relationships between personality attributes and compulsive usage of social apps, and examines the impact of technostress on academic performance. A total of 136 valid questionnaires were collected from university students through an online survey. Fourteen proposed hypotheses were examined using SmartPLS software. The results indicate that extraversion, agreeableness, and neuroticism have significant effects on compulsive usage of mobile social applications. Compulsive usage had a significant positive impact on technostress but did not negatively affect academic self-perception and course grades. In addition, conscientiousness significantly influenced academic self-perception. Unexpectedly, gender and number of friends had little influence on technostress or compulsive usage. The implications of these findings are discussed and directions for future research are offered. (C) 2016 Elsevier Ltd. All rights reserved.
Compulsive usage;Social apps;Universal access and usability;Personality traits;Technostress
WoS
null
With the rapid development of mobile data acquisition technology, the volume of available spatial data is growing at an increasingly fast pace. The real-time processing of big spatial data has become a research frontier in the field of Geographic Information Systems (GIS). To cope with these highly dynamic data, we aim to reduce the time complexity of data updating by modifying the traditional spatial index. However, existing algorithms and data structures are based on single work nodes, which are incapable of handling the required high numbers and update rates of moving objects. In this paper, we present a distributed spatial index based on Apache Storm, an open-source distributed real-time computation system. Using this approach, we compare the range and K-nearest neighbor (KNN) query efficiency of four spatial indexes on a single dataset and introduce a method of performing spatial joins between two moving datasets. In particular, we build a secondary distributed index for spatial join queries based on the grid-partition index. Finally, a series of experiments are presented to explore the factors that affect the performance of the distributed index and to demonstrate the feasibility of the proposed distributed index based on Storm. As a real-world application, this approach has been integrated into an information system that provides real-time traffic decision support.
real time;spatial query;moving objects;Apache Storm
WoS
null
With the rapid development of the mobile Internet, people pay increasing attention to wireless network security. However, due to the specific characteristics of wireless networks, research on clustering wireless intrusion alerts for the mobile Internet is still rare. This paper proposes a Wireless Intrusion Alert Clustering Method (WIACM) based on information from the mobile terminal. The method includes alert formatting, alert reduction and alert classification. By introducing key information about the mobile terminal device, the method aggregates the original alerts into hyper-alerts. The experimental results show that WIACM is appropriate for real attack scenarios on the mobile Internet, reducing the number of alerts while improving the accuracy of alert analysis.
mobile Internet;wireless intrusion;alert clustering;network security
WoS
null
With the rapid development of our country's economy, the per capita standard of living has improved significantly, and the corresponding power consumption is also growing sharply, which severely tests the stable operation of the power grid. As the mainstream trend in current power grid construction, the intelligent substation brings together many advanced technologies, among which communication technology plays an increasingly significant role; ensuring communication network security has therefore become the primary task. This article analyzes communication security technology in the intelligent substation, objectively sets forth the structure of the intelligent substation in light of the actual situation, and puts forward a safer and more reliable technical solution according to the characteristics of IEDs, so as to provide a more solid guarantee for the intelligent substation, make data transmission more secure, and promote the healthy and sustainable development of the power utility.
Intelligent substation;communication security technology;exchange technology;address binding
WoS
null
With the rapid development of remote sensing technology, searching for similar images is a challenge for hyperspectral remote sensing image processing. Meanwhile, the dramatic growth in the amount of hyperspectral remote sensing data has stimulated considerable research on content-based image retrieval (CBIR) in the field of remote sensing. Although many CBIR systems have been developed, few studies have focused on hyperspectral remote sensing images. A CBIR system for hyperspectral remote sensing images using endmember extraction is proposed in this paper. The main contributions of our method are: (1) the endmembers, as spectral features, are extracted from the hyperspectral remote sensing image by an improved automatic pixel purity index (APPI) algorithm; (2) the mixed spectral information divergence and spectral angle match (SID-SAM) measure is utilized as the similarity measurement between hyperspectral remote sensing images. Finally, the images are ranked in descending order and the top-M retrieved images are returned. The experimental results on NASA datasets show that our system can yield superior performance.
Hyperspectral remote sensing images;content-based image retrieval;endmember extraction;similarity measurement;SID-SAM mixed measure
WoS
null
With the rapid development of science and technology and the popularization of networks, computer network security has become a social problem. Based on an analysis of the main factors influencing computer network security, a risk evaluation index system for computer network security is established. The analytic hierarchy process (AHP) is used to calculate the weight of each index. The grey clusters of computer network security risk are determined through common criteria; the whitenization weight functions of the computer network security risk evaluation model are established; and the whitenization clustering coefficients and clustering vectors of the model are then calculated to obtain the final evaluation results. The method is scientific and reasonable, combining subjective evaluation with objective calculation.
Computer network;Grey clustering;Grey class;Evaluation;Weight
WoS
null
With the rapid development of science and technology, computer network technology has spread to all areas of production and life. However, because networks are characterized by connectivity and openness, network security problems have become increasingly prominent: data theft, hacker attacks, viruses, and Trojan attacks directly threaten network information security, and pose a serious threat to economic and social life and even national security. This paper analyzes the existing computer network security risks and the information security threats they pose, and explores appropriate protection strategies.
Computer networks;information security;protection strategies
WoS
null
With the rapid development of computer network technology, networks have been widely used in various fields and have achieved good results; however, network security has become a major obstacle to their further popularization. With the arrival of the twenty-first century, human society has entered the age of the Internet and information, and the ever wider application of computer networks has greatly improved the quality of our life. At the same time, computer network security has become the focus of many experts and scholars; especially with the take-off of the modern Internet economy, network security incidents can have a great impact on the national economy and daily life. Based on the author's many years of practical experience in computer network security defense, this paper carries out a detailed analysis of the problems of computer network security and the characteristics of the risks, focuses on the main threats that computer networks face from viruses and hackers, and proposes concrete measures for network security protection.
Computer network;Safety precautions;Measures
WoS
null
With the rapid development of the Internet, and of the mobile Internet in particular, network size and infrastructure have changed; increasingly random node access makes detecting network attacks a challenge for traditional neural-network-based detection algorithms, dramatically increasing the computational load without achieving high detection accuracy. To address this problem, we use hidden Markov chains from stochastic process theory to transform the neural network, substantially reducing the convergence time while improving network attack detection accuracy.
Neural network;Intrusion detection;Network security;Hidden Markov Chain
WoS
null
With the rapid development of the mobile app market, understanding the determinants of mobile app success has become vital to researchers and mobile app developers. Extant research on mobile applications primarily focused on the numerical and textual attributes of apps. Minimal attention has been provided to how the visual attributes of apps affect the download behavior of users. Among the features of app "appearance", this study focuses on the effects of app icon on demand. With aesthetic product and interface design theories, we analyze icons from three aspects, namely, color, complexity, and symmetry, through image processing. Using a dataset collected from one of the largest Chinese Android websites, we find that icon appearance influences the download behavior of users. Particularly, apps with icons featuring higher colorfulness, proper complexity, and slight asymmetry lead to more downloads. These findings can help developers design their apps.
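The colorfulness feature analyzed above can be quantified in several ways; one standard choice from the aesthetics literature is the Hasler-Susstrunk metric, sketched below (an assumption for illustration, since the abstract does not state the exact metric the authors computed):

```python
import numpy as np

def colorfulness(rgb):
    # Hasler-Susstrunk colorfulness metric for an H x W x 3 RGB image:
    # combines the spread and magnitude of the opponent-color
    # components rg = R - G and yb = 0.5 * (R + G) - B
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rg = r - g
    yb = 0.5 * (r + g) - b
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root
```

A pure gray icon scores 0, and icons with strong, varied hues score higher, matching the intuition behind the study's "higher colorfulness" finding.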
Mobile apps;Demand;Icon;Aesthetics;Image processing
WoS
null
With the rapid development of space technology, the operational amplifier is widely used as a basic linear circuit in satellite systems. Many charged particles are trapped in the earth's magnetosphere, most of them protons and electrons. In BJTs, the damage caused by electrons increases both bulk recombination and surface recombination, and consequently decreases current gain. Transistor gain degradation is the primary cause of parametric shifts and functional failures in linear bipolar circuits. The severity of the electron radiation response correlates with the electron energy and flux, so it is important to understand the electron radiation response under different conditions. In this paper, the tested devices are commercial-off-the-shelf (COTS) NPN-input bipolar operational amplifiers manufactured by Texas Instruments (TI). NPN-input bipolar operational amplifiers (LM108) are irradiated with electrons of different energies and beam currents under different bias conditions to study the electron radiation damage effect. An experiment using Co-60 gamma-ray radiation is conducted to compare the radiation damage caused by Co-60 gamma rays and by electron radiation. The total dose radiation experiments are carried out with the Co-60 gamma-ray source (Xinjiang Technical Institute of Physics and Chemistry). The radiation dose rate for the test samples is 1 Gy(Si)/s, and the total accumulated dose is 1000 Gy(Si). Subsequently, room-temperature and high-temperature annealing are conducted to analyze the parametric failure mechanism of the LM108 caused by total dose radiation under different biases. Results show that a 0.32 Gy(Si)/s electron beam induces more damage than a 1.53 Gy(Si)/s beam of the same energy, and that 1.8 MeV electrons induce more damage than 1 MeV electrons at the same beam current, because the former produce more displacement damage than the latter.
A comparison between zero-biased and forward-biased devices shows that differently biased devices have different radiation sensitivities: radiation produces more damage in zero-biased devices than in forward-biased devices at the same electron energy and beam current. This is because forward bias suppresses the edge electric field in the BJT, leading to a decrease in oxide-trapped and interface-trapped charge. During high-temperature annealing, the degradation of the devices clearly recovers and finally almost returns to the initial values. This result indicates that 1.8 MeV and 1 MeV electron radiation mainly induces ionization damage in the bipolar operational amplifier LM108.
NPN-input bipolar operational amplifier;electron radiation;radiation effect;annealing
WoS
null
With the rapid development of wind power generation technology in recent years, a number of new problems have arisen when large-scale wind power is incorporated into the power system. One of the most serious is that the randomness and volatility of wind generation output, or a grid fault, may cause a wind farm to trip offline entirely; the power balance of the system is then broken and the transient voltage stability of the system is seriously threatened. It is therefore quite necessary to study the transient voltage stability of power systems with wind power generation incorporated. First, the mechanism of transient voltage instability is explained. A model of an actual electrical network, with large-scale wind power accounting for 50% of the total installed capacity, is built in BPA. Based on this network model, the simulation design and analysis are carried out in detail. As wind speed fluctuations and short-circuit faults are two common kinds of disturbances in wind-power-integrated systems, we study their influence on the transient voltage stability of the system in both cases. On this basis, since SVC and STATCOM are two dynamic reactive compensation devices, their system voltage support capabilities are analyzed by comparing the voltage waveforms with and without SVC or STATCOM configured. Results show that both wind speed fluctuations and short-circuit faults can cause adverse impacts. When the wind speed suddenly increases, the system voltage decreases; with SVC or STATCOM configured, the extent of the voltage drop under a sudden wind speed change is reduced.
As for single-phase and three-phase grounding faults, the system voltage decreases when the fault occurs; with SVC or STATCOM configured, the extent of the voltage drop is likewise reduced. It can be observed that the dynamic reactive power compensation device STATCOM makes a greater contribution to system voltage support and is more effective at reactive power compensation than SVC.
wind power generation;transient voltage stability;dynamic reactive compensation;BPA
WoS
null
With the rapid developments of computer technology and information technology, the human-machine interfaces of aircraft, ships, nuclear power plants, battlefield command systems, and other complex information systems have evolved from the traditional control mode to a digital control mode with a visual information interface. This paper studies the error factors of information interfaces in human-computer interaction based on visual cognition theory. A feasible error-cognition model is established to address design problems that result in serious failures in information recognition and analysis, and even in operation and execution processes. Based on the error types of Rasmussen, Norman, Reason and others, as well as the HERA and CREAM failure identification models, we classified and cognitively characterized error factors according to information search, information recognition, information identification, information selection and judgment, and the decision-making process, and obtained a comprehensive error-cognition model for complex information interfaces.
Error factors;Design factors;Human-computer interface;Interaction;Visual cognition;Error-cognition model
WoS
null
With the rapid developments of network technology, devices connected to the network in a variety of fields have increased, and then, network security has become more important. Rule-based classification for intrusion detection is useful, because it is not only easily understood by humans, but also accurate for the classification of new patterns. Genetic network programming (GNP) is one of the rule-mining techniques as well as the evolutionary-optimization techniques. It can extract rules efficiently even from an enormous database, but still needs more accuracy and stability for practical use. This paper describes a classification system with random forests, employing weighted majority vote in the classification to enhance its performance. For the performance evaluation, NSL-KDD (Network Security Laboratory-Knowledge Discovery and Data Mining) data set is used and the proposed method is compared with the conventional methods, including other machine-learning techniques (Random forests, SVM, J4.8) in terms of the accuracy and false positive rate.
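The weighted majority vote used in the classification stage can be sketched as follows (a minimal illustration; the weights would come from, e.g., each classifier's validation accuracy, which the abstract does not detail):

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    # Sum the weight of every classifier voting for each class label
    # and return the label with the largest total weight
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)
```

For example, three classifiers predicting ['attack', 'normal', 'attack'] with weights [0.4, 0.9, 0.3] yield 'normal', since 0.9 outweighs 0.4 + 0.3 = 0.7; a plain majority vote would have said 'attack'.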
Association rule;Data mining;Genetic algorithm;Genetic network programming;Intrusion detection system;Random forests
WoS
null
With the rapid increase in computational power of mobile devices the amount of ambient intelligence-based smart environment systems has increased greatly in recent years. A proposition of such a solution is described in this paper, namely real time monitoring of an electrocardiogram (ECG) signal during everyday activities for identification of life threatening situations. The paper, being both research and review, describes previous work of the authors, current state of the art in the context of the authors' work and the proposed aforementioned system. Although parts of the solution were described in earlier publications of the authors, the whole concept is presented completely for the first time along with the prototype implementation on mobile device-a Windows 8 tablet with Modern UI. The system has three main purposes. The first goal is the detection of sudden rapid cardiac malfunctions and informing the people in the patient's surroundings, family and friends and the nearest emergency station about the deteriorating health of the monitored person. The second goal is a monitoring of ECG signals under non-clinical conditions to detect anomalies that are typically not found during diagnostic tests. The third goal is to register and analyze repeatable, long-term disturbances in the regular signal and finding their patterns.
ambient intelligence;ECG anomalies;heart anomaly alert;ECG monitoring for mobile devices
WoS
null
With the rapid increase in size and complexity of analog systems being implemented on field-programmable analog arrays (FPAAs), the need for synthesis tools is becoming a necessity. In this paper, we present Sim2spice, a tool that automatically converts analog signal-processing systems from Simulink designs to a SPICE netlist. This tool is the top level of a complete chain of automation tools. It allows signal-processing engineers to design analog systems at the block level and then implement those systems on a reconfigurable-analog hardware platform in a matter of minutes. We will provide detailed descriptions of each stage of the design process, elaborate on a custom library of analog signal-processing blocks, and demonstrate several examples of systems automatically compiled from Simulink blocks to analog hardware.
Field-programmable analog array (FPAA);Simulink;SPICE
WoS
null
With the rapid proliferation of RFID technologies, RFID has been introduced into applications such as inventory and sampling inspection. Conventionally, in RFID systems, the reader usually identifies all the RFID tags in the interrogation region with the maximum power. However, some applications may only need to identify the tags in a specified area, which is usually smaller than the reader's default interrogation region. An example could be identifying the tags in a box, while ignoring the tags out of the box. In this paper, we respectively present two solutions to identify the tags in the specified area. The principle of the solutions can be compared to the picture-taking process of an auto-focus camera, which firstly focuses on the target automatically and then takes the picture. Similarly, our solutions first focus on the specified area and then shoot the tags. The design of the two solutions is based on the extensive empirical study on RFID tags. Realistic experiment results show that our solutions can reduce the execution time by 44 percent compared to the baseline solution, which identifies the tags with maximum power. Furthermore, we improve the proposed solutions to make them work well in more complex environments.
RFID;tag identification;auto-focus;specified area;experimental study;algorithm design
WoS
null
With the rapid technological development of various satellite sensors, high-resolution remotely sensed imagery has been an important source of data for change detection in land cover transition. However, it is still a challenging problem to effectively exploit the available spectral information to highlight changes. In this paper, we present a novel change detection framework for high-resolution remote sensing images, which incorporates superpixel-based change feature extraction and hierarchical difference representation learning by neural networks. First, highly homogenous and compact image superpixels are generated using superpixel segmentation, which makes these image blocks adhere well to image boundaries. Second, the change features are extracted to represent the difference information using spectrum, texture, and spatial features between the corresponding superpixels. Third, motivated by the fact that deep neural network has the ability to learn from data sets that have few labeled data, we use it to learn the semantic difference between the changed and unchanged pixels. The labeled data can be selected from the bitemporal multispectral images via a preclassification map generated in advance. And then, a neural network is built to learn the difference and classify the uncertain samples into changed or unchanged ones. Finally, a robust and high-contrast change detection result can be obtained from the network. The experimental results on the real data sets demonstrate its effectiveness, feasibility, and superiority of the proposed technique.
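The superpixel-based change feature extraction in the second step can be sketched as below, using only the spectral (mean) component for brevity; the texture and spatial features described in the abstract are omitted, and a single label map shared by the coregistered bitemporal images is assumed:

```python
import numpy as np

def superpixel_change_features(img1, img2, labels):
    # img1, img2: coregistered bitemporal images of shape (H, W, bands);
    # labels: (H, W) superpixel label map shared by both images.
    # Returns, per superpixel, the difference of the mean spectra,
    # a simple spectral change feature vector.
    features = {}
    for sp in np.unique(labels):
        mask = labels == sp
        features[sp] = img1[mask].mean(axis=0) - img2[mask].mean(axis=0)
    return features
```

Unchanged superpixels yield near-zero feature vectors, while changed ones deviate, which is what the subsequent neural network learns to separate.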
Change detection;difference representation learning;multispectral images;neural network;superpixel segmentation
WoS
null
With the recent advances in the programmability and performance of mobile Graphics Processing Units (GPUs), General-Purpose Graphics Processing Unit (GPGPU) technologies have become available even in mobile devices such as smartphones and tablets. Among the available GPGPU technologies for mobile devices, Open Computing Language (OpenCL) and RenderScript are used to accelerate applications in various fields such as computer graphics, image processing/recognition, and computer vision. For example, these technologies are used for detecting collisions and edges, processing data from a camera, recognizing an object in an image, processing the images stored on a device, and accelerating the drawing of an image when live wallpaper is used in Android-based devices. These technologies increase the processing speed as well as reduce the power consumption of mobile devices. In addition to these general applications, they have great potential for use in the optimizing algorithms of scientific fields. This paper describes GPGPU technologies for mobile devices, compares their similarities and differences, and compares their performance for further research purposes. To the best of our knowledge, this paper is the first work that compares and analyzes available GPGPU technologies for mobile devices.
GPGPU;OpenCL;RenderScript;Mobile device;Parallel processing
WoS
null
With the recent increase in the satellite-based navigation applications, the ionospheric total electron content (TEC) and the L-band scintillation measurements have gained significant importance. In this paper we present the temporal and spatial variations in TEC derived from the simultaneous and continuous measurements made, for the first time, using the Indian GPS network of 18 receivers located from the equator to the northern crest of the equatorial ionization anomaly (EIA) region and beyond, covering a geomagnetic latitude range of 1 degrees S to 24 degrees N, using a 16-month period of data for the low sunspot activity (LSSA) years of March 2004 to June 2005. The diurnal variation in TEC at the EIA region shows its steep increase and reaches its maximum value between 13:00 and 16:00 LT, while at the equator the peak is broad and occurs around 16:00 LT. A short-lived day minimum occurs between 05:00 to 06:00 LT at all the stations from the equator to the EIA crest region. Beyond the crest region the day maximum values decrease with the increase in latitude, while the day minimum in TEC is flat during most of the nighttime hours, i.e. from 22:00 to 06:00 LT, a feature similar to that observed in the mid-latitudes. Further, the diurnal variation in TEC show a minimum to maximum variation of about 5 to 50 TEC units, respectively, at the equator and about 5 to 90 TEC units at the EIA crest region, which correspond to range delay variations of about 1 to 8 m at the equator to about 1 to 15 m at the crest region, at the GPS L1 frequency of 1.575 GHz. The day-to-day variability is also significant at all the stations, particularly during the daytime hours, with maximum variations at the EIA crest regions. Further, similar variations are also noticed in the corresponding equatorial electrojet (EEJ) strength, which is known to be one of the major contributors for the observed day-to-day variability in TEC.
The seasonal variation in TEC maximizes during the equinox months followed by winter and is minimum during the summer months, a feature similar to that observed in the integrated equatorial electrojet (IEEJ) strength for the corresponding seasons. In the Indian sector, the EIA crest is found to occur in the latitude zone of 15 degrees to 25 degrees N geographic latitudes (5 degrees to 15 degrees N geomagnetic latitudes). The EIA also maximizes during equinoxes followed by winter and is not significant in the summer months in the LSSA period, 2004-2005. These studies also reveal that both the location of the EIA crest and its peak value in TEC are linearly related to the IEEJ strength and increase with the increase in IEEJ.
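The conversion from TEC units to range delay quoted above follows the standard first-order ionospheric group delay formula, delta_R = 40.3 * TEC / f^2, which can be sketched as:

```python
def range_delay_m(tec_units, freq_hz=1.57542e9):
    # First-order ionospheric group delay in meters:
    # dR = 40.3 * TEC / f^2, with TEC in electrons/m^2
    # (1 TEC unit = 1e16 electrons/m^2); default frequency is GPS L1
    electrons_per_m2 = tec_units * 1e16
    return 40.3 * electrons_per_m2 / freq_hz ** 2
```

This reproduces the figures in the abstract: at L1, 50 TEC units give roughly 8 m of range delay and 90 TEC units roughly 15 m.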
ionosphere;equatorial ionosphere;electric fields and currents;radio science;ionospheric propagation
WoS
null
With the recent increase in youth sports participation and single-sport youth athletes over the past 30 years, there has been an increase in the number of acute and overuse sports injuries in this population. This review focuses on overuse and traumatic injuries of the shoulder and elbow in young athletes. In particular, we discuss Little League shoulder, glenohumeral internal rotation deficit, glenohumeral instability, superior labrum anterior posterior lesions, Little League elbow, Panner disease, osteochondritis dissecans of the capitellum, posteromedial elbow impingement, and posterolateral rotatory instability of the elbow. There is a significant emphasis on the evaluation and management of upper extremity injury in the overhead thrower.
adolescent athlete;overuse;injury;shoulder;elbow
WoS
null
With the recent widespread adoption of service-oriented architecture, the dynamic composition of services is now a crucial issue in the area of distributed computing. The coordination and execution of composite Web services are today typically conducted by heavyweight centralized workflow engines, leading to an increasing probability of processing and communication bottlenecks and failures. In addition, centralization induces higher deployment costs, such as the computing infrastructure to support the workflow engine, which is not affordable for a large number of small businesses and end-users. In a world where platforms are more and more dynamic and elastic as promised by cloud computing, decentralized and dynamic interaction schemes are required. Addressing the characteristics of such platforms, nature-inspired analogies recently regained attention to provide autonomous service coordination on top of dynamic large scale platforms. In this paper, we propose an approach for the decentralized execution of composite Web services based on an unconventional programming paradigm that relies on the chemical metaphor. It provides a high-level execution model that allows executing composite services in a decentralized manner. Composed of services communicating through a persistent shared space containing control and data flows between services, our architecture allows to distribute the composition coordination among nodes. A proof of concept is given, through the deployment of a software prototype implementing these concepts, showing the viability of an autonomic vision of service composition.
Service coordination;workflow execution;nature-inspired computing;rule-based programming;decentralization
WoS
null
With the rise of legal restrictions on gas emissions, several technologies for petrodiesel (PD) fueled diesel engines are being applied, such as sulfur reduction and electronically commanded injection, followed by gas recirculation and/or after-treatment. The use of biofuels is considered an interesting option for pollutant reduction. In this study, the performance of a diesel engine fueled with four vegetable oils was evaluated in short-duration tests (shorter than the factory-indicated lifespan of the lubricant), with the aim of selecting the most interesting oils for future evaluation in long-duration tests. The analyzed variables were fuel consumption, relative power loss and opacity, for oils of linseed, crambe, rapeseed and jatropha, with 100 degrees C preheating and at engine working temperature (60 degrees C), compared with PD. It was verified that the vegetable oils, on average, present lower consumption than PD when operating without load, but higher consumption under load. In addition, it was observed that the oils show a higher relative power loss than PD and provide lower emission of particulate matter. Crambe and canola presented the best performance among the evaluated oils.
consumption;brake power;opacity
WoS
null
With the specific objective of exploring the surface pressure characteristics and further revealing the torsional vortex-induced vibration (VIV) mechanisms of a bridge deck with a particular geometry, numerous simultaneous pressure measurement campaigns were performed in a wind tunnel for aerodynamic-countermeasure-modified and unmodified sections of a section model at different angles of incidence under the conditions of smooth or turbulent flow. The mean and fluctuating pressure distributions, instantaneous pressures at typical instants, dominant pressure frequencies, pressure phase differences at the dominant frequency of individual pressure measurement taps, and the correlation coefficients among local and global torsional moments were studied, revealing the origins and mechanisms of torsional VIVs. The results demonstrate that the angle of incidence, flow conditions (smooth or turbulent), and installation of a spoiler exert significant effects on the surface pressure distributions, hence affecting the corresponding aerodynamic performance of the bridge deck. Turbulence on the top surface can potentially neutralize the vortex shedding effects and enhance immunity to torsional VIVs. The signature turbulence from the leading (fairing) edge was effectively weakened or even destroyed by sufficiently intense oncoming turbulence and/or the turbulence generated by a spoiler with an appropriate configuration and location. Therefore, potential torsional VIVs could be suppressed by the interaction of vortices generated by oncoming and signature turbulences. This knowledge is essential for a thorough evaluation of the potential for torsional VIVs for this particular bridge deck.
Bridge;Vortex-induced vibration (VIV);Wind tunnel test;Simultaneous pressure measurement;Aerodynamic countermeasure;Vibration mitigation
WoS
null
With the steady growth of linked datasets available on the web, the creation of efficient approaches for analyzing, searching, and discovering links between RDF datasets becomes increasingly necessary. In this paper, we describe LD-LEx, an architecture that makes it possible to index RDF datasets using GridFS documents and probabilistic data structures called Bloom filters. Hence, our lightweight approach provides metadata about the quantity and quality of links between datasets. Moreover, we explored these concepts by indexing more than 2 billion triples from over a thousand datasets, providing insights into the behavior of Bloom filters with respect to performance and memory footprint.
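A Bloom filter of the kind used for this indexing can be sketched minimally as follows (a generic illustration, not the LD-LEx implementation; the k hash positions are derived here from salted SHA-256 digests):

```python
import hashlib

class BloomFilter:
    # Minimal Bloom filter: membership tests may report false
    # positives but never false negatives
    def __init__(self, size_bits=1024, num_hashes=3):
        self.m = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The no-false-negatives guarantee is what makes the structure safe for cheaply pruning candidate links between datasets before any exact comparison.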
RDF;Bloom filter;Linksets;Linked Open Data
WoS
null
With the support of cloud computing techniques, social coding platforms have changed the style of software development. Github is now the most popular social coding platform and project hosting service. Software developers of all levels keep joining Github, and use it to host their public and private software projects. The large numbers of software developers and software repositories on Github are posing new challenges to the world of software engineering. This paper tries to tackle one of the important problems: analyzing the importance and influence of Github repositories. We propose a HITS-based influence analysis on graphs that represent the star relationship between Github users and repositories. A weighted version of HITS is applied to the overall star graph, and generates a different set of top influential repositories than the results from the standard version of the HITS algorithm. We also conduct the influence analysis on per-month star graphs, and study the monthly influence ranking of top repositories.
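The HITS iteration on the user-repository star graph can be sketched as below, with users acting as hubs and repositories as authorities; the edge weights carry whatever weighting scheme is chosen, which the abstract does not specify:

```python
import numpy as np

def weighted_hits(star_matrix, iterations=50):
    # star_matrix: (users x repos) nonnegative weight matrix, where
    # entry [u, r] > 0 means user u starred repository r.
    # Returns (hub scores for users, authority scores for repos).
    A = np.asarray(star_matrix, dtype=float)
    hubs = np.ones(A.shape[0])
    for _ in range(iterations):
        auth = A.T @ hubs   # authority: weighted sum of hub scores
        auth /= np.linalg.norm(auth)
        hubs = A @ auth     # hub: weighted sum of authority scores
        hubs /= np.linalg.norm(hubs)
    return hubs, auth
```

In a toy graph where one repository is starred by three users and another by one, the first receives the higher authority score, matching the intuition that widely starred repositories are more influential.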
Social coding;Github;HITS;Influence analysis
WoS
null
With the tremendous growth of the Internet, large amounts of data are generated, creating big challenges for today's computing technologies and systems. On the other hand, this also sheds new light on the areas of data analytics and mining, which enable uncovering the patterns and laws beneath big data. In recent years, big data analytics has been successfully applied to many areas, such as e-commerce, healthcare, and industry. At the same time, security analytics based on big data has also received great attention from both academia and industry. In this paper, we give a comprehensive sketch of techniques for applying big data to network security analytics. The existing research works are classified into three types: supervised, unsupervised, and hybrid approaches. We then elaborate the technical issues of the three kinds of approaches and compare their advantages and disadvantages. Finally, we outline the potential and research directions for the future.
big data;network security;anomaly detection
WoS
null
With the turn of this century, novel food processing techniques have become commercially very important because of their profound advantages over traditional methods. These novel processing methods tend to preserve the characteristic properties of food, including its organoleptic and nutritional qualities, better than conventional food processing methods. During the same period, there has been a clear rise in the populations suffering from food allergies, especially infants and children. Though this rise is widely attributed to the changing lifestyles of populations in both developed and developing nations, and to the introduction of new food habits with the advent of novel foods and new processing techniques, their complete role is still uncertain. Under these circumstances, it is very important to understand the structural changes in proteins as food is processed, in order to comprehend whether a specific processing technique (conventional or novel) increases or mitigates allergenicity. Various modern means are now being employed to understand the conformational changes in proteins that can affect allergenicity. In this review, the effects of processing on protein structure and allergenicity are discussed, along with the implications of recent studies and techniques, to establish a platform for investigating future pathways to reduce or eliminate allergenicity in the population.
Protein structure;microwave processing;food allergenicity;high pressure
WoS
null
With the uptake of distance learning (DL), which has in fact been marginal for most academics, teaching contexts, traditional power structures and relationships have been transformed, leaving lecturers potentially disenfranchised. Institutional and cultural change is vital, particularly change concerning academic roles. The advent of DL has caused role ambiguity; however, the published literature on academic roles is confusing and lacks clear guidance. For academics involved in postgraduate clinical education, information is even more incomplete. Using a framework of communities, this study is a direct response to these concerns. The aim was to systematically and critically evaluate the implementation of clinical DL in an effort to improve practice. Following a practitioner inquiry methodology, this study investigated the development and delivery of a new DL module. Data collection consisted of documentary analysis of meetings, interviews with staff and students, student evaluations and analytics. Data analysis incorporated both quantitative and qualitative methods to triangulate the research findings. New competencies for academics emerged, including leadership and management. Barriers to staff progress included ambiguity in roles, lack of leadership, unpreparedness for responsibilities, time, and workload. Student barriers included time, fear, relevance of learning, isolation and increased autonomy. Explicit planning, organisational support and working within communities were requisite to create a 'sustaining' technology. This study contributes to educational practice on two levels. Firstly, by striving for rigour, it demonstrates that practitioner inquiry is a legitimate research approach that is accessible and valuable to teachers. Secondly, it adds to useful and applied knowledge concerning DL practice. Avoiding traditional workload assumptions that are erroneous and inaccurate, this study provides new models of organisational roles and responsibilities. The results challenge the evolutionary nature of academia, suggesting that working in communities and new competencies are required, whilst traditional roles and culture must be redefined.
Distance learning;clinical education;academic staff;competencies;communities
WoS
null
Using the fuzzy analytic hierarchy process (FAHP) and a fuzzy transformation matrix (FTM), this research takes the intelligent green building policies promoted in Taiwan during 1999-2015 as its subjects, extracting the collective intelligence of senior domain experts to rank the contribution weight of each policy measure towards achieving the policy goals. In addition, the growth and decline in evaluation cases under the green building label, intelligent building label and green building material label in each year of this period are analyzed to verify the feasibility of the proposed policy evaluation method. The results show that: 1. Using FAHP and FTM to extract the collective intelligence of senior experts can establish a policy evaluation method of reference value; 2. The additional building bulk incentive for private buildings, mandatory controls for public buildings, and the mandatory incorporation of green building materials into green public procurement are the most effective policy measures in Taiwan; 3. Implementing control measures at the design and planning stage of new buildings is more effective than control at the operation and management stage. (C) 2016 Elsevier B.V. All rights reserved.
Policy evaluation;Green building;Intelligent building;Green building material;Intelligent green building;Fuzzy analytic hierarchical process
WoS
null
With the vigorous spread of renewable energy, much attention has been paid to natural ventilation. Natural ventilators are usually classified into passive and active types. In this study, the Venturi-type ventilator, a passive type that operates essentially on Bernoulli's principle, was experimentally investigated to evaluate its ventilation characteristics according to outdoor wind velocity and the opening area of a wall. The experimental results confirmed that the ventilation rate of the Venturi-type ventilator increased linearly and was affected by the intake opening area: the wider the intake opening, the higher the ventilation rate. Furthermore, a new coefficient beta, which reflects the pressure loss from the intake opening to the mixing zone of the Venturi-type ventilator, was introduced and experimentally evaluated. The value of beta, evaluated at about 0.08, provides a simple means of estimating the ventilation rate through the Venturi-type ventilator once the geometric dimensions are known. (C) 2017 Elsevier B.V. All rights reserved.
Bernoulli's equation;Natural ventilation;Reverse flow;Venturi type ventilator;Wind velocity
WoS
null
With the wealth of data accumulated from completely sequenced genomes and other high-throughput experiments, global studies of biological systems, which simultaneously investigate multiple biological entities (e.g. genes, transcripts, proteins), have become routine. Network representation is frequently used to capture the presence of these molecules as well as their relationships. Network biology has been widely used in molecular biology and genetics, where several network properties have been shown to be functionally important. Here, we discuss how such methodology can be useful to translational biomedical research, where scientists traditionally focus on one or a small set of genes, diseases, or drug candidates at any one time. We first give an overview of the network representations frequently used in biology (what nodes and edges represent) and review their application in preclinical research to date. Using cancer as an example, we review how network biology can facilitate system-wide approaches to identifying targeted small-molecule inhibitors. These types of inhibitors have the potential to be more specific, resulting in high-efficacy treatments with fewer side effects than conventional treatments such as chemotherapy. Global analysis may provide better insight into the overall picture of human diseases, as well as identify previously overlooked problems, leading to rapid advances in medicine. From the clinicians' point of view, it is necessary to bridge the gap between theoretical network biology and practical biomedical research in order to improve the diagnosis, prevention and treatment of the world's major diseases.
Network biology;Systems biology;Biomedical research;Cancers;Personalized therapy
WoS