FileName | Abstract | Title |
---|---|---|
S0920548914000865 | Stack smashing is one of the most popular techniques for hijacking program control. Various defense techniques have been proposed, but most need to alter compilers or require hardware support, and only a few have been developed for Windows. In this paper, we design a Secure Return Address Stack to defeat stack smashing attacks on Windows. Our approach needs neither source code nor hardware support. We also extend our approach to instrument a DLL, a multi-thread application, and DLLs used by multi-thread applications. Benchmarks on GnuWin32 show that the relative performance overhead of our approach is only between 3.47% and 8.59%. | An enhancement of return address stack for security |
S0920548914000981 | Cloud computing has gained vast attention due to its technological advancement and availability. Possible benefits of adopting cloud computing in organizations are ease-of-use, convenience, on-demand access, flexibility, and least management from the users. This paper analyzes the risk and value components inside cloud computing practice through a value creation model. | Cloud computing: A value creation model |
S0920548914000993 | To correctly evaluate semantic technologies and to obtain results that can be easily integrated, we need to put evaluations under the scope of a unique software quality model. This paper presents SemQuaRE, a quality model for semantic technologies. SemQuaRE is based on the SQuaRE standard and describes a set of quality characteristics specific to semantic technologies and the quality measures that can be used for their measurement. It also provides detailed formulas for the calculation of such measures. The paper shows that SemQuaRE is complete with respect to current evaluation trends and that it has been successfully applied in practice. | SemQuaRE — An extension of the SQuaRE quality model for the evaluation of semantic technologies |
S0920548914001007 | The identity-based proxy ring signature concept was introduced by Cheng et al. in 2004. This primitive is useful where the privacy of proxy signers is required. In this paper, the first short provably secure identity-based proxy ring signature scheme based on the RSA assumption is proposed. In addition, the security of the proposed scheme reduces tightly to the RSA assumption, and therefore the proposed scheme has an advantage in security reduction compared to previous RSA-based schemes. The proposed scheme not only outperforms the existing schemes in terms of efficiency and practicality, but also does not suffer from the proxy key exposure attack, due to the use of the sequential aggregation paradigm. | A short identity-based proxy ring signature scheme from RSA |
S0920548914001019 | Effective ICT standards enable different services to work together while promoting differentiation that facilitates competition and innovation. In order to ensure that the quality of ICT standards is well developed, it is important that these standards and standardization procedures meet certain requirements. This study reviews standardization research in terms of the process of standardization, innovation, and the demand-supply perspective. It draws implications that standards will be one of the important tools for national economic growth and for unconventional strategies of businesses. The analyses, based on the demand-supply framework, finally suggest promising opportunities for potential researchers. | Standardization revisited: A critical literature review on standards and innovation |
S0920548914001020 | In the scope of applications developed under the service-based paradigm, Service Level Agreements (SLAs) are a standard mechanism used to flexibly specify the Quality of Service (QoS) that must be delivered. These agreements contain the conditions negotiated between the service provider and consumers, as well as the potential penalties derived from the violation of such conditions. In this context, it is important to ensure that the service-based application (SBA) behaves as expected in order to avoid potential consequences such as penalties or dissatisfaction among the stakeholders that have negotiated and signed the SLA. In this article we address the testing of SLAs specified using the WS-Agreement standard by applying testing techniques such as the Classification Tree Method and Combinatorial Testing to generate test cases. From the content of the individual terms of the SLA, we identify situations that need to be tested. We also obtain a set of constraints based on the SLA specification and the behavior of the SBA in order to guarantee the testability of the test cases. Furthermore, we define three different coverage strategies with the aim of grading the intensity of the tests. Finally, we have developed a tool named SLACT (SLA Combinatorial Testing) to automate the process, and we have applied the whole approach to an eHealth case study. | Automatic test case generation for WS-Agreements using combinatorial testing |
S0920548914001032 | This paper describes a new implementation model for a service-oriented smart transducer network based on IEEE 1451 Web Services. The model enables simple service dislocation and the addition of new functionalities in a Service Oriented Architecture (SOA) network. The presented architectural organization supports new service-oriented network entities in addition to standard IEEE 1451 smart transducers. Entities such as particular transducer services and functionalities, processing applications and algorithms, and sets of I/O devices are supported in the form of service providers managed by a central server. The entities are modeled as virtual transducers incorporated in the service-oriented network, and an analysis of how this architectural change affects the smart transducer design constraints is given. A case study of a smart transducer network design, in the form of automated configuration and data exchange between an ARM-based smart transducer interface module, a central server and a dislocated virtual transducer, is presented. | Service-oriented implementation model for smart transducers network |
S0920548914001044 | Network enabled capabilities (NEC) and electronic architectures in modern-day military vehicles fundamentally change the way in which the military conducts its operations. In-vehicle electronic (vetronics) architectures consist of the integration of a variety of safety-critical, deterministic and non-deterministic sub-systems through gateways and backbone networks. This assimilation provides the military with the required information superiority, battle space information integration and new capabilities. However, with this integration comes added security and safety risk. In essence, different communication nodes within the vehicle could be accessed with malicious intentions through NEC, or tampered nodes could be allowed to become part of the system during reconfiguration. Attacks on safety-critical sub-systems can affect the safety of the crew and also the completion of the military mission. This paper presents a novel integrated vetronics survivability architecture framework to protect and recover a vehicle's vetronics from the growing threat of attacks. Architecture components are identified, and their behaviour is analysed to present an innovative vetronics survivability architectural framework. | Integrated vetronics survivability: Architectural design and framework study for vetronics survivability strategies |
S0920548914001056 | Virtual campuses (VCs) are becoming complex applications. Nowadays, users demand more tools, more e-learning platforms, and fewer dependencies among these platforms. To deal with these requirements, the Virtual Campus Advanced Architecture project has defined a multitier software architecture for VCs based on the SOA integration of e-learning platforms. This solution defines a set of interfaces that standardizes the core functions of learning management systems (LMSs), thus decoupling VCs from LMSs and promoting their evolution towards the SaaS model. Simplicity and viability have been key issues in the development of these interfaces, which have been implemented for Blackboard Learn, Moodle and Sakai. | SOA support to virtual campus advanced architectures: The VCAA canonical interfaces |
S0920548914001184 | Companies seek quality certification but, depending on the maturity of their manufacturing processes, may not always be able to obtain it or keep it in the long run. This article evaluates the correlation between ISO certification and product development process maturity. It addresses subjects related to the product development process, the maturity of this process and certification, and then analyzes the maturity and quality certification of ten Brazilian companies in the electrical and electronics, motorcycle and food industries (two, three and five companies, respectively). Each company's level of maturity is identified and related to the status of the company's quality certification. | The relationship between industrial process maturity and quality certification |
S0920548914001196 | Microarray technology is a powerful tool in molecular biology which is used for concurrent monitoring of a large number of gene expressions. Each microarray experiment produces hundreds of images. In this paper, we present a hardware scheme for lossless compression of 16-bit microarray images. The most significant parts of the image pixels are compressed by classifying them into foreground and background regions. This increases the spatial redundancy of the image. The foreground regions are packed together using a novel compaction unit. Real-time compression of these images is achieved while the bit-per-pixel values are comparable with the standard offline compression tools. | Real-time lossless compression of microarray images by separate compaction of foreground and background |
S0920548914001202 | The execution speed of a programmable logic controller (PLC) depends upon the number of analog and digital inputs it scans, the complexity of the ladder diagram, and the time to store the ladder diagram outputs in memory. Next to the ladder diagram, the scanning of analog signals consumes considerable time, as they have to be converted into digital form. The two facts that limit the conversion speed are that the processor used for analog signal scanning can process only one channel at a time, and that a multichannel analog to digital converter (ADC) has a digital output for only one channel. The hardware nature of a field programmable gate array (FPGA) allows simultaneous conversion of all the analog signals into digital form and storage of the digital data in block RAM. This paper discusses the design of a multichannel ADC using an FPGA. The simulation results show that the conversion time of an ‘n’ channel ADC is 13.17μs. This increases the PLC execution speed. | Design of parallel conversion multichannel analog to digital converter for scan time reduction of programmable logic controller using FPGA |
S0920548914001214 | Route diversity improves the security of Wireless Sensor Networks (WSNs) against adversaries attempting to obtain sensitive sensor data. By limiting the fraction of data relayed over each link and/or routed through each relay node, route diversity can be achieved, thus, extracting useful information from the network is rendered more challenging for adversaries. Sensor nodes operate with limited energy resources, therefore, energy consumption of security mechanisms is a central concern for WSNs. In this paper we evaluate the energy cost of route diversity for security in WSNs through a novel Linear Programming framework. We characterize energy dissipation and data relaying behaviors of three route diversity techniques to mitigate node capture only, eavesdropping only, and node capture and eavesdropping attacks. Effects of node density, network area, level of resilience, and network topology on energy cost are investigated. | Evaluating energy cost of route diversity for security in wireless sensor networks |
S0920548915000136 | Early and accurate discrimination of risky software projects is critical to project success. Researchers have proposed many predictive approaches based on traditional modeling techniques, but the high misclassification rate of risky projects is common. To overcome this problem, this study proposes a typical three-layered neural network (NN) architecture with a back propagation algorithm that can learn the complex patterns of the OMRON dataset. This study uses four accuracy evaluation criteria and two performance charts to objectively quantify and visually illustrate the performance of the proposed approach. Experimental results indicate that the NN approach is useful for predicting whether a project is risky. Specifically, this approach improves accuracy and sensitivity by more than 12.5% and 33.3%, respectively, compared to a logistic regression model developed from the same database. These results imply that the proposed approach can be used for early planning of limited project/organization resources and appropriate action for risky projects that are likely to cause schedule slippage and cost overload. | Discriminating risky software project using neural networks |
S0920548915000148 | A new information hiding scheme for color images based on the concept of visual cryptography and the Boolean exclusive-or (XOR) operation is proposed. Three different schemes with noise-like, meaningful and binary shares are presented. Meaningful shares may reduce suspicion that something is concealed there. Binary shares can achieve both the benefits of smaller share size and good visual quality. Our model can be easily extended from 256 colors to 65,536 or true color images simply by expanding the block size from 3×3 to 4×4 or 5×5, respectively. | A technique for sharing a digital image |
S092054891500015X | The development of distributed testing frameworks is complex: the implementation process must consider the mechanisms and functions required to support interaction, as well as the communication and coordination between distributed testing components. Typical problems in such systems include timeouts, locks, and observability, controllability and synchronization issues. Moreover, the distributed testing process must not only check whether the output events have been observed, but also the dates at which these events occurred. In this paper, we show how to cope with these problems by using a distributed testing method that includes timing constraints. Afterwards, a multi-agent architecture is proposed in the design process to describe the behavior of testing a distributed chat group application at a high level of abstraction. | A temporal agent based approach for testing open distributed systems |
S0920548915000161 | Recommendation systems and content-filtering approaches based on annotations and ratings essentially rely on users expressing their preferences and interests through their actions in order to provide personalised content. This activity, in which users engage collectively, has been named social tagging, and it is one of the most popular ways for users to engage online. Although it has opened new possibilities for application interoperability on the semantic web, it is also posing new privacy threats. In fact, it consists in describing online or offline resources with free-text labels, i.e., tags, thereby exposing a user's profile and activity to privacy attacks. As a result, users may wish to adopt a privacy-enhancing strategy in order not to reveal their interests completely. Tag forgery is a privacy-enhancing technology consisting in generating tags for categories or resources that do not reflect the user's actual preferences too accurately. By modifying their profile, tag forgery may have a negative impact on the quality of the recommendation system, thus protecting user privacy to a certain extent, but at the expense of utility loss. The impact of tag forgery on content-based recommendation is consequently investigated in a real-world application scenario where different forgery strategies are evaluated, and the resulting loss in utility is measured and compared. | On content-based recommendation and user privacy in social-tagging systems |
S0920548915000173 | Processing of high-resolution time series satellite images typically requires a large amount of computational resources and time. We introduce here a scientific gateway for computing the Normalized Difference Vegetation Index (NDVI) time series data. Based on a distributed workflow using the Web Processing Service (WPS) standard, the gateway aims to be completely interoperable with other standardized tools. The availability of this gateway may help researchers to acquire knowledge of land cover changes more efficiently over very large spatial and temporal extents, which is especially important in the context of Armenia for which timely decision-making is needed. | An interoperable cloud-based scientific GATEWAY for NDVI time series analysis |
S0920548915000185 | Crypton is a 128-bit block cipher that was proposed by Lim as a candidate for the Advanced Encryption Standard (AES) competition. So far, a variety of cryptanalytic methods have been used to mount attacks on reduced-round versions of Crypton. The biclique attack is one of the most recent cryptanalytic techniques, bringing new tools from the area of hash functions to the area of block cipher cryptanalysis. In this paper, using non-isomorphic biclique cryptanalysis, we propose a method to construct independent bicliques, up to five rounds, for cryptanalysis of full-round Crypton. | Non-isomorphic biclique cryptanalysis of full-round Crypton |
S0920548915000197 | We present an automatic approach to compile language resources for named entity recognition (NER) in Turkish by utilizing Wikipedia article titles. First, a subset of the article titles is annotated with the basic named entity types. This subset is then utilized as training data to automatically classify the remaining titles by employing the k-nearest neighbor algorithm, leading to the construction of a significant lexical resource set for Turkish NER. Experiments on different text genres are conducted after extending an existing NER system with the resources and the results obtained confirm that the resources contribute to NER on different genres. | Automatic compilation of language resources for named entity recognition in Turkish by utilizing Wikipedia article titles |
S0920548915000203 | We present an informatics approach to synthesize and classify terminology as defined in standards. Traditional document style standards and dictionary style definitions are very limiting when it comes to getting a holistic picture of the application of standards and regulations. We focus on standards for sustainable manufacturing, but the approach is not limited to this domain. By studying the structure and relationships within those standards we developed a schema for representing and relating standards to each other. We then used that schema as a basis for visualization and querying, which enables interactive and intuitive perusal of the material. | Communicating standards through structured terminology |
S0920548915000215 | The Big Data era brings global digital infrastructure collaboration built on emerging standards. Given the complexity and dynamics of each specification, corresponding implementations need to undergo sufficient verification and validation procedures. Significant effort has been invested in conformance testing of individual requirements, for example, by using formal, semi-formal or informal approaches. Less work has been done, however, on overall orchestration assessment to ensure the global validity of conformance statements. For example, cyclic dependencies among conformance statements of a service under test may lead to inappropriate conclusions about the assessment outcome. In this study, a dependency model based on three-valued logic and fixed-point theory is presented to address dependency issues among cross-referenced statements, so as to provide effective support for global digital infrastructure collaboration. | A model for tracing cross-referenced statement validity |
S0920548915000239 | Operatives' training is crucial in emergency management. Traditional exercises to improve procedures interoperability and harmonization between agencies are complex and expensive. Research on command and control systems specifically designed for emergency management and on virtual reality use leads towards enhancing real world applications' capabilities, facilitating management and optimizing resource usage. This paper proposes a new architecture for a training system based on the interconnection between real and virtual worlds extending the MPEG-V standard; allowing real and virtual units' simultaneous and real-time training using commercial off-the-shelf equipment, and including a novel subsystem for video management from both real and virtual sources. | Interoperable architecture for joint real/virtual training in emergency management using the MPEG-V standard |
S0920548915000240 | This paper analyzes to which extent research published in Computer Standards & Interfaces (CSI) has a technical focus. We find that CSI has been following its scope very closely in the last three years and that the majority of its publications have a technical focus. Articles published in CSI constantly cite research from various technical disciplines, but there are also a limited number of references to non-technical literature. Mostly technical journals cite CSI papers, with a few exceptions of non-technical journals. We conclude that CSI stays within its scope of computer standards and interfaces interpreted in a technical sense. | Citation analysis of Computer Standards & Interfaces: Technical or also non-technical focus? |
S0920548915000252 | This paper introduces the design, implementation and evaluation of the CORFU technique to deal with corporate name ambiguities and heterogeneities in the context of public procurement metadata. This technique is applied to the “PublicSpending.net” initiative to show how the unification of corporate names is the cornerstone of providing a visualization service that can help policy-makers detect and prevent upcoming necessities. Furthermore, a research study to evaluate the precision, recall and robustness of the proposed technique is conducted using more than 40 million names extracted from public procurement datasets (Australia, United States and United Kingdom) and the CrocTail project. | Enabling policy making processes by unifying and reconciling corporate names in public procurement data. The CORFU technique |
S0920548915000276 | Incorporating security features is one of the most important and challenging tasks in designing distributed systems. Over the last decade, researchers and practitioners have come to recognize that the incorporation of security features should proceed by means of a structured, systematic approach, combining principles from both software and security engineering. Such systematic approaches, particularly those implying some sort of process aligned with the development life-cycle, are termed security methodologies. There are a number of security methodologies in the literature, of which the most flexible and, according to a recent survey, most satisfactory from an industry-adoption viewpoint are methodologies that encapsulate their security solutions in some fashion, especially via the use of security patterns. While the literature does present several mature pattern-driven security methodologies with either a general or a highly specific system applicability, there are currently no (pattern-driven) security methodologies specifically designed for general distributed systems. Going further, there are also currently no methodologies with mixed specific applicability, e.g. for both general and peer-to-peer distributed systems. In this paper we aim to fill these gaps by presenting a comprehensive pattern-driven security methodology – arrived at by applying a previously devised approach to engineering security methodologies – specifically designed for general distributed systems, which is also capable of taking into account the specifics of peer-to-peer systems as needed. Our methodology takes the principle of encapsulation several steps further, by employing patterns not only for the incorporation of security features (via security solution frames), but also for the modeling of threats, and even as part of its process. 
We illustrate and evaluate the presented methodology in detail via a realistic example – the development of a distributed system for file sharing and collaborative editing. In both the presentation of the methodology and example our focus is on the early life-cycle phases (analysis and design). | ASE: A comprehensive pattern-driven security methodology for distributed systems |
S0920548915000288 | The efficiency of telecommunication services (TS) has increased their popularity. However, objectively evaluating the quality and the potential of TS is difficult for the TS provider because its milieu differs from that of the customer. This obstructs the progression of TS development and usage. This study therefore provides a model for measuring the presence, magnitude, and form of the perception discrepancy regarding TS. This model can help the TS provider and the customer gauge the pros and cons of investment in TS and shape corresponding strategies by linking the developed model, short/long-term TS strategies, and business activities related to TS. | Perceived service quality discrepancies between telecommunication service provider and customer |
S0920548915000380 | Virtualization provides essential assistance in saving energy and resources and also simplifies the required information management. However, information security issues have increasingly become a serious concern. This study investigates the post-virtualization business security landscape related to system security. A questionnaire is developed based on the 133 control management principles of the ISO/IEC 27001 standard, and a sampling technique is employed to collect responses from IT professionals with an understanding of virtualization information environments. The findings suggest that virtualization may be beneficial to certain industrial sectors in handling information security issues. | Effects of virtualization on information security |
S0920548915000392 | The technology acceptance model (TAM) has been applied in various fields to study a wide range of information technologies. Although TAM has been developed in this research stream in Taiwan, TAM's measurement issues have received scant attention. A robust model must exhibit measurement invariance across different respondent subgroups to ensure that various sample profiles share the same relationships. A survey regarding E-portfolio system reuse intention was conducted, resulting in 360 valid responses across subgroups differing in gender, grade, and level of willingness to share, to examine the measurement invariance of TAM. The results empirically support the validity of our TAM instrument for evaluating E-portfolio reuse intention behavior. These findings suggest that men and women, differing grades, and different levels of willingness to share conceptualize the TAM constructs similarly. The implications of these results enable us to understand TAM's validity in E-portfolio acceptance research. | Continuance intention of E-portfolio system: A confirmatory and multigroup invariance analysis of technology acceptance model |
S0920548915000409 | Power line communication (PLC) seems to be one of the best trade-offs between cost and benefit for implementing Smart Grids in an industrial context. Unfortunately, the industrial environment may compromise the reliability of PLC technologies due to noisy communication channels and interfering/competing PLC systems. In this work, a multi-protocol instrument for PLC performance estimation is presented. The proposed solution is able to characterize and decode several PLC systems with different physical modulations using a software-defined architecture. A working prototype of the proposed instrument has been characterized and used in a real industrial plant in order to study potential issues affecting PLC. | Performance analysis of power line communication in industrial power distribution network |
S0920548915000410 | Careful design of information privacy policies is one significant means of inducing the provision of personal information. This research takes three design elements – length, visibility, and specificity – and tests their effectiveness in addressing information sensitivity, measuring the perceived importance and relevance of the policy to decisions to share personal information. The experimental results show that visibility and specificity take priority. Furthermore, high information sensitivity conditions induce higher perceptions of importance and relevance. The research implications suggest that managers should consider maximizing the benefits of these policy characteristics to induce consumers to read the policy and make it a significant consideration in sharing personal information. | Information privacy policies: The effects of policy characteristics and online experience |
S0920548915000422 | In this paper, we present a framework for building a hybrid network composed of a Controller Area Network (CAN) bus and an ISA100.11a industrial wireless network. The end-to-end delay of the CAN-ISA100.11a hybrid network is evaluated. A scheme based on time slot configuration in ISA100.11a, namely priority assurance (PA), is proposed to ensure the prioritization hierarchy, which is suitable for large packet sizes. Moreover, a comparison between shared time slots and short time slots is carried out to give insight into which one gives the best performance in the CAN-ISA100.11a hybrid network. The delay of the proposed framework is further simulated under an interference environment. | Extending CAN bus with ISA100.11a wireless network |
S0920548915000446 | The increasing availability of distributed renewable energy sources such as photo-voltaic (PV) panels has introduced new requirements for innovative power grid infrastructures. Information technologies provide new opportunities for developing techniques to optimize the energy usage by a new generation of smart-grids. Here we investigate an original solution that aims at maximizing the self-consumption within a neighborhood based on a collaborative approach. Distributed software agents plan and enforce the optimal schedule of consuming appliances according to the prediction of energy production by PV panels, the estimated energy profile of consuming devices and the user's preferences and constraints. Finally we focus on the performance evaluation of a negotiation protocol that allows agents to find a sub-optimal solution for the global schedule, comparing results obtained by a prototype implementation and experimenting different technologies. | Design and evaluation of P2P overlays for energy negotiation in smart micro-grid |
S0920548915000458 | The PCCR (Pupil Center Corneal Reflection) method has become the dominant technique for finding diverse human eye gaze directions, through research on eye tracking technology that has been conducted over a very long period of time. The initial studies on eye tracking technology were related to general human interfaces for operating equipment and devices; the technology has since been applied to various purposes, such as market research in recent studies analyzing customer behavior. In particular, a real-time eye gaze tracking system is most important for many HCI applications, including stereoscopic synthesis, intent extraction and behavior analysis. In order for an eye gaze tracking system to run in real time, the system must have an efficient pupil detection algorithm and ambience-independent image processing, as well as reduced complexity and a small size and number of circuit components. This paper proposes a method for obtaining cleaner images than previous systems in order to reduce image processing overhead. Because it also helps reduce the number of image frames dropped during image processing, the proposed method can provide sufficient performance even on a low-cost hardware system by reducing the transmission traffic. | A novel approach to the low cost real time eye mouse |
S092054891500046X | Hidden and exposed terminal problems are known to negatively impact wireless communications, degrading potential computing services on top. These effects are more significant in Wireless Mesh Sensor Networks (WMSNs), and, particularly, in those based on the IEEE 802.15.5 Low-Rate Wireless Personal Area Network (LR-WPAN mesh) standard, a promising solution for enabling low-power WMSNs. The first contribution of this paper is a quantitative evaluation of these problems under the IEEE 802.15.5 Asynchronous Energy Saving (ASES) mode, which is intended for asynchronous data-collection applications. The results obtained show a sharp deterioration of the network performance. Therefore, this paper also reviews the most relevant approaches that cope with these problems and are compatible with ASES. Finally, a set of these proposals is assessed to identify those most suitable for potential integration with ASES, which constitutes the second major contribution of the paper. | On the influence of the hidden and exposed terminal problems on asynchronous IEEE 802.15.5 networks
S0920548915000471 | We present the deficiencies of traditional identity-based authorization models in structured Peer-to-Peer (P2P) networks, where users' Public Key Certificates (PKCs) serve two roles, authentication and authorization, and access to network resources is controlled by Access Control Lists (ACLs). With these deficiencies in mind, we propose a completely new framework for authorization in structured P2P networks based on Attribute Certificates (ACs) and a fully distributed certificate revocation system. We argue that the proposed framework yields a more flexible and secure authorization scheme for structured P2P networks while improving the efficiency of the assignment of privileges. | Attribute-based authorization for structured Peer-to-Peer (P2P) networks
S0920548915000550 | Trustworthiness and technological security solutions are closely related to online collaborative learning and they can be combined with the aim of reaching information security requirements for e-Learning participants and designers. Moreover, mobile collaborative learning is an emerging educational model devoted to providing the learner with the ability to assimilate learning any time and anywhere. In this paper, we justify the need for trustworthiness models as a functional requirement devoted to improving information security. To this end, we propose a methodological approach to modelling trustworthiness in online collaborative learning. Our proposal sets out to build a theoretical approach with the aim of providing e-Learning designers and managers with guidelines for incorporating security into mobile online collaborative activities through trustworthiness assessment and prediction. | A methodological approach for trustworthiness assessment and prediction in mobile online collaborative learning
S0920548915000562 | Lately, the use of mobile applications on Smartphones has grown considerably. One of the most popular types of applications is videogames. As classic videogame development is a costly process, several editors and tools have appeared to make this process quicker and easier. Although these editors allow the definition of various aspects of the videogame, they do not, in general, include capabilities for modeling the logic of the game loop. In this research we propose VGPM, a specific notation based on the BPMN approach to define several important characteristics of the game logic, aiming to reduce the cost of traditional videogame development. | VGPM: Using business process modeling for videogame modeling and code generation in multiple platforms
S0920548915000574 | In 2010 SEMAT launched a call for action to refound Software Engineering. Later, the Object Management Group endorsed it as a request for proposals to deal with SEMAT concerns. The KUALI-KAANS Research Group responded to the request as a submitter by creating the KUALI-BEH proposal. The objective of this paper is to present the roadmap KUALI-BEH followed throughout the OMG standardization process: its origins, fusion with the ESSENCE proposal and eventual appearance as a standard. The subsequent lessons learned highlight the lack of aligned definitions among IT standards and the standardization process shortcomings, to which improvements are suggested. | The making of an OMG standard |
S0920548915000604 | Requirements concerning the specification and correct implementation of access control policies have become increasingly common in industrial networked systems in recent years. Unfortunately, the peculiar characteristics of industrial systems often prevent the designer from taking full advantage of technologies and techniques already developed and profitably employed in other application areas. In particular, the unavailability and/or impossibility of adopting hardware (h/w) and software (s/w) mechanisms able to automatically enforce the policies defined at a high level of abstraction often means that the correctness of the policy implementation in the real system must be checked manually. The first step towards carrying out this cumbersome task in an automated way is the development of a model able to capture both the high-level policy specification and the details and low-level mechanisms characterizing the actual system implementation. This paper introduces a twofold model for the description of access control policies in industrial environments aimed at coping with this requirement, which can be profitably adopted in several kinds of automated analysis. | A twofold model for the analysis of access control policies in industrial networked systems
S0920548915000616 | Radio Frequency Identification (RFID) has been widely adopted in practice for object identification. The ownership of an object can be represented by the ownership of the RFID tag attached to the object. An ownership could be shared among different parties and should be transferable. Although many RFID ownership transfer protocols have been proposed, a shared ownership transfer protocol remains a daunting task in the absence of a trusted party. In this paper, we propose the first provably secure shared ownership transfer protocol, which requires merely hashing computations and has a constant key size. | Shared RFID ownership transfer protocols
S0920548915000628 | Among the key design practices that contribute to the development of inclusive ICT products and services is user testing with people with disabilities. Traditionally, this involves partial or minimal user testing through the usage of standard heuristics, employing external assisting devices, and the direct feedback of impaired users. However, efficiency could be improved if designers could readily analyse the needs of their target audience. The VERITAS framework simulates and systematically analyses how users with various impairments interact with ICT products and services. Findings show that the VERITAS framework is useful to designers, offering an intuitive approach to inclusive design. | Designing for designers: Towards the development of accessible ICT products and services using the VERITAS framework
S092054891500063X | The rise of the Internet of Things has been improving the so-called mobile Online Social Networks (mOSNs), enabling more ubiquitous inter-communication and information sharing. Meanwhile, location sharing is known as a key cornerstone of mOSNs. Unfortunately, location sharing has also raised similarly serious concerns about potential privacy leakage. We propose BMobishare, a security-enhanced privacy-preserving location sharing mechanism. It employs a Bloom Filter to mask sensitive data exchanges, such that neither side can obtain unauthorized private information. Analyses and evaluations show that BMobishare's enhanced location sharing procedure achieves significantly better performance than existing approaches. | An efficient and privacy-preserving location sharing mechanism
S0920548915000641 | Knowledge-based Clinical Decision Support Systems (KB-DSSs) promise to provide patient-specific recommendations, generated by matching the KB with electronic patient data from various sources. The challenge of making KB-DSSs interoperable can be simplified by including those data sources in an integrated Personal Health Record (PHR). This paper aims to identify relevant criteria to support the evaluation of data standards for the PHR, following a case-study approach. Fifteen functional and non-functional criteria were identified and used to evaluate selected standards (HL7 CDA, HL7 vMR and openEHR). Our evaluation identifies their main advantages and disadvantages for supporting the development of interoperable, data-integrated KB-DSSs. | Understanding requirements of clinical data standards for developing interoperable knowledge-based DSS: A case study
S0920548915000653 | The number of applications for RFID systems is growing rapidly. Due to the conflict between large-scale use of RFID technology and inadequate security-related research, this paper proposes secure mechanisms based on tripartite credibility consisting of an enhanced security mechanism of LLRP and a tag participation third-party authentication mechanism. This paper first introduces relevant information about RFID systems and then details design and implementation of the proposed secure mechanisms. Finally, this paper evaluates the performance of the proposed mechanisms in terms of storage complexity, communication cost and computational cost and analyzes the security advantages compared to those of previous research. | Design and analysis of secure mechanisms based on tripartite credibility for RFID systems |
S0920548915000665 | Deep convolutional network cascades have been successfully applied to face alignment. The configuration of each network, including the selection strategy of local patches for training and the input range of local patches, is crucial for achieving the desired performance. In this paper, we propose an adaptive cascade framework, termed Adaptive Cascade Deep Convolutional Neural Networks (ACDCNN), which adjusts the cascade structure adaptively. A Gaussian distribution is utilized to bridge the successive networks. Extensive experiments demonstrate that our proposed ACDCNN achieves state-of-the-art accuracy, but with reduced model complexity and increased robustness. | Adaptive Cascade Deep Convolutional Neural Networks for face alignment
S0920548915000677 | Energy-efficient backbone construction is one of the most important objectives in a wireless sensor network (WSN); to construct a more robust backbone, weighted connected dominating sets can be used, where the weight of a node is directly related to its energy. In this study, we propose algorithms for this purpose and classify them as weighted dominating set algorithms and weighted Steiner tree algorithms, which are used together to construct a weighted connected dominating set (WCDS). We provide fully distributed algorithms with semi-asynchronous versions. We present the design of the algorithms, analyze their proofs of correctness and their time, message and space complexities, and provide simulation results in the ns2 environment. We show that the approximation ratio of our algorithms is 3ln(S), where S is the total weight of the optimum solution. To the best of our knowledge, our algorithms are the first fully distributed and semi-asynchronous WCDS algorithms with a 3ln(S) approximation ratio. We compare our proposed algorithms with related work and show that our algorithms select a backbone with lower cost and fewer nodes. | Semi-asynchronous and distributed weighted connected dominating set algorithms for wireless sensor networks
S0920548915000689 | Data exchange formats play a prominent role in facilitating interoperability. Standardization of data exchange formats is therefore extremely important. In this paper, we present two contributions: an empirical framework called XML-DIUE, for evaluating data exchange format standards in terms of their usage and an illustration of this framework, demonstrating its ability to inform on these standards from their usage in practice. This illustration is derived from the localization domain and focuses on identifying interoperability issues associated with the usage of XML Localization Interchange File Format (XLIFF), an open standard data exchange format. The initial results from this illustrative XLIFF study suggest the utility of the XML-DIUE approach. Specifically they suggest that there is prevalent ambiguity in the standard's usage, and that there are validation errors across 85% of the XLIFF files studied. The study also suggests several features for deprecation/modularization of the standard, in line with the XLIFF Technical Committee's deliberations, and successfully identifies the core features of XLIFF. | An empirical framework for evaluating interoperability of data exchange standards based on their actual usage: A case study on XLIFF |
S0920548915000690 | Cloud computing has gained mass popularity in the business environment. However, this technology also raises some risk concerns, such as weak protection of security and privacy. Owing to its distributed and remote nature, auditing this technology is challenging. This paper focuses on issues related to cloud computing risk and audit tasks. | Cloud computing risk and audit issues
S0920548915000707 | In this paper, we surveyed the technical elements of video surveillance systems and proposed several countermeasures to effectively manage the many keys used for image encryption, privacy protection and user authentication. In addition, we proposed several solutions for potential problems that could arise when the system adopts a per-user privacy masking policy. The proposed solutions selectively implement a Kerberos approach, a round keys approach and a double hash chain approach. A secure video surveillance system is expected to protect privacy and provide accessibility through strong user authentication and prioritized authorization. | An efficient key management solution for privacy masking, restoring and user authentication for video surveillance servers
S0920548915000719 | How to securely transmit data is an important problem in the Internet of Things (IoT). Fuzzy identity-based encryption (FIBE) is a good candidate for resolving this problem. However, existing FIBE schemes suffer from the following disadvantages: reliance on random oracle models, security only in the selective-ID model, long public parameters, and loose security reductions. In this paper, we propose a new FIBE scheme. Our scheme is secure in the full model without random oracles, and at the same time has a tight security reduction and short public parameters. This makes our scheme quite suitable for securely transmitting data in the IoT. | Fully secure fuzzy identity-based encryption for secure IoT communications
S0920548915000732 | The two significant tasks of a focused Web crawler are finding relevant documents and prioritizing them for effective download. For the first task, we propose an algorithm to fetch and analyze the most effective HTML elements of the page to predict and elicit the topical focus of each unvisited page with high accuracy. For the second task, we propose a scoring function of the relevant URLs through the use of T-Graph to prioritize each unvisited link. Thus, our novel method uniquely combines these approaches, giving precision and recall values close to 50%, which indicate the significance of the proposed architecture. | A focused crawler combinatory link and content model based on T-Graph principles |
S0920548915000744 | In this paper, a routing protocol for k-anycast communication based upon the anycast tree scheme is proposed for wireless sensor networks. Multiple metrics are utilized to guide the route discovery. A source initiates the creation of a spanning tree, rooted at the source node, that reaches any one sink. Subsequently, we introduce three schemes for k-anycast, in which a packet is transmitted to exactly k sinks, at least k sinks, or at most k sinks; a packet can be transmitted to k or more sinks by benefiting from the broadcast technique without wasting the source's energy on replicating it. | Routing protocol for k-anycast communication in rechargeable wireless sensor networks
S0920548915000847 | With the availability of various heterogeneous radio access technology standards, how to support multiple heterogeneous radio interfaces in a single device and provide seamless mobility between them is one of the key issues in the design and implementation of modern multi-radio mobile devices (like smartphones). The radio architectures of such multi-radio mobile devices can have great impacts on achievable mobility performance. This paper presents a case study that investigates the impact of radio architecture design on the mobility performance achievable in wireless standard implementation, with a particular focus on heterogeneous radio access mobility between Long Term Evolution (LTE) and enhanced High Rate Packet Data (eHRPD). We present an in-depth overview of handover procedures in standards and their achievable performance enhancements from the perspective of device radio architectures. We consider three radio architectures (single transmission/reception, single transmission dual reception, and dual transmission/reception architectures) and provide a performance analysis to compare the three architectures in terms of handover delay and energy consumption. We also discuss supportability for advanced flow-precision mobility under the different radio architectures, and present a comparison with another vertical handover technology. | On the impact of mobile node radio architectures on heterogeneous wireless network standards: A performance analysis of LTE–eHRPD mobility
S0920548915000859 | In this paper we propose a novel Application Programming Interface (API) design pattern for inter-communication between Remote Laboratory Management Systems (RLMSs) accommodating different levels of functional support and thereby allowing more efficient sharing of laboratory resources regardless of their hosting RLMS. Afterwards, we present initial results and demonstrate the feasibility and effectiveness of this pattern by developing an API for two common RLMSs, Sahara and the iLab shared Architecture (ISA). As a result, users logging into a Sahara server managed to access and manipulate a radio-activity experiment hosted on an ISA server. | Interoperating remote laboratory management systems (RLMSs) for more efficient sharing of laboratory resources |
S0920548915000860 | Malicious code can propagate rapidly via software vulnerabilities. In order to prevent the explosion of malicious code on the Internet, a distributed patching mechanism is proposed in which patches tend toward hub nodes automatically, based on social computing in social networks. A server in the social network generates automatic patches and pushes them to the nodes with maximum degree. Those hub nodes then send the patch to their buddies according to their degree in the social network. Automatic patches propagate rapidly through hub nodes and patched nodes, which improves the security of the whole social network. Receivers accept a patch according to their trust value toward the sender, which prevents malicious code from exploiting our scheme to propagate itself. Experiments show this mechanism is more efficient than other patching mechanisms. | Patching by automatically tending to hub nodes based on social trust
S0920548915000872 | In the field of Context-Aware Recommendation Systems (CARS), only static contextual information is usually considered. However, dynamic contextual information would be very helpful in mobile computing scenarios. Despite this interest, the design and implementation of flexible and generic frameworks to support the easy development of context-aware mobile recommendation systems has been relatively unexplored. In this paper, we describe a framework that facilitates the development of CARS for mobile environments. We mainly focus on the development of the elements needed to support pull-based recommendations and on the experimental evaluation of the proposed system. | Pull-based recommendations in mobile environments
S0920548915000884 | We propose resource allocation models for wireless multi-cell orthogonal frequency division multiple access (OFDMA) networks. The models maximize the signal to interference noise ratio (SINR) and the capacity computed with the SINR in an OFDMA network, subject to user power and subcarrier assignment constraints. We derive mixed integer programming formulations for the case of maximizing SINR and piecewise linear approximations for the capacity objective. A variable neighborhood search (VNS) metaheuristic is proposed to compute near-optimal solutions. Our numerical results indicate that VNS provides near-optimal solutions and better feasible solutions than the CPLEX and DICOPT solvers at significantly lower computational cost. | Resource allocation in uplink wireless multi-cell OFDMA networks
S0920548915000896 | HTML5 can be used to develop client applications by composing REST web services within the context of Web 2.0. However, the possibility of implementing cross-platform smartphone applications with REST services needs to be studied. Accordingly, we developed a REST-based cross-platform application with PhoneGap. The application was deployed on the Android, Windows Phone, and iOS platforms; subsequently we evaluated its usability. We observed that REST-based cross-platform smartphone applications can be implemented with HTML5 and PhoneGap, which can be scaled-up into a REST service composition tool. Moreover, the application’s usability remains unaffected on the native platforms and adaptation required only minimal effort. | Towards end-user development of REST client applications on smartphones |
S0920548915000902 | Problems of E-book applications include different file formats, performance inefficiency and image distortion, which may cause slow development of the E-book market in the information technology industry. This paper proposes an innovative E-book system called CloudBook, which utilizes an embedded Graphics Processing Unit (GPU) and a data-locality-aware Hadoop system to resolve these problems. The results of the experiments show that the CloudBook system can increase the performance of the OpenVG library by 73%, reduce the execution time of file conversion by 50%–75%, and improve the data hit ratio on the cloud platform by 10%. | CloudBook: Implementing an E-book reader on a cloud platform by an optimized vector graphic library
S0920548915000914 | The Internet of Things (IoT) refers to a world-wide network of interconnected physical things using standardized communication protocols. Recent advancements in IoT protocol stack unveil a possibility for the future IoT based on the stable and scalable Internet Protocol (IP). Then, how can data and events introduced by IP networked things be efficiently exchanged and aggregated in various application domains? The problem, known as service composition, is essential to support the rapid creation of new ubiquitous applications. This article explains the practicability of the future full-IP IoT with realtime Web protocols and discusses the research challenges of service composition. | Service composition for IP smart object using realtime Web protocols: Concept and research challenges |
S0920548915000926 | The smartphone market is nowadays highly competitive. When buying a new device, users focus on visual esthetics, ergonomics, performance, and user experience, among others. Assessing usability issues allows improving these aspects. One popular method for detecting usability problems is heuristic evaluation, in which evaluators employ a set of usability heuristics as guide. Using proper heuristics is highly relevant. In this paper we present SMASH, a set of 12 usability heuristics for smartphones and mobile applications, developed iteratively. SMASH (previously named TMD: Usability heuristics for Touchscreen-based Mobile Devices) was experimentally validated. The results support its utility and effectiveness. | Developing SMASH: A set of SMArtphone's uSability Heuristics |
S0920548915000938 | Peer-to-peer (P2P) computing has gained great attention in both research and industrial communities. Although many P2P systems, such as CAN, Pastry, Chord, and Tapestry, have been proposed, they only support exact-match lookups. To overcome this limitation, a new area of P2P research, called peer data management systems (PDMSs), has emerged. In a PDMS, metadata are used to annotate resources in order to support complex queries. This paper proposes a hybrid PDMS called RDF-Chord. In RDF-Chord, a set of keys is ingeniously designed to significantly reduce the search space. Experiments show that RDF-Chord is highly scalable and efficient, especially for range queries. | RDF-Chord: A hybrid PDMS for P2P systems
S092054891500094X | Context-awareness enables the personalization of computer systems according to users' needs and their particular situation at a given time. The personalization capabilities are usually implemented by programmers due to the complex processes involved. However, an important trend in software development is that more and more software systems are being implemented not only by programmers but also by people with expertise in other domains. Since most of the existing context-aware development toolkits are designed for programmers, non-technical users cannot develop these kinds of systems. The design of tools that allow users without programming skills, but with expertise in the domain where the system is going to be deployed, to create context-aware systems will contribute to speeding up the adoption of these kinds of services by society. This paper presents a cloud-based platform to ease the development of context-aware mobile applications by people without programming skills. The platform has been designed to be used in a tourism domain. This way, tourism experts can send tourist information to mobile users according to their context data (indoor/outdoor location, language, and date and time range). An energy-efficient mobile app has been developed in order to obtain context data from the user's device and to receive personalized information in real time based on these data. The architecture and implementation details of the system are presented and the evaluation of the platform by tourism domain experts is discussed. | A cloud-based platform to develop context-aware mobile applications by domain experts
S0920548915000951 | The rapid development of Internet of Things has triggered the multiplication of communication nodes based on Radio-Frequency Identification (RFID) and Wireless Sensor Networks (WSNs) in various domains such as building, city, industry, and transport. These communication nodes are attached to a thing or directly included in the material of the thing to form a communicating material. In communicating material, one of the desired objectives is to merge the logical data with its physical material, thus simplifying the monitoring of its life cycle, the maintenance operations, and the recycling process. In this context, the initial form of the communicating material can evolve during its lifecycle. It can be split, aggregated with other materials, or partially damaged. However, the entire information in the material should always be accessible after each change. Thus, the objective of this research is to develop specific algorithms for efficient dissemination of information in the material in order to limit information losses. Two dissemination algorithms hop-counter-based and probabilistic-based are proposed for storing data by using WSNs, and non-localized and localized storage is considered. Non-localized storage ensures that information can be retrieved from each piece of the material by using a uniform data replication process. Localized storage ensures that the information is stored in a limited region of the material. Castalia/OMNeT++ simulator is used to compare the performance of the proposed algorithms with other similar protocols such as DEEP, Supple, and RaWMS. | Non-localized and localized data storage in large-scale communicating materials: Probabilistic and hop-counter approaches |
S0920548915000963 | We address in this paper the security issues that arise when outsourcing business processes as BPaaS (Business Process as a Service), in particular when sharing and reusing process fragments coming from different organizations for faster and easier development of process-based applications (PBAs). The goal is twofold: to preserve the provenance of process fragments, i.e., the companies' business activities that provide the reused fragments, in order to avoid competition; and to guarantee the end-to-end availability of PBAs to fragment consumers. We formally define the problem and offer an efficient anonymization-based protocol. Experiments have been conducted to show the effectiveness of the proposed solution. | Security-aware Business Process as a Service by hiding provenance
S0920548915000975 | The designing and implementation of a multi-agent system (MAS), where autonomous agents collaborate with other agents for solving problems, constitute complex tasks that may become even harder when agents work in new interactive environments such as the Semantic Web. In order to deal with the complexities of designing and implementing a MAS, a domain-specific language (DSL) can be employed inside the MAS's development cycle. In such a manner, a MAS can be completely specified by programs written in a DSL. Such programs are declarative, expressive, and at the right abstraction level. In this way the complexity of MAS development is then partially shifted to DSL development and the task herein can be much more feasible by using a proper DSL development methodology and related tools. This paper presents and discusses our methodology for DSL development based on declarative formal specifications that are easy to compose, and its usage during MAS development. A practical case-study is also provided covering an example of a MAS's development for expert finding systems. By using denotational semantics for precisely defining the language, we show that it is possible to generate the language automatically. In addition, using attribute grammars makes it possible to have modular methodology within which evolutionary language development becomes easier. | Declarative specifications for the development of multi-agent systems |
S0920548915000987 | M2M (Machine-to-Machine) communication for Internet of Things (IoTs) systems is considered to be one of the major issues in future networks. Considering the characteristics of M2M networks in IoTs systems, traditional security solutions cannot be applied to E2E (End-to-End) M2M networks because the M2M network itself is vulnerable to various attacks. We consider security aspects of M2M communications and then propose a security gateway application (SGA) comprising a lightweight symmetric key cryptographic negotiation function, a secure E2E M2M key exchange generation function and a secure E2E M2M message delivery function. The SGA is newly proposed to improve the gateway application (GA) of the ITU-T M2M service layer in the IoTs reference model. We prove through theoretical security analyses that it can prevent various attacks. Therefore, it meets the basic security requirements of the M2M service layer. | A security gateway application for End-to-End M2M communications
S0920548915000999 | During the past few years, wireless local area networks (WLANs) have become one of the most popular technologies in data telecommunications and networking. The IEEE 802.11 protocol has achieved worldwide acceptance for WLANs with minimum management and maintenance costs. The theoretical performance and numerical results for the saturation throughput and delay of the distributed coordination function (DCF) were derived by Ziouva and Antonakopoulos, taking into account busy medium conditions and how they affect the use of the backoff mechanism. However, the definition of their proposed channel busy probability is not suitable for the operating system architecture. In this paper, the channel busy conditions are modified, Ziouva and Antonakopoulos's (ZA) model is improved, and more accurate analyses of the DCF are presented. Our analysis is also extended to the computation of the delay performance of both the request-to-send/clear-to-send (RTS/CTS) and basic access mechanisms. The numerical results show that the modified model has better performance than the ZA model under an ideal channel scenario. | The performance evaluation of IEEE 802.11 DCF using Markov chain model for wireless LANs
S0920548915001002 | This paper proposes a feedback scheduler for energy harvesting systems (FS-EH) in a soft real-time context on a DVFS processor. This scheduler reduces the processor speed in proportion to the available energy in the batteries and the processor utilization. The goal is to experimentally maximize the battery life while minimizing the deadline miss ratio. When the battery is full, the harvested energy is wasted, so the system could use the processor at full speed. FS-EH accounts for this by using the processor at full speed when the available energy is over a given threshold. Otherwise, the processor speed is set proportionally to the available energy and the instantaneous processor utilization. We experimentally show that FS-EH performs better, in terms of energy consumption, quality of control and deadline miss ratio, than other scheduling algorithms proposed in this context. | A real-time feedback scheduler for environmental energy with discrete voltage/frequency modes
S0920548915001014 | New technologies are making videoconferencing more ubiquitous than ever. This poses a major challenge for scaling software MCUs, the traditional videoconferencing servers. We propose, implement and test an architecture for a distributed MCU designed to be deployed in a Cloud Computing environment. The main design idea is to break monolithic MCUs into simpler parts: broadcasters. These broadcasters can be deployed independently on the fly, which achieves higher deployment granularity and flexibility. We describe the control architecture that allows this distribution and prove the viability of the system with a fully developed implementation. | Materialising a new architecture for a distributed MCU in the Cloud
S0920548915001026 | This paper presents a study of the collective knowledge in information technology (IT) and a comparative analysis of innovative trends in the standardisation of the roads of knowledge in the subfields of software engineering (SE). The focus is on the innovation that will be required in the examples database of standardised units in IT and SE for the improvement of information systems (IS). The goal is to determine how to obtain appropriate knowledge in IT and SE to model the excellence of IS. The contribution to the modelling of IS excellence in the PDCA (Plan-Do-Check-Act) cycle is presented. | Knowledge acquisition in information technology and software engineering towards excellence of information systems based on the standardisation platform
S0920548915001038 | We consider wireless nodes connected in an ad hoc network where recursion-based localization is available and ad hoc routing is deployed. We study the possibility of using ad hoc routing to help a mobile (sensor) node in a dense or sparse wireless network estimate its position: the node first finds the closest two or three ad hoc reference nodes whose positions are already known, then uses the positions of the found reference nodes together with distances estimated from the hop counts of the ad hoc routing to compute its estimated position. Our protocol controls which nodes have to calculate their position using the recursive approach in order to serve as reference points for other nodes in the network. Our proposed algorithm is based on an improved version of the OLSR protocol, mainly concerning MPR decision and utilization, into which we introduce supplemental selection criteria that are also significant for the localization process. The first part of the localization is performed with this modified version; for the continuation, two schemes are used: DV-hop and DV-distance. These two schemes are applied in two ways: once three anchors have been found, to compute the position of the related node, and otherwise to continue searching for anchors. Furthermore, a localized node whose position has been determined is also assigned as an anchor node in the network. Additionally, we compare our schemes with a recursive position estimation (RPE) algorithm with respect to density, position error and the number of reference points, and a t-test with a p-value of 0.05 is performed on the reference points and densities. | Recursive and ad hoc routing based localization in wireless sensor networks
S092054891500104X | Indexing the Web is becoming a laborious task for search engines as the Web grows exponentially in size and distribution. Presently, the most effective known approach to overcome this problem is the use of focused crawlers. A focused crawler employs a significant and unique algorithm to detect the pages on the Web that relate to its topic of interest. For this purpose we propose a custom method that uses specific HTML elements of a page to predict the topical focus of all the pages that have an unvisited link within the current page. These recognized on-topic pages are then ranked by their relevance to the main topic of the crawler before being downloaded. In the Treasure-Crawler, we use a hierarchical structure called T-Graph, an exemplary guide for assigning an appropriate priority score to each unvisited link; these URLs are later downloaded based on this priority. This paper embodies the implementation, test results and performance evaluation of the Treasure-Crawler system. The Treasure-Crawler is evaluated in terms of specific information retrieval criteria such as recall and precision, both with values close to 50%. Such an outcome asserts the significance of the proposed approach. | Empirical evaluation of the link and content-based focused Treasure-Crawler
S0920548915001051 | Traditional, human-resource-intensive medical care practices often lack consideration of natural human reasoning; they rely on procedural knowledge as their knowledge base and therefore lack precision in knowledge description. In order to reduce dependency upon human resources and to quickly and accurately discern a patient's health condition, we have constructed a fuzzy blood pressure verification system, which consists of an intelligent mobile device integrated with fuzzy comparison and a fuzzy analytic hierarchical process. This personalized and humane blood pressure verification system can help safeguard the health and safety of the elderly. In stage one, the system computes the hierarchical levels of the patient's various physiological data using the fuzzy analytic hierarchical process (FAHP). In stage two, a fuzzy blood pressure verification system is constructed using the fuzzy analytic hierarchical process and high-level fuzzy Petri nets. Plotting functionality was added for the convenience of observing blood pressure trends in patient records in the database system. Upon detection of an abnormal physiological signal from a patient, the intelligent mobile system can generate an alarm, thereby facilitating real-time home care. | A novel blood pressure verification system for home care
S0920548915001063 | For critical systems, providing services with minimal interruption is essential. Availability Management Framework (AMF), defined by SA Forum for managing highly-available applications, requires configurations of applications consisting of various entities organized according to AMF-specific rules and constraints. Creating such configurations is difficult due to the numerous constrained entities involved. This paper presents UACL (UML-based AMF Configuration Language) and a supporting implementation that models the AMF domain, providing designers with tools needed to design, edit, and analyze AMF configurations. UACL is an extension of UML through its profiling mechanism and has been designed to represent AMF concepts, their relations, and constraints. | A UML-based domain specific modeling language for service availability management: Design and experience |
S0920548915001075 | The emergence of cloud environments makes it convenient for users to synchronize files across platforms and devices. However, data security and privacy are still critical issues in public cloud environments. In this paper, a private cloud storage service that addresses security and performance concerns is proposed. A data deduplication scheme is designed in the proposed private cloud storage system to reduce cost and increase storage efficiency. Moreover, the Cloud Data Management Interface (CDMI) standard is implemented in the proposed system to increase interoperability. The proposed service provides an easy way for users to establish the system and access data conveniently across devices. The experimental results also show the superiority of the proposed interoperable private cloud storage service in terms of data transmission and storage efficiency. Compared with the existing Gluster Swift system, the proposed system is shown to be much better suited for service environments where most of the transmitted data are small files. | A file-deduplicated private cloud storage service with CDMI standard
S0920548915001087 | Standards are documents that aim to define norms and a common understanding of a subject by a group of people. In order to accomplish this purpose, these documents must define their terms and concepts in a clear and unambiguous way. Standards can be written in two different ways: by informal specification (e.g. natural language) or formal specification (e.g. math-based or diagrammatic languages). Remarkable papers have already shown how well-founded ontology languages provide resources for a specification's author to better distinguish the meanings of concepts and relations, resulting in a better specification. This paper aims to expose the importance of truly ontological distinctions for standardization. To achieve this objective, we evaluate a math-based formal specification, in Z notation, using a well-founded ontology language for a telecommunications case study, the ITU-T Recommendation G.805. The results confirm that truly ontological distinctions are essential for clear and unambiguous specifications. | On the importance of truly ontological distinctions for standardizations: A case study in the domain of telecommunications
S0920548915001166 | (ANTECEDENT) The main responsibility of Information Technology Service Management (ITSM) in an organization is to provide high-quality services. This implies that the services must be appropriate and their continuity ensured. In this context, organizations need to adopt best practices in service management to be more efficient and competitive. Some ITSM models collect the best practices of recognized organizations; these models are mainly applied by large organizations. (OBJECTIVE) The objective of this study is to gather experiences in the application of ITSM models in small organizations. (METHODS) To achieve this objective, a systematic literature review (SLR) was performed. (RESULTS) We found primary studies applied to IT areas of some large and medium companies, but few in the context of small companies. (CONCLUSION) During the SLR we identified improvements and difficulties that many organizations encountered when applying ITSM models. The principal difficulty was the lack of knowledge that personnel and consultants have for adopting a model. On the other hand, companies that succeeded in the application of an ITSM model obtained benefits such as process improvement, higher user satisfaction, and service cost and time reduction. | Information technology service management models applied to medium and small organizations: A systematic literature review
S0920548915001178 | The number of computers installed in urban and transport networks has grown tremendously in recent years, as have the local processing capabilities and digital networking currently available. However, the heterogeneity of existing equipment in the field of ITS (Intelligent Transportation Systems) and the large volume of information it handles greatly hinder the interoperability of the equipment and the design of cooperative applications between devices currently installed in urban networks. While the dynamic discovery of information and the composition and invocation of services through intelligent agents are a potential solution to these problems, all of these technologies require intelligent management of information flows. In particular, it is necessary to decouple these information flows from the technologies used, enabling universal interoperability between computers regardless of the context in which they are located. The main objective of this paper is to propose a systematic methodology to create ontologies, using methods such as semantic clustering algorithms for the retrieval and representation of information. Using the proposed methodology, an ontology is developed in the ITS domain. This ontology serves as the semantic information basis for a Semantic Service (SS) that allows the connection of new equipment to an urban network. The SS uses the CORBA standard as its distributed communication architecture. | A methodology for structured ontology construction applied to intelligent transportation systems
S092054891500118X | Enterprise Resource Planning (ERP) packages are information systems that automate the business processes of organizations, thereby improving their operational efficiency substantially. ERP projects that involve customization are often affected by inaccurate effort estimation. The size of the software forms the basis for effort estimation. Methods used for effort estimation employ either function points (FP) or lines of code (LOC) to measure the size of customized ERP packages. A literature review reveals that existing software sizing methods, which are meant for custom-built software products, may not be suitable for COTS products such as customized ERP packages. Hence, effort estimation using conventional methods for customized ERP packages may not be accurate. This paper proposes a new approach to estimating the size of customized ERP packages using Package Points (PP). The proposed approach was validated with data collected from 14 ERP projects delivered by the same company. A positive correlation was observed between Package Points (PP) and the efforts of these projects. This result indicates the feasibility of our proposed approach as well as a positive climate for its use by the project managers of future ERP projects. Lastly, we examine the implications of these results for practice and the scope for future research. | An approach to estimate the size of ERP package using package points
S0920548915001191 | The complexity of electronic systems embedded in modern vehicles has led to the adoption of distributed implementations where different communication protocols are used. Although the literature on vehicular networks presents several methods for the timing analysis of automotive systems, there is no reference model for the holistic timing analysis of heterogeneous systems where FlexRay/CAN protocols are used. In this work we propose a reference model for the timing and schedulability analysis of heterogeneous FlexRay/CAN networks. The proposed reference model can be used to compute end-to-end response times and to analyze local components, such as response times in a specific network segment. | A reference model for the timing analysis of heterogeneous automotive networks
S0920548915001208 | The integration of the several heterogeneous and geographically dispersed actors in the Smart Grid environment is currently hampered by the usage of non-interoperable and proprietary automation solutions. Standards-compliant integration is an indispensable requirement for successful Smart Grid automation. The international standard IEC 61850 has been recognised as one of the fundamental components of reference Smart Grid architectures. Nevertheless, certain parts of the IEC 61850 definitions seem inadequate to keep pace with the accelerated Smart Grid evolution. For this reason, the current literature presents solutions able to surpass the main limitations of the standard, improving harmonisation in the Smart Grid; among them is the integration of the IEC 61850 data model with the OPC UA information model. According to the current literature, the integration of IEC 61850 with OPC UA does not involve the IEC 61850 engineering process, which is based on a particular language called the Substation Configuration description Language (SCL). This paper presents a common Smart Grid scenario where interoperability could be improved by extending the idea of the integration between IEC 61850 and OPC UA to the SCL-based engineering capabilities of the IEC 61850 standard. The paper presents a proposed mapping between IEC 61850 SCL and OPC UA, describing the relevant algorithm, its implementation and a performance evaluation. | Integration of IEC 61850 SCL and OPC UA to improve interoperability in Smart Grid environment
S092054891500121X | The driving force behind software development for the Electronic Medical Record (EMR) has been gradually changing. Heterogeneous software requirements have emerged, so correctly carrying out a development project has become a complex task. This paper adopts a knowledge engineering and management mechanism, namely CommonKADS, together with software quality engineering to improve the existing strategic information management (SIM) plan as a design methodology that helps software implementation for medical institutes. We evaluate the adoption performance with a real case that examines the maturity level of the architecture alignment between the target solution in the proposed SIM plan and the built medical system. | An improved strategic information management plan for medical institutes
S092054891500135X | Controller Area Network (CAN) is very popular in networked embedded systems. At the same time, intranets are now ubiquitous in office, home, and factory environments, and the Internet Protocol (IP) is the glue that permits any kind of information to be exchanged between devices in heterogeneous systems. In this paper, a network architecture to seamlessly integrate CAN buses in intranets is described. Flexibility and scalability were the key design requirements in conceiving a comprehensive solution that suits both inexpensive and very complex applications. A prototype implementation has been tested to confirm the feasibility of the architecture and assess the performance it can achieve. | Seamless integration of CAN in intranets
S0920548915001361 | Despite the popularity of the subject, one surprising aspect of building automation (BA) is the scarcity of authoritative literature references regarding the topic. This situation hampers communication between developers and contributes to the well-known problem of heterogeneity where there is difficulty in integrating solutions from different manufacturers with each other. This article systematizes fundamental concepts and requirements of BA systems, defining each aspect based upon established literature standards. Using these aspects as guidelines, the main BA technology specifications available are then reviewed with respect to their coverage of features. We then proceed by showing that none of the analyzed specifications are able to totally cover the expected standard functionality span of BA. Finally, we conclude that none of the existing approaches are able to fully overcome the problem of heterogeneity by satisfactorily addressing all the aspects of BA endorsed by the standards. | Building automation systems: Concepts and technology review |
S0920548915001397 | The capitalization of activities occurring during collaborative brainstorming sessions is a real challenge, and it is even greater when two remote teams have to participate in the same work sessions. Activities that have tangible results, such as digital notes, are easier to capitalize. However, many other events may happen that can only be captured on video. We use the mandatory video-conference system between two remote teams to capitalize non-tangible results such as agreement or disagreement between participants. We developed a system for supporting remote collaboration. The paper describes the design of the two main parts of this system. First it presents the resource channel, i.e. the way the teams can exchange, display and synchronize data on large multi-touch devices. Then it presents the video channel, i.e. the way people can be aware of the other team. The paper concludes with some observations about the current version of this system devoted to the capitalization of collaborative team activities. | Capitalization of remote collaborative brainstorming activities
S0920548915001403 | In recent years, the popularity of smart phones has fueled the emergence of wearable devices such as wristbands, smart watches, and sport watches, since these portable devices can record human body information, synchronize information with smart phones, and conduct real-time monitoring of physical condition. However, a recent survey indicates that nearly 70% of respondents are not interested in buying Apple's new iWatch, although the marketplace is full of competing alternatives such as Samsung's Gear Fit, LG's G Watch, and Sony's SW3. In this study, a novel framework combining multiple correspondence analysis (MCA) and association rule mining (ARM) with K nearest neighbors (KNN) is proposed to help brand companies address the following issues: (1) using MCA to explore the latent relationships between users' demographic profiles, user perceptions of design attributes, and user preferences for wearable devices; (2) using ARM to identify key design attributes that can best configure a specific alternative to achieve effective product differentiation (positioning); (3) using KNN to accomplish efficient product selection (recommendation). More importantly, hundreds of consumers were surveyed to validate the presented framework. | Combining multiple correspondence analysis with association rule mining to conduct user-driven product design of wearable devices
S0920548915001415 | Software architecture management, especially in component-based web user interfaces, is important to enhance their run-time accessibility, dynamics and management. The cloud offers some excellent mechanisms for this kind of system, since software can be managed remotely, easy availability of resources is ensured and mass storage is possible. This article presents an infrastructure solution, based on the use of web services and cloud computing, for managing COTS-based architectures. | A cloud service for COTS component-based architectures
S0920548915001427 | This paper presents the LQM metadata schema, an extension of the IEEE LOM standard. LQM is capable of registering information related to the quality of virtual education resources. As a complement, we have developed a cataloging and evaluation tool capable of registering LQM metadata and performing the subsequent quality estimation according to UNE 66181:2012. The proposal identifies and describes the dimensions and properties of the LQM data elements. The research results show that it is feasible to provide an automatic estimation of the quality of digital educational resources using LQM. | A Learning Quality Metadata approach: Automatic quality assessment of virtual training from metadata
S0920548915001439 | Addressing-based routing solutions usually adopt a tree topology, but the routing paths along a tree are not optimal and the resources of nodes around the root are excessively consumed. Moreover, the descendants of a failed node need to rejoin the tree and reacquire addresses, and during this process they cannot communicate. In order to overcome these deficiencies, this paper proposes an optimal addressing-based routing scheme for 6LoWPAN. This scheme takes advantage of one-hop and two-hop neighbors to calculate optimal paths. Since the root is not involved in most of the optimal paths, excessive resource consumption is avoided. The scheme also includes an address update algorithm, so the descendants of a failed node can update their addresses rather than reacquire new ones. During the address update process, the descendants can still use their original addresses to communicate. | Optimal addressing-based routing for 6LoWPAN
S0920548915001440 | The usage of Linked Open Data (LOD) has grown in the last few years, and the number of available datasets is considerably higher. Another way to make data available is microdata, whose aim is to make information more understandable for search engines so that they give better results. The Schema.org vocabulary was created for the enrichment of microdata as a way to give more accurate results for user searches. As Schema.org is a kind of ontology, it has the potential to become a bridge to the Web of Linked Data. In this paper we analyze the potential of mapping Schema.org to the Web of Linked Data. Concretely, we have obtained mappings between Schema.org terms and the terms provided by the Linked Open Vocabularies (LOV) collection. In order to measure the limitations of our mappings, we have compared the results of our script with those of some matching tools. Finally, an analysis of the usability of interlinking Schema.org with vocabularies in LOV has been carried out; for this purpose, two studies presenting aggregated information have been conducted. Results show that new information was added a substantial number of times. | Linking from Schema.org microdata to the Web of Linked Data: An empirical assessment
S0920548915001452 | Cold chain management is important for manufacturers in the food industry, since they face the dilemma of having to choose between frozen storage and cool storage in delivering products to retailers or end-consumers. Frozen storage incurs high energy consumption for the preservation of food products, whereas cool storage involves the constant threat of bacterial decay. Contemporary cold-chain development in temperature control usually focuses on a single logistics chain rather than serving multiple channels. In order to overcome this deficiency, this study proposes a time-temperature indicator (TTI) based cold-chain system, which uses wireless sensors to collect temperature data and implements the formulation of Critical Control Point (CCP) criteria throughout the entire delivery process. In particular, this approach is based on an Internet-of-Things (IoT) architecture as well as the international food standard ISO 22000. More importantly, four new business models, including (1) cold chain home-delivery service, (2) convenience store (CVS) indirect delivery, (3) CVS direct delivery, and (4) flight kitchen service, are successfully developed for performance assessment. Experimental results indicate that the proposed framework can increase annual sales of braised pork rice from 4.44 million bowls to over 6 million bowls, create more distribution channels that generate extra revenue of more than US$6.35 million, and reduce energy consumption in central kitchens by 10%, raising the turnover per kilowatt hour of electricity from US$14.23 to US$18.64. | Integrating wireless sensor networks with statistical quality control to develop a cold chain system in food industries
S0920548915001464 | The automatic detection of differences between documents is a very common task in several domains. This paper introduces a formal way to compare diff algorithms and to analyze the deltas they produce. There is no one-size-fits-all definition of the quality of a delta, because it is strongly related to the application domain and the final use of the detected changes. Researchers have historically focused on minimality: reducing the size of the produced edit scripts and/or taming the computational complexity of the algorithms. Recently they have started giving more relevance to the human interpretability of the deltas, designing tools that produce more readable, usable and domain-oriented results. We propose a universal delta model and a set of metrics to effectively characterize and compare deltas produced by different algorithms, in order to highlight which are the most suitable for use in a given task and domain. | Measuring the quality of diff algorithms: a formalization
S0920548916000027 | We introduce a new paradigm, based on an extension of the Open Cloud Computing Interface (OCCI), for the on-demand monitoring of the cloud resources provided to a user. We have extended the OCCI with two new sub-types of core entities: one to collect the measurements and the other to process them. The user can request instances of such entities to implement a monitoring infrastructure. The paradigm does not target a specific cloud model, and is therefore applicable to any kind of resource provided as a service. The specifications include only the minimum needed to describe a monitoring infrastructure, thus making this standard extension simple and easily adoptable. Despite its simplicity, the model is able to describe complex solutions, including private/public clouds, and covers both infrastructure and application monitoring. To highlight the impact of our proposal in practice, we have designed an engine that deploys a monitoring infrastructure using its OCCI-compliant descriptions. The design is implemented in a prototype that is available as open source. | Application level interface for a cloud monitoring service |
S0920548916000118 | Photo-voltaic Energy Storage Systems (PVESS) are widely used non-conventional energy sources. System reliability can be improved by standardizing the transducer and controller components according to the IEEE 1451.0 standard protocol. The protocol defines the commands and functions required to interface transducers in the digital domain using two functional modules, namely the “Transducer Interface Module” (TIM) and the “Transducer Electronic Data Sheet” (TEDS). This paper presents a novel modular control structure of the TIM and the corresponding TEDS of a photo-voltaic system. The smart transducer interface module proposed in this paper can be embedded into the PVESS. The effectiveness of the proposed smart system is illustrated for a battery storage system using solar energy. A Field Programmable Gate Array (FPGA) implementation of the protocol provides a compact Smart Energy Storage System (SESS) and supports a knowledge-based reconfigurable control module using TEDS information. A Spartan-6 FPGA is chosen to implement the architecture with the necessary command execution unit to effectively utilize TEDS information. The proposed architecture is implemented in the FPGA and its performance is validated for a battery storage system. | Reconfigurable smart controller and interface architecture for Photo-voltaic Energy Storage System
S0920548916300010 | IEEE 1599 is an XML-based format standardized in 2008 by the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE). The goal of this standard is to provide a comprehensive description of a music piece, supporting the encoding of heterogeneous aspects (symbolic, formal, graphical, audio, etc.) inside a unique XML document. The format presents advanced features such as multi-layer information structuring and full synchronization among synchronizable entities. In this work we conduct a critical review of IEEE 1599, not only providing a brief overview of its strengths but, above all, underlining those aspects that could be improved. The paper also compares IEEE 1599 with other common formats for representing music information. | A critical review of the IEEE 1599 standard
S0920548916300022 | The technology market is continuing a rapid growth phase in which different resource providers and Cloud Management Frameworks position themselves to provide ad-hoc solutions, such as management interfaces, information discovery or billing, trying to differentiate themselves from competitors; this results in incompatibilities between them when addressing more complex scenarios such as federated clouds. It is therefore important to grasp the interoperability problems present in current infrastructures by studying how existing and emerging standards could enhance the cloud user experience. In this paper we review the current open challenges in Infrastructure as a Service cloud interoperability and federation, and point to the potential standards that should alleviate these problems. | Standards for enabling heterogeneous IaaS cloud federations
S0920548916300034 | Online user-generated content is playing a progressively important role as an information source for social scientists seeking to extract value from it. Advanced procedures and technologies for the capture, storage, management, and analysis of data make it possible to exploit the increasing amounts of data generated directly by users. In that regard, Big Data is gaining relevance in social science on the quantitative-dataset side, in contrast to traditional social science, where collecting data has always been hard, time consuming, and resource intensive. Hence, the emergent field of computational social science is broadening researchers' perspectives. However, it also requires a multidisciplinary approach involving several different knowledge areas. This paper outlines an architectural framework and methodology to collect Big Data from an electronic Word-of-Mouth (eWOM) website containing user-generated content. Although the paper is written from the social science perspective, it must also be considered together with other complementary disciplines such as data access and computing. | Harvesting Big Data in social science: A methodological approach for collecting online user-generated content
S0920548916300046 | The global software development (GSD) paradigm has, over the last fifteen years, shifted from being novel and ground-breaking to being widely adopted and mainstream. This wide adoption is partly owing to the many benefits provided by GSD, such as reduced labour costs, proximity to new markets and access to a diverse and experienced skills pool. Yet taking advantage of these benefits is far from straightforward, and research literature now includes a proliferation of guidelines, reviews and models to support the GSD industry. Although this active area of study is firmly established as a research area in its own right, the boundaries between general software engineering and GSD are somewhat confused and poorly defined. In an effort to consolidate our understanding of GSD, we have developed an ontology in order to capture the most relevant terms, concepts and relationships related to the goals, barriers and features of GSD projects. The study we present here builds on research conducted in a collaboration project between industry and academia, in which we developed an ontology in order to provide practitioners with a “common language and conceptualisation”. Its successful outcome encouraged us to create a broader ontology that captures the current trends in GSD literature. The key ontology, along with its three sub-ontologies, is the result of a review of the relevant literature, together with several expert evaluations. This ontology can serve as a useful introduction to GSD for researchers who are new to the paradigm. Moreover, practitioners can take advantage of it in order to contextualise their projects and to predict and detect possible barriers. What is more, using a common language will help both researchers and practitioners to avoid ambiguities and misunderstandings. | A validated ontology for global software development