Columns: FileName (string, 17 chars) · Abstract (string, 163 to 6.01k chars) · Title (string, 12 to 421 chars)
S0895611116300301
Current state-of-the-art imaging techniques can provide quantitative information to characterize ventricular function within the limits of the spatiotemporal resolution achievable in a realistic acquisition time. These imaging data can be used to personalize computer models, which in turn can help treatment planning by quantifying biomarkers that cannot be directly imaged, such as flow energy, shear stress and pressure gradients. To date, computer models have typically relied on invasive pressure measurements to be made patient-specific. When these data are not available, the scope and validity of the models are limited. To address this problem, we propose a new methodology for modeling patient-specific hemodynamics based exclusively on noninvasive velocity and anatomical data from 3D+t echocardiography or Magnetic Resonance Imaging (MRI). Numerical simulations of the cardiac cycle are driven by the image-derived velocities prescribed at the model boundaries using a penalty method that recovers a physical solution by minimizing the energy imparted to the system. This numerical approach circumvents the mathematical challenges due to the poor conditioning that arises from the imposition of boundary conditions on velocity only. We demonstrate that through this technique we are able to reconstruct given flow fields using Dirichlet-only conditions. We also perform a sensitivity analysis to investigate the accuracy of this approach for different images with varying spatiotemporal resolution. Finally, we examine the influence of noise on the computed result, showing robustness to unbiased noise, with an average error in the simulated velocity of approximately 7% for a typical voxel size of 2 mm³ and temporal resolution of 30 ms. The methodology is then applied to a patient case to highlight the potential for a direct clinical translation.
A novel methodology for personalized simulations of ventricular hemodynamics from noninvasive imaging data
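A minimal sketch of the penalty idea described in the abstract above, in our own notation (the authors' exact functional is not given): the image-derived boundary velocity u_img is imposed weakly, and a physical solution is recovered by minimizing the energy imparted to the system,

\min_{u} \; J(u) \;=\; E_{\mathrm{imparted}}(u) \;+\; \frac{1}{2\epsilon} \int_{\partial\Omega} \left| u - u_{\mathrm{img}} \right|^{2} \, d\Gamma ,

where \epsilon > 0 is a penalty parameter: a smaller \epsilon enforces the Dirichlet data more strongly but worsens conditioning, which is precisely the trade-off the penalty formulation is meant to manage.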
S0895611116300313
Automatic anatomy recognition (AAR) methodologies for a body region require detailed understanding of the morphology, architecture, and geographical layout of the organs within the body region. The aim of this paper was to quantitatively characterize the normal anatomy of the thoracic region for AAR. Contrast-enhanced chest CT images from 41 normal male subjects, each with 11 segmented objects, were considered in this study. The individual objects were quantitatively characterized in terms of their linear size, surface area, volume, shape, CT attenuation properties, inter-object distances, size and shape correlations, size-to-distance correlations, and distance-to-distance correlations. A heat map visualization approach was used for intuitively portraying the associations between parameters. Numerous new observations about object geography and relationships were made. Some objects, such as the pericardial region, vary far less than others in size across subjects. Distance relationships are more consistent when involving an object such as trachea and bronchi than other objects. Considering the inter-object distance, some objects have a more prominent correlation, such as trachea and bronchi, right and left lungs, arterial system, and esophagus. The proposed method provides new, objective, and usable knowledge about anatomy whose utility in building body-wide models toward AAR has been demonstrated in other studies.
Quantitative normal thoracic anatomy at CT
S0920548913000172
We have developed a translation system that maps sentences of Attempto Controlled English (ACE) to predicates of many-sorted first-order logic, which can be directly imported into a logic-based policy management framework. Our translation achieves broader coverage than prior work that uses ACE, through a novel application of modern compositional semantics. The translation also natively supports question answering. A significant feature of the system is its modular architecture, which enables semi-automated porting to new policy domains. We initially developed the system for cognitive radio policies, then generalized and ported it to two other policy vocabularies. The system interoperates with policies written in the XACML language.
Modular natural language interfaces to logic-based policy frameworks
S0920548913000184
Controller Area Networks (CAN) adopt bit stuffing at the physical layer, thus introducing a frame-length variability that can worsen sensing and actuation jitter. One way to mitigate this issue is to encode the payload by means of a suitable run length limited code before transmission. In this paper, a family of these codes is defined and thoroughly analyzed from the theoretical point of view, showing its optimality within a set of performance and footprint-related constraints typical of contemporary embedded systems. Experimental results confirm that the proposed technique is amenable to an efficient and deterministic software-based implementation.
On a family of run length limited, block decodable codes to prevent payload-induced jitter in Controller Area Networks
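The mechanism at issue is easy to see in a few lines. CAN inserts a complementary bit after every run of five identical bits, so payloads with long runs grow unpredictably; a run-length-limited encoding keeps every run at four bits or fewer, making the frame length, and hence the jitter, deterministic. A minimal Python sketch of the stuffing rule (ours, not the paper's code):

def can_stuff(bits):
    """Insert a complementary bit after every run of 5 identical bits (CAN rule)."""
    out, run = [], 0
    for b in bits:
        run = run + 1 if out and b == out[-1] else 1
        out.append(b)
        if run == 5:
            out.append(1 - b)   # stuffed complement bit
            run = 1             # the stuffed bit starts a new run
    return out

raw     = [1] * 10 + [0] * 10            # long runs -> stuffed bits -> length jitter
limited = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0] * 2  # runs <= 4 -> nothing to stuff
print(len(can_stuff(raw)) - len(raw))          # > 0: frame grew
print(len(can_stuff(limited)) - len(limited))  # 0: frame length is fixed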
S0920548913000196
Human–Computer Interaction is a discipline concerned with studying the way people interact with computers. The main aim of this research area is to design, evaluate and implement computer systems that allow people to carry out their activities productively and safely. In recent years, the utilization of Natural Language Processing in the form of Natural Language Interfaces toward effective Human–Computer Interaction has received much attention. Several platforms have been developed to enable humans to interact with computers through Natural Language Interfaces, such as speech recognition systems, natural language query systems, semantic search engines and question answering systems. However, the full extent of the synergies between these two research fields is yet to be realized.
Natural Language Processing and Human–Computer Interaction
S0920548913000202
The paper presents the results of statistical analyses of ICT innovations, illustrated by global and local standardisation. The PDCA cycle and the methodology of statistical research were applied. Drawing on original research, ICT innovations were analysed for the period between 2000 and 2012 across all areas of human endeavour. Regression equations are presented as explicit mathematical relations and their applicability over time is analysed. Furthermore, the paper presents unique indices obtained by multicriteria analyses for a closer determination of ICT innovations and the creation of models of knowledge excellence. The objectives of further development are also given.
ICT innovations at the platform of standardisation for knowledge quality in PDCA
S0920548913000214
We propose an ontology-based approach to automated trust negotiation (ATN) to establish a common vocabulary for ATN across heterogeneous domains and show how ontologies can be used to specify and implement ATN systems. The components of the ATN framework are expressed in terms of a shared ontology and ontology inference techniques are used to perform ATN policy compliance checking. On this basis, a semantically relevant negotiation strategy (SRNS) is proposed that ensures the success of a negotiation whenever it is semantically possible. We analyze the properties of SRNS and evaluate the performance of the ontology-based ATN.
An ontology-based approach to automated trust negotiation
S0920548913000226
An interceptor is a generic architecture pattern that has been used to resolve specific issues in a number of application domains. Many standard platforms such as CORBA also provide interception interfaces, so that an interceptor developed for a specific application can become portable across systems running on the same platform. SOAP frameworks are commonly used platforms to build Web Services. However, there is no standard way to build interceptors portable across current SOAP frameworks, although some of them, such as Axis and XFire, provide proprietary interceptor solutions within the individual framework. In this paper, we propose the portable interceptor mechanism (PIM), consisting of a set of application programming interfaces (APIs) on the SOAP engine, a core component of a SOAP framework. An interceptor is able to receive messages passing through the SOAP framework from the SOAP engine via these APIs. Furthermore, the proposed PIM facilitates run-time lifecycle management of interceptors, a crucial feature in many application domains that is not fully supported by the CORBA standard. As a proof of concept, we implement the proposed PIM on two popular SOAP frameworks, namely Axis and XFire. We also discuss a number of implementation issues, including the performance and reliability of PIM.
A portable interceptor mechanism for SOAP frameworks
S0920548913000238
Recent advances in wireless sensor network (WSN) technologies and their incorporation with geographic information system (GIS) technologies offer vast opportunities for the development and application of environment monitoring data communication. This paper analyzes a method of predicting the location of a moving target with the Kalman filter and the Greedy-ViP approach to establish WSN flat network routing and the data management system. Simulation results demonstrate that the information collection node locations predicted by the proposed method are consistent with the majority of the real ones, the hops tend toward straight lines, the hop count is minimal, the repetition rate of nodes across different hops is low, and the environment monitoring data can be saved and queried.
Design and implementation of a monitoring and management system based on wireless sensor network hop estimation with moving target Kalman prediction and Greedy-ViP
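As a rough illustration of the prediction step used to anticipate the moving target's location, here is a textbook constant-velocity Kalman filter in Python (1D for brevity; the noise covariances are assumed, not taken from the paper):

import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # state transition over [position, velocity]
H = np.array([[1, 0]])            # only position is observed
Q = 0.01 * np.eye(2)              # process noise (assumed)
R = np.array([[0.25]])            # measurement noise (assumed)

x = np.array([[0.0], [1.0]])      # initial state estimate
P = np.eye(2)                     # initial covariance

for z in [1.1, 2.0, 2.9, 4.2]:    # noisy position measurements
    x = F @ x                     # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)          # update
    P = (np.eye(2) - K @ H) @ P

print("predicted next position:", (F @ x)[0, 0])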
S092054891300024X
Many organizations are implementing process improvement models, seeking to increase their organizational maturity for software development. However, implementing traditional maturity models involves a large investment (in money, time and resources) which is beyond the reach of the vast majority of small organizations. This paper presents the use and adaptation of some ISO models in the creation of an organizational maturity model for the Spanish software industry. This model was used satisfactorily to (i) improve the software processes of several Spanish small firms, and (ii) obtain an organizational maturity certification for software development, granted by the Spanish Association for Standardization and Certification.
A maturity model for the Spanish software industry based on ISO standards
S0920548913000251
Recently, WirelessHART (2007) and ISA100.11a (2009) have been proposed as communication standards for wireless fieldbuses. However, the performance of Wireless Networked Control Systems is hard to verify in the real world, since test beds are expensive and difficult to implement. This paper proposes the use of a co-simulation framework based on the interaction of TrueTime with a cross-layer wireless network simulator based on OMNET++. In particular, the OMNET++ models capture detailed aspects of the network and devices, improving overall coexistence management. A sample system controlled by a WirelessHART network is considered, and the control performance and coexistence immunity of WirelessHART are analyzed with respect to traditional IEEE 802.15.4.
Improving simulation of wireless networked control systems based on WirelessHART
S0920548913000263
The CSMA/CD access method is no longer invoked in switched, full-duplex Ethernet, but the industrial protocols still take the presence of the method into account. The parallel processing producer–distributor–consumer network architecture (ppPDC) was designed specifically to actively utilize the frame queuing. The network nodes process frames in parallel, which shortens the time needed to perform a cycle of communication, especially in cases when frame processing times within the nodes are not uniform. The experiments show that the achievable cycle times of the ppPDC architecture are an order of magnitude shorter than in the well-known sequential PDC protocol.
Performance evaluation of the parallel processing producer–distributor–consumer network architecture
S0920548913000275
Business-to-government integration (B2Gi) requires the development of a unique, inter-organizational integration framework to meet the dynamic requirements of various business entities and government organizations. The authors propose a conceptual framework for the inter-organizational integration service provider (IISP) as a philosophical and strategic guideline for developing inter-organizational integration. A real-world case study is discussed, with the presentation of a cost-benefit model to assess the feasibility of adopting such a business model. With the assistance of the guideline for B2Gi, it is anticipated that the proposed integration model will balance the trade-off between flexibility and controllability.
Business-to-government application integration framework: A case study of the high technology industry in Taiwan
S0920548913000287
Metadata is an important element for achieving interoperability between Learning Objects systems; it facilitates the process of describing, searching, selecting and retrieving Learning Objects. IEEE LOMv1.0 is a metadata standard for describing Learning Objects. Recently found evidence shows that the standard does not fulfill all the requirements of its users, who have therefore extended it. In order to determine the impact that extensions have on interoperability, we conducted an international study in which the use of the LOMv1.0 standard in forty-four works was analyzed. As a result, we found fifteen types of extensions implemented to the standard.
An international analysis of the extensions to the IEEE LOMv1.0 metadata standard
S0920548913000482
The present work analyzes and compares two popular standards for data transmission over power line networks: PRIME and G3. A complete and detailed description of both standards is presented together with simulation results of their performance in a power line environment. In order to create an accurate analogy of the transmission channel, background and asynchronous impulsive noises are included using previous results from the literature. Simulation results show how PRIME and G3 behave in several noisy environments. Finally, with respect to PRIME, a proposal is made to increase its performance in a channel with severe impulsive noise.
Performance evaluation of two narrowband PLC systems: PRIME and G3
S0920548913000615
Vehicular ad hoc networks (VANETs) have emerged to leverage the power of modern communication technologies, applied to both vehicles and infrastructure. Allowing drivers to report traffic accidents and violations through the VANET may lead to substantial improvements in road safety. However, being able to do so anonymously in order to avoid personal and professional repercussions will undoubtedly translate into user acceptance. The main goal of this work is to propose a new collaborative protocol for enforcing anonymity in multi-hop VANETs, closely inspired by the well-known Crowds protocol. In a nutshell, our anonymous-reporting protocol depends on a forwarding probability that determines whether the next forwarding step in message routing is random, for better anonymity, or in accordance with the routing protocol on which our approach builds, for better quality of service (QoS). Different from Crowds, our protocol is specifically conceived for multi-hop lossy wireless networks. Simulations for residential and downtown areas support and quantify the usefulness of our collaborative strategy for better anonymity, when users are willing to pay an eminently reasonable price in QoS.
A collaborative protocol for anonymous reporting in vehicular ad hoc networks
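The core of the protocol described above is a single biased coin flip per hop. A minimal Python sketch, where `neighbors`, `route_next` and `p_forward` are hypothetical names of ours:

import random

def next_hop(neighbors, route_next, p_forward):
    """Crowds-style hop selection: with probability p_forward pick a random
    neighbor (better anonymity); otherwise follow the underlying routing
    protocol's choice (better QoS)."""
    if random.random() < p_forward:
        return random.choice(neighbors)
    return route_next

hop = next_hop(neighbors=["n1", "n2", "n3"], route_next="n2", p_forward=0.7)

A higher p_forward lengthens the random detours and improves anonymity; a lower value stays closer to the underlying routing protocol and preserves QoS, which is exactly the trade-off the abstract quantifies.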
S0920548913000627
The integration of educational video games in Virtual Learning Environments (VLEs) is a challenging task in need of standardization to improve interoperability and to safeguard investment. The generalized use of VLEs has fostered the emergence of rich contents, and different standards exist to improve their interoperability and reusability. This work describes a proposal of how existing e-learning standards can be used to improve the integration of educational games in VLEs, while introducing a set of models that take into account the features of the selected standards. A specific implementation of this approach in the eAdventure game platform is also presented.
Using e-learning standards in educational video games
S0920548913000639
The early concept of the World Wide Web was a network of related (linked) documents represented in human-readable form. Ongoing development has led to another aspect of the web, the web of data, whose goal is a network that provides first-class, machine-readable data. The current network will thus be transformed from a platform that merely hosts human-readable data into a true machine-to-machine network. In this paper, we review and compare the formats, technologies and approaches that are used today for publishing semantic, machine-readable data on the web.
Analysis of approaches to structured data on the web
S0920548913000640
OPC UA is the evolution of the well known OPC COM and XML specifications. OPC UA adopts a very complex software infrastructure to realise the communication among industrial applications; furthermore it features many mechanisms realising data exchanges, whose tuning depends on several parameters. The aim of this paper is to deal with the performance evaluation of OPC UA. The main data exchange mechanisms which may influence performance of the client/server communications will be pointed out; then, the analysis of the overhead they introduce will be presented and discussed. Finally, some guidelines about the setting of OPC UA mechanisms will be given on the basis of the results achieved.
Analysis of OPC UA performances
S0920548913000652
Reliability is an important criterion to facilitate extensive deployment of web service technology for commercial business applications. Run-time monitoring and fault management of web services are essential to ensure uninterrupted and continuous availability of web services. This paper presents WISDOM (Web Service Diagnoser Model) a generic architecture for detecting faults during execution of web services. Policies have been proposed to describe the intended behavior of web services and faulty behavior would be detected as deviations or inconsistencies with respect to the specified behavior. The model proposes the use of monitoring components in service registries and service providers to detect run-time faults during publishing, discovery, binding and execution of web services. An independent fault diagnoser is proposed to coordinate the individual monitoring components and also act as a repository for the specified web service policies. The proposed model has been tested with a sample web service application and the results obtained are presented.
Web Service Diagnoser Model for managing faults in web services
S0920548913000664
This paper investigates the behavior of PESQ (Perceptual Evaluation of Speech Quality, also known as ITU-T Recommendation P.862) under independent and dependent loss conditions from a speech activity parameter perspective. The results show that an increase in the amount of speech in the reference signal (expressed by the activity parameter) may result in an increase in PESQ's sensitivity to packet loss change as well as an improvement in PESQ's prediction accuracy. On the other hand, it seems that the human brain is somewhat less sensitive than PESQ to the loss of some parts of words. The reasons for these findings are discussed in detail.
Effect of speech activity parameter on PESQ's predictions in presence of independent and dependent losses
S0920548913000676
In order to increase the quality of the systems of a financial company, the process of a software development team was changed several times before it stabilized. This paper presents the action research steps that were conducted, the team's perceptions of the process evolution, and the problems that were solved. In addition, a software process improvement assessment was conducted in order to identify the success factors of this implementation, and the results are analyzed and discussed through the Servqual method. Among other conclusions, the involvement of the team during the improvement process and clear future perspectives are crucial to achieving success.
Software process improvement in a financial organization: An action research approach
S0920548913000688
IT departments in non-IT small companies lack guidelines for defining the services they provide and for assigning costs to these services. This article compares international models and standards and describes an approach that can be used by these companies in order to define and implement their service catalog to be used as an input for their IT financial management. The proposed solution is based on the concept of a process asset library. The proposal has been tested in a non-IT small company. The results provide useful insights for companies interested in defining their own service catalog from a standard service catalog.
Building an IT service catalog in a small company as the main input for the IT financial management
S092054891300069X
Living Labs are innovation infrastructures where software companies and research organizations collaborate with lead users to design and develop new products and services. There is no reference model for the processes or practices needed to manage a living lab. This article presents a reference model to effectively manage the synergies of software companies with the other stakeholders participating in a living lab. The article describes the approach used to create the reference model through the analysis of a multiple case study covering six living labs and discusses the lessons learned during the creation of the process reference model.
A process reference model for managing living labs for ICT innovation: A proposal based on ISO/IEC 15504
S0920548913000706
Link unidirectionality is a commonly encountered phenomenon in wireless sensor networks (WSNs), arising naturally from various properties of wireless transceivers as well as the environment. Transmission power heterogeneity and random irregularities are important factors that create unidirectional links. The majority of inter-node data transfer mechanisms are designed to work on bidirectional links (i.e., due to the lack of a direct reverse path, handshaking cannot be performed between a transmitter and receiver), which renders the use of unidirectional links infeasible. Yet, there are some data transfer mechanisms designed specifically to operate on unidirectional links, which employ distributed handshaking mechanisms (i.e., instead of using a direct reverse path, a multi-hop reverse path is used for the handshake). In this study, we investigate the impact of both transmission power heterogeneity and random irregularities on the lifetime of WSNs through a novel linear programming (LP) framework, both for networks that utilize only bidirectional links and for those that can use bidirectional as well as unidirectional links.
Systematic investigation of the effects of unidirectional links on the lifetime of wireless sensor networks
S0920548913000718
To support the transformation of system engineering from the project-based development of highly customer-specific solutions to the reuse and customization of ‘system products’, we integrate a process reference model for reuse- and product-oriented industrial engineering and a process reference model extending ISO/IEC 12207 on software life cycle processes with software- and system-level product management. We synthesize the key process elements of both models to enhance ISO/IEC 15288 on system life cycle processes with product- and reuse-oriented engineering and product management practices as an integrated framework for process assessment and improvement in contexts where systems are developed and evolved as products.
Enhancing ISO/IEC 15288 with reuse and product management: An add-on process reference model
S092054891300072X
This paper proposes a QoS active queue management (AQM) mechanism for multi-QoS classes, named M-GREEN (Global Random Early Estimation for Nipping), which takes QoS parameters into account and provides service differentiation among different flows/classes. M-GREEN extends the concepts of "Random" and "Early Detection" in RED to "Global Random" and "Early Estimation," respectively. Furthermore, M-GREEN extends the "linear" concept of RED to an "exponential" one to enhance the efficiency of AQM. For performance evaluation, extensive numerical cases are employed to compare M-GREEN with some popular AQM schemes and to show the superior performance and characteristics of M-GREEN. Consequently, M-GREEN is a possible way to provide the future multimedia Internet with differentiated services for traffic classes of diverse QoS requirements.
M-GREEN: An active queue management mechanism for multi-QoS classes
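To make the "linear vs. exponential" contrast concrete, here is classic RED's linear drop probability alongside one plausible exponential variant; the exact curve used by M-GREEN is not specified in the abstract, so the second function is an assumption of ours:

import math

def red_drop(q, min_th=20, max_th=60, max_p=0.1):
    """Classic RED: drop probability grows linearly between the thresholds."""
    if q < min_th: return 0.0
    if q >= max_th: return 1.0
    return max_p * (q - min_th) / (max_th - min_th)

def exp_drop(q, min_th=20, max_th=60, max_p=0.1):
    """An assumed 'exponential' variant in the spirit of M-GREEN."""
    if q < min_th: return 0.0
    if q >= max_th: return 1.0
    x = (q - min_th) / (max_th - min_th)
    return max_p * (math.exp(x) - 1) / (math.e - 1)

for q in (20, 40, 59):
    print(q, red_drop(q), round(exp_drop(q), 4))

The exponential curve reacts gently when the queue is short and sharply as it approaches the upper threshold, which is one way such a shape can improve AQM efficiency over RED's straight line.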
S0920548913000731
Context: When dealing with improvements, organizations seek to find a break-even point as early as possible in order to maximize ROI. In some cases such a strategy can lead to long-term failures by not realizing full benefits, when focusing only on the short term. LEGO (Living Engineering Process) allows building customized process meta-models based on multiple inputs, making an organization more efficient and effective by optimizing resources, time and costs. This paper introduces elements for designing a strategy for more efficient deployments of process improvement initiatives, optimizing the choice of models and elements to be considered as input to the LEGO approach.
The LEGO strategy: Guidelines for a profitable deployment
S0920548913000743
Data-Centric Publish–Subscribe (DCPS) is an architectural and communication paradigm in which applications exchange data content through a common data space using publish–subscribe interactions. Due to its focus on data content, DCPS is especially suitable for deploying IoT systems. However, some problems must be solved to support large deployments. In this paper we define a novel extension to the IETF REsource LOcation And Discovery (RELOAD) protocol specification for providing content discovery and transfer in large-scale IoT deployments. We have conducted a set of experiments over multiple simulated networks of 500 to 10,000 nodes that demonstrate the viability, scalability, and robustness of our proposal.
RELOAD extension for data discovery and transfer in data-centric publish–subscribe environments
S0920548913000755
This paper presents the concept of hybrid semantic-document models to aid information management when using standards for complex technical domains. These standards are traditionally text based documents for human interpretation, but prose sections can often be ambiguous and can lead to discrepancies. Many organisations will produce semantic representations of the material. In developing these semantic representations, no relationship is maintained to the original prose. Maintaining the relationships has key benefits, including assessing conformance at a semantic level rather than prose, and enabling original content authors to explicitly define their intentions. This paper proposes a framework to help achieve these benefits.
Extending document models to incorporate semantic information for complex standards
S0920548913000767
A hybrid framework integrating conjoint analysis (CA) with quality function deployment (QFD) is presented to incorporate customer preferences into the process of product development. In particular, the proposed framework consists of two sequential phases, namely concept generation based on CA and prototype evaluation based on QFD. In addition, product features are characterized by customer requirements (CRs) and functional attributes (FAs). By means of DEMATEL (Decision Making Trial and Evaluation Laboratory), the impacts of FAs on CRs are systematically identified to visualize their causalities. Rather than utilizing FAs directly, TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is employed to assess potential prototypes in terms of CRs.
Integrating conjoint analysis with quality function deployment to carry out customer-driven concept development for ultrabooks
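TOPSIS, used above for prototype evaluation, is a standard procedure: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. A self-contained Python sketch with illustrative numbers (not the paper's data):

import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) against criteria (columns).
    w: criteria weights; benefit[j] is True if higher is better for criterion j."""
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
    V = R * w                                         # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                    # closeness: higher = better

# three prototypes scored on three customer requirements (illustrative)
scores = topsis(np.array([[7., 9., 8.], [8., 7., 8.], [9., 6., 9.]]),
                w=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, True]))
print(scores.argsort()[::-1])   # prototype indices, best first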
S0920548913000779
A multitude of Internet protocols are developed in the Internet Engineering Task Force to solve the challenges with the existing protocols and to fulfill the requirements of emerging application areas. However, most of them fail to achieve their goals due to limited adoption. A significant reason for non-adoption seems to be that the potential adopters' incentives for adoption are not understood and taken into account during the protocol development. This paper addresses this problem by developing a conceptual framework for analyzing the techno-economic feasibility of Internet protocols already during their development. The framework is based on the experiences collected during several protocol case studies and an extensive literature review. It focuses on analyzing the economic incentives of the relevant stakeholders and also takes into account the deployment environment including the competing solutions. The framework is accompanied by a research method toolbox that introduces practical tools for applying the framework. Finally, the application of the framework is demonstrated with Multipath TCP case study. The usage of the suggested framework can help protocol developers to identify the potential deployment challenges and opportunities of emerging protocols and thus increase the likelihood of adoption. Moreover, potential adopters can use the framework as a supporting tool for making adoption decisions.
Techno-economic feasibility analysis of Internet protocols: Framework and tools
S0920548913000780
Developing safety critical software is a complex process. Due to the fact that medical device software failure can lead to catastrophic consequences, numerous standards have been developed which govern software development in the medical device domain. Risk management has an important role in medical device software development as it is important to ensure that safe software is developed. Demonstrating traceability of requirements right throughout the medical device software development and maintenance lifecycles is an important part of demonstrating that ‘safe’ software has been produced through adopting defined processes. Consequently, medical device standards and guidelines emphasise the need for traceability. This paper outlines the extent and diversity of traceability requirements within medical device standards and guidelines, and identifies the requirements for traceability through each phase of the software development lifecycle. The paper also summarises the findings obtained when a lightweight assessment method (Med-Trace), which we created, based upon the traceability practices within these standards, was implemented in two SME organisations. Finally we highlight how the findings indicate a lack of guidance as to what is required when implementing and maintaining a traceability process.
Medical device standards' requirements for traceability during the software development lifecycle and implementation of a traceability assessment model
S0920548913000792
This paper proposes a mobility management solution for IPv6-based vehicular networks. First, an architecture based on vehicle domains is proposed in order to reduce the mobility handover frequency and delay. On top of this architecture, a distributed address configuration algorithm is proposed, with which a vehicle can establish a routing path to the nearest AP (Access Point) and achieve multi-hop communication with the Internet. Finally, the mobility management solution is built on this routing algorithm. The results show that the solution shortens the mobility handover delay and lowers the packet loss.
Mobility management solution for IPv6-based vehicular networks
S0920548913000810
One of the critical security issues of Vehicular Ad Hoc Networks (VANETs) is the revocation of misbehaving vehicles. While essential, revocation checking can leak potentially sensitive information. Road Side Units (RSUs) receiving the certificate status queries could infer the identity of the vehicles posing the query. An important loss of privacy results from the RSUs' ability to tie the checking vehicle to the query's target. We propose a Privacy Preserving Revocation mechanism (PPREM) based on a universal one-way accumulator. PPREM provides explicit, concise, authenticated and unforgeable information about the revocation status of each certificate while preserving the users' privacy.
PPREM: Privacy Preserving REvocation Mechanism for Vehicular Ad Hoc Networks
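For intuition, here is a toy one-way accumulator over an RSA modulus. This is illustrative only: PPREM uses a universal accumulator, which additionally supports non-membership witnesses, whereas this sketch shows just the membership side, with tiny numbers instead of cryptographic parameters:

N = 3233                      # toy modulus (61 * 53); real ones are large
g = 2                         # base
revoked = [3, 5, 11]          # primes standing in for revoked certificate IDs

acc = g
for p in revoked:
    acc = pow(acc, p, N)      # fold every revoked certificate into one value

def witness(x):
    """Membership witness for x: accumulate everything except x."""
    w = g
    for p in revoked:
        if p != x:
            w = pow(w, p, N)
    return w

x = 5
assert pow(witness(x), x, N) == acc   # x is accumulated -> certificate revoked

The verifier checks one exponentiation against the published accumulator value, so the revocation status is concise and unforgeable without the RSU learning which other certificates the vehicle cares about.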
S0920548913000822
In the area of eLearning, the automation of processes from their specifications is still unfeasible. The reason may be the lack of standards and specifications with which learning processes can be specified unambiguously. The work presented in this paper tries to solve this problem with two main contributions: the creation of an OKI-OSID metamodel to specify learning processes unambiguously, and the creation of a system that automatically creates partial implementations of learning processes from the OKI-OSID metamodel. Furthermore, an implementation of the method is provided, as well as some insights into the problems found in OKI-OSID and the underspecification these shortcomings produce.
Automating educational processes implementation by means of an ontological framework
S0920548913000834
OpenVG and SVG Tiny are the most widely used de facto standards for accelerating 2D vector graphics output on various embedded systems, including smart phones and tablet PCs. In this paper, we present a cost-effective way of simultaneously accelerating both OpenVG and SVG Tiny, based on the multimedia-processing hardware that is widespread on mobile phones and media devices. Through the effective use of these multimedia processors, we successfully accelerated OpenVG and SVG Tiny by factors of at least 3.5 and up to 30, with less power consumption and lower CPU usage.
Simultaneously accelerating OpenVG and SVG tiny with multimedia hardware
S0920548913000846
This paper presents a comparison study between the TLS-based security for DSMIPv6 and IKEv2 when establishing Security Associations between MN and HA. The network transmission and processing costs are examined for each protocol using different authentication methods. The results show that the TLS-based solution has a lower computation cost and shorter authentication delay than IKEv2 with D–H Groups 5 and 14. However, the large amount of transmitted data for certificate-based authentication increases the authentication delay in low-bandwidth wireless networks.
A comparison study between the TLS-based security framework and IKEv2 when protecting DSMIPv6 signaling
S0920548913000858
Collecting metrics and indicators to objectively assess the different products resulting from the lifecycle of a software project is a research area that encompasses many different aspects, and one that is highly demanded by companies and software development teams. Focusing on software products, one of the methods most used by development teams for measuring Internal Quality is the static analysis of the source code. This paper follows this line of work and presents a study of the state-of-the-art open source software tools that automate the collection of these metrics, particularly for developments in Java. These tools have been compared according to certain criteria defined in this study.
Open source tools for measuring the Internal Quality of Java software products. A survey
S092054891300086X
The development of connected mobile applications is a complex task due to device diversity. Therefore, device-independent approaches are aimed at hiding the differences among the distinct mobile devices in the market. This work proposes DIMAG, a software framework to generate connected mobile applications for multiple software platforms, following a declarative approach. DIMAG provides transparent data and state synchronization between the server and the client side applications. The proposed platform has been designed making use of existing standards, extending them when a required functionality is not provided.
Using standards to build the DIMAG connected mobile applications framework
S0920548913000871
In the near future, home networks are expected to become an important part of the user's ubiquitous environment. However, how to provide service discovery and multimedia services in such networks is a great challenge. In this paper, we propose an extension header, referred to as the "MediaService" header, for the Session Initiation Protocol to provide video streaming service; streaming control and session mobility functions are also considered in the MediaService header. We also propose a peer-to-peer Service Location Protocol architecture that lets users search for the location of services across domains, adding a Substitute Request Message and a cache policy to the Service Location Protocol for this purpose. A prototype implementation demonstrates the performance of our design.
Location service and session mobility for streaming media applications in home networks
S0920548913000883
This paper introduces an Educational Modeling Language (EML) to support the computational modeling of learning units. These computational models can be processed by suitable e-learning systems to support the authoring and delivery of learning experiences. Nevertheless, the modeling of learning units is a complex endeavor, involving issues of expressiveness, reusability, adaptability and flexibility. This paper proposes the Perspective-oriented EML (PoEML) to simplify this complexity. In PoEML, the model of a learning unit is not a single piece, as in current EMLs, but is made up of several perspectives, each focused on non-overlapping specific issues.
PoEML: Modeling learning units through perspectives
S0920548913000895
Recently, Voice over Internet Protocol (VoIP) has become one of the more popular applications in Internet technology. For VoIP and other IP applications, issues surrounding the Session Initiation Protocol (SIP) have received significant attention. SIP is a widely used signaling protocol capable of operating on Internet Telephony, typically using the Hyper Text Transport Protocol (HTTP) digest authentication protocol. Authentication is becoming increasingly crucial because the server is accessed whenever a user requests SIP services. In this paper, we concentrate on the security flaws in the current SIP authentication procedure. We propose a secure ECC-based authentication mechanism that withstands many forms of attack affecting previous schemes. By a thorough analysis of the security of the ECC-based protocol, we show that it is suitable for applications with higher security requirements.
Robust smart card secured authentication scheme on SIP using Elliptic Curve Cryptography
S0920548913000913
In this article, future communication environments are derived from an analysis of the impacts of information and communication technology and of social service aspects. Starting from the concept of "Smart Ubiquitous Networks" (SUN), a short-term realization of Future Networks in ITU-T, this article presents frameworks for SUN with context awareness and smart resource management. As challenges, we propose methodologies and operational processes to support context awareness and a new fine granularity of traffic for smart resource management. Finally, we illustrate a use case of SUN in a smart city to show how SUN capabilities contribute to building smart and ubiquitous communication environments.
Smart Ubiquitous Networks for future telecommunication environments
S0920548913000925
The paper presents part of a study on collective knowledge and innovations in IT, as well as an extract from a comparative statistical analysis of trends in the global and local standardisation of the pathways of knowledge and IT innovations in IT applications. The aim of the paper is to provide and promote educational and financial resources for the quality of knowledge in IT applications. ISO (global) and SRPS (local) documents on IT and IT applications have been extracted from this statistical sample and analysed. The main results of the research are presented following the phases of the PDCA methodology.
Innovation and knowledge trends through standardisation of IT applications
S0920548913000937
The Petri Net Markup Language (PNML) is originally an XML-based interchange format for Petri nets. Individual companies may specify their process models as Petri nets and exchange them with other companies in PNML. This paper aims to demonstrate the capabilities of PNML in the development of applications, rather than as an industrial interchange format only. In this paper, we apply PNML to develop context-aware workflow systems. In the existing literature, different methodologies for the design of context-aware systems have been proposed; however, workflow models have not been considered in these methodologies. Our interest in this paper is to propose a methodology to automatically generate context-aware action lists for users and to effectively control resource allocation based on the state of the workflow systems. To achieve these objectives, we first propose Petri net models to describe the workflows. Next, we propose models to capture resource activities. Finally, the interactions between workflows and resources are combined to obtain a model of the whole process. Based on the combined model, we propose an architecture to automatically generate context-aware graphical user interfaces to guide the users and control resource allocation in workflow systems. We demonstrate our design methodology using a health care example.
Development of context-aware workflow systems based on Petri Net Markup Language
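The execution semantics underlying such workflow models is compact enough to sketch: a transition fires when all its input places hold enough tokens, consuming them and producing tokens in its output places. A minimal Python executor with a hypothetical health-care step (names are ours, not the paper's):

def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# hypothetical step: an idle nurse (resource) admits a waiting patient (workflow)
t_admit = {"in":  {"patient_waiting": 1, "nurse_idle": 1},
           "out": {"patient_admitted": 1, "nurse_busy": 1}}
m = {"patient_waiting": 2, "nurse_idle": 1}
m = fire(m, t_admit)
print(m)   # one patient admitted, the nurse is now busy

Coupling workflow places with resource places in this way is what lets the combined model both drive the context-aware action lists and gate resource allocation on the system state.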
S0920548913000949
Intelligent Transportation Systems (ITSs) make use of advanced detection, communications, and computing technology to improve the safety and efficiency of surface transportation networks. An ITS incorporates a variety of equipment and devices all working in mutual harmony. However, each piece of equipment or device has its own data format and protocol so they cannot exchange data with each other directly. In this paper, a platform of data exchange in an ITS is proposed that can receive data from several types of equipment external to automobiles, repackage the received data, and then dispatch the data to different devices inside the vehicles.
An integrated data exchange platform for Intelligent Transportation Systems
S0920548913000950
WMNs (Wireless Mesh Networks) are a new wireless broadband network structure based completely on IP technologies, and they have rapidly become a broadband access technology offering high capacity, high speed and wide coverage. Trusted handoff in WMNs requires that mobile nodes complete access authentication not only with a short delay, but also with security protection for the mobile nodes as well as the handoff network. In this paper, we propose a trusted handoff protocol based on several technologies, such as a hierarchical network model, ECC (Elliptic Curve Cryptography), trust evaluation and gray relevance analysis. In the protocol, the mobile platform's configuration must be measured before access to the handoff network can proceed, and only those platforms whose configuration meets the security requirements are allowed to access the network. We also verify the security properties through formal analysis based on an enhanced Strand model, and we evaluate the performance of the proposed protocol through simulation, showing that our protocol is more advantageous than the EMSA (Efficient Mesh Security Association) authentication scheme in terms of success rate and average delay.
An access authentication protocol for trusted handoff in wireless mesh networks
S0920548913000962
Current XML editors do not provide conceptual modeling for XLink. This leads to inefficient development processes and low potential for reuse. To address these shortcomings, this study presents a Model Driven Architecture (MDA) approach with a UML profile to build XLink applications for various domains. This investigation demonstrates how users can use the UML profile to provide conceptual and visual modeling for XLink applications, and to automatically generate different XLink-based documents for various domains. The proposed methodology enables Web-based system developers to generate relationships between resources, and to improve software quality by adopting software engineering techniques in XML development.
MDA-based visual modeling approach for resources link relationships using UML profile
S0920548913000974
OpenID is an open standard providing a decentralized authentication mechanism to end users. It is based on a unique URL (Uniform Resource Locator) or XRI (Extensible Resource Identifier) as the identifier of the user. Using a single identifier gives this approach interesting added value when users want access to different services on the Internet, since users do not need to create a new account on every website they visit. However, OpenID providers are normally used as a point to store certain personal attributes of end users too, which might be of interest to any service provider willing to profit from collecting that personal information. The definition of a reputation management solution integrated as part of the OpenID protocol can help users determine whether a given service provider is more or less reliable before interacting with it and transferring their private information. This paper provides the definition of a reputation framework that can be applied to the OpenID SSO (Single Sign-On) standard solution. It also defines how the protocol itself can be enhanced so that OpenID providers can collect (and provide) recommendations from (to) users regarding different service providers, thus enhancing the users' experience with OpenID. In addition to the definition, a set of tests has been performed validating the feasibility of the framework.
Towards the integration of reputation management in OpenID
S0920548913000986
The introduction of communication services in demanding ITS scenarios strongly relies on the existence of technologies that enable mobility and security. ITS-related standardization bodies, mainly ISO and ETSI, are actively producing and developing new specifications in this regard. In this paper, we study those ITS standards related to security and communication efficiency and analyze the suitability of our NeMHIP protocol for ITS scenarios. NeMHIP provides secure mobility while at the same time constituting a framework to protect user data and services. In addition, despite being based on the introduction of a new namespace, its introduction into the current Internet architecture is considered affordable. Since user satisfaction is essential for a new technology to be accepted in a given scenario, we have also analytically assessed the efficiency of our approach. Specifically, we analyze and compare the handover signaling delay with the standardized NEMO BS protocol, showing that our approach provides satisfactory results and outperforms it in specific cases. Moreover, we present results obtained by means of a simulation tool, showing that the QoS requirements of the demanding video streaming application are fulfilled. All of these features make our approach a candidate for consideration by standardization organizations and a valuable facility for ensuring secure and efficient communications in ITS.
A proposal to contribute to ITS standardization activity: A valuable network mobility management approach
S0920548913000998
In C2C communication, all necessary information must be collected promptly when a buyer and a seller communicate. That is, an intelligent C2C agent is needed to provide information to buyers and sellers. Along with the evolution of computing technology, C2C agents can exploit the efficient delivery capabilities of peer-to-peer (P2P) technology. However, P2P also increases traffic between agents, and communication faults are a fatal problem for C2C business. This study proposes a robust communication architecture based on current P2P content-delivery standards; its efficiency and robustness have been verified experimentally.
Efficient communication architecture for the C2C agent
S0920548913001232
This paper presents a novel partition-based fuzzy median filter for noise removal from corrupted digital images. The proposed filter is obtained as the weighted sum of the current pixel value and the output of the median filter, where the weight is set by using fuzzy rules concerning the state of the input signal sequence to indicate to what extent the pixel is considered to be noise. Based on the adaptive resonance theory, the authors developed a neural network model and created a new weight function where the neural network model is employed to partition the observation vector. In this framework, each observation vector is mapped to one of the M blocks that form the observation vector space. The least mean square (LMS) algorithm is applied to obtain the optimal weight for each block. Experiment results have confirmed the high performance of the proposed filter in efficiently removing impulsive noise and Gaussian noise.
Partition-based fuzzy median filter based on adaptive resonance theory
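The filter output described above is a convex combination of the current pixel and the window median, with a per-block weight trained by LMS. A Python sketch of those two pieces (the ART-based partitioning of observation vectors into M blocks is omitted, and the step size `mu` is an assumed value, not the paper's):

import numpy as np

def filter_pixel(window, w):
    """Weighted sum of the centre pixel and the window median; w ~ the degree
    to which the fuzzy rules judge the pixel to be noise."""
    x = window[len(window) // 2]
    return (1 - w) * x + w * np.median(window)

def lms_update(w, window, desired, mu=1e-5):
    """One LMS step on a block's weight (small mu for 8-bit pixel scale)."""
    x = window[len(window) // 2]
    med = np.median(window)
    err = desired - ((1 - w) * x + w * med)
    # d(err^2)/dw is proportional to -err * (med - x)
    return w + mu * err * (med - x)

w = 0.5
for _ in range(5):
    w = lms_update(w, [10, 12, 200, 11, 13], desired=12)
# w drifts toward 1: an impulse-corrupted centre pixel should follow the median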
S0920548913001244
This paper presents an analysis of the relation between IP channel characteristics and final voice transmission quality. The NISTNet emulator is used for adjusting the IP channel network. The transmission quality criterion is an MOS parameter investigated using the ITU-T P.862 PESQ, future P.863 POLQA and P.563 3SQM algorithms. Jitter and packet loss influence are investigated for the PCM codec and the Speex codec.
Evaluation of objective speech transmission quality measurements in packet-based networks
S0920548913001268
The abundance of mobile software applications (apps) has created a security challenge. These apps are widely available across all platforms for little to no cost and are often created by small companies and less-experienced programmers. The lack of development standards and best practices exposes the mobile device to potential attacks. This article explores not only the practices that should be adopted by developers of all apps, but also those practices the enterprise user should demand of any app that resides on a mobile device that is employed for both business and private uses.
A standard for developing secure mobile applications
S092054891300127X
Flexibility, maintainability and evolvability are very desirable properties for modern automation control systems. In order to achieve these characteristics, modularity is regarded as an important concept in several scientific domains. The reuse of modules facilitates the reproduction of functionality, or extensions of existing systems in similar environments. However, it is often necessary to ‘prepare’ such an environment to be able to reuse the intended programmed functionality. In an IEC 61131-3 environment, cross-vendor reuse of modules is problematic due to dependencies in proprietary programming environments and existing configurations. In this paper, we aim to enable cross-vendor reuse of modules by controlling these dependencies. Our approach is based on the Normalized Systems Theory, from which we derived three guidelines for the design of reusable modules in an IEC 61131-3 environment for automation control projects. These guidelines are intended to support programmers in controlling dependencies, regardless of the commercial programming environment they work with.
Deriving guidelines for cross-vendor reuse of IEC 61131-3 modules based on Normalized Systems theorems
S0920548913001281
Wide-area situational awareness for critical infrastructure protection has become a topic of interest in recent years. As part of this interest, we propose in this paper a smart mechanism to: control real states of the observed infrastructure from anywhere and at any time, respond to emergency situations and assess the degree of accuracy of the entire control system. Particularly, the mechanism is based on a hierarchical configuration of sensors for control, the ISA100.11a standard for prioritization and alarm management, and the F-Measure technique to study the level of accuracy of a sensor inside a neighborhood.
Diagnosis mechanism for accurate monitoring in critical infrastructure protection
S0920548913001293
As eXtensible Markup Language (XML) becomes a widespread data representation and exchange format for Web applications, safeguarding the privacy of data represented in XML documents is indispensable. In this paper, we propose an XML privacy protection model that separates structure and content, with cloud storage holding the content information and a Trusted Third Party (TTP) helping to manage the structure information. To protect data privacy more effectively, we create different Document Type Definition (DTD) views for different users according to the users' privacy practices and the provider's privacy preferences. To further speed up the process of gaining access to data, we adopt the start–end region encoding scheme to encode the nodes in the XML document and DTD views. The experimental results show that this mechanism performs well in both space and time.
XML privacy protection model based on cloud storage
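The start–end region encoding mentioned above labels each node with an interval from a depth-first traversal, so ancestor–descendant tests reduce to interval containment. A small Python sketch with a hypothetical document:

def encode(node, counter=None, labels=None):
    """Assign (start, end) region labels to every node via a DFS walk."""
    counter = counter if counter is not None else [0]
    labels = labels if labels is not None else {}
    counter[0] += 1
    start = counter[0]
    for child in node.get("children", []):
        encode(child, counter, labels)
    counter[0] += 1
    labels[node["tag"]] = (start, counter[0])
    return labels

def is_ancestor(a, b, labels):
    """a is an ancestor of b iff a's region strictly contains b's."""
    sa, ea = labels[a]
    sb, eb = labels[b]
    return sa < sb and eb < ea

doc = {"tag": "patient", "children": [
    {"tag": "name"},
    {"tag": "record", "children": [{"tag": "diagnosis"}]}]}
labels = encode(doc)
print(is_ancestor("patient", "diagnosis", labels))   # True
print(is_ancestor("name", "diagnosis", labels))      # False

Because the test needs only two integer comparisons per node pair, structural queries over the separately stored structure information stay cheap, which is where the reported time savings come from.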
S092054891300130X
Food consumption data are collected and used in several fields of science. The data are often combined from various sources and interchanged between different systems. There is, however, no harmonized and widely used data interchange format. In addition, food consumption data are often combined with other data such as food composition data. In the field of food composition, successful harmonization has recently been achieved by the European Food Information Resource Network, which is now the basis of a standard draft by the European Committee for Standardization. We present an XML-based data interchange format for food consumption based on work and experiences related to food composition. The aim is that the data interchange format will provide a basis for wider harmonization in the future.
Towards harmonized data interchange in food consumption data
S0920548913001323
A mobile ad hoc network (MANET) is a special type of wireless network in which a collection of mobile nodes with wireless network interfaces may form a temporary network without the aid of any fixed infrastructure. Security has become a hot research topic in mobile ad hoc networks. In 1998, Volker and Mehrdad proposed a tree-based key management and access control scheme for mobile agents to manage the rights of visited mobile nodes to access the agents' own resources. Later, Huang et al. showed that Volker and Mehrdad's scheme requires a large amount of storage and high costs for managing and storing secret keys. Huang et al. further proposed a new and efficient scheme based on elliptic curve cryptosystems to reduce costs and gain better efficiency. However, there is a security leak inherent in Huang et al.'s scheme by which a malicious node can overstep its authority to access unauthorized information. This paper proposes a secure, robust, and efficient hierarchical key management scheme for MANETs. Some practical issues and solutions concerning dynamic key management are also considered and proposed. Compared with Huang et al.'s scheme, our proposed scheme provides better security assurance while requiring a smaller key size, lower computational complexity, and constant key management costs, independent of the number of confidential files and visited nodes.
Improved migration for mobile computing in distributed networks
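The hierarchy idea behind such schemes can be illustrated generically: if each node's key is derived one-way from its parent's key and a public label, then holding a key grants access to the whole subtree below it and nothing else. The sketch below uses HMAC-SHA256 rather than the elliptic-curve construction of the proposed scheme, and the tree labels are hypothetical:

import hmac, hashlib

def child_key(parent_key, label):
    """Derive a child's key from its parent's key and a public label; a node
    holding one key can re-derive every key in its subtree, but HMAC's
    one-wayness prevents walking back up or across to siblings."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

root   = b"\x00" * 32                  # root secret held by the top node
k_dept = child_key(root, "dept-A")     # hypothetical intermediate node
k_file = child_key(k_dept, "file-3")   # hypothetical leaf (confidential file)
# Holding k_dept suffices to open file-3; holding only k_file reveals
# nothing about sibling files or about k_dept itself.

This also hints at why per-node storage can stay constant: a node stores one key for its position in the tree instead of one key per confidential file or visitor.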
S0920548913001335
Software engineering standards developed under the auspices of ISO/IEC JTC1's SC7 have been identified as employing terms whose definitions vary significantly between standards. This led to a request in 2012 to investigate the creation of an ontological infrastructure that aims to be a single coherent underpinning for all SC7 standards, present and future. Here, we develop that necessary infrastructure prior to its adoption by SC7 and its implementation (likely 2014). The proposal described here requires, firstly, the identification of a single comprehensive set of definitions, the definitional elements ontology (DEO). For the scope of an individual standard, only a subset of these definitional elements will be needed. Once configured, this definitional subset creates a configured definitional ontology or CDO. Both the DEO and the CDO are essentially foundational ontologies from which a domain-specific ontology known as a SDO or standard domain ontology can be created. Consequently, all such SDOs are conformant to a CDO and hence to the single DEO thus ensuring that all standards use the same ontological base. Standards developed in this fashion will therefore be not only of a higher quality but also, importantly, interoperable.
An ontology for ISO software engineering standards: 1) Creating the infrastructure
S0920548913001347
Learning to rank has received great attention in the field of text retrieval for several years. However, few researchers have introduced the topic into visual reranking, owing to the special nature of image representation. In this paper, a novel unsupervised visual reranking method is proposed, termed ranking via convolutional neural networks (RankCNN). This approach integrates deep learning with pseudo preference feedback. The optimal set of pseudo preference pairs is first detected from the initial list by a modified graph-based method. Ranking is then reduced to pairwise classification in the architecture of a CNN. In addition, Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (ASDCA) is introduced into the framework to accelerate training. The experiments indicate competitive performance on the LETOR 4.0, Paris and France landmark datasets.
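To make the pairwise reduction concrete, here is a minimal sketch assuming PyTorch: a tiny CNN scores each image, and pseudo preference pairs are trained with a margin ranking loss. The architecture, the loss, and the use of plain SGD in place of the ASDCA solver are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of reducing ranking to pairwise classification with a
# CNN scorer; hyperparameters and the optimizer are illustrative choices.
import torch
import torch.nn as nn

class ScoreCNN(nn.Module):
    """Tiny CNN that maps an image to a scalar relevance score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> 16 features
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = ScoreCNN()
loss_fn = nn.MarginRankingLoss(margin=1.0)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Pseudo preference pairs: img_pos should be ranked above img_neg.
img_pos = torch.randn(8, 3, 32, 32)   # stand-ins for real image batches
img_neg = torch.randn(8, 3, 32, 32)
target = torch.ones(8)                # +1 means "first input ranks higher"

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(img_pos), model(img_neg), target)
    loss.backward()
    opt.step()
```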
RankCNN: When learning to rank encounters the pseudo preference feedback
S0920548913001359
Organizations worldwide have made considerable efforts to replace their legacy information applications with ERP (Enterprise Resource Planning) solutions. However, a suitable system implementation does not guarantee successful ERP adoption. Success also depends on the modifications carried out in the system during ERP maintenance. The implementation process of these changes is risky and may negatively affect application performance. Therefore, practitioners must handle the existing ERP maintenance risks in order to preserve system performance. Hence, we propose an innovative and flexible technique for modeling the effects of maintenance risks on ERP performance.
Modeling maintenance projects risk effects on ERP performance
S0920548913001360
This paper presents a new service for CORBA applications that orchestrates the timely execution of the tasks of a distributed real-time system in a flexible way. It follows the CORBA philosophy of complementing the CORBA standard with additional services that solve specific problems and facilitate using CORBA in complex applications. The service has been designed for highly coupled applications that execute over LANs. It provides a synchronous framework for synchronizing distributed applications that is open to accepting and removing components on-line, with reduced impact on the application timing. It also provides the flexibility to use different distributed scheduling policies that can override the local operating systems' schedulers. This paper describes the service architecture and implementation, as well as its best-case performance on hardware with low computing power running the QNX OS and connected to a switched Ethernet network. Finally, the use of the service is illustrated with a case study: the synchronization of several robots in a welding process.
A flexible time-triggered service for real-time CORBA
S0920548913001608
A key technique in network security inspection is regular expression matching, which locates the specific fingerprints of networking applications or attacks in packet flows and accordingly identifies the underlying applications or attacks. However, due to the surge of networking applications and attacks in recent years, ever more fingerprints need to be investigated in this process, which leads to a high demand for memory space for regular expression matching. In addition, with the frequent upgrading of network links, the network flow rate has also increased dramatically, demanding correspondingly fast regular expression matching to sustain the throughput needed for network inspection. However, because the space of fast memory is limited, the requirements of fast operation and large memory space conflict. To address this challenge, in this paper we propose the use of hybrid memory for regular expression matching. Specifically, by investigating the transition-table state access probability through Markov theory, it can be observed that some states are accessed much more frequently than others. Therefore, we devise a matching engine suitable for FPGA implementation with two-level memories, where the first level uses the on-chip memory of the FPGA to cache the frequently accessed state transitions, and the second level, composed of slow and cheap DRAM, stores the whole set of state transitions. Furthermore, the L7-filter's regular expression patterns have been applied to obtain the state access probabilities, and different memory assignment quantities have been investigated to evaluate the throughput.
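A minimal sketch of the two-level lookup idea, assuming the DFA transition table is already built: a small dictionary stands in for the on-chip cache of hot states and a large one for the DRAM-resident full table. The example DFA and hot-state choice are hypothetical.

```python
# Two-level transition lookup: transitions of frequently visited states
# sit in a small "fast" dict (standing in for on-chip memory),
# everything else in a large "slow" table (standing in for DRAM).
class HybridDFA:
    def __init__(self, transitions, hot_states):
        # transitions: {(state, symbol): next_state}
        self.slow = transitions
        self.fast = {k: v for k, v in transitions.items()
                     if k[0] in hot_states}
        self.fast_hits = 0
        self.slow_hits = 0

    def step(self, state, symbol):
        key = (state, symbol)
        if key in self.fast:          # first-level (cached) lookup
            self.fast_hits += 1
            return self.fast[key]
        self.slow_hits += 1           # second-level (DRAM) lookup
        return self.slow.get(key, 0)  # 0 = start/dead state

# DFA for the pattern "ab" over {a, b}; state 2 is accepting.
t = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
     (2, "a"): 1, (2, "b"): 0}
dfa = HybridDFA(t, hot_states={0})    # cache the most-visited state
s = 0
for c in "aababb":
    s = dfa.step(s, c)
print(dfa.fast_hits, dfa.slow_hits)   # cache hits vs. slow-memory hits
```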
A regular expression matching engine with hybrid memories
S0920548913001785
We describe a data management solution and associated key management approaches to provide accountability within service provision networks, in particular addressing privacy issues in cloud computing applications. Our solution involves machine readable policies that stick to data to define allowed usage and obligations as data travels across multiple parties. Service providers have fine-grained access to specific data based on agreed policies, enforced by interactions with independent third parties that check for policy compliance before releasing decryption keys required for data access. We describe alternative solutions based upon Public Key Infrastructure (PKI), Identity Based Encryption (IBE) and advanced secret sharing schemes.
End-to-end policy based encryption techniques for multi-party data management
S0920548913001815
Business process modelling and security engineering are two important concerns when developing information systems. However, current practice reports that security is addressed at later development stages (i.e., design and implementation). This raises the question of whether business processes are performed securely. In this paper, we propose a method to introduce security requirements into business processes through collaboration between business and security analysts. To support this collaboration we present a set of security risk-oriented patterns. We test our proposal on two industrial business models. The case findings characterise pattern performance when identifying business assets, risks, and countermeasures.
Securing business processes using security risk-oriented patterns
S0920548913001827
Security is one of the most essential quality attributes of distributed systems, which often operate over untrusted networks such as the Internet. To incorporate security features during the development of a distributed system requires a sound analysis of potential attacks or threats in various contexts, a process that is often termed "threat modeling". To reduce the level of security expertise required, threat modeling can be supported by threat libraries (structured or unstructured lists of threats), which have been found particularly effective in industry scenarios; or attack taxonomies, which offer a classification scheme to help developers find relevant attacks more easily. In this paper we combine the values of threat libraries and taxonomies, and propose an extensible, two-level "pattern-based taxonomy" for (general) distributed systems. The taxonomy is based on the novel concept of a threat pattern, which can be customized and instantiated in different architectural contexts to define specific threats to a system. This allows developers to quickly consider a range of relevant threats in various architectural contexts as befits a threat library, increasing the efficacy of, and reducing the expertise required for, threat modeling. The taxonomy aims to classify a wide variety of more abstract, system- and technology-independent threats, which keeps the number of threats requiring consideration manageable, increases the taxonomy's applicability, and makes it both more practical and more useful for security novices and experts alike. After describing the taxonomy which applies to distributed systems generally, we propose a simple and effective method to construct pattern-based threat taxonomies for more specific system types and/or technology contexts by specializing one or more threat patterns. This allows for the creation of a single application-specific taxonomy. We demonstrate our approach to specialization by constructing a threat taxonomy for peer-to-peer systems.
An extensible pattern-based library and taxonomy of security threats for distributed systems
S0920548913001840
One of the major research challenges for the successful deployment of cloud services is a clear understanding of security and privacy issues on a cloud environment, since cloud architecture has dissimilarities compared to traditional distributed systems. Such differences might introduce new threats and require a different treatment of security and privacy issues. It is therefore important to understand security and privacy within the context of cloud computing and identify relevant security and privacy properties and threats that will support techniques and methodologies aimed to analyze and design secure cloud based systems.
Towards the design of secure and privacy-oriented information systems in the cloud: Identifying the major concepts
S0920548913001852
In model-based development, quality properties such as the consistency of security requirements are often verified prior to code generation. Changed models have to be re-verified before re-generation. If several alternative evolutions of a model are possible, each alternative has to be modeled and verified to find the best model for further development. We present a verification strategy to analyze whether evolution preserves given security properties. The UMLchange profile is used for specifying potential evolutions of a given model simultaneously. We present a tool that reads these annotations and computes a delta containing all possible evolution paths. The paths can be verified with respect to security properties, and for each successfully verified path a new model version is generated automatically.
Specifying model changes with UMLchange to support security verification of potential evolution
S0920548913001864
Trust is an essential feature of any system where entities have to collaborate. Trust can assist entities in making decisions before establishing collaborations. It is desirable to simulate the behaviour of users as in social environments, where they tend to trust users who have common interests or share some of their opinions, i.e., users similar to them. In this paper, we introduce the concept of context similarity among entities and derive a similarity network. Then, we define a trust model that allows us to establish trust along a path of entities. We validate our model in a proximity-based trust establishment scenario.
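As an illustration, the sketch below computes context similarity as the cosine of interest vectors and propagates trust multiplicatively along a path of entities; both choices are plausible stand-ins, not necessarily the model defined in the paper.

```python
# A minimal sketch, assuming cosine similarity over context vectors and
# multiplicative trust propagation along a path; the context data are
# made up for illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Context vectors: e.g., interest weights per topic for each entity.
context = {"A": [1.0, 0.8, 0.0], "B": [0.9, 0.7, 0.1], "C": [0.0, 0.2, 1.0]}

def path_trust(path):
    """Trust along a path as the product of pairwise similarities."""
    t = 1.0
    for x, y in zip(path, path[1:]):
        t *= cosine(context[x], context[y])
    return t

print(path_trust(["A", "B", "C"]))  # trust A places in C via B
```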
Building trust from context similarity measures
S092054891400004X
Queuing network models (QNMs) provide powerful notations and tools for modeling and analyzing the performance of many different kinds of systems. Although several powerful tools currently exist for solving QNMs, some of these tools define their own model representations, have been developed in platform-specific ways, and are normally difficult to extend for coping with new system properties, probability distributions or system behaviors. This paper shows how Domain Specific Languages (DSLs), when used in conjunction with Model-driven engineering techniques, provide a high-level and very flexible approach for the specification and analysis of QNMs. We build on top of an existing metamodel for QNMs (PMIF) to define a DSL and its associated tools (editor and simulation engine), able to provide a high-level notation for the specification of different kinds of QNMs, and easy to extend for dealing with other probability distributions or system properties, such as system reliability.
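To indicate the kind of kernel such a simulation engine executes, here is a minimal hand-written sketch for a single M/M/1 station; a DSL-based tool would generate the equivalent from the high-level model specification, and the parameters are illustrative.

```python
# A minimal sketch of what a QNM simulation engine computes, assuming a
# single M/M/1 station with Poisson arrivals and exponential service.
import random

def mm1_mean_response(lam, mu, n_jobs=100_000, seed=1):
    random.seed(seed)
    clock = 0.0          # arrival clock
    server_free = 0.0    # time at which the server next becomes free
    total_resp = 0.0
    for _ in range(n_jobs):
        clock += random.expovariate(lam)     # next arrival
        start = max(clock, server_free)      # wait if the server is busy
        service = random.expovariate(mu)
        server_free = start + service
        total_resp += server_free - clock    # response time of this job
    return total_resp / n_jobs

# Theory for M/M/1: E[R] = 1 / (mu - lambda) = 1 / (1.0 - 0.5) = 2.0
print(mm1_mean_response(lam=0.5, mu=1.0))
```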
Specification and simulation of queuing network models using Domain-Specific Languages
S0920548914000051
In this paper, we propose a steganographic scheme based on varying the discrete cosine transform (DCT) coefficients of an image. The major problem with hiding data in the high-frequency DCT coefficients is that rounding errors are introduced into the spatial-domain image, which therefore cannot be transformed back to the correct modified coefficients. To solve this problem, we use integer mapping to implement our DCT. Thus, the image recovered from the modified coefficients can be transformed again to the correct data-hiding coefficients.
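The integer-mapping idea can be illustrated with a 2-point S-transform in place of the full integer DCT: because the forward and inverse maps are exact on integers, a bit embedded in a coefficient's LSB survives the round trip with no rounding error. This is a sketch of the principle only, not the paper's transform.

```python
# A minimal sketch of the integer-mapping idea, assuming a 2-point
# S-transform: forward and inverse are exact on integers, so a bit
# hidden in a coefficient's LSB is recovered without rounding loss.
def forward(a, b):
    low = (a + b) // 2    # integer "low-frequency" coefficient
    high = a - b          # integer "high-frequency" coefficient
    return low, high

def inverse(low, high):
    a = low + (high + 1) // 2
    b = a - high
    return a, b

pix = (120, 114)                      # two neighboring pixel values
low, high = forward(*pix)
high = (high & ~1) | 1                # embed bit 1 in the LSB of `high`
stego = inverse(low, high)            # back to (integer) pixel domain

# Extraction: re-apply the forward transform and read the LSB.
_, high2 = forward(*stego)
assert high2 & 1 == 1                 # hidden bit recovered exactly
```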
A data hiding scheme based upon DCT coefficient modification
S0920548914000063
In the proposed advanced computing environment, known as the HoneyBee Platform, various computing devices using single or multiple interfaces and technologies/standards need to communicate and cooperate efficiently with a certain level of security and safety measures. These computing devices may be supported by different types of operating systems with different features and levels of security support. In order to ensure that all operations within the environment can be carried out seamlessly in an ad-hoc manner, there is a need for a common mobile platform to be developed. The purpose of this long-term project is to investigate and implement a new functional layered model of the common mobile platform with secured and trusted ensemble computing architecture for an innovative Digital Economic Environment in the Malaysian context. This mobile platform includes a lightweight operating system to provide a common virtual environment, a middleware for providing basic functionalities of routing, resource and network management, as well as to provide security, privacy and a trusted environment. A generic application programming interface is provided for application developers to access underlying resources. The aim is for the developed platform to act as the building block for an ensemble environment, upon which higher level applications could be built. Considered as the most essential project in a series of related projects towards a more digital socio-economy in Malaysia, this article presents the design of the target computational platform as well as the conceptual framework for the HoneyBee project.
Beyond ubiquitous computing: The Malaysian HoneyBee project for Innovative Digital Economy
S0920548914000075
Developing a data warehouse is an ongoing task where new requirements are constantly being added. A widely accepted approach for developing data warehouses is the hybrid approach, where requirements and data sources must be accommodated to a reconciliated data warehouse model. During this process, relationships between conceptual elements specified by user requirements and those supplied by the data sources are lost, since no traceability mechanisms are included. As a result, the designer wastes additional time and effort to update the data warehouse whenever user requirements or data sources change. In this paper, we propose an approach to preserve traceability at conceptual level for data warehouses. Our approach includes a set of traces and their formalization, in order to relate the multidimensional elements specified by user requirements with the concepts extracted from data sources. Therefore, we can easily identify how changes should be incorporated into the data warehouse, and derive it according to the new configuration. In order to minimize the effort required, we define a set of general Query/View/Transformation rules to automate the derivation of traces along with data warehouse elements. Finally, we describe a CASE tool that supports our approach and provide a detailed case study to show the applicability of the proposal.
Tracing conceptual models' evolution in data warehouses by using the model driven architecture
S0920548914000087
A Wireless Sensor Network (WSN) is a wireless network consisting of spatially distributed autonomous devices that use sensor nodes in a wide range of applications in various domains. In the future, WSNs are expected to be integrated into the “Internet of Things” (IoT), where sensor nodes join the Internet dynamically and use it to collaborate and accomplish their tasks. Because WSN communications can produce a broadcast storm, the Cluster-based Wireless Sensor Network (CWSN) was proposed to mitigate the broadcast storm problem. However, the fault tolerance and reliability of CWSNs must be carefully investigated and analyzed. To cope with the influence of faulty components, reaching a common agreement in the presence of faults before performing certain tasks is essential. The Byzantine Agreement (BA) problem is a fundamental problem in fault-tolerant distributed systems. To enhance the fault tolerance and reliability of CWSNs, the BA problem in CWSNs is revisited in this paper, and a new BA protocol is proposed that adapts to the CWSN and derives its limit of allowable faulty components, while requiring the minimum number of message exchanges.
The optimal generalized Byzantine Agreement in Cluster-based Wireless Sensor Networks
S0920548914000099
In recent years, there has been intense competition between software development companies to design better interfaces. In this competitive market, the Ribbon interface was introduced to make software user interfaces easier to use. After its introduction by Microsoft, the Ribbon was widely adopted by various software development companies. The Ribbon is a replacement for menus and toolbars that organizes tools into tabs based on their similarities. Although the Ribbon interface has many advantages, previous research has shown that serious usability issues hinder the use of Ribbon interfaces by users with less computer literacy. In order to solve these usability issues, this study introduces Ribbon interface design guidelines focusing on the issues related to users with less computer literacy. Two separate sets of moderated (in-person) usability tests were used. The first set evaluated the usability issues of an experimental Ribbon interface application in terms of both visual and cognitive issues. The second set evaluated a Ribbon interface prototype designed based on the usability issues discovered in the first test. To ensure the validity of the data, the researchers triangulated the data collection process by collecting data from different sources, namely quantitative measurement of participants' performance, direct observation, and interviews. Based on a comparison of the usability test results, which points out the factors that led to participants' performance improvement in the prototype version, a number of guidelines are extracted for Ribbon interfaces. These guidelines are applicable to Microsoft Office, Microsoft SharePoint and most software that can be developed with a Ribbon interface. Putting these guidelines into action would promote self-learning and reduce the learning issues of users with less computer literacy.
An investigation on Ribbon interface design guidelines for people with less computer literacy
S0920548914000269
As the functionality of ISOBUS-compliant agricultural machines increases, demands on the underlying bus network capacity increase as well. Therefore, to prevent potential performance bottlenecks in critical applications, bus utilization must be carefully optimized. In this paper, a methodology for transparent compression/decompression of the Object Pool files that arise from the use of powerful GUIs during network initialization time is presented. Comprehensive simulation experiments developed under CANoe.ISO11783 show that data compression remarkably reduces bus utilization during ISOBUS network initialization, thus enabling the use of powerful GUIs. Furthermore, the simulation results suggest GZIP as the best-performing method for transparent ISOBUS data compression.
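A minimal sketch of what transparent compression amounts to on the sender and receiver sides, assuming Python's gzip module as a stand-in for the GZIP codec on the ISOBUS nodes; the Object Pool payload is fabricated for illustration.

```python
# Transparent Object Pool compression round trip: the receiver restores
# exactly the bytes the sender had, so the application layer is unaware
# that compression happened on the bus.
import gzip

object_pool = b"\x11\x00" * 4096     # stand-in for a real Object Pool blob

compressed = gzip.compress(object_pool)          # sender side
restored = gzip.decompress(compressed)           # receiver side
assert restored == object_pool                   # transparency: lossless

ratio = len(compressed) / len(object_pool)
print(f"bus bytes reduced to {ratio:.1%} of original")
```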
Enabling powerful GUIs in ISOBUS networks by transparent data compression
S0920548914000270
In recent years, there has been growth in a category of performance-critical distributed embedded systems and applications. These systems gain complexity when they are equipped with many microcontrollers that oversee many electronic control units (ECUs). High performance and predictability are the main criteria of choice for any large-scale networked system dependent on real-time data processing and analysis. Switched-fabric networks can provide fast and highly scalable hardware solutions and are now being increasingly used in distributed systems. In this paper, SCPN models of switched-fabric and bussed CAN networks are presented using timed colored Petri nets. These models are then evaluated and verified for the desired properties using CPN Tools. The two models are then compared to extract information on network performance metrics.
SCPN modeling and comparative performance evaluation of bussed and crossbar-based switched fabric CAN
S0920548914000282
Multi-agent Systems (MASs) are one of the main fields of distributed systems. MASs are based on autonomous entities that cooperate to obtain emergent behaviors, and can be useful for integrating open systems. However, the great diversity of agent-oriented modeling languages (AOMLs) hinders the understanding and interchange of MAS models. Most MAS concepts are shared among the AOMLs; however, these concepts have certain peculiarities in each AOML, such as the expected behavior and concrete syntax. This paper presents a metamodeling solution for integrating the AOML diversity that uses the powertype pattern. In this pattern, clabjects represent concept subtypes and are instantiated in models. MAS designers can change the clabject properties to indicate the peculiarities of each concept subtype, depending on their particular needs. Each designer can understand the models of other experts by consulting the peculiarities of the concepts in the models. This solution is the Inter-methodology AOML, which is supported by a graphical modeling tool created with a model-driven development approach. This work presents the AOML as a first step toward a potential standardization process in the modeling of MASs. In addition, the Ingenias Development Kit, an existing agent-oriented software engineering tool, is adapted to export models to the presented AOML. The proposed Inter-methodology AOML is quantitatively compared with other AOMLs in nine different problem domains, and this comparison shows that it can determine a higher proportion of concepts in these domains than other AOMLs. The presented AOML is also evaluated and validated through its mapping to FAML.
Towards the integration of the agent-oriented modeling diversity with a powertype-based language
S0920548914000294
The intended purpose of a device is a key reference when regulators decide whether or not to regulate it as a medical device. However, in the consumer health domain it is sometimes difficult to decide whether a device (or system) has a general purpose or not. The authors discuss the regulatory policy around health device connectivity, operating systems and the software market, and propose regulating them with sufficient granularity.
How do we define the “general purpose”?
S0920548914000300
With the increase of intelligent devices, ubiquitous computing is spreading to all scopes of people's lives. Smart home (or industrial) environments include automation and control devices to save energy, perform tasks, and assist and give comfort in order to satisfy specific preferences. This paper focuses on a proposal for a Software Reference Architecture for the development of smart applications and their deployment in smart environments. The motivation for this Reference Architecture and its benefits are also explained. The proposal considers three main processes in the software architecture of these applications: perception, reasoning and acting. This paper centres attention on the definition of the Perception process and provides an example of its implementation and subsequent validation of the proposal. The software presented implements the Perception process of a smart environment for a standard office, retrieving data from the real world and storing it for the subsequent reasoning and acting processes. The objectives of this solution include the provision of comfort for the users and the saving of energy in lighting. Through this verification, it is also shown that developments under this proposal produce major benefits within the software life cycle.
Software reference architecture for smart environments: Perception
S0920548914000415
To ensure the safety of avionic systems, civil avionic software and hardware regulated by certification authorities must be certified based on applicable standards (e.g., DO-178B and DO-254). The overall safety integrity of an avionic system, comprising software and hardware, should be considered at the system level. Thus, software and hardware components should be planned, developed and certified in a unified, harmonized manner to ensure the integral safety of the entire avionic system. One of the reasons for the high development costs of avionic systems complying with standards may be a lack of sufficient understanding of how to employ these standards efficiently. Therefore, it is important to understand the similarities and differences between DO-178B and DO-254 to effectively manage the processes required by these standards, to minimize cost, and to ultimately ensure the safety of the entire avionic system. Thus, the goal of this paper is to compare various aspects of DO-178B and DO-254 comprehensively. The paper may serve as useful supplementary material for practitioners seeking to understand the rationales behind, and the differences between, the two main standards used in the avionics industry.
Software and hardware certification of safety-critical avionic systems: A comparison study
S0920548914000427
In a wireless sensor network (WSN), data collected by a sensor node need to be associated with location information in order to support real-world applications. Taking the WSN characteristics into account, this paper proposes an address configuration scheme based on location information and passive duplicate address detection (PDAD). In this scheme, a network architecture based on location information is presented, and based on this architecture the address initialization algorithm and address maintenance algorithm are proposed. The address initialization algorithm is performed once the network starts, and it is made up of the initialization address configuration sub-algorithm (IAC) and the initialization location PDAD sub-algorithm (ILPDAD). The address maintenance algorithm is performed after the initialization algorithm is complete, and it is composed of the maintenance address configuration sub-algorithm (MAC) and the maintenance location PDAD sub-algorithm (MLPDAD). During the address initialization (maintenance) process, a new node first uses IAC (MAC) to obtain an address and then performs ILPDAD (MLPDAD) to ensure the address's uniqueness. Since beacon frames are employed to achieve IAC (MAC) and ILPDAD (MLPDAD), the address configuration cost and delay are reduced. Moreover, IAC (MAC) is based on location information and ILPDAD (MLPDAD) is based on PDAD, so there are always sufficient address resources for address configuration without address reclamation. In this way, the extra cost and delay caused by address reclamation and address configuration failure are avoided. This paper evaluates the performance of the scheme, and the results show that it effectively reduces the address configuration delay and cost.
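For illustration only, the sketch below assigns addresses from a coarse location grid plus a per-cell counter, which is one simple way to see why location-based assignment can leave enough address resources to avoid reclamation; the cell size, address layout, and counter policy are assumptions, not the protocol's actual rules.

```python
# A minimal sketch of deriving addresses from location: an address is
# (cell id, per-cell counter), and addresses are never reused, so no
# reclamation step is needed. All constants are illustrative.
CELL = 10.0              # grid cell size in meters (assumed)
counters = {}            # next free per-cell suffix, kept by coordinators

def assign_address(x: float, y: float) -> int:
    cell = (int(x // CELL) & 0xF, int(y // CELL) & 0xF)
    suffix = counters.get(cell, 0)
    counters[cell] = suffix + 1          # addresses are never reused
    cell_id = (cell[0] << 4) | cell[1]   # 8-bit cell id
    return (cell_id << 8) | (suffix & 0xFF)

a1 = assign_address(3.2, 17.9)   # node in cell (0, 1)
a2 = assign_address(5.0, 12.1)   # same cell -> different suffix
assert a1 != a2                  # uniqueness within the cell by counter
```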
An address configuration protocol for 6LoWPAN wireless sensor networks based on PDAD
S0920548914000439
Without an approach accepted by the communities at large, domain disagreements will continue to thwart current global efforts to harmonize information models. The research presented here reviewed current standardization activities. A number of observations and possible solutions are proposed to address the topic of standardizing long-term access to multi-discipline Earth System archives, considering the application of the knowledge base concept to facilitate data interpretation. Finally, we present a case study as an initial entry point for further discussion about standardization.
Point of view: Long-Term access to Earth Archives across Multiple Disciplines
S0920548914000440
A histogram-based perceptual quality assessment (HPQA) method for color images is presented in this paper. Basically, the HPQA combines two quality assessment approaches (color image quality assessment and histogram-based image quality assessment) and uses the Fourier transform. Its range is between 0 and 1, where 1 represents the best quality result and 0 the worst. The HPQA results agree better with the human visual system (HVS) than those of its counterparts, and they can be obtained faster than the other methods' results. In addition, the HPQA can easily differentiate the effects of low-level distortions on color images.
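In the spirit of the pipeline (histograms, a Fourier step, a score in [0, 1]), the following toy sketch scores a distorted image against its reference via the normalized correlation of the Fourier magnitudes of per-channel histograms; the real HPQA formula is defined in the paper, so treat this purely as an illustrative analogue.

```python
# A toy analogue of a histogram + Fourier quality score, not the
# published HPQA formula; images here are synthetic.
import numpy as np

def toy_hpqa(ref: np.ndarray, dist: np.ndarray, bins: int = 64) -> float:
    score = 0.0
    for c in range(3):                       # one histogram per channel
        h_ref, _ = np.histogram(ref[..., c], bins=bins, range=(0, 256))
        h_dis, _ = np.histogram(dist[..., c], bins=bins, range=(0, 256))
        f_ref = np.abs(np.fft.fft(h_ref))
        f_dis = np.abs(np.fft.fft(h_dis))
        denom = np.linalg.norm(f_ref) * np.linalg.norm(f_dis)
        score += float(f_ref @ f_dis / denom) if denom else 1.0
    return score / 3.0                       # 1 = identical; lower = worse

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
print(toy_hpqa(img, img))     # 1.0 for identical images
print(toy_hpqa(img, noisy))   # below 1.0 under distortion
```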
Histogram based perceptual quality assessment method for color images
S0920548914000695
This paper presents context development and requirement validation to overcome maintenance problems in Enterprise Resource Planning (ERP) systems. Using the ERP data of a local petroleum firm, we employ knowledge integration to dynamically validate users' requirements, and to gather, analyze, and represent context through knowledge models. We also employ context-awareness to model the ERP context, along with a user requirement model, and use context affinity to determine the impact of these models on requirements validation. We apply fault tolerance to these models by using data mining to pre-identify delays in the delivery of petroleum products and to predict faulty contextual ERP product configurations.
Fault-tolerant context development and requirement validation in ERP systems
S0920548914000701
In this paper, a novel discovery scheme using modified counting Bloom filters is proposed for the Data Distribution Service (DDS) for real-time distributed systems. In a large-scale network for a combat management system (CMS), a lot of memory is required to store all the discovery information, and high network traffic can become problematic. In many cases, most of the information stored is not needed by the DDS endpoints but still occupies memory. To reduce the amount of information sent and stored, a discovery process combined with counting Bloom filters is proposed. This paper presents the delay time for filter construction and the total discovery time needed in the DDS discovery process. Simulation results show that the proposed method yields low delay time and zero false-positive probability.
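A counting Bloom filter extends the classic Bloom filter with small counters so that entries can also be removed, which is what makes it usable for discovery state that comes and goes. A minimal sketch, with double hashing and illustrative parameters (not the paper's modified variant):

```python
# A minimal counting Bloom filter: counters instead of bits enable
# deletion; (m, k) and the topic name are illustrative.
import hashlib

class CountingBloomFilter:
    def __init__(self, m: int = 1024, k: int = 4):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _positions(self, item: str):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str):
        for p in self._positions(item):
            self.counts[p] += 1

    def remove(self, item: str):          # counters make deletion possible
        for p in self._positions(item):
            self.counts[p] -= 1

    def __contains__(self, item: str) -> bool:
        return all(self.counts[p] > 0 for p in self._positions(item))

cbf = CountingBloomFilter()
cbf.add("topic/TrackData")               # an endpoint advertises a topic
print("topic/TrackData" in cbf)          # True
cbf.remove("topic/TrackData")            # endpoint leaves the network
print("topic/TrackData" in cbf)          # False (no stale state)
```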
Node discovery scheme of DDS for combat management system
S0920548914000713
Software process improvement frameworks enable software organizations to identify opportunities for improving their processes as well as to establish road maps for improvement. However, software process improvement practice has shown that to achieve a sustained, leveraged state, software organizations need to focus on the workforce as much as on the process. Software process improvement frameworks address the people dimension only indirectly, through processes. To complement process assessment models and methods, mechanisms are needed that address the problem of how to assess, identify and prioritize detailed skill and knowledge improvement needs in relation to the roles and processes of software organizations. In this study, we developed a Software Workforce Assessment Model (SWAM) for emergent software organizations to perform role-based workforce skill assessment aligned with software processes by coupling the SW-CMM and SWEBOK models. SWAM is developed in accordance with widely accepted assessment and evaluation theory principles. It is composed of an assessment baseline for software roles, and criteria and scales for assessment. A SWAM-based assessment process uses specific techniques such as Euclidean distance and dendrogram diagrams to obtain useful results from assessment data. Through a case study, SWAM is shown to be applicable, and the results are valuable for an emergent software organization. Specifically, the assessment enables the organization to identify priority knowledge units, to decide the extent of training for groups of individuals, to effectively assign project roles, to identify improvement priorities for practitioners in relation to their roles, and finally to facilitate the enactment and improvement of the software processes.
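The clustering step can be sketched with SciPy: each practitioner is a vector of knowledge-unit scores, pairwise Euclidean distances feed a hierarchical linkage, and cutting the dendrogram yields groups with similar training needs. The data and parameters below are made up for illustration.

```python
# A minimal sketch of Euclidean-distance + dendrogram analysis over
# assessment scores; names and scores are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows: practitioners; columns: assessed knowledge units (0..5 scale).
scores = np.array([
    [4, 4, 1, 2],   # dev A
    [4, 5, 1, 1],   # dev B  (similar gap profile to A)
    [1, 2, 5, 4],   # tester C
])

dists = pdist(scores, metric="euclidean")   # pairwise Euclidean distances
tree = linkage(dists, method="average")     # dendrogram structure
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)   # e.g. [1 1 2]: A and B can share the same training plan
```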
A process capability based assessment model for software workforce in emergent software organizations
S0920548914000725
The ever increasing hardware capabilities typical of modern microcontrollers make it easier to add more and more functions to embedded systems, even for relatively low-end ones. In turn, this raises new requirements on their firmware, focusing on aspects like adherence to international and industrial standards, modularity, portability, fast time to market, and integration of diverse software components. This paper shows, by means of a case study, how to design a full-fledged networked embedded system using only open-source components, including a small-scale real-time operating system. In addition, it highlights how different components addressed key design issues, like inter-task synchronization and communication.
Modular design of an open-source, networked embedded system
S0920548914000737
In this paper we present a new algorithm called b64pack (b64 stands for Base64) for the compression of very short text messages. The algorithm executes in two phases: in the first phase, it converts the input text, consisting of letters, numbers, spaces and punctuation marks commonly used in English writing, to a format that can be compressed in the second phase. The second phase consists of a transformation that reduces the size of the message by a fixed fraction of its original size. We experimentally measured both the compression speed and the compression ratio of b64pack on a large number of short messages and compared them with compress, gzip and bzip2, the three most common UNIX compression programs. We show that for short text messages up to a certain size, b64pack achieves better compression than any of the three programs. With respect to speed, b64pack beats all three algorithms by orders of magnitude. This rapid compression is one of the key strengths of b64pack.
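The fixed-fraction phase can be illustrated with the Base64 alphabet itself: four Base64 characters carry 24 bits, so decoding them back to 3 raw bytes shrinks the message by exactly 25%. A minimal sketch, assuming phase 1 has already mapped the message onto Base64 characters (the sample string is fabricated), and offered as an illustration rather than the published algorithm:

```python
# Illustration of the fixed-fraction reduction: decoding Base64-alphabet
# text 4 chars -> 3 bytes cuts the size by 25%, losslessly.
import base64

def pack(b64_text: str) -> bytes:
    pad = "=" * (-len(b64_text) % 4)          # pad to a 4-char boundary
    return base64.b64decode(b64_text + pad)

def unpack(raw: bytes) -> str:
    return base64.b64encode(raw).decode().rstrip("=")

msg = "SGVsbG8sIHNob3J0IG1lc3NhZ2Vz"          # phase-1 output (assumed)
packed = pack(msg)
assert unpack(packed) == msg
print(len(msg), "->", len(packed))            # 28 -> 21 bytes (75%)
```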
Rapid lossless compression of short text messages
S0920548914000749
Data modelling is important not only to visualise the structural schema of data, but also to show the intended integrity constraints. In this paper, we propose a modelling approach called XML Static Dynamic Modelling (XSDyM). While a text-based schema definition is the most common method used to describe XML, graphical modelling is more accessible, as it can visualise the schema definition more effectively for the reader. Conveying dynamic constraints in an XML graphical model requires special treatment, as such constraints essentially capture state transitions. It is important for an XML model to keep its basis as precise as possible to satisfy the nature of XML, while at the same time representing the constraints effectively. Using XML tree-based modelling as the basis of the work, we propose our own approach to convey the state transitions of the constraints, inspired by the well-known state diagram and adopting some useful features of ORM modelling. We evaluate the correctness of the proposed modelling by checking model transformations between the modelling and equivalent XML schema languages.
XSDyM: An XML graphical conceptual model for static and dynamic constraints
S0920548914000750
In this paper, we report on a series of face-to-face interviews with 17 participants from 11 South Australian (SA) government entities, conducted with the aim of validating whether existing processes and strategic direction were sufficient to satisfactorily achieve the implementation of an ISMS and the classification of data for the respective SA government entities. Based on our interviews and a review of ISMS-related reviews conducted within other Australian State and Territory jurisdictions, we identify key areas that the SA Government may need to consider as part of the progressive roll-out of the other phases of ISMF version 3 implementation up to June 2017.
Cyber security readiness in the South Australian Government
S0920548914000762
This paper proposes two new message scheduling methods on the shared timeslots of the ISA100.11a standard to enhance real-time performance, namely traffic-aware message scheduling (TAMS) and contention window size adjustment (CWSA). In TAMS, instead of competing to transmit sporadic messages in consecutive cycles, end nodes are divided into groups, which then access the channel in specific cycles when the probability of timeslots being involved in collisions exceeds a specified threshold. In CWSA, by contrast, the contention window is adjusted when the probability of timeslots being involved in collisions exceeds the threshold. The results of the simulations conducted indicate that the two proposed methods provide performance improvements in terms of success probability and end-to-end delay.
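A minimal sketch of the CWSA idea, assuming the per-cycle collision probability is estimated from observed collisions and the window doubles once the estimate crosses a threshold; the threshold and window bounds are illustrative, not the paper's values.

```python
# Contention-window adjustment driven by an estimated collision
# probability; all constants are illustrative assumptions.
THRESHOLD = 0.1
CW_MIN, CW_MAX = 8, 64

def adjust_cw(cw: int, collisions: int, slots: int) -> int:
    """Grow the contention window when timeslots collide too often,
    shrink it back toward CW_MIN when the channel calms down."""
    p_collision = collisions / slots if slots else 0.0
    if p_collision > THRESHOLD:
        return min(cw * 2, CW_MAX)   # more backoff room -> fewer collisions
    return max(cw // 2, CW_MIN)

cw = CW_MIN
for collisions in [0, 2, 5, 9, 1, 0]:  # collisions observed per 32 slots
    cw = adjust_cw(cw, collisions, slots=32)
    print(cw)                          # 8, 8, 16, 32, 16, 8
```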
Real-time message scheduling for ISA100.11a networks
S0920548914000774
The increasing availability of geo-referenced data has increased the need to enrich OLAP with spatial analysis, leading to the concept of Spatial OLAP (SOLAP). The conceptual modelling of spatial data cubes requires the definition of two kinds of metadata: (i) warehouse metadata that model data structures that maintain integrated data from multiple data sources and (ii) aggregation metadata that specify how the warehoused data should be aggregated to meet the analysis goals of decision makers (here, “aggregation” is used in the database sense of calculating a result, not in the sense of UML aggregation associations; the terms “aggregation level” and “aggregating relationship” refer to OLAP aggregations). In this paper we provide a review of existing conceptual spatial data cube models. We highlight some limits of these models concerning the aggregation model design, and their implementation in existing CASE tools and SOLAP architectures. Firstly, we propose a new UML (Unified Modeling Language) profile for modelling complex Spatial Data Warehouses and aggregations. Our profile is implemented in the MagicDraw CASE tool. Secondly, we propose a tool for the automatic implementation, in a SOLAP architecture, of conceptual spatial data cube models designed using our profile. In particular, our solution allows: (i) generating different logical representations of the SDW (Spatial Data Warehouse) model (star schema and snowflake schema) and (ii) implementing complex SOLAP analysis indicators using MDX (the MultiDimensional eXpressions language).
Conceptual model for spatial data cubes: A UML profile and its automatic implementation
S0920548914000786
IT service provider organizations that have implemented a Quality Management System (QMS) according to ISO 9001 can take advantage of all the efforts made when implementing an IT Service Management System (ITSMS). In order to facilitate the integration of these two management systems, we analyze the existing relations between the requirements of the QMS and the ITSMS. Based on these results, we provide a new Integrated Management System (IMS) which widens the scope of the ISO 9001 QMS with the specific IT service management requirements of ISO/IEC 20000-1, and present a guide to support organizations in implementing this IMS.
Integrating IT service management requirements into the organizational management system
S0920548914000798
Successful implementation of an enterprise strategy, the reorganization of an enterprise, the successful enterprise-wide adoption of a new enterprise resource planning system, or simply being able to manage the daily operations of an enterprise are all common examples of organizational actions that are strongly interrelated with the achievement of goals related to those actions. The research presented in this paper makes clear that it is not trivial to formulate goals clearly and to understand how to achieve them. In two use scenarios, it is described how the executive board of a mid-sized bank in Germany wants to achieve its overall goal of increasing the bank's appraisal. The first scenario deals with determining who is responsible for goal creation and accomplishment, while the second deals with describing a concrete goal system. A domain-specific modelling language (DSML) for designing goal models is proposed that provides solutions for requirements derived from the described scenarios. This DSML is coined the ‘goal modelling language’ (GoalML), which enables the development of goal models from multiple perspectives in order to relate goals to their context and vice versa.
A language for multi-perspective goal modelling: Challenges, requirements and solutions
S0920548914000804
In order to increase the efficiency in the use of energy resources, the electrical grid is slowly evolving into a smart(er) grid that allows users' production and storage of energy, automatic and remote control of appliances, energy exchange between users, and, in general, optimization of how energy is managed and consumed. One of the main innovations of the smart grid is its organization into an energy plane, which involves the actual exchange of energy, and a data plane, which concerns the Information and Communication Technology (ICT) infrastructure used for the management of the grid's data. In the particular case of the data plane, the exchange of large quantities of data can be facilitated by a middleware based on a messaging bus. Existing messaging buses follow different data management paradigms (e.g., request/response, publish/subscribe, data-oriented messaging) and thus satisfy smart grids' communication requirements to different extents. This work contributes to the state of the art by identifying, in existing standards and architectures, common requirements that affect the messaging system of a data plane for the smart grid. The paper analyzes existing messaging bus paradigms that can be used as a basis for the ICT infrastructure of a smart grid and discusses the extent to which each can satisfy smart grids' requirements.
Message-oriented middleware for smart grids
S0920548914000816
A public street lighting system consists of devices distributed at light points and a supervision and control application. The system architecture is modular and expandable. The C# language is adopted to develop the operation and monitoring functions via the CyberOPC standard, and XML files are used for device description and the definition of the network topology. This paper describes the proposed validation, and the results obtained attest that the applied methodology is feasible and can be applied to other public lighting systems.
Public street lighting remote operation and supervision system
S0920548914000853
Cloud computing is one of the most popular information processing concepts in today's IT world. Securing cloud computing is complicated because each service model uses different infrastructure elements. Current security risk assessment models generally cannot be applied to cloud computing systems, whose states change very rapidly. In this work, a scalable security risk assessment model is proposed for cloud computing as a solution to this problem, using game theory. Using this method, we can evaluate whether a risk in the system should be fixed by the cloud provider or by the tenant of the system.
Scalable risk assessment method for cloud computing using game theory (CCRAM)