Columns: FileName (string, 17 chars), Abstract (string, 163–6.01k chars), Title (string, 12–421 chars)
S1386505615000118
Background The development of Electronic Health Records (EHRs) forms an integral part of the information strategy for the National Health Service (NHS) in the UK, with the aim of facilitating health information exchange for patient care and secondary use, including research and healthcare planning. Implementing EHR systems requires an understanding of patient expectations for consent mechanisms and consideration of public awareness towards information sharing as might be made possible through integrated EHRs across primary and secondary health providers. Objectives To explore levels of public awareness about EHRs and to examine attitudes towards different consent models with respect to sharing identifiable and de-identified records for healthcare provision, research and planning. Methods A cross-sectional questionnaire survey was administered to adult patients and members of the public in primary and secondary care clinics in West London, UK in 2011. In total, 5331 individuals participated in the survey, and 3157 were included in the final analysis. Results The majority (91%) of respondents expected to be explicitly asked for consent for their identifiable records to be accessed for health provision, research or planning. Half the respondents (49%) did not expect to be asked for consent before their de-identified records were accessed. Compared with White British respondents, those from all other ethnic groups were more likely to anticipate their permission would be obtained before their de-identified records were used. Of the study population, 59% reported already being aware of EHRs before the survey. Older respondents and individuals with complex patterns of interaction with healthcare services were more likely to report prior awareness of EHRs. Individuals self-identifying as belonging to ethnic groups other than White British, and those with lower educational qualifications were less likely to report being aware of EHRs than White British respondents and respondents with degree-level education, respectively. Those who reported being aware of EHRs were less likely to say they expected explicit consent to be sought before use of their de-identified record. Conclusions A large number of patients remain unaware of EHRs, while preference for implicit consent is stronger among those who report previous awareness. Differences in awareness levels and consent expectations between groups with different socio-demographic characteristics suggest that public education and information campaigns should target specific groups to increase public awareness and ensure meaningful informed consent mechanisms.
Patient and public attitudes towards informed consent models and levels of awareness of Electronic Health Records in the UK
S1386505615000945
Objectives The mental state examination (MSE) provides crucial information for healthcare professionals in the assessment and treatment of psychiatric patients as well as potentially providing valuable data for mental health researchers accessing electronic health records (EHRs). We wished to establish if improvements could be achieved in the documenting of MSEs by junior doctors within a large United Kingdom mental health trust following the introduction of an EHR based semi-structured MSE assessment template (OPCRIT+). Methods First, three consultant psychiatrists using a modified version of the Physician Documentation Quality Instrument-9 (PDQI-9) blindly rated fifty MSEs written using OPCRIT+ and fifty normal MSEs written with no template. Second, we conducted an audit to compare the frequency with which individual components of the MSE were documented in the normal MSEs compared with the OPCRIT+MSEs. Results PDQI-9 ratings indicated that the OPCRIT+MSEs were more ‘Thorough’, ‘Organized’, ‘Useful’ and ‘Comprehensible’ as well as being of an overall higher quality than the normal MSEs. The audit identified that the normal MSEs contained fewer mentions of the individual components of ‘Thought content’, ‘Anxiety’ and ‘Cognition & Insight’. Conclusions These results indicate that a semi-structured assessment template significantly improves the quality of MSE recording by junior doctors within EHRs. Future work should focus on whether such improvements translate into better patient outcomes and have the ability to improve the quality of information available on EHRs to researchers.
A comparison of mental state examination documentation by junior clinicians in electronic health records before and after the introduction of a semi-structured assessment template (OPCRIT+)
S1386505615300356
Purpose To provide an overview of essential elements of good governance of data linkage for health-related research, to consider lessons learned so far and to examine key factors currently impeding the delivery of good governance in this area. Given the considerable hurdles which must be overcome and the changing landscape of health research and data linkage, a principled, proportionate, risk-based approach to governance is advocated. Discussion In light of the considerable value of data linkage to health and well-being, the United Kingdom aspires to design and deliver good governance in health-related research. A string of projects have been asking: what does good governance look like in data linkage for health research? It is argued here that considerable progress can and must be made in order to develop the UK’s contribution to future health and wealth economies, particularly in light of mis-start initiatives such as care.data in NHS England. Discussion centres around lessons learned from previous successful health research initiatives, identifying those governance mechanisms which are essential to achieving good governance. Conclusion This article suggests that a crucial element in any step-increase of research capability will be the adoption of adaptive governance models. These must recognise a range of approaches to delivering safe and effective data linkage, while remaining responsive to public and research user expectations and needs as these shift and change with time and experience. The targets are multiple and constantly moving. There is not – nor should we seek – a single magic bullet in delivering good governance in health research.
On moving targets and magic bullets: Can the UK lead the way with responsible data linkage for health research?
S1386505615300447
Aim It is still unclear whether telemonitoring reduces hospitalization and mortality in heart failure (HF) patients and whether adding an Information and Computing Technology-guided-disease-management-system (ICT-guided-DMS) improves clinical and patient reported outcomes or reduces healthcare costs. Methods A multicenter randomized controlled trial was performed testing the effects of INnovative ICT-guided-DMS combined with Telemonitoring in OUtpatient clinics for Chronic HF patients (IN TOUCH) with in total 179 patients (mean age 69 years; 72% male; 77% in New York Heart Association Classification (NYHA) III–IV; mean left ventricular ejection fraction 28%). Patients were randomized to ICT-guided-DMS or to ICT-guided-DMS+telemonitoring with a follow-up of nine months. The composite endpoint included mortality, HF-readmission and change in health-related quality of life (HR-QoL). Results In total 177 patients were eligible for analyses. The mean score of the primary composite endpoint was −0.63 in ICT-guided-DMS vs. −0.73 in ICT-guided-DMS+telemonitoring (mean difference 0.1, 95% CI: −0.67 to +0.82, p = 0.39). All-cause mortality in ICT-guided-DMS was 12% versus 15% in ICT-guided-DMS+telemonitoring (p = 0.27); HF-readmission was 28% vs. 27% (p = 0.87); all-cause readmission was 49% vs. 51% (p = 0.78). HR-QoL improved in most patients and was equal in both groups. Incremental costs were €1360 in favor of ICT-guided-DMS. ICT-guided-DMS+telemonitoring had significantly fewer HF-outpatient-clinic visits (p < 0.01). Conclusion ICT-guided-DMS+telemonitoring for the management of HF patients did not affect the primary and secondary endpoints. However, we did find a reduction in visits to the HF-outpatient clinic in this group, suggesting that telemonitoring might be safe to use in reorganizing HF-care with relatively low costs.
The value of telemonitoring and ICT-guided disease management in heart failure: Results from the IN TOUCH study
S138904171300034X
A growing conceptual and empirical literature is advancing the idea that language extends our cognitive skills. One of the most influential positions holds that language – qua material symbols – facilitates individual thought processes by virtue of its material properties (Clark, 2006a). Extending upon this model, we argue that language enhances our cognitive capabilities in a much more radical way: the skilful engagement of public material symbols facilitates evolutionarily unprecedented modes of collective perception, action and reasoning (interpersonal synergies) creating dialogically extended minds. We relate our approach to other ideas about collective minds (Gallagher, 2011; Theiner, Allen, & Goldstone, 2010; Tollefsen, 2006) and review a number of empirical studies to identify the mechanisms enabling the constitution of interpersonal cognitive systems.
The dialogically extended mind: Language as skilful intersubjective engagement
S1389041713000624
We present a new model of children’s performance on the balance-scale task, one of the most common benchmarks for computational modeling of psychological development. The model is based on intuitive and torque-rule modules, each implemented as a constructive neural network. While the intuitive module recruits non-linear sigmoid units as it learns to solve the task, the second module can additionally recruit a neurally-implemented torque rule, mimicking the explicit teaching of torque in secondary-school science classrooms. A third, selection module decides whether the intuitive module is likely to yield a correct response or whether the torque-rule module should be invoked on a given balance-scale problem. The model progresses through all four stages seen in children, ending with a genuine torque rule that can solve untrained problems that are only solvable by comparing torques. The model also simulates the torque-difference effect and the pattern of human response times, faster on simple problems than on conflict problems. The torque rule is more likely to be invoked on conflict problems than on simple problems and its emergence requires both explicit teaching and practice. Overlapping waves of rule-based stages are also covered by the model. Appendices report evidence that constructive neural networks can also acquire a genuine torque rule from examples alone and show that Latent Class Analysis often finds small, unreliable rule classes in both children and computational models. Consequently, caution in using Latent Class Analysis for rule diagnosis is suggested to avoid emphasis on rule classes that cannot be replicated.
A comprehensive model of development on the balance-scale task
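The torque rule that the model's second module recruits is the standard physics rule for the balance scale: each side's torque is its weight times its distance from the fulcrum, and the larger torque wins. A minimal sketch of that rule is below; the example weights and distances are hypothetical, not problems from the paper's training set.

```python
# Minimal sketch of the torque rule applied by the model's torque-rule module.
# The example problem below is a made-up illustration, not data from the paper.

def torque_rule(left_weight, left_distance, right_weight, right_distance):
    """Return which side of the balance scale goes down: 'left', 'right' or 'balance'."""
    left_torque = left_weight * left_distance
    right_torque = right_weight * right_distance
    if left_torque > right_torque:
        return "left"
    if right_torque > left_torque:
        return "right"
    return "balance"

# A "conflict" problem: more weight on the left, but greater distance on the right.
print(torque_rule(left_weight=3, left_distance=2, right_weight=2, right_distance=4))  # -> "right"
```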
S1389041715300164
This article deals with artificial intelligence models inspired by cognitive science. The scope of this paper is the simulation of the decision-making process for virtual entities. The theoretical framework consists of concepts from the use of internal behavioral simulation for human decision-making. Inspired by such cognitive concepts, the contribution is a computational framework that enables a virtual entity to possess an autonomous world of simulation within the simulation. It can simulate itself (using its own model of behavior) and simulate its environment (using its representation of other entities). The entity has the ability to anticipate using internal simulations, in complex environments where it would be extremely difficult to use formal proof methods. By comparing the prediction with the original simulation, the entity improves its predictive models through a learning process. Illustrations of this model are provided through two implementations. The first illustration is an example showing a shepherd, his herd and his dogs. The dog simulates the sheep’s behavior in order to make predictions while testing different strategies. Second, an artificial 3D juggler plays in interaction with virtual jugglers, humans and robots. For this application, the juggler predicts the behavior of balls in the air and uses prediction to coordinate its behavior in order to juggle.
Simulation within simulation for agent decision-making: Theoretical foundations from cognitive science to operational computer model
S1389128615000626
HTTP Adaptive Streaming (HAS) technologies, e.g., Apple HLS or MPEG-DASH, automatically adapt the delivered video quality to the available network resources. This reduces stalling of the video but additionally introduces quality switches, which also influence the user-perceived Quality of Experience (QoE). In this work, we conduct a subjective study to identify the impact of adaptation parameters on QoE. The results indicate that the video quality has to be maximized first, and that the number of quality switches is less important. Based on these results, a method to compute the QoE-optimal adaptation strategy for HAS on a per-user basis with mixed-integer linear programming is presented. This QoE-optimal adaptation enables the benchmarking of existing adaptation algorithms for any given network condition. Moreover, the investigated concept is extended to a multi-user IPTV scenario. The question is answered whether video quality, and thereby QoE, can be shared in a fair manner among the involved users.
Identifying QoE optimal adaptation of HTTP adaptive streaming based on subjective studies
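The abstract above frames QoE-optimal adaptation as a per-user mixed-integer linear program. The sketch below illustrates that formulation idea under simplifying assumptions: the PuLP package is assumed, bitrates, utilities, the bandwidth trace and the switch penalty are placeholder numbers, and stalling is avoided with a simple per-segment throughput constraint rather than a full buffer model.

```python
# Sketch only: a toy per-user QoE-optimal adaptation MILP with placeholder data (assumes PuLP).
import pulp

levels = [0, 1, 2]                       # quality levels (low -> high)
bitrate = {0: 1.0, 1: 2.5, 2: 5.0}       # Mbit/s needed per level
quality = {0: 1.0, 1: 3.0, 2: 4.5}       # per-segment utility of each level
bandwidth = [5.5, 2.8, 1.2, 3.0, 6.0]    # available throughput per segment slot
T = list(range(len(bandwidth)))
switch_penalty = 0.5

prob = pulp.LpProblem("qoe_optimal_adaptation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (T, levels), cat=pulp.LpBinary)   # x[t][q] = 1 if level q at segment t
s = pulp.LpVariable.dicts("switch", T, cat=pulp.LpBinary)        # 1 if the level changes at t

for t in T:
    prob += pulp.lpSum(x[t][q] for q in levels) == 1                          # exactly one level
    prob += pulp.lpSum(bitrate[q] * x[t][q] for q in levels) <= bandwidth[t]  # avoid stalling
    if t > 0:
        for q in levels:
            prob += s[t] >= x[t][q] - x[t - 1][q]                             # switch indicator

prob += (pulp.lpSum(quality[q] * x[t][q] for t in T for q in levels)
         - switch_penalty * pulp.lpSum(s[t] for t in T))                      # QoE objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("chosen level per segment:", [next(q for q in levels if x[t][q].value() > 0.5) for t in T])
```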
S1389128615002224
Ubiquitous crowdsourcing, or the crowdsourcing of tasks in settings beyond the desktop, is attracting interest due to the increasing maturity of mobile and ubiquitous technology, such as smartphones and public displays. In this paper we attempt to address a fundamental challenge in ubiquitous crowdsourcing: if people can contribute to crowdsourcing anytime and anyplace, why would they choose to do so? We highlight the role of motivation in ubiquitous crowdsourcing, and its effect on participation and performance. Through a series of field studies we empirically validate various motivational approaches in the context of ubiquitous crowdsourcing, and assess the comparable advantages of ubiquitous technologies' affordances. We show that through motivation ubiquitous crowdsourcing becomes comparable to online crowdsourcing in terms of participation and task performance, and that through motivation we can elicit better quality contributions and increased participation from workers. We also show that ubiquitous technologies' contextual capabilities can increase participation through increasing workers' intrinsic motivation, and that the in-situ nature of ubiquitous technologies can increase both participation and engagement of workers. Combined, our findings provide empirically validated recommendations on the design and implementation of ubiquitous crowdsourcing.
Motivating participation and improving quality of contribution in ubiquitous crowdsourcing
S1389128615002674
Conventionally, cross-layer designs with same-timescale updates can work well; however, layers’ timescales differ, and each layer normally operates at its own timescale when implemented in real systems. To address this issue, in this article we introduce a multi-timescale cross-layer design along with three sets of constraints: congestion control, link delay, and power control, with the objective of maximizing the overall utility while minimizing the total link delay and power consumption. The proposed procedure can be implemented in a distributed fashion, which not only guarantees truly optimal solutions to the underlying problem but also adheres to the natural timescale differences among layers. Finally, the numerical results further confirm the efficacy of our proposal compared to current frameworks.
A multi-timescale cross-layer approach for wireless ad hoc networks
S1389128615002741
Network virtualization is one of the fundamental building blocks of cloud computing, where computation, storage and networking resources are shared through virtualization technologies. However, the complexity of virtualization exposes additional security vulnerabilities, which can be taken advantage of by malicious users. While traditional network security technologies can help in virtualized environments, we argue that it is cost-effective to isolate virtual resources with high security demands from the untrusted ones. This paper attempts to tackle the security issue by offering physical isolation during virtual network embedding, the process of allocating virtual networks onto physical nodes and links. We start from modeling the security demands in virtualized environments by analyzing typical security vulnerabilities. A simple abstracted concept of security demands is defined to capture the variations of security requirements, based on which we formulate security-aware virtual network embedding as an optimization problem. The proposed objective and constraint functions involve both resource and security restrictions. Then, two heuristic algorithms are developed to solve this problem with splittable or unsplittable virtual links, respectively. Our simulation results demonstrate their efficiency and effectiveness.
Towards security-aware virtual network embedding
S1389128615003096
Socially aware networking (SAN) exploits social characteristics of mobile users to streamline data dissemination protocols in opportunistic environments. Existing protocols in this area have utilized various social features such as user interests, social similarity, and community structure to improve the performance of data dissemination. However, the interrelationship between user interests and its impact on the efficiency of data dissemination have not been explored sufficiently. In this paper, we analyze various kinds of relationships between user interests and model them using a layer-based structure in order to form social communities in the SAN paradigm. We propose Int-Tree, an Interest-Tree based scheme which uses the relationship between user interests to improve the performance of data dissemination. The core of Int-Tree is the interest-tree, a tree-based community structure that combines two social features, i.e., density of a community and social tie, to support data dissemination. The simulation results show that Int-Tree achieves a higher delivery ratio and lower overhead in comparison to two benchmark protocols, PROPHET and Epidemic routing. In addition, Int-Tree delivers data with an average hop count of 1.36 and tolerable latency across different buffer sizes, time-to-live (TTL) values and simulation durations. Finally, Int-Tree maintains stable performance across various parameter settings.
Data dissemination using interest-tree in socially aware networking
S1389128615004132
The emergence of the cloud computing paradigm has greatly enabled innovative service models, such as Platform as a Service (PaaS), and distributed computing frameworks, such as MapReduce. However, most existing cloud systems fail to distinguish users with different preferences, or jobs of different natures. Consequently, they are unable to provide service differentiation, leading to inefficient allocations of cloud resources. Moreover, contentions on the resources exacerbate this inefficiency, when prioritizing crucial jobs is necessary, but impossible. Motivated by this, we propose Abacus, a generic resource management framework addressing this problem. Abacus interacts with users through an auction mechanism, which allows users to specify their priorities using budgets, and job characteristics via utility functions. Based on this information, Abacus computes the optimal allocation and scheduling of resources. Meanwhile, the auction mechanism in Abacus possesses important properties including incentive compatibility (i.e., the users’ best strategy is to simply bid their true budgets and job utilities) and monotonicity (i.e., users are motivated to increase their budgets in order to receive better services). In addition, when the user is unclear about her utility function, Abacus automatically learns this function based on statistics of her previous jobs. Extensive experiments, running Hadoop on a private cluster and Amazon EC2, demonstrate the high performance and other desirable properties of Abacus.
Auction-based cloud service differentiation with service level objectives
S1389128615004211
In recent years, Long Term Evolution has been one of the most promising technologies of current wireless communication systems. Quality of service and data rate contribute to the robustness of the field. These two factors are determined by transmission rate, security, delay, delay variation and other communication factors. Despite its high data rate and quality of service, Long Term Evolution has some drawbacks as far as the security of this technology is concerned. In particular, there are some security holes in authenticating users for access in the Access Network domain. For instance, when the User Equipment requests attachment, the International Mobile Subscriber Identity (IMSI) is sent over the network without security protection, hence privacy does not hold. In addition, many parameters are generated by invoking a function with only one input key, so compromising this key results in the failure of the whole system security. To mitigate these and other serious security problems, the researchers propose an improved approach without adding extra cost, so that it can be implemented within the same environment as the existing security system (Evolved Packet System Authentication and Key Agreement). As one of the performance enhancements, fetching authentication vectors from the foreign network is enabled instead of fetching them from the home network, which significantly reduces the authentication delay and message overhead. Generally, the purpose is to boost the security level and performance of the protocol while keeping the architecture of the system as similar as possible to the conventional security system. The proposal has been exhaustively analyzed and verified using network and security verification and simulation tools.
Performance and security enhanced authentication and key agreement protocol for SAE/LTE network
S1389128616000207
Recently, Farash et al. pointed out some security weaknesses of Turkanović et al.’s protocol, which they extended to enhance its security. However, we found some problems with Farash et al.’s protocol, such as a known session-specific temporary information attack, an off-line password-guessing attack using a stolen-smartcard, a new-smartcard-issue attack, and a user-impersonation attack. Additionally, their protocol cannot preserve user-anonymity, and the secret key of the gateway node is insecure. The main intention of this paper is to design an efficient and robust smartcard-based user authentication and session key agreement protocol for wireless sensor networks that use the Internet of Things. We analyze its security, proving that our protocol not only overcomes the weaknesses of Farash et al.’s protocol, but also preserves additional security attributes, such as the identity change and smartcard revocation phases. Moreover, the results of a simulation using AVISPA show that our protocol is secure against active and passive attacks. The security and performance of our work are also compared with a number of related protocols.
Design of an anonymity-preserving three-factor authenticated key exchange protocol for wireless sensor networks
S1434841115002757
Inductive coupling of two parallel resonant circuits is still one of the most important structures in practical bandpass filters. The frequency response of these structures has two passbands; one is desirable and the other is not. This paper presents a novel scheme to omit the second, spurious passband based on theoretical analysis. Thus, a prototype circuit has been designed based on transmission line theory. Simulations show that its behavior agrees well with the ideal equivalent lumped circuit over a very wide range of frequencies. Center frequency tuning has been implemented by using a simple high-Q varactor based on microelectromechanical systems (MEMS) technology. The tuning range of the proposed circuit is 9.8% and the fractional bandwidth is 10.8 ± 0.4 for the 15–16.5 GHz frequency range in the Ku-band. The compact size of the filter is 1.15 mm × 0.7 mm and it has good stopband rejection and a sharp roll-off compared to its conventional counterpart. It has an insertion loss below 2.5 dB and a return loss higher than 10 dB in the passband across the whole tuning range.
A novel analytical technique to omit the spurious passband in inductively coupled bandpass filter structures
S1434841115300133
Design of optimal filters is an essential part of signal processing applications. It involves the computation of optimal filter coefficients such that the designed filter response possesses a flat passband and up to an infinite amount of stopband attenuation. This study investigates the effectiveness of employing swarm intelligence (SI)-based and population-based evolutionary computing techniques in determining and comparing the optimal solutions to the FIR filter design problem. The nature-inspired optimization techniques applied are cuckoo search, particle swarm optimization and a real-coded genetic algorithm, which are used to design optimal FIR highpass (HP) and bandstop (BS) filters. These filters are examined for stopband attenuation, passband ripple and deviation from the desired response. Moreover, the employed optimization techniques are compared in terms of algorithm execution time, t-test results, convergence rate and the ability to obtain globally optimal results for the design of digital FIR filters. The results reveal that the proposed FIR filter design approach using the cuckoo search algorithm outperforms the other techniques in terms of design accuracy, execution time and optimality of the solution.
Design of optimal digital FIR filters using evolutionary and swarm optimization techniques
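To make the optimization formulation in the abstract above concrete, the sketch below searches for linear-phase highpass FIR coefficients that minimize the squared deviation from an ideal brick-wall response, using a plain global-best particle swarm. It is a sketch only: NumPy and SciPy are assumed, and the filter order, cutoff and PSO hyper-parameters are illustrative placeholders rather than the paper's settings for cuckoo search, PSO or the real-coded GA.

```python
# Sketch only: basic PSO standing in for the paper's swarm/evolutionary FIR designs.
import numpy as np
from scipy.signal import freqz

rng = np.random.default_rng(0)
order, cutoff = 20, 0.5                     # 21 symmetric taps; cutoff in units of pi rad/sample
n_half = order // 2 + 1                     # optimize half the taps, mirror for linear phase
w = np.linspace(0.0, np.pi, 256)
desired = (w >= cutoff * np.pi).astype(float)          # ideal highpass magnitude response

def approx_error(half_taps):
    taps = np.concatenate([half_taps, half_taps[-2::-1]])     # symmetric impulse response
    _, h = freqz(taps, worN=w)
    return float(np.sum((np.abs(h) - desired) ** 2))

n_particles, iters = 40, 300
pos = rng.uniform(-0.5, 0.5, (n_particles, n_half))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([approx_error(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)   # inertia + pulls
    pos = pos + vel
    f = np.array([approx_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("final squared error of the designed highpass filter:", approx_error(gbest))
```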
S1471772716300720
The concept of the data structure is part of the accepted and relatively unexplored background of the information disciplines. As such, the data structure is treated largely as a technological artefact, helping to support but somewhat isolated from considerations of institutional order. This paper develops an alternative consideration of the data structure which focuses upon the constitutive capacity of such artefacts within institutional order. This viewpoint builds upon literature from the language/action tradition, the more recent work of John Searle on social ontology as well as the small amount of work which proposes the actability of data structures. To help provide some grip on the slippery notion of institutional order we consider it here in terms of the notion of business patterns. The term business pattern is used to refer to a coherent and repeating sequence of action involving humans, machines (including IT systems) and other artefacts (such as data structures) appropriate to some way of organising. The paper also describes a way of visualising either existing business patterns or envisaged business patterns through the pattern ‘language’ of pattern comics. We ground our approach using material gathered within a research study of a key routine enacted within a large manufacturing organisation. Within this routine a mismatch was experienced between what the data structures within the production IT system were telling production managers and what was experienced on the ground by production operators. We show how an actability worldview of data structures expressed in terms of business patterns offers a fruitful way of making sense of problem situations such as this. It also suggests important ways of thinking differently about the nature of design in relation to data structures. “Until I know this sure uncertainty, I'll entertain the offered fallacy.” William Shakespeare, The Comedy of Errors
Instituting facts: Data structures and institutional order
S147403461300027X
Modeling the energy performance of existing buildings enables quick identification and reporting of potential areas for building retrofit. However, current modeling practices of using energy simulation tools do not model the energy performance of buildings at their element level. As a result, potential retrofit candidates caused by construction defects and degradations are not represented. Furthermore, due to manual modeling and calibration processes, their application is often time-consuming. Current application of 2D thermography for building diagnostics is also facing several challenges due to a large number of unordered and non-geo-tagged images. To address these limitations, this paper presents a new computer vision-based method for automated 3D energy performance modeling of existing buildings using thermal and digital imagery captured by a single thermal camera. First, using a new image-based 3D reconstruction pipeline which consists of Graphics Processing Unit (GPU)-based Structure-from-Motion (SfM) and Multi-View Stereo (MVS) algorithms, the geometrical conditions of an existing building are reconstructed in 3D. Next, a 3D thermal point cloud model of the building is generated by using a new 3D thermal modeling algorithm. This algorithm involves a one-time thermal camera calibration, deriving the relative transformation by forming the epipolar geometry between thermal and digital images, and the MVS algorithm for dense reconstruction. By automatically superimposing the 3D building and thermal point cloud models, 3D spatio-thermal models are formed, which enable the users to visualize, query, and analyze temperatures at the level of 3D points. The underlying algorithms for generating and visualizing the 3D spatio-thermal models and the 3D-registered digital and thermal images are presented in detail. The proposed method is validated for several interior and exterior locations of a typical residential building and an instructional facility. The experimental results show that inexpensive digital and thermal imagery can be converted into ubiquitous reporters of the actual energy performance of existing buildings. The proposed method expedites the modeling process and has the potential to be used as a rapid and robust building diagnostic tool.
An automated vision-based method for rapid 3D energy performance modeling of existing buildings using thermal and digital imagery
S1474034613000475
Dental implants and prosthetics form a growing industry that follows the increasing aged population, which incurs a higher percentage of tooth loss [1]. The dental implant sector is one of the most technically oriented fields in dentistry with many new techniques, devices, and materials being invented and put through clinical trials. Most innovations and technologies tend to be protected by intellectual property rights (IPRs) through patents. Thus, this research identifies the life spans of key dental implant (DI) technologies using patent analysis. Key patents and their frequently appearing phrases are analyzed for the construction of the DI ontology. Afterward, the life spans of DI technical clusters are defined based on the ontology schema. This research demonstrates the feasibility of using text mining and data mining techniques to extract key phrases from a set of DI patents with different patent classifications (e.g., UPC, IPC) as the basis for building a domain-specific ontology. The case study of ontological sub-clustering for dental implants demonstrates life span mapping of the technology and the ability to use clusters to represent stages of development and maturity in specific technology life cycles.
Constructing a dental implant ontology for domain specific clustering and life span analysis
S1474034613000669
The purpose of this research is twofold: first, to undertake a thorough appraisal of existing Input Variable Selection (IVS) methods within the context of time-critical and computation resource-limited dimensionality reduction problems; second, to demonstrate improvements to, and the application of, a recently proposed time-critical sensitivity analysis method called EventTracker to an environmental science industrial use-case, i.e., sub-surface drilling. Producing time-critical, accurate knowledge about the state of a system (effect) under computational and data acquisition (cause) constraints is a major challenge, especially if the knowledge required is critical to the system operation where the safety of operators or integrity of costly equipment is at stake. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific state of the system is the core challenge of this research. The main objective is then to identify which set of input data signals has a significant impact on the set of system state information (i.e. output). Through cause-effect analysis, the proposed technique supports the filtering of unsolicited data that can otherwise clog up the communication and computational capabilities of a standard supervisory control and data acquisition system. The paper analyzes the performance of input variable selection techniques from a series of perspectives. It then expands the categorization and assessment of sensitivity analysis methods in a structured framework that takes into account the relationship between inputs and outputs, the nature of their time series, and the computational effort required. The outcome of this analysis is that established methods have a limited suitability for use by time-critical variable selection applications. By way of a geological drilling monitoring scenario, the suitability of the proposed EventTracker Sensitivity Analysis method for use in high-volume and time-critical input variable selection problems is demonstrated.
Input variable selection in time-critical knowledge integration applications: A review, analysis, and recommendation paper
S147403461300075X
This paper proposes a new extended Process to Product Modeling (xPPM) method for integrated and seamless information delivery manual (IDM) and model view definition (MVD) development. Current IDM development typically uses Business Process Modeling Notation (BPMN) to represent a process map (PM). Exchange requirements (ERs) and functional parts (FPs) specify the information required when information is exchanged between different activities. A set of information requirements, specifically defined as a subset of Industry Foundation Classes (IFC), is called an MVD. Currently however, PMs, ERs, FPs, and MVDs are developed as separate documents through independent development steps. Moreover, even though ERs and FPs are designed to be reused, tracking and reusing the ERs and FPs developed by others is practically impossible. The xPPM method is proposed to provide a tight connection between PMs, ERs, FPs, and MVDs and to improve the reusability of predefined ERs and FPs. The theoretical framework is based on the approach of the Georgia Tech Process to Product Modeling (GTPPM) to suit the IDM development process. An xPPM tool is developed, and the validity of xPPM is analyzed through the reproduction of existing IDMs and MVDs. The benefits and limitations of xPPM and lessons from the applicability tests are discussed.
Extended Process to Product Modeling (xPPM) for integrated and seamless IDM and MVD development
S1474034613000761
Video recordings of earthmoving construction operations provide understandable data that can be used for benchmarking and analyzing their performance. These recordings further support project managers in taking corrective actions on performance deviations and in turn improve operational efficiency. Despite these benefits, manual stopwatch studies of previously recorded videos can be labor-intensive, may suffer from biases of the observers, and are impractical after substantial periods of observation. This paper presents a new computer vision-based algorithm for recognizing single actions of earthmoving construction equipment. This is a particularly challenging task, as equipment can be partially occluded in site video streams and usually comes in a wide variety of sizes and appearances. The scale and pose of the equipment actions can also significantly vary based on the camera configurations. In the proposed method, a video is initially represented as a collection of spatio-temporal visual features by extracting space–time interest points and describing each feature with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of the spatio-temporal features and action categories using a multi-class Support Vector Machine (SVM) classifier. This strategy handles noisy feature points arising from typical dynamic backgrounds. Given a video sequence captured from a fixed camera, the multi-class SVM classifier recognizes and localizes equipment actions. For the purpose of evaluation, a new video dataset is introduced which contains 859 sequences from excavator and truck actions. This dataset contains large variations of equipment pose and scale, and has varied backgrounds and levels of occlusion. The experimental results with average accuracies of 86.33% and 98.33% show that our supervised method outperforms previous algorithms for excavator and truck action recognition. The results hold promise for the applicability of the proposed method to construction activity analysis.
Vision-based action recognition of earthmoving equipment using spatio-temporal features and support vector machine classifiers
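The recognition stage described in the abstract above (learned distributions of local spatio-temporal features, classified with a multi-class SVM) is commonly realized as a bag-of-features pipeline. The sketch below shows that pipeline shape only, assuming scikit-learn and NumPy: the local descriptors are random placeholders standing in for HOG descriptors of space-time interest points, so the reported accuracy is at chance level and the feature extraction itself is not shown.

```python
# Sketch only: codebook + bag-of-features + multi-class SVM over placeholder descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_videos, descr_per_video, descr_dim, codebook_size = 60, 80, 72, 32

# Placeholder local descriptors: one (n_descriptors, descr_dim) array per video.
videos = [rng.normal(size=(descr_per_video, descr_dim)) for _ in range(n_videos)]
labels = rng.integers(0, 3, size=n_videos)          # e.g. dig / haul / swing action classes

# 1) Learn a visual codebook over all local descriptors.
codebook = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
codebook.fit(np.vstack(videos))

# 2) Represent each video as a normalized histogram of codeword occurrences.
def bag_of_features(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()

X = np.array([bag_of_features(v) for v in videos])

# 3) Multi-class SVM over the histograms (random data, so accuracy here is only chance level).
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:45], labels[:45])
print("held-out accuracy:", clf.score(X[45:], labels[45:]))
```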
S1474034613000773
The automatic detection of construction materials in images acquired on a construction site has been regarded as a critical topic. Recently, several data mining techniques have been used as a way to solve the problem of detecting construction materials. These studies have applied single classifiers to detect construction materials—and distinguish them from the background—by using color as a feature. Recent studies suggest that combining multiple classifiers (into what is called a heterogeneous ensemble classifier) would show better performance than using a single classifier. However, the performance of ensemble classifiers in construction material detection is not fully understood. In this study, we investigated the performance of six single classifiers and potential ensemble classifiers on three data sets: one each for concrete, steel, and wood. A heterogeneous voting-based ensemble classifier was created by selecting base classifiers which are diverse and accurate; their prediction probabilities for each target class were averaged to yield a final decision for that class. In comparison with the single classifiers, the ensemble classifiers performed better in the three data sets overall. This suggests that it is better to use an ensemble classifier to enhance the detection of construction materials in images acquired on a construction site.
Classification of major construction materials in construction environments using ensemble classifiers
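The ensemble described in the abstract above averages the predicted class probabilities of several diverse base classifiers. A minimal sketch of that soft-voting scheme with scikit-learn is below; the color-feature matrix and labels are synthetic placeholders, not the paper's concrete, steel and wood data sets, and the base classifiers are illustrative choices.

```python
# Sketch only: heterogeneous soft-voting ensemble on synthetic color features.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                              # e.g. mean HSV color per image patch
y = (X[:, 0] + 0.3 * rng.standard_normal(500) > 0.5).astype(int)      # material vs. background (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    voting="soft",        # average predicted probabilities across the base classifiers
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```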
S1474034613000943
Automatically monitoring construction progress or generating Building Information Models using site images collections – beyond point cloud data – requires semantic information such as construction materials and inter-connectivity to be recognized for building elements. In the case of materials such information can only be derived from appearance-based data contained in 2D imagery. Currently, the state-of-the-art texture recognition algorithms which are often used for recognizing materials are very promising (reaching over 95% average accuracy), yet they have mainly been tested in strictly controlled conditions and often do not perform well with images collected from construction sites (dropping to 70% accuracy and lower). In addition, there is no benchmark that validates their performance under real-world construction site conditions. To overcome these limitations, we propose a new vision-based method for material classification from single images taken under unknown viewpoint and site illumination conditions. In the proposed algorithm, material appearance is modeled by a joint probability distribution of responses from a filter bank and principal Hue-Saturation-Value color values and classified using a multiple one-vs.-all χ² kernel Support Vector Machine classifier. Classification performance is compared with the state-of-the-art algorithms both in computer vision and AEC communities. For experimental studies, a new database containing 20 typical construction materials with more than 150 images per category is assembled and used for validation. Overall, for material classification an average accuracy of 97.1% for 200 × 200 pixel image patches is reported. In cases where image patches are smaller, our method can synthetically generate additional pixels and maintain an accuracy competitive with those reported above (90.8% for 30 × 30 pixel patches). The results show the promise of the applicability of the proposed method and expose the limitations of the state-of-the-art classification algorithms under real world conditions. It further defines a new benchmark that could be used to measure the performance of future algorithms.
Vision-based material recognition for automated monitoring of construction progress and generating building information modeling from unordered site image collections
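The classification stage in the abstract above (one-vs.-all SVMs with a χ² kernel over appearance histograms) can be sketched with scikit-learn's precomputed-kernel interface. The histograms below are random placeholders standing in for the joint filter-response/HSV distributions, and the kernel parameter and regularization constant are illustrative assumptions, not the paper's values.

```python
# Sketch only: one-vs.-all SVMs with a precomputed chi-squared kernel on placeholder histograms.
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_train, n_test, n_bins, n_materials = 200, 50, 64, 5

def random_histograms(n):
    h = rng.random((n, n_bins))
    return h / h.sum(axis=1, keepdims=True)        # chi2 kernel expects non-negative features

X_train, X_test = random_histograms(n_train), random_histograms(n_test)
y_train = rng.integers(0, n_materials, n_train)

gamma = 1.0
K_train = chi2_kernel(X_train, X_train, gamma=gamma)   # Gram matrix: train vs. train
K_test = chi2_kernel(X_test, X_train, gamma=gamma)     # Gram matrix: test vs. train

clf = OneVsRestClassifier(SVC(kernel="precomputed", C=10.0))
clf.fit(K_train, y_train)
print("predicted material labels:", clf.predict(K_test)[:10])
```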
S1474034613000979
Considering its significant impact on construction projects, scaffolding, as part of the temporary facilities category in construction, must be thoroughly designed, planned, procured, and managed. Current practices in planning and managing scaffolding, however, are often manual and reactive, especially when a construction project is already underway. Widespread results are code-compliance problems, inefficiency, and waste in procuring and managing material for scaffolding systems. We developed a rule-based system that automatically plans scaffolding systems for pro-active management in Building Information Modeling (BIM). The scope of the presented work is limited to traditional pipe and board scaffolding systems. A rule set was prepared based on the current practice of planning and installing scaffolding systems. Our computational algorithms automatically recognize geometric and non-geometric conditions in building models and produce a scaffolding system design which a practitioner can use in the field. We implemented our automated scaffolding system in commercially available BIM software and tested it in a case study project. The system thoroughly identified the locations in need of scaffolding and generated the corresponding scaffolding design in BIM. Further results show that the proposed approach successfully generated a scaffolding system-loaded BIM model that can be utilized in communication, bills of materials, scheduling simulation, and as a benchmark for accurate field installation and performance measurement.
Automatic design and planning of scaffolding systems using building information modeling
S1474034613000980
The performance of a genetic algorithm is compared with that of particle swarm optimization for the constrained, non-linear, simulation-based optimization of a double flash geothermal power plant. Particle swarm optimization converges to better (higher) objective function values. The genetic algorithm is shown to converge more quickly and more tightly, resulting in a loss of solution diversity. Particle swarm optimization obtains solutions within 0.1% and 0.5% of the best known optimum in significantly fewer objective function evaluations than the genetic algorithm.
Comparison of genetic algorithm to particle swarm for constrained simulation-based optimization of a geothermal power plant
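The comparison summarized in the abstract above pits a genetic algorithm against particle swarm optimization on a constrained, simulation-based objective. The sketch below shows the GA side of such a set-up under stated assumptions: the double-flash plant simulation is replaced by a toy penalized objective with made-up design variables and a made-up constraint, and all hyper-parameters are illustrative. A particle swarm of the kind sketched earlier for the FIR design example could be run on the same objective for the comparison.

```python
# Sketch only: real-coded GA on a toy penalized constrained objective (NumPy assumed).
import numpy as np

rng = np.random.default_rng(3)
bounds = np.array([[0.0, 10.0], [0.0, 10.0]])                 # two hypothetical design variables

def objective(x):
    power = 50.0 - (x[0] - 3.0) ** 2 - (x[1] - 6.0) ** 2      # toy "net power output"
    violation = max(0.0, x[0] + x[1] - 8.0)                   # toy operating constraint
    return -(power - 100.0 * violation)                       # minimize penalized negative power

pop_size, n_gen, sigma = 30, 100, 0.5
pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, 2))
best_x, best_f = None, np.inf

for _ in range(n_gen):
    fitness = np.array([objective(ind) for ind in pop])
    if fitness.min() < best_f:
        best_f, best_x = fitness.min(), pop[fitness.argmin()].copy()
    children = []
    for _ in range(pop_size):
        i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
        p1 = pop[i[fitness[i].argmin()]]                      # tournament selection
        p2 = pop[j[fitness[j].argmin()]]
        alpha = rng.random()
        child = alpha * p1 + (1.0 - alpha) * p2               # blend crossover
        child += rng.normal(0.0, sigma, size=2)               # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(children)

print("best design:", best_x, "penalized objective:", best_f)
```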
S1474034613000992
The engineering of laminated composite structures is a complex task for design engineers and manufacturers, requiring significant management of manufacturing process and materials information. Ontologies are becoming increasingly commonplace for semantically representing knowledge in a formal manner that facilitates sharing of rich information between people and applications. Moreover, ontologies can support first-order logic and reasoning by rule engines that enhance automation. To support the engineering of laminated composite structures, this work developed a novel Semantic LAminated Composites Knowledge management System (SLACKS) that is based on a suite of ontologies for laminated composites materials and design for manufacturing (DFM) and their integration into a previously developed engineering design framework. By leveraging information from CAD/FEA tools and materials data from online public databases, SLACKS uniquely enables software tools and people to interoperate, to improve communication and automate reasoning during the design process. With SLACKS, this paper shows the power of integrating relevant domains of the product life cycle, such as design, analysis, manufacturing and materials selection through the engineering case study of a wind turbine blade. The integration reveals a usable product-life-cycle knowledge tool that can facilitate efficient knowledge creation, retrieval and reuse from design inception to manufacturing of the product.
A semantic knowledge management system for laminated composites
S1474034614000020
Dysarthria is a neurological impairment of controlling the motor speech articulators that compromises the speech signal. Automatic Speech Recognition (ASR) can be very helpful for speakers with dysarthria because the disabled persons are often physically incapacitated. Mel-Frequency Cepstral Coefficients (MFCCs) have been proven to be an appropriate representation of dysarthric speech, but the question of which MFCC-based feature set represents dysarthric acoustic features most effectively has not been answered. Moreover, most of the current dysarthric speech recognisers are either speaker-dependent (SD) or speaker-adaptive (SA), and they perform poorly in terms of generalisability as a speaker-independent (SI) model. First, by comparing the results of 28 dysarthric SD speech recognisers, this study identifies the best-performing set of MFCC parameters, which can represent dysarthric acoustic features to be used in Artificial Neural Network (ANN)-based ASR. Next, this paper studies the application of ANNs as a fixed-length isolated-word SI ASR for individuals who suffer from dysarthria. The results show that the speech recognisers trained by the conventional 12 coefficients MFCC features without the use of delta and acceleration features provided the best accuracy, and the proposed SI ASR recognised the speech of the unforeseen dysarthric evaluation subjects with word recognition rate of 68.38%.
Artificial neural networks as speech recognisers for dysarthric speech: Identifying the best-performing set of MFCC parameters and studying a speaker-independent approach
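The abstract above identifies 12 MFCCs without delta or acceleration coefficients, fed to a neural network, as the best-performing combination. The sketch below illustrates that feature/classifier pairing only, assuming the librosa and scikit-learn packages: the "utterances" are synthetic noisy tones standing in for dysarthric speech, and the network size and word classes are hypothetical.

```python
# Sketch only: 12 MFCCs per utterance + a small feed-forward network, on synthetic audio.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
sr, duration = 16000, 0.5

def fake_word(f0):
    """Synthesize a placeholder 'utterance': a noisy tone at fundamental frequency f0."""
    t = np.arange(int(sr * duration)) / sr
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

def mfcc_vector(audio):
    m = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=12)   # (12, n_frames), no deltas
    return m.mean(axis=1)                                 # fixed-length utterance descriptor

words = {0: 120.0, 1: 180.0, 2: 260.0}                    # three hypothetical "word" classes
X, y = [], []
for label, f0 in words.items():
    for _ in range(30):
        X.append(mfcc_vector(fake_word(f0 + rng.normal(0.0, 5.0))))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```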
S1474034614000032
An efficient computational methodology for shape acquisition, processing and representation is developed. It includes 3D computer vision by applying triangulation and stereo-photogrammetry for high-accuracy 3D shape acquisition. Resulting huge 3D point clouds are successively parameterized into mathematical surfaces to provide for compact data-set representation, yet capturing local details sufficiently. B-spline surfaces are employed as parametric entities in fitting to point clouds resulting from optical 3D scanning. Beyond the linear best-fitting algorithm with control points as fitting variables, an enhanced non-linear procedure is developed. The set of best fitting variables in minimizing the approximation error norm between the parametric surface and the 3D cloud includes the control points coordinates. However, they are augmented by the set of position parameter values which identify the respectively closest matching points on the surface for the points in the cloud. The developed algorithm is demonstrated to be efficient on demanding test cases which encompass sharp edges and slope discontinuities originating from physical damage of the 3D objects or shape complexity.
3D shape acquisition and integral compact representation using optical scanning and enhanced shape parameterization
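The parameterization step in the abstract above fits B-spline surfaces to dense scanned point clouds. A minimal sketch of the basic least-squares smoothing fit is below, using SciPy's bivariate spline routines on a synthetic point cloud; the paper's enhanced non-linear procedure, which also optimizes the points' surface parameters, is not reproduced, and the smoothing factor is a placeholder.

```python
# Sketch only: smoothing B-spline surface fit to a synthetic scanned point cloud (SciPy assumed).
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(0)
n_points = 500
x = rng.uniform(-1.0, 1.0, n_points)
y = rng.uniform(-1.0, 1.0, n_points)
z = np.sin(np.pi * x) * np.cos(np.pi * y) + 0.02 * rng.standard_normal(n_points)  # noisy "scan"

# Least-squares smoothing fit of a bicubic B-spline surface z = f(x, y).
tck = bisplrep(x, y, z, kx=3, ky=3, s=n_points * 0.02 ** 2)

# Evaluate the compact parametric representation on a regular grid.
xg = np.linspace(-1.0, 1.0, 50)
yg = np.linspace(-1.0, 1.0, 50)
zg = bisplev(xg, yg, tck)                                  # (50, 50) array of surface heights

# Approximation error of the fitted surface at the original scan points.
residual = z - np.array([bisplev(xi, yi, tck) for xi, yi in zip(x, y)])
print("RMS residual:", float(np.sqrt(np.mean(residual ** 2))))
```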
S1474034614000044
Construction work typically means production at shifting locations. Moving materials, equipment and men efficiently from place to place, in and in between projects, depends on good coordination and requires specialized information systems. The key to such information systems is appropriate approaches to collecting de-centralized sensor readings and to processing and distributing them to multiple end users at different locations both during the construction process and after the project is finished. This paper introduces a framework for the support of such distributed data collection and management to foster real-time data collection and processing along with the provision of opportunities to retain highly precise data for post-process analyses. In particular, the framework suggests a scheme to benefit from exploiting readings from the same sensors in varying levels of detail for informing different levels of decision making: operational, tactical, and strategic. The sensor readings collected in this way are not only potentially useful to track, assess, and analyse construction operations, but can also serve as a reference during the maintenance stage. To this extent, the framework contributes to the existing body of knowledge of construction informatics. The operationality of the framework is demonstrated by developing and applying two on-site information systems to track asphalt paving operations.
A distributed data collection and management framework for tracking construction operations
S1474034614000160
Simulation of construction activities in a virtual environment can prevent constructability problems and increase efficiency and safety at the physical construction site. The computation for collision checks creates a bottleneck during these simulations. A typical construction simulation requires collision checks to be performed between all pairs among thousands or even millions of objects, and each of these checks must be completed within 1/10th or even 1/20th of a second to provide a smooth real-time simulation. Therefore, the reduction of computational cost is paramount. An effective and commonly used method is to cluster the objects into groups and use a larger surrounding boundary shape in place of the individual objects. This significantly reduces the computational effort required. However, clustering objects manually is usually time consuming and is difficult especially for large scenarios. In this paper, we develop an automatic clustering method, called the Propagation Clustering Method (PCM). PCM employs k-means clustering to iteratively cluster objects into multiple groups. A quality index is defined to evaluate the clustering results. Once the clustering results satisfy the predefined quality requirement, the group of objects is replaced by a rectangular box using the axis-aligned bounding box (AABB) algorithm. The rectangular box is then stored in a tree structure. To verify the feasibility of the proposed PCM, we defined three testing scenarios: a site with scattered objects, such as a small plant construction; a common construction site; and a large site with both common structures and scattered objects. Experimental results show that PCM is effective for automatically grouping objects in virtual construction scenarios. It can significantly reduce the effort required to prepare a construction simulation.
Automatic clustering method for real-time construction simulation
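The core of the method in the abstract above is to group objects with k-means and replace each group by an axis-aligned bounding box (AABB) for cheap broad-phase collision checks. The sketch below illustrates that step with scikit-learn and NumPy; the scene (object centroids and extents) is synthetic, and the iterative quality check that decides when clusters are good enough is reduced here to a fixed number of clusters.

```python
# Sketch only: k-means grouping of scene objects followed by per-group AABB construction.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder scene: object centers (x, y, z) and half-sizes of their own bounding boxes.
centers = np.vstack([rng.normal(loc, 2.0, (100, 3)) for loc in ((0, 0, 0), (30, 5, 0), (10, 40, 0))])
half_sizes = rng.uniform(0.2, 1.0, centers.shape)

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(centers)

group_boxes = []
for g in range(k):
    mask = labels == g
    lo = (centers[mask] - half_sizes[mask]).min(axis=0)   # lower corner of the group's AABB
    hi = (centers[mask] + half_sizes[mask]).max(axis=0)   # upper corner of the group's AABB
    group_boxes.append((lo, hi))

def aabb_overlap(a, b):
    """Cheap broad-phase test: two AABBs overlap iff their intervals overlap on every axis."""
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

print("groups 0 and 1 overlap:", aabb_overlap(group_boxes[0], group_boxes[1]))
```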
S1474034614000172
The medical equipment industry has been one of the fastest-growing sectors of the decade with predicted global sales reaching US$430 billion in 2017 [22]. During the period from 1995 to 2008, the patent applications in medical technology increased rapidly worldwide (World Intellectual Property Organization, 2012). Patent analysis, although useful in forecasting technology development trends, has posed a challenging analysis task since the volume and diversity of new patent applications have surpassed the ability of regular firms and research teams to process and identify relevant information. Further, medical-related technologies rely on clinical trials to validate and gain regulatory approval for patient treatment even though patents, protecting the intellectual property rights of inventors, have been granted. This research focuses on developing a knowledge-centric methodology and system to analyze and assess viable medical technology innovations and trends considering both patents and clinical reports. Specifically, the design innovations of dental implant connections are used as a case study. A novel and generic methodology combining ontology-based patent analysis and clinical meta-analysis is developed to analyze and identify the most effective patented techniques in the dental implant field. The research establishes and verifies a computer-supported analytical approach and system for the strategic prediction of medical technology development trends.
A knowledge centric methodology for dental implant technology assessment using ontology based patent analysis and clinical meta-analysis
S1474034614000184
Although the integration of engineering data within the framework of product data management systems has been successful in the recent years, the holistic analysis (from a systems engineering perspective) of multi-disciplinary data or data based on different representations and tools is still not realized in practice. At the same time, the application of advanced data mining techniques to complete designs is very promising and bears a high potential for synergy between different teams in the development process. In this paper, we propose shape mining as a framework to combine and analyze data from engineering design across different tools and disciplines. In the first part of the paper, we introduce unstructured surface meshes as meta-design representations that enable us to apply sensitivity analysis, design concept retrieval and learning as well as methods for interaction analysis to heterogeneous engineering design data. We propose a new measure of relevance to evaluate the utility of a design concept. In the second part of the paper, we apply the formal methods to passenger car design. We combine data from different representations, design tools and methods for a holistic analysis of the resulting shapes. We visualize sensitivities and sensitive cluster centers (after feature reduction) on the car shape. Furthermore, we are able to identify conceptual design rules using tree induction and to create interaction graphs that illustrate the interrelation between spatially decoupled surface areas. Shape data mining in this paper is studied for a multi-criteria aerodynamic problem, i.e. drag force and rear lift, however, the extension to quality criteria from different disciplines is straightforward as long as the meta-design representation is still applicable.
Shape mining: A holistic data mining approach for engineering design
S1474034614000196
The Blocks Relocation Problem consists in minimizing the number of movements performed by a gantry crane in order to retrieve a subset of containers placed into a bay of a container yard according to a predefined order. A study on the mathematical formulations proposed in the related literature reveals that they are not suitable for its solution due to their high computational burden. Moreover, in this paper we show that, in some cases, they do not guarantee the optimality of the obtained solutions. In this regard, several optimization methods based on the well-known A∗ search framework are introduced to tackle the problem from an exact point of view. Using our A∗ algorithm we have corrected the optimal objective function values of 17 out of 45 instances considered by Caserta et al. (2012) [4]. In addition, this work presents a domain-specific knowledge-based heuristic algorithm to find high-quality solutions within short computational times. It is based on finding the most promising positions in the bay to which to relocate those containers currently located above the next one to be retrieved, in such a way that they do not require any additional relocation operation in the future. The computational tests indicate the higher effectiveness and efficiency of the suggested heuristic when solving real-world scenarios in comparison with the most competitive approaches from the literature.
A domain-specific knowledge-based heuristic for the Blocks Relocation Problem
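For illustration, the sketch below implements one simple relocation rule of the kind the abstract describes: containers stacked above the next container to be retrieved are moved, where possible, to stacks on which they will not block an earlier retrieval. The bay layout and priorities are invented, and the rule is a generic illustration rather than the authors' heuristic.

# Bay as a list of stacks; integers are retrieval priorities (1 is retrieved first).
# The last element of each stack is the topmost container.
bay = [[3, 1], [5, 2], [4, 6]]

def retrieve_all(bay):
    moves = 0
    for target in sorted(c for s in bay for c in s):
        stack = next(s for s in bay if target in s)
        # Relocate everything stacked above the target container.
        while stack[-1] != target:
            block = stack.pop()
            others = [s for s in bay if s is not stack]
            # Prefer a stack whose smallest priority is larger than the moved
            # container, so it will not need to be relocated again.
            safe = [s for s in others if not s or min(s) > block]
            dest = safe[0] if safe else max(others, key=lambda s: min(s) if s else float("inf"))
            dest.append(block)
            moves += 1
        stack.pop()  # lift the target container out of the bay
        moves += 1
    return moves

print("crane moves:", retrieve_all(bay))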
S1474034614000305
In pipe spool assemblies used in construction, pre-fabrication errors inevitably occur due to the complexity of the tasks involved in the pipe spool fabrication process, the inaccuracy of the tools employed for performing these tasks, human error, and inadequate inspection and monitoring during the process. Permanent deflections may also occur during shipment and transportation. After delivery at construction sites, defective spools must be detected and further consideration given to the erection of the spools to the specified tolerance levels; otherwise, the repair and realignment associated with rework can cause schedule delays and consequent substantial cost increases. This paper presents an automated approach for monitoring and assessing fabricated pipe spools using automated scan-to-BIM registration. Defects are detected through a neighborhood-based Iterative Closest Point (ICP) approach for the registration process. While this technique can be broadly employed, this paper focuses on industrial construction facilities with particular emphasis on pipe spool assemblies. Experiments show that the proposed approach can be employed for the automatic and continual monitoring of such assemblies throughout fabrication, assembly and erection to enable timely detection and characterization of deviations. The main contribution of the work presented in this paper is an automated 3D inspection framework and algorithms for construction assemblies in general and pipe spools in particular.
Automated 3D compliance checking in pipe spool fabrication
S1474034614000329
An important challenge in mechatronic system design is to select a feasible system architecture that satisfies all requirements. This article describes (i) the necessary concepts that a system architect needs to be able to formally and declaratively describe the design space of mechanical design synthesis problems, thereby minimizing accidental complexity; (ii) how a Domain Specific Language based on the SysML modeling language and the Object Constraint Language (OCL) can be used to create this model of the design space; and (iii) an iterative process for arriving at a formal model of the design space. This model describes the design space independent of any (knowledge of a) particular solving technology for the Design Space Exploration. Furthermore, the information in the model allows the most appropriate solving strategy to be selected for a particular design synthesis problem. The different concepts are illustrated on the example of automated synthesis of a gearbox.
Describing the design space of mechanical computational design synthesis problems
S1474034614000330
Mechatronic design aims to integrate the models developed during the mechatronic design process, in order to be able to optimize the overall mechatronic system performance. A lot of work has been done in the last few years by researchers and software developers to achieve this objective. However, the level of integration does not yet meet the purposes of mechatronic system designers, particularly when dealing with modeling changes. Therefore, new methodologies are required to manage the multi-view complexity of mechatronic design. In this paper, we propose a multi-agent methodology for the multi-abstraction modeling issue of mechatronic systems. The major contribution is a new method for decomposing the multi-level design into agents linked by relationships. Each agent represents an abstraction level, and both agents and relationships are managed with rules. By considering an application to a piezoelectric energy harvesting system, we show how we associate agents, rules and inter-level relationships with multi-abstraction modeling. We also show how modeling errors are identified using this approach.
A multi-agent methodology for multi-level modeling of mechatronic systems
S1474034614000342
Mechatronic systems are characterized by the synergic interaction between their components from different technological domains. These interactions enable the system to achieve more functionalities than the sum of the functionalities of its components considered independently. Traditional design approaches are no longer adequate and there is a need for new synergic and multidisciplinary design approaches with close cooperation between specialists from different disciplines. SysML is a general purpose multi-view language for systems modeling and is identified as a support to this work. In this paper, a SysML-based methodology is proposed. This methodology consists of two phases: a black box analysis with an external point of view that provides a comprehensive and consistent set of requirements, and a white box analysis that progressively leads to the internal architecture and behavior of the system.
A SysML-based methodology for mechatronic systems architectural design
S1474034614000366
Building related data tends to be generated, used and retained in a domain-specific manner. The lack of interoperability between data domains in the architecture, engineering and construction (AEC) industry inhibits the cross-domain use of data at an enterprise level. Semantic web technologies provide a possible solution to some of the noted interoperability issues. Traditional methods of information capture fail to take into account the wealth of soft information available throughout a building. Several sources of information are not included in performance assessment frameworks, including social media, occupant communication, mobile communication devices, occupancy patterns, human resource allocations and financial information. The paper suggests that improved data interoperability can aid the integration of untapped silos of information into existing structured performance measurement frameworks, leading to greater awareness of stakeholder concerns and building performance. An initial study of how building-related data can be published following semantic web principles and integrated with other ‘soft-data’ sources in a cross-domain manner is presented. The paper goes on to illustrate how data sources from outside the building operation domain can be used to supplement existing sources. Future work will include the creation of a semantic web based performance framework platform for building performance optimisation.
Using semantic web technologies to access soft AEC data
S147403461400038X
The design process of mechatronic devices, which involves experts from different disciplines working together, is subject to limited time and resources. These experts normally have their own domain-specific design methods and tools, which can lead to incompatibilities when they need to work together using these methods and tools. Having a proper framework which integrates different design tools is of interest, as such a framework can prevent incompatibilities between parts during the design process. In this paper, we propose our co-modelling methodology and co-simulation tools integration framework, which helps to maintain the domain-specific properties of the model components during the co-design process of various mechatronic devices. To avoid expensive rework later in the design phase and even possible system failure, fault modelling and a layered structure with fault-tolerance mechanisms for the controller software are introduced. Finally, a practical mechatronic device is discussed to illustrate in detail the methods and tools presented in this paper.
A co-modelling method for solving incompatibilities during co-design of mechatronic devices
S1474034614000391
Building Information Models (BIM) are comprehensive digital representations of buildings, which provide a large set of information originating from the different disciplines involved in the design, construction and operation processes. Moreover, accessing the data needed for a specific downstream application scenario is a challenging task in large-scale BIM projects. Several researchers recently proposed using formal query languages for specifying the desired information in a concise, well-defined manner. One of the main limitations of the languages introduced so far, however, is the inadequate treatment of geometric information. This is a significant drawback, as buildings are inherently spatial objects and qualitative spatial relationships accordingly play an important role in the analysis and verification of building models. In addition, the filters needed in specific data exchange scenarios for selecting the required information can be built from spatial objects and their relations. This lack of spatial functionality in BIM query languages is addressed by the Query Language for Building Information Models (QL4BIM), which provides metric, directional and topological operators for defining filter expressions with qualitative spatial semantics. This paper focuses on the topological operators provided by the language. In particular, it presents a new implementation method based on the boundary representation of the operands which outperforms the previously presented octree-based approaches. The paper discusses the developed algorithms in detail and presents extensive performance tests.
Processing of Topological BIM Queries using Boundary Representation Based Methods
S1474034614000408
It is hard to imagine living in a building without electricity and a heating or cooling system these days. Factories and data centers are equally dependent on the continuous functioning of these systems. As beneficial as this development is for our daily life, the consequences of a failure are critical. Malfunctioning power supplies or temperature regulation systems can cause the shutdown of an entire factory or data center. Heat and air conditioning losses in buildings lead to a large waste of the limited energy resources and pollute the environment unnecessarily. To detect these flaws as quickly as possible and to prevent the negative consequences, constant monitoring of power lines and heat sources is necessary. To this end, we propose a fully automatic system that creates 3D thermal models of indoor environments. The proposed system consists of a mobile platform that is equipped with a 3D laser scanner, an RGB camera and a thermal camera. A novel 3D exploration algorithm ensures efficient data collection that covers the entire scene. The data from all sensors collected at different positions is joined into one common reference frame using calibration and scan matching. In the post-processing step a model is built and points of interest are automatically detected. A viewer is presented that aids experts in analyzing the heat flow and localizing and identifying heat leaks. Results are shown that demonstrate the functionality of the system.
A mobile robot based system for fully automated thermal 3D mapping
S1474034614000421
Space layouts are created by designers to model a building’s spaces and related physical objects. Building services designers commonly reuse space layouts created by architectural designers to develop their designs. However, reuse tends to be limited due to differences in designers’ space views. In order to address this issue of modeling multiple space views, we define a set of novel operations that can be used by designers to generate new space layouts from existing layouts. Fundamental operations include select, aggregate, and decompose operations. The select operation facilitates reuse of space layouts created in building information modeling (BIM) authoring systems. Signatures and processing of these operations are defined. We use an existing schema for network-based space layouts to represent space layouts. In a network-based space layout, specific spatial relations between layout elements are explicitly modeled as a directed, weighted graph or network. Processing of certain operations involves traversal of a spatial relation network with graph algorithms to determine layout modifications. Symmetric difference and overlay operations are defined as additional operations. They are composed of union, intersect, and subtract operations, which are fundamental operations. Fundamental and additional layout operations may be composed into expressions to model domain-specific space views. We have extended an existing layout modeling system with implementations of these layout operations. The system relies on geometric and solid modeling as well as graph libraries to represent layouts and process operations. The feasibility of modeling multiple space views with layout operation expressions is shown with an example in which a security lighting layout of a floor of an existing office building is automatically generated from an architectural layout.
Operations on network-based space layouts for modeling multiple space views of buildings
S1474034614000445
The quality of initial ideas is considered a critical determinant for successful new product development (NPD). This study presents an ideation method for generating new product ideas. The aims of the proposed method are (1) to clarify and identify potential problems involved in the knowledge domain of the product design through the Su-field enhanced concept mapping diagram; (2) to abstract inventive problems and generate novel product ideas by means of the theory of inventive problem solving (TRIZ) methodology; and, (3) to develop an effective decision aiding method for evaluating alternative ideas and determining promising product ideas using fuzzy linguistic evaluation techniques. The applicability of the ideation method is demonstrated through a case study of an air purifier design. The theoretical and practical implications of the ideation method are also discussed.
An ideation method for generating new product ideas using TRIZ, concept mapping, and fuzzy linguistic evaluation techniques
S1474034614000469
Global Navigation Satellite Systems (GNSS) are widely used to document the on- and off-site trajectories of construction equipment. Before analyzing the collected data for better understanding and improving construction operations, the data need to be freed from outliers. Eliminating outliers is challenging. While manually identifying outliers is a time-consuming and error-prone process, automatic filtering is exposed to false positive errors, which can lead to eliminating accurate trajectory segments. This paper addresses this issue by proposing a hybrid filtering method, which integrates experts’ decisions. The decisions are operationalized as parameters used to search for the next outliers and are based on visualization of sensor readings and the human-generated notes that describe specifics of the construction project. A specialized open-source software prototype was developed and applied by the authors to illustrate the proposed approach. The software was utilized to filter outliers in sensor readings collected during earthmoving and asphalt paving projects that involved five different types of common construction equipment.
An information fusion approach for filtering GNSS data sets collected during construction operations
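As an illustration of how an expert decision can be operationalized as a filtering parameter, the sketch below drops GNSS fixes that imply an implausible travel speed; the speed cap is the expert-chosen parameter, and the fixes are invented. This is a generic example, not the hybrid method itself.

import math

# Hypothetical GNSS fixes: (timestamp in s, easting in m, northing in m).
fixes = [(0, 0.0, 0.0), (10, 15.0, 5.0), (20, 900.0, 40.0), (30, 45.0, 12.0)]

def filter_by_speed(fixes, max_speed):
    """Drop fixes implying a speed above max_speed (m/s) relative to the
    last accepted fix; max_speed is the expert-chosen parameter."""
    kept = [fixes[0]]
    for t, x, y in fixes[1:]:
        t0, x0, y0 = kept[-1]
        speed = math.hypot(x - x0, y - y0) / (t - t0)
        if speed <= max_speed:
            kept.append((t, x, y))
    return kept

# An expert reviewing an earthmoving trip might cap plausible speed at 20 m/s.
print(filter_by_speed(fixes, max_speed=20.0))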
S1474034614000470
Construction sites are rugged, dynamic, and complex, involving a large number of objects that continuously change their locations and occupied spaces. Being able to accurately locate and track site objects is critical to project management and control to meet the increasing demands for efficiency and safety in modern construction projects. Automated locating technologies such as Radio Frequency Identification (RFID) have proven to be beneficial in tracking some construction site objects. The biggest challenge in applying the RFID technology is that the received signal strength (RSS) varies over time and location, and there is no direct relationship between signal strength and detection range, leading to low positional accuracies in the estimated tag locations. This paper presents an algorithm termed BConTri that combines a “boundary condition method” and the trilateration concept to estimate tag location in three-dimensional (3D) real world coordinates from four or more RFID readers equipped with GPS. This study also developed a prototype RFID locating system that implemented the newly created BConTri algorithm. A comprehensive assessment of the positional accuracy was applied to field experiment results and a high accuracy of the algorithm was observed. A measure of the spatial dilution of reader distribution was formulated and a linear relationship between this measure and the locating accuracy was observed, forming the basis for using this measure as a reliable indicator of the resulting locating accuracy. This accuracy indicator helps in estimating the accuracy and quality control of RFID-based locating and tracking systems in construction.
A boundary condition based algorithm for locating construction site objects using RFID and GPS
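The trilateration component can be sketched as a linearized least-squares solve from reader positions and range estimates. The snippet below (reader coordinates and ranges are invented) shows this generic multilateration step only; it omits the boundary-condition part of BConTri.

import numpy as np

# Hypothetical GPS-derived reader positions (m) and RSS-estimated tag ranges (m).
readers = np.array([[0.0, 0.0, 3.0],
                    [30.0, 0.0, 3.0],
                    [30.0, 20.0, 3.0],
                    [0.0, 20.0, 6.0]])
ranges = np.array([18.7, 19.5, 20.4, 19.9])

# Linearize the sphere equations by subtracting the first one, then solve
# the resulting linear system in a least-squares sense.
p0, r0 = readers[0], ranges[0]
A = 2.0 * (p0 - readers[1:])
b = (ranges[1:] ** 2 - r0 ** 2
     + np.sum(p0 ** 2) - np.sum(readers[1:] ** 2, axis=1))
tag_xyz, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated tag position:", tag_xyz)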
S1474034614000482
The collaborative design of a complicated mechanical product often involves conflicting multidisciplinary objectives, and a key problem is thus conflict resolution and coordination among the different disciplines. Since characteristics such as cooperative competition, professional dependence, compromise and overall utility exist in multidisciplinary collaborative design (MCD), an effective way to gradually eliminate the conflicts among the multiple disciplines and reach an agreement is negotiation, through which a compromise solution that satisfies all parties is obtained. By comprehensively analyzing the characteristics of MCD and considering the benefit equilibrium between discipline individuals and the team, a negotiation strategy is presented which maximizes the union satisfaction degree of the system overall objective under the premise of ensuring a high satisfaction degree for each discipline’s local objective. A design action of a discipline is abstractly expressed as a concession in the negotiation strategy, and a negotiation model for MCD is generated by establishing the relation between concession and satisfaction degree. Through the relation between satisfaction degree and objective function, the mapping between the satisfaction degree domain and the physical domain is built to obtain the design solution. A negotiation process is planned, and a negotiation system framework is designed to support the negotiation among multiple disciplines and help the different disciplines rapidly reach a consistent compromise solution. A design example of an automotive friction clutch is given to illustrate the proposed method.
A negotiation methodology for multidisciplinary collaborative product design
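A minimal sketch of the kind of aggregation the abstract refers to: per-discipline satisfaction degrees combined into a union satisfaction degree using concession weights and a power exponent. The weighted power mean used here, and all numbers, are illustrative assumptions rather than the paper's exact formulation.

# Hypothetical per-discipline satisfaction degrees in [0, 1] for one candidate
# clutch design, and concession weighting factors that sum to 1.
satisfaction = {"structure": 0.82, "thermal": 0.74, "manufacturing": 0.69}
weights = {"structure": 0.4, "thermal": 0.35, "manufacturing": 0.25}
p = 2.0            # strategy power exponent (illustrative)
threshold = 0.6    # minimum acceptable satisfaction per discipline

def union_satisfaction(satisfaction, weights, p):
    """Weighted power mean of the discipline satisfaction degrees."""
    return sum(w * satisfaction[d] ** p for d, w in weights.items()) ** (1.0 / p)

if all(s >= threshold for s in satisfaction.values()):
    print("union satisfaction degree:", round(union_satisfaction(satisfaction, weights, p), 3))
else:
    print("at least one discipline is below its acceptance threshold")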
S1474034614000494
The growing trend for delivering physical products to customers as parts of product service systems (PSS) is creating a need for a new generation of Computer Aided Design (CAD) system to support the design of PSS: so-called “PSS-CAD”. Key research issues in the development of such systems include building understanding of the kinds of applications that designers of PSS might need and the establishment of well-founded representation schemes to underpin and support communication between PSS-CAD systems. Recent literature includes numerous descriptions of integrated PSS development processes, PSS-CAD tools to support these processes and early meta-models to provide information support. This paper complements this work by proposing a representation scheme that is a key prerequisite to achieving the interoperability between PSS-CAD systems which would be necessary to support the deployment of integrated PSS development processes in industry. The representation scheme, a form of meta-model, draws on learning from the product definition community that emerged in the 1970s in response to a need for interoperability between the different shape-based CAD systems that were being developed at the time. The initial focus on shape representation has developed to digital product definitions that define the design of a product coupled with meta-data recording details of processes by which the design was created and, more recently, supported through-life. Similarly, PSS-related information includes both PSS definitions, to support the lifecycles of physical products and associated services, and meta-data needed to support the management of PSS development processes. This paper focuses on information requirements for the definition of service elements of PSS and relationships with product elements and service actors. These requirements are derived from earlier work on the use of service blueprinting for the visualisation and mapping of service activities to deliver different types of service contract. Key information requirements addressed include the need to represent service process flow and breakdown structures, relationships between service and product elements, substitution relationships, and service variants. A representation scheme is proposed and demonstrated through application to a PSS case study. The representation scheme is built on a generic information architecture that has already been applied to problems of product definition; as such there is an underlying compatibility that offers real promise in the future realisation of integrated PSS development processes.
A representation scheme for digital product service system definitions
S1474034614000500
The planning of large infrastructure projects such as inner-city subway tracks is a highly collaborative process in which numerous experts from different domains are involved. While performing the planning task, widely differing scales have to be taken into consideration, ranging from the kilometer scale for the general routing of the track down to the centimeter scale for the detailed design of connection points. Currently there is no technology available which supports both the collaborative as well as the multi-scale aspect in an adequate manner. To fill this technological gap and better support the collaborative design and engineering activities involved with infrastructure planning, this paper introduces a new methodology which allows engineers to simultaneously manipulate a shared multi-scale tunnel model. This methodology comprises two main aspects. The first aspect is a multi-scale model for shield tunnels, which provides five different levels of detail (LoD) representing the different levels of abstraction required throughout the planning progress. The second aspect is a conceived collaboration platform, which enables simultaneous modifications of the multi-scale model by multiple users. In existing multi-scale approaches, where the individual representations are stored independently from each other, there is a high risk of creating inconsistencies, in particular in the highly dynamic collaborative planning context. To overcome this issue, the concept presented in this paper makes use of procedural modeling techniques for creating explicit dependencies between the geometric entities on the different LoDs. This results in a highly flexible, yet inherently consistent multi-scale model where the manipulation of elements on coarser LoDs results in an automated update of all dependent elements on finer LoDs. The proposed multi-scale model forms a well-suited basis for realizing the collaboration concept, which allows several experts to simultaneously manipulate a shared infrastructure model on various scales while using the different design tools they are accustomed to. The paper discusses in detail the principles and advantages of the proposed multi-scale modeling approach as well as its application in the context of collaborative tunnel design. The paper concludes with a case study of a large infrastructure project: a new inner-city subway tunnel in Munich, Germany.
Synchronous collaborative tunnel design based on consistency-preserving multi-scale models
S1474034614000512
Disassembly Sequence Planning (DSP) is a challenging NP-hard combinatorial optimization problem. As a new and promising population-based evolutionary algorithm, the Teaching–Learning-Based Optimization (TLBO) algorithm has been successfully applied to various research problems. However, TLBO is not directly applicable or effective for DSP optimization problems with discrete solution spaces and complex disassembly precedence constraints. This paper presents a Simplified Teaching–Learning-Based Optimization (STLBO) algorithm for solving DSP problems effectively. The STLBO algorithm inherits the main idea of the teaching–learning-based evolutionary mechanism from the TLBO algorithm, while the realization method for the evolutionary mechanism and the adaptation methods for the algorithm parameters are different. Three new operators are developed and incorporated in the STLBO algorithm to ensure its applicability to DSP problems with complex disassembly precedence constraints: i.e., a Feasible Solution Generator (FSG) used to generate a feasible disassembly sequence, and a Teaching Phase Operator (TPO) and a Learning Phase Operator (LPO) used to learn and evolve the solutions towards better ones by applying the method of precedence preservation crossover operation. Numerical experiments with case studies on waste product disassembly planning have been carried out to demonstrate the effectiveness of the designed operators, and the results show that the developed algorithm performs better than other relevant algorithms on a set of public benchmarks.
Disassembly sequence planning using a Simplified Teaching–Learning-Based Optimization algorithm
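The Feasible Solution Generator can be illustrated as a randomized topological ordering of the disassembly precedence graph: at each step, any part whose predecessors have already been removed may be chosen. The part names and precedence constraints below are hypothetical.

import random

# Hypothetical precedence constraints: part -> parts that must be removed first.
precedence = {"cover": [], "fan": ["cover"], "board": ["cover"],
              "chip": ["board"], "frame": ["fan", "board"]}

def feasible_sequence(precedence, rng=random):
    """Randomly pick, at each step, any part whose predecessors are already
    removed - i.e. a randomized topological order of the precedence graph."""
    removed, sequence = set(), []
    while len(sequence) < len(precedence):
        candidates = [p for p in precedence
                      if p not in removed and all(q in removed for q in precedence[p])]
        part = rng.choice(candidates)
        removed.add(part)
        sequence.append(part)
    return sequence

print(feasible_sequence(precedence))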
S1474034614000822
Most assessment of creativity in product design is based on the outcome, not the design process from which the creative ideas are derived. In this paper, we identified the correlation coefficients between 20 factors critical to the product design process and the quality of design creativity by investigating the design processes and outcomes of 30 senior student designers. Six closely related factors were identified as variables to calculate design creativity. An assessment formula was proposed: the corresponding correlation coefficient is the weight factor of each variable, and the weighted sum represents the design creativity degree. Our quantitative approach can improve the validity and reliability of the assessment of creativity in product design.
A quantitative approach for assessment of creativity in product design
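A minimal sketch of the assessment formula described above: each selected process factor is scored, multiplied by its correlation coefficient (used as the weight), and the weighted scores are summed into a creativity degree. Factor names, weights and scores below are invented, not the study's fitted values.

# Hypothetical correlation coefficients (weights) for six process factors
# and normalized factor scores for one student design, both in [0, 1].
weights = {"problem_exploration": 0.31, "idea_fluency": 0.28, "analogy_use": 0.22,
           "sketch_iteration": 0.19, "user_research": 0.17, "concept_combination": 0.15}
scores = {"problem_exploration": 0.8, "idea_fluency": 0.6, "analogy_use": 0.7,
          "sketch_iteration": 0.5, "user_research": 0.9, "concept_combination": 0.4}

# Design creativity degree = sum of (weight x factor score).
creativity_degree = sum(weights[f] * scores[f] for f in weights)
print(f"design creativity degree: {creativity_degree:.3f}")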
S1474034614000834
This paper addresses the task of identification of nonlinear dynamic systems from measured data. The discrete-time variant of this task is commonly reformulated as a regression problem. As tree ensembles have proven to be a successful predictive modeling approach, we investigate the use of tree ensembles for solving the regression problem. While different variants of tree ensembles have been proposed and used, they are mostly limited to using regression trees as base models. We introduce ensembles of fuzzified model trees with split attribute randomization and evaluate them for nonlinear dynamic system identification. Models of dynamic systems which are built for control purposes are usually evaluated by a more stringent evaluation procedure using the output, i.e., simulation error. Taking this into account, we perform ensemble pruning to optimize the output error of the tree ensemble models. The proposed Model-Tree Ensemble method is empirically evaluated by using input–output data disturbed by noise. It is compared to representative state-of-the-art approaches, on one synthetic dataset with artificially introduced noise and one real-world noisy data set. The evaluation shows that the method is suitable for modeling dynamic systems and produces models with comparable output error performance to the other approaches. Also, the method is resilient to noise, as its performance does not deteriorate even when up to 20% of noise is added.
Model-Tree Ensembles for noise-tolerant system identification
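The fuzzified model trees used in the paper are not available off the shelf, but the general setup can be sketched: identification is recast as regression on lagged inputs and outputs (a NARX-style formulation) and a tree ensemble is fitted. The sketch below uses scikit-learn's random forest as a stand-in ensemble and a synthetic first-order nonlinear system.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic nonlinear dynamic system: y[k] = f(y[k-1], u[k-1]) + noise.
u = rng.uniform(-1.0, 1.0, 600)
y = np.zeros(600)
for k in range(1, 600):
    y[k] = 0.8 * np.tanh(y[k - 1]) + 0.4 * u[k - 1] + 0.02 * rng.standard_normal()

# NARX-style regression matrix: predict y[k] from the lagged output and input.
X = np.column_stack([y[:-1], u[:-1]])
target = y[1:]

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:400], target[:400])
print("one-step-ahead R^2 on held-out data:", round(forest.score(X[400:], target[400:]), 3))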
S1474034614000846
With the emergence of free trade zones (FTZs) around the world, the service level of the container supply chain plays an important role in the efficiency, quality and cost of world trade. The performance of the container supply chain network directly impacts its service level. Therefore, it is imperative to seek an appropriate method to optimize the container supply chain network architecture. This paper deals with the modeling and optimization problem of the multi-echelon container supply chain network (MCSCN). The problem is formulated as a mixed integer programming (MIP) model, where the objective is the minimization of the total supply chain service cost. Since the problem is well known to be NP-hard, a novel simulation-based heuristic method is proposed for solving it, where the heuristic is used for searching for near-optimal solutions, and the simulation is used for evaluating solutions and repairing infeasible solutions. The heuristic algorithm integrates a genetic algorithm (GA) and a particle swarm optimization (PSO) algorithm, where the GA is used for global search and the PSO is used for local search. Finally, computational experiments are conducted to validate the performance of the proposed method and give some managerial implications.
Simulation-based heuristic method for container supply chain network optimization
S1474034614000858
Planning subterranean inner-city railway tracks is an interdisciplinary and highly complex task which involves many different stakeholders. Currently, the different planners work more or less separately, in an asynchronous manner. To facilitate a collaborative planning process between these different stakeholders we developed a collaboration platform. Clearly, the integration of geographical information and geoprocessing results into the planning process and the different modelling tools will improve this process in a significant way. In this paper, we show how to describe the required geographical information by so-called Geospatial Web Service Context Documents in a suitable way and how to integrate these pieces of information into the different planning tools via the collaboration platform in a unified, dynamic, and generic way.
Collaborative planning of inner-city-railway-tracks: A generic description of the geographic context and its dynamic integration in a collaborative multi-scale geometry modelling environment
S1474034614000871
Contemporary advancements in Information Technology and the efforts from various research initiatives in the AEC industry are showing evidence of progress with the emergence of building information modelling (BIM). BIM presents the opportunity of electronically modelling and managing the vast amount of information embedded in a building project, from its conception to end-of-life. Researchers have been looking at extensions to expand its scope. Sustainability is one such modelling extension that is in need of development. This is becoming pertinent for the structural engineer as recent design criteria have put great emphasis on the sustainability credentials in addition to the traditional criteria of structural integrity, constructability and cost. With the complexity of designs, there are now needs to provide decision support tools to aid in the assessment of the sustainability credentials of design solutions. Such tools would be most beneficial at the conceptual design stage so that sustainability is built into the design solution starting from its inception. The sustainability of buildings is related to life cycle and is measured using indicator-terms such as life cycle costing, ecological footprint and carbon footprint. This paper proposes a modelling framework combining these three indicators in providing sustainability assessments of alternative design solutions based on the economic and environmental sustainability pillars. It employs the principles of feature-based modelling to extract construction-specific information from product models for the purposes of sustainability analysis. A prototype system is implemented using .NET and linked to the BIM enabled software, Revit Structures™. The system appraises alternative design solutions using multi-criteria performance analysis. This work demonstrates that current process and data modelling techniques can be employed to model sustainability related information to inform decisions right from the early stages of structural design. It concludes that the utilized information modelling representations – in the form of a process model, implementation algorithms and object-based instantiations – can capture sustainability related information to inform decisions at the early stages of the structural design process.
BIM extension for the sustainability appraisal of conceptual steel design
S1474034614000895
Product design is a multidisciplinary activity that requires the integration of concurrent engineering approaches into a design process that secures competitive advantages in product quality. In concurrent engineering, the Taguchi method has proven to be an efficient design approach for product quality improvement. However, the Taguchi method intuitively uses parameters and levels in determining the optimum combination of design parameter values, which may not guarantee that the final solution is truly optimal. This work proposes an integrated procedure that involves neural network training and genetic algorithm simulation within the Taguchi quality design process to aid in searching for the optimum solution with more precise design parameter values for improving product development. The concept of fractals in computer graphics is also considered in the generation of product form alternatives to demonstrate its application in product design. The stages in the general approach of the proposed procedures include: (1) use of the Taguchi experimental design procedure, (2) analysis of the neural network and genetic algorithm process, and (3) generation of design alternatives. An electric fan design is used as an example to describe the development and explore the applicability of the proposed procedures. The results indicate that the proposed procedures could enhance the efficiency of product design efforts by approximately 7.8%. It is also expected that the proposed design procedure will provide designers with a more effective approach to product development.
An integrated neuro-genetic approach incorporating the Taguchi method for product design
S1474034614000949
The bottling of beverages is carried out in complex plants that consist of several machines and material flows. To realize an efficient bottling process and high quality products, operators try to avoid plant downtimes. With actual non-productive times of between 10% and 60%, the operators require diagnosis tools that allow them to locate plant components that cause downtime by exploiting automatically acquired machine data. This paper presents a model-based solution for automatic fault diagnosis in bottling plants. There are currently only a few plant-specific solutions (based on statistical calculations or artificial neural networks) for automatic bottling plant diagnosis. In order to develop a customizable solution, we followed the model-based diagnosis approach which allows the automatic generation of diagnosis solutions for individual plants. The existing stochastic and discrete-event models for bottling plants are not adequate for model-based diagnosis. Therefore, we developed new first-principle models for the relevant plant components, validated them numerically, and abstracted them to qualitative diagnosis models. Based on the diagnosis engine OCC’M Raz’r, application systems for two real plants and one virtual plant (based on discrete-event simulation) were generated and evaluated. Compared to the reasons for downtime identified by experts, we obtained up to 87.1% of compliant diagnosis results. The diagnosis solution was tested by practitioners and judged as a useful tool for plant optimization.
Model-based fault localization in bottling plants
S1474034614000950
The original prototype of the cellular automaton (CA) shading system (CASS) for building facades was based on a rectangular array of cells and used liquid crystal technology. This paper introduces the polarized film shading system (PFSS) – an alternative approach based on opto-mechanical modules whose opacity is a function of the rotation of polarized film elements. PFSS in regular tessellations (triangular, square and hexagonal) are discussed. Simulations for each type of tessellation are presented and visualized. The visual attractiveness of emergent CA patterns manifested by “particles” and “solitons” is discussed.
Dynamic shading of a building envelope based on rotating polarized film system controlled by one-dimensional cellular automata in regular tessellations (triangular, square and hexagonal)
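A one-dimensional elementary cellular automaton of the kind that drives the shading modules can be sketched in a few lines: each cell's next state depends on its own state and its two neighbours, and state 1 is interpreted as an opaque module. The rule number, module count and seed pattern below are illustrative.

# One-dimensional elementary cellular automaton driving shading modules:
# cell state 1 = module rotated to opaque, 0 = transparent.
RULE = 30                      # illustrative rule number
N_MODULES, N_STEPS = 32, 8

rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * N_MODULES
row[N_MODULES // 2] = 1        # single opaque module as the seed pattern

for _ in range(N_STEPS):
    print("".join("#" if s else "." for s in row))
    row = [rule_table[(row[(i - 1) % N_MODULES], row[i], row[(i + 1) % N_MODULES])]
           for i in range(N_MODULES)]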
S1474034614001104
A technology roadmap (TRM), an approach that is applied to the development of an emerging technology to meet business goals, is one of the most frequently adopted tools to support the process of technology innovation. Although many studies have dealt with TRMs that are designed primarily for a market-driven technology planning process, a technology-driven TRM is far less researched than a market-driven one. Furthermore, approaches to a technology-driven roadmap using quantitative technological information have rarely been studied. Thus, the aim of this research is to propose a new methodological framework to identify both profitable markets and promising product concepts based on technology information. This study suggests two quality function deployment (QFD) matrices to draw up the TRM in order to find new business opportunities. A case study is presented to illustrate the proposed approach using patents on the solar-lighting devices, which is catching on as a high-tech way to prevent environmental pollution and reduce fuel costs.
Technology-driven roadmaps for identifying new product/market opportunities: Use of text mining and quality function deployment
S147403461500004X
Radio frequency identification (RFID) technology has been used in manufacturing industries to create an RFID-enabled ubiquitous environment, in which ultimate real-time advanced production planning and scheduling (APPS) will be achieved with the goal of collective intelligence. A particular focus has been placed upon using the vast amount of RFID production shop floor data to obtain more precise and reasonable estimates of APPS parameters such as the arrival of customer orders and standard operation times (SOTs). The resulting APPS model is based on the hierarchical production decision-making principle to formulate planning and scheduling levels. An RFID-event driven mechanism is adopted to integrate these two levels for collective intelligence. A heuristic approach using a set of rules is utilized to solve the problem. The model is tested along four dimensions, including the impact of rule sequences on decisions, evaluation of the release strategy used to control the amount of production orders passed from planning to scheduling, comparison with another model and with practical operations, as well as model robustness. Two key findings are observed. First, the release strategy based on RFID-enabled real-time information is efficient and effective, reducing total tardiness by 44.46% on average. Second, the model is resilient to disturbances such as defects. However, as the problem size increases, the model's robustness against emergency orders becomes weaker, while its resistance to machine breakdowns remains strong. Findings and observations are summarized into a number of managerial implications to guide associated end-users in pursuing collective intelligence in practice.
A two-level advanced production planning and scheduling model for RFID-enabled ubiquitous manufacturing
S1474034615000166
Backfill is the excavated material from earthworks, which constitutes over 50% of the construction waste in Hong Kong. This paper considers a supply chain that consists of construction sites, landfills and commercial sources in which operators seek cooperation to maximize backfill reuse and improve waste recovery efficiency. Unlike the ordinary material supply chain in manufacturing industries, the supply chain for backfill involves many dynamic processes, which increases the complexity of analyzing and solving the logistics issue. Therefore, this study attempts to identify an appropriate methodology to analyze the dynamic supply chain and thereby facilitate backfill reuse. A centralized optimization model and a distributed agent-based model are proposed and implemented to compare their performances. The centralized optimization model can obtain a global optimum but requires sharing of complete information from all supply chain entities, resulting in barriers to implementation. In addition, whenever the backfill supply chain changes, the centralized optimization model needs to reconfigure the network structure and recompute the optimum. The distributed agent-based model focuses on task distribution and cooperation between business entities in the backfill supply chain. In the agent-based model, decision making and communication between construction sites, landfills, and commercial sources are emulated by a number of autonomous agents. They perform together through a negotiation algorithm for optimizing the supply chain configuration so as to reduce the backfill shipment cost. A comparative study indicates that the agent-based model is more capable of studying the dynamic backfill supply chain due to its decentralized optimization and fast reaction to unexpected disturbances.
Formulation and analysis of dynamic supply chain of backfill in construction waste management using agent-based modeling
S147403461500018X
The performance of physical assets has become a major determinant of success for urban flood control. However, managing these assets is always challenging as there are a huge number of diverse assets involved, which are distributed throughout the city and owned by different agencies. Aiming to improve the management efficiency of these assets and ensure their performance, this paper proposes the concept of the cloud asset, based on cloud computing, mobile agents, and various smart devices. Through hardware integration and software encapsulation, a cloud asset can sense its real-time status, adapt to varied working scenarios, be controlled remotely, and be shared among agencies. It enables accurate real-time control of every asset, and thus improves management efficiency and effectiveness. This paper first presents the concept of the cloud asset with its technical architecture, and then analyses the software agent model for the cloud asset, which is the key enabler for realizing UPnP (Universal Plug and Play) management of assets and provides mobility and intelligence for them. After that, the framework of cloud asset-enabled workflow management is built, in which cloud assets can be easily found and dynamically invoked by different workflows. Finally, a demonstrative case is provided to verify the effectiveness of the cloud asset concept.
Cloud asset for urban flood control
S1474034615000208
To ensure the safety and the serviceability of civil infrastructure it is essential to visually inspect and assess its physical and functional condition. This review paper presents the current state of practice of assessing the visual condition of vertical and horizontal civil infrastructure; in particular of reinforced concrete bridges, precast concrete tunnels, underground concrete pipes, and asphalt pavements. Since the rate of creation and deployment of computer vision methods for civil engineering applications has been exponentially increasing, the main part of the paper presents a comprehensive synthesis of the state of the art in computer vision based defect detection and condition assessment related to concrete and asphalt civil infrastructure. Finally, the current achievements and limitations of existing methods as well as open research challenges are outlined to assist both the civil engineering and the computer science research community in setting an agenda for future research.
A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure
S147403461500021X
The collection and analysis of data on the three-dimensional (3D) as-built status of large-scale civil infrastructure – whether under construction, newly put into service, or in operation – has been receiving increasing attention on the part of researchers and practitioners in the civil engineering field. Such collection and analysis of data is essential for the active monitoring of production during the construction phase of a project and for the automatic 3D layout of built assets during their service lives. This review outlines recent research efforts in this field and technological developments that aim to facilitate the analysis of 3D data acquired from as-built civil infrastructure and applications of such data, not only to the construction process per se but also to facility management – in particular, to production monitoring and automated layout. This review also considers prospects for improvement and addresses challenges that can be expected in future research and development. It is hoped that the suggestions and recommendations made in this review will serve as a basis for future work and as motivation for ongoing research and development.
As-built data acquisition and its use in production monitoring and automated layout of civil infrastructure: A survey
S1474034615000221
Design concept evaluation at the early stage of product design has been widely recognized as one of the most critical phases in new product development, as it determines the direction of the downstream design activities. However, the information at this stage is mainly subjective and imprecise, depending only on experts’ judgments. How to handle the vagueness and subjectivity in design concept evaluation therefore becomes a critical issue. This paper presents a systematic evaluation method that integrates a rough number based analytic hierarchy process (AHP) and a rough number based compromise ranking method (also known as VIKOR) to evaluate design concepts under a subjective environment. In this study, rough numbers are introduced to aggregate individual judgments and preferences and deal with the vagueness in decision-making. A novel AHP based on rough numbers is presented to determine the weight of each evaluation criterion. Then an improved rough number based VIKOR is proposed to evaluate the design concept alternatives. Sensitivity analysis is conducted to measure the impact of the decision makers’ risk attitudes on the final evaluation results. Finally, a practical example is put forward to validate the performance of the proposed method. The result shows that the proposed decision-making method can effectively enhance the objectivity of design concept evaluation under a subjective environment.
An integrated AHP and VIKOR for design concept evaluation based on rough number
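The rough-number building block can be sketched directly from its standard definition: for each judgment, the lower limit is the mean of all judgments not larger than it and the upper limit is the mean of all judgments not smaller than it. The expert ratings below are invented, and the aggregation shown is a simple average of the resulting intervals.

# Hypothetical expert ratings of one design concept against one criterion.
ratings = [3, 4, 4, 5, 6]

def rough_number(x, judgments):
    """Rough number of x: mean of judgments <= x (lower limit) and
    mean of judgments >= x (upper limit)."""
    lower = [j for j in judgments if j <= x]
    upper = [j for j in judgments if j >= x]
    return (sum(lower) / len(lower), sum(upper) / len(upper))

for x in sorted(set(ratings)):
    lo, hi = rough_number(x, ratings)
    print(f"rating {x}: rough number [{lo:.2f}, {hi:.2f}]")

# The group's rough judgment is then taken here as the average rough interval.
intervals = [rough_number(x, ratings) for x in ratings]
avg = (sum(lo for lo, _ in intervals) / len(intervals),
       sum(hi for _, hi in intervals) / len(intervals))
print("aggregated interval:", tuple(round(v, 2) for v in avg))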
S1474034615000257
Patent claim parsing can contribute to many patent-related applications, such as patent search, information extraction, machine translation and summarization. However, patent claim parsing is difficult due to the special structure of patent claims. To overcome this difficulty, the challenges facing patent claim parsing were first investigated and the peculiarities of claim syntax that obstruct dependency parsing were highlighted. To handle these peculiarities, this study proposes a new two-level parser in which a conventional parser is embedded. A patent claim is pre-processed in order to remove peculiarities before being passed to the conventional parser. The process is based on a new dependency-based syntax called Independent Claim Segment Dependency Syntax (ICSDS). This two-level parser has demonstrated promising improvement in patent claim parsing, in both effectiveness and efficiency, over the conventional parser.
A two-level parser for patent claim parsing
S1474034615000336
Modern construction projects require sufficient planning and management of resources to become successful. Core issues are tasks that deal with maintaining the schedule, such as procuring materials, guaranteeing the supply chain, controlling the work status, and monitoring safety and quality. Timely feedback of project status aids project management by providing accurate percentages of task completion and appropriately allocating resources (workforce, equipment, material) to coordinate the next work packages. However, current methods for measuring project status or progress, especially on large infrastructure projects, are mostly based on manual assessments. Recent academic research and commercial development has focused on semi- or fully-automated approaches to collect and process images of evolving worksites. Preliminary results are promising and show that capturing, analyzing, and documenting construction progress and linking it to information models is possible. This article first presents an overview of vision-based sensing technology available for temporary resource tracking at infrastructure construction sites. Second, it provides the status quo of research applications by highlighting exemplary cases. Third, a discussion follows on the existing advantages and current limitations of vision-based sensing and tracking. Open challenges that need to be addressed in future research efforts conclude this paper.
Status quo and open challenges in vision-based sensing and tracking of temporary resources on infrastructure construction sites
S1474034615000361
In construction contractual management, sharing experts’ domain knowledge through ontology is a good way to narrow the knowledge gap between the domain experts and the construction team. However, little work has been done on ontology taxonomy development in this domain. Based on a literature review on sharing domain knowledge, taxonomy development methods and the essence of construction contracts, this study proposes a synthesized methodology for taxonomy development in the domain of construction contractual semantics. This methodology is based on an ontological model extracted from definitions found in the contract, and uses common root concepts as the initial root concept classes, and includes the iterative development and competency questions approaches as well. In the case study, using the research results from pilot studies, the proposed methodology was applied to the AIA A201 General Conditions of the Contract for Construction (2007) document at the textual level. As a result, a taxonomy was developed which was used to determine the validity of the proposed methodology. The taxonomy development methodology and the developed taxonomy itself are both valuable contributions in the quest to further develop ontology-based applications for sharing domain knowledge about construction contract semantics.
Developing taxonomy for the domain ontology of construction contractual semantics: A case study on the AIA A201 document
S147403461500052X
The purpose of this research is to develop a formal knowledge e-discovery methodology, using advanced information technology and decision support analysis, to define legal case evolution based on Collective Litigation Intelligence (CLI). In this research, a decade of Australia’s retail franchise and trademark litigation cases are used as the corpus to analyze and synthesize the evolution of modern retail franchise law in Australia. The formal processes used in the legal e-discovery research include a LexisNexis search strategy to collect legal documents, text mining to find key concepts and their representing key phrases in the documents, clustering algorithms to associate the legal cases into groups, and concept lattice analysis to trace the evolutionary trends of the main groups. The case analysis discovers the fundamental issues for retail modernization, advantages and disadvantages of retail franchising systems, and the potential litigation hazards to be avoided in the Australian market. Given the growing number of legal documents in global court systems, this research provides a systematic and generalized CLI methodology to improve the efficiency and efficacy of research across international legal systems. In the context of the case study, the results demonstrate the critical importance of quickly processing and interpreting existing legal knowledge using the CLI approach. For example, a brand management company, which purchases a successful franchise in one market is under limited time constraints to evaluate the legal environment across global markets of interest. The proposed CLI methodology can be applied to derive market entry strategies to secure growth and brand expansion of a global franchise.
Collective intelligence applied to legal e-discovery: A ten-year case study of Australia franchise and trademark litigation
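The document-clustering step of such a pipeline can be sketched with TF-IDF vectors and k-means in scikit-learn; the four case snippets below are placeholders rather than real litigation documents, and the concept-lattice analysis is omitted.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder case summaries standing in for retrieved litigation documents.
cases = [
    "franchisee alleges misleading disclosure before signing franchise agreement",
    "franchisor terminates agreement over unpaid royalties and territory breach",
    "trademark infringement claim over similar retail brand name and logo",
    "dispute over trade mark licensing and passing off of store branding",
]

# TF-IDF vectorization followed by k-means grouping into two case clusters.
X = TfidfVectorizer(stop_words="english").fit_transform(cases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, text in zip(labels, cases):
    print(label, "-", text[:60])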
S1474034615000555
Design of simulation models and their integration into industrial automation systems require knowledge from several heterogeneous data sources and tools. Due to the heterogeneity of engineering data, the integration of the tools and data is nowadays a time-consuming and error-prone task. The key goal of this article is to provide an effective and efficient integration of heterogeneous data sources and tools as a knowledge basis to support dynamic simulation for industrial plants. The integrated knowledge is utilized both (i) in the design phase of simulation models for defining structure and interfaces of the models and (ii) in the runtime phase of industrial systems for model-driven configuration of the integrated environment. Reaching such goals with a manual approach or point-to-point integration is not practical: it may be feasible for a few tools and data sources, but it quickly becomes very complex, and a growing number of elements increases the risk of errors and the effort needed for integration. The proposed solution is based on the specification of a common data model to represent engineering knowledge and a service-oriented tool integration with the Engineering Service Bus. Engineering knowledge is integrated in a knowledge base implemented with ontologies in Web Ontology Language-Description Logic (OWL-DL). The proposed approach is demonstrated and evaluated on an educational hydraulic system. The major results of the article are: (i) a data model to represent engineering knowledge for dynamic industrial systems, (ii) an integration platform that, based on this model, integrates the tools for system design and runtime, and (iii) basic design-time and runtime processes for the integrated industrial simulations.
Integrating heterogeneous engineering knowledge and tools for efficient industrial simulation model support
S1474034615000567
We propose a novel approach to surface flatness characterization in construction that relies on the combination of Terrestrial Laser Scanning (TLS) and the Continuous Wavelet Transform (CWT). The former has the advantage over existing measurement technologies of providing both accurate and extremely dense measurements over surfaces. The latter provides the means to conduct frequency analysis with high resolution in both the spatial and frequency domains. This novel approach is tested using two real concrete floors and the results compared with those obtained with the Waviness Index method. The results show a high level of correlation. In fact, the proposed approach delivers a higher level of precision in the frequency and spatial domains. We also show what seems to be a weakness of the Waviness Index method in the detection of undulations with short periods. Finally, although not experimentally demonstrated here, the proposed method has the interesting additional advantage of being applicable in 2D, that is over an entire surface instead of sampled survey lines (1D).
Terrestrial laser scanning and continuous wavelet transform for controlling surface flatness in construction – A first investigation
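To make the CWT step of the approach concrete, the sketch below applies a continuous wavelet transform to a simulated elevation profile along a survey line. PyWavelets, the Mexican-hat wavelet and all parameter values are illustrative assumptions rather than the authors' exact processing chain.

# CWT of a simulated floor elevation profile (PyWavelets; values illustrative).
import numpy as np
import pywt

x = np.arange(0, 10_000, 5.0)                       # 10 m survey line, 5 mm step (units: mm)
profile = 0.5 * np.sin(2 * np.pi * x / 3000) + 0.1 * np.random.randn(x.size)

scales = np.arange(1, 200)
coeffs, freqs = pywt.cwt(profile, scales, "mexh", sampling_period=5.0)
# coeffs[i, j]: wavelet response at scale i and position j, i.e. localized
# waviness content in both the spatial and the frequency domain.
dominant_scale = np.abs(coeffs).mean(axis=1).argmax()
print("dominant wavelength (mm):", 1.0 / freqs[dominant_scale])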
S1474034615000609
This study reports on the requirements for developing computer-interpretable rules for checking the compliance of a building design against a request for proposal (RFP), especially in the building information modeling (BIM) environment. It focuses on RFPs for large public buildings (over 5 million dollars) in South Korea, which generally entail complex designs. A total of 27 RFPs for housing, office, exhibition, hospital, sports center, and courthouse projects were analyzed to develop computer-interpretable RFP rules. Each RFP was composed of over 1800 sentences. Of these, only between three and 366 sentences per RFP could be translated into computer-interpretable sentences. For further analysis, this study deployed context-free grammar (CFG) in natural language processing, and classified morphemes into four categories: i.e., object (noun), method (verb), strictness (modal), and others. The subcategorized morphemes included three types of objects, twenty-nine types of methods, and five levels of strictness. The coverage and applicability of the derived objects and methods were checked and validated against three additional RFP cases and then through a test case using a newly developed model checker system. The findings are expected to be useful as a guideline and basic data for system developers in the development of a generalized automated design checking system for South Korea.
Requirements for computational rule checking of requests for proposals (RFPs) for building designs in South Korea
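The toy grammar below illustrates, with NLTK, how an RFP sentence reduced to object / strictness (modal) / method morphemes could be parsed into a checkable structure; the vocabulary and productions are invented for illustration and do not reproduce the study's actual grammar.

# Toy CFG over object / strictness / method tokens (illustrative only).
import nltk

grammar = nltk.CFG.fromstring("""
  S -> OBJ MOD METHOD
  OBJ -> 'site_area' | 'floor_height'
  MOD -> 'shall' | 'should' | 'may'
  METHOD -> 'exceed' NUM | 'equal' NUM
  NUM -> '5000' | '3.0'
""")
parser = nltk.ChartParser(grammar)

sentence = ["site_area", "shall", "exceed", "5000"]   # simplified RFP clause
for tree in parser.parse(sentence):
    print(tree)   # the parse tree maps the clause onto a computer-interpretable rule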
S1474034615000610
This paper presents an informatics framework that applies a feature-based engineering concept to cost estimation, supported by data mining algorithms. The purpose of this research is to provide a practical procedure for more accurate cost estimation using the commonly available manufacturing process data associated with ERP systems. The proposed method combines linear regression and data-mining techniques, leverages the unique strengths of both, and creates a mechanism to discover cost features. The final estimation function takes the user’s confidence level in each member technique into consideration, so that the method can be phased in gradually in practice as the data mining capability is built up. A case study demonstrates the proposed framework and compares the results from empirical cost prediction and data mining. The case study results indicate that the combined method is flexible and promising for determining the costs of the example welding features. A comparison between the empirical prediction and five different data mining algorithms shows that the ANN algorithm is the most accurate for welding operations.
A hybrid cost estimation framework based on feature-oriented data mining approach
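A minimal sketch of the combination idea, assuming scikit-learn and invented welding-feature data: a regression estimator and an ANN estimator are blended by a user-supplied confidence weight, mirroring the framework's gradual phase-in of data mining.

# Blend an empirical regression estimate with a data-mining (ANN) estimate.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Illustrative welding features: [weld length (mm), plate thickness (mm), passes].
X = np.array([[100, 5, 1], [250, 8, 2], [400, 10, 3], [150, 6, 1], [300, 12, 4]])
y = np.array([12.0, 31.0, 55.0, 18.0, 48.0])          # cost in arbitrary units

reg = LinearRegression().fit(X, y)                    # empirical / linear estimator
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

alpha = 0.6                                           # user's confidence in the ANN estimator
x_new = np.array([[200, 7, 2]])
estimate = alpha * ann.predict(x_new) + (1 - alpha) * reg.predict(x_new)
print(estimate)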
S1474034615000658
In the past, designers developed new products by virtue of their own accumulated aesthetic perception and experience. Because the information that designers could master was limited, it was difficult to quickly develop the capability to satisfy consumer-oriented markets, and this limitation could expose enterprises to unnecessary risks. To address this, a set of aesthetic evaluations and an optimization system for form aesthetics are proposed in this study. Explicit equations were calculated to assist in measuring the aesthetic characteristics; next, a fuzzy judgment was invoked to calculate the perceptual aesthetic measures of a product style so as to establish the overall aesthetic standard for the product. Aesthetic measurement principles were combined with the genetic algorithm (GA) and applied to the optimization of the product’s shape. All-in-one stereos were chosen as the target products for a case study. Further form optimization was conducted on two of the stereos, and the questionnaire survey indicated that their aesthetic measures increased after the optimization. The errors that resulted from the equations for aesthetic measurements and judgments were also reduced accordingly. The precision and feasibility of the aesthetic theory constructed in this study were assessed, tested, and verified.
A study that applies aesthetic theory and genetic algorithms to product form optimization
S1474034615000683
Stockpile blending is widely considered to be an effective method of controlling the quality and maintaining the grade consistency of delivered bulk material, such as iron ore. However, major challenges remain in predicting the quality of a stockpile during blending (stacking and reclaiming) operations, because the chemical composition of the ore body is not always available during blending. Consequently, the performance of current stockpile management systems falls short of expectations. This paper details an innovative modelling approach to estimate the quality of a stockpile during the blending process. The geometric model created from laser scanning data is capable of recording the dynamic shapes of the stockpile using mathematical equations. Therefore, the quality of the stockpile is calculated with a high degree of accuracy once the chemical analysis is completed. Thus, a quality-embedded geometric model is created. Furthermore, this geometric model is associated with the kinematic model of a Bucket Wheel Reclaimer (BWR) to achieve precise auto-control in reclaiming operations. The link between these two models also allows the quality of every single cut accomplished by the BWR to be calculated in advance. Using the calculation results in conjunction with precise machine control, the output grade can be predicted, planned and controlled. This will optimise the efficiency and effectiveness of stockpile blending.
Automatic quality estimation in blending using a 3D stockpile management model
S1474034615000713
Facing fierce competition in marketplaces, companies try to determine the optimal settings of the design attributes of new products from which the best customer satisfaction can be obtained. To determine the settings, customer satisfaction models relating affective responses of customers to design attributes have to be developed first. Adaptive neuro-fuzzy inference systems (ANFIS) were attempted in previous research and shown to be an effective approach to address the fuzziness of survey data and the nonlinearity in modeling customer satisfaction for affective design. However, ANFIS is incapable of modeling relationships that involve a large number of inputs, which may cause the training process of ANFIS to fail and lead to an ‘out of memory’ error. To overcome this limitation, in this paper, rough set (RS) and particle swarm optimization (PSO) based ANFIS approaches are proposed to model customer satisfaction for affective design and further improve the modeling accuracy. In the approaches, RS theory is adopted to extract significant design attributes as the inputs of ANFIS, and PSO is employed to determine the parameter settings of an ANFIS from which explicit customer satisfaction models with better modeling accuracy can be generated. A case study of affective design of mobile phones is used to illustrate the proposed approaches. The modeling results based on the proposed approaches are compared with those based on ANFIS, fuzzy least-squares regression (FLSR), fuzzy regression (FR), and genetic programming-based fuzzy regression (GP-FR). Results of the training and validation tests show that the proposed approaches perform better than the others in terms of training and validation errors.
Rough set and PSO-based ANFIS approaches to modeling customer satisfaction for affective product design
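The following minimal particle swarm optimization sketch shows the kind of parameter search PSO performs when tuning an ANFIS; the two-parameter objective here is only a stand-in for the ANFIS validation error, and all coefficients are illustrative assumptions.

# Minimal PSO over a stand-in objective (not the paper's actual ANFIS training loop).
import numpy as np

rng = np.random.default_rng(0)

def objective(params):
    # Stand-in for ANFIS validation error as a function of two premise parameters.
    return np.sum((params - np.array([0.3, 0.7])) ** 2, axis=1)

n_particles, n_iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print(gbest)   # converges near [0.3, 0.7], the error-minimizing parameter setting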
S1474034615000725
The use of IFC as a standard format in exchange processes has been increasing as the industry begins to address the need for interoperability. A current problem with the use of IFC is the quality of product models. Much effort has addressed this issue in the form of certifications to ensure a minimum quality of exchange requirements for an IFC file. However, even with the recent increasing awareness and effort to improve the quality of IFC files, the process is still too tedious and time consuming, and requires manual effort by experts. Even when those resources are available, there are currently no clear and quantifiable definitions of what exactly constitutes a good-quality IFC file. Without such measures, the adoption of IFC as a standard exchange format will be hindered and the industry will be left with only restricted alternatives that are mostly vendor dependent. This paper sets out to address this issue in two respects: first by defining what a good-quality IFC model is, and second by proposing rules that can be automated to measure with confidence the completeness and correctness of an IFC model. These two aspects serve as a starting point toward more comprehensive and quantifiable measures of the quality of an IFC file. The proposed goal is represented using well defined and well documented rules collected from various projects the authors have worked on over the years. The rules include all known aspects of IFC, including geometry, which currently requires mostly manual validation. The paper also proposes a method to address import validation, which is under-developed compared to export validation.
Toward robust and quantifiable automated IFC quality validation
S1474034615000749
The current trend towards integrating software agents in safety–critical systems such as drones, autonomous cars and medical devices, which must operate in uncertain environments, gives rise to the need for on-line detection of unexpected behavior. In this work, on-line monitoring is carried out by comparing environmental state transitions with prior beliefs descriptive of optimal behavior. The agent policy is computed analytically using linearly solvable Markov decision processes. Active inference using prior beliefs allows a monitor to proactively rehearse future agent actions on-line over a rolling horizon so as to generate expectations and discover surprising behaviors. A Bayesian surprise metric is proposed based on twin Gaussian processes to measure the difference between prior and posterior beliefs about state transitions in the agent environment. Using a sliding window of sampled data, beliefs are updated a posteriori by comparing a sequence of state transitions with the ones predicted using the optimal policy. An artificial pancreas for diabetic patients is used as a representative example.
An active inference approach to on-line agent monitoring in safety–critical systems
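The sketch below illustrates the core of a Bayesian surprise index as a Kullback–Leibler divergence between prior and posterior Gaussian beliefs about a state transition; the univariate Gaussian form and the blood-glucose numbers are simplifying assumptions, not the paper's twin-Gaussian-process formulation.

# KL-divergence surprise between prior and posterior Gaussian transition beliefs.
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for univariate Gaussians.
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Prior belief about the next blood-glucose value under the optimal policy
# (illustrative numbers, not a calibrated patient model).
mu_prior, var_prior = 6.0, 0.25        # mmol/L
# Posterior belief re-estimated from a sliding window of observed transitions.
mu_post, var_post = 7.4, 0.30

surprise = gaussian_kl(mu_post, var_post, mu_prior, var_prior)
print(f"surprise index: {surprise:.3f}")   # large values flag unexpected agent behavior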
S1474034615000762
This paper addresses the planning issue in the hinterland barge transport domain, which is characterized by limited information sharing, lack of cooperation and conflicts of interest among different parties (terminal and barge operators). The planning problem is formulated as a novel Stackelberg game to model a leader–followers bi-level optimization problem. A hybrid algorithm is developed that considers different objectives (vessel turnaround and terminal berthing capacity) simultaneously while fulfilling pre-defined operational constraints. The presented algorithm is outlined in a hierarchical way and embedded into dedicated agents as a decision-making kernel. We describe the architecture and the implementation of the proposed mediator-based multi-agent system and the overall coupling framework, including agent identification, coordination and decision making. A case study evaluates the performance of our approach in terms of global optimality compared with other related approaches.
Integrate multi-agent planning in hinterland transport: Design, implementation and evaluation
S1474034615000920
The Service Oriented Architecture (SOA) paradigm enables production systems to be composed of web services. In an SOA-based production system, the individual production devices provide web service interfaces that encapsulate the behavior of the devices and abstract the implementation details. Such a service-oriented approach makes it possible to apply web service orchestration technologies in the development of production workflow descriptions. While manual formulation of production workflows tends to require considerable effort from domain experts, semantic web service descriptions enable computer algorithms to automatically generate the appropriate web service orchestrations. Such algorithms realize AI planning and employ semantic web service descriptions in determining the workflows required to achieve the production goals desired. In addition, the algorithms can automatically adapt the workflows to unexpected changes in the goals pursued and the production devices available.
Planning-based semantic web service composition in factory automation
S1474034615000932
The visibility of a traffic sign at night depends on its retro-reflectivity, a property that needs to be monitored frequently to ensure transportation safety. In the U.S., the Federal Highway Administration (FHWA) maintains regulations to ensure minimum retro-reflectivity levels. Current measurement techniques either (a) use a vehicle-mounted device at night, or (b) use manual handheld devices during the day. The former is expensive due to nighttime labor cost; the latter is time-consuming and unsafe. To address these limitations, this paper presents a computer vision-based technique to measure retro-reflectivity during daytime using a vehicle-mounted device. The presented algorithms simulate the nighttime visibility of traffic signs from images taken during daytime and measure their retro-reflectivity. The technique is faster, cheaper, and safer, as it requires neither nighttime operation nor manual sign inspection. It also satisfies FHWA measurement guidelines both in terms of granularity and accuracy. The performance of the presented technique is evaluated under various testing conditions. The results are promising and demonstrate a strong potential for lowering inspection cost and improving safety in practical applications of retro-reflectivity measurement.
Image-based retro-reflectivity measurement of traffic signs in day time
S1474034615000993
The purpose of this research is to suggest and develop a building information modeling (BIM) database based on BIM perspective definition metadata for connecting external facility management (FM) and BIM data, which considers variability and expandability from the user’s perspective. The BIM-based FM system must be able to support different use cases per user role and effectively extract information required by the use cases from various heterogeneous data sources. If the FM system’s user perspective becomes structurally fixed when developing the system, the lack of expandability can cause problems for maintenance and reusability. BIM perspective definition (BPD) metadata helps increase expandability and system reusability because it supports system variability, which allows adding or changing the user perspective even after the system has been developed. The information to be dealt with differs according to the user’s role, which also means that the data model, data conversion rules, and expression methods change per perspective. The perspective should be able to extract only the user-requested data from the heterogeneous system’s data source and format it in the style demanded by the user. In order to solve such issues, we analyzed the practice of FM and the benefits of using BIM-based FM, and we proposed a BPD that supports data extraction and conversion and created a prototype.
BIM perspective definition metadata for interworking facility management data
S1474034615001032
This paper addresses the problem of automated registration of multi-view point clouds generated by a 3D scanner, using sphere targets. First, sphere targets are detected in each point cloud. The centroids of the detected targets in each point cloud are then used for rough registration. Congruent triangles are computed from the centroids to establish the correspondence among them, from which a rigid body transformation is obtained to bring the two point clouds together as closely as possible. After the initial registration, the two point clouds are further registered by refining the position and orientation of the point clouds using the underlying geometric shapes of the targets. These registration steps are integrated into one system that allows two input point clouds to be registered automatically with no user intervention. Real examples are used to demonstrate the performance of the point cloud registration.
Automated registration of multi-view point clouds using sphere targets
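A minimal numpy sketch of the rough-registration step, assuming the sphere-target centroids and their correspondence are already known (the paper derives correspondences via congruent triangles): the SVD-based least-squares rigid transform brings one centroid set onto the other.

# Least-squares rigid transform between corresponding sphere-target centroids.
import numpy as np

def rigid_transform(A, B):
    # Rotation R and translation t minimizing ||R A + t - B|| (Kabsch/SVD solution).
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])     # second-scan centroids

R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))                # True: clouds roughly aligned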
S1474034615001068
With the rapid popularity of Building Information Modeling (BIM) technologies, BIM resources such as building product libraries are growing rapidly on the World Wide Web. However, numerous BIM resources are usually from heterogeneous systems or various manufacturers with ambiguous expressions and uncertain categories for product descriptions, which cannot provide effective support for information retrieval and categorization applications. Therefore, there is an increasing need for semantic annotation to reduce the ambiguity and unclearness of natural language in BIM documents. Based on Industry Foundation Classes (IFC) which is a major standard for BIM, this paper presents a concept-based automatic semantic annotation method for the documents of online BIM products. The method mainly consists of the following two stages. Firstly, with reference to the concepts and relationships explicitly defined in IFC, a word-level annotation algorithm is applied to the word-sense disambiguation. Secondly, based on latent semantic analysis technique, a document-level annotation algorithm is proposed to discover the relationships which are not explicitly defined in IFC. Finally, a prototype annotation system, named BIMTag, is developed and combined with a search engine for demonstrating the utility and effectiveness of our method. The BIMTag system is available at http://cgcad.thss.tsinghua.edu.cn/liuyushen/bimtag/.
BIMTag: Concept-based automatic semantic annotation of online BIM product resources
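The sketch below shows the latent-semantic-analysis idea behind the document-level annotation step: product descriptions and simplified IFC concept texts are projected into a shared latent space and linked by similarity. The toy texts, scikit-learn and the component count are assumptions for illustration, not the BIMTag implementation.

# LSA-style linking of product descriptions to simplified IFC concept texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "double glazed aluminium window with thermal break",
    "steel fire door with panic hardware",
    "timber casement window with low-e glazing",
]
concepts = ["IfcWindow glazing frame sash", "IfcDoor leaf hinge hardware"]  # toy concept texts

tfidf = TfidfVectorizer().fit(docs + concepts)
svd = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(docs + concepts))

doc_vecs = svd.transform(tfidf.transform(docs))
concept_vecs = svd.transform(tfidf.transform(concepts))
# Each description is annotated with its most similar concept in the latent space.
print(cosine_similarity(doc_vecs, concept_vecs).argmax(axis=1))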
S1474034615001093
Over the last few years, new methods that detect construction progress deviations by comparing laser scanning or image-based point clouds with 4D BIM have been developed. To create complete as-built models, these methods require the visual sensors to have a proper line-of-sight and field-of-view to building elements. For reporting progress deviations, they also require Building Information Modeling (BIM) and a schedule Work-Breakdown-Structure (WBS) with a high Level of Development (LoD). While certain logic behind sequences of construction activities can augment 4D BIM with lower LoDs to support making inferences about states of progress under limited visibility, its application in visual monitoring systems has not been explored. To address these limitations, this paper formalizes an ontology that models construction sequencing rationale, such as physical relationships among components. It also presents a classification mechanism that integrates this ontology with BIM to infer states of progress for partially and fully occluded components. The ontology and classification mechanism are validated using a Charrette test and by presenting their application together with BIM and as-built data on real-world projects. The results demonstrate the effectiveness and generality of the proposed ontology. They also illustrate how the classification mechanism augments 4D BIM at lower LoDs and WBS to enable visual progress assessment for partially and fully occluded BIM elements and to provide detailed operational-level progress information.
Formalized knowledge of construction sequencing for visual monitoring of work-in-progress via incomplete point clouds and low-LoD 4D BIMs
S147403461500110X
Kansei evaluation plays a vital role in the implementation of Kansei engineering; however, it is difficult to quantitatively evaluate customer preferences of a product’s Kansei attributes as such preferences involve human perceptual interpretation with certain subjectivity, uncertainty, and imprecision. An effective Kansei evaluation requires justifying the classification of Kansei attributes extracted from a set of collected Kansei words, establishing priorities for customer preferences of product alternatives with respect to each attribute, and synthesizing the priorities for the evaluated alternatives. Moreover, psychometric Kansei evaluation systems essentially require dealing with Kansei words. This paper presents a Kansei evaluation approach based on the technique of computing with words (CWW). The aims of this study were (1) to classify collected Kansei words into a set of Kansei attributes by using cluster analysis based on fuzzy relations; (2) to model Kansei preferences based on semantic labels for the priority analysis; and (3) to synthesize priority information and rank the order of decision alternatives by means of the linguistic aggregation operation. An empirical study is presented to demonstrate the implementation process and applicability of the proposed Kansei evaluation approach. The theoretical and practical implications of the proposed approach are also discussed.
A Kansei evaluation approach based on the technique of computing with words
S1474034615001275
This paper studies the attribution of patents by combining an improved cosine similarity measure with a patent attribution probability method for the hydrolysis substrate fabrication process, in order to judge patent attribution more quickly and accurately. In the improved cosine similarity method established in this paper, all the vocabularies of important technical or functional words in patent documents are regarded as vector dimensions, and the normalized numerical values of these vocabularies are regarded as the weights of the technical or functional words; these are substituted into the formula for the improved cosine similarity. The patent attribution probability method applies the normalized numerical values of the various clusters of technical or functional words, together with the probability formula, to judge which technical or functional category a patent belongs to. Because the study combines the improved cosine similarity with the patent attribution probability method, before the probability method is applied to a patent document, the improved cosine similarity is first used to identify the clusters of patent group words with high relevance and to rule out the technical or functional categories to which the patent does not belong. This reduces the number of categories to be evaluated and the time needed for the probability method to determine the category of a patent, resulting in a faster and more accurate judgment of patent attribution.
Combination of improved cosine similarity and patent attribution probability method to judge the attribution of related patents of hydrolysis substrate fabrication process
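To make the two-stage idea concrete, the sketch below first screens candidate categories with cosine similarity over normalized key-word weights and then assigns attribution weights over the surviving categories; the word clusters, the 0.5 threshold and the final normalization are illustrative stand-ins, not the paper's exact probability formula.

# Two-stage screening: cosine similarity first, then attribution over the shortlist.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Normalized key-word weights for a new patent and two category clusters (illustrative).
patent     = np.array([0.70, 0.20, 0.10, 0.00])       # e.g. [etch, mask, rinse, anneal]
etch_class = np.array([0.60, 0.30, 0.10, 0.00])
bake_class = np.array([0.05, 0.05, 0.10, 0.80])

scores = {"etching": cosine_similarity(patent, etch_class),
          "baking":  cosine_similarity(patent, bake_class)}
shortlist = {k: v for k, v in scores.items() if v > 0.5}   # rule out unrelated categories

total = sum(shortlist.values())                            # simplified attribution step
attribution = {k: v / total for k, v in shortlist.items()}
print(shortlist, attribution)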
S1474034615001287
Building Information Modeling (BIM) is emerging as a method of creating, sharing, exchanging and managing the building information throughout the lifecycle between all stakeholders. Radio Frequency Identification (RFID), on the other hand, has emerged as an automatic data collection and information storage technology, and has been used in different applications in the AEC/FM (Architecture, Engineering, Construction, and Facilities Management) industry. RFID tags are attached to building assets throughout their lifecycle and used to store lifecycle and context aware information taken from a BIM. Consequently, there is a need for a standard and formal definition of RFID technology components in BIM. The goal of this paper is to add the definitions for RFID components to the BIM standard and to map the data to be stored in RFID memory to the associated entries in a BIM database. The paper defines new entities, data types, and properties to be added to the BIM. Furthermore, the paper identifies the relationships between RFID tags and building elements. These predefined relationships facilitate the linkage between BIM data and RFID data. Eventually, the data that are required to be saved on RFID tags can be automatically selected using the defined relationships in a BIM. A real-world case study has been implemented to validate the proposed method using available BIM software.
Extending IFC to incorporate information of RFID tags attached to building elements
S1474034615001299
Design concept evaluation is one of the most important phases in the early stages of the design process, as it not only significantly affects the later stages of the design process but also influences the success of the final design solutions. The main objective of this work is to reduce the imprecision in the customer evaluation process and thus improve the effectiveness and objectivity of the product design. This paper proposes a novel way of performing design concept evaluation in which, instead of considering cost and benefit characteristics of design criteria, the work identifies the best concept that satisfies the constraints imposed by the team of designers on the design criteria while fulfilling the maximum number of customer preferences. In this work, a rough-number-enabled modified Vlsekriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method for design concept evaluation is developed by modifying the extended VIKOR method with interval numbers. The proposed technique is labeled modified rough VIKOR (MR-VIKOR) analysis. The work involves two phases of concept evaluation. In the first phase, the relative importance ranking and initial weights of the design criteria are computed from the importance assigned to each design criterion by the designers or decision makers (DM); in the second phase, customer preferences for the generated user needs are captured in the form of rough numbers. The relative importance ranking computed in the first phase, along with the customer preferences, is incorporated in the second phase to select the best concept.
Product design concept evaluation using rough sets and VIKOR method
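The sketch below shows the crisp VIKOR ranking mechanics that MR-VIKOR builds on; the decision matrix, the weights and the benefit-type assumption are invented for illustration, and the rough-number extension described in the abstract is omitted.

# Crisp VIKOR ranking over an illustrative benefit-type decision matrix.
import numpy as np

F = np.array([[7.0, 8.0, 6.0],     # concept A scores on three criteria (higher = better)
              [8.0, 6.0, 7.0],     # concept B
              [6.0, 7.0, 9.0]])    # concept C
w = np.array([0.5, 0.3, 0.2])      # criteria weights from the design team
v = 0.5                            # weight of the "majority rule" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
D = w * (f_best - F) / (f_best - f_worst)        # normalized weighted regret per criterion
S, R = D.sum(axis=1), D.max(axis=1)              # group utility and individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("ranking (best first):", Q.argsort())      # lowest Q identifies the preferred concept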
S1474034616300015
Survivors trapped in void spaces formed when buildings collapse in an earthquake may be saved if search and rescue (SAR) operations are quick. A novel computational approach aims to provide building information that can guide SAR teams, thus minimizing their risk and accelerating operations. The inputs are an ‘as-built’ BIM model of the building before an earthquake and a partial ‘as-damaged’ BIM model of the exterior components after the earthquake derived from a terrestrial laser scan. A large set of possible collapse patterns is generated before the earthquake. After the event, the pattern with geometry most similar to that of the ‘as-damaged’ exterior BIM can be selected rapidly. This paper details the selection methods, which use least sum of point distances and Modal Assurance Criteria (MAC) algorithms, and illustrates their operation on a series of simulated computer models of collapsed structures, thus demonstrating the potential feasibility of the proposed approach.
Interior models of earthquake damaged buildings for search and rescue
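The following sketch computes the Modal Assurance Criterion used in the pattern-selection step, applied here to simplified one-dimensional shape vectors; the vectors and pattern labels are illustrative, not data from the paper.

# MAC-based selection of the collapse pattern most similar to the 'as-damaged' shape.
import numpy as np

def mac(phi_1, phi_2):
    # Modal Assurance Criterion between two shape vectors (1 means identical shape).
    return float(np.abs(phi_1 @ phi_2) ** 2 / ((phi_1 @ phi_1) * (phi_2 @ phi_2)))

as_damaged = np.array([0.0, -0.3, -1.1, -2.0, -2.1])     # scanned exterior displacements
pattern_a  = np.array([0.0, -0.2, -1.0, -1.9, -2.2])     # simulated collapse pattern A
pattern_b  = np.array([0.0, -1.5, -1.6, -1.7, -1.8])     # simulated collapse pattern B

scores = {"pattern_a": mac(as_damaged, pattern_a),
          "pattern_b": mac(as_damaged, pattern_b)}
print(max(scores, key=scores.get), scores)               # most similar pattern is selected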
S1474034616300179
Visualization Environments (VEs) can assist construction professionals in studying the intricate interrelations between construction equipment trajectories and their context. Such VEs typically support them in either reviewing earlier conducted work or experimenting with possible alternatives. In the first case, VEs represent equipment trajectories and their context as sensed during actual construction processes; trying out alternative trajectories in such VEs is problematic. In the second case, environments support experimenting with alternative equipment trajectories within an a priori defined context, but demand significant modeling effort to reconstruct real-world projects. Although combining both functionalities within a single VE would provide the benefits of each class of visualization environment, no such attempts have been made previously. To overcome this gap, this study proposes a method for developing interactive simulation visualization environments suitable both for reviewing conducted trajectories and for experimenting with alternative equipment trajectories. The suggested method concentrates on compaction operations and comprises two steps: (1) application of a “context-actions-trajectory-impact” framework to structure the interrelations between compaction equipment trajectories and their context; and (2) operationalization of an organization scheme to devise a specialized VE with the desired functionality. To evaluate the applicability of the proposed method, we applied it to the asphalt compaction process. We developed a specialized visualization environment in consultation with asphalt paving professionals. Two test sessions, with a paving specialist and two professional roller operators, were conducted with the developed VE. The results from the sessions show that the environment developed according to the method offers the envisioned functionality. As illustrated by the test results, original and demonstrated equipment trajectories are commensurable and able to provide meaningful insights into compaction operations.
Visualization environment for reviewing and experimenting with compaction equipment trajectories in context
S1474034616300192
A great challenge associated with urban growth is to design for energy efficient and healthy built environments. Exploiting the potential for natural ventilation in buildings might improve pedestrian comfort and lower cooling loads, particularly in warm and tropical climates. As a result, predicting wind behavior around naturally ventilated buildings has become important and one of the most common prediction approaches is computational fluid dynamics (CFD) simulation. While accurate wind prediction is essential, simulation is complex and predictions are often inconsistent with field measurements. Discrepancies are due to the large uncertainties associated with modeling assumptions, as well as the high spatial and temporal climatic variability that influences sensor data. This paper proposes metrics to estimate the expected predictive performance of sensor configurations and assesses their usefulness in improving simulation predictions. The evaluations are based on the premise that measurement data are best used for falsifying model instances whose predictions are inconsistent with the data. The potential of the predictive performance metrics is demonstrated using full-scale high-rise buildings in Singapore. The metrics are applied to assess previously proposed sensor configurations. Results show that the performance metrics successfully evaluate the robustness of sensor configurations with respect to reducing uncertainty of wind predictions at other unmeasured locations.
Evaluating predictive performance of sensor configurations in wind studies around buildings
S1474034616300210
Simulation-driven development and optimization of heating, ventilation and air-conditioning (HVAC) systems in passenger rail vehicles is of growing relevance to further increase product quality and energy efficiency. However, today required knowledge of realistic operating conditions is mostly unavailable. This work introduces methodologies and tools to identify representative operating conditions of HVAC systems in passenger rail vehicles. First a Monte-Carlo-simulation approach was employed to acquire a large set of close-to-reality HVAC operating conditions based on simulated train trips. Sampling simulated train trips bypassed the issue of unavailability of appropriate real-world data. Furthermore the approach allowed high flexibility in considering HVAC-relevant factors associated to different categories of trains, rail networks, operation profiles and meteorological conditions. Second, algorithms and methodologies such as k-means clustering and an adapted Finkelstein–Schafer statistical method were implemented to identify representative HVAC operating conditions from the sampled dataset. Final results comprise a set of time-independent HVAC operating points with associated frequencies of occurrence (ROC-points) as well as a set of time-domain signals for representative days (ROC-signals). These results are input for stationary or dynamic system-level simulations, which are used to support design decisions. The developed methodology was exemplarily applied to urban/suburban trains in Switzerland.
Identification of representative operating conditions of HVAC systems in passenger rail vehicles based on sampling virtual train trips
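A minimal sketch of the clustering step that condenses sampled operating conditions into ROC-points with occurrence frequencies; the two features, the synthetic samples and the cluster count are assumptions, and the adapted Finkelstein–Schafer selection of representative days is not shown.

# k-means condensation of sampled HVAC operating points into ROC-points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic operating points: [ambient temperature (deg C), passenger load (0..1)].
samples = np.vstack([
    rng.normal([-5, 0.3], [3, 0.1], size=(300, 2)),
    rng.normal([18, 0.5], [4, 0.2], size=(300, 2)),
    rng.normal([30, 0.8], [3, 0.1], size=(300, 2)),
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(samples)
roc_points = km.cluster_centers_                              # representative operating points
frequencies = np.bincount(km.labels_) / len(samples)          # frequency of occurrence
for p, f in zip(roc_points, frequencies):
    print(f"T = {p[0]:5.1f} deg C, load = {p[1]:.2f}  ->  weight {f:.2f}")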
S1474034616300234
The nature of building projects necessitates that building models be designed by various parties representing disciplines in the Architecture, Engineering, Construction and Owner-operator (AECO) industries. The federated model has emerged as the most practical approach to deal with the various models, especially during the design stage, construction coordination and beyond. One of the issues with the current approach is that there is no real integration between the various models beyond their spatial co-location. This paper proposes a framework to enable fuller integration of otherwise disparate models into integrated models in the federated environment by enabling two critical concepts – a deferred reference and an automatic object snap-in. The concepts are applied in a proposed change to the IFC schema and a standardized procedure to enable the automatic snap-in mechanism. With these concepts, models can be designed and exported independently as valid, perhaps partial, models and yet remain integrated when they are inserted into the federated model. A prototype system has been developed to show the effectiveness of such integrated models.
A framework for fully integrated building information models in a federated environment
S1474034616300246
In building information modeling (BIM), the model is a digital representation of physical and functional characteristics of a facility and contains enriched product information pertaining to the facility. This information is generally embedded into the BIM model as properties for parametric building objects, and is exchangeable among project stakeholders and BIM design programs – a key feature of BIM for enhancing communication and work efficiency. However, BIM itself is a purpose-built, product-centric information database and lacks domain semantics such that extracting construction-oriented quantity take-off information for the purpose of construction workface planning still remains a challenge. Moreover, some information crucial to construction practitioners, such as the topological relationships among building objects, remains implicit in the BIM design model. This restricts information extraction from the BIM model for downstream analyses in construction. To address identified limitations, this study proposes an ontology-based semantic approach to extracting construction-oriented quantity take-off information from a BIM design model. This approach allows users to semantically query the BIM design model using a domain vocabulary, capitalizing on building product ontology formalized from construction perspectives. As such, quantity take-off information relevant to construction practitioners can be readily extracted and visualized in 3D in order to serve application needs in the construction field. A prototype application is implemented in Autodesk Revit to demonstrate the effectiveness of the proposed new approach in the domain of light-frame building construction.
Ontology-based semantic approach for construction-oriented quantity take-off from BIM models in the light-frame building industry
S1474034616300362
Effectively forecasting the overall electricity consumption is vital for policy makers in rapidly developing countries. It can provide guidelines for planning electricity systems. However, common forecasting techniques based on large historical data sets are not applicable to these countries because their economic growth is high and unsteady; therefore, an accurate forecasting technique using limited samples is crucial. To solve this problem, this study proposes a novel modeling procedure. First, the latent information function is adopted to analyze data features and acquire hidden information from collected observations. Next, the projected sample generation is developed to extend the original data set for improving the forecasting performance of back propagation neural networks. The effectiveness of the proposed approach is estimated using three cases. The experimental results show that the proposed modeling procedure can provide valuable information for constructing a robust model, which yields precise predictions with the limited time series data. The proposed modeling procedure is useful for small time series forecasting.
Extended modeling procedure based on the projected sample for forecasting short-term electricity consumption
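The sketch below illustrates the general idea of enlarging a very short series before training a back-propagation network; the perturbation-based augmentation is a naive stand-in for the paper's projected sample generation, and the series values are invented.

# Small-sample forecasting: augment lagged samples, then fit a small BP network.
import numpy as np
from sklearn.neural_network import MLPRegressor

y = np.array([95.0, 103.0, 112.0, 118.0, 127.0, 133.0, 141.0, 150.0])   # TWh, illustrative

X = np.column_stack([y[:-2], y[1:-1]])        # two lagged inputs
t = y[2:]                                     # one-step-ahead target

rng = np.random.default_rng(0)
X_aug = np.vstack([X + rng.normal(0, 0.5, X.shape) for _ in range(20)])  # naive augmentation
t_aug = np.tile(t, 20)

model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=20000, random_state=0)
model.fit(np.vstack([X, X_aug]), np.concatenate([t, t_aug]))
print(model.predict([[y[-2], y[-1]]]))        # forecast for the next period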
S1474034616300465
In construction environments, laser-scanning technologies can perform rapid spatial data collection to monitor construction progress, control construction quality, and support decisions about how to streamline field activities. However, even experienced surveyors cannot guarantee comprehensive laser scanning data collection in the field due to its constantly changing environment, wherein a large number of objects are subject to different data-quality requirements. The current practice of manually planned laser scanning often produces data of insufficient coverage, accuracy, and detail. While redundant data collection can improve data quality, this process can also be inefficient and time-consuming. There are many studies on automatic sensor planning methods for guided laser-scanning data collection in the literature. However, fewer studies exist on how to handle the exponentially large search space of laser scan plans that consider data quality requirements, such as accuracy and level of detail (LOD). This paper presents a rapid laser scan planning method that overcomes the computational complexity of planning laser scans based on diverse data quality requirements in the field. The goal is to minimize data collection time while ensuring that the data quality requirements of all objects are satisfied. An analytical sensor model of laser scanning is constructed to create a “divide-and-conquer” strategy for rapid laser scan planning of dynamic environments, wherein a graph is generated with specific data quality requirements (e.g., levels of accuracy and detail of certain objects) as nodes and spatial relationships between these requirements (e.g., distance, line-of-sight) as edges. A graph-coloring algorithm then decomposes the graph into sub-graphs and identifies “local” optimal laser scan plans for these sub-graphs. A solution aggregation algorithm then combines the local optimal plans to generate a plan for the entire site. Runtime analysis shows that the computation time of the proposed method does not increase exponentially with site size. Validation results of multiple case studies show that the proposed laser scan planning method can produce laser-scanning data with higher quality than data collected by experienced professionals, and without increasing the data collection time.
Rapid data quality oriented laser scan planning for dynamic construction environments
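A minimal sketch of the decomposition step, assuming networkx and an invented requirement graph: greedy coloring splits data-quality requirements into groups that can be planned locally and later aggregated, in the spirit of the divide-and-conquer strategy described above.

# Greedy coloring of a data-quality requirement graph into locally plannable sub-graphs.
import networkx as nx

G = nx.Graph()
# Nodes: object-level requirements (object id : required accuracy/LOD); edges: spatial
# relationships (e.g. shared line-of-sight region) between requirements (all invented).
G.add_nodes_from(["col_A1:high", "wall_N:med", "slab_2F:low", "duct_B:high", "stairs:med"])
G.add_edges_from([("col_A1:high", "wall_N:med"),
                  ("wall_N:med", "slab_2F:low"),
                  ("duct_B:high", "stairs:med"),
                  ("col_A1:high", "duct_B:high")])

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
subgraphs = {}
for node, color in coloring.items():
    subgraphs.setdefault(color, []).append(node)
print(subgraphs)   # each color group is planned separately, then the plans are aggregated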
S1474034616300520
In many large engineering enterprises, searching for files is a high-volume routine activity. Visualization-assisted search facilities can significantly reduce the cost of such activities. In this paper, we introduce the concept of Search Provenance Graph (SPG), and present a technique for mapping out the search results and externalizing the provenance of a search process. This enables users to be aware of collaborative search activities within a project, and to be able to reason about potential missing files (i.e., false negatives) more effectively. We describe multiple ontologies that enable the computation of SPGs while supporting an enterprise search engine. We demonstrate the novelty and application of this technique through an industrial case study, where a large engineering enterprise needs to make a long-term technological plan for large-scale document search, and has found the visualization-assisted approach to be more cost-effective than alternative approaches being studied.
Ontology-assisted provenance visualization for supporting enterprise search of engineering and business files
S1476927113000169
Labor-intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds ratio (OR) and the p-value, may rank variants in a different order. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, a volcano plot is a scatter plot of fold-change against the t-test statistic (or −log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi, or the standardized log(OR)) can be used analogously in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both the OR and the chi-square statistic, which we term “regularized-chi”. This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines that correspond to independent cutoffs for the OR and the chi-square statistic. The regularized-chi incorporates relatively more signal from variants with lower minor allele frequencies than the chi-square statistic does. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes.
Using volcano plots and regularized-chi statistics in genetic association studies
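The sketch below computes the two volcano-plot coordinates (log OR and chi) for toy 2x2 allele-count tables, together with a smooth combined score; the combined score only illustrates filtering along a smooth curve and is not the paper's exact regularized-chi definition, and the counts are invented.

# Volcano-plot coordinates (log OR, chi) per variant, plus an illustrative combined score.
import numpy as np

def odds_ratio_and_chi(a, b, c, d):
    # a, b: case minor/major allele counts; c, d: control minor/major allele counts.
    or_ = (a * d) / (b * c)
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return or_, np.sqrt(chi2)

variants = {"rs0001": (30, 170, 15, 185),
            "rs0002": (8, 192, 2, 198),      # rarer minor allele, larger OR
            "rs0003": (50, 150, 45, 155)}

for snp, counts in variants.items():
    or_, chi = odds_ratio_and_chi(*counts)
    combined = chi * abs(np.log(or_))        # smooth combination of both axes (illustrative)
    print(f"{snp}: OR = {or_:.2f}, chi = {chi:.2f}, combined = {combined:.2f}")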