FileName
Abstract
Title
S0950705116000289
Internet Protocol Television (IP-TV) recommendation systems are designed to provide programs for groups of people, such as a family or a dormitory. Previous methods mainly generate recommendations to a group of people via clustering the common interests of this group. However, these methods often ignore the diversity of a group’s interests, and recommendations to a group of people may not match the interests of any of the group members. In this paper, we propose an algorithm that first identifies users in accounts, then provides recommendations for each user. In the identification process, time slots in each account are determined by clustering the factorized time subspace, and similar activities among these slots are combined to represent members. Experimental results show that the proposed algorithm gives substantially better results than previous approaches.
User identification for enhancing IP-TV recommendation
S0950705116000290
The use of battery storage devices has been advocated as one of the main ways of improving the power quality and reliability of the power system, including minimisation of energy imbalance and reduction of peak demand. Lowering peak demand to reduce the use of carbon-intensive fuels and the number of expensive peaking plant generators is thus of major importance. Self-adaptive control methods for individual batteries have been developed to reduce peak demand. However, these self-adaptive control algorithms are not very efficient without sharing energy among different batteries. This paper proposes a novel battery network system with optimal management of energy between batteries. An optimal management strategy has been implemented using a population-based constraint differential evolution algorithm. Taking advantage of this strategy, the battery network model can remove more peak areas of forecasted demand data than the self-adaptive control algorithm developed for the New York City case study.
A novel battery network modelling using constraint differential evolution algorithm optimisation
S0950705116000447
Intelligent Tutoring Systems (ITSs) are one of a wide range of learning environments, where the main activity is problem solving. One of the most successful approaches for implementing ITSs is Constraint-Based Modeling (CBM). Constraint-based tutors have been successfully used as drill-and-practice environments for learning. More recently, CBM tutors have been complemented with a model derived from the field of Psychometrics. The goal of this synergy is to provide CBM tutors with a data-driven and sound mechanism of assessment, which mainly consists of applying the principles of Item Response Theory (IRT). The result of this synergy is, therefore, a formal approach that allows carrying out assessments of performance on problem solving tasks. Several previous studies have proved the validity and utility of this combined approach with small groups of students, over short periods of time and using systems designed specifically for assessment purposes. In this paper, the approach has been extended and adapted to deal with a large set of students who used an ITS over a long period of time. The main research questions addressed in this paper are: (1) Which IRT models are most suitable for use in a constraint-based tutor? (2) Can data collected from the ITS be used as a source for calibrating the constraint characteristic curves? (3) What is the best strategy to assemble data for calibration? To answer these questions, we have analyzed three years of data from SQL-Tutor. Keywords: Intelligent Tutoring System; Artificial Intelligence in Education; Constraint-Based Modeling; Item Response Theory; Evidence Centered Design; Item Characteristic Curve; Bayesian Network; Constraint Characteristic Curve
Data calibration for statistical-based assessment in constraint-based tutors
S0950705116000526
Dimensionality reduction is an important pre-processing procedure for multi-label classification that mitigates the possible effect of the curse of dimensionality, and it is divided into feature extraction and feature selection. Principal component analysis (PCA) and multi-label dimensionality reduction via dependence maximization (MDDM) represent two mainstream feature extraction techniques for the unsupervised and supervised paradigms. They produce many small and a few large positive eigenvalues, respectively, which could deteriorate the classification performance due to an improper number of projection directions. It has been proved that PCA, which was originally proposed to maximize feature variance, is associated with a least-squares formulation. In this paper, we prove that MDDM with orthonormal projection directions, which originally maximizes the Hilbert–Schmidt independence criterion (HSIC), also falls into the least-squares framework. We then propose a novel multi-label feature extraction method that integrates the two least-squares formulae through a linear combination, maximizing both feature variance and feature-label dependence simultaneously and thus yielding a proper number of positive eigenvalues. Experimental results on eight data sets show that our proposed method achieves better performance, compared with seven other state-of-the-art multi-label feature extraction algorithms.
A multi-label feature extraction algorithm via maximizing feature variance and feature-label dependence simultaneously
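A minimal numerical sketch of the combined objective described in the abstract above, assuming centred data, a binary label matrix Y, and a linear label kernel for the HSIC term; the trade-off weight alpha, the function name and the notation are illustrative rather than the paper's own:

```python
# Hedged sketch: top eigenvectors of a linear combination of the feature-variance
# term (PCA) and the feature-label dependence term (HSIC/MDDM with a linear
# label kernel). `alpha` and all names are assumptions for illustration.
import numpy as np

def combined_projection(X, Y, alpha=0.5, n_components=10):
    """X: (n_samples, n_features); Y: (n_samples, n_labels) binary label matrix."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix (fine for moderate n)
    Xc = H @ X
    variance_term = Xc.T @ Xc                    # feature variance (PCA term)
    dependence_term = Xc.T @ (Y @ Y.T) @ Xc      # feature-label dependence (HSIC term)
    M = (1 - alpha) * variance_term + alpha * dependence_term
    eigvals, eigvecs = np.linalg.eigh(M)         # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]                     # projection directions as columns
```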
S0950705116000551
Physicists study the symmetry of particle space by contrasting motion laws in real space with those in an inversion space created by space inversion techniques. Inspired by this theory, we propose the idea of using local space transformation and dynamic relative position to detect the clustering boundary in high dimensional space. Due to the curse of dimensionality, global space transformation approaches are not only time-consuming, but also fail to keep the original distribution characteristics. Therefore, we invert the spatial positions of the k nearest neighbors and project them onto the high dimensional space coordinate system. To address the lack of statistics that can describe the uniformity of high dimensional space, we propose the Symmetry Statistics based on the Hopkins Statistics, which are employed to judge the uniformity of the k-nearest-neighbor space around the coordinate origin. Moreover, we introduce a filter function to remove some special noises and isolated points. Finally, we use boundary and filter ratios to detect the clustering boundary and propose the corresponding detection algorithm, called Spinver. Experimental results on synthetic and real data sets demonstrate the effectiveness of this algorithm.
Clustering boundary detection for high dimensional space based on space inversion and Hopkins statistics
S095070511600068X
From traditional clusters to cloud systems, job scheduling is one of the most critical factors for achieving high performance in any distributed environment. In this paper, we propose an adaptive algorithm for scheduling modular non-linear parallel jobs in a meteorological Cloud; such jobs have a unique parallelism that can only be configured at the very beginning of execution. Different from existing work, our algorithm takes into account four characteristics of the jobs at the same time: the average execution time, the deadlines of jobs, the number of assigned resources, and the overall system load. We demonstrate the effectiveness and efficiency of our scheduling algorithm through simulations using WRF (the Weather Research and Forecasting model), which is widely used in scientific computing. Our evaluation results show that the proposed algorithm has multiple advantages compared with previous methods, including more than a 10% reduction in execution time, a higher completion ratio in terms of meeting soft deadlines, and a much smaller standard deviation of the average weighted execution time. Moreover, we show that the proposed algorithm can tolerate inaccuracy in system load estimation.
An adaptive algorithm for scheduling parallel jobs in meteorological Cloud
S0950705116000812
In this position paper we propose a consistent and unifying view of all those basic knowledge representation models that are based on the existence of two somehow opposite fuzzy concepts. A number of these basic models can be found in the fuzzy logic and multi-valued logic literature. Here we claim that it is the semantic relationship between two paired concepts that determines the emergence of different types of neutrality, namely indeterminacy, ambivalence and conflict, widely used under different frameworks (possibly under different names). We show the potential relevance of paired structures, generated from two paired concepts together with their associated neutrality, all of them modeled as fuzzy sets. In this way, paired structures can be viewed as a standard basic model from which different models arise. This unifying view should therefore allow a deeper analysis of the relationships between several existing knowledge representation formalisms, providing a basis from which more expressive models can later be developed.
Paired structures in knowledge representation
S0950705116001040
Molten steel temperature prediction is important in the Ladle Furnace (LF). Most existing temperature models have been built on small-scale data, and their accuracy and generalization cannot satisfy industrial production. Nowadays, large-scale data with more useful information are accumulated from the production process; however, these data are noisy. Large-scale, noisy data impose strong restrictions on building a temperature model. To address these two issues, the Bootstrap Feature Subsets Ensemble Regression Trees (BFSE-RTs) method is proposed in this paper. Firstly, low-dimensional feature subsets are constructed based on the multivariate fuzzy Taylor theorem, which saves memory space and means that "smaller-scale" data sets are used. Secondly, to eliminate the noise, the bootstrap sampling approach for independent identically distributed data is applied to the feature subsets. Bootstrap replications consist of smaller-scale and lower-dimensional samples. Thirdly, considering its simplicity, a Regression Tree (RT) is built on each bootstrap replication. Lastly, the BFSE-RTs method is used to establish a temperature model by analyzing the metallurgical process of the LF. Experiments demonstrate that the BFSE-RTs method outperforms other estimators, improves accuracy and generalization, and meets the RMSE and maximum-error requirements of temperature prediction.
Molten steel temperature prediction model based on bootstrap Feature Subsets Ensemble Regression Trees
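The following is a hedged sketch of the bootstrap-plus-feature-subset ensemble idea in the abstract above. The paper derives its low-dimensional subsets from a multivariate fuzzy Taylor theorem; random feature subsets are used here only as a stand-in, and each bootstrap replication is assumed to be non-empty:

```python
# Illustrative ensemble sketch: bootstrap samples combined with low-dimensional
# feature subsets, each fitted with a regression tree and averaged at prediction
# time. Names, subset sizes and the random-subset choice are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_bfse_like_ensemble(X, y, n_estimators=20, subset_size=5, random_state=0):
    rng = np.random.default_rng(random_state)
    n, d = X.shape
    models = []
    for _ in range(n_estimators):
        feats = rng.choice(d, size=min(subset_size, d), replace=False)  # feature subset
        rows = rng.integers(0, n, size=n)                               # bootstrap replication
        tree = DecisionTreeRegressor().fit(X[np.ix_(rows, feats)], y[rows])
        models.append((feats, tree))
    return models

def predict_ensemble(models, X):
    preds = [tree.predict(X[:, feats]) for feats, tree in models]
    return np.mean(preds, axis=0)   # simple average of the tree outputs
```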
S0950705116001167
In this paper a novel approach to reusing units of learning (UoLs) – such as courses, seminars, workshops, and so on – is presented. Virtual learning environments (VLEs) do not usually provide tools to export the designed UoLs in a standardized format, thus making their reuse in a different platform more challenging. Taking into account that many of these VLEs are legacy or proprietary systems, implementing specific export software is usually not feasible. However, these systems have in common that they record the events of students and teachers during the learning process. The approach presented in this paper makes use of these logs (i) to extract the learning flow structure using process mining, and (ii) to obtain the underlying rules that control the adaptive learning of students by means of decision tree learning. Finally, (iii) the process structure and the adaptive rules are recompiled in IMS Learning Design (IMS LD) – the de facto educational modeling language standard. The three steps of our approach have been validated with UoLs from different domains.
Recompiling learning processes from event logs
S0950705116001179
Preserving privacy in the presence of an adversary’s background knowledge is very important in data publishing. The k-anonymity model, while protecting identity, does not protect against attribute disclosure. One of the strong refinements of k-anonymity, β-likeness, does not protect against identity disclosure. Neither model protects against attacks based on background knowledge. This research proposes two approaches for generating k-anonymous β-likeness datasets that protect against identity and attribute disclosures and prevent attacks based on any data correlations between QIs and sensitive attribute values that the adversary may hold as background knowledge. In particular, two hierarchical anonymization algorithms are proposed. Both algorithms apply agglomerative clustering techniques in their first stage in order to generate clusters of records whose probability distributions, as extracted from background knowledge, are similar. In the next phase, k-anonymity and β-likeness are enforced in order to prevent identity and attribute disclosures. Our extensive experiments demonstrate that the proposed algorithms outperform other state-of-the-art anonymization algorithms in terms of privacy and data utility, and the number of unpublished records in our algorithms is less than that of the others. Because well-known information loss metrics fail to precisely measure the data inaccuracies stemming from the removal of records that cannot be published in any equivalence class, this research also introduces an extension of the Global Certainty Penalty metric that considers unpublished records.
Hierarchical anonymization algorithms against background knowledge attack in data releasing
S0950705116001192
Many of our daily life decisions rely on demographic data, which are a good indicator of closeness between people. However, the lack of such data in many online systems forces them to search for explicit or implicit alternatives. Among the many alternatives, collaborative filtering is the common solution, especially for e-commerce applications where many users are reluctant to disclose their demographic data. This paper explores, discusses and examines many user-profiling approaches for demographic recommender systems (DRSs). These approaches span many alternatives for profiling users in terms of attribute types, attribute representations, and the profiling method. We present the layout, description, and appropriate similarity computation methods for each of them. A detailed comparison between these different approaches is given using many experiments conducted on a real dataset. The pros and cons of each approach are illustrated, which may open a window for future work.
User profiling approaches for demographic recommender systems
S0950705116300065
Trust and reputation systems are vital in large open distributed electronic commerce environments. Although various mechanisms have been adopted to guarantee trust between customers and sellers (or platforms), self-interested agents often launch various attacks against trust and reputation systems. As these attacks are usually deceptive, collusive, or strategic, it is difficult to keep trust and reputation systems robust to multifarious attacks. Many defense strategies employ a robust trust network (such as a trustable advisor list) for protecting buyers. However, in the evolution of a trust network, existing strategies consider only the historical ratings of given buyers and advisors, while neglecting the timeliness of these ratings. Besides, only a single trust network is utilized to evaluate all sellers, leading to problems such as a lack of pertinence and quite large deviations in evaluation. This paper proposes a novel pre-evolutionary advisor generation strategy, which first pre-evolves an optimal advisor list for each candidate seller before each trade and then evaluates each seller according to its corresponding list. After evaluating and selecting the seller, the buyer's own advisor list is evolved based on the pre-evolved optimal advisor list of the chosen seller. Two sets of experiments have been designed to verify the general performance of this strategy, including accuracy, robustness, and stability. Results show that our strategy outperforms existing ones, especially when attackers use popular attack strategies such as Sybil, Sybil and Camouflage, and Sybil and Whitewashing. Besides, our strategy is more stable than the compared ones, and its robustness does not change with the ratio of dishonest buyers.
A pre-evolutionary advisor list generation strategy for robust defensing reputation attacks
S0950705116300132
Recommender systems are a growing research field due to their immense potential for helping users to select products and services. Recommenders are useful in a broad range of domains such as films, music, books, restaurants, hotels, social networks, news, etc. Traditionally, recommenders tend to promote certain products or services of a company that are already somewhat popular among the communities of users. An important research concern is how to formulate recommender systems centred on those items that are not very popular: the long tail products. A special case of those items are the ones that result from overstocking by the vendor. Overstock, that is, excess inventory, is a source of revenue loss. In this paper, we propose that recommender systems can be used to liquidate long tail products while maximising the business profit. First, we propose a formalisation of this task with the corresponding evaluation methodology and datasets. Then, we design a specially tailored algorithm centred on getting rid of those unpopular products based on item relevance models. Comparison with existing proposals demonstrates that the advocated method is a significantly better algorithm for this task than other state-of-the-art techniques.
Item-based relevance modelling of recommendations for getting rid of long tail products
S0950705116300193
This paper presents a general framework for short text classification by learning vector representations of both words and hidden topics together. We use a large-scale external data collection, named the "corpus", which is topically consistent with the short texts to be classified, and build a topic model on it with Latent Dirichlet Allocation (LDA). For all the texts of the corpus and the short texts, the topics of words are viewed as new words and integrated into the texts for data enrichment. On the enriched corpus, we can learn vector representations of both words and topics. In this way, feature representations of short texts can be built from the vectors of both words and topics for training and classification. On an open short text classification data set, learning vectors of both words and topics significantly helps reduce the classification error compared with learning only word vectors. We also compared the proposed classification method with various baselines, and the experimental results justify the effectiveness of our word/topic vector representations.
Improving short text classification by learning vector representations of both words and hidden topics
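An illustrative sketch (not the authors' code) of the enrichment step in the abstract above: each word is paired with its most probable LDA topic, the topic id is appended as a pseudo-word, and embeddings are then learned for words and topics together. The function name and the TOPIC_k token scheme are assumptions:

```python
# Hedged sketch: enrich texts with per-word topic tokens, then embed words and
# topics jointly with word2vec. Uses gensim defaults for brevity.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

def enrich_and_embed(tokenized_texts, num_topics=50):
    dictionary = Dictionary(tokenized_texts)
    bow_corpus = [dictionary.doc2bow(t) for t in tokenized_texts]
    lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=num_topics)

    def topic_token(word):
        topics = lda.get_term_topics(dictionary.token2id[word], minimum_probability=0.0)
        if not topics:
            return "TOPIC_NONE"                          # rare: no salient topic for this word
        return "TOPIC_%d" % max(topics, key=lambda t: t[1])[0]

    # interleave each word with its topic pseudo-word
    enriched = [[tok for w in text for tok in (w, topic_token(w))] for text in tokenized_texts]
    return Word2Vec(enriched, min_count=1)               # vectors for both words and topic ids
```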
S0950705116300405
In this paper, hand dorsal images acquired under infrared light are used to design an accurate personal authentication system. Each image is segmented into the palm dorsal region and fingers, which are subsequently used to extract palm dorsal veins and infrared hand geometry features, respectively. A new quality estimation algorithm is proposed that estimates the quality of the palm dorsal region, assigning low values to pixels containing hair or skin texture. The palm dorsal image is enhanced using filtering. For vein extraction, the information provided by the enhanced image and the vein quality is consolidated using a variational approach. The proposed vein extraction can handle the issues of hair, skin texture and variable-width veins so as to extract the genuine veins accurately. Several post-processing techniques are introduced in this paper for accurate extraction of infrared hand geometry features. Matching scores are obtained by matching palm dorsal veins and infrared hand geometry features, and these are eventually fused for authentication. For performance evaluation, a database of 1500 hand images acquired from 300 different hands is created. Experimental results demonstrate the superiority of the proposed system over existing systems.
An accurate infrared hand geometry and vein pattern based authentication system
S0950705116300545
In multi-label learning, since different labels may have distinct characteristics of their own, a multi-label learning approach with label-specific features, named LIFT, has been proposed. However, the construction of label-specific features may increase the feature dimensionality, and a large amount of redundant information exists in the resulting feature space. To alleviate this problem, a multi-label learning approach FRS-LIFT is proposed, which implements label-specific feature reduction with fuzzy rough sets. Furthermore, with the idea of sample selection, another multi-label learning approach FRS-SS-LIFT is also presented, which effectively reduces the computational complexity of label-specific feature reduction. Experimental results on 10 real-world multi-label data sets show that our methods can not only reduce the dimensionality of label-specific features when compared with LIFT, but also achieve satisfactory performance among some popular multi-label learning approaches.
Multi-label learning with label-specific feature reduction
S0955799714003221
In the present work, a novel hybrid FE-Meshless quadrilateral element with continuous nodal stress, named Quad4-RPIMcns, is developed using radial-polynomial basis functions. Quad4-RPIMcns can be regarded as a development of the previous FE-Meshless quadrilateral element with radial-polynomial basis functions (Quad4-RPIM) and the quadrilateral element with continuous nodal stress (Quad4-CNS). Similar to Quad4-RPIM, radial-polynomial basis functions are used to construct the nodal approximations of Quad4-RPIMcns in the context of partition of unity, which avoids the possible singularity problem in constructing nodal approximations. The derivative of the Quad4-RPIMcns shape function is continuous at nodes; therefore, nodal stress can be obtained without any extra operation. Quad4-RPIMcns possesses the Kronecker-delta property, which is very important for imposing essential boundary conditions directly, as in the FEM. The numerical tests in this paper demonstrate that Quad4-RPIMcns gives better accuracy and a higher convergence rate compared to the four-node iso-parametric quadrilateral element (Quad4). Additionally, Quad4-RPIMcns seems to have higher tolerance to mesh distortion than Quad4.
A hybrid ‘FE-Meshless’ QUAD4 with continuous nodal stress using radial-polynomial basis functions
S0955799715001101
The present work uses mean value coordinates to construct the shape functions of a hybrid ‘FE-Meshfree’ quadrilateral element, named Quad4-MVC. Quad4-MVC can be regarded as a development of the ‘FE-Meshfree’ quadrilateral element with radial-polynomial point interpolation (Quad4-RPIM). Similar to Quad4-RPIM, Quad4-MVC has the Kronecker delta property on the boundaries of the computational domain, so essential boundary conditions can be enforced as conveniently as in the finite element method (FEM). The novelty of the present work is to construct nodal approximations using mean value coordinates instead of the radial basis functions used in Quad4-RPIM. Compared to radial basis functions, mean value coordinates do not involve any uncertain parameters, which enhances the stability of the numerical results. Numerical tests in this paper show that the performance of Quad4-RPIM becomes even worse than that of the four-node iso-parametric element (Quad4) when the parameters of the radial basis functions are not chosen properly, whereas the performance of Quad4-MVC is consistently better than that of Quad4.
Construct ‘FE-Meshfree’ Quad4 using mean value coordinates
S0955799716300376
Radial basis function domain-type collocation method is applied for an elliptic partial differential equation with nonlocal multipoint boundary condition. A geometrically flexible meshless framework is suitable for imposing nonclassical boundary conditions which relate the values of unknown function on the boundary to its values at a discrete set of interior points. Some properties of the method are investigated by a numerical study of a test problem with the manufactured solution. Attention is mainly focused on the influence of nonlocal boundary condition. The standard collocation and least squares approaches are compared. In addition to its geometrical flexibility, the examined method seems to be less restrictive with respect to parameters of nonlocal conditions than, for example, methods based on finite differences.
Radial basis function collocation method for an elliptic problem with nonlocal multipoint boundary condition
S0957417413004855
In a complex business world, characterised by globalisation and rapid rhythms of change, understanding supply chain (SC) operation dynamics is crucial. This paper describes a logic-based approach to analysing SC operation dynamics, named SCOlog. SC operation is modelled in a declarative fashion and it is simulated following rule-based execution semantics. This approach facilitates the automated explanation of simulated SC operational behaviours and performance. The automated explanation support provided by SCOlog is found to improve the understanding of the domain for non-SCM experts. Furthermore, SCOlog allows for maintainability and reusability.
SCOlog: A logic-based approach to analysing supply chain operation dynamics
S0957417413009615
This study improves the recognition accuracy and execution time of a facial expression recognition system. Various techniques were utilized to achieve this. The face detection component is implemented by adopting the Viola–Jones descriptor. The detected face is down-sampled by a Bessel transform to reduce the feature extraction space and thereby improve processing time. Gabor feature extraction techniques were then employed to extract thousands of facial features which represent various facial deformation patterns. An AdaBoost-based hypothesis is formulated to select a few hundred of the numerous extracted features to speed up classification. The selected features were fed into a well-designed 3-layer neural network classifier that is trained by a back-propagation algorithm. The system is trained and tested with datasets from the JAFFE and Yale facial expression databases. Average recognition rates of 96.83% and 92.22% are registered on the JAFFE and Yale databases, respectively. The execution time for a 100×100 pixel image is 14.5 ms. The general results of the proposed techniques are very encouraging when compared with others.
A neural-AdaBoost based facial expression recognition system
S0957417413009901
A digital image processing algorithm was developed to identify flow patterns in high speed imaging. This numerical tool allows the fluid dynamic features of compressible flows of relevance in aerospace and space-related applications to be quantified. The technique was demonstrated in a harsh environment with poor image quality and illumination fluctuations. This original pattern recognition tool is based on image binarization and object identification. The geometrical properties of the detected elements are obtained by measuring the characteristics of each object in the binary image. In the case of multiple shock waves or shock bifurcations, a “decision-making” algorithm chooses the best shock-wave path, based on the original image intensity and local pattern orientation. The algorithm was first successfully validated on numerical Schlieren images, where the shock-wave fluctuation was triggered by vortex shedding. The applicability of the algorithm was then evaluated in two Schlieren imaging studies: at the trailing edge of supersonic airfoils and for hypersonic research. The program correctly identified the fuzzy flow features present in all applications.
A decision-making algorithm for automatic flow pattern identification in high-speed imaging
S0957417414005478
Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMF_FS and AGNMF_MK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization
S0957417414006253
The task of social tag suggestion is to recommend tags automatically for a user when he or she wants to annotate an online resource. In this study, we focus on how to make use of the text description of a resource to suggest tags. It is intuitive to select significant words from the text description of a resource as the suggested tags. However, since users can arbitrarily annotate a resource with any tags, tag suggestion suffers from the vocabulary gap issue — the appropriate tags of a resource may be statistically insignificant in, or may not even appear in, the corresponding description. In order to solve the vocabulary gap issue, in this paper we present a new perspective on social tag suggestion. By considering both a description and tags as summaries of a given resource composed in two languages, tag suggestion can be regarded as a translation from description to tags. We propose two methods to estimate the translation probabilities between words in descriptions and tags. Based on the translation probabilities between words and tags estimated for a large collection of description-tags pairs, we can suggest tags according to the words in a resource description. Experiments on real-world datasets indicate that our methods outperform other methods in precision, recall and F-measure. Moreover, our methods are relatively simple and efficient, which makes them practical for Web applications.
Estimating translation probabilities for social tag suggestion
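A hedged, IBM Model 1-style EM sketch of the "description-to-tags as translation" view in the abstract above: it estimates P(tag | word) from (description words, tags) pairs. The paper proposes two estimation methods; this sketch shows only the general word–tag translation-probability idea, with illustrative names and smoothing:

```python
# Illustrative EM estimation of P(tag | word), in the spirit of IBM Model 1.
from collections import defaultdict

def train_translation_probs(pairs, iterations=10):
    """pairs: list of (description_words, tags), each a list of strings."""
    t = defaultdict(lambda: 1e-3)                 # near-uniform initialisation of P(tag | word)
    for _ in range(iterations):
        count = defaultdict(float)                # expected co-occurrence counts
        total = defaultdict(float)                # normaliser per word
        for words, tags in pairs:
            for tag in tags:
                norm = sum(t[(tag, w)] for w in words) or 1e-12
                for w in words:
                    delta = t[(tag, w)] / norm    # soft alignment of this tag to word w
                    count[(tag, w)] += delta
                    total[w] += delta
        t = defaultdict(lambda: 1e-3,
                        {(tag, w): c / total[w] for (tag, w), c in count.items()})
    return t                                      # t[(tag, word)] approximates P(tag | word)
```

Tags for a new description can then be ranked by summing t[(tag, word)] over the description's words, which is one simple way to bridge the vocabulary gap mentioned above.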
S0957417414006472
Traditional clustering algorithms do not consider the semantic relationships among words and thus cannot accurately represent the meaning of documents. To overcome this problem, introducing semantic information from an ontology such as WordNet has been widely used to improve the quality of text clustering. However, several challenges remain, such as synonymy and polysemy, high dimensionality, extracting core semantics from texts, and assigning appropriate descriptions to the generated clusters. In this paper, we report our attempt at integrating WordNet with lexical chains to alleviate these problems. The proposed approach exploits the ontology's hierarchical structure and relations to provide a more accurate assessment of the similarity between terms for word sense disambiguation. Furthermore, we introduce lexical chains to extract a set of semantically related words from texts, which can represent the semantic content of the texts. Although lexical chains have been extensively used in text summarization, their potential impact on the text clustering problem has not been fully investigated. Our integrated approach can identify the theme of documents based on the disambiguated core features extracted and, in parallel, reduce the dimensionality of the feature space. The experimental results of the proposed framework on Reuters-21578 show that clustering performance improves significantly compared with several classical methods.
A semantic approach for text clustering using WordNet and lexical chains
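For illustration only (not the paper's full lexical-chain pipeline), the snippet below shows the kind of WordNet-based sense scoring the abstract above relies on: the sense of an ambiguous word is chosen by its path similarity to the senses of surrounding context words. It assumes the NLTK WordNet corpus is installed; the function name is hypothetical:

```python
# Pick the synset of `word` whose WordNet path similarity to the context is highest.
from nltk.corpus import wordnet as wn

def best_sense(word, context_words):
    best, best_score = None, -1.0
    for sense in wn.synsets(word):
        score = 0.0
        for ctx in context_words:
            sims = [sense.path_similarity(c) or 0.0 for c in wn.synsets(ctx)]
            score += max(sims, default=0.0)       # closest context sense contributes
        if score > best_score:
            best, best_score = sense, score
    return best

print(best_sense("bank", ["river", "water"]))     # expect a shore-related sense
```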
S0957417414007659
Error mapping of machine tools is a multi-measurement task that is planned based on expert knowledge. There are no intelligent tools aiding the production of optimal measurement plans. In previous work, a method of intelligently constructing measurement plans demonstrated that it is feasible to optimise the plans either to reduce machine tool downtime or the estimated uncertainty of measurement due to the plan schedule. However, production scheduling and a continuously changing environment can impose conflicting constraints on downtime and the uncertainty of measurement. In this paper, the use of the produced measurement model to minimise machine tool downtime, the uncertainty of measurement and the arithmetic mean of both is investigated and discussed through the use of twelve different error mapping instances. The multi-objective search plans on average have a 3% reduction in the time metric when compared to the downtime of the uncertainty optimised plan and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Further experiments on a High Performance Computing (HPC) architecture demonstrated that there is on average a 3% improvement in optimality when compared with the experiments performed on the PC architecture. This demonstrates that even though a 4% improvement is beneficial, in most applications a standard PC architecture will result in a valid error mapping plan.
Multi-objective optimisation of machine tool error mapping using automated planning
S0957417415000238
The paper presents the main results of the KOMET (Knowledge and cOntent structuring via METhods of collaborative ontology design) project, which aims to develop a novel paradigm for knowledge structuring based on the interplay between cognitive psychology and ontology engineering. By a knowledge structure (a conceptual model) we mean the main domain concepts and the relations between them, in the form of a graph, map or diagram. This approach considers individual cognitive styles and uses recent advances in knowledge engineering and conceptual structuring; it aims to create new, consistent and structurally holistic knowledge bases for various areas of science and technology. Two stages of research have been completed: research into correlations between the expert’s individual cognitive style and the peculiarities of the expert’s subject domain ontology development; and research into correlations between the expert’s individual cognitive style and group ontology design (including design accomplished by groups of experts with either similar or different cognitive styles). The results of these research stages can be applied to organizing collaborative ontology design (especially for research and learning purposes), data structuring and other group analytical work. Implications for practice are briefly delineated.
Ontology design and individual cognitive peculiarities: A pilot study
S0957417415003012
When planning a new development, location decisions have always been a major issue. This paper examines and compares two modelling methods used to inform a healthcare infrastructure location decision. Two Multiple Criteria Decision Analysis (MCDA) models were developed to support the optimisation of this decision-making process, within a National Health Service (NHS) organisation, in the UK. The proposed model structure is based on seven criteria (environment and safety, size, total cost, accessibility, design, risks and population profile) and 28 sub-criteria. First, Evidential Reasoning (ER) was used to solve the model; then, the processes and results were compared with the Analytical Hierarchy Process (AHP). It was established that using ER or AHP led to the same solutions. However, the scores between the alternatives were significantly different, which impacted the stakeholders’ decision-making. As the processes differ according to the model selected, ER or AHP, it is relevant to establish the practical and managerial implications for selecting one model or the other and to provide evidence of which model best fits this specific environment. To achieve an optimum operational decision, it is argued in this study that the most transparent and robust framework is achieved by merging the ER process with pair-wise comparison, an element of AHP. This paper makes a defined contribution by developing and examining the use of MCDA models to rationalise new healthcare infrastructure location, with the proposed model to be used for future decisions. Moreover, very few studies comparing different MCDA techniques were found; the results of this study enable practitioners to consider even further the modelling characteristics to ensure the development of a reliable framework, even if this means applying a hybrid approach.
Development, test and comparison of two Multiple Criteria Decision Analysis (MCDA) models: A case of healthcare infrastructure location
S0957417415003346
Fuzzy C-means has been utilized successfully in a wide range of applications, extending the clustering capability of the K-means to datasets that are uncertain, vague and otherwise hard to cluster. This paper introduces the Fuzzy C-means++ algorithm which, by utilizing the seeding mechanism of the K-means++ algorithm, improves the effectiveness and speed of Fuzzy C-means. By careful seeding that disperses the initial cluster centers through the data space, the resulting Fuzzy C-means++ approach samples starting cluster representatives during the initialization phase. The cluster representatives are well spread in the input space, resulting in both faster convergence times and higher quality solutions. Implementations in R of standard Fuzzy C-means and Fuzzy C-means++ are evaluated on various data sets. We investigate the cluster quality and iteration count as we vary the spreading factor on a series of synthetic data sets. We also run the algorithm on real-world data sets and, to account for the non-determinism inherent in these algorithms, we record multiple runs while choosing different k parameter values. The results show that the proposed method gives a significant improvement in convergence times (the number of iterations) of up to 40 times (2.1 times on average) compared with the standard algorithm on synthetic datasets and, in general, an associated lower cost function value and Xie–Beni value. A proof sketch of the logarithmically bounded expected cost function value is given.
Fuzzy C-means++: Fuzzy C-means with effective seeding initialization
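A minimal sketch of the careful seeding the abstract above describes, assuming Euclidean distances and standard k-means++-style D² sampling; the spreading factor mentioned in the abstract and the subsequent fuzzy C-means iterations are omitted:

```python
# Seed k cluster representatives so that points far from existing seeds are
# preferred, then hand the seeds to an unchanged fuzzy C-means loop (not shown).
import numpy as np

def plus_plus_seeds(X, k, random_state=0):
    rng = np.random.default_rng(random_state)
    centers = [X[rng.integers(len(X))]]                              # first seed: uniform
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                                        # D^2 sampling weights
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```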
S0957417415003590
The recent developments in cellular networks, along with the increase in services, users and the demand of high quality have raised the Operational Expenditure (OPEX). Self-Organizing Networks (SON) are the solution to reduce these costs. Within SON, self-healing is the functionality that aims to automatically solve problems in the radio access network, at the same time reducing the downtime and the impact on the user experience. Self-healing comprises four main functions: fault detection, root cause analysis, fault compensation and recovery. To perform the root cause analysis (also known as diagnosis), Knowledge-Based Systems (KBS) are commonly used, such as fuzzy logic. In this paper, a novel method for extracting the Knowledge Base for a KBS from solved troubleshooting cases is proposed. This method is based on data mining techniques as opposed to the manual techniques currently used. The data mining problem of extracting knowledge out of LTE troubleshooting information can be considered a Big Data problem. Therefore, the proposed method has been designed so it can be easily scaled up to process a large volume of data with relatively low resources, as opposed to other existing algorithms. Tests show the feasibility and good results obtained by the diagnosis system created by the proposed methodology in LTE networks.
Data mining for fuzzy diagnosis systems in LTE networks
S0957417415003759
In this paper we investigate the role of idioms in automated approaches to sentiment analysis. To estimate the degree to which the inclusion of idioms as features may potentially improve the results of traditional sentiment analysis, we compared our results against two such methods. First, to support idioms as features we collected a set of 580 idioms that are relevant to sentiment analysis, i.e. the ones that can be mapped to an emotion. These mappings were then obtained using a web-based crowdsourcing approach. The quality of the crowdsourced information is demonstrated by high agreement among five independent annotators, calculated using Krippendorff’s alpha coefficient (α = 0.662). Second, to evaluate the results of sentiment analysis, we assembled a corpus of sentences in which idioms are used in context. Each sentence was annotated with an emotion, which formed the basis for the gold standard used for the comparison against the two baseline methods. The performance was evaluated in terms of three measures – precision, recall and F-measure. Overall, our approach achieved 64% and 61% for these three measures in the two experiments, improving the baseline results by 20 and 15 percentage points, respectively. F-measure was significantly improved over all three sentiment polarity classes: Positive, Negative and Other. The most notable improvement was recorded in the classification of positive sentiments, where recall was improved by 45 percentage points in both experiments without compromising precision. The statistical significance of these improvements was confirmed by McNemar’s test.
The role of idioms in sentiment analysis
S0957417415004674
Feature selection is used in many application areas relevant to expert and intelligent systems, such as data mining and machine learning, image processing, anomaly detection, bioinformatics and natural language processing. Feature selection based on information theory is a popular approach due to its computational efficiency, scalability in terms of the dataset dimensionality, and independence from the classifier. Common drawbacks of this approach are the lack of information about the interaction between the features and the classifier, and the selection of redundant and irrelevant features. The latter is due to the limitations of the employed goal functions leading to overestimation of the feature significance. To address this problem, this article introduces two new nonlinear feature selection methods, namely Joint Mutual Information Maximisation (JMIM) and Normalised Joint Mutual Information Maximisation (NJMIM); both these methods use mutual information and the ‘maximum of the minimum’ criterion, which alleviates the problem of overestimation of the feature significance as demonstrated both theoretically and experimentally. The proposed methods are compared using eleven publicly available datasets with five competing methods. The results demonstrate that the JMIM method outperforms the other methods on most tested public datasets, reducing the relative average classification error by almost 6% in comparison to the next best performing method. The statistical significance of the results is confirmed by the ANOVA test. Moreover, this method produces the best trade-off between accuracy and stability.
Feature selection using Joint Mutual Information Maximisation
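A hedged sketch of the JMIM "maximum of the minimum" selection rule for discrete, non-negative integer-coded features: at each step the candidate feature maximising the minimum joint mutual information I(f, s; y) over already-selected features s is added. The plug-in entropy estimator below is for illustration only:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in I(X; Y) for 1-D discrete integer arrays."""
    xy = np.stack([x, y], axis=1)
    _, joint_counts = np.unique(xy, axis=0, return_counts=True)
    pxy = joint_counts / len(x)
    px = np.unique(x, return_counts=True)[1] / len(x)
    py = np.unique(y, return_counts=True)[1] / len(y)
    h = lambda p: -np.sum(p * np.log2(p))
    return h(px) + h(py) - h(pxy)

def joint_mi(f, s, y):
    """I(F, S; Y): treat the pair (F, S) as a single discrete variable."""
    fs = f * (s.max() + 1) + s            # assumes non-negative integer codes
    return mutual_information(fs, y)

def jmim_select(X, y, k):
    remaining = list(range(X.shape[1]))
    first = max(remaining, key=lambda j: mutual_information(X[:, j], y))
    selected = [first]; remaining.remove(first)
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda j: min(joint_mi(X[:, j], X[:, s], y) for s in selected))
        selected.append(best); remaining.remove(best)
    return selected
```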
S0957417415005473
Growing evidence is suggesting that postings on online stock forums affect stock prices and alter investment decisions in capital markets, either because the postings contain new information or because they might have predictive power to manipulate stock prices. In this paper, we propose a new intelligent trading support system based on sentiment prediction by combining text-mining techniques, feature selection and decision tree algorithms in an effort to analyze and extract semantic terms expressing a particular sentiment (sell, buy or hold) from stock-related micro-blogging messages called “StockTwits”. An attempt has been made to investigate whether the power of the collective sentiments of StockTwits might be predicted and how the changes in these predicted sentiments inform decisions on whether to sell, buy or hold the Dow Jones Industrial Average (DJIA) Index. In this paper, a filter approach to feature selection is first employed to identify the most relevant terms in tweet postings. The decision tree (DT) model is then built to determine the trading decisions of those terms or, more importantly, combinations of terms based on how they interact. Then a trading strategy based on a predetermined investment hypothesis is constructed to evaluate the profitability of the term trading decisions extracted from the DT model. The experimental results, based on 122 tweet term trading (TTT) strategies, achieve promising performance, and the TTT strategies dramatically outperform random investment strategies. Our findings also confirm that StockTwits postings contain valuable information and lead trading activities in capital markets.
Quantifying StockTwits semantic terms’ trading behavior in financial markets: An effective application of decision tree algorithms
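An illustrative toy pipeline, not the authors' exact setup: StockTwits-style messages are vectorised into term features, a simple filter selects the most relevant terms, and a decision tree maps term combinations to buy/sell/hold decisions. The example messages, labels and parameter values are hypothetical:

```python
# Toy sketch: term features -> chi-squared filter -> decision tree classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

messages = ["bullish breakout, buying more",
            "selling into strength",
            "holding through earnings"]          # hypothetical toy data
labels = ["buy", "sell", "hold"]

model = make_pipeline(CountVectorizer(),
                      SelectKBest(chi2, k=2),    # stands in for the filter feature selection
                      DecisionTreeClassifier(max_depth=3))
model.fit(messages, labels)
print(model.predict(["buying the dip"]))
```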
S0957417416301786
Sensor network technology is becoming more widespread and sophisticated, and devices with many sensors, such as smartphones and sensor nodes, have been used extensively. Since these devices can easily accumulate various kinds of micrometeorological data, such as temperature, humidity, and wind speed, an enormous amount of micrometeorological data has been accumulated. In recent years, it has been expected that such an enormous amount of data, called big data, will produce novel knowledge and value. Accordingly, many current applications have used data mining technology or machine learning to exploit big data. However, micrometeorological data has a complicated correlation among different features, and its characteristics change variously with time. Therefore, it is difficult to predict micrometeorological data accurately with low computational complexity even if state-of-the-art machine learning algorithms are used. In this paper, we propose a new methodology for predicting micrometeorological data, sliding window-based support vector regression (SW-SVR), that involves a novel combination of support vector regression (SVR) and ensemble learning. To represent complicated micrometeorological data easily, SW-SVR builds several SVRs specialized for each representative data group in various natural environments, such as different seasons and climates, and changes weights to aggregate the SVRs dynamically depending on the characteristics of test data. In our experiment, we predicted the temperature after 1 h and 6 h by using large-scale micrometeorological data in Tokyo. As a result, regardless of testing periods, training periods, and prediction horizons, the prediction performance of SW-SVR was always greater than or equal to that of other general methods such as SVR, random forest, and gradient boosting. At the same time, SW-SVR reduced the building time remarkably compared with those of complicated models that have high prediction performance.
Sliding window-based support vector regression for predicting micrometeorological data
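A simplified sketch of the dynamic-ensemble idea in the abstract above: one SVR is trained per representative data group (here k-means clusters stand in for the paper's seasonal and climatic groups, an assumption) and the experts are weighted at test time by the proximity of each test point to the group centres:

```python
# Hedged sketch: cluster the training data, fit one SVR per cluster, and
# aggregate predictions with distance-based soft weights. Assumes every
# cluster is non-empty; all names and defaults are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def fit_groups(X, y, n_groups=4):
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(X)
    experts = [SVR().fit(X[km.labels_ == g], y[km.labels_ == g]) for g in range(n_groups)]
    return km, experts

def predict_weighted(km, experts, X):
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    w = 1.0 / (d + 1e-8)
    w /= w.sum(axis=1, keepdims=True)             # soft weights favouring nearby groups
    preds = np.column_stack([m.predict(X) for m in experts])
    return np.sum(w * preds, axis=1)
```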
S0963868714000055
Information systems (IS) are strategic in so far as they are used to realize strategic intent. Yet, while much has been said about aligning IS functionality with the strategic intent and how to organizationally implement strategically aligned systems, less is known of how to successfully implement strategic change associated with system use – a truly critical challenge within strategic IS implementation. Drawing on a strategy-as-practice perspective we address this gap by developing a multi-dimensional view of IS strategy, conceptualizing three key challenges in the IS strategy process, to explain how and why a paper mill, despite successfully implementing a strategic production management system, failed to produce intended strategic change. We call this outcome strategy blindness: organizational incapability to realize the strategic intent of implemented, available system capabilities. Using a longitudinal case study we investigate how cognitive rigidity of key actors and fixed, interrelated practices shaped the implementation of the new production system. We also identify core components and dynamics that constitute a richer multi-dimensional view of the IS strategy implementation (alignment) process. In particular, we identify three salient factors that contribute to strategy blindness – mistranslation of intent, flexibility of the IT artifact and cognitive entrenchment – and discuss how they affect strategic implementation processes. We conclude by discussing implications of our findings for IS strategy theory and practice, especially the contribution of strategy-as-practice to this stream of research.
Information systems use as strategy practice: A multi-dimensional view of strategic information system implementation and use
S096599781300029X
A parallel computing region-growing algorithm for surface reconstruction from unorganized point clouds is proposed in this research. The traditional region-growing algorithm is a sequential process and needs to update topology information continuously to maintain the boundaries of the growing region. This constraint becomes a bottleneck for efficiency improvement. The proposed GPU-based region-growing algorithm decomposes the traditional sequence and re-plans a specific framework for the purpose of utilizing parallel computation. A graphics card with multiple processing units is then used to build triangles in the parallel computing mode. In our GPU-based reconstruction process, each sampling point is regarded as an independent seed and expands simultaneously until all surrounding patches overlap each other. Following this, the overlapping patches are removed and holes are filled by the GPU-based calculation. Finally, a complete model is created. In order to validate the proposed algorithm, an unorganized point cloud was obtained by a 3D scanner and then reconstructed using the parallel computing region-growing algorithm. According to the results obtained, the algorithm proposed here shows 10 times better performance when compared to the traditional region-growing method.
A region-growing algorithm using parallel computing for surface reconstruction from unorganized points
S0965997813000318
Nature has provided inspiration for most of the man-made technologies. Scientists believe that dolphins are second only to humans in smartness and intelligence. Echolocation is the biological sonar used by dolphins and several other kinds of animals for navigation and hunting in various environments. This ability of dolphins is mimicked in this paper to develop a new optimization method. There are different meta-heuristic optimization methods, but in most of these algorithms parameter tuning takes considerable user time, prompting scientists to develop ideas to improve these methods. Studies have shown that meta-heuristic algorithms have certain governing rules and that knowing these rules helps to get better results. Dolphin echolocation takes advantage of these rules and outperforms many existing optimization methods, while it has few parameters to be set. The new approach leads to excellent results with low computational effort.
A new optimization method: Dolphin echolocation
S096599781300032X
This work describes a technique for generating two-dimensional triangular meshes using distributed memory parallel computers, based on a master/slaves model. This technique uses a coarse quadtree to decompose the domain and a serial advancing front technique to generate the mesh in each subdomain concurrently. In order to advance the front into a neighboring subdomain, each subdomain is shifted in a Cartesian direction, and the same advancing front approach is performed on the shifted subdomain. This shift-and-remesh procedure is repeatedly applied until no more mesh can be generated, shifting the subdomains in different directions each turn. A finer quadtree is also employed in this work to help estimate the processing load associated with each subdomain. This load estimation technique produces results that accurately represent the number of elements to be generated in each subdomain, leading to proper runtime prediction and to a well-balanced algorithm. The meshes generated with the parallel technique have the same quality as those generated serially, within acceptable limits. Although the presented approach is two-dimensional, the idea can be easily extended to three dimensions.
A distributed-memory parallel technique for two-dimensional mesh generation for arbitrary domains
S0965997813000367
At the conceptual design stage, the automobile body is evaluated by a simplified frame structure consisting of thin-walled beams (TWBs). In automobile practice, design engineers mostly rely on their experience and intuition when making decisions on the cross-sectional shape of TWBs. This paper therefore presents a cross-sectional shape optimization method in order to achieve high-stiffness, lightweight TWBs. Firstly, cross-sectional property formulations are summarized and reviewed. Secondly, we build a shape optimization model to minimize the cross-sectional area while satisfying the stiffness and manufacturing demands. The objective and constraints are nonlinear polynomial functions of the point coordinates defining the cross-sectional shape. A genetic algorithm is introduced to solve this nonlinear optimization problem. Thirdly, object-oriented programming and design patterns are adopted to design and implement the software framework. Lastly, a numerical example is used to verify the presented method. This software, "SuperBeam" for short, is released for free and speeds up the conceptual design of the automobile body.
An object-oriented graphics interface design and optimization software for cross-sectional shape of automobile body
S0965997813000379
In this paper a simple and efficient algorithm for computing Boolean operations on polygons is presented. The algorithm works with almost any kind of input polygons: concave polygons, polygons with holes, several contours and self-intersecting edges. Important topological information, as the holes of the result polygon, is computed.
A simple algorithm for Boolean operations on polygons
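The abstract above describes a new algorithm for polygon Boolean operations; the snippet below is not that algorithm but merely illustrates, using shapely, what such operations compute, including a result polygon with a hole:

```python
# Union, intersection and difference of two polygons; the difference here
# produces a polygon with one interior ring (a hole).
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])

print(a.union(b).area)              # 16.0 (b lies inside a)
print(a.intersection(b).area)       # 4.0
diff = a.difference(b)              # square with a square hole
print(len(diff.interiors))          # 1 interior ring (the hole)
```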
S0965997813000380
This paper presents an expert system that was developed with the goal of classifying the surface finish of self-compacting concrete (SCC) precast elements. The classification concerns the presence of bugholes, which are imperfections that appear on a concrete surface after demoulding. The surface evaluation is based on digital images that are processed by an image analysis tool. This tool defines the parameters that will be evaluated by a fuzzy logic-based classification tool. The classification is based on a novel scale that considers the degree of treatment necessary to achieve proper surface finish. The proposed system was applied to evaluate SCC precast elements. The results from the case study highlight the success of the proposed expert system in evaluating SCC finish. In addition, the system proved to be of great help when used for systematic analyses of the effect of mixture proportions on surface quality.
Expert system applied for classifying self-compacting concrete surface finish
S0965997813000392
In order to take advantage of trends such as genetic design, students need to be familiar, and comfortable, with the concept of parametric computer models and how their parameters relate to physical forms. Virtual learning software can aid in creating that understanding and help support studies at all undergraduate levels in engineering design disciplines. As an example, hydropower rotors are complex, and their analysis largely relies on computational study of geometries for single rotor types. That problem can be significantly overcome using a parametric algorithm capable of creating an almost infinite variety of computer models. Therefore, this paper investigates the shared parametric properties of common crossflow hydropower rotor geometries, resulting in a generic model that is then used to illustrate application in real-time interactive virtual learning software capable of producing accurate stereoscopic images and stereolithography files for 3D printing, as well as linking to constructive solid geometry software for slower, but more detailed, analysis. A pilot survey of student attitudes to the virtual learning prototype and resulting geometries is then discussed, illustrating the potential for 3D graphics as an effective addition to virtual learning of parametric design methods, and giving initial direction for future work.
Parametric virtual laboratory development: A hydropower case study with student perspectives
S0965997813000409
The paper aims at developing a simple two-step homogenization scheme for prediction of elastic properties of a high performance concrete (HPC) in which microstructural heterogeneities are distinguished with the help of nanoindentation. The main components of the analyzed material include blended cement, fly-ash and fine aggregate. The material heterogeneity appears on several length scales as well as porosity that is accounted for in the model. Grid nanoindentation is applied as a fundamental source of elastic properties of individual microstructural phases in which subsequent statistical evaluation and deconvolution of phase properties are employed. The multilevel porosity is evaluated from combined sources, namely mercury intrusion porosimetry and optical image analyses. Micromechanical data serve as input parameters for analytical (Mori–Tanaka) and numerical FFT-based elastic homogenizations at microscale. Both schemes give similar results and justify the isotropic character of the material. The elastic stiffness matrices are derived from individual phase properties and directly from the grid nanoindentation data with very good agreement. The second material level, which accounts for large air porosity and aggregate, is treated with analytical homogenization to predict the overall composite properties. The results are compared with macroscopic experimental measurements received from static and dynamic tests. Also here, good agreement was achieved within the experimental error, which includes microscale phase interactions in a very dense heterogeneous composite matrix. The methodology applied in this paper gives promising results for the better prediction of HPC elastic properties and for further reduction of expensive experimental works that must be, otherwise, performed on macroscopic level.
Application of multiscale elastic homogenization based on nanoindentation for high performance concrete
S0965997813000410
The reinforced-concrete slab road bridges in their simplest form are used as a cost-effective solution for local infrastructure in various parts of the world. Since the reinforced concrete slab is the load-carrying element whose upper surface is directly exposed to both road traffic and weather, the integrity of the upper layer of the concrete slab becomes the decisive factor for estimation of durability of these bridges. This paper presents a fuzzy-logic based approach to estimation of stiffness reduction of concrete in the compressed zone of the cross section which takes into account the combined effect of the cyclic loading, freeze–thawing and chloride contamination. The fuzzy logic is used for derivation of numerical relations from available experimental data on the three relevant effects, which can be readily implemented in or used with existing finite element codes. The proposed approach is demonstrated in an example of a model bridge subjected to moderate road traffic and mountainous climatic conditions.
Fuzzy modeling of combined effect of winter road maintenance and cyclic loading on concrete slab bridge
S0965997813000422
Several cement and alkali activated fly ash based concrete samples are examined in this paper with emphasis on their fracture properties. These are first obtained from an extensive experimental program. The measured loading curves are then compared with those derived numerically in the framework of an inverse approach. Here, the artificial neural network and the ATENA finite element code are combined to constitute the optimization driver that allows for a reliable determination of the modulus of elasticity, fracture energy, and tensile strength of individual concretes. A brief introduction to the numerical analysis of fiber reinforced specimens again in conjunction with inverse analysis is also provided.
Fracture properties of cement and alkali activated fly ash based concrete with application to segmental tunnel lining
S0965997813000434
This paper presents a density control based adaptive hexahedral mesh generation algorithm for three-dimensional models. To improve the mesh quality of hexahedral elements, a set of improved 27-refinement templates is proposed and the refinement modes of these templates are given. A set of effective refinement templates for the 8-refinement based mesh generation algorithm is also employed. The corresponding data structure and the procedures for the realization of the algorithm are presented. A buffer layer is inserted on the concave domains to resolve the propagation problems. Finally, the effectiveness and robustness of the algorithm are demonstrated using several examples.
Incorporating improved refinement techniques for a grid-based geometrically-adaptive hexahedral mesh generation algorithm
S0965997813000513
This paper demonstrates an application of a hybrid fuzzy-genetic system in the optimisation of lightweight cabled-truss structures. These structures are described as a system of cables and triangular bar formations jointed at their ends by hinged connections to form a rigid framework. The optimised lightweight structure is determined through a stochastic discrete topology and sizing optimisation procedure that uses ground structure approach, nonlinear finite element analysis, genetic algorithm, and fuzzy logic. The latter is used to include expertise into the evolutionary search with the aim of filtering individuals with low survival possibility, thereby decreasing the total number of evaluations. This is desired because cables, which are inherently nonlinear elements, demand the use of iterative procedures for computing the structural response. Such procedures are computationally costly since the stiffness matrix is evaluated in each iteration until the structure is in equilibrium. Initially, the proposed system is applied to truss benchmarks. Next, the use of cables is investigated and the system’s performance is compared against genetic algorithms. The results indicate that the hybrid system considerably decreased the number of evaluations over genetic algorithms. Also, cabled-trusses showed a significant improvement in structural mass minimisation when compared with trusses.
Hybrid fuzzy-genetic system for optimising cabled-truss structures
S0965997813000525
Detailed simulation and visualization of crane activities have recently been introduced to help engineers identify potential problems with critical erection tasks; however, the development of real-time erection visualizations for different types of cranes and lifting objects is time-consuming, and usually requires a certain amount of effort in modeling and setting up the environment. This research proposes a configurable model which is reusable, fast-prototyping, and extendable to support real-time visualization of the erection process. The developed crane model is divided into three modules which can be reconfigured for different erection tasks. Each module is defined using multiple rigid bodies and the joint constraints of multibody dynamics. The proposed modeling method can also be easily adapted to existing physics engines. To evaluate the feasibility of the proposed method, we observed the processes involved in a common erection activity and compared them with the visualization results obtained using the proposed method. We also simulated a cooperative-crane scenario taken from an actual construction project to demonstrate its usability. We found that the proposed modeling method was easily able to visualize various erection activities with different cranes and configurations.
Configurable model for real-time crane erection visualization
S0965997813000549
In this article, a new methodology is presented to obtain representation models for an a priori relation $z = u(x_1, x_2, \ldots, x_n)$ (1), with a known experimental dataset $\{(z^i, x_1^i, x_2^i, \ldots, x_n^i)\}$, $i = 1, 2, \ldots, p$. In this methodology, a potential energy is initially defined over each possible model for the relationship (1), which allows the application of Lagrangian mechanics to the derived system. The solution of the Euler–Lagrange equations of this system yields the optimal solution according to the minimal action principle. The defined Lagrangian corresponds to a continuous medium, over which an n-dimensional finite element model is applied, so a solution of the problem is obtained by solving a compatible and determined symmetric linear equation system. The computational implementation of the methodology has resulted in an improvement in the process of obtaining the representation models published previously by the authors.
Generation of representation models for complex systems using Lagrangian functions
S0965997813000550
Laying a crude oil pipeline together with a products pipeline in the same ditch is a new technology that saves investment and protects the environment. In order to provide a reference for construction and operation optimization, the major part of this paper investigates the thermal factors influencing the crude oil temperature in the double-pipeline system using a computational fluid dynamics methodology. A two-dimensional rectangular region including the two pipelines is selected as the computational domain. Heat transfer models are proposed to obtain the temperature field distribution and the crude oil temperature drop. The impacts of the pipeline interval, the crude oil temperature at the outlet of the heating station, the diameter of the crude oil pipeline and the atmospheric temperature are captured and analyzed in detail. Nomenclature: thermophysical properties (wax concentration, specific heat capacity, molecular diffusion coefficient, thermal conductivity, density, viscosity), geometric quantities (pipe inner diameter, radius, wax deposit thickness), flow and heat transfer quantities (velocity, pressure, temperature, heat flux, heat transfer coefficients, Nusselt, Prandtl and Reynolds numbers), and subscripts for air, pipe wall, insulating layer, corrosion protective covering, wax, water and soil.
Simulation analysis of thermal influential factors on crude oil temperature when double pipelines are laid in one ditch
S0965997813000562
In order to demonstrate and analyze the characteristics of the space vector pulse width modulation (SVPWM) technology, several key issues are discussed. The derivation of the harmonic distortion factor, the numerical analysis method and the program code are presented. The main components of the simulation model are discussed, and the key code is given. The theoretical and simulated spectra of the phase voltage and the DC bus current are presented. The program code is universal and can be applied to many other kinds of SVPWM strategies with minor modification. The simulation results show that the number of simulation runs strongly affects the precision of the Monte Carlo analysis, and that the simulation model is feasible and effective. The theoretical numerical analysis formulas of the SVPWM strategy are always complex, while the simulation method is convenient, so the two methods are best used in conjunction.
Numeric analysis and simulation of space vector pulse width modulation
S0965997813000574
Renewable energy technologies are generally complex, requiring nonlinear simulation concepts. This holds true especially for solar updraft power plants, the scope of this treatment, which starts with a short introduction to their functioning. Then the basic physical conditions of the thermo-fluiddynamic processes in such plants, including the solar radiation power transfers, are summarized, and the expected numerical difficulties in the computer simulation are noted. This is followed by a discussion of the structure of the program code to be developed, which computes the power generation of such plants with sufficient accuracy and at fast computing speed, in spite of strong nonlinearities. Finally, some economic statements for solar updraft power plants close the treatment.
An integrated computer model of a solar updraft power plant
S0965997813000616
Truss-Z (TZ) is a concept of a modular system for creating free-form links and ramp networks. It is intended as a universal transportation system for cyclists and pedestrians, especially ones with strollers or carts, and in particular for persons in wheelchairs, the elderly, etc. In other words, TZ is for people who have difficulties using regular stairs or escalators. With only two types of modules, TZ can be designed for nearly any situation and is therefore particularly suited for retrofitting to improve the mobility, comfort and safety of the users. This paper presents an application of an evolution strategy (ES) and a genetic algorithm (GA) for optimization of the planar layout of a TZ linkage connecting two terminals in a given environment. The elements of the environment, called obstacles, constrain the possible locations of the TZ modules. The criteria of this multi-objective optimization are: that the number of modules be the smallest, which can be regarded as quantitative economical optimization, and the condition that none of the modules collides with any other objects, which can be regarded as qualitative satisfaction of the geometrical constraints. Since TZ is modular, the optimization of its layout is discrete and therefore has a combinatorial characteristic. The encoding of a planar TZ path, the selection method, the objective (cost) function and the genetic operations are introduced. A number of trials have been performed; the results generated by the ES and the GA are compared and evaluated against a backtracking-based algorithm and random search. The convergence of solutions is discussed and interpreted. A visualization of a realistic implementation of the best solution is presented. Further evaluation of the method on three other representative layouts is presented and the results are briefly discussed.
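As a rough illustration of the kind of encoding and cost function the abstract describes, the sketch below evolves a sequence drawn from two module types with a simple genetic algorithm; the unit-step geometry, obstacle set, penalty weights and GA settings are all assumptions, not the paper's ES/GA formulation.

    # Toy GA for a modular path layout, loosely following the Truss-Z idea: a
    # chromosome is a sequence of two module types; the cost combines the module
    # count, collisions with obstacles and the miss distance to the target.
    import random, math

    OBSTACLES = [(3, 1), (3, 2), (3, 3)]          # hypothetical blocked cells
    START, TARGET = (0, 0), (6, 2)
    TYPES = (-45, +45)                            # two module types: turn left / right

    def trace(chromosome):
        """Return the grid cells visited by a chromosome of module types."""
        x, y, heading = *START, 0.0
        cells = [START]
        for turn in chromosome:
            heading += math.radians(turn)
            x, y = x + math.cos(heading), y + math.sin(heading)
            cells.append((round(x), round(y)))
        return cells

    def cost(chromosome):
        cells = trace(chromosome)
        collisions = sum(c in OBSTACLES for c in cells)
        miss = math.dist(cells[-1], TARGET)
        return len(chromosome) + 50 * collisions + 10 * miss

    def evolve(pop_size=60, generations=200):
        pop = [[random.choice(TYPES) for _ in range(random.randint(6, 14))]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = a[: random.randrange(1, len(a))] + b[random.randrange(1, len(b)):]
                if random.random() < 0.2:         # point mutation
                    child[random.randrange(len(child))] = random.choice(TYPES)
                children.append(child)
            pop = parents + children
        return min(pop, key=cost)

    best = evolve()
    print(cost(best), best)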
Application of evolutionary algorithms for optimum layout of Truss-Z linkage in an environment with obstacles
S096599781300063X
In order to follow modern trends in contemporary building architecture, which is moving beyond the limits of current fire design models, the assumption of homogeneous temperature conditions used for structural fire analysis needs to be validated. This paper describes how the temperature distribution in a medium-size fire compartment was investigated experimentally by conducting a fire test in a two-storey experimental building in September 2011 in the Czech Republic. In the upper floor, a scenario of travelling fire was prepared. It was observed that, as flames spread across the compartment, considerable temperature gradients appeared. A numerical simulation of the travelling fire test conducted using FDS (Fire Dynamics Simulator) has been compared with a simulation of a compartment fire under uniform temperature conditions to highlight the potential impact of gas temperature heterogeneity on structural behaviour. The temperature measurements from the fire test have been used for validation of the numerical simulation of the travelling fire. The fire test has provided important data for a design model of travelling fire and shown that its impact on structural behaviour is not in agreement with the assumption of homogeneous temperature conditions.
Temperature heterogeneity during travelling fire on experimental building
S0965997813000641
Temperature and early-age mechanical properties in hydrating concrete structures present a significant risk for cracking, having a major impact on concrete durability. In order to tackle these phenomena, a multiscale analysis is formulated. It accounts for a high variety of cement properties, concrete composition, structure geometry and boundary conditions. The analysis consists of two steps. The first step focuses on the evolution of moisture and temperature fields. An affinity hydration model accompanied with non-stationary heat and moisture balance equations are employed. The second step contains quasi-static creep, plasticity and damage models. It imports the previously calculated moisture and temperature fields into the mechanical problem in the form of a staggered solution. The whole model has been implemented in the ATENA software, including also the effect of early-age creep, autogenous and drying shrinkage. Validation on selected structures shows a good prediction of temperature fields during concrete hardening and a reasonable performance of the mechanical part.
Multiscale hydro-thermo-mechanical model for early-age and mature concrete structures
S0965997813000744
This paper presents a methodology and software tools for the parametric design of complex architectural objects, called digital or algorithmic forms. In order to provide a flexible tool, the proposed design philosophy involves two open source utilities, DONKEY and MIDAS, written in the Grasshopper algorithm editor and C++, respectively, that are linked with the scripting-based architectural modellers Rhinoceros and IntelliCAD and the open source finite element solver OOFEM. The emphasis is put on the structural response in order to provide architects with a consistent learning framework and an insight into the structural behaviour of the designed objects. As demonstrated on three case studies, the proposed modular solution is capable of handling objects of considerable structural complexity, thereby accelerating the process of finding procedural design parameters from the order of weeks to days or hours.
A framework for integrated design of algorithmic architectural forms
S0965997813000756
This paper describes the calibration process for the uncoupled material model of ductile damage designed by Bai and Wierzbicki. This problem has been investigated within the project “Identification of ductile damage parameters for nuclear facilities”. The project includes material experiments, designing the shape of the samples, and using the samples for calibrating the material constants of selected ductile damage models. The calibration process for the Bai–Wierzbicki material model is based on fifteen tested samples consistent with the literature. In this paper, we discuss some promising modifications of basic calibration approaches for uncoupled models. These modifications are based on variants in the formulation of the target function expressing the calibration error. The result of the calibration process was verified through FE simulation of each specimen and comparison with the experimental data. A material model describing ductile damage in a wide range of stress states was sought within the project. For this purpose, the model was tested on some other specimens which exhibit higher stress concentration and showed the limitations of the Bai and Wierzbicki uncoupled model.
Calibration of fracture locus in scope of uncoupled elastic–plastic-ductile fracture material models
S0965997813000768
This paper presents a semi-analytical estimate of the response of a grandstand occupied by an active crowd and by a passive crowd. Filtered Gaussian white noise processes are used to approximate the loading terms representing an active crowd. Lumped biodynamic models with a single degree of freedom are included to reflect passive spectators occupying the structure. The response is described in terms of the first two moments, employing the Itô formula and the state augmentation method for the stationary time domain solution. The quality of the approximation is compared on the basis of three examples of varying complexity using Monte Carlo simulation based on a synthetic generator available in the literature. For comparative purposes, there is also a brief review of frequency domain estimates.
The response of grandstands driven by filtered Gaussian white noise processes
S096599781300077X
This study proposes a new robust multi-objective maintenance planning approach for deteriorating bridges under uncertainty in the performance degradation model. The main focus is to guarantee the performance requirements of the bridge by the scheduled maintenance interventions even in the presence of uncertainty in the time-dependent performance degradation model. The uncertainties are modeled as perturbations of the system parameters. These are simulated by a sampling method and incorporated into the GA-based multi-objective optimization framework, which produces a set of optimal preventive maintenance scenarios. In order to focus the search on the most preferable region, the performance models of the bridge components are all integrated into a single overall performance measure by using the preference-based objective-space reduction method. A numerical example of a typical prestressed concrete girder bridge is provided to demonstrate the new robust maintenance scheduling approach. For comparison purposes, non-robust multi-objective maintenance planning without considering uncertainty of the bridge performance is also provided. It is verified that the proposed approach can produce successfully performing maintenance scenarios under the perturbation of bridge condition grades while maintaining a well-balanced maintenance strategy both in terms of bridge performance and maintenance cost.
Robust multi-objective maintenance planning of deteriorating bridges against uncertainty in performance model
S0965997813000781
This paper mainly focuses on the impact contact problem during the docking process of a flexible probe. The docking dynamic model based on the flexible probe is developed with the help of the Lagrange analytical method. The modal equation of the flexible rod is derived with the effect of the rigid counterweight considered. The contact models of docking impact in both the normal and tangential directions are presented in detail. The time history of the impact contact force, the shape and size of the contact area, the distribution of contact stress and local deformation, and the effect of structural flexibility on the contact surface are discussed based on the theoretical model. Moreover, a ground-based docking impact experiment is conducted to verify the theoretical results. Nomenclature: geometric and inertial properties of the chaser and target satellites and the docking probe (beam length, cone angle, counterweight, masses, moments of inertia), beam modal quantities (axial displacement, lateral deflection, mode functions, modal coordinates), contact quantities (contact ellipse semi-axes, elliptic integrals, relative curvatures, contact pressure distribution, maximum contact stress, normal compression, total external load, tangential displacement, equivalent radius) and material constants (elastic moduli, Poisson ratios, sliding friction coefficient).
Contact analysis of flexible beam during space docking process
S0965997813000793
Overlapping and iteration between development activities are the main causes of complexity in the product development (PD) process. Overlapping may not only reduce the duration of a project but also create rework risk, while iteration increases the project duration and cost. In order to balance duration and cost, this article presents four types of time models from the angle of time overlapping and activity dependency relationships based on the Collaboration Degree Design Structure Matrix (CD-DSM) and builds the cost model considering the negation cost. On the basis of the formulated model, a hybridization of the Pareto genetic algorithm (PGA) and the variable neighborhood search (VNS) algorithm is proposed to solve the bi-objective process optimization problem of a PD project for reducing the project duration and cost. The VNS strategy is implemented after the genetic operations of crossover and mutation to improve the exploitation ability of the algorithm. An industrial example, an LED module PD project in an optoelectronic enterprise, is provided to illustrate the utility of the proposed approach. The optimization model minimizes the project duration and cost associated with overlapping and iteration and yields a Pareto-optimal set of project activity sequences for project managers to make decisions according to different business purposes. The simulation results of two different problems show that the proposed approach has good convergence and robustness.
Pareto process optimization of product development project using bi-objective hybrid genetic algorithm
S0965997813000902
This paper presents a statistical framework for assessing wireless systems performance using hierarchical data mining techniques. We consider WCDMA (wideband code division multiple access) systems with two-branch STTD (space time transmit diversity) and 1/2 rate convolutional coding (forward error correction codes). Monte Carlo simulation estimates the bit error probability (BEP) of the system across a wide range of signal-to-noise ratios (SNRs). A performance database of simulation runs is collected over a targeted space of system configurations. This database is then mined to obtain regions of the configuration space that exhibit acceptable average performance. The shape of the mined regions illustrates the joint influence of configuration parameters on system performance. The role of data mining in this application is to provide explainable and statistically valid design conclusions. The research issue is to define statistically meaningful aggregation of data in a manner that permits efficient and effective data mining algorithms. We achieve a good compromise between these goals and help establish the applicability of data mining for characterizing wireless systems performance.
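The WCDMA/STTD system with convolutional coding simulated in the paper is far richer than anything that fits here; as a toy stand-in, the sketch below estimates the bit error probability of uncoded BPSK over AWGN by Monte Carlo across a range of SNRs, producing the kind of (SNR, BEP) records that would populate such a performance database.

    # Toy Monte Carlo bit-error-probability sweep (uncoded BPSK over AWGN), standing
    # in for the far richer WCDMA/STTD simulation described in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def bep_bpsk(snr_db, n_bits=200_000):
        """Estimate the bit error probability at a given Eb/N0 (dB) by Monte Carlo."""
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                    # BPSK mapping {0,1} -> {-1,+1}
        ebn0 = 10 ** (snr_db / 10)
        noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
        decisions = (symbols + noise) > 0
        return np.mean(decisions != bits)

    database = [{"snr_db": s, "bep": bep_bpsk(s)} for s in range(0, 11, 2)]
    for row in database:
        print(row)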
Using hierarchical data mining to characterize performance of wireless system configurations
S0965997813000914
An open-source computer algebra system toolbox devoted to the analysis and synthesis for a wide class of nonlinear time-delay systems is introduced. This contribution provides a practical way to carry out all the computations used to characterize several properties of the systems under consideration, which involve elements in a non-commutative ring of polynomials, and an extended version of the Lie brackets. The package usage will be illustrated with some examples.
A computer algebra system for analysis and control of nonlinear time-delay systems
S0965997813000926
Cellular automata can be applied to solve several problems in a variety of areas, such as biology, chemistry, medicine, physics, astronomy, economics, and urban planning. The automata are defined by simple rules that give rise to behavior of great complexity running on very large matrices. 2D applications may require more than 10⁶ ×10⁶ matrix cells, which are usually beyond the computational capacity of local clusters of computers. This paper presents a solution for traditional cellular automata simulations. We propose a scalable software framework, based on cloud computing technology, which is capable of dealing with very large matrices. The use of the framework facilitates the instrumentation of simulation experiments by non-computer experts, as it removes the burden related to the configuration of MapReduce jobs, so that researchers need only be concerned with their simulation algorithms.
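The framework's API is not described in the abstract; the sketch below merely illustrates one synchronous update of a 2D cellular automaton (a Game-of-Life rule) over a grid split into strips with one-row halos, the kind of block partitioning a MapReduce job could distribute. All names and the strip count are illustrative.

    # One synchronous Game-of-Life step over a grid split into horizontal strips
    # with one-row halos (fixed zero boundary around the whole grid).
    import numpy as np

    def step_block(block_with_halo):
        """Update the interior of a strip given a copy padded with halo rows/columns."""
        b = block_with_halo
        neighbours = sum(np.roll(np.roll(b, di, 0), dj, 1)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0))
        alive = (neighbours == 3) | ((b == 1) & (neighbours == 2))
        return alive.astype(np.uint8)[1:-1, 1:-1]

    def step(grid, n_strips=4):
        padded = np.pad(grid, 1)                  # zero boundary
        rows = np.array_split(np.arange(1, grid.shape[0] + 1), n_strips)
        strips = [step_block(padded[r[0] - 1: r[-1] + 2, :]) for r in rows]
        return np.vstack(strips)

    grid = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(np.uint8)
    grid = step(grid)
    print(grid.sum(), "live cells after one step")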
A cloud computing based framework for general 2D and 3D cellular automata simulation
S096599781300094X
A new hybrid-mixed stress finite element model for the static and dynamic non-linear analysis of concrete structures is presented and discussed in this paper. The main feature of this model is the simultaneous and independent approximation of the stress, the strain and the displacement fields in the domain of each element. The displacements along the static boundary, which is considered to include inter-element boundaries, are also directly approximated. To define the approximation bases in the space domain, complete sets of orthonormal Legendre polynomials are used. The adoption of these functions enables the use of analytical closed form solutions for the computation of all linear structural operators and leads to the development of very effective p-refinement procedures. To represent the material quasi-brittle behaviour, a physically non-linear model is considered by using damage mechanics. A simple isotropic damage model is adopted and to control strain localisation problems a non-local integral formulation is considered. To solve the dynamic non-linear governing system, a time integration procedure based on the use of the α-HHT method is used. For each time step, the solution of the non-linear governing system is achieved using an iterative algorithm based on a secant method. The model being discussed is applied to the solution of two-dimensional structures. To validate the model, to illustrate its potential and to assess its accuracy and numerical efficiency, several numerical examples are discussed and comparisons are made with solutions provided by experimental tests and with other numerical results obtained using conventional finite elements (CFE).
Static and dynamic physically non-linear analysis of concrete structures using a hybrid mixed finite element model
S0965997813000951
The use of hi-tech thermoplastic matrices (e.g. PEKK, PEEK or PPS) in carbon fiber-reinforced composites is growing, mainly in the aircraft industry. The manufacturing process is carried out at higher temperatures, which leads to residual stresses and dimensional changes of the manufactured part. Our work is focused on making an analytical prediction of the springback angle of C/PPS laminate with textile reinforcement. Better prediction will make the manufacturing tool more precise. Our analytical model took into account the temperature change, the moisture change and resin shrinkage during the cure cycle (which is crucial for semi-crystalline matrices). The analytical model is based on classical lamination theory (CLT) and an equation for through thickness characteristics. The description of the model was written in Matlab code, which was subsequently transformed into Java with GUI for easier input of the characteristics of the composite (lay-up, materials, angle of the layer, radii, volumetric fiber fraction and characteristics of the textile reinforcement-number of threads in warp and weft direction, thickness, type of weave, etc.) The results from our program were compared with the results measured by the manufacturer, and good agreement was achieved.
Java application for springback analysis of composite plates
S0965997813000975
The paper briefly summarizes the theoretical derivation of the objective stress rates that are work-conjugate to various finite strain tensors, and then briefly reviews several practical examples demonstrating the large errors that can be caused by energy-inconsistent stress rates. It is concluded that software makers should switch to the Truesdell objective stress rate, which is work-conjugate to Green’s Lagrangian finite strain tensor. The Jaumann rate of Cauchy stress and the Green–Naghdi rate, currently used in most software, should be abandoned since they are not work-conjugate to any finite strain tensor. The Jaumann rate of Kirchhoff stress is work-conjugate to the Hencky logarithmic strain tensor but, because of an energy inconsistency in the work of initial stresses, can lead to severe errors in the cases of high natural orthotropy or strain-induced incremental orthotropy due to material damage. If the commercial software packages are not revised, the user can still apply, in the user’s implicit or explicit material subroutines (such as UMAT and VUMAT in ABAQUS), a simple transformation of the incremental constitutive relation to the Truesdell rate, and the commercial software then delivers energy-consistent results.
Review of energy conservation errors in finite element softwares caused by using energy-inconsistent objective stress rates
S0965997813000999
Emergency evacuation under fire conditions in a mass transit station is a great concern, especially in developing countries. The interaction between fire and humans is very important in the analysis of emergency evacuation under fire conditions. An integrated fire–human model, FDS+Evac, is widely used to solve numerically the simultaneous fire and evacuation processes. However, as the number of simulation runs increases, the simulation time and cost increase dramatically. The use of the discrete design method (DDM) to reduce the simulation time and cost in fire emergency evacuation simulations is proposed. The method is applied to an underground subway station to study the influence of different factors on fire emergency evacuation. The grid resolution is analyzed to determine an appropriate grid size that balances solution accuracy and time. Different fire locations, heat release rates, occupant loadings, ventilation conditions and material properties are considered under fire conditions in the underground subway station. The results show that the heat release rate has a weak influence on fire emergency evacuation, but the fire location, occupant loading, ventilation condition and material properties have a great influence. Furthermore, the five parameters have a coupled effect on fire emergency evacuation.
Fire emergency evacuation simulation based on integrated fire–evacuation model with discrete design method
S0965997813001063
User profiles play an important role in information retrieval systems. In this paper, we propose a novel method for the acquisition of ontology-based user profiles. In this method, ontology-based user profiles can maintain representations of personal interests, and user ontologies can be constructed automatically. The method makes user profiles strongly expressive while requiring little manual intervention.
A method for the acquisition of ontology-based user profiles
S0965997813001075
A practical approach for the thermal modeling of complex thermal systems, called the component interaction network (CIN) is presented. Its stages are explained: description of the thermal system as a set of non-overlapping components and their interactions by heat and mass exchanges, modeling of components with different levels of precision using finite volumes and finite elements, modeling of interactions by conduction, convection, radiation and advection, time resolution scheme and simulation. Non-conventional notions of conditional existence of components or time events are introduced. The approach is illustrated with a simple example of an electric furnace. It is then applied to a rapid thermal processing (RTP) furnace and validated experimentally. The advantages of the CIN approach are demonstrated.
The component interaction network approach for modeling of complex thermal systems
S0965997813001087
An urban scale Eulerian non-reactive multilayer air pollution model is proposed describing convection, turbulent diffusion and emission. A mass-consistent wind field model developed by authors is included in the air pollution model. An Adaptive Finite Element Method with characteristics in the horizontal directions and Finite Differences in the vertical direction using splitting techniques is proposed to numerically solve the corresponding PDE problem. A parallel version of the algorithm improves the precision of the solution keeping computation time below real time of simulation. A numerical example illustrates the whole problem.
An efficient algorithm for solving a multi-layer convection–diffusion problem applied to air pollution problems
S0965997813001099
The objective of the paper is to present methods and software for the efficient statistical, sensitivity and reliability assessment of engineering problems. Attention is given to small-sample techniques which have been developed for the analysis of computationally intensive problems. The paper shows the possibility of “randomizing” computationally intensive problems in the manner of the Monte Carlo type of simulation. In order to keep the number of required simulations at an acceptable level, Latin Hypercube Sampling is utilized. The technique is used for both random variables and random fields. Sensitivity analysis is based on non-parametric rank-order correlation coefficients. Statistical correlation is imposed by the stochastic optimization technique – simulated annealing. A hierarchical sampling approach has been developed for the extension of the sample size in Latin Hypercube Sampling, enabling the addition of simulations to a current sample set while maintaining the desired correlation structure. The paper continues with a brief description of the user-friendly implementation of the theory within FReET commercial multipurpose reliability software. FReET-D software is capable of performing degradation modeling, in which a large number of reinforced concrete degradation models can be utilized under the main FReET software engine. Some of the interesting applications of the software are referenced in the paper.
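As a minimal illustration of the small-sample idea, the sketch below draws a plain Latin Hypercube Sample for a few assumed marginal distributions; the correlation control by simulated annealing and the hierarchical sample extension described in the abstract are omitted.

    # Minimal Latin Hypercube Sampling sketch for independent random variables.
    import numpy as np
    from scipy import stats

    def lhs(n_sim, marginals, rng=None):
        """One stratified sample per probability stratum, per variable."""
        rng = np.random.default_rng(rng)
        # Midpoints of n_sim equiprobable strata, independently shuffled per variable.
        probs = (np.arange(n_sim) + 0.5) / n_sim
        sample = np.empty((n_sim, len(marginals)))
        for j, dist in enumerate(marginals):
            sample[:, j] = dist.ppf(rng.permutation(probs))
        return sample

    # Example: concrete strength ~ N(30, 4) MPa and a lognormal load (assumed marginals).
    X = lhs(100, [stats.norm(30, 4), stats.lognorm(s=0.2, scale=50)], rng=0)
    print(X.mean(axis=0))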
FReET: Software for the statistical and reliability analysis of engineering problems and FReET-D: Degradation module
S0965997813001105
It is well known that the variations of the element size have to be controlled in order to generate a high-quality mesh. However, limiting the gradient of the size function is not enough to generate a mesh that correctly preserves the prescribed element size. To address this issue, in this work we define a criterion to assess when an element reproduces a size field. Then, using this criterion, we develop a novel technique to modify the initial size function by solving a non-linear equation. The new size function ensures that the elements will preserve the original size function. Moreover, an approximated method is developed to improve the computational cost of solving the non-linear equation. We use these techniques in two applications. First, we show that we can reduce the number of iterations needed for an adaptive process to converge. Second, we show that quadrilateral and hexahedral meshing algorithms benefit from the new size function, since it is no longer necessary to perform a refinement process to capture the initial size function.
Preserving isotropic element size functions in adaptivity, quadrilateral and hexahedral mesh generation
S0965997813001154
Three-dimensional transition elements are proposed to achieve efficient and accurate connections of nonmatching meshes with different resolutions. These elements, termed variable-node elements, allow additional nodes on the element faces of conventional hexahedral elements, as well as on element edges. By taking proper polynomial bases and their absolute values corresponding to the additional nodes, compatible trilinear shape functions are systematically derived in the master domains of the elements. When one hexahedral element meets many other hexahedral elements at its faces or edges, the variable-node elements enable one-to-many connection of the dissimilar hexahedral elements in a seamless way. The effectiveness of the proposed scheme is demonstrated through numerical examples of local mesh refinement and subdivision modeling involving nonmatching mesh problems.
An efficient scheme for coupling dissimilar hexahedral meshes with the aid of variable-node transition elements
S0965997813001178
Most of the existing methods for dam behavior modeling require a persistent set of input parameters. In real-world applications, failures of the measuring equipment can lead to a situation in which a selected model becomes unusable because of the volatility of the independent variables set. This paper presents an adaptive system for dam behavior modeling that is based on a multiple linear regression (MLR) model and is optimized for given conditions using genetic algorithms (GA). Throughout an evolutionary process, the system performs real-time adjustment of regressors in the MLR model according to currently active sensors. The performance of the proposed system has been evaluated in a case study of modeling the Bocac dam (at the Vrbas River located in the Republic of Srpska), whereby an MLR model of the dam displacements has been optimized for periods when the sensors were malfunctioning. Results of the analysis have shown that, under real-world circumstances, the proposed methodology outperforms traditional regression approaches.
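A minimal sketch of the core idea, under synthetic data and an assumed selection criterion: a binary GA-style search chooses which of the currently active predictor sensors enter the multiple linear regression, which is then refit by least squares. Variable names and settings are illustrative, not the paper's.

    # Refit an MLR of dam displacement on whichever predictor sensors are active,
    # with a GA-style binary search over regressor subsets (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n, names = 300, ["temperature", "water_level", "season_sin", "season_cos", "age"]
    X_full = rng.normal(size=(n, len(names)))
    y = X_full @ np.array([1.2, 3.0, 0.5, -0.4, 0.1]) + rng.normal(scale=0.3, size=n)

    def fit_error(mask, active):
        cols = [i for i in range(len(names)) if mask[i] and active[i]]
        if not cols:
            return np.inf
        X = np.column_stack([np.ones(n), X_full[:, cols]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        return n * np.log(rss / n) + 2 * len(cols)      # AIC-like criterion

    def ga_select(active, pop_size=30, generations=40):
        pop = rng.integers(0, 2, (pop_size, len(names)))
        for _ in range(generations):
            scores = np.array([fit_error(ind, active) for ind in pop])
            parents = pop[np.argsort(scores)][: pop_size // 2]
            children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
            flips = rng.random(children.shape) < 0.1    # bit-flip mutation
            children[flips] = 1 - children[flips]
            pop = np.vstack([parents, children])
        scores = np.array([fit_error(ind, active) for ind in pop])
        return pop[int(np.argmin(scores))]

    active = [True, True, False, True, True]            # e.g. one sensor has failed
    print(ga_select(active))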
Adaptive system for dam behavior modeling based on linear regression and genetic algorithms
S0965997813001191
This paper presents an efficient and stable as-rigid-as-possible mesh deformation algorithm for planar shape deformation and hexahedral mesh generation. The deformation algorithm aims to preserve two local geometric properties: scale-invariant intrinsic variables and elastic deformation energy, which are together represented in a quadric energy function. To preserve these properties, the position of each vertex is further adjusted by iteratively minimizing this quadric energy function to meet the position constraints of the controlling points. Experimental results show that the deformation algorithm is efficient and can obtain physically plausible results which have the same topological structure as the original mesh. Such a mesh deformation method is useful for projecting the source surface mesh onto the target surfaces in sweep-based hexahedral mesh generation, and application results show that the proposed method is applicable to mesh projection not only between similar surface contours but also between dissimilar surface contours.
As-rigid-as-possible mesh deformation and its application in hexahedral mesh generation
S0965997813001208
In most numerical analyses using the Finite Element Method, several quantities, such as stresses, strains, fluid velocities and gradients, are computed at points in the interior of the solid elements, such as Gauss integration points for instance. Nevertheless, in many applications it is necessary to extrapolate these values to nodal points. That is the case with most visualization tools and post-processors, also in programs with auto-adaptive meshes, large deformations schemes such as Arbitrary Lagrangian–Eulerian Methods, and in programs using the Dynamic Programming Method. A generic methodology to perform this extrapolation in a precise and efficient way is proposed.
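A common local extrapolation scheme (not necessarily the one proposed in the paper) evaluates the element shape functions at the Gauss points and inverts that matrix to obtain nodal values; the sketch below shows this for a bilinear quadrilateral with 2×2 Gauss integration and made-up stress values.

    # Local Gauss-point-to-node extrapolation for a bilinear quadrilateral (Q4).
    import numpy as np

    def q4_shape(xi, eta):
        return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

    g = 1.0 / np.sqrt(3.0)
    gauss_pts = [(-g, -g), (g, -g), (g, g), (-g, g)]
    N = np.array([q4_shape(xi, eta) for xi, eta in gauss_pts])   # 4x4 shape-function matrix

    sigma_gauss = np.array([10.0, 12.0, 15.0, 11.0])             # e.g. a stress component
    sigma_nodal = np.linalg.solve(N, sigma_gauss)                # extrapolated nodal values
    print(sigma_nodal)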
A local extrapolation method for finite elements
S096599781300121X
Ultra large scale systems are a new generation of distributed software systems that are composed of various changing, inconsistent or even conflicting components distributed over a wide domain. Important characteristics of these systems include their very large size, global geographical distribution, and the operational and managerial independence of their member systems. The main function of these systems arises from the interoperability between their components. Nowadays one of the most important challenges facing ultra large scale systems is the interoperability of their component systems. Interoperability is the ability of system elements to exchange and mutually understand the required information. This paper addresses this challenge in two main parts. In the first part, a maturity model for the interoperability of ultra large scale systems is presented; using the interoperability levels of the component systems of an ultra large scale system, its maturity level can be determined. In the second part, a framework is proposed to increase the interoperability of the component systems in ultra large scale systems based on the interoperability maturity levels determined in the first part. Consequently, their interoperability is improved.
An interoperability model for ultra large scale systems
S0965997813001221
Overtaking is a complex driving behavior for intelligent vehicles. Current research on modeling overtaking behavior pays little attention to the effect of the environment. This paper focuses on the modeling and simulation of overtaking behavior in a virtual reality traffic simulation system involving environment information, such as road geometry and wind. First, an intelligent vehicle model is proposed to better understand environment information and the traffic situation. Then, the overtaking behavior model is introduced in detail, the lane changing feasibility is analyzed, and fuzzy vehicle controllers considering the road and wind effects are designed. A virtual reality traffic simulation system is designed to realize the simulation of overtaking behavior with realistic road geometry features. Finally, simulation results show the correctness and effectiveness of our approach.
Modeling and simulation of overtaking behavior involving environment
S0965997813001518
Simulating fast transient phenomena involving fluids and structures in interaction for safety purposes requires both accurate and robust algorithms, and parallel computing to reduce the calculation time for industrial models. Managing kinematic constraints linking fluid and structural entities is thus a key issue, and this contribution promotes a dual approach over the classical penalty approach, which introduces arbitrary coefficients into the solution. This choice however severely increases the complexity of the problem, mainly due to non-permanent kinematic constraints. An innovative parallel strategy is therefore described, whose performances are demonstrated on significant examples exhibiting the full complexity of the target industrial simulations. Nomenclature: local densities of the structures and the fluid; structural displacement, Cauchy stress tensor and Almansi–Euler strain tensor; fluid velocity, pressure and total energy; structural and fluid body forces.
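A tiny static example, unrelated to the paper's solver but illustrating the contrast it draws: enforcing a kinematic tie u1 = u2 between two spring-mounted degrees of freedom with a penalty term (whose arbitrary stiffness pollutes the solution) versus the dual, Lagrange multiplier formulation, which enforces the constraint exactly at the price of an extra unknown. The stiffness and load values are assumptions.

    # Penalty versus dual (Lagrange multiplier) enforcement of the tie u1 = u2.
    import numpy as np

    K = np.diag([1000.0, 2000.0])          # two independent springs (assumed stiffnesses)
    f = np.array([10.0, 0.0])              # load on the first DOF only
    C = np.array([[1.0, -1.0]])            # constraint: u1 - u2 = 0

    # Penalty approach: the result depends on the (arbitrary) penalty coefficient.
    for penalty in (1e4, 1e8):
        u = np.linalg.solve(K + penalty * C.T @ C, f)
        print("penalty", penalty, "->", u, "gap:", (C @ u).item())

    # Dual approach: augment the system with the multiplier (the tie force).
    A = np.block([[K, C.T], [C, np.zeros((1, 1))]])
    b = np.concatenate([f, [0.0]])
    sol = np.linalg.solve(A, b)
    print("dual ->", sol[:2], "multiplier (tie force):", sol[2])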
Advanced parallel strategy for strongly coupled fast transient fluid-structure dynamics with dual management of kinematic constraints
S096599781300152X
In the simulation of a chain of manufacturing processes, several finite element packages can be employed and for each process or package a different mesh density or element type may be the most suitable. Therefore, there is a need for transferring finite element analysis (FEA) data among packages and mapping it between meshes. This paper presents efficient algorithms for mapping FEA data between meshes with different densities and element types. An in-core spatial index is created on the mesh from which FEA data is transferred. The index is represented by a dynamic grid partitioning the underlying space from which nodes and elements are drawn into equal-sized cells. Buckets containing references to the nodes indexed are associated with the cells in a many-to-one correspondence. Such an index makes nearest neighbour searches of nodes and elements much faster than sequential scans. An experimental evaluation of the mapping techniques using the index is conducted. The algorithms have been implemented in the open source finite element data exchange system FEDES.
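A minimal sketch of the kind of in-core uniform-grid index the abstract describes, used here for nearest-node queries; the cell size, the expanding-shell search and the data are simplified assumptions rather than the FEDES implementation.

    # Uniform-grid spatial index for nearest-node queries on a point cloud of mesh nodes.
    import math
    from collections import defaultdict

    class GridIndex:
        def __init__(self, nodes, cell_size):
            self.cell_size = cell_size
            self.nodes = nodes
            self.buckets = defaultdict(list)      # cell -> node ids
            for i, p in enumerate(nodes):
                self.buckets[self._cell(p)].append(i)

        def _cell(self, p):
            return tuple(int(math.floor(c / self.cell_size)) for c in p)

        def nearest(self, p):
            cx, cy, cz = self._cell(p)
            best, best_d = None, float("inf")
            ring = 0
            # Grow the search shell; a node in shell r is at least (r-1)*cell_size away,
            # so we can stop once that bound exceeds the best distance found.
            while best is None or (ring - 1) * self.cell_size < best_d:
                for dx in range(-ring, ring + 1):
                    for dy in range(-ring, ring + 1):
                        for dz in range(-ring, ring + 1):
                            if max(abs(dx), abs(dy), abs(dz)) != ring:
                                continue          # visit only the new shell
                            for i in self.buckets.get((cx + dx, cy + dy, cz + dz), []):
                                d = math.dist(p, self.nodes[i])
                                if d < best_d:
                                    best, best_d = i, d
                ring += 1
            return best, best_d

    index = GridIndex([(0, 0, 0), (1.0, 0.2, 0.0), (0.4, 0.9, 0.3)], cell_size=0.5)
    print(index.nearest((0.9, 0.1, 0.1)))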
Fast mapping of finite element field variables between meshes with different densities and element types
S0965997813001531
The vibration response of an initially pre-stressed anchor cable made of parallel-lay aramid fibres excited by a measured and artificially simulated spatial turbulent wind field is presented in the paper. Results of the analyses of in situ measured wind records are described. For selected data set statistical characteristics and power spectral density functions of the measured wind velocity components are calculated. The wind stochastic velocity fluctuation is modelled as a one-variate bi-dimensional random field. Cross-power spectral density functions, at different point locations are introduced. The combination of the weighted amplitude wave superposition method (WAWS) with the Shinozuka–Deodatis method is used for the analyzed problem. A time-dependent behaviour of the synthetic cable is investigated which is subjected to turbulent wind with large expected oscillations that arise as a result of slackening due to the relaxation effects. A nonlinear transient dynamic analysis is used in conjunction with the finite element method to determine the dynamic response of the cable subjected to turbulent wind at its initially prestressed state and in the selected times after the relaxation effect. The constitutive equation of the relaxation of the aramid cable follows an experimentally obtained law of the logarithmic type. To monitor the dependences of the individual quantities of cable vibration in the phase space, attractors and Poincaré maps are created by sampling the cable’s displacement and velocity at periods of relevant frequencies. Interesting findings based on the response of the cable with rheological properties to turbulent wind are presented.
Vibrations of an aramid anchor cable subjected to turbulent wind
S0965997813001555
Architectural synthesis has gained rapid dominance in the design flows of application specific computing. Exploring an optimal design point during architectural synthesis is a tedious task owing to the orthogonal issues of reducing exploration time and enhancing design quality as well as resolving the conflicting parameters of power and performance. This paper presents a novel design space exploration (DSE) methodology multi-objective particle swarm exploration MO-PSE, based on the particle swarm optimization (PSO) for designing application specific processor (ASP). To the best of the authors’ knowledge, this is the first work that directly maps a complete PSO process for multi-objective DSE for power-performance trade-off of application specific processors. Therefore, the major contributions of the paper are: (i) Novel DSE methodology employing a particle swarm optimization process for multi-objective tradeoff, (ii) Introduction of a novel model for power parameter used during evaluation of design points in MO-PSE, (iii) A novel fitness function used for design quality assessment, (iv) A novel mutation algorithm for improving DSE convergence and exploration time, (v) Novel perturbation algorithm to handle boundary outreach problem during exploration and (vi) Results of comparison performed during multiple experiments that indicates average improvement in the quality of results (QoR) achieved is around 9% and average reduction in exploration time of greater than 90% compared to recent genetic algorithm (GA) based DSE approaches. The paper also reports results based on the variation and impact of different PSO parameters such as swarm size, inertia weight, acceleration coefficient, and termination condition on multi-objective DSE. Nomenclature: abbreviations (MO-PSE, DSE, HLS, PSO, SoC); design space bounds; particle positions (resource configurations), velocities, local and global bests and the corresponding fitness values; numbers of iterations, particles and dimensions; power consumption and execution time of a resource configuration together with their minimum and maximum values and the user-specified constraints; stopping criterion.
MO-PSE: Adaptive multi-objective particle swarm optimization based design space exploration in architectural synthesis for application specific processor design
S0965997813001567
This paper presents a software framework, PARIS (PARameter Identification System), developed for automated finite element model updating for structural health monitoring. With advances in Application Programming Interfaces (API) for modern computing, the traditional boundaries between different standalone software packages hardly exist. Now complex problems can be distributed between different software platforms with advanced and specialized capabilities. PARIS takes advantage of the advancements in the computing environment and interfacing capabilities provided by commercial software to systematically distribute the structural parameter estimation problem into an iterative optimization and finite element analysis problem across different computing platforms. Three validation examples using simulated nondestructive test data for updating full-scale structural models under typically encountered damage scenarios are included. The results of model updating process for realistic structural models and their systematic treatment provide enhanced understanding of the aforementioned parameter estimation process and an encouraging path towards its feasible field application for structural health monitoring and structural condition assessment.
Automated finite element model updating of full-scale structures with PARameter Identification System (PARIS)
S0965997813001579
Maintenance management plays an important role in the monitoring of business activities. It ensures a certain level of service in industrial systems by improving the ability to function in accordance with prescribed procedures. This has a decisive impact on the performance of these systems in terms of operational efficiency, reliability and associated intervention costs. To support the maintenance processes of a wide range of industrial services, a knowledge-based component is useful for performing intelligent monitoring. In this context we propose a generic model for supporting and generating industrial lighting maintenance processes. The modeled intelligent approach involves information structuring and knowledge sharing in the industrial setting and the implementation of specialized maintenance management software in the target information system. As a first step we defined computerized procedures from the conceptual structure of industrial data to ensure their interoperability and the effective use of information and communication technologies in the software dedicated to maintenance management (E-candela). The second step is the implementation of this software architecture with the specification of business rules, especially by organizing taxonomical information of the lighting systems, and applying intelligence-based operations and analysis to capitalize knowledge from maintenance experiences. Finally, the third step is the deployment of the software with contextual adaptation of the user interface to allow the management of operations, the edition of balance sheets and real-time location obtained through geolocation data. In practice, these computational intelligence-based modes of reasoning involve an engineering framework that facilitates the continuous improvement of a comprehensive maintenance regime.
Software architecture knowledge for intelligent light maintenance
S0965997813001580
The management of concrete quality is an important task of the concrete industry. This paper investigates the structured and unstructured factors that affect concrete quality. Compressive strength is one of the most essential properties of concrete, and conventional regression models to predict concrete strength cannot achieve the expected results owing to the unstructured factors. For this reason, two hybrid models are proposed in this paper: one is a genetic based algorithm and the other is an adaptive network-based fuzzy inference system (ANFIS). For the genetic based algorithm, a genetic algorithm (GA) is applied to optimize the weights and thresholds of a back-propagation artificial neural network (BP-ANN). For the ANFIS model, two building methods are explored. By adopting these predicting methods, considerable cost and time-consuming laboratory tests can be saved. The results show that both hybrid models perform well in terms of accuracy and applicability in practical production, giving them high potential to substitute for conventional regression models in real engineering practice.
Prediction of concrete compressive strength: Research on hybrid models genetic based algorithms and ANFIS
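The sketch below illustrates the first hybrid idea under stated assumptions: a genetic algorithm searching the weights and biases of a small feedforward network on synthetic mix-proportion data. The data, network size and GA operators are illustrative placeholders, not the paper's configuration (where the GA seeds a BP-ANN that is then trained further by back-propagation).

```python
# Minimal sketch (not the paper's implementation): a GA evolving the weights and
# biases of a one-hidden-layer network on synthetic "mix proportion -> strength" data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 4))                             # 4 hypothetical mix-design inputs
y = 20 + 30 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2] * X[:, 3]    # synthetic strength (MPa)

N_HIDDEN = 6
N_W = 4 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1    # input weights + hidden biases + output weights + bias

def predict(w, X):
    W1 = w[:4 * N_HIDDEN].reshape(4, N_HIDDEN)
    b1 = w[4 * N_HIDDEN:5 * N_HIDDEN]
    W2 = w[5 * N_HIDDEN:6 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)   # negative MSE (higher is better)

pop = rng.normal(0, 1, size=(60, N_W)) * 5
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]        # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(N_W) < 0.5                    # uniform crossover
        child = np.where(mask, a, b)
        child = child + rng.normal(0, 0.2, N_W)         # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("GA-trained network MSE:", -fitness(best))
```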
S0965997813001609
A new combination of swarm intelligence and chaos theory is presented for the optimal design of truss structures. Here the tendency to form swarms, which appears in many different organisms, and chaos theory are the sources of inspiration, and the algorithm is called chaotic swarming of particles (CSP). The method is a multi-phase optimization technique that employs chaos theory in two phases: in the first phase it controls the parameter values of the particle swarm optimization (CPVPSO), and the second phase is utilized for local search (CLSPSO). Several truss structures are optimized using the CSP algorithm, and the results are compared to those of other meta-heuristic algorithms, showing the effectiveness of the new method.
Chaotic swarming of particles: A new method for size optimization of truss structures
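A minimal sketch of the two chaotic phases follows, assuming a simple test function in place of a penalized truss weight: a logistic map drives the PSO inertia weight (the CPVPSO phase) and a shrinking chaotic walk around the global best plays the role of the local search (the CLSPSO phase). Parameter values are illustrative, not the authors' settings.

```python
# Minimal sketch (not the authors' CSP code): PSO whose inertia weight is driven
# by a logistic chaotic map, plus a chaotic local search around the global best.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                      # placeholder for truss weight plus constraint penalties
    return np.sum(x ** 2)

DIM, SWARM, ITER = 10, 30, 200
pos = rng.uniform(-5, 5, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
gbest_val = pbest_val.min()

z = 0.7                                # chaotic variable of the logistic map
for it in range(ITER):
    z = 4.0 * z * (1.0 - z)            # logistic map drives the inertia weight
    w = 0.4 + 0.5 * z
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < gbest_val:
        gbest, gbest_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()

    # chaotic local search: perturb the global best within a shrinking chaotic radius
    zc, radius = z, 0.5 * (1 - it / ITER)
    for _ in range(5):
        zc = 4.0 * zc * (1.0 - zc)
        cand = gbest + radius * (2 * zc - 1)
        if objective(cand) < gbest_val:
            gbest, gbest_val = cand, objective(cand)

print("best value found:", gbest_val)
```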
S0965997813001610
In this paper, an automatic grid generator based on STL models is proposed. The staircase boundary treatment is implemented to handle irregular geometries, and the computational domain is discretized using a regular Cartesian grid. Using the grid generator, staircase grids suitable for fast and accurate finite difference analysis can be generated. Employing the slicing algorithm from RP technologies [1], the STL models are sliced with a set of parallel planes to generate 2D slices, after the STL files obtained from a CAD system undergo topology reconstruction. To decrease the staircase error (i.e., increase accuracy) and enhance efficiency, the cross-section at the middle of each layer is taken to represent the cross-section of the whole layer. The scan-line filling technique of computer graphics [2] is used to achieve grid generation after slicing. Finally, we demonstrate an application of the introduced method to generate staircase grids that allow successful FDM simulation in the field of explosion. The example shows that the automatic grid generator based on STL models is fast and gives simulation results in agreement with practical observations.
A grid generator for 3-D explosion simulations using the staircase boundary approach in Cartesian coordinates based on STL models
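The core of the grid generation step, the scan-line filling of one slice, can be sketched as follows, assuming the closed slice contour has already been extracted from the STL model; cells whose mid-layer centres fall inside the contour are flagged as solid, producing the staircase representation.

```python
# Minimal sketch (assumed simplification): mark Cartesian cells of one slice as solid
# using the even-odd scan-line rule against a closed 2-D slice contour. Contour
# extraction from the STL (plane/triangle intersection) is assumed already done.
import numpy as np

contour = [(1.0, 1.0), (9.0, 1.0), (9.0, 6.0), (5.0, 9.0), (1.0, 6.0)]  # closed polygon
nx, ny, dx, dy = 20, 20, 0.5, 0.5

def x_crossings(y, poly):
    """x coordinates where the horizontal scan line at height y crosses the polygon."""
    xs = []
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 <= y < y1) or (y1 <= y < y0):               # edge straddles the scan line
            xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
    return sorted(xs)

grid = np.zeros((ny, nx), dtype=np.uint8)
for j in range(ny):
    yc = (j + 0.5) * dy                                    # cell-centre height (mid-layer rule)
    xs = x_crossings(yc, contour)
    for x_in, x_out in zip(xs[0::2], xs[1::2]):            # even-odd rule: inside between pairs
        i0 = int(np.ceil(x_in / dx - 0.5))
        i1 = int(np.floor(x_out / dx - 0.5))
        grid[j, max(i0, 0):min(i1, nx - 1) + 1] = 1        # flag solid (staircase) cells

print(grid[::-1])                                          # crude visual check, y increasing upward
```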
S0965997813001622
The main objective of this paper is to propose an optimization strategy that uses partially converged data to minimize the computational effort associated with an optimization procedure. The framework of this work is the optimization of assemblies involving contact and friction. Several tools have been developed in order to use a surrogate model as an alternative to the actual mechanical model; the global optimization can then be carried out using this much less expensive surrogate model. This approach has two drawbacks: the CPU time required to generate the surrogate model and the inaccuracy of this model. To alleviate these drawbacks, we propose to minimize the CPU time by using partially converged data and then to apply a correction strategy. Two methods are tested in this paper. The first consists of updating a partially converged metamodel using global enrichment. The second consists of seeking the global minimum using the weighted expected improvement. A time saving of about 10 can be achieved when seeking the global minimum.
The use of partially converged simulations in building surrogate models
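The second correction method can be illustrated with a small sketch: a kriging-type surrogate fitted to a few cheap, partially converged evaluations and then sampled through the weighted expected improvement criterion. The solver stand-in, kernel and weight value are assumptions; only the weighted EI formula itself follows the standard definition.

```python
# Minimal sketch (assumptions noted): a kriging surrogate fitted to a handful of cheap,
# partially converged evaluations, then sampled through weighted expected improvement
# to pick the next design point for the full solver.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cheap_solver(x):
    """Stand-in for a partially converged contact/friction analysis (noisy, inexpensive)."""
    return (x - 0.3) ** 2 * np.sin(8 * x) + 0.05 * np.random.randn()

X = np.linspace(0, 1, 6).reshape(-1, 1)
y = np.array([cheap_solver(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(X, y)

def weighted_ei(x, w=0.3):
    """Weighted EI: w weights exploitation, (1 - w) weights exploration."""
    mu, sigma = gp.predict(x.reshape(-1, 1), return_std=True)
    f_min = y.min()
    z = (f_min - mu) / np.maximum(sigma, 1e-12)
    return w * (f_min - mu) * norm.cdf(z) + (1 - w) * sigma * norm.pdf(z)

grid = np.linspace(0, 1, 201)
scores = np.array([weighted_ei(np.atleast_1d(xi)) for xi in grid]).ravel()
print("next point to evaluate with the full solver:", grid[np.argmax(scores)])
```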
S0965997813001646
A novel self-adaptive geometric primitive for functional geometric shape synthesis is presented. This geometric primitive, intended for CAD use, is specifically designed to reproduce geometric shapes with functional requirements, such as aerodynamic and hydrodynamic ones, once the functional parameters are provided. It produces a typical CAD representation of a functional profile: a set of Bézier curves. The proposed primitive follows a generate-and-test approach and takes advantage of a properly designed artificial neural network (BNN). It combines the properties of a geometric primitive with the capability to manage engineering knowledge in a specific field of application. The proposed evolutionary primitive is applied to a real engineering application: the automatic synthesis of airfoils. Several examples are simulated in order to test the effectiveness of the proposed method. The results obtained by an original prototype software are presented and critically discussed.
An evolutionary geometric primitive for automatic design synthesis of functional shapes: The case of airfoils
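A greatly simplified generate-and-test loop over a Bézier profile, shown below, conveys the "functional parameter in, CAD curve out" idea; the target thickness, control-point layout and random perturbations are hypothetical, and the paper's neural network component is omitted.

```python
# Minimal sketch (hypothetical, not the paper's BNN-based primitive): a cubic Bezier
# upper-surface profile evolved by naive generate-and-test toward a target maximum
# thickness, illustrating shape synthesis from a functional parameter.
import numpy as np

rng = np.random.default_rng(2)

def bezier(ctrl, n=200):
    """Evaluate a cubic Bezier curve from 4 control points (n sample points)."""
    t = np.linspace(0, 1, n)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

target_thickness = 0.12                      # functional requirement (e.g. t/c of an airfoil)
ctrl = np.array([[0.0, 0.0], [0.3, 0.15], [0.7, 0.10], [1.0, 0.0]])

best, best_err = ctrl, abs(bezier(ctrl)[:, 1].max() - target_thickness)
for _ in range(500):                         # generate-and-test loop
    cand = best.copy()
    cand[1:3, 1] += rng.normal(0, 0.01, 2)   # perturb only the free y-coordinates
    err = abs(bezier(cand)[:, 1].max() - target_thickness)
    if err < best_err:
        best, best_err = cand, err

print("max thickness of evolved profile:", bezier(best)[:, 1].max())
```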
S0965997813001658
The bat inspired (BI) algorithm is a recently developed metaheuristic optimization technique inspired by the echolocation behavior of bats. In this study, the BI algorithm is examined in the context of discrete size optimization of steel frames designed for minimum weight. In the optimum design problem, frame members are selected from an available set of steel sections to produce practically acceptable designs subject to the strength and displacement provisions of the American Institute of Steel Construction-Allowable Stress Design (AISC-ASD) specification. The performance of the technique is quantified using three real-size large steel frames under actual load and design considerations. The results provide sufficient evidence of the successful performance of the BI algorithm in comparison to other metaheuristics employed in structural optimization.
Bat inspired algorithm for discrete size optimization of steel frames
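A compact sketch of a bat-inspired search over discrete section indices is given below. The section list, member lengths and the penalty term are assumptions standing in for real AISC-ASD strength and displacement checks; only the algorithmic skeleton is shown.

```python
# Minimal sketch (hypothetical data, not AISC-ASD checks): a bat-inspired search over
# discrete section indices, minimising a penalised-weight placeholder objective.
import numpy as np

rng = np.random.default_rng(3)
areas = np.array([6.2, 8.5, 11.8, 14.4, 19.1, 24.7, 31.0, 39.7])  # assumed section list
lengths = np.array([3.0, 3.0, 4.0, 4.0, 5.0])                     # assumed member lengths (m)
N_MEMBERS, N_BATS, ITER = 5, 20, 300

def weight(idx):
    """Penalised weight: a real code would add strength/drift constraint penalties here."""
    w = np.sum(areas[idx] * lengths)
    capacity_penalty = np.sum(np.maximum(0.0, 10.0 - areas[idx])) * 50.0  # toy constraint
    return w + capacity_penalty

pos = rng.integers(0, len(areas), size=(N_BATS, N_MEMBERS)).astype(float)
vel = np.zeros_like(pos)
best = pos[np.argmin([weight(p.astype(int)) for p in pos])].copy()

for it in range(ITER):
    freq = rng.random(N_BATS)[:, None]                   # random pulse frequency per bat
    vel += freq * (pos - best)                           # velocity update toward/around the best
    pos = np.clip(pos + vel, 0, len(areas) - 1)
    loudness = 0.9 ** it
    for i in range(N_BATS):
        if rng.random() > 0.5:                           # local random walk around the best
            cand = np.clip(best + loudness * rng.normal(0, 1, N_MEMBERS), 0, len(areas) - 1)
        else:
            cand = pos[i]
        if weight(np.rint(cand).astype(int)) < weight(np.rint(best).astype(int)):
            best = cand.copy()

best_idx = np.rint(best).astype(int)                     # round to discrete section indices
print("selected sections (indices):", best_idx, "weight:", weight(best_idx))
```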
S096599781300166X
Smoke is a leading cause of death in fires. To minimize the potential harm from smoke hazards during a fire, a virtual reality (VR)-based fire training simulator taking full account of the various aspects of smoke hazards has been developed and is described herein. In this simulator, a visualization technique based on volume rendering and fire dynamics data has been specially designed to create a realistic and accurate smoke environment for effective virtual training, allowing trainees to experience a realistic yet non-threatening fire scenario. In addition, an integrated assessment model of smoke hazards is established to assess the safety of different paths for evacuation or rescue in virtual training, allowing trainees to learn to identify the safest path. Two case studies of a subway station and a primary school demonstrated a high level of accuracy and smooth interactive performance of the proposed simulator, which is thus shown to be valuable for training both people who might become trapped in a fire and firefighters learning proper rescue procedures.
A virtual reality based fire training simulator with smoke hazard assessment capacity
S0965997813001671
This paper presents an integrated approach for aerodynamic blade design in an MDO (multidisciplinary design optimization) environment. First, the requisite software packages and data sources for flow computations and airfoil modeling are integrated into a single cybernetic environment, which significantly enhances their interoperability. Subsequently, the aerodynamic blade design is implemented in a quasi-3D way, supported by sophisticated means of project management, task decomposition and allotment, and process definition and coordination. Major tasks of aerodynamic blade design include 1D meanline analysis, streamsurface computations, generation of 2D sections, approximation of 3D airfoils, and 3D flow analysis. After briefly describing all the major design/analysis tasks, this paper addresses the techniques for blade geometric modeling and flow analysis in more detail, with illustrative example applications.
Integrated aerodynamic design and analysis of turbine blades
S0965997813001683
This paper describes ongoing work in the development of a finite element analysis system, called TopFEM, based on the compact topological data structure TopS [1,2]. This new framework was written to take advantage of the topological data structure together with object-oriented programming concepts to handle a variety of finite element problems, spanning from fracture mechanics to topology optimization, in an efficient yet generic fashion. The class organization of the TopFEM system is described and discussed within the context of other frameworks in the literature that share similar ideas, such as GetFEM++, deal.II, FEMOOP and OpenSees. Numerical examples are given to illustrate the capabilities of TopS attached to a finite element framework in the context of fracture mechanics and to establish a benchmark with other implementations that do not make use of a topological data structure.
An object-oriented framework for finite element analysis based on a compact topological data structure
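The kind of class organization described can be sketched, with hypothetical class names rather than the actual TopFEM/TopS API, as a model object that owns nodes and elements and assembles their contributions into a global stiffness matrix:

```python
# Minimal sketch (hypothetical class names, not the TopFEM/TopS API): an object-oriented
# FE skeleton where elements query nodes through a small model/topology object and
# contribute to a global stiffness matrix.
import numpy as np

class Node:
    def __init__(self, nid, x):
        self.id, self.x = nid, x

class BarElement:
    def __init__(self, nodes, EA):
        self.nodes, self.EA = nodes, EA
    def stiffness(self):
        L = abs(self.nodes[1].x - self.nodes[0].x)
        k = self.EA / L
        return np.array([[k, -k], [-k, k]])
    def dof_map(self):
        return [n.id for n in self.nodes]

class Model:
    """Plays the role of the topological data structure: owns nodes, elements and adjacency."""
    def __init__(self):
        self.nodes, self.elements = [], []
    def add_node(self, x):
        self.nodes.append(Node(len(self.nodes), x))
        return self.nodes[-1]
    def add_element(self, n0, n1, EA):
        self.elements.append(BarElement((n0, n1), EA))
    def assemble(self):
        K = np.zeros((len(self.nodes), len(self.nodes)))
        for e in self.elements:
            dofs = e.dof_map()
            K[np.ix_(dofs, dofs)] += e.stiffness()
        return K

# three-node bar fixed at node 0, unit tip load
m = Model()
n = [m.add_node(x) for x in (0.0, 1.0, 2.0)]
m.add_element(n[0], n[1], EA=100.0)
m.add_element(n[1], n[2], EA=100.0)
K = m.assemble()
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], np.array([0.0, 1.0]))   # apply the fixed-end BC
print("nodal displacements:", u)
```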
S0965997813001750
One of the difficulties encountered in the thermal modelling of welding processes is the determination of the input parameters, in particular the thermal boundary conditions. This paper describes a novel method of determining these values using an artificial neural network to solve the Inverse Heat Conduction Problem, with the thermal history as input data. The method has been successfully applied to models that represent the heat transfer to the backing bar with a contact gap conductance heat transfer. Both constant and temperature-dependent values of the contact gap conductance heat transfer coefficient have been used. The ANN was able to identify the contact gap conductance heat transfer coefficient successfully in both cases; however, the error was significantly lower for the constant value. The keys to successful implementation are the ANN topology (e.g. generalized feedforward) and the development of effective methods of abstracting the thermal data.
Hybrid modelling of the contact gap conductance heat transfer in welding process
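The inverse step can be illustrated with a toy lumped-capacitance model in place of the weld thermal model: a feedforward network is trained to map sampled temperature histories back to the constant gap conductance that produced them. The physics, parameter ranges and network size here are assumptions.

```python
# Minimal sketch (toy physics, not the paper's weld model): a feedforward ANN trained to
# recover a constant contact-gap conductance from a sampled thermal history, i.e. a
# data-driven inverse heat conduction step.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def thermal_history(h_gap, n_steps=50, dt=0.1):
    """Lumped-capacitance cooling of a plate losing heat to a backing bar through a gap."""
    T, T_bar, C = 1500.0, 25.0, 400.0
    hist = []
    for _ in range(n_steps):
        T += -h_gap * (T - T_bar) / C * dt
        hist.append(T)
    return np.array(hist)

h_values = rng.uniform(50.0, 500.0, 400)                   # W/m^2K, assumed range
X = np.array([thermal_history(h) for h in h_values])       # inputs: temperature histories
X = X / 1500.0                                             # crude scaling for stable training
y = h_values                                               # target: gap conductance

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
model.fit(X[:300], y[:300])

pred = model.predict(X[300:])
print("mean abs error on held-out conductances:", np.mean(np.abs(pred - y[300:])))
```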
S0965997813001762
The full description of a two-stage speed reducer generally requires a large number of design variables (typically well over ten), resulting in a very large and heavily constrained design space. This paper presents the specific case of the complete automated optimal design, using Genetic Algorithms, of a two-stage helical coaxial speed reducer. The objective function (the mass of the entire speed reducer) is described by a set of 17 mixed design variables (integer, discrete and real) and is subject to 76 highly non-linear constraints. The proposed Genetic Algorithm offers better design solutions than those obtained with the traditional design method (a common trial-and-error approach).
Optimal mass minimization design of a two-stage coaxial helical speed reducer with Genetic Algorithms
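A penalty-based GA over mixed integer, discrete and real genes can be sketched as below; the three variables and two toy constraints are stand-ins for the paper's 17 variables and 76 constraints, and the mass proxy is purely illustrative.

```python
# Minimal sketch (three variables, toy constraints only): a penalty-based GA over mixed
# integer / discrete / real genes, as used for heavily constrained gearbox design.
import numpy as np

rng = np.random.default_rng(5)
MODULES = np.array([1.0, 1.25, 1.5, 2.0, 2.5, 3.0])      # discrete standard modules (mm)

def decode(g):
    z = int(round(np.clip(g[0], 17, 40)))                               # integer: pinion teeth
    m = MODULES[int(round(np.clip(g[1], 0, len(MODULES) - 1)))]         # discrete: module
    b = float(np.clip(g[2], 10.0, 60.0))                                # real: face width (mm)
    return z, m, b

def mass_with_penalty(g):
    z, m, b = decode(g)
    mass = 7.85e-6 * b * (m * z) ** 2                    # crude proxy for gear mass (kg)
    g1 = 2.0 * m * z - 80.0                              # toy constraint: centre-distance limit
    g2 = 15.0 - b                                        # toy constraint: minimum face width
    return mass + 1e3 * (max(g1, 0.0) ** 2 + max(g2, 0.0) ** 2)

pop = np.column_stack([rng.uniform(17, 40, 40), rng.uniform(0, 5, 40), rng.uniform(10, 60, 40)])
for gen in range(150):
    fit = np.array([mass_with_penalty(g) for g in pop])
    parents = pop[np.argsort(fit)[:10]]                  # elitist truncation selection
    children = parents[rng.integers(0, 10, 30)].copy()
    mates = parents[rng.integers(0, 10, 30)]
    alpha = rng.random((30, 3))
    children = alpha * children + (1 - alpha) * mates    # blend crossover
    children += rng.normal(0, [1.0, 0.5, 2.0], (30, 3))  # per-gene mutation scale
    pop = np.vstack([parents, children])

best = pop[np.argmin([mass_with_penalty(g) for g in pop])]
print("best design (z, m, b):", decode(best), "objective:", mass_with_penalty(best))
```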
S0965997813001774
Online applications such as decision support systems depend on data collected from multiple sources. We develop a generic data acquisition and transmission framework by modularizing the repetitive functions. In addition to data acquisition and distribution with the necessary transformations, the framework can act as intermediate storage when the sources cannot connect to the destination directly. We designed three types of data extractors to accommodate data acquisition from the file system, network protocols and databases. After being collected by the extractors, the data is processed by an assembler module to fit the target's data structure. The data is then inserted into the database by a loader module, which receives data from the assembler module. The assembler and loader modules are controlled by a monitor and controller module. These modules are highly configurable and form a three-level hierarchy. Taking advantage of modular design and shared-library techniques, the framework is extensible and flexible.
A generic framework for data acquisition and transmission
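The module roles described above suggest a pipeline of the following shape; the class names and the SQLite destination are hypothetical, chosen only to show how an extractor, an assembler, a loader and a controller can be wired together.

```python
# Minimal sketch (hypothetical class names mirroring the described roles, not the authors'
# code): extractor -> assembler -> loader modules driven by a controller at the top level.
import csv, io, sqlite3
from abc import ABC, abstractmethod

class Extractor(ABC):
    @abstractmethod
    def extract(self):
        """Yield raw records from a file, network protocol or database source."""

class FileExtractor(Extractor):
    def __init__(self, text):                 # stands in for reading from the file system
        self.text = text
    def extract(self):
        yield from csv.DictReader(io.StringIO(self.text))

class Assembler:
    """Reshape raw records to fit the destination's data structure."""
    def assemble(self, record):
        return (record["sensor"], float(record["value"]))

class Loader:
    """Insert assembled rows into the destination database (can act as middle storage)."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL)")
    def load(self, row):
        self.conn.execute("INSERT INTO readings VALUES (?, ?)", row)

class Controller:
    """Top of the three-level hierarchy: drives extract -> assemble -> load."""
    def __init__(self, extractor, assembler, loader):
        self.extractor, self.assembler, self.loader = extractor, assembler, loader
    def run(self):
        for record in self.extractor.extract():
            self.loader.load(self.assembler.assemble(record))

conn = sqlite3.connect(":memory:")
Controller(FileExtractor("sensor,value\nA,1.5\nB,2.0\n"), Assembler(), Loader(conn)).run()
print(conn.execute("SELECT * FROM readings").fetchall())
```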
S0965997813001786
In this paper, a critical chain multi-project scheduling problem is addressed. The problem considers the influence of uncertainty factors and of different objectives on the on-time completion of the whole set of projects. This paper introduces a multi-objective optimization model for multi-project scheduling on the critical chain, which takes multiple objectives into consideration, such as overall duration, financing costs and overall robustness. The proposed model can be used to generate alternative schedules based on the relative magnitude and importance of the different objectives. To respond to this need, a cloud genetic algorithm is proposed. Using the randomness and stability of the Normal Cloud Model, the cloud genetic algorithm is designed to generate priorities for multi-project scheduling activities and to obtain a multi-project schedule on the critical chain. The performance comparison shows that the cloud genetic algorithm significantly outperforms the previous multi-objective algorithm.
Multi-objective optimization model for multi-project scheduling on critical chain
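The basic move of such an algorithm, generating cloud drops from a Normal Cloud Model and using them to perturb activity priorities that are decoded into a precedence-feasible schedule, is sketched below on a toy five-activity network. The objective (total weighted completion time) and all data are assumptions; selection and crossover are omitted.

```python
# Minimal sketch (toy network, not the paper's model): a normal-cloud generator used to
# perturb activity priorities, which are decoded into a precedence-feasible sequence.
import numpy as np

rng = np.random.default_rng(6)
preds = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}, 4: {3}}   # hypothetical precedence relations
durations = {0: 2, 1: 4, 2: 3, 3: 5, 4: 2}
weights = {0: 1, 1: 3, 2: 2, 3: 1, 4: 5}

def cloud_drop(Ex, En, He, size):
    """Normal cloud model: He controls the dispersion of the entropy itself."""
    En_prime = rng.normal(En, He, size)
    return rng.normal(Ex, np.abs(En_prime))

def decode(priorities):
    """Serial schedule: repeatedly start the eligible activity with the highest priority."""
    done, order, t, twc = set(), [], 0, 0.0
    while len(done) < len(preds):
        eligible = [a for a in preds if a not in done and preds[a] <= done]
        nxt = max(eligible, key=lambda a: priorities[a])
        t += durations[nxt]
        twc += weights[nxt] * t          # total weighted completion time varies with order
        done.add(nxt)
        order.append(nxt)
    return order, twc

priorities = rng.random(len(preds))
for _ in range(50):                      # cloud-based mutation loop
    cand = priorities + cloud_drop(Ex=0.0, En=0.1, He=0.02, size=len(preds))
    if decode(cand)[1] <= decode(priorities)[1]:
        priorities = cand
print("activity order:", decode(priorities)[0], "objective:", decode(priorities)[1])
```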
S0965997813001798
Open source software (OSS) products have become popular in the market as an alternative to traditional proprietary or closed source software. Governments and organizations are beginning to adopt OSS on a large scale, and several governmental initiatives have encouraged the use of OSS in the private sector. One major issue for government and the private sector is the selection of appropriate OSS. This paper proposes new internal quality characteristics for selecting OSS that can be added to the dimensions of the DeLone and McLean information systems model. The quality characteristics are organized in a two-level hierarchy, which lists characteristics and sub-characteristics interconnected with three main dimensions: system quality, information quality and service quality. These characteristic dimensions are mapped to criteria built from the literature and from software quality standards and guidelines. The paper presents case study results of applying the proposed quality characteristics to eight different open source software products, divided between open source network tools and learning management systems.
Empirical study of open source software selection for adoption, based on software quality characteristics
S0965997813001804
During the last decades, multigrid methods have been extensively used to solve large scale linear systems derived from the discretization of partial differential equations using the finite difference method. The effectiveness of the multigrid method can also be exploited with the finite element method. Finite Element Approximate Inverses in conjunction with Richardson's iterative method can be used as smoothers in the multigrid method, yielding a new class of smoothers based on approximate inverses. The effectiveness of explicit approximate inverses relies on the fact that they are close approximations to the inverse of the coefficient matrix and are fast to compute in parallel. Furthermore, the proposed class of finite element approximate inverses in conjunction with the explicit preconditioned Richardson method yields improved results compared with classic smoothers such as the Jacobi method. Moreover, a dynamic relaxation scheme is proposed based on the Dynamic Over/Under Relaxation (DOUR) algorithm. Finally, results for multigrid preconditioned Krylov subspace methods, such as GMRES(res), IDR(s) and BiCGSTAB, based on approximate inverse smoothing and a dynamic relaxation technique, are presented for the steady-state convection-diffusion equation.
On the numerical modeling of convection-diffusion problems by finite element multigrid preconditioning methods
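A two-grid cycle with preconditioned Richardson smoothing, the structure such smoothers plug into, is sketched below for a 1-D Poisson problem; the diagonal inverse used here is only a stand-in for the finite element approximate inverse of the paper.

```python
# Minimal sketch (1-D Poisson; a diagonal approximate inverse stands in for the finite
# element approximate inverse): a two-grid cycle with preconditioned Richardson smoothing.
import numpy as np

def poisson(n):
    """1-D Poisson matrix with Dirichlet BCs on a uniform grid of n interior points."""
    return (np.diag(np.full(n, 2.0))
            - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

def richardson_smooth(A, M, x, b, omega=0.8, sweeps=3):
    """x <- x + omega * M (b - A x), with M an (approximate) inverse of A."""
    for _ in range(sweeps):
        x = x + omega * M @ (b - A @ x)
    return x

def two_grid(A, b, x, M):
    n = A.shape[0]
    x = richardson_smooth(A, M, x, b)                      # pre-smoothing
    r = b - A @ x
    R = np.zeros(((n - 1) // 2, n))                        # full-weighting restriction
    for i in range(R.shape[0]):
        R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
    Ac = R @ A @ (2 * R.T)                                 # Galerkin coarse operator
    ec = np.linalg.solve(Ac, R @ r)                        # exact coarse-grid solve
    x = x + (2 * R.T) @ ec                                 # prolongate and correct
    return richardson_smooth(A, M, x, b)                   # post-smoothing

n = 63
A = poisson(n)
b = np.ones(n)
M = np.diag(1.0 / np.diag(A))                              # stand-in approximate inverse
x = np.zeros(n)
for cycle in range(10):
    x = two_grid(A, b, x, M)
    print(f"cycle {cycle + 1}: residual = {np.linalg.norm(b - A @ x):.2e}")
```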