Columns: FileName (string, length 17), Abstract (string, length 163–6.01k), Title (string, length 12–421)
S0965997813001816
In this research, a novel near-optimum automated path planning algorithm for rigid aircraft engine part assembly, based on the particle swarm optimization approach, is proposed to solve the obstacle-free assembly path planning problem in a 3D haptic-assisted environment. 3D path planning using valid assembly sequence information was optimized by combining the particle swarm optimization algorithm with potential field path planning concepts. Furthermore, the presented approach was compared with the traditional particle swarm optimization algorithm (PSO), the ant colony optimization algorithm (ACO) and a genetic algorithm (CGA). Simulation results showed that the proposed algorithm has a faster convergence rate towards the optimal solution and lower computation time than the existing genetic and ant colony algorithms. To confirm the optimality of the proposed algorithm, it was further tested in a haptic-guided environment, where users were assisted by an active haptic guidance feature to perform the process along the optimized assembly path. It was observed that the haptic guidance feature further reduced the overall task completion time.
Haptic assisted aircraft optimal assembly path planning scheme based on swarming and artificial potential field approach
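As a toy illustration of the combination the abstract describes (PSO search over a path's waypoints, with an artificial-potential-field term penalising proximity to obstacles), the sketch below optimises a 2-D path. The waypoint encoding, penalty weight and PSO coefficients are illustrative assumptions, not the paper's actual haptic-environment formulation:

```python
import math
import random

def repulsion(p, obstacles, r_inf=1.0):
    """Artificial-potential-field penalty: grows sharply near an obstacle."""
    cost = 0.0
    for ox, oy, rad in obstacles:
        d = math.hypot(p[0] - ox, p[1] - oy) - rad  # clearance to obstacle surface
        if d < r_inf:
            cost += (1.0 / max(d, 1e-3) - 1.0 / r_inf) ** 2
    return cost

def path_cost(x, start, goal, obstacles, n_wp):
    """Fitness = path length + weighted potential-field penalty.
    The penalty is evaluated at waypoints only -- a deliberate simplification."""
    wps = [(x[2 * i], x[2 * i + 1]) for i in range(n_wp)]
    pts = [start] + wps + [goal]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return length + 10.0 * sum(repulsion(p, obstacles) for p in wps)

def pso_path(start, goal, obstacles, n_wp=2, n_part=30, iters=80, seed=1):
    """Standard global-best PSO over the flattened waypoint vector."""
    rng = random.Random(seed)
    dim = 2 * n_wp
    fit = lambda x: path_cost(x, start, goal, obstacles, n_wp)
    X = [[rng.uniform(-1.0, 2.0) for _ in range(dim)] for _ in range(n_part)]
    V = [[0.0] * dim for _ in range(n_part)]
    P = [x[:] for x in X]                 # personal bests
    g = min(P, key=fit)[:]                # global best
    for _ in range(iters):
        for i in range(n_part):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fit(X[i]) < fit(P[i]):
                P[i] = X[i][:]
                if fit(P[i]) < fit(g):
                    g = P[i][:]
    return g, fit(g)
```

With a single circular obstacle blocking the straight line, the swarm settles on a short detour whose cost is only slightly above the straight-line distance.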
S0965997813001828
Dissimilar welded joints are commonly used in fossil power plants to connect martensitic steel components and austenitic stainless steel piping systems. The integrity of such welded structures depends on the residual stresses induced by the manufacturing process. In this paper, the characteristics of the residual stresses in a dissimilar welded pipe between T92 steel and S30432 steel were investigated using the finite element method. Moreover, the effects of heat input, groove shape and number of layers on the residual stress distribution were studied to find ways to reduce the residual stress. The numerical results revealed that the hoop and axial stresses in the heat affected zone (HAZ) on the T92 steel side of the dissimilar welded joint had sharp gradients. By decreasing the groove angle, the peak values of the hoop and axial stresses on the T92 steel side were reduced greatly, while the peak values in the weld metal and the HAZ on the S30432 steel side differed little. Furthermore, a larger number of layers and a lower heat input decreased the peak value of the tensile residual stress in the weld metal and on the S30432 steel side, but had little effect on the residual stress on the T92 steel side.
Numerical simulation on the effect of welding parameters on welding residual stresses in T92/S30432 dissimilar welded pipe
S096599781300183X
Evidence theory employs a much more general and flexible framework to quantify epistemic uncertainty, and it has therefore recently been adopted for reliability analysis of engineering structures. However, the large computational cost caused by its discrete property significantly limits the practicability of evidence theory. This paper proposes an efficient response surface (RS) method to evaluate structural reliability using evidence theory, and hence improves its applicability to engineering problems. A new design-of-experiments technique is developed, whose key issue is the search for the important control points. These points are the intersections of the limit-state surface and the uncertainty domain, so they contribute significantly to the accuracy of the subsequently established RS. Based on them, a highly accurate radial basis function RS approximating the actual limit-state surface is established. With the RS, the reliability interval of the structure can be computed efficiently. Four numerical examples are investigated to demonstrate the effectiveness of the proposed method.
A response surface approach for structural reliability analysis using evidence theory
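The surrogate step above, fitting a radial basis function surface through sampled points of the limit state, can be sketched as follows. The Gaussian kernel, its width and the plain interpolation setup are assumptions for illustration; the paper's control-point search and evidence-theory interval computation are omitted:

```python
import numpy as np

def rbf_surface(centers, values, width=1.0):
    """Fit a Gaussian radial-basis-function interpolant g(x) through
    (center, value) samples of a limit-state function."""
    C = np.asarray(centers, dtype=float)
    # pairwise squared distances between centers
    r2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-r2 / (2.0 * width ** 2))      # kernel (Gram) matrix
    w = np.linalg.solve(Phi, np.asarray(values, dtype=float))

    def g(x):
        d2 = ((C - np.asarray(x, dtype=float)) ** 2).sum(-1)
        return float(np.exp(-d2 / (2.0 * width ** 2)) @ w)

    return g
```

Because the weights solve the kernel system exactly, the surrogate reproduces every training sample; its accuracy between samples then hinges on where the control points were placed, which is exactly the design-of-experiments issue the abstract highlights.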
S0965997813001841
Cloud service is a new and distinctive business model for service providers. Access control is an emerging and challenging issue in supporting the cloud service business. This work proposes a new access control mechanism called cloud service access control (CSAC). The CSAC mechanism considers payment status and service level as the two essential characteristics of cloud service. Ontology provides the theoretical foundation for the CSAC mechanism. Inconsistent access control policies are detected by a set of proposed policy conflict analysis rules. Inappropriate user accesses are inhibited by access control policies according to the proposed access-denial rules. A system architecture is designed to support the CSAC mechanism. A case study is provided to demonstrate how CSAC works. Finally, an evaluation is conducted to measure the concept explosion issue in CSAC.
Cloud service access control system based on ontologies
S0965997813001853
This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves (alpha, beta, delta, and omega) are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting, namely searching for prey, encircling prey, and attacking prey, are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.
Grey Wolf Optimizer
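The leadership-and-hunting scheme can be condensed into a minimal sketch of the standard GWO update equations: each wolf moves to the average of three guided steps derived from the alpha, beta and delta wolves, with the exploration coefficient a decaying linearly from 2 to 0. Population size, iteration count and the elitist retention of the three leaders are simplifications for this sketch:

```python
import random

def gwo(objective, dim, bounds, wolves=20, iters=100, seed=0):
    """Minimise `objective` with a basic Grey Wolf Optimizer sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for it in range(iters):
        pack.sort(key=objective)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 - 2.0 * it / iters      # decays linearly from 2 to 0
        for w in range(3, wolves):      # keep the three leaders (elitism)
            new = []
            for d in range(dim):
                moves = []
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a      # A in [-a, a]
                    C = 2.0 * rng.random()
                    D = abs(C * leader[d] - pack[w][d])
                    moves.append(leader[d] - A * D)
                x = sum(moves) / 3.0    # average of the three guided moves
                new.append(min(max(x, lo), hi))
            pack[w] = new
    return min(pack, key=objective)
```

Early on, |A| > 1 is common and wolves diverge from the leaders (exploration); as a shrinks, |A| < 1 forces attacks on the prey estimate (exploitation), which is how the single coefficient schedules the search.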
S0965997813001865
This paper is dedicated to the 3D computational modeling of the Be-200 amphibious aircraft. The phased modeling process of the amphibious aircraft's components is presented, along with variants of the shading and rendering of the model under development.
Computational modeling of multipurpose amphibious aircraft Be-200
S0965997813001877
In this work, we present a simulation model that makes it possible to find optimal values for various building parameters and the associated impacts that reduce the energy demand or consumption of the building. In the study, we consider several situations with different levels of thermal insulation. To define and to integrate the different models, a formal language (Specification and Description Language, SDL) is used. The main reason for using this formal language is that it makes it possible to define simulation models from graphical diagrams in an unambiguous and standard way. This simplifies the multidisciplinary interaction between team members. Additionally, the fact that SDL is an ISO standard simplifies its implementation because several tools understand this language. This simplification of the model makes it possible to increase the model's credibility and simplify the validation and verification processes. In the present project, the simulation tools used were SDLPS (to rule the main simulation process) and Energy+ (as a calculation engine for energy demand). The interactions between all these tools are detailed and specified in the model, allowing a deeper comprehension of the processes that define the life of a building from the point of view of its sustainability.
Formal simulation model to optimize building sustainability
S0965997814000106
In this paper we describe a process for developing software systems by capturing the conceptual domain knowledge of the problem domain using concept maps. To illustrate the implementation of this process we have used the example of developing Personal Health Information Systems. In addition to the aforementioned development process, the paper also describes an evaluation metric, developed using the Design Structure Matrix and information entropy, to measure the structural properties of the concept map. The determination of entropy is based on the information derived from the hierarchical structure of a concept map. The probability distributions and the information entropy were calculated, defining a new metric: node source connectivity strength, based on the number of unique paths from one node to another. The results were compared using a more standard metric, graph node connectivity.
Semantic requirements sharing approach to develop software systems using concept maps and information entropy: A Personal Health Information System example
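The entropy metric over root-to-node path counts can be illustrated with a small sketch. Treating the concept map as a DAG and counting unique paths from the root is one plausible reading of the abstract's "node source connectivity strength"; the paper's exact metric may differ:

```python
import math
from collections import defaultdict

def path_counts(edges, root):
    """Count unique root-to-node paths in an acyclic concept map (DAG).
    Each distinct path visits its end node exactly once, so a depth-first
    enumeration of paths yields the per-node counts directly."""
    children = defaultdict(list)
    for a, b in edges:
        children[a].append(b)
    counts = defaultdict(int)

    def walk(node):
        counts[node] += 1
        for c in children[node]:
            walk(c)

    walk(root)
    return dict(counts)

def connectivity_entropy(counts):
    """Shannon entropy (bits) of the normalised path-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

For a linear chain every node has one path and the distribution is uniform, giving the maximal entropy log2(n); converging branches concentrate counts on shared nodes and lower the entropy, which is the structural signal the metric is after.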
S0965997814000118
In recent years, the importance of economical considerations in the field of structures has motivated many researchers to propose new methods for minimizing the weight of structures. In this paper, a new and simple optimization algorithm is presented to solve the weight optimization of truss structures with continuous variables. Colliding Bodies Optimization (CBO) is an algorithm based on one-dimensional collisions between two bodies, where each agent solution is modeled as a body. After a collision of two moving bodies with specified masses and velocities, the bodies are separated and moved to new positions with new velocities. This process is repeated until a termination criterion is satisfied and the optimum CB is found. Comparison of the results of the CBO with those of some previous studies demonstrates its capability in solving the present optimization problems.
Colliding Bodies Optimization method for optimum design of truss structures with continuous variables
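A minimal sketch of the collision idea: agents are sorted, the better half act as stationary bodies, the worse half collide with them, and post-collision velocities follow the one-dimensional momentum-exchange formulas with masses derived from fitness. The restitution schedule, mass definition and bounds handling here are illustrative assumptions:

```python
import random

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def cbo(objective, dim, bounds, n=20, iters=150, seed=0):
    """Minimise `objective` with a basic Colliding Bodies Optimization sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(X, key=objective)[:]
    half = n // 2
    for it in range(iters):
        X.sort(key=objective)               # better half become stationary bodies
        eps = 1.0 - it / iters              # coefficient of restitution: 1 -> 0
        mass = [1.0 / (1e-9 + objective(x)) for x in X]  # fitter bodies are heavier
        newX = [x[:] for x in X]
        for i in range(half):
            j = i + half                    # moving body j hits stationary body i
            mi, mj = mass[i], mass[j]
            for d in range(dim):
                v = X[i][d] - X[j][d]       # approach velocity of the moving body
                vi = (mj + eps * mj) * v / (mi + mj)  # stationary body, after impact
                vj = (mi - eps * mj) * v / (mi + mj)  # moving body, after impact
                newX[i][d] = clamp(X[i][d] + rng.random() * vi, lo, hi)
                newX[j][d] = clamp(X[i][d] + rng.random() * vj, lo, hi)
        X = newX
        cand = min(X, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best
```

Because good solutions carry large mass, collisions barely displace them while flinging poor solutions toward them; the decaying restitution then damps the exchanged velocities, shifting the search from exploration to exploitation without any algorithm-specific tuning parameters.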
S096599781400012X
The paper is focussed on the robustness of parallel computation in the case of buckling and post-buckling analyses. In the nonlinear context, domain decomposition methods are mainly used as a solver for the tangent problem solved at each iteration of a Newton–Raphson algorithm. In the case of strongly nonlinear and heterogeneous problems such as those encountered in buckling and post-buckling, this procedure may lead to severe difficulties regarding convergence and efficiency. The problem of convergence is regarded as the most critical issue at the industrial level. Indeed, if a method that shows efficiency for some problems is not robust with respect to convergence, it will not be adopted by industrial end-users. Therefore, two paths are explored to gain robustness when making use of domain decomposition methods: (1) a nonlinear localization strategy, which may also improve robustness by treating the nonlinearity at the subdomain level; and (2) a mixed framework allowing the problem of local divergence (i.e. at the subdomain level) to be circumvented. It is to be noted that these two ingredients may also be used to improve the numerical efficiency of the method, but this is not the main focus of the paper. Simple structures are first considered to illustrate the method's performance. Results obtained in the case of a boxed structure and of a stiffened panel are then discussed.
Domain decomposition methods with nonlinear localization for the buckling and post-buckling analyses of large structures
S0965997814000143
Simulating flows with suspended particles is a challenging task that is important in many applications such as sedimentation, rheology and fluidized suspensions. The coupling between the suspending liquid flow and the particles' motion is the central point in a complete understanding of the phenomena that occur in these applications. Finite Element/Fictitious Domain methods are an important class of methods used to solve this problem. In this work we propose a simple object-oriented implementation for simulations of flows with suspended particles in the plane using the Fictitious Domain method together with Lagrange multipliers to solve the Navier–Stokes and rigid body equations with a fully implicit and fully coupled Finite Element approach. To obtain an efficient implementation of the Fictitious Domain/Finite Element method, we introduce a new topological data structure that is concise in terms of storage and very suitable for searching the elements of the mesh intersected by the particles.
Finite Element/Fictitious Domain programming for flows with particles made simple
S0965997814000167
For the prediction of ground vibrations generated by railway traffic, finite element analysis (FEA) appears as a competitive alternative to simulation tools based on the boundary element method: it is largely used in industry and does not suffer any limitation regarding soil geometry or material properties. However, boundary conditions must be properly defined along the domain border so as to mimic the effect of infinity for ground wave propagation. This paper presents a full three-dimensional FEA for the prediction of railway ground-borne vibrations. Non-reflecting boundaries are compared to fixed and free boundary conditions, especially concerning their ability to model soil wave propagation and reflection. Investigations with the commercial FEA software ABAQUS are also presented, including the development of an external meshing tool to automatically define the infinite elements at the model boundary. Since ground wave propagation is a transient problem, the problem is formulated in the time domain. The influence of the domain dimension and of the element size is analysed, and rules are established to optimise accuracy and computational burden. As an example, the structural response of a building is simulated, considering homogeneous or layered soil, during the passage of a tram at constant speed.
Using three-dimensional finite element analysis in time domain to model railway-induced ground vibrations
S0965997814000180
In this study a numerical calibration procedure is proposed, and its application to some of the most widely accepted damage indices (DIs) used for quantifying the extent of damage in reinforced concrete structures is presented. In particular, and without loss of generality of the applicability of the proposed procedure, the Park and Ang local damage index; its modified variant presented by Kunnath, Reinhorn and Lobo; the Chung, Meyer and Shinozuka local damage index; and the maximum and final softening damage indices proposed by DiPasquale and Çakmak are calibrated on the basis of the width of crack openings. The estimation of the crack width is performed by means of detailed modelling with hexahedral finite elements for the concrete and rod elements for the steel reinforcement, while, owing to the computing demands, the databank of values for the damage indices under investigation is built from coarse models with beam–column elements. These two steps of the proposed procedure are based on incremental dynamic analysis. Next, the statistical characteristics of the DIs are computed by means of horizontal statistics in conjunction with the maximum likelihood method and an optimization algorithm.
Numerical calibration of damage indices
S0965997814000192
Due to the increase in speed and lightweight construction, modern robots vibrate significantly during motion. Thus, accurate mechanical modeling and detailed knowledge of controller behavior are essential for accurate path planning and control design of robots. For the suppression of undesired vibrations, detailed models are used to develop robust controllers. Least-squares identification methods require deep insight into the analytical equations and are thus not well suited to identifying different highly nonlinear robot models. Recently, we presented our genetic parameter identification in Brussels (Ludwig and Gerstmayr, 2011). It minimizes the error between measured and simulated quantities. Highly efficient models in the multibody system tool HOTINT lead to short computational times for various simulations with different parameters. The simulation models can easily be assembled by engineers without detailed knowledge of the underlying multibody system. As a drawback of genetic optimization, many sub-minima were detected, and many simulations were required to determine the global minimum. Our current approach extends our previous algorithm: measured and simulated quantities are transformed into the frequency domain. In contrast to previous work (Ludwig and Gerstmayr, 2013), the amplitude spectra of the measured and simulated quantities are smoothed prior to the L2-norm computation. The presented method is tested on small-scale test problems as well as real robots. Smoothing in the frequency domain reduces the number of simulations needed to obtain higher accuracy. It turns out that the presented algorithm is more accurate and precise than a standard algorithm and reduces the computational cost.
Special Genetic Identification Algorithm with smoothing in the frequency domain
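The frequency-domain objective described above, amplitude spectra smoothed before taking the L2 norm, can be sketched as below. The moving-average smoothing window and rfft-based spectrum are assumptions; the authors' exact smoothing and weighting are not specified in the abstract:

```python
import numpy as np

def smoothed_spectrum_error(measured, simulated, win=5):
    """L2 error between smoothed amplitude spectra of two equal-length signals.
    Smoothing flattens narrow spectral peaks, so the error landscape seen by
    the genetic optimizer has fewer spurious local minima."""
    def amp(sig):
        a = np.abs(np.fft.rfft(np.asarray(sig, dtype=float)))  # amplitude spectrum
        kernel = np.ones(win) / win
        return np.convolve(a, kernel, mode="same")             # moving average
    return float(np.linalg.norm(amp(measured) - amp(simulated)))
```

In an identification loop this function would replace a time-domain residual: the optimizer varies model parameters, re-simulates, and minimises the smoothed spectral mismatch against the measurement.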
S0965997814000209
A constrained version of the ant colony optimisation algorithm (ACOA) is proposed in this paper for layout optimization of looped water distribution networks. A novel formulation is used to represent the layout optimization problem of pipe networks in the proper form required for the application of the ant algorithm. The proposed formulation is based on the engineering concept of reliability, in which the number of independent paths from the source node to each of the network nodes is considered as a measure of reliability. In the proposed formulation, the ants are constrained to choose from the options provided by a constraining procedure so that only looped layouts are constructed by the ants, leading to a huge reduction in search space size compared to the original search space. Three different constraining procedures are used, leading to three different algorithms. The proposed methods are used to find the optimal layout of three benchmark examples from the literature, and the results are presented and compared to those of the conventional ant colony optimization algorithm. The results show the efficiency and effectiveness of the proposed method for optimal layout determination of looped networks.
Layout optimization of looped networks by constrained ant colony optimisation algorithm
S0965997814000210
Refined models and nonlinear time-history analysis have been important developments in the field of urban regional seismic damage simulation. However, the application of refined models has been limited because of their high computational cost when they are implemented on traditional central processing unit (CPU) platforms. In recent years, graphics processing unit (GPU) technology has been developed and applied rapidly because of its powerful parallel computing capability and low cost. Hence, a coarse-grained parallel approach for seismic damage simulations of urban areas based on refined models and GPU/CPU cooperative computing is proposed. The buildings are modeled using a multi-story concentrated-mass shear (MCS) model, and their seismic responses are simulated using nonlinear time-history analysis. The benchmark cases demonstrate that the performance-to-price ratio of the proposed approach can be 39 times that of a traditional CPU approach. Finally, a seismic damage simulation of a medium-sized urban area is implemented to demonstrate the capacity and advantages of the proposed method.
A coarse-grained parallel approach for seismic damage simulations of urban areas based on refined models and GPU/CPU cooperative computing
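The multi-story concentrated-mass shear (MCS) model with time-history analysis is the computational kernel here. Below is a linear, undamped, single-threaded sketch of such a shear model under ground acceleration, integrated with the explicit central-difference scheme; the paper's nonlinear hysteresis and GPU parallelisation are omitted:

```python
def mcs_time_history(masses, stiffs, ag, dt):
    """Explicit central-difference response of a multi-story concentrated-mass
    shear (MCS) model to ground-acceleration samples `ag`.
    masses[i], stiffs[i]: mass of story i and stiffness of the spring below it.
    Governing relation per story: m*u'' = f_spring - m*a_g (relative coords)."""
    n = len(masses)
    u_prev = [0.0] * n
    u = [0.0] * n
    history = []
    for a in ag:
        # story shear forces from inter-story drifts (chain of shear springs)
        f = [0.0] * n
        for i in range(n):
            drift_below = u[i] - (u[i - 1] if i > 0 else 0.0)
            f[i] -= stiffs[i] * drift_below
            if i + 1 < n:
                f[i] += stiffs[i + 1] * (u[i + 1] - u[i])
        # central difference: u_next = 2u - u_prev + dt^2 * u''
        u_next = [2.0 * u[i] - u_prev[i] + dt * dt * (f[i] / masses[i] - a)
                  for i in range(n)]
        u_prev, u = u, u_next
        history.append(u[:])
    return history
```

Each story's update touches only its two neighbours, which is precisely the locality that makes a building-per-thread (coarse-grained) GPU mapping attractive: every building's time-stepping loop is independent of every other building's.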
S0965997814000222
Meshing tools are highly complex software for generating and managing geometrical discretizations. Due to their complexity, they have generally been developed by end users – physicists, forest engineers, mechanical engineers – with ad hoc methodologies and not by applying well established software engineering practices. Different meshing tools have been developed over the years, making them a good application domain for Software Product Lines (SPLs). This paper proposes building a domain model that captures the different domain characteristics such as features, goals, scenarios and a lexicon, and the relationships among them. The model is partly specified using a formal language. The domain model captures product commonalities and variabilities as well as the particular characteristics of different SPL products. The paper presents a rigorous process for building the domain model, where specific roles, activities and artifacts are identified. This process also clearly establishes consistency and completeness conditions. The usefulness of the model and the process are validated by using them to generate a software product line of Tree Stem Deformation (TSD) meshing tools. We also present Meshing Tool Generator, a software that follows the SPL approach for generating meshing tools belonging to the TSD SPL. We show how an end user can easily generate three different TSD meshing tools using Meshing Tool Generator.
Domain modeling as a basis for building a meshing tool software product line
S0965997814000234
The lattice Boltzmann method is being increasingly employed in the field of computational fluid dynamics due to its computational efficiency. Floating-point operations in the lattice Boltzmann method involve local data and therefore allow easy cache optimization and parallelization. Due to this, the cache-optimized lattice Boltzmann method has superior computational performance over traditional finite difference methods for solving unsteady flow problems. When solving steady flow problems, the explicit nature of the lattice Boltzmann discretization limits the time step size and therefore the efficiency of the lattice Boltzmann method for steady flows. To quantify the computational performance of the lattice Boltzmann method for steady flows, a comparison study between the lattice Boltzmann method (LBM) and the alternating direction implicit (ADI) method was performed using the 2-D steady Burgers' equation. The comparison study showed that the LBM performs comparatively poorly on high-resolution meshes due to smaller time step sizes, while on coarser meshes, where the time step size is similar for both methods, the cache-optimized LBM's performance is superior. Because flow domains can be discretized with multiblock grids consisting of coarse and fine grid blocks, the cache-optimized LBM can be applied on the coarse grid blocks while traditional implicit methods are applied on the fine grid blocks. This paper finds the coupled cache-optimized lattice Boltzmann–ADI method to be faster than the traditional methods by a factor of 4.5 while maintaining similar accuracy.
Domain decomposition based coupling between the lattice Boltzmann method and traditional CFD methods—Part I: Formulation and application to the 2-D Burgers’ equation
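For a flavour of why LBM floating-point work is local (collision per site, streaming to neighbours), here is a minimal one-dimensional two-velocity (D1Q2) lattice Boltzmann sketch for a diffusion problem, not the paper's 2-D Burgers' setup. With tau = 1 the scheme reduces to neighbour averaging:

```python
def lbm_diffusion(rho0, tau=1.0, steps=100):
    """Evolve a 1-D density field with a two-velocity (D1Q2) lattice
    Boltzmann scheme: BGK collision toward f_eq = rho/2, then periodic
    streaming of the two populations in opposite directions."""
    f_plus = [0.5 * r for r in rho0]    # population moving right
    f_minus = [0.5 * r for r in rho0]   # population moving left
    for _ in range(steps):
        rho = [p + m for p, m in zip(f_plus, f_minus)]
        # collision: relax each population toward the local equilibrium
        f_plus = [p + (0.5 * r - p) / tau for p, r in zip(f_plus, rho)]
        f_minus = [m + (0.5 * r - m) / tau for m, r in zip(f_minus, rho)]
        # streaming: shift populations one site (periodic boundaries)
        f_plus = [f_plus[-1]] + f_plus[:-1]
        f_minus = f_minus[1:] + [f_minus[0]]
    return [p + m for p, m in zip(f_plus, f_minus)]
```

Collision touches one site and streaming touches one neighbour, so each update reads a tiny, contiguous working set; that data locality is what the cache-optimized LBM in the abstract exploits, and it also explains why the explicit step size, not the arithmetic, becomes the bottleneck for steady problems.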
S0965997814000301
Constitutive models for concrete based on the microplane concept have repeatedly proven their ability to reproduce the non-linear response of concrete well, on the material as well as the structural scale. The major obstacle to a routine application of this class of models is, however, the calibration of microplane-related constants from macroscopic data. The goal of this paper is twofold: (i) to introduce the basic ingredients of a robust inverse procedure for the determination of the dominant parameters of the M4 model proposed by Bažant et al. (2000), based on cascade artificial neural networks trained by an evolutionary algorithm; and (ii) to validate the proposed methodology against a representative set of experimental data. The obtained results demonstrate that the soft-computing-based method is capable of delivering the searched response with an accuracy comparable to the values obtained by expert users.
Soft computing-based calibration of microplane M4 model parameters: Methodology and validation
S0965997814000313
Verification of the quantities of interest computed with the finite element method (FEM) requires an upper bound on the strain energy, which is half of the squared energy norm of the displacement solution. Recently, a modified finite element method with strain smoothing, the node-based smoothed finite element method (NS-FEM), has been proposed to solve solid mechanics problems. It has been found in some cases that the energy norm formed by the smoothed strain of NS-FEM solutions bounds the energy norm of the exact displacements from above. We analyze the bounding property of this method, give three kinds of energy norms of solutions computed by FEM and NS-FEM, and extend them to the computation of upper and lower bounds on a linear functional of the displacements. By examining the bounding property of NS-FEM with different energy norms on some linear elastic problems, the advantages of NS-FEM over traditional error-estimate-based methods are observed.
The verification of the quantities of interest based on energy norm of solutions by node-based smoothed finite element method
S0965997814000325
This paper addresses the problem of a seamless interface between hydrodynamic and structural analyses. A pressure distribution on a hydro model computed from a seakeeping analysis needs to be transferred to a structural model for evaluating structural strength and integrity. However, due to the differences in the computation and representation methods of the two analyses, the load on the hydro model may not be correctly transferred to the structural model, leading to a different load distribution on the structural model and resulting in unbalanced force and moment components. In this paper, a method is proposed to solve this problem. A pressure distribution on the hydro model is mapped onto the structural model through projection, and force and moment imbalances on the structural model are eliminated through optimization of the nodal forces on the structural model. Moreover, a viscous force distribution along the center of each member of the hydro model is transferred to the nodal forces on the structural model based on the minimum distance measure, while resolving any force and moment imbalance. Examples are presented to demonstrate the validity of the proposed method.
Load mapping for seamless interface between hydrodynamics and structural analyses
S0965997814000337
A physically based approach to modeling vehicle dynamics, transient engine performance and the engine thermal management system is presented. This approach enables modeling dynamic processes in the individual components and the dynamic interaction of all relevant domains. The modeling framework is based on a common innovative solver, where all processes are solved using tailored numerical techniques suited to the characteristic time scales of the individual domains. This approach yields very short computational times for the overall model. The paper focuses on the integration of cooling and lubrication models into the framework of a vehicle dynamics simulation including transient engine performance, demonstrated on a modern passenger car featuring split cooling functionality. A validated model with a mechanically driven coolant pump provides the basis for analyzing the impact of introducing an electrically driven coolant pump. Analyses are performed for two drive cycles featuring significantly different velocity profiles to reveal their influence on the operational principles of the powertrain components and their interaction. The results show that for both drive cycles the fuel saving due to the application of the electric water pump is relatively small, amounting to between 0.75% and 1.1%. However, it is important to note that application of the electric coolant pump results in higher turbine outlet temperatures and thus in faster catalyst heat-up. Detailed analyses of the interaction between vehicle dynamics, transient engine performance and the engine thermal management system provide insight into the underlying mechanisms. This is made possible by the application of a physically based system-level model.
Nomenclature: abbreviations (brake mean effective pressure, central processing unit, differential algebraic equation, engine control unit, exhaust gas recirculation, engine thermal management, electrical water pump, friction mean effective pressure, internal combustion engine, new European drive cycle, ordinary differential equation, rate of heat release, turbocharged gasoline direct injection, three-way catalyst, and others); symbols with units, e.g. acceleration (m/s2), area (m2), heat capacity (J/kgK), specific enthalpy (J/kg), mass flow (kg/s), pressure (Pa), specific gas constant (J/kgK), temperature (K), density (kg/m3), torque (Nm), thermal conductivity (W/mK), angular velocity (rad/s); and subscripts (e.g. combustion, cylinder, inlet, outlet, piston ring, wall).
Assessment of engine thermal management through advanced system engineering modeling
S0965997814000349
The efficient management of monitoring data is necessary for large geotechnical engineering projects. The development of an information management, prediction and warning software system for geotechnical monitoring is presented in this study. Seven categories of property objects that describe the hierarchical relationships among the monitoring objects, as well as two objects that represent and manage the construction progress, are proposed based on the requirements of geotechnical monitoring, data flow and the monitoring objectives of the site. The corresponding data structure and database were established using the object-oriented method in the Visual C++ environment. The software integrated various types of information and document management schemes, including data input and processing, CAD drawing visualisation, data modelling and prediction, as well as an early warning function. The applied case studies indicate that the software system is highly flexible and reliable and can be widely applied to monitor the sites of various geotechnical construction projects, such as tunnels, underground caverns, slopes and foundation pits.
A relationship-based and object-oriented software for monitoring management during geotechnical excavation
S0965997814000416
The monitoring of tool wear status is paramount for guaranteeing workpiece quality and improving manufacturing efficiency. In some cases, a classifier based on small training samples is preferred because of the complex tool wear process and the time-consuming sample collection process. In this paper, a tool wear monitoring system based on a relevance vector machine (RVM) classifier is constructed to realize multi-category classification of tool wear status during the milling process. As a Bayesian alternative to the support vector machine (SVM), the RVM has stronger generalization ability with small training samples. Moreover, the RVM classifier results in fewer relevance vectors (RVs) than the SVM classifier has support vectors. Hence, classification can be carried out much faster than with the SVM. To show the advantages of the RVM classifier, a milling experiment on a titanium alloy was carried out, and multi-category classification of tool wear status under different numbers of training and test samples was performed using the SVM and RVM classifiers respectively. The comparison of the SVM with the RVM shows that the RVM obtains more accurate results for different numbers of small training samples, and that its classification speed is faster than that of the SVM. This method sheds new light on tool condition monitoring in industrial environments.
Force based tool wear monitoring system for milling process based on relevance vector machine
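The small-sample classification setting described above can be sketched with scikit-learn. Since no RVM implementation ships with scikit-learn, this hypothetical example uses only an SVM on synthetic stand-in features, illustrating the small training set and the support-vector count that the paper reports the RVM reduces; the data, feature count and thresholds are illustrative assumptions, not the paper's experiment.

```python
# Small-training-sample multi-class classification with an SVM baseline.
# Synthetic features stand in for milling-force signals (3 wear states);
# the RVM of the paper would yield fewer "relevance vectors" than the
# support-vector count reported below.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1,
                           class_sep=2.0, random_state=0)
# Deliberately small training set, as in the paper's setting.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=30,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
n_sv = int(clf.n_support_.sum())
print(f"test accuracy: {acc:.2f}, support vectors: {n_sv}")
```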
S0965997814000428
In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method features considerably improved performance compared to two previously proposed approaches and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations yields a much more accurate prediction of the real part performance, thus increasing the benefits of computer simulation in engineering design by enabling zero-prototyping and reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. The proposed approach to considering local thickness distribution in FEM crash simulations thus has high potential for the product development process of complex and highly demanding injection molded and cast parts and is currently being used by Ford Motor Company.
Improving FEM crash simulation accuracy through local thickness estimation based on CAD data
S0965997814000441
In this paper a new mathematical geometric model of spiral triangular wire strands with constructions of (3+9) and (3+9+15) wires is proposed, and an accurate computational 3D solid model of the two-layered triangular strand, used for finite element analysis, is presented. The geometric model fully considers the spatial configuration of the individual wires in the strand. The three-dimensional centreline geometry of the wires in the individual layers of the triangular strand consists of straight and helical segments. The derived mathematical representation of this curve takes the form of parametric equations with variable input parameters, which facilitate the determination of the centreline of an arbitrary circular wire in right- and left-hand lay triangular one- and two-layered strands. The derived geometric equations were used to generate accurate 3D geometric and computational strand models. The correctness of the parametric equations and the quality of the generated strand model were verified by visualization. The 3D computational model was used for a finite element analysis of the two-layered triangular strand subjected to tension loading. Illustrative examples are presented to highlight the benefits of the proposed geometric parametric equations and computational modelling procedures using the finite element method.
Computer modelling and finite element analysis of spiral triangular strands
S0965997814000453
One of the most critical issues when deploying wireless sensor networks for long-term structural health monitoring (SHM) is the correct and reliable operation of sensors. Sensor faults may reduce the quality of monitoring and, if remaining undetected, might cause significant economic loss due to inaccurate or missing sensor data required for structural assessment and life-cycle management of the monitored structure. This paper presents a fully decentralized approach towards autonomous sensor fault detection and isolation in wireless SHM systems. Instead of physically installing multiple redundant sensors in the monitored structure (“physical redundancy”), which would involve substantial penalties in cost and maintainability, the information inherent in the SHM system is used for fault detection and isolation (“analytical redundancy”). Unlike traditional centralized approaches, the analytical redundancy approach is implemented distributively: Partial models of the wireless SHM system, implemented in terms of artificial neural networks in an object-oriented fashion, are embedded into the wireless sensor nodes deployed for monitoring. In this paper, the design and the prototype implementation of a wireless SHM system capable of autonomously detecting and isolating various types of sensor faults are shown. In laboratory experiments, the prototype SHM system is validated by injecting faults into the wireless sensor nodes while being deployed on a test structure. The paper concludes with a discussion of the results and an outlook on possible future research directions.
Decentralized fault detection and isolation in wireless structural health monitoring systems using analytical redundancy
S0965997814000465
In this paper two types of tensor product finite macro-elements are contrasted, the former being the well known Lagrange type and the latter the Bézier (Bernstein) type. Although they have different mathematical origins and seemingly are unrelated, both are based on complete polynomials and thus share the same functional space, i.e. the classes {x^n} and {y^n}. Therefore, from the theoretical point of view it is anticipated that they should lead to numerically identical results in both static and dynamic analysis. For both types of elements details are provided concerning the main computer programming steps, while selected parts of a typical MATLAB® code are presented. Numerical applications include static (Laplace, Poisson), eigenvalue (acoustics) and transient (heat conduction) problems on rectangular, circular and elliptic shapes, each treated as a single macroelement. In agreement with the theory, in all six examples the results obtained using Bézier and Lagrange polynomials were found to be identical and of exceptional accuracy.
Bézier versus Lagrange polynomials-based finite element analysis of 2-D potential problems
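The abstract's central claim — that Bernstein (Bézier) and Lagrange bases span the same polynomial space and therefore give identical results — can be checked numerically. This small sketch represents f(x) = x^2 on [0, 1] in both bases; the degree, nodes and function are illustrative choices, not taken from the paper.

```python
# Represent f(x) = x^2 in a degree-2 Bernstein (Bezier) basis and as a
# Lagrange interpolant through three nodes; both reproduce x^2 exactly,
# since both bases span the space of quadratics.
import numpy as np
from math import comb

def bernstein(i, n, x):
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def lagrange(i, nodes, x):
    li = np.ones_like(x)
    for j, xj in enumerate(nodes):
        if j != i:
            li = li * (x - xj) / (nodes[i] - xj)
    return li

x = np.linspace(0.0, 1.0, 101)
# Bezier control ordinates of f(x) = x^2 at degree 2 are (0, 0, 1).
f_bezier = sum(c * bernstein(i, 2, x) for i, c in enumerate([0.0, 0.0, 1.0]))
# Lagrange interpolation of f at the nodes 0, 1/2, 1.
nodes = np.array([0.0, 0.5, 1.0])
f_lagrange = sum(fv * lagrange(i, nodes, x) for i, fv in enumerate(nodes**2))
print(np.max(np.abs(f_bezier - f_lagrange)))
```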
S0965997814000477
Within the multibody systems literature, few attempts have been made to use automatic differentiation for solving forward multibody dynamics and evaluating its computational efficiency. The most relevant implementations are found in the sensitivity analysis field, but they rarely address automatic differentiation issues in depth. This paper presents a thorough analysis of automatic differentiation tools in the time integration of multibody systems. To that end, a penalty formulation is implemented. First, open-chain generalized positions and velocities are computed recursively, while using Cartesian coordinates to define local geometry. Second, the equations of motion are implicitly integrated by using the trapezoidal rule and a Newton–Raphson iteration. Third, velocity and acceleration projections are carried out to enforce kinematic constraints. For the computation of Newton–Raphson’s tangent matrix, instead of using numerical or analytical differentiation, automatic differentiation is implemented here. Specifically, the source-to-source transformation tool ADIC2 and the operator overloading tool ADOL-C are employed, in both dense and sparse modes. The theoretical approach is backed with the numerical analysis of a 1-DOF spatial four-bar mechanism, three different configurations of a 15-DOF multiple four-bar linkage, and a 16-DOF coach maneuver. Numerical and automatic differentiation are compared in terms of their computational efficiency and accuracy. Overall, we provide a global perspective of the efficiency of automatic differentiation in the field of multibody system dynamics.
Performance of automatic differentiation tools in the dynamic simulation of multibody systems
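The difference between numerical and automatic differentiation discussed above can be illustrated with a minimal forward-mode AD sketch using dual numbers. This is the idea behind operator-overloading tools such as ADOL-C, reduced to a toy scalar function; the function f and the finite-difference step are arbitrary choices for illustration.

```python
# Forward-mode automatic differentiation with dual numbers: derivatives
# propagate exactly through the arithmetic (no truncation error), unlike
# the central finite difference shown for comparison.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for sin; falls back to math.sin on plain floats.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot) \
        if isinstance(x, Dual) else math.sin(x)

def f(x):               # toy nonlinear term standing in for a dynamics residual
    return x * x + 3.0 * sin(x)

x0 = 1.2
exact = 2.0 * x0 + 3.0 * math.cos(x0)        # analytical df/dx
ad = f(Dual(x0, 1.0)).dot                    # forward-mode AD
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6    # central finite difference
print(ad - exact, fd - exact)
```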
S0965997814000489
In practice, maximizing the usage of container space is a crucial economic requirement in many applications and has a wide impact on goods transportation. Companies spend a huge amount of money on packing and transportation. This study suggests that there is scope for further optimization which, if exploited, can lead to large savings. In this paper, we propose a new hyper-heuristic approach that automates the design process for packing two-dimensional rectangular blocks. The paper contributes to the literature by introducing a new search technique in which a genetic algorithm is coupled with the hyper-heuristic to obtain optimal or sub-optimal solutions at an acceptable rate. The results show the benefits of the hyper-heuristic over traditional approaches when compared statistically on a large benchmark dataset at the 5% level of significance. Improvements in solution quality, with filling rates of up to 99%, are observed on benchmark instances.
Design of efficient packing system using genetic algorithm based on hyper heuristic approach
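The coupling of a genetic algorithm with a packing heuristic can be sketched as follows. This toy example evolves packing orders for a simple shelf (level) heuristic rather than the paper's hyper-heuristic; the item sizes, container width and GA parameters are all illustrative assumptions.

```python
# Elitist GA over packing orders: each chromosome is a permutation of the
# items, decoded by a shelf heuristic; fitness is the filling rate.
import random

random.seed(1)
W = 10                                    # container width
ITEMS = [(random.randint(1, 5), random.randint(1, 4)) for _ in range(20)]
AREA = sum(w * h for w, h in ITEMS)

def used_height(perm):
    """Shelf heuristic: pack items left to right in the given order,
    opening a new shelf whenever the current row is full."""
    shelf_w = shelf_h = height = 0
    for idx in perm:
        w, h = ITEMS[idx]
        if shelf_w + w > W:               # start a new shelf
            height += shelf_h
            shelf_w = shelf_h = 0
        shelf_w += w
        shelf_h = max(shelf_h, h)
    return height + shelf_h

def fill_rate(perm):
    return AREA / (W * used_height(perm))

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest
    with the remaining items in the order they appear in parent b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    mid = a[i:j]
    rest = [g for g in b if g not in mid]
    return rest[:i] + mid + rest[i:]

pop = [random.sample(range(len(ITEMS)), len(ITEMS)) for _ in range(30)]
best = max(pop, key=fill_rate)
for _ in range(60):                       # elitist GA loop
    new_pop = [best[:]]                   # keep the incumbent
    while len(new_pop) < 30:
        a = max(random.sample(pop, 3), key=fill_rate)   # tournament
        b = max(random.sample(pop, 3), key=fill_rate)
        child = crossover(a, b)
        if random.random() < 0.3:         # swap mutation
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        new_pop.append(child)
    pop = new_pop
    best = max(pop, key=fill_rate)
print(f"filling rate: {fill_rate(best):.2%}")
```

Elitism makes the best filling rate monotone over generations, which keeps this sketch well-behaved even with the crude mutation operator.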
S0965997814000635
A design methodology to predict optimum post-tensioning forces and dimensioning of the cable system for hybrid cable-stayed suspension (HCS) bridges is proposed. The structural model is based on the combination of an FE approach and an iterative optimization procedure. The former provides a refined description of the bridge structure, taking into account the geometric nonlinearities of the bridge components. The latter optimizes the post-tensioning forces as well as the geometry of the cable system to achieve minimum deflections, the lowest steel quantity in the cable system and maximum performance of the cables under live load configurations. Results are compared with existing formulations to validate the proposed methodology. Moreover, parametric studies on more complex long span structures are developed to verify existing cable-dimensioning rules and to compare HCS bridges with conventional cable-stayed and suspension systems.
Nomenclature (symbols omitted): stiffening girder, pylon, stay, hanger and main cable variables; cable cross-section areas, lengths, forces and stresses; span lengths and geometry; dead and live load configurations; stayed–suspension coupling parameter; optimization and performance factors.
Optimum design analysis of hybrid cable-stayed suspension bridges
S0965997814000647
This paper deals with the parallel solution of the stationary obstacle problem with a convection–diffusion operator. The obstacle problem can be formulated in various ways; in the present study it is formulated as a multivalued problem. A formulation as a complementarity problem is also considered. Appropriate discretization schemes are considered for the numerical solution on distributed-memory machines using parallel synchronous and asynchronous Schwarz alternating algorithms. The considered discretization schemes ensure the convergence of the parallel synchronous and asynchronous Schwarz alternating methods, on one hand for the solution of the multivalued problem and on the other hand for the solution of the complementarity problem. Finally the implementation of the algorithms is described and the results of parallel simulations are presented.
Asynchronous Schwarz methods applied to constrained mechanical structures in grid environment
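A serial sketch of the multiplicative Schwarz alternating idea applied to an obstacle problem is shown below for a 1-D model: -u'' = f with u ≥ ψ and homogeneous boundary conditions, solved by projected Gauss–Seidel sweeps on two overlapping subdomains. The 1-D setting, problem data and iteration counts are illustrative assumptions; the paper treats the convection–diffusion case on parallel machines.

```python
# Multiplicative Schwarz with projected Gauss-Seidel for the 1-D obstacle
# problem -u'' = f, u >= psi, u(0) = u(1) = 0 (central differences).
import numpy as np

n, f = 31, -8.0                        # interior unknowns, constant load
h = 1.0 / (n + 1)
psi = np.full(n, -0.05)                # obstacle (a floor under the membrane)
u = np.zeros(n + 2)                    # includes the boundary zeros

def pgs_sweep(lo, hi):
    """Projected Gauss-Seidel on interior unknowns lo..hi-1 (indices in u)."""
    for i in range(lo, hi):
        cand = 0.5 * (u[i - 1] + u[i + 1] + h * h * f)
        u[i] = max(psi[i - 1], cand)   # project onto the obstacle

mid, overlap = (n + 1) // 2, 6
for _ in range(1500):                  # alternate between the two subdomains
    for _ in range(3):
        pgs_sweep(1, mid + overlap)            # subdomain 1
    for _ in range(3):
        pgs_sweep(mid - overlap, n + 1)        # subdomain 2

contact = u[1:-1] <= psi + 1e-8        # nodes where the obstacle is active
print("contact nodes:", int(contact.sum()))
```

At convergence the iterate satisfies the discrete complementarity conditions: u ≥ ψ everywhere, and -u'' = f at every node not in contact.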
S0965997814000659
This paper proposes a new multi-objective optimization method for a family of double suction centrifugal pumps with various blade shapes, using a Simulation–Kriging model–Experiment (SKE) approach. A Kriging metamodel is established to approximate the characteristic performance functions of a pump, namely the efficiency and the required net positive suction head (NPSHr); the two objectives are thus to maximize the efficiency and simultaneously minimize NPSHr. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) are each applied to the multi-objective optimization problem, and the Pareto solution set is obtained in an effective and efficient manner by the two algorithms. A tradeoff optimal design point is selected from the Pareto solution set by means of a robust design based on Monte Carlo simulations, and the optimal solution is further compared with physical prototype test values. The results show that the solution of the proposed multi-objective optimization method is in line with the experimental test.
Multi-objective optimization of double suction centrifugal pump using Kriging metamodels
S0965997814000660
An open source program to generate zero-thickness cohesive interface elements in existing finite element discretizations is presented. This contribution fills a gap in the literature: to the best of the author's knowledge, no such program exists. The program is useful in the numerical modeling of material/structure failure using cohesive interface elements. It is able to generate one- and two-dimensional, linear and quadratic cohesive elements (i) at all inter-element boundaries, (ii) at material interfaces and (iii) at grain boundaries in polycrystalline materials. The algorithms and usage of the program are discussed. Several two-dimensional and three-dimensional fracture mechanics problems are given, including the debonding of material interfaces, multiple delamination of composite structures and crack propagation in polycrystalline structures.
An open source program to generate zero-thickness cohesive interface elements
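The core node-duplication step behind zero-thickness cohesive interface elements can be sketched on a two-quad mesh. This illustrative snippet (not the paper's program) duplicates the nodes of the shared edge, re-wires one bulk element to the copies, and forms a 4-node interface element of zero geometric thickness.

```python
# Insert a zero-thickness 4-node cohesive element between two quads that
# share edge (2, 3): duplicate the edge nodes at identical coordinates
# and re-wire one of the bulk elements to the duplicates.
nodes = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (1.0, 1.0), 4: (0.0, 1.0),
         5: (2.0, 0.0), 6: (2.0, 1.0)}
quads = [[1, 2, 3, 4], [2, 5, 6, 3]]    # two quads sharing edge (2, 3)

def insert_cohesive(nodes, quads, edge, elem_to_rewire):
    """Duplicate the nodes of `edge` and re-wire one adjacent element;
    return the connectivity of the new cohesive element."""
    new_ids = {}
    for nid in edge:
        new_id = max(nodes) + 1
        nodes[new_id] = nodes[nid]       # same coordinates: zero thickness
        new_ids[nid] = new_id
    conn = quads[elem_to_rewire]
    quads[elem_to_rewire] = [new_ids.get(n, n) for n in conn]
    # Bottom pair then top pair: (n1, n2, n2', n1'), a degenerate quad.
    n1, n2 = edge
    return [n1, n2, new_ids[n2], new_ids[n1]]

coh = insert_cohesive(nodes, quads, (2, 3), 1)
print("cohesive element:", coh, "rewired quad:", quads[1])
```

Repeating this over every targeted edge (all inter-element boundaries, a material interface, or grain boundaries) reproduces the three generation modes the abstract lists, at the level of mesh topology.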
S0965997814000672
Discrete models are based on descriptions of the physical states (e.g., velocity, position, temperature, magnetic moment and electric potential) of a large number of discrete elements that form the media under study; they are not based on a continuous description of the media. The models are therefore particularly well adapted to describing the evolution of media driven by discontinuous phenomena, such as multi-fracturing followed by debris flow, as occurs in wear studies. Recently, the use of discrete models has been widened to face problems of complex rheological and/or multi-physical behaviors. Multi-physical problems involve complex mathematical formulations when a continuous approach is chosen, because different families of differential equations must be combined. These formulations are often much simpler to express in discrete models, in which each particle has a physical state and the evolution of that state is due to local physical interactions among particles. Since the year 2000, this method has been widely applied to the study of tribological problems, including wear (Fillot et al., 2007) [1], the thermo-mechanical behavior of a contact (Richard et al., 2008) [2] and subsurface damage due to surface polishing (Iordanoff et al., 2008) [3]. Recent works have shown how this method can be used to obtain quantitative results (André et al., 2012) [4]. To assist and promote research in this area, a free platform, GranOO, has been developed in a C++ environment and is distributed under a free GPL license. The primary features of this platform are presented in this paper, together with a series of examples that illustrate the main steps in constructing a reliable tribological numerical simulation. Details of the platform can be found at http://www.granoo.org.
The GranOO workbench, a new tool for developing discrete element simulations, and its application to tribological problems
S0965997814000684
The recently introduced Finite Cell Method combines the fictitious domain idea with the benefits of high-order finite elements. Although previous publications demonstrated the method’s excellent applicability in various contexts, the implementation of a three-dimensional Finite Cell code is challenging. To lower the entry barrier, this work introduces the object-oriented MATLAB toolbox FCMLab allowing for an easy start into this research field and for rapid prototyping of new algorithmic ideas. The paper reviews the essentials of the methods applied and explains in detail the class structure of the framework. Furthermore, the usage of the toolbox is discussed by means of different two- and three-dimensional examples demonstrating all important features of FCMLab (http://fcmlab.cie.bgu.tum.de/).
FCMLab: A finite cell research toolbox for MATLAB
S0965997814000751
This paper investigates the search performance of various meta-heuristics (MHs) for truss mass minimisation with dynamic constraints. Several established MHs were used to solve five truss optimisation problems, and the results were statistically compared based on convergence rate and consistency. It was found that the best optimisers for this design task are the evolution strategy with covariance matrix adaptation (CMAES) and differential evolution (DE). Furthermore, the best penalty function technique was identified by combining four penalty function techniques, each with several parameter settings, with the five best optimisers to solve the truss optimisation problems.
Comparative performance of meta-heuristic algorithms for mass minimisation of trusses with dynamic constraints
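A penalty function technique of the kind compared in the paper can be sketched with SciPy's differential evolution (DE being one of the best performers reported). The truss problem is replaced here by a toy constrained problem — minimize x² + y² subject to x + y ≥ 1, optimum at (0.5, 0.5) — and the quadratic penalty weight is an arbitrary choice.

```python
# Exterior quadratic penalty: infeasible points pay r * violation^2,
# turning the constrained problem into an unconstrained one that a
# metaheuristic such as differential evolution can handle directly.
import numpy as np
from scipy.optimize import differential_evolution

def penalized(v, r=1e3):
    x, y = v
    g = 1.0 - (x + y)                   # violation of x + y >= 1 when g > 0
    return x**2 + y**2 + r * max(g, 0.0)**2

res = differential_evolution(penalized, bounds=[(-2, 2), (-2, 2)],
                             seed=0, tol=1e-8)
print(res.x, res.fun)
```

With a finite penalty weight the minimizer sits slightly inside the infeasible side of the constraint (here within about 3e-4 of the true optimum); increasing r, or updating it over iterations, tightens this gap, which is exactly the kind of parameter-setting choice the paper's comparison covers.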
S0965997814000763
A uniform multiscale computational method is developed for 2D static and dynamic analyses of lattice truss materials in elasticity based on the extended multiscale finite element method. A multi-node coarse element is proposed to describe more complex deformations than the original four-node coarse element, and mode base functions are added to the original multiscale base functions to account for the effects of inertial forces in dynamic problems. The construction of the displacement and mode base functions is introduced in detail. In addition, the orthogonality of the displacement and mode base functions is proved, which indicates that the macroscopic displacement DOFs and modal DOFs are mutually independent. Finally, numerical experiments are carried out to verify the validity and efficiency of the proposed method by comparison with reference solutions obtained by the standard finite element method on the fine mesh.
An equivalent multiscale method for 2D static and dynamic analyses of lattice truss materials
S0965997814000775
Optimization has been applied in many different engineering fields. Most of these applications share similar characteristics: they are compute-intensive, involve time-consuming calculation iterations, and run in similar computing environments. This paper describes a generic cloud platform for engineering optimization that leverages the compute resources hosted in cloud datacenters. The methodology was to decompose engineering optimization processes into several interconnected sub-tasks, which were then converted and implemented as virtual applications for dynamic cloud deployment using OpenStack. The system can dynamically allocate and recycle compute resources according to the specific engineering optimization application. The research contributes a generic virtualization of the engineering optimization process and a cloud-computing-based implementation with newly developed algorithms. The system tests showed how engineering optimization problems can be embedded into the proposed platform through the developed large-scale, intelligent, cloud-based optimization services. Further applications in building energy simulation and optimization, stent optimization and water distribution optimization are currently under development.
A generic cloud platform for engineering optimization based on OpenStack
S0965997814000787
The effect of topographic features on wind speed and wake turbulence is evaluated by conducting Computational Fluid Dynamics (CFD) simulations using an in-house CFD program that features various turbulence models. The simulation results are assessed by computing the Fractional Speed Up Ratio (FSUR) along longitudinal lines at different elevations. Such information is useful for evaluating wind loads on long span structures and for micro-siting of wind turbines on complex terrain. Simulations are conducted on both idealized and real topographic features in both 2D and 3D domains. The turbulence structure behind hills is examined using several turbulence models, such as the mixing-length, standard k – ∊ , RNG k – ∊ , realizable k – ∊ and Smagorinsky LES models. All turbulence models predicted FSUR values on the upstream side of hills adequately; however, the performance of simple turbulence models, such as mixing length, is found to be insufficient for characterizing the wakes behind hills. The RANS turbulence models gave results close to one another; however, those models that incorporate modifications to account for adverse pressure gradient conditions performed better in the wakes behind hills. LES conducted at full scale dimensions, and using wall functions, failed to give results comparable to the other turbulence models. Re-conducting the simulations at model scale dimensions, hence at relatively small Reynolds number, and without using wall functions gave results comparable to those found in the literature. Therefore, the use of wall functions can degrade the quality of results in LES of the high Reynolds number flows of practical interest.
Wind flow simulations on idealized and real complex terrain using various turbulence models
S0965997814000799
The dynamic and lubrication characteristics of the piston–liner system are complex, and they have a great effect on power output, vibration and noise emission. In this paper, a numerical model that includes both lubrication and dynamic motion is established; the lubrication is solved by the finite element method, and the dynamic equations are solved by a Runge–Kutta method. The effects of the piston skirt parameters on the dynamic characteristics are compared for a typical inline six-cylinder engine, namely: clearance, offset of the piston pin, length of the piston skirt, position of the bump, curvature parameter and ellipticity of the piston, with the results focusing mainly on the slap noise of the engine. These analyses are useful for piston–liner design during engine development and can provide guidance for the design of low-noise engines.
Nomenclature (symbols omitted): piston, pin, ring and conrod masses and dimensions; oil film pressure, thickness, viscosity and flow factors; piston motion, tilt and force quantities; cylinder and ring groove pressures; friction, moment and elliptic skirt profile coefficients.
Piston dynamic characteristics analyses based on FEM method Part I: Effected by piston skirt parameters
S0965997814000805
Haptic devices are gaining popularity because of their increasing availability. Unlike a mouse or keyboard, these special input/output devices provide native 3D manipulation, particularly more precise control and force interaction. With a more accurate description of the model, haptics can achieve more realistic force feedback; therefore, triangulated surface models are often used for an authentic interpretation of 3D models. A common task in haptic visualization of triangulated surface models is to find the triangle that lies in the collision trajectory of the haptic probe. Since the render rate of haptic visualization is relatively high (usually about 1 kHz), the task becomes highly non-trivial for complex mesh models, especially for meshes that change over time. This paper presents a fast, novel location algorithm able to find the triangle that is close to the haptic probe and in the direction of the probe motion vector. The algorithm has negligible additional memory requirements, since it needs no additional search data structures and uses only the information usually available for triangulated models; it can therefore handle even triangular meshes changing over time. Results show that the proposed algorithm is fast enough for haptic visualization of complex-shaped models with hundreds of thousands of triangles.
Surface point location by walking algorithm for haptic visualization of triangulated 3D models
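The walking idea for point location on a triangulation can be sketched as a classic visibility walk: starting from an arbitrary triangle, hop across any edge that separates the current triangle from the query point until a containing triangle is found. This is a generic textbook variant, not the paper's algorithm; SciPy's Delaunay is used only to supply a mesh and its adjacency, and the walk result can be checked independently against find_simplex.

```python
# Visibility walk for 2-D point location using only orientation tests and
# the triangle adjacency; no auxiliary search structure is needed.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((60, 2))
tri = Delaunay(pts)                      # supplies the mesh and adjacency

def orient(a, b, p):
    """Signed area: the sign tells which side of line a-b point p is on."""
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def walk(p, start=0):
    """From triangle `start`, hop across any edge that separates the
    current triangle from p; terminates on Delaunay triangulations."""
    t = start
    while True:
        v = tri.simplices[t]
        for k in range(3):
            a, b, c = pts[v[(k+1) % 3]], pts[v[(k+2) % 3]], pts[v[k]]
            # p and the opposite vertex c on different sides of edge (a, b)?
            if orient(a, b, p) * orient(a, b, c) < 0:
                t = tri.neighbors[t][k]  # neighbor opposite vertex k
                break
        else:
            return t                     # no separating edge: p is inside

p = np.array([0.5, 0.5])                 # query point inside the hull
found = walk(p)
print("containing triangle:", found)
```

In a haptic loop the previous contact triangle would serve as the start triangle, so each walk typically visits only a handful of triangles, which is what makes walking attractive at kHz render rates.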
S0965997814000817
A new algorithm for hanging node elimination in octree structures is developed. The proposed algorithm utilizes hanging node elimination by refinement templates and a new mesh conditioning technique based on decoupling templates. Refinement templates insert transition elements to eliminate hanging nodes. Decoupling templates insert circular loops in the dual mesh without introducing or removing hanging nodes. Decoupling templates are introduced to avoid full refinement in the cases that do not match any of the available refinement templates. The proposed algorithm eliminates hanging nodes for concavely refined regions without excessive refinement. Another advantage of the proposed algorithm lies in eliminating narrow gaps of coarse meshes between refined regions. This step has a positive effect on the mesh quality as it avoids introducing non-regular templates with a limited penalty of uniform refinement. The presented algorithm produces good quality meshes and provides a consistent and complete method for producing conformally refined octree structures.
A consistent octree hanging node elimination algorithm for hexahedral mesh generation
S0965997814000908
In this study, we thoroughly investigate the performance of six state-of-the-art Multiobjective Evolutionary Algorithms (MOEAs) on a number of carefully crafted many-objective optimization benchmark problems. Each MOEA applies a different method to handle the difficulty of an increasing number of objectives. A performance-metrics ensemble exploits a number of performance metrics using double-elimination tournament selection and provides a comprehensive measure, revealing insights into the specific problem characteristics on which each MOEA performs best. Experimental results give detailed information on the performance of each MOEA in solving many-objective optimization problems. More importantly, they show that this performance depends on two distinct aspects: the ability of the MOEA to address the specific characteristics of the problem and its ability to handle a high-dimensional objective space.
Comparison of many-objective evolutionary algorithms using performance metrics ensemble
S096599781400091X
This paper presents an open and integrated framework that performs structural design optimization by associating an improved sequential approximation optimization (SAO) algorithm with a CAD/CAE integration technique. In the improved SAO algorithm, a new estimate of the width of the Gaussian kernel functions is proposed to enhance the surrogate models. Based on the improved surrogate models, an adaptive sampling strategy is developed to balance exploration and exploitation in the sampling process, which better trades off the ability to locate the global optimum against computational efficiency. Fewer function evaluations are required to find the optimum, which is of great significance for computation-intensive structural optimization problems. Moreover, based on scripting languages and Application Programming Interfaces (APIs), integration between commercial CAD and CAE software packages is implemented to extend the applications of the SAO algorithm to mechanical practice. Two benchmark tests, ranging from simple to complex and from low to moderate dimension, were performed to validate the efficacy of the proposed framework. Results show that the proposed approach facilitates the structural optimization process and greatly reduces the computing cost compared to other approaches.
A CAD/CAE integrated framework for structural design optimization using sequential approximation optimization
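A minimal Gaussian radial-basis surrogate of the kind used inside sequential approximation optimization can be sketched in one dimension: fit kernel weights to the current samples, then place the next (infill) sample at the surrogate minimum. The kernel-width rule used here (mean sample spacing) is a common heuristic stand-in, not the paper's improved estimate, and the test function and sample sites are arbitrary.

```python
# Gaussian RBF surrogate: interpolate the sampled responses, then pick
# the surrogate minimizer as the next sample site (one SAO iteration).
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.5 * x     # expensive black-box stand-in
X = np.array([0.0, 0.7, 1.5, 2.3, 3.0])     # current sample sites
y = f(X)

w = np.mean(np.diff(np.sort(X)))            # kernel width heuristic
K = np.exp(-((X[:, None] - X[None, :]) / w) ** 2)
c = np.linalg.solve(K, y)                   # interpolation weights

def surrogate(x):
    x = np.asarray(x, dtype=float)
    return np.exp(-((x[:, None] - X[None, :]) / w) ** 2) @ c

grid = np.linspace(0.0, 3.0, 301)
x_new = grid[np.argmin(surrogate(grid))]    # next (infill) sample site
print("suggested next sample:", float(x_new))
```

In a full SAO loop, x_new would be evaluated with the expensive model, appended to the sample set, and the surrogate refitted; the adaptive sampling strategy in the paper additionally mixes in exploratory points rather than always chasing the surrogate minimum.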
S0965997814000921
A generalized formulation to design Multi-Input–Multi-Output (MIMO) compliant mechanisms is presented in this work. This formulation also covers the simplified cases of the design of Multi-Input and Multi-Output compliant mechanisms, more commonly used in the literature. A Sequential Element Rejection and Admission (SERA) method is used to obtain the optimum design that converts one or more input works into one or more output displacements in predefined directions. The SERA procedure allows material to flow between two different material models: ‘real’ and ‘virtual’. The method works with two separate criteria for the rejection and admission of elements to efficiently achieve the optimum design. Examples of Multi-Input, Multi-Output and MIMO compliant mechanisms are presented to demonstrate the validity of the proposed procedure for designing complex compliant mechanisms.
Topology synthesis of Multi-Input–Multi-Output compliant mechanisms
S0965997814000933
A quantitative assessment is made of two display techniques, providing two different levels of depth perception, used in conjunction with a haptic device for manipulating 3D objects in virtual environments. The two display techniques are a 2D display and interactive 3D stereoscopic virtual holography on a zSpace tablet. Experiments involving selected pointing and manipulation tasks were conducted by several users of different ages and levels of computer training. The speed of performing the tasks with each display technique was recorded, and a statistical analysis of the data is presented. As expected, the use of interactive 3D stereoscopic display resulted in faster task performance. The improvement was particularly noticeable when subjects needed to manipulate the haptic arm to reach objects/targets at different depths, and when the objects/targets were partially occluded by obstacles.
Quantitative assessment of the effectiveness of using display techniques with a haptic device for manipulating 3D objects in virtual environments
S0965997814000945
This paper presents a numerical model developed specifically for ultrasonic shot peening (USP). It allows simulating the shot dynamics (trajectories in the chamber and impacts on the peened sample) in industrial configurations. The model supports complex 3D geometries, rotating parts and employs efficient collision detection algorithms for short computation times. The aim is to improve peening chamber designs and the choice of process parameters. The algorithm and main assumptions are presented. Numerical studies are then conducted to determine the performances of the model, in terms of computation time. Finally, a case study on a spur gear tests the model in an industrial configuration and shows a high correlation between the numerical results and experimental data.
CAD based model of ultrasonic shot peening for complex industrial parts
S0965997814000957
This paper introduces NiHu, a C++ template library for boundary element methods (BEM). The library is capable of computing the coefficients of discretised boundary integral operators in a generic way with arbitrarily defined kernels and function spaces. NiHu’s template core defines the workflow of a general BEM algorithm independent of the specific application. The core provides expressive syntax, based on the operator notation of the BEM, reflecting the mathematics behind boundary elements in the C++ source code. The customisable Component library contains elements specific to particular applications such as different numerical integration techniques and regularisation methods. The library can be used for creating a standalone C++ application using external open source libraries, or compiling a Matlab toolbox through the MEX interface. By massively exploiting C++ template metaprogramming, NiHu generates optimised codes for specific applications, including heterogeneous problems. The paper introduces the main concepts of the novel development, demonstrates its versatility and flexibility and compares the implementation’s performance to that of other open source projects.
NiHu: An open source C++ BEM library
S0965997814000969
The present study applies the numerical manifold method (NMM) as a tool to investigate the rockfall hazard in underground engineering. The crack evolution technique with crack initiation and propagation criterion, which has been successfully applied to handle cracking problems in rocks, is used in this study. A rockbolt element is introduced, which is first validated by a simple case. The mechanism of the rockbolt in reinforcing a layered rock mass is then investigated through a four-layered rock beam example. The developed NMM is then used to investigate the rockfall instability caused by either natural joints or mining induced fractures in an underground power station house or a tunnel. The results illustrate that the developed NMM can not only capture the entire dynamic process of the rockfall but also locate the keyblock successfully. As such, corresponding reinforcement methods can be chosen reasonably.
Underground rockfall stability analysis using the numerical manifold method
S0965997814000970
Structural optimization with frequency constraints is a challenging class of optimization problems characterized by highly non-linear and non-convex search spaces. When using a meta-heuristic algorithm to solve a problem of this kind, exploration/exploitation balance is a key feature to control the performance of the algorithm. An excessively exploitative algorithm might focus on certain areas of the search space ignoring the others. On the other hand, an algorithm that is too explorative overlooks high quality solutions as a result of not performing adequate local search. This paper compares nine multi-agent meta-heuristic algorithms for sizing and layout optimization of truss structures with frequency constraints. The variation of the diversity index during the optimization history is analyzed in order to inspect exploration/exploitation properties of each algorithm. It appears that there is a significant relationship between the algorithm efficiency and the evolution of the diversity index.
Comparison of nine meta-heuristic algorithms for optimal design of truss structures with frequency constraints
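The abstract above tracks a diversity index over the optimization history but does not define it. A common choice in the metaheuristics literature, sketched below as an illustration (the function name and the diagonal-based normalization are assumptions, not necessarily the paper's definition), is the mean distance of the agents from the population centroid, normalized by the search-space diagonal:

```python
import numpy as np

def diversity_index(population, lower, upper):
    """Normalized mean distance of agents from the population centroid.

    population : (n_agents, n_vars) array of candidate designs
    lower, upper : per-variable bounds used to normalize the distances
    """
    pop = np.asarray(population, dtype=float)
    centroid = pop.mean(axis=0)
    # Length of the search-space diagonal, used as the normalizing constant
    diagonal = np.linalg.norm(np.asarray(upper, dtype=float) -
                              np.asarray(lower, dtype=float))
    return np.linalg.norm(pop - centroid, axis=1).mean() / diagonal

rng = np.random.default_rng(0)
spread = rng.uniform(0.0, 10.0, size=(30, 5))     # explorative population
converged = np.ones((30, 5))                      # fully exploited population
print(diversity_index(converged, [0.0]*5, [10.0]*5))  # -> 0.0
```

Plotting this quantity per iteration shows whether an algorithm is still exploring (index stays high) or has collapsed into exploitation (index near zero).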
S0965997814000994
This work deals with optimization methods for the selection of submarine pipeline routes, employed to carry the oil & gas from offshore platforms. The main motives are related to the assessment of constraint-handling techniques, an important issue in the application of genetic algorithms and other nature-inspired algorithms to such complex, real-world engineering problems. Several methods associated to the modeling and solution of the optimization problem are addressed, including: the geometrical parameterization of candidate routes; their encoding in the context of the genetic algorithm; and, especially, the incorporation into the objective function of the several design criteria involved in the route evaluation. Initially, we propose grouping the design criteria as either “soft” or “hard”, according to the practical consequences of their violation. Then, the latter criteria are associated to different constraint-handling techniques: the classical static penalty function method, and more advanced techniques such as the Adaptive Penalty Method, the ε-Constrained method, and the Ho-Shimizu technique. Case studies are presented to compare the performance of these methods, applied to actual offshore scenarios. The results indicate the importance of clearly characterizing feasible and infeasible solutions, according to the classification of design criteria as “soft” or “hard” respectively. They also indicate that the static penalty approach is not adequate, while the other techniques performed better, especially the ε-Constrained and the Ho-Shimizu methods. Finally, it is seen that the optimization tool may reduce the design time to assess an optimal route, providing accurate results, and minimizing the costs of installation and operation of submarine pipelines.
Optimal design of submarine pipeline routes by genetic algorithm with different constraint handling techniques
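Of the constraint-handling techniques compared in the abstract above, the classical static penalty method is the simplest baseline. A minimal sketch follows; the function name, the example objective values and the penalty weights are illustrative assumptions, not the paper's actual route-evaluation criteria:

```python
import numpy as np

def static_penalty_fitness(objective, violations, weights):
    """Classical static penalty: add a fixed weighted sum of the
    "hard"-criterion violations to the objective being minimized.

    objective  : raw objective value (e.g. route length or cost)
    violations : per-constraint violation magnitudes (0 when satisfied)
    weights    : fixed penalty coefficients, chosen a priori
    """
    violations = np.maximum(np.asarray(violations, dtype=float), 0.0)
    return float(objective) + float(np.dot(weights, violations))

# A feasible route is unpenalized; an infeasible one pays a fixed price
print(static_penalty_fitness(120.0, [0.0, 0.0], [50.0, 80.0]))  # -> 120.0
print(static_penalty_fitness(100.0, [0.5, 0.0], [50.0, 80.0]))  # -> 125.0
```

The weakness the abstract reports is visible here: the weights are fixed in advance, so a poor choice either lets infeasible routes win or drowns out the objective, which is what adaptive schemes such as the Adaptive Penalty Method or the ε-Constrained method address.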
S0965997814001008
The budgetary pressure of engineering simulation platform development is continuing to increase in industry, especially for scientific institutions, while simulation platforms are becoming increasingly important for researchers in their daily work. Plenty of basic components provided for free by open-source communities can be used to develop a simulation platform; however, using open-source components is much more technically difficult than directly utilizing a commercial simulation platform. We propose a novel gluing component to tackle the existing difficulties of engineering simulation platform development using open-source components and home-made resources. In order to reduce the development cost, a holistic framework named TPL.Frame, whose core is the gluing component, is designed to develop an engineering simulation platform that consists of the SALOME simulation platform, the OGRE engine and some other home-made numerical codes. Unlike general commercial simulation solutions, the framework provides not only basic functionalities, including CAD (Computer-Aided Design) and CAE (Computer-Aided Engineering), but also advanced features, such as real-time 3D visualization, interoperability between the FE (Finite Element) method and MB (Multi-Body) dynamics, distributed simulation modeling and other user-defined features. In comparison with the traditional development method, several case studies from the railway industry are given to demonstrate how to rapidly develop an engineering simulation platform using TPL.Frame, thus proving the effectiveness of the proposed framework.
A holistic framework for engineering simulation platform development gluing open-source and home-made software resources
S096599781400101X
Many computer applications such as racing games and driving simulations demand high-fidelity 3D road network models. However, few methods exist for the automatic generation of 3D realistic road networks, especially for those in the real world. On the other hand, vast 2D road network data in various geographical information systems (GIS) have been collected in the past and are used by a wide range of applications. A method that can automatically produce 3D high-fidelity road network models from 2D real road GIS data will significantly reduce both the labor and time cost, and greatly benefit applications involving road networks. Based on a set of carefully selected civil engineering rules for road design, this paper proposes a novel approach that transforms existing road GIS data that contain only 2D road centerline information into high-fidelity 3D road network models. The proposed method consists of several major components, including road GIS data preprocessing, 3D centerline modeling, and 3D geometric modeling. With this approach, basic road elements such as road segments, road intersections and traffic interchanges are generated automatically to compose sophisticated road networks in a seamless manner. Results show that this approach provides a rapid and efficient 3D road modeling method for applications that have stringent requirements on high-fidelity road models.
Automatic high-fidelity 3D road network modeling based on 2D GIS data
S0965997814001021
This paper describes an Essential Software Framework for Meshfree Methods (ESFM). Through thorough analyses of many existing meshfree methods, their common elements and procedures are identified, and a general procedure is formulated into ESFM that can facilitate their implementations and accelerate new developments in meshfree methods. ESFM also modulates performance-critical components such as neighbor-point searching, sparse-matrix storage, and sparse-matrix solver enabling developed meshfree analysis programs to achieve high-performance. ESFM currently consists of 21 groups of classes and 94 subclasses, and more algorithms can be easily incorporated into ESFM. Finally, ESFM provides a common ground to compare various meshfree methods, enabling detailed analyses of performance characteristics.
ESFM: An Essential Software Framework for Meshfree Methods
S0965997814001033
The probability distribution of wind speed is one of the important wind characteristics for the assessment of wind energy potential and for the design of wind energy conversion systems. The wind energy distribution can be obtained once the wind speed probability function is known; the probability distribution of wind speed is therefore an especially important piece of information needed in the assessment of wind energy potential. The two-parameter Weibull distribution has been commonly used, accepted and recommended in the literature to express the wind speed frequency distribution for most wind regimes. The Gumbel and Frechet distributions are frequently used to model extreme wind speeds. The joint probability density function (JPDF) model is derived from the marginal distributions of wind speed and wind direction and is expressed as an extreme-value equation. In the present study an effort has been made to determine the best fitting distribution of wind speed data by a soft computing methodology. We used the adaptive neuro-fuzzy inference system (ANFIS), a specific kind of neural network, to predict the wind speed probability density distribution. For this purpose, the two-parameter Weibull and JPDF and the three-parameter Frechet and Gumbel distributions are fitted to the data, and the parameters of each distribution are used as training and checking data for the ANFIS model. Finally, the ANFIS results are compared with the four introduced distributions, suggesting that the ANFIS distributions are the most suitable compared with the Weibull, JPDF, Frechet and Gumbel distributions.
Survey of four models of probability density functions of wind speed and directions by adaptive neuro-fuzzy methodology
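As a concrete illustration of fitting the two-parameter Weibull distribution discussed in the abstract above, the standard maximum-likelihood fixed-point iteration for the shape k and scale c can be sketched as follows (the function name, iteration count and synthetic data are assumptions for illustration, not the paper's procedure):

```python
import numpy as np

def fit_weibull(v, n_iter=100):
    """Maximum-likelihood estimate of the two-parameter Weibull
    distribution f(v) = (k/c) * (v/c)**(k-1) * exp(-(v/c)**k), v > 0.

    Uses the classical fixed-point iteration for the shape k,
    then recovers the scale c in closed form.
    """
    v = np.asarray(v, dtype=float)
    log_v = np.log(v)
    k = 1.0
    for _ in range(n_iter):
        vk = v ** k
        k = 1.0 / (np.sum(vk * log_v) / np.sum(vk) - log_v.mean())
    c = np.mean(v ** k) ** (1.0 / k)
    return k, c

# Recover known parameters from synthetic wind-speed samples
rng = np.random.default_rng(0)
samples = 6.0 * rng.weibull(2.0, size=5000)   # true shape k = 2, scale c = 6
k_hat, c_hat = fit_weibull(samples)
print(round(k_hat, 2), round(c_hat, 2))
```

Parameter pairs obtained this way from site data are the kind of values the abstract describes feeding to the ANFIS model as training and checking data.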
S0965997814001124
This study presents a new Data-Enabled Design Model for high-rise buildings driven by pressure datasets, DEDM-HRP, which seamlessly combines synchronous pressure measurement databases with a rigorous computational framework to offer convenient estimation of wind load effects on high-rise buildings for their preliminary design. To respond to the need for practical applications, DEDM-HRP employs a web-based on-the-fly framework designed with user-friendly/intuitive web interfaces for the assessment of wind-induced responses as well as equivalent static wind loads in the three principal response directions, for any incident wind angle of interest, with minimum added complications or requirements of knowledge of comprehensive background theories for its use.
A cyberbased Data-Enabled Design framework for high-rise buildings driven by synchronously measured surface pressures
S0965997814001136
In this paper, we present a tool combining two software applications aimed at optimizing structural design problems of the civil engineering domain. Our approach lies in integrating an application for designing 2D and 3D bar structures, called Ebes, with the jMetal multi-objective optimization framework. The result is a software package that helps civil engineers to create bar structures which can be optimized further with multi-objective metaheuristics according to different goals, such as minimizing the structure weight and minimizing the deformation. The main features of both Ebes and jMetal are described and how they are combined together in one single tool is explained. Finally a case study to illustrate how the application works is presented.
Integrating a multi-objective optimization framework into a structural design software
S0965997814001148
This paper examines issues regarding computational modeling of the cabin interior of the multipurpose passenger amphibian aircraft Be-200. Different concepts of cabin layout are introduced: an economy variant; a comfortable layout; a layout with coupe-type seating; and a corporate variant with berths. The interior objects are designed on the basis of ergonomic principles. For cabin computational modeling the 3ds Max graphic system is used. Object modeling is carried out by means of the Spline Extrude and Polygon Extrude methods. In the course of scene shading, material assignment is performed at the level of subobjects. Realistic rendering scenes of the various aircraft cabin layouts are presented.
Computational modeling of passenger amphibian aircraft Be-200 cabin interior
S096599781400115X
The paper describes an efficient numerical model for better understanding the influence of the microstructure on the thermal conductivity of heterogeneous media. This is the extension of an approach recently proposed for simulating and evaluating effective thermal conductivities of alumina/Al composites. A C++ code called MultiCAMG, taking into account all steps of the proposed approach, has been implemented in order to satisfy requirements of efficiency, optimization and code unification. Thus, on the one hand, numerical tools such as the efficient Eyre–Milton scheme for computing the thermal response of composites have been implemented for reducing the calculation cost. On the other hand, statistical parameters such as the covariance and the distribution of contact angles between particles are now estimated for better analyzing the microstructure. In the present work we focus our investigations on the effects of anisotropy on the effective thermal conductivity of alumina/Al composites. First of all, an isotropic benchmark is set up for comparison purposes. Secondly, anisotropic configurations are studied in order to direct the heat flux. A transversally isotropic structure, taking benefit of wall effects, is finally proposed for controlling the orientation of contact angles. Its thermal capabilities are related to the current issue of heat dissipation in automotive engine blocks.
An efficient numerical model for investigating the effects of anisotropy on the effective thermal conductivity of alumina/Al composites
S0965997814001161
Recent progress in entertainment and gaming systems has brought more natural and intuitive human–computer interfaces to our lives. Innovative technologies, such as Xbox Kinect, enable the recognition of body gestures, which are a direct and expressive way of human communication. Although current development toolkits provide support to identify the position of several joints of the human body and to process the movements of the body parts, they actually lack a flexible and robust mechanism to perform high-level gesture recognition. In consequence, developers are still left with the time-consuming and tedious task of recognizing gestures by explicitly defining a set of conditions on the joint positions and movements of the body parts. This paper presents EasyGR (Easy Gesture Recognition), a tool based on machine learning algorithms that help to reduce the effort involved in gesture recognition. We evaluated EasyGR in the development of 7 gestures, involving 10 developers. We compared time consumed, code size, and the achieved quality of the developed gesture recognizers, with and without the support of EasyGR. The results have shown that our approach is practical and reduces the effort involved in implementing gesture recognizers with Kinect.
Easy gesture recognition for Kinect
S0965997814001173
Permutation flow shop scheduling (PFSP) is among the most studied scheduling settings. In this paper, a hybrid Teaching–Learning-Based Optimization algorithm (HTLBO), which combines a novel teaching–learning-based optimization algorithm for solution evolution and a variable neighborhood search (VNS) for fast solution improvement, is proposed for PFSP to determine the job sequence with minimization of makespan criterion and minimization of maximum lateness criterion, respectively. To convert the individual to the job permutation, a largest order value (LOV) rule is utilized. Furthermore, a simulated annealing (SA) is adopted as the local search method of VNS after the shaking procedure. Experimental comparisons over public PFSP test instances with other competitive algorithms show the effectiveness of the proposed algorithm. For the DMU problems, 19 new upper bounds are obtained for the instances with makespan criterion and 88 new upper bounds are obtained for the instances with maximum lateness criterion.
An effective hybrid teaching–learning-based optimization algorithm for permutation flow shop scheduling problem
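The largest order value (LOV) rule mentioned in the abstract above decodes a real-valued individual into a job permutation by ranking its components in descending order. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def lov_decode(individual):
    """Largest order value rule: the job at the position holding the
    largest value is sequenced first, the next largest second, etc."""
    x = np.asarray(individual, dtype=float)
    # argsort of the negated values yields indices in descending order
    return [int(i) for i in np.argsort(-x, kind="stable")]

# Position 1 holds the largest value, so job 1 is sequenced first
print(lov_decode([0.2, 0.9, 0.5]))  # -> [1, 2, 0]
```

This mapping is what lets a continuous evolutionary operator (here, teaching–learning-based updates) search over the discrete space of job sequences.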
S0965997814001185
The main purpose of this paper is to determine what joints are most strained in the proposed underactuated finger by adaptive neuro-fuzzy methodology. For this, kinetostatic analysis of the finger structure is established with added torsional springs in every single joint. Since the finger’s grasping forces depend on torsional spring stiffness in the joints, it is preferable to determine which joints have the most influence on grasping forces. Hence, the finger joints experiencing the most strain during the grasping process should be determined. It is desirable to select and analyze a subset of joints that are truly relevant or the most influential to finger grasping forces in order to build a finger model with optimal grasping features. This procedure is called variable selection. In this study, variable selection is modeled using the adaptive neuro-fuzzy inference system (ANFIS). Variable selection using the ANFIS network is performed to determine how the springs implemented in the finger joints affect the output grasping forces. This intelligent algorithm is applied using the Matlab environment and the performance is analyzed. The simulation results presented in this paper show the effectiveness of the developed method.
Determining the joints most strained in an underactuated robotic finger by adaptive neuro-fuzzy methodology
S0965997814001276
Computation-intensive analyses/simulations are becoming increasingly common in engineering design problems. To improve the computation efficiency, surrogate models are used to replace expensive simulations of engineering problems. This paper proposes a new high-fidelity surrogate modeling approach which is called the Sparsity-promoting Polynomial Response Surface (SPPRS). In the SPPRS model, a series of Legendre polynomials is selected as basis functions, and its number is compatible with the sample size so as to enhance the expression ability for complex functional relationships. The coefficients associated with basis functions are estimated using a “sparsity-promoting” regression approach which is an ensemble of two techniques: least squares and ℓ1-norm regularization. As a result, only those basis functions relevant to explaining the functional relationship are picked out, which helps ease the problem of overfitting the training points. With the sparsity-promoting regression approach, such a surrogate model intends to capture both the global trend of the functional variation and a reasonable local accuracy in the neighborhood of training points. Additionally, Latin hypercube design (LHD) is proved conducive to improving the predictive capability of our model. The SPPRS is applied to seven benchmark test functions and a complex engineering problem. The results illustrate the promising benefits of this novel surrogate modeling technique.
Sparsity-promoting polynomial response surface: A new surrogate model for response prediction
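The combination described in the abstract above, a Legendre polynomial basis with ℓ1-regularized least squares, can be sketched with a plain ISTA (proximal gradient) solver. Everything below (function name, step size, iteration count, penalty value) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def sppr_fit(x, y, degree, lam, n_iter=5000):
    """Fit y ~ sum_k c_k P_k(x) with an l1 penalty on c via ISTA,
    so coefficients of irrelevant basis terms shrink to exactly zero."""
    A = legendre.legvander(x, degree)      # Legendre design matrix
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(degree + 1)
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        z = c - grad / L
        # Soft-thresholding: the proximal step of the l1 penalty
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c

# Data generated from 0.5 * P2(x): only the P2 coefficient should survive
x = np.linspace(-1.0, 1.0, 200)
y = 0.5 * legendre.legval(x, [0.0, 0.0, 1.0])
c = sppr_fit(x, y, degree=5, lam=1e-3)
print(np.round(c, 3))
```

The sparsity pattern of the recovered coefficient vector is the model-selection effect the abstract attributes to the sparsity-promoting regression.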
S0965997814001288
As the main safety facility on the highway, a guardrail system is essential for highway traffic safety. In this paper, Finite Element (FE) models of the vehicle and the corrugated beam guardrail system were created. Two types of widely used corrugated beam semi-rigid guardrails were simulated: the W-beam guardrail and the Thrie-beam guardrail. The collision between the corrugated beam guardrail systems and the vehicle body was analyzed. In the collision process, the snagging effect of the post on the vehicle body was also considered. Taking into account collision safety and the mechanism of the snagging effect, a multiobjective optimization problem was defined with the dimensional sizes of the guardrails as the design variables, and the radial basis function (RBF) was applied to construct the regression models of the analytical objective, which increased the fitting accuracy. The Pareto set and the optimal solution were obtained. After the optimization design, the W-beam guardrail and Thrie-beam guardrail were both greatly improved, which increased the collision safety between the corrugated beam guardrail and the vehicle body. This analytical method can also be used for crashworthiness optimization between other vehicles and guardrails.
Optimization design of corrugated beam guardrail based on RBF-MQ surrogate model and collision safety consideration
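The "RBF-MQ" in the title above refers to the multiquadric radial basis function used as the surrogate. A minimal interpolation sketch follows (function names, the shape parameter value and the toy data are assumptions; the paper's actual responses come from crash simulations):

```python
import numpy as np

def rbf_mq_fit(X, y, c=1.0):
    """Fit a multiquadric RBF surrogate, phi(r) = sqrt(r**2 + c**2),
    by solving the square interpolation system Phi w = y."""
    X = np.asarray(X, dtype=float)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = np.sqrt(r**2 + c**2)
    return np.linalg.solve(Phi, np.asarray(y, dtype=float))

def rbf_mq_predict(X_train, w, X_new, c=1.0):
    r = np.linalg.norm(np.asarray(X_new, dtype=float)[:, None, :] -
                       np.asarray(X_train, dtype=float)[None, :, :], axis=2)
    return np.sqrt(r**2 + c**2) @ w

# The surrogate interpolates the training responses exactly
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 2.0])
w = rbf_mq_fit(X, y)
print(np.allclose(rbf_mq_predict(X, w, X), y))  # -> True
```

Because the multiquadric interpolation matrix is nonsingular for distinct sample points, the surrogate reproduces every simulated design exactly and is then cheap to evaluate inside the multiobjective optimizer.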
S096599781400129X
Colliding Bodies Optimization (CBO) is a new multi-agent algorithm inspired by a collision between two objects in one-dimension. Each agent is modeled as a body with a specified mass and velocity. A collision occurs between pairs of objects and the new positions of the colliding bodies are updated based on the collision laws. In this paper, Enhanced Colliding Bodies Optimization (ECBO) which uses memory to save some best solutions is developed. In addition, a mechanism is utilized to escape from local optima. The performance of the proposed algorithm is compared to those of standard CBO and some optimization techniques on some benchmark mathematical functions and three standard discrete and continuous structural design problems. Optimization results confirm the validity of the proposed approach.
Enhanced colliding bodies optimization for design problems with continuous and discrete variables
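A minimal sketch of the standard CBO loop summarized in the abstract above may clarify the collision-law update; the velocity and mass formulas follow the commonly published CBO rules, while the benchmark function, parameter values and clipping are illustrative assumptions (and the memory/escape mechanism of ECBO is deliberately omitted):

```python
import numpy as np

def cbo_minimize(f, lower, upper, n_bodies=20, n_iter=200, seed=0):
    """Standard Colliding Bodies Optimization. Assumes f(x) >= 0,
    as for the sphere benchmark used below."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    X = rng.uniform(lower, upper, size=(n_bodies, lower.size))
    half = n_bodies // 2
    best_x, best_f = None, np.inf
    for it in range(n_iter):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:                  # track best-ever (no elitism in CBO)
            best_f, best_x = fit.min(), X[np.argmin(fit)].copy()
        order = np.argsort(fit)                 # best bodies first
        X, fit = X[order], fit[order]
        m = 1.0 / (fit + 1e-12)                 # heavier mass = better body
        m /= m.sum()
        eps = 1.0 - it / n_iter                 # coefficient of restitution
        v = X[:half] - X[half:]                 # moving bodies approach their pairs
        # Post-collision velocity factors from the 1-D collision laws
        a_sta = (1.0 + eps) * m[half:] / (m[:half] + m[half:])
        a_mov = (m[half:] - eps * m[:half]) / (m[:half] + m[half:])
        X_sta = X[:half] + rng.random((half, lower.size)) * a_sta[:, None] * v
        X_mov = X[:half] + rng.random((half, lower.size)) * a_mov[:, None] * v
        X = np.clip(np.vstack([X_sta, X_mov]), lower, upper)
    return best_x, best_f

# Sphere benchmark in five dimensions
best_x, best_f = cbo_minimize(lambda x: float(np.sum(x**2)),
                              [-10.0]*5, [10.0]*5)
print(best_f)
```

The best-ever bookkeeping in the loop is exactly the gap the abstract's ECBO fills more systematically: standard CBO has no memory, so good solutions can be lost between iterations.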
S0965997814001306
This paper addresses the economic lot scheduling problem where multiple items produced on a single facility in a cyclical pattern have shelf life restrictions. A mixed integer non-linear programming model is developed which allows each product to be produced more than once per cycle and backordered. However, production of each item more than one time may result in an infeasible schedule due to the overlapping production times of various items. To eliminate the production time conflicts and to achieve a feasible schedule, the production start time of some or all the items must be adjusted by either advancing or delaying. The objective is to find the optimal production rate, production frequency, cycle time, as well as a feasible manufacturing schedule for the family of items, in addition to minimizing the long-run average cost. Metaheuristic methods such as the genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), and artificial bee colony (ABC) algorithms are adopted for the optimization procedures. Each of these methods is applied to a set of problem instances taken from the literature and the performances are compared against other existing models in the literature. The computational performance and statistical optimization results show the superiority of the proposed metaheuristic methods with respect to lower total costs compared with other reported procedures in the literature.
Optimization of mixed integer nonlinear economic lot scheduling problem with multiple setups and shelf life using metaheuristic algorithms
S0965997814001318
Due to the complexity and uncertainty in the process, soft computing methods such as regression analysis, neural networks (ANN), support vector regression (SVR), fuzzy logic and multi-gene genetic programming (MGGP) are preferred over physics-based models for predicting the process performance. The model participating in the evolutionary stage of the MGGP method is a linear weighted sum of several genes (model trees) regressed using the least squares method. In this combination mechanism, the occurrence of genes of lower performance in the MGGP model can degrade its performance. Therefore, this paper proposes a modified-MGGP (M-MGGP) method using a stepwise regression approach such that the genes of lower performance are eliminated and only the high performing genes are combined. In this work, the M-MGGP method is applied in modelling the surface roughness in the turning of hardened AISI H11 steel. The results show that the M-MGGP model produces better performance than those of MGGP, SVR and ANN. In addition, when compared to those of the MGGP method, the models formed by the M-MGGP method are of smaller size. Further, the parametric and sensitivity analysis conducted validates the robustness of the proposed model, which is shown to capture the dynamics of the turning of AISI H11 steel by revealing the dominant input process parameters and the hidden non-linear relationships.
Stepwise approach for the evolution of generalized genetic programming model in prediction of surface finish of the turning process
S096599781400132X
We examine the rotational (in)variance of the continuous-parameter genetic algorithm (CPGA). We show that a standard CPGA, using blend crossover and standard mutation, is rotationally variant. To construct a rotationally invariant CPGA it is possible to modify the crossover operation to be rotationally invariant. This however results in a loss of diversity. Hence we introduce diversity in two ways: firstly using a modified mutation scheme, and secondly by adding a self-scaling random vector with a standard normal distribution, sampled uniformly from the surface of an n-dimensional unit sphere, to the offspring vector. This formulation is strictly invariant, albeit in a stochastic sense only. We compare the three formulations in terms of numerical efficiency for a modest set of test problems; the intention not being the contribution of yet another competitive and/or superior CPGA variant, but rather to present formulations that are both diverse and invariant, in the hope that this will stimulate additional future contributions, since rotational invariance in general is a desirable, salient feature for an optimization algorithm.
On rotationally invariant continuous-parameter genetic algorithms
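The sphere-surface sampling in the abstract above relies on the standard fact that a vector of i.i.d. standard normals has a rotationally invariant distribution, so normalizing it yields a direction uniform on the unit sphere. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def random_unit_vector(n, rng):
    """Uniform sample from the surface of the n-dimensional unit sphere.

    Normalizing an isotropic Gaussian vector gives a uniform direction,
    which is exactly the property a rotationally invariant operator needs.
    """
    g = rng.standard_normal(n)
    return g / np.linalg.norm(g)

rng = np.random.default_rng(0)
u = random_unit_vector(10, rng)
print(round(float(np.linalg.norm(u)), 6))  # -> 1.0
```

Scaling such a direction by a self-adapting step length, as the abstract describes, reintroduces diversity without tying the mutation to the coordinate axes.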
S0965997814001331
This work is related with the implementation of a finite volume method to solve the 2D Shallow Water Equations on Graphic Processing Units (GPU). The strategy is fully oriented to work efficiently with unstructured meshes which are widely used in many fields of Engineering. Due to the design of the GPU cards, structured meshes are better suited to work with than unstructured meshes. In order to overcome this situation, some strategies are proposed and analyzed in terms of computational gain, by means of introducing certain ordering on the unstructured meshes. The necessity of performing the simulations using unstructured instead of structured meshes is also justified by means of some test cases with analytical solution.
An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes
S0965997814001343
This paper proposes a new approach for the design of a composite structure. This approach is formulated as an optimization problem where the weight of the structure is minimized such that a reserve factor is higher than a threshold. The thickness of each region of the structure is optimized together with its stacking sequence and the ply drop-offs. The novelty of this approach is that, unlike in common practice, the optimization problem is not simplified and split into two steps, one for finding the thicknesses and one for the stacking sequence. The optimization problem is solved without any simplification assumption. It is formulated as a bilevel integer programming and it uses the backtracking procedure to satisfy the blending and the manufacturing rules. Some numerical experiments are performed to show the efficiency of the proposed optimization method over complex cases which cannot be solved with the existing methods.
A bilevel integer programming method for blended composite structures
S0965997814001355
A requirement for new robotic manipulators is the ability to detect and manipulate objects in their environments. Robotic manipulators are highly nonlinear systems, and an accurate mathematical model is difficult to obtain using conventional techniques. Therefore, an efficient technique is required to deal with these types of complex and dynamic systems. The Differential Evolution (DE) algorithm is a very powerful optimization technique and has become popular in many fields. Arguably, it is now one of the most predominant stochastic algorithms for real-parameter optimization. However, DE is very sensitive to its control parameters of the mutation operation (F) and crossover operation (CR), in such a way that their fine tuning greatly affects DE performance. The Fuzzy Adaptive DE (FADE) algorithm is one of the well-known adaptive DE variants that show superiority and reliability in solving different types of optimization problems. The objective of this article is to develop a new dynamic parameter identification framework to estimate the barycentric parameters of the CRS A456 robot manipulator based on FADE. The simulation results presented in this paper show the effectiveness of the FADE method over other conventional techniques, transcending the limits of the existing state-of-the-art algorithms in solving the robot identification problem.
System identification and control of robot manipulator based on fuzzy adaptive differential evolution algorithm
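The abstract above emphasizes how sensitive DE is to the control parameters F (mutation scale) and CR (crossover rate). A minimal sketch of one generation of classic DE/rand/1/bin makes their roles explicit; this is a generic illustration, not the FADE scheme of the paper, and all names and parameter values are our own:

```python
import random

def de_step(pop, fitness, F=0.8, CR=0.9, rng=random.Random(1)):
    """One generation of DE/rand/1/bin.  F scales the difference vector in
    mutation; CR controls how many components the trial inherits from it."""
    d = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[k] + F * (b[k] - c[k]) for k in range(d)]   # mutation
        jrand = rng.randrange(d)   # guarantee at least one mutant component
        trial = [mutant[k] if (rng.random() < CR or k == jrand) else x[k]
                 for k in range(d)]                              # crossover
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)  # greedy selection
    return new_pop

sphere = lambda v: sum(t * t for t in v)   # toy objective for illustration
rng = random.Random(1)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(20)]
init_best = min(map(sphere, pop))
for _ in range(50):
    pop = de_step(pop, sphere)
best = min(map(sphere, pop))
```

An adaptive variant such as FADE replaces the fixed F and CR above with values updated by a fuzzy controller during the run.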
S0965997814001446
A numerical modelling approach capable of simulating Shot Peening (SP) processes of industrial interest was developed by combining the Discrete Element Method (DEM) with the Finite Element Method (FEM). In this approach, shot–shot and shot–target interactions as well as the overall shot flow were simulated efficiently using rigid body dynamics. A new algorithm to dynamically adapt the coefficient of restitution (CoR) for repeated impacts of shots on the same spot was implemented in the DEM code to take into account the effect of material hardening. Then, a parametric study was conducted using the Finite Element Method (FEM) to investigate the influence of the SP parameters on the development of residual stresses. Finally, a two-step coupling method is presented to combine the output of DEM simulation with FEM analyses to retrieve the Compressive Residual Stresses (CRS) after multiple impacts, with the aim of evaluating the minimum area required to be modelled to realistically capture the field of residual stresses. A series of such coupled analyses were performed to determine the effect of peening angle and the combination of initial velocity and mass flow rate on CRS.
A combined DEM–FEM numerical method for Shot Peening parameter optimisation
S0965997814001458
Current complex engineering software systems are often composed of many components and can be built based on a multiagent approach, resulting in what are called complex multiagent software systems. In a complex multiagent software system, various software agents may cite the operation results of others, and the citation relationships among agents form a citation network; therefore, the importance of a software agent in a system can be described by the citations from other software agents. Moreover, the software agents in a system are often divided into various groups, and each group contains the agents undergoing similar tasks or having related functions; thus, it is necessary to find the influential agent group (not only the influential individual agent) that can influence the system outcome utilities more than the others. To solve such a problem, this paper presents a new model for finding influential agent groups based on group centrality analyses in citation networks. In the presented model, a concept of extended group centrality is presented to evaluate the impact of an agent group, which is collectively determined by both direct and indirect citations from other agents outside the group. Moreover, the presented model addresses two typical types of agent groups: one is the adjacent group where agents of a group are adjacent in the citation network, and the other is the scattering group where agents of a group are distributed separately in the citation network. Finally, we present case studies and simulation experiments to prove the effectiveness of the presented model.
Finding influential agent groups in complex multiagent software systems based on citation network analyses
S096599781400146X
Application of techniques for modelling of boundary value problems implies two conflicting requirements: obtaining high accuracy of the results and speed of the solution. Accurate results can be obtained only by using appropriate models and algorithms. In previous papers the authors applied the parametric integral equations system (PIES) in modelling and solving boundary value problems. The first requirement was satisfied – the results were obtained with very high accuracy. This paper fulfils the second requirement by a novel approach to accelerating PIES. Graphics card (GPU) programming for numerical calculations in general purpose applications (GPGPU) using NVIDIA CUDA is used for this purpose. The speed of calculations increased up to 80 times while the high accuracy of the solutions was maintained. Examples included in this paper concern solving elasticity problems which are modelled by three-dimensional Navier–Lamé equations.
GPU-based acceleration of computations in elasticity problems solving by parametric integral equations system
S0965997814001471
The analysis of masonry double curvature structures by means of the kinematic theorem of limit analysis is traditionally the most diffused and straightforward method for an estimate of the load carrying capacity. However, the evaluation of the actual failure mechanism is not always trivial, especially for complex geometries and load conditions. Usually, the failure mechanism is simply hypothesized based on previous experience, or – due to the complexity of the problem – FE rigid elements with interfaces are used. Both strategies may result in a wrong evaluation of the failure mechanism and hence, in the framework of the kinematic theorem of limit analysis, in an overestimation of the collapse load. In this paper, a simple discontinuous upper bound limit analysis approach with sequential linear programming mesh adaptation to analyze masonry double curvature structures is presented. The discretization of the vault is performed with infinitely resistant triangular elements (curved elements based on a quadratic interpolation), with plastic dissipation allowed only at the interfaces for possible in- and out-of-plane jumps of velocities. Masonry is substituted with a fictitious material exhibiting an orthotropic behavior, by means of consolidated homogenization strategies. To progressively drive the positions of the interfaces toward the actual failure mechanism, an iterative mesh adaptation scheme based on sequential linear programming is proposed. Non-linear geometrical constraints on node positions are linearized with a first order Taylor expansion scheme, thus allowing the NLP problem to be treated with consolidated LP routines. The choice of inequality constraints on element node coordinates turns out to be crucial for the algorithm's convergence. The model performs poorly for coarse and unstructured meshes (i.e. at the initial iteration), but converges to the actual solution after a few iterations. Several examples are treated, namely a straight circular and a skew parabolic arch, a cross vault and a dome. The results obtained at the final iteration fit well, for all the cases analyzed, previously presented numerical approaches.
Upper bound sequential linear programming mesh adaptation scheme for collapse analysis of masonry vaults
S0965997814001483
Legged exoskeletons supplement human intelligence with the strength and endurance of a pair of wearable mechanical legs that support heavy loads. The exoskeleton-type system is a nonlinear system with parameter uncertainty, which is not easy to identify with traditional mathematical models. This paper presents co-simulations of a novel exoskeleton-human robot system on humanoid gaits with fuzzy-PID/PID algorithms, which do not need a precise model. The lower extremity exoskeleton model with series–parallel topology is briefly described and the gait characteristics are analyzed. The co-simulation method integrates ADAMS and MATLAB/SIMULINK with fuzzy-PID/PID algorithms, which were used to develop the control schematic of the exoskeleton-human robot system. Finally, co-simulations of humanoid gaits and movements, which include level walking, stair ascent, stair descent, side kick, squatting down and standing up, were provided to confirm the performance and effectiveness of the proposed control approach.
Co-simulation research of a novel exoskeleton-human robot system on humanoid gaits with fuzzy-PID/PID algorithms
S0965997814001501
This study introduces a welding process design tool to determine optimal arc welding process parameters based on Finite Element Method (FEM), Response Surface Method (RSM) and Genetic Algorithms (GA). Here, a sequentially integrated FEM–RSM–GA framework has been developed and implemented to reduce the weld induced distortion in the final welded structure. It efficiently incorporates finite element based numerical welding simulations to investigate the desired responses and the effect of design variables without expensive trial experiments. To demonstrate the effectiveness of the proposed methodology, a lap joint fillet weld specimen has been used in this paper. Four process parameters namely arc voltage, input current, welding speed and welding direction have been optimized to minimize the distortion of the structure. The optimization results revealed the effectiveness of the methodology for welding process design with reduced cost and time.
Process parameter optimization of lap joint fillet weld based on FEM–RSM–GA integration technique
S0965997814001513
We report computational fluid dynamics (CFD) code developments using the high-level programming syntax of the open source C++ library OpenFOAM®. CFD simulations utilizing the large-eddy simulation (LES) approach are carried out using the developed code in a real-world application. We investigate wind flowing over the Bolund hill, Denmark. In the present configuration a west–east wind meets the steep west side of the hill. Such conditions lead to flow separation at the location of a sharp cliff. A full scale simulation, with a simulation duration of over one month, is carried out on a supercomputer. Physically, about 45 min of real time is simulated in the LES enabling the statistical averaging of the results. The novelty of the paper consists of the following features: (1) we report validation results of the newly developed LES code for the Bolund hill case, (2) we show the high-level LES solver code in its entirety in a few tens of code lines which promotes transparency in CFD-code development in the OpenFOAM® environment, (3) the study is the first study to use LES in pointing out the complex 3D characteristics of the Bolund hill case with the computationally challenging west–east (270°) wind direction, and (4) based on the comparison with previous experimental data, and Reynolds averaged Navier–Stokes (RANS) simulations, the present LES gives so far the best match for the turbulent kinetic energy increase at the considered measurement positions.
Large-eddy simulation in a complex hill terrain enabled by a compact fractional step OpenFOAM® solver
S0965997814001525
We introduce a new architecture for the design of a tool for modeling and simulation of continuous and hybrid systems. The environment includes a compiler based on Modelica, a modular, acausal standard specification language for physical systems modeling (the tool supports models composed using certain component classes defined in the Modelica Standard Library, and the instantiation, parameterization and connection of these MSL components are described using a subset of Modelica). Models are defined in Modelica and translated into DEVS models. DEVS theory (originally defined for modeling and simulation of discrete event systems) was extended in order to permit defining these kinds of models. The different steps in the compiling process are shown, including how to model these dynamic systems under the discrete event abstraction, with examples of model simulation and their execution results.
Using a Discrete-Event System Specifications (DEVS) for designing a Modelica compiler
S0965997814001756
This article describes implementation details of a graphical editor which is intended for efficient 3D visualization of finite element meshes. The purpose of this program is to prepare the input data for the finite element method – a widely known method for numerical solutions of scientific and engineering problems described by partial differential equations. Nowadays, increasing demands on calculation accuracy, supported by the boom in parallel computing, lead to the use of finer and more detailed finite element meshes. However, meshes with a very large number of elements and nodes cause problems in commonly used visualization programs, which become very slow, unresponsive and, for some operations, often unusable. Therefore, efficient data structures and algorithms were designed and implemented to enable fast work with very large finite element meshes.
Efficient methods to visualize finite element meshes
S0965997814001768
Automobiles, aircraft, and ships require a tremendous number of parts to be assembled. For developing such large assemblies, most companies accelerate the design process by having many design engineers in different functional or sectional design groups working concurrently. However, interferences and gaps can be found when the parts and sub-assemblies of different design groups are to be assembled. These errors cause design changes and additional repair processes, resulting in an unexpected increase in costs and time delays. While the interference problem has been resolved by digital mockup and concurrent engineering methodology, many cases of the gap problem in the automotive industry have been covered by temporary treatments when the gaps are small enough to be filled with sealants. This kind of fast fix can cause leakage into the engine chamber and passenger cabin when the gap size is too big for filling or when the sealant gets old, which can turn fatal. With this research, we have developed a program to automatically find gaps between the parts of an assembly so that design engineers can correct their designs before the manufacturing stage begins. By using the method of decomposition model representation, the program can visualize gaps between complex car body parts as well as estimate their volumetric information. It can also automatically define the boundary between a gap and the exterior space. Although we have reviewed the benefits of the program by applying it to car development, it can also be applied to aircraft and ship designs comprising several parts.
Development of a gap searching program for automotive body assemblies based on a decomposition model representation
S0965997814001781
The development of Jacobian-free software for solving problems formulated by nonlinear partial differential equations is of increasing interest to simulate practical engineering processes. For the first time, this work uses the so-called derivative-free spectral algorithm for nonlinear equations in the simulation of flows in porous media. The model considered here is the one employed to describe the displacement of miscible compressible fluid in porous media with point sources and sinks, where the density of the fluid mixture varies exponentially with the pressure. This spectral algorithm is a modern method for solving large-scale nonlinear systems, which does not use any explicit information associated with the Jacobian matrix of the considered system, being a Jacobian-free approach. Two-dimensional problems are presented, along with numerical results comparing the spectral algorithm to a well-developed Jacobian-free inexact Newton method. The results of this paper show that this modern spectral algorithm is a reliable and efficient method for simulation of compressible flows in porous media.
Study of a Jacobian-free approach in the simulation of compressible fluid flows in porous media using a derivative-free spectral method
S0965997814001872
A natural consequence of the extended use of CAD systems for the design and production of any kind of vessel is their use in Virtual Reality environments, mainly because Virtual Reality has now become an accessible technology. Virtual Reality is spreading into every industry, in every sector, at any level. Important improvements both in software and hardware have had an important impact on its use in the shipbuilding industry, where it is necessary to handle complex ship 3D models with huge amounts of data. Therefore, efficiency is the basic condition for Virtual Reality navigation around a vessel. To achieve it, three important factors play a fundamental role. The first one is having an appropriate CAD system with all the information of the ship in a single database. The second important issue is to have a viewer, a tool that allows the 3D model to be managed in Virtual Reality environments. Needless to say, a good integration between the viewer and the CAD system translates into more functionality and better performance. Finally, the third important player is hardware, which makes Virtual Reality navigation possible in many different environments. This paper first describes different efficient uses of Virtual Reality in the shipbuilding industry, taking into consideration all the agents involved and describing in particular the advantages for each of them. Regarding the software requirements, the new FVIEWER, developed by SENER for Virtual Reality navigation and design review and based on the FORAN system, is described in particular. On the hardware side, some of the most relevant and feasible applications of Virtual Reality are described, taking into consideration potential uses and technology accessible on the market. The future of Virtual Reality in shipbuilding is discussed afterwards.
Virtual Reality in a shipbuilding environment
S0965997814001884
The paper describes the application of the latest Information Technologies in business processes such as design and manufacturing. More specifically it examines the use of cloud computing in the mechanical drawing and design process of an enterprise. It proposes a specific architecture with different servers for the implementation of a collaborative cloud-based design system. Finally, as an application example, it compares the operating cost of an industry's design department before and after the use of the proposed system. This example uses a private cloud deployment model so that the comparison of the operating cost is feasible. While a public cloud may offer more functionality and economy, a private cloud is best suited for drawing conclusions and comparisons between on-premise and cloud operation, because all of the cost is handled by the organization that uses it.
Collaborative design in the era of cloud computing
S0965997814001896
Structural optimization for performance-based seismic design (PBSD) in earthquake engineering aims at finding optimum design variables corresponding to a minimum objective function with constraints on performance requirements. In this study, an efficient methodology, consisting of two computational strategies, is presented for performance-based optimum seismic design (PBOSD) of steel moment frames. In the first strategy, a modified firefly algorithm (MFA) is proposed to efficiently find the PBOSD at the performance levels. Because a nonlinear static pushover analysis must be conducted to compute the structural responses at the performance levels, the overall computational time of the optimization process is extremely large. In the second strategy, to reduce the computational burden, a new neural network model termed wavelet cascade-forward back-propagation (WCFBP) is proposed to effectively predict the results of nonlinear pushover analysis during the optimization process. To illustrate the effectiveness of the proposed methodology, 3, 6 and 12 storey planar steel moment resisting frames are optimized for various performance levels. The results demonstrate the effectiveness of the proposed soft computing-based methodology for PBOSD of steel structures at low computational cost.
Performance-based optimum seismic design of steel structures by a modified firefly algorithm and a new neural network
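The abstract above builds on the standard firefly algorithm. A minimal sketch of one sweep illustrates its core mechanics — each firefly is pulled toward every brighter one with distance-decaying attraction, plus a small random walk; this is the generic algorithm, not the paper's modified (MFA) variant, and all parameter values are illustrative assumptions:

```python
import math
import random

def firefly_step(pop, f, beta0=1.0, gamma=1.0, alpha=0.02, rng=random.Random(3)):
    """One sweep of the standard firefly algorithm (minimization): firefly i
    moves toward each brighter firefly j with attraction beta0*exp(-gamma*r^2),
    plus a small uniform random perturbation scaled by alpha."""
    d = len(pop[0])
    new = [list(x) for x in pop]
    for i in range(len(pop)):
        for j in range(len(pop)):
            if f(pop[j]) < f(pop[i]):            # j is brighter than i
                r2 = sum((pop[i][k] - pop[j][k]) ** 2 for k in range(d))
                beta = beta0 * math.exp(-gamma * r2)
                for k in range(d):
                    new[i][k] += beta * (pop[j][k] - new[i][k]) \
                                 + alpha * rng.uniform(-1.0, 1.0)
    return new

sphere = lambda v: sum(t * t for t in v)         # toy objective
rng = random.Random(3)
pop = [[rng.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(10)]
start = min(map(sphere, pop))
for _ in range(60):
    pop = firefly_step(pop, sphere)
best = min(map(sphere, pop))
```

Note that the brightest firefly has no brighter neighbour and therefore stays put during a sweep, so the best objective value never worsens.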
S0965997814001902
Installing pedestrian ramps is a common improvement towards a barrier-free environment. This paper introduces a graph-theoretical method for retrofitting a single-branch Truss-Z (TZ) ramp in a constrained environment. The results produced by this exhaustive search method are usually ideal and better than those produced previously with meta-heuristic methods. A large case study of linking two sections of the Hongo Campus of Tokyo University using an overpass in an extremely constrained environment is presented. TZ modules with 1:12 (8.3%) slope are used, which is allowable in most countries for ramps for self-powered wheelchairs. The results presented here are highly satisfactory both in terms of structural optimization and aesthetics. Visualizations of the TZ ramp system, composed of 124 units, are presented.
Retrofitting of pedestrian overpass by Truss-Z modular systems using graph-theory approach
S0965997814002002
The lattice Boltzmann method (LBM) and traditional finite difference methods have separate strengths when solving the incompressible Navier–Stokes equations. The LBM is an explicit method with a highly local computational nature that uses floating-point operations that involve only local data and thereby enables easy cache optimization and parallelization. However, because the LBM is an explicit method, smaller grid spacing requires smaller numerical time steps during both transient and steady state computations. Traditional implicit finite difference methods can take larger time steps as they are not limited by the CFL condition, but only by the need for time accuracy during transient computations. To take advantage of the strengths of both methods, a multiple solver, multiple grid block approach was implemented and validated for the 2-D Burgers' equation in Part I of this work. Part II implements the multiple solver, multiple grid block approach for the 2-D backward step flow problem. The coupled LBM–VSM solver is found to be faster by a factor of 2.90 (2.87 and 2.93 for Re = 150 and Re = 500, respectively) on a single processor than the VSM for the 2-D backward step flow problem while maintaining similar accuracy.
Domain decomposition based coupling between the lattice Boltzmann method and traditional CFD methods – Part II: Numerical solution to the backward facing step flow
S0965997814002014
Spatial discretization of high-dimensional partial differential equations requires data representations that are of low overhead in terms of memory and complexity. Uniform discretization of computational domains quickly grows out of reach due to an exponential increase in problem size with dimensionality. Even with spatial adaptivity, the number of mesh data points can be unnecessarily large if care is not taken as to where refinement is done. This paper proposes an adaptive scheme that generates the mesh by recursive bisection, allowing mesh blocks to be arbitrarily anisotropic to allow for fine structures in some directions without over-refining in those directions that suffice with less refinement. Within this framework, the mesh blocks are organized in a linear kd-tree with an explicit node index map corresponding to the hierarchical splitting of internal nodes. Algorithms for refinement, coarsening and 2:1 balancing of a mesh hierarchy are derived. To demonstrate the capabilities of the framework, examples of generated meshes are presented and the algorithmic scalability is evaluated on a suite of test problems. In conclusion, although the worst-case complexity of sorting the nodes and building the node map index is O(n²), the average runtime scaling in the studied examples is no worse than O(n log n).
Data structures and algorithms for high-dimensional structured adaptive mesh refinement
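The abstract above describes mesh blocks generated by recursive bisection, stored in a linear kd-tree with an index map from parents to children. The sketch below is a simplified 2D illustration of that idea under our own assumptions (the refinement flag, axis-selection rule, and data layout are ours, not the paper's):

```python
def bisect(box, axis):
    """Split a box in half along one axis, leaving the other axes intact,
    which is how anisotropic blocks arise from repeated bisection."""
    lo, hi = box[axis]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[axis] = (lo, mid)
    right[axis] = (mid, hi)
    return left, right

def refine(blocks, needs_refinement, choose_axis):
    """One sweep: bisect every flagged leaf, appending the children to the
    linear tree and recording their indices in the parent (the node map)."""
    for i in range(len(blocks)):          # children appended during the sweep
        b = blocks[i]                     # are visited on the *next* sweep
        if b["children"] is None and needs_refinement(b["box"]):
            left, right = bisect(b["box"], choose_axis(b["box"]))
            b["children"] = (len(blocks), len(blocks) + 1)
            blocks.append({"box": left, "children": None})
            blocks.append({"box": right, "children": None})
    return blocks

# Refine only near x = 0, always bisecting the longest edge.
blocks = [{"box": [(0.0, 1.0), (0.0, 1.0)], "children": None}]
longest = lambda box: max(range(len(box)), key=lambda a: box[a][1] - box[a][0])
near_origin = lambda box: box[0][0] < 0.25 and (box[0][1] - box[0][0]) > 0.1
for _ in range(3):
    blocks = refine(blocks, near_origin, longest)
leaves = [b for b in blocks if b["children"] is None]
```

Because each split halves only one axis, leaves near the refined region end up with different aspect ratios while the leaf boxes still tile the original domain exactly.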
S0965997814002026
The paper presents an application of two domain repartitioning methods to solving hopper discharge problem simulated by the discrete element method. Quantitative comparison of parallel speed-up obtained by using the multilevel k-way graph partitioning method and the recursive coordinate bisection method is presented. The detailed investigation of load balance, interprocessor communication and repartitioning is performed. Speed-up of the parallel computations based on the dynamic domain decomposition is investigated by a series of benchmark tests simulating the granular visco-elastic frictional media in hoppers containing 0.3×10⁶ and 5.1×10⁶ spherical particles. A soft-particle approach is adopted, when the simulation is performed by small time increments and the contact forces between the particles are calculated using the contact law. The parallel efficiency of 0.87 was achieved on 2048 cores, modelling the hopper filled with 5.1×10⁶ particles.
The comparison of two domain repartitioning methods used for parallel discrete element computations of the hopper discharge
S096599781400204X
This paper proposes a novel training algorithm for radial basis function neural networks based on fuzzy clustering and particle swarm optimization. So far, fuzzy clustering has proven to be a very efficient tool in designing such kind of networks. The motivation of the current work is to quantify the exact effect of fuzzy cluster analysis on the network’s performance and use it in order to substantially improve this performance. There are two key theoretical findings resulting from the present work. First, it is analytically proved that when the standard fuzzy c-means algorithm is used to generate the input space fuzzy partition, the main effect this partition imposes to the network’s square error (i.e. performance index) can be written down in terms of a distortion function that measures the ability of the partition to recreate the original data. Second, using the aforementioned distortion function, an upper bound of the network’s square error can be constructed. Then, the particle swarm optimization (PSO) is put in place to minimize the above upper bound and determine the network’s parameters. To further improve the accuracy, the basis function widths and the connection weights are fine-tuned by employing a steepest descent approach. The main experimental findings are: (a) the implementation of the PSO obtains a significant reduction of the square error while exhibiting a smooth dynamic behavior, (b) although the steepest descent further decreases the error it finally obtains smaller reduction rates, meaning that the strongest impact on the error reduction is provided by the PSO, and (c) the improved performance of the proposed network is demonstrated through an extensive comparison with other related methods using a 10-fold cross-validation analysis.
Improving the effect of fuzzy clustering on RBF network’s performance in terms of particle swarm optimization
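The abstract above centers on the standard fuzzy c-means partition and a distortion function measuring how well that partition recreates the data. The sketch below shows the textbook FCM membership update and objective; it is an illustration of the standard algorithm only (the toy data, names, and the fuzzifier value m = 2 are our assumptions), not the paper's PSO-minimized upper bound:

```python
import math

def fcm_memberships(points, centers, m=2.0):
    """Standard fuzzy c-means membership update: u[j][i] decreases with the
    distance from point j to center i, relative to the other centers
    (fuzzifier m > 1); each row sums to 1 by construction."""
    U = []
    for x in points:
        dist = [max(math.dist(x, c), 1e-12) for c in centers]  # avoid /0
        U.append([1.0 / sum((dist[i] / dist[k]) ** (2.0 / (m - 1.0))
                            for k in range(len(centers)))
                  for i in range(len(centers))])
    return U

def distortion(points, centers, U, m=2.0):
    """FCM objective: small when the fuzzy partition recreates the data well."""
    return sum(U[j][i] ** m * math.dist(points[j], centers[i]) ** 2
               for j in range(len(points)) for i in range(len(centers)))

pts = [(0.0, 0.0), (0.1, 0.0), (4.0, 4.0), (4.1, 4.0)]  # two tight clusters
good = [(0.05, 0.0), (4.05, 4.0)]   # centers matching the clusters
bad = [(2.0, 2.0), (2.0, 2.1)]      # centers missing both clusters
U_good = fcm_memberships(pts, good)
U_bad = fcm_memberships(pts, bad)
```

A partition whose centers match the data yields a much smaller distortion than one whose centers miss the clusters, which is the quantity the paper's error bound is built from.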
S0965997814002051
The paper presents an improved sectional discretization method for evaluating the response of reinforced concrete sections. The section is subdivided into parametric subdomains that allow the modelization of any complex geometry while taking advantage of Gauss quadrature techniques. In particular, curved boundaries are dealt with by two nested parametric transformations, reducing the modeling approximation. It is shown how the so-called fiber approach is simply a particular case of the present more general method. Many benchmarks are presented in order to assess the accuracy of the results. The influence of the discretization into subdomains and of the quadrature rules chosen for integration is discussed. The numerical tests also highlight the effects of spurious stress distributions in the tensile concrete zone, due to the interpolation functions adopted for the Gauss integration. It is shown that, by balancing the number of subdomains and the number of sampling points, such spurious effects vanish. The method proves to be accurate, very flexible in the discretization process and robust in analyzing any sectional state. Moreover, it converges faster than the fiber method, reducing the computational demand. All these properties are of great importance when the computations are iteratively repeated, as in the case of sectional analysis within a computational procedure for an R.C. frame analysis.
A parametric subdomain discretization for the analysis of the multiaxial response of reinforced concrete sections
S0965997814002063
This paper discusses the cloud computing based approach for parallelization of large displacement stability analysis of orthotropic prismatic shell structures with simply supported boundary conditions along the diaphragm-supported edges. We review the harmonic coupled finite strip method (HCFSM), and describe a software system for nonlinear analysis of reinforced concrete (RC) structures. We combine different parallelization models – MPI and OpenMP – in order to cope with the increased computational complexity, which originates from coupling of all series terms in the HCFSM formulation. We discuss the effects of parallelization from the perspective of a cloud environment. Our results show that rational usage of cloud resources can lead to significant performance improvements and monetary savings. In certain cases, the achieved performance can be very close to the maximum one.
Hybrid MPI/OpenMP cloud parallelization of harmonic coupled finite strip method applied on reinforced concrete prismatic shell structure
S0965997814002075
The fatigue behavior of 3D 4-directional braided composites was investigated based on the unit cell approach. First, the unit cell models of 3D 4-directional braided composites with different braiding angles and fiber volume fractions were built up using ABAQUS. Then, the fatigue behavior of the 3D 4-directional braided composites was analyzed, and the effect of fatigue loading direction on the fatigue damage evolution and fatigue life was studied. Finally, the effect of braiding angles and fiber volume fraction of the unit cell on the fatigue behavior of 3D 4-directional braided composites was analyzed. These results will play an important role in evaluating the fatigue behavior of 3D 4-directional braided composites in engineering.
Computational analysis of fatigue behavior of 3D 4-directional braided composites based on unit cell approach
S0965997814002087
A novel automated design space exploration (DSE) approach of multi-cycle transient fault detectable datapath based on multi-objective user constraints (power and delay) for application specific computing is presented in this paper. To the best of the authors' knowledge, this is the first work in the literature to solve this problem. The presented approach, driven by the bacterial foraging optimization (BFO) algorithm, provides easy flexibility to change direction in the design space through tumble/swim actions if a search path is found ineffective. The approach is highly capable of reaching the true Pareto optimal curve, indicated by the closeness of our non-dominated solutions to the true Pareto front and their uniform distribution over the Pareto curve (implying diversity). The contributions of this paper are as follows: (a) a novel exploration approach for generating a high quality fault detectable structure based on user provided requirements of power-delay, which is capable of transient error detection in the datapath; (b) a novel fault detection algorithm for handling single and multi-cycle transient faults. The results of the proposed approach indicated an average improvement in Quality of Results (QoR) of >9% and reduction in hardware usage of >23% compared to recent approaches that are closer in solving a similar objective.
Automated design space exploration of multi-cycle transient fault detectable datapath based on multi-objective user constraints for application specific computing
S0965997814002099
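The tumble/swim mechanics mentioned in the abstract above can be sketched generically. This is a bare single-objective chemotaxis loop on a toy cost function, not the paper's multi-objective DSE formulation: the reproduction/elimination phases, Pareto ranking, and power-delay cost model are omitted, and all parameter values are hypothetical.

```python
import math
import random

def bfo_minimize(cost, dim=2, n_bacteria=20, chem_steps=30, swim_len=4,
                 step=0.1, seed=0):
    """Minimal bacterial-foraging chemotaxis loop (tumble + swim only)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_bacteria)]
    best, best_cost = None, float("inf")
    for _ in range(chem_steps):
        for b in pop:
            c = cost(b)
            # Tumble: pick a random unit direction in the search space.
            d = [rng.gauss(0, 1) for _ in range(dim)]
            norm = math.sqrt(sum(x * x for x in d))
            d = [x / norm for x in d]
            for _ in range(swim_len):  # swim while the direction keeps paying off
                trial = [x + step * dx for x, dx in zip(b, d)]
                if cost(trial) < c:
                    b[:], c = trial, cost(trial)
                else:
                    break              # ineffective path: stop and re-tumble later
            if c < best_cost:
                best, best_cost = list(b), c
    return best, best_cost

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = bfo_minimize(sphere)
```

The "change direction if a search path is found ineffective" behavior the abstract credits to BFO is the `break` out of the swim loop followed by a fresh tumble on the next chemotactic step.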
Persistently high traffic volumes on U.S. highways have raised more public concern than ever about transportation safety. Over the years, various traffic barrier systems, including cable median barriers (CMBs), have been developed to reduce the number and severity of vehicle crashes. Despite their general effectiveness, there remains room for improvement, especially when CMBs are installed on unlevelled terrain such as sloped medians. The destructive nature of crashes imposes significant challenges on barrier design using full-scale physical testing; numerical simulations thus become a viable means to support crash analysis, performance evaluation, and barrier design. In this study, validated vehicle and CMB models were used to perform full-scale simulations of vehicle-CMB impacts. Several CMB designs, including the one currently in use, were evaluated under vehicular impacts at different velocities and angles. To address the challenge of modeling slender members such as cables and hook-bolts in contact analyses, an efficient beam-element contact model was employed. Different design options for cable height and spacing under various impact velocities and angles were investigated in this study.
Crash analysis and evaluation of cable median barriers on sloped medians using an efficient finite element model
S0965997814002105
The challenges of machining, particularly milling, glass fibre-reinforced polymer (GFRP) composites are their abrasiveness (which leads to excessive tool wear) and their susceptibility to workpiece damage when improper machining parameters are used. It is imperative that the condition of the cutting tool be monitored during machining of GFRP composites so as to compensate for the effect of tool wear on the machined components. Until recently, empirical data on tool wear monitoring of this material during end milling remained limited in the existing literature. Thus, this paper presents the development and evaluation of a tool condition monitoring technique using measured machining force data and Adaptive Network-Based Fuzzy Inference Systems (ANFIS) during end milling of GFRP composites. The proposed modelling approaches employ two different data partitioning techniques to improve the predictability of the machinability response. Results show that superior predictability of tool wear was observed when using feed force data for both data partitioning techniques. In particular, the ANFIS models were able to match the nonlinear relationship between tool wear and feed force far more effectively than the simple power-law regression trend. This was confirmed through two statistical indices, namely r 2 and root mean square error (RMSE), computed on training as well as checking datasets.
Monitoring of tool wear using measured machining forces and neuro-fuzzy modelling approaches during machining of GFRP composites
S0965997815000022
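The two statistical indices named in the abstract above are standard and easy to state concretely. A minimal sketch follows; the measured/predicted flank-wear values are hypothetical placeholders, not data from the paper.

```python
import math

def rmse(actual, predicted):
    """Root mean square error between measured and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. ANFIS-predicted flank wear values (mm).
measured  = [0.05, 0.09, 0.14, 0.20, 0.27]
predicted = [0.06, 0.08, 0.15, 0.19, 0.28]
r2 = r_squared(measured, predicted)
err = rmse(measured, predicted)
print(f"r^2 = {r2:.4f}, RMSE = {err:.4f}")
```

In a study like the one above, these indices would be evaluated separately on the training and checking datasets to detect overfitting of the fuzzy model.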
In this paper, a CFD (Computational Fluid Dynamics) based DG (Discontinuous Galerkin) method is introduced to solve the three-dimensional Maxwell's equations for complex geometries on unstructured grids. In order to reduce the computational expense, both a quadrature-free implementation and parallel computing based on domain decomposition are employed. On the far-field boundary, a non-reflecting boundary condition is implemented. Numerical integration, rather than the quadrature-free implementation, is used over the faces on the solid boundary to impose the electromagnetic solid boundary condition for perfectly conducting objects. Both benchmark examples and a complex-geometry case are tested with the CFD-based DG solver. Numerical results indicate that highly accurate results can be obtained with high-order approximations even on coarse grids, and that the present method is well suited to complex geometries. Furthermore, the CPU time cost and the speedup of the parallel computation are also evaluated.
A CFD-based high-order Discontinuous Galerkin solver for three dimensional electromagnetic scattering problems
S0965997815000034
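The distinction the abstract above draws, quadrature-free evaluation in the interior versus numerical integration on solid-boundary faces, rests on ordinary Gauss-Legendre quadrature over element faces. The sketch below integrates a test function along a straight 2D edge as a stand-in for the 3D boundary faces; the integrand and geometry are hypothetical, not taken from the paper.

```python
import numpy as np

def integrate_face(f, x0, x1, n_pts=3):
    """Gauss-Legendre quadrature of f over the straight edge x0 -> x1.

    The reference face is [-1, 1]; for a straight edge the Jacobian of the
    mapping is the constant |x1 - x0| / 2.
    """
    pts, wts = np.polynomial.legendre.leggauss(n_pts)
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    jac = np.linalg.norm(x1 - x0) / 2.0
    # Map reference points xi in [-1, 1] to physical points on the edge.
    xs = 0.5 * (1 - pts)[:, None] * x0 + 0.5 * (1 + pts)[:, None] * x1
    return sum(w * f(x) for w, x in zip(wts, xs)) * jac

# Sanity check: integrate s**2 along the edge from (0, 0) to (2, 0);
# the exact value is 8/3, and 3-point Gauss is exact for this degree.
val = integrate_face(lambda x: x[0] ** 2, (0.0, 0.0), (2.0, 0.0))
```

The quadrature-free alternative the abstract favors in the interior replaces such point-wise sums with precomputed exact integrals of the polynomial basis, trading generality of the integrand for speed.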
In order to accelerate the fast multipole boundary element method (FMBEM), a series of CUDA-based GPU parallel algorithms for the different parts of the FMBEM with level-skip M2L for 3D elasticity are presented, exploiting the intrinsic parallelism of the boundary elements and of the FMBEM tree structure. A rigid body motion method (RBMM) for the FMBEM is proposed, based on special displacement boundary conditions, to deal with strongly singular integration and free-term coefficients. The numerical example results show that our parallel algorithms significantly accelerate the FMBEM and can be applied to large-scale engineering problems, with wide applications in the future.
Graphics processing unit (GPU) accelerated fast multipole BEM with level-skip M2L for 3D elasticity problems
S0965997815000046
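The rigid-body-motion idea mentioned in the abstract above is a classical BEM device: a uniform rigid translation must produce no response, so the strongly singular diagonal entries of the collocation matrix can be recovered from the regular off-diagonal entries instead of being integrated directly. A scalar sketch follows; the matrix is random stand-in data (real FMBEM entries come from boundary integrals), and the elasticity case applies the same constraint block-wise for the three independent translations.

```python
import numpy as np

# Rigid-body-motion trick for a collocation BEM matrix H: enforcing
# H @ u_rigid = 0 for the rigid translation u_rigid = ones fixes the
# singular diagonal as H_ii = -sum_{j != i} H_ij.

rng = np.random.default_rng(0)
n = 6
H = rng.standard_normal((n, n))       # off-diagonal stand-in entries
np.fill_diagonal(H, 0.0)              # singular diagonal entries unknown
np.fill_diagonal(H, -H.sum(axis=1))   # determine them from the row sums

residual = np.abs(H @ np.ones(n)).max()
```

After the fill, every row of `H` annihilates the rigid translation exactly, which is the condition the singular diagonal terms must satisfy.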
For the analysis of noise problems in medium-to-high frequency ranges, the energy flow boundary element method (EFBEM) has been studied. EFBEM is a numerical implementation of energy flow analysis (EFA) that solves the energy governing equations using a boundary element method in complex structures. Based on EFBEM, noise prediction software named "noise analysis system by energy flow analysis" (NASEFA) was developed. For effective maintenance, NASEFA is composed of three main modules: the translator, the model converter, and the main solver. The translator converts the FE model to the NASEFA BE model, and the model converter converts the BE model to an EFBE model, including various data such as structural materials, medium properties, sources, and boundary conditions. NASEFA then solves for the acoustic energy density and intensity on the boundary and in the field. Moreover, it analyzes interior and exterior noise problems for single and multiple domains in two and three dimensions. Finally, to validate the software, interior and exterior noise predictions of various structures were performed. The results obtained with NASEFA were compared with those of a commercial SEA program and with experiments. From these comparative studies, the usefulness of NASEFA was established.
Development of a noise prediction software NASEFA and its application in medium-to-high frequency ranges