FileName | Abstract | Title
---|---|---|
S0377221714007115 | The theory of aggregation most often deals with measures of central tendency. However, sometimes a very different kind of synthesis of a numeric vector into a single number is required. In this paper we introduce a class of mathematical functions which aim to measure the spread or scatter of one-dimensional quantitative data. The proposed definition serves as a common, abstract framework for measures of absolute spread known from statistics, exploratory data analysis and data mining, e.g. the sample variance, standard deviation, range, interquartile range (IQR), median absolute deviation (MAD), etc. Additionally, we develop new measures of the diversity or consensus of experts’ opinions in group decision making problems. We investigate some properties of spread measures, show how they are related to aggregation functions, and indicate their new potentially fruitful application areas. | Spread measures and their relation to aggregation functions |
S0377221714007127 | In the vertical alignment phase of road design, one minimizes the cost of moving material between different sections of the road while maintaining safety and building code constraints. Existing vertical alignment models consider neither the side-slopes of the road nor natural blocks, such as rivers and mountains, in the construction area. The cost calculated without the side-slopes can have significant errors (more than 20 percent), and an earthwork schedule that does not consider the blocks is unrealistic. In this study, we present a novel mixed integer linear programming model for the vertical alignment problem that considers both of these issues. The numerical results show that the approximation of the side-slopes can generate solutions within an acceptable error margin specified by the user without increasing the time complexity significantly. | A mixed-integer linear programming model to optimize the vertical alignment considering blocks and side-slopes in road construction |
S0377221714007139 | In this paper we consider timetable design at a European freight railway operator. The timetable is designed by choosing the time of service for customer unit train demands among a set of discrete points. These discrete points all lie within a given time-window. The objective of the model is to minimize cost while adhering to constraints regarding infrastructure usage, demand coverage, and engine availability. The model is solved by a column generation scheme where feasible engine schedules are designed in a label setting algorithm with time-dependent cost and service times. | Freight railway operator timetabling and engine scheduling |
S0377221714007140 | In this paper, we address the impact of uncertainty introduced when experts complete pairwise comparison matrices, in the context of multi-criteria decision making. We first discuss how uncertainty can be quantified and modeled and then show how the probability of rank reversal scales with the number of experts. We consider the impact of various aspects which may affect the estimation of the probability of rank reversal in the context of pairwise comparisons, such as the uncertainty level, alternative preference scales and different weight estimation methods. We also consider the case where the comparisons are carried out in a fuzzy manner. It is shown that in most circumstances, augmenting the size of the expert group beyond 15 produces a small change in the probability of rank reversal. We next address the issue of how this probability can be estimated in practice, from information gathered simply from the comparison matrices of a single expert group. We propose and validate a scheme which yields an estimate for the probability of rank reversal and test the applicability of this scheme under various conditions. The framework discussed in the paper can allow decision makers to correctly choose the number of experts participating in a pairwise comparison and obtain an estimate of the credibility of the outcome. | Convergence properties and practical estimation of the probability of rank reversal in pairwise comparisons for multi-criteria decision making problems |
S0377221714007152 | We derive the proper form of the Akaike information criterion for variable selection for mixture cure models, which are often fit via the expectation–maximization algorithm. Separate covariate sets may be used in the mixture components. The selection criteria are applicable to survival models for right-censored data with multiple competing risks and allow for the presence of a non-susceptible group. The method is illustrated on credit loan data, with pre-payment and default as events and maturity as the non-susceptible case, and is used in a simulation study. | An Akaike information criterion for multiple event mixture cure models |
S0377221714007164 | Economic assessment of energy-related processes needs to adapt to the development of large-scale integration of renewable energies into the energy system. Flexible electrochemical processes, such as the electrolysis of water to produce hydrogen, are foreseen as cornerstones of renewable energy systems. These types of technologies require the current methods of energy storage scheduling and capacity planning to incorporate their distinct non-linear characteristics in order to fully assess their economic impact. A combined scheduling and capacity planning model for an innovative, flexible electricity-to-hydrogen-to-ammonia plant is derived in this paper. A heuristic is presented which translates the depicted non-convex, mixed-integer problem into a set of convex and continuous non-linear problems. These can be solved with commercially available solvers. The global optimum of the original problem is encircled by the heuristic and, as the numerical illustration with German electricity market data of 2013 shows, can be narrowed down and approximated very well. The results show that it is not only meaningful, but also feasible, to solve a combined scheduling and capacity problem on a convex non-linear basis for this and similar new process concepts. Application to other hydrogen-based concepts is straightforward, and application to other non-linear chemical processes is generally possible. | Combined scheduling and capacity planning of electricity-based ammonia production to integrate renewable energies |
S0377221714007176 | In this paper, we treat the problem of stochastic comparison of standby [active] redundancy at the component level versus the system level. In the case of standby redundancy, we present comparison results for both series systems and parallel systems in the sense of various stochastic orderings, for the matching spares case and the non-matching spares case, respectively. In the case of active redundancy, a likelihood ratio ordering result for series systems is presented for the matching spares case; for the non-matching spares case, a counterexample is provided to show that no similar result exists even for the hazard rate ordering. The results established here strengthen and generalize some of those known in the literature. Some numerical examples are also provided to illustrate the theoretical results. | Redundancy allocation at component level versus system level |
S0377221714007188 | This paper studies a firm’s time-to-market decision and subsequent sales channel, pricing and production decisions under three main sources of uncertainty: possibility of qualifying for lucrative sales channels, competitors’ time-to-market behavior and price-sensitive uncertain demand. In particular, we consider a firm that can potentially sell through two distinct channels. Selling through the primary channel requires the firm to first get its product qualified. The secondary channel does not require qualification. Prior to market entry, the firm performs product and process design activities which improve manufacturing yield and the chances of getting qualified for the primary sales channel. A long delay in market entry allows competitors to enter the market before the firm, reducing the firm’s market share. This delay also affects the firm’s sales channel strategy. While deciding when to enter the market, the firm also needs to decide what price to charge and how much to produce at each period of a finite planning horizon. Demand distributions depend on the product’s price through general stochastic demand functions. Pricing and production decisions can be specified dynamically as a function of the state of the system and they are intertwined with the time-to-market decision. The paper provides a unified model that captures the key relationships and trade-offs among time-to-market, sales channel, pricing and production decisions. Explicitly modeling the linkages among these key decisions enables us to characterize and quantify their joint role in profit generation. This paper provides managers with a tool and a process that can guide them in determining an optimal policy for market-timing, pricing and production decisions that maximize firms’ expected profits. | Integrating dynamic time-to-market, pricing, production and sales channel decisions |
S0377221714007206 | To assess a product's reliability for subsequent managerial decisions, such as designing an extended warranty policy and developing a maintenance schedule, accelerated degradation tests (ADTs) have been used to obtain reliability information in a timely manner. In particular, the Step-Stress ADT (SSADT) is one of the most commonly used stress loadings for shortening test duration and reducing the required sample size. Although it was demonstrated in many previous studies that the optimum SSADT plan is actually a simple SSADT plan using only two stress levels, most of these results were obtained numerically on a case-by-case basis. In this paper, we formally prove that, under the Wiener degradation model with a drift parameter that is a linear function of the (transformed) stress level, a multi-level SSADT plan will degenerate to a simple SSADT plan under many commonly used optimization criteria and some practical constraints. We also show that, under our model assumptions, any SSADT plan with more than two distinct stress levels cannot be optimal. These results are useful when searching for an optimum SSADT plan, since one needs to focus only on simple SSADTs. A numerical example is presented to compare the efficiency of the proposed optimum simple SSADT plans with an SSADT plan proposed in a previous study. In addition, a simulation study is conducted to investigate the efficiency of the proposed SSADT plans when the sample size is small. | Optimum step-stress accelerated degradation test for Wiener degradation process under constraints |
S0377221714007218 | In this paper the notion of restricted dissimilarity function is discussed and some general results are shown. The relation between the concepts of restricted dissimilarity function and penalty function is presented. A specific model for the construction of penalty functions by means of a wide class of restricted dissimilarity functions based upon automorphisms of the unit interval is studied. A characterization theorem of the automorphisms which give rise to two-dimensional penalty functions is proposed. A generalization of the previous theorem to any dimension n > 2 is also provided. Finally, a non-convex example of a generator of penalty functions of arbitrary dimension is illustrated. | Penalty functions based upon a general class of restricted dissimilarity functions |
S0377221714007231 | This study examines supply chain demand collaboration between a manufacturer and a retailer. We study how the timing of collaboration facilitates the manufacturer’s production decisions when the information exchanged in the collaboration is asymmetric. We investigate two collaboration mechanisms, ‘Too Little’ and ‘Too Late’, depending on the timing of information sharing between the manufacturer and the retailer. Our results indicate that early collaboration, as in the ‘Too Little’ mechanism, leads to a stable production schedule, which decreases the need for production adjustment when production cost information becomes available; whereas late collaboration, as in the ‘Too Late’ mechanism, enhances the flexibility of production adjustment when demand information warrants it. In addition, asymmetric demand information confounds production decisions in both mechanisms; the manufacturer has to provide proper incentives to ensure truthful information sharing in collaboration. Information asymmetry might also reduce the difference in production decisions between the ‘Too Little’ and ‘Too Late’ collaboration mechanisms. Numerical analysis is further conducted to demonstrate the performance implications of the collaboration mechanisms on the supply chain. | ‘Too Little’ or ‘Too Late’: The timing of supply chain demand collaboration |
S0377221714007243 | This paper sheds new light on the relationship between inputs and outputs in the framework of the educational production function. In particular, it is geared toward gaining a better understanding of which factors may be affected in order to achieve an optimal educational output level. With this objective in mind, we analyze teacher-based assessments (actual marks) in three different subjects using a multiobjective schema. For much of the analysis we use data from a recent (2010) survey – ESOC10 – linked with the results from an educational assessment program conducted among 11 and 15 year-old students and with the administrative records on teacher-based scores. Following the statistical and econometric analysis of these data, they are used to build a multiobjective mixed integer model. A reference point approach is used to determine the profile of, potentially, the most “successful and balanced” students in terms of educational outcomes. This kind of methodology in multiobjective programming allows the generation of “very balanced” solutions in terms of the objective values (subjects). Finally, a sensitivity analysis is used to determine policies that can be carried out in order to improve the performance levels of primary and secondary education students. In particular, policy makers should be more concerned with the need to promote certain cultural habits – such as reading – from both the students’ and the parents’ side. Additionally, policy efforts should be focused on making the vocational pathways available to Spanish youth more appealing, with the aim of taking advantage of the particular skills of students not succeeding in the academic track. | On the potential balance among compulsory education outcomes through econometric and multiobjective programming analysis |
S0377221714007255 | With the rapid development of Web 2.0 applications, social media have increasingly become a major factor influencing the purchase decisions of customers. Longitudinal individual and engagement behavioral data generated on social media sites pose challenges for integrating diverse heterogeneous data to improve prediction performance in customer response modeling. In this study, a hierarchical ensemble learning framework is proposed for behavior-aware user response modeling using diverse heterogeneous data. In the framework, a general-purpose data transformation and feature extraction strategy is developed to transform the heterogeneous high-dimensional multi-relational datasets into customer-centered high-order tensors and to extract attributes. An improved hierarchical multiple kernel support vector machine (H-MK-SVM) is developed to integrate the external, tag and keyword, individual behavioral and engagement behavioral data for feature selection from multiple correlated attributes and for ensemble learning in user response modeling. The subagging strategy is adopted to deal with large-scale imbalanced datasets. Computational experiments using a real-world microblog database were conducted to investigate the benefits of integrating diverse heterogeneous data. The results show that the improved H-MK-SVM using longitudinal individual behavioral data exhibits superior performance over some commonly used methods using aggregated behavioral data, and that the improved H-MK-SVM using engagement behavioral data performs better than using only the external and individual behavioral data. | Behavior-aware user response modeling in social media: Learning from diverse heterogeneous data |
S0377221714007267 | The selection of either a pull or a push price promotion has mainly been investigated in contexts where manufacturers offer deals to consumers at the time of purchase or offer trade deals to retailers. This paper extends this framework to the case where manufacturers can offer trade deals, rebate-like promotions to consumers such as on-pack coupons that stimulate the first and second purchases, or a combination of the two promotion vehicles. It is demonstrated that the decision to implement any of the three promotion options critically depends, among other factors, on the percentage of first-time buyers who redeem their coupons at the second purchase. In particular, a necessary condition to simultaneously offer both a trade deal and coupons is to have a positive coupon redemption rate. When possible, manufacturers prefer on-pack coupons over trade deals to take advantage of slippage and to further increase overall demand via coupon-induced repeat purchases. Manufacturers are more likely to take the lion’s share of channel profits. | Trade deals and/or on-package coupons |
S0377221714007279 | Today, an increasing number of companies have publicized policies to improve corporate social responsibility performance related to human resources in their supply chain management practices, such as safer and healthier working conditions, higher payment, better benefits, and fewer working hours. However, there are still concerns about the sincerity and effectiveness of such efforts. This could be attributed to the fact that the economic benefit of those improvements is not clearly structured as an integral part of the companies’ decision-making process. In this paper, we develop a stylized analytic framework that links a firm's supply chain social performance in people with its economic performance in profit. Specifically, we examine how consumers’ responses to the outbreak of social misconduct in supply chains affect competing firms’ market segmentation and profit and subsequently provide economic incentive for proactive social responsibility investment. Closed-form optimal solutions for the proactive social responsibility strategy are found for our model setup. Numerical tests and sensitivity analyses have been conducted to study the effects of various factors on the firms’ supply chain social responsibility strategies; these factors include consumers’ ethical disposition, the social environment, and consumers’ perception of a product's functional versus social value. The results of our analysis demonstrate that proactive investment in supply chain social responsibility can enhance a firm's competitive advantage and economic performance, thereby suggesting a profit-driven approach to achieving supply chain social responsibility and sustainability. “You have to be able to do good to do well.” –Julius Rosenwald | A profit-driven approach to building a “people-responsible” supply chain |
S0377221714007280 | We propose a profit maximization model for the decision support system of a firm that wishes to establish or rationalize a multinational manufacturing and distribution network to produce and deliver finished goods from sources to consumers. The model simultaneously evaluates all traditional location factors in a manufacturing and distribution network design problem and sets intra-firm transfer prices that take account of tax and exchange rate differentials between countries. Utilizing the generalized Benders decomposition approach, we exploit the partition between the product flow and the cash allocation (i.e., the pricing and revenue assignment) decisions in the supply chain to find near optimal model solutions. Our proposed profit maximizing strategic planning model produces intuitive results. We offer computational experiments to illustrate the potentially valuable guidance the model can provide to a firm's supply chain design strategic planning process. | Formation of a strategic manufacturing and distribution network with transfer prices |
S0377221714007292 | This paper provides an updated overview of the rapidly developing research field of multi-attribute online reverse auctions. Our focus is on academic research, although we briefly comment on the state-of-the-art in practice. The role that Operational Research plays in such auctions is highlighted. We review decision- and game-theoretic research, experimental studies, information disclosure policies, and research on integrating and comparing negotiations and auctions. We conclude by discussing implementation issues regarding online procurement auctions in practice. | Multi-attribute online reverse auctions: Recent research trends |
S0377221714007309 | This article focuses on the evaluation of moves for the local search of the job-shop problem with the makespan criterion. We reason that the omnipresent ranking of moves according to their resulting value of a criterion function makes the local search unnecessarily myopic. Consequently, we introduce an alternative evaluation that relies on a surrogate quantity of the move’s potential, which is related to, but not strongly coupled with, the bare criterion. The approach is confirmed by empirical tests, where the proposed evaluator delivers a new upper bound on the well-known benchmark test yn2. The line of argumentation also shows that, by sacrificing accuracy, the established makespan estimators unintentionally improve on the move evaluation in comparison to the exact makespan calculation, in contrast to the belief that reliance on estimation degrades the optimization results. | Job-shop local-search move evaluation without direct consideration of the criterion’s value |
S0377221714007310 | Recently, it has been pointed out that transport models should reflect all significant traveler choice behavior. In particular, trip generation, trip distribution, modal split and route choice should be modeled in a consistent process based on the equilibrium between transport supply and travel demand. In this paper a general fixed-point approach that allows dealing with multi-user stochastic equilibrium assignment with variable demand is presented. The main focus is on investigating the effectiveness of internal and external approaches and of different algorithmic specifications based on the method of successive averages within the internal approach. The vector demand function is assumed non-separable, non-symmetric cost functions are adopted, and implementation issues, such as the updating step and the convergence criterion, are investigated. In particular the aim is threefold: (i) to compare the internal and the external approaches; (ii) to investigate the effectiveness of different algorithmic specifications for solving the variable demand equilibrium assignment problem through the internal approach; (iii) to investigate the incidence of the number of links with non-separable and/or asymmetrical cost functions. The proposed analyses are carried out on two real-scale urban networks of medium-size urban contexts in Italy. | Stochastic equilibrium assignment with variable demand: Theoretical and implementation issues |
S0377221714007322 | We introduce debt issuance limit constraints along with market debt and bank debt to consider how financial frictions affect investment, financing, and debt structure strategies. Our model provides four important results. First, a firm is more likely to issue market debt than bank debt when its debt issuance limit increases. Second, investment strategies are nonmonotonic with respect to debt issuance limits. Third, debt issuance limits distort the relationship between a firm’s equity value and investment strategy. Finally, debt issuance limit constraints lead to debt holders experiencing low risk and low returns. That is, the more severe the debt issuance limits are, the lower are the credit spreads and default probabilities. Our theoretical results are consistent with stylized facts and empirical results. | Investment timing, debt structure, and financing constraints |
S0377221714007334 | The traveling salesman problem is a fundamental combinatorial optimization model that has been studied in the operations research community for nearly half a century, yet there is surprisingly little literature that addresses uncertainty and multiple objectives in it. A novel TSP variation, called the uncertain multiobjective TSP (UMTSP), with uncertain variables on the arcs, is proposed in this paper on the basis of uncertainty theory, and a new solution approach, named the uncertain approach, is applied to obtain Pareto-efficient routes in the UMTSP. Considering the uncertain and combinatorial nature of the UMTSP, a new ABC algorithm incorporating reverse, crossover and mutation operators is designed for this problem; it outperforms other algorithms in a performance comparison on three benchmark TSPs. Finally, a new benchmark UMTSP case study is presented to illustrate the construction and solution of the UMTSP, which shows that the optimal route in the deterministic TSP can be a poor route in the UMTSP. | Uncertain multiobjective traveling salesman problem |
S0377221714007346 | We consider a risk-averse entrepreneur who invests in a project with idiosyncratic risk. In contrast to the literature, we assume the entrepreneur is unable to get a loan from a bank directly because of the entrepreneur’s low creditworthiness, and so an innovative financial contract, named an equity-for-guarantee swap, is signed among a bank, an insurer, and the entrepreneur. It is shown that the new swap leads to higher leverage, which brings more diversification and tax benefits. The new swap not only solves the problem of financing constraints, but also significantly improves the welfare level of the entrepreneur. The growth of the welfare level increases dramatically with the risk aversion index and the volatility of the idiosyncratic risk. | Entrepreneurial finance with equity-for-guarantee swap and idiosyncratic risk |
S0377221714007358 | We contribute to the literature by developing a normative theory of the relationship between stock and mutual insurers based on a contingent claims framework. To consistently price policies provided by firms in these two legal forms of organization, we extend the work of Doherty and Garven (1986) to the mutual case, thus ensuring that the formulae for the stock insurer are nested in our more general model. This set-up allows us to separately consider the ownership and policyholder stakes included in the mutual insurance premium and explicitly takes into account the right to charge additional premiums in times of financial distress, restrictions on the ability of members to realize the value of their equity stake, as well as relevant market frictions. A numerical implementation of our model shows that, for the premiums of stock and mutual insurers to be equal, the latter would need to hold comparatively less equity capital. We then evaluate panel data for the German motor liability insurance sector and demonstrate that observed premiums are not consistent with our normative findings. The combination of theory and empirical evidence suggests that policies offered by stock insurers are overpriced relative to policies of mutuals. Consequently, we suspect considerable wealth transfers between the stakeholder groups. | Stock vs. mutual insurers: Who should and who does charge more? |
S0377221714007371 | In this paper we analyze a situation in which several firms deal with inventory problems concerning the same type of product. We consider that each firm uses its limited-capacity warehouse for storing purposes and that it faces an economic order quantity model where storage costs are irrelevant (and assumed to be zero) and shortages are allowed. In this setting, we show that firms can save costs by placing joint orders, and we obtain an optimal order policy for the firms. Besides, we identify an associated class of cost games which we show to be concave. Finally, we introduce and study a rule to share the costs among the firms which provides core allocations and can be easily computed. | Cooperation on capacitated inventory situations with fixed holding costs |
S0377221714007383 | Quality function deployment (QFD) is a highly effective customer-driven quality system tool typically applied to fulfill customer needs or requirements (CRs). A crucial step in QFD is to derive the prioritization of design requirements (DRs) from the CRs for a product. However, effective prioritization of DRs is seriously challenged by two types of uncertainty: human subjective perception and customer heterogeneity. This paper proposes a novel two-stage group decision-making approach to simultaneously address the two types of uncertainty underlying QFD. The first stage determines the fuzzy preference relations of different DRs with respect to each customer based on the order-based semantics of linguistic information. The second stage determines the prioritization of DRs by synthesizing all customers’ fuzzy preference relations into an overall one by fuzzy majority. Two examples, a Chinese restaurant and a flexible manufacturing system, are used to illustrate the proposed approach. The restaurant example is also used for comparison with three existing approaches. Implementation results show that the proposed approach can eliminate the burden of quantifying qualitative concepts and can model customer heterogeneity and the design team’s preference. Owing to its simplicity, our approach can reduce the cognitive burden on the QFD planning team and offers practical convenience in QFD planning. Extensions to the proposed approach are also given to address application contexts involving a wider set of HOQ elements. | A group decision-making approach to uncertain quality function deployment based on fuzzy preference relation and fuzzy majority |
S0377221714007395 | In this paper we consider a single-item inventory system with lost sales and fixed order cost. We numerically illustrate the lack of a clear structure in optimal replenishment policies for such systems. However, policies with a simple structure are preferred in practical settings. Examples of replenishment policies with a simple parametric description are the (s, S) policy and the (s, nQ) policy. Besides these known policies in literature, we propose a new type of replenishment policy. In our modified (s, S) policy we restrict the order size of the standard (s, S) policy to a maximum. This policy results in near-optimal costs. Furthermore, we derive heuristic procedures to set the inventory control parameters for this new replenishment policy. In our first approach we formulate closed-form expressions based on power approximations, whereas in our second approach we derive an approximation for the steady-state inventory distribution. As a result, the latter approach could be used for inventory systems with different objectives or service level constraints. The numerical experiments illustrate that the heuristic procedures result on average in 2.4 percent and 1.8 percent cost increases, respectively, compared to the optimal replenishment policy. Therefore, we conclude that the heuristic procedures are very effective to set the inventory control parameters. | Parametric replenishment policies for inventory systems with lost sales and fixed order cost |
S0377221714007401 | Forest fires can impose substantial social, environmental and economic burdens on the communities they affect. Well managed and timely fire suppression can demonstrably reduce the area burnt and minimise consequent losses. In order to effectively coordinate emergency vehicles for fire suppression, it is important to have an understanding of the time that elapses between vehicle dispatch and arrival at a fire. Forest fires can occur in remote locations that are not necessarily directly accessible by road. Consequently, estimations of vehicular travel time may need to consider both on-road and off-road travel. We introduce and demonstrate a novel framework for estimating travel times and determining optimal travel routes for vehicles travelling from bases to forest fires where both on-road and off-road travel may be necessary. A grid-based, cost-distance approach was utilised, in which a travel time surface was computed indicating travel time from the reported fire location. Times were calculated using a discrete event simulation cellular automata (CA) model, with the CA progressing outwards from the fire location. Optimal fastest travel paths were computed by recognising chains of parent–child relationships. Our method achieved results comparable to traditional network analysis techniques when considering travel along roads; however, the method was also demonstrated to be effective in estimating travel times and optimal routes in complex terrain. | Using discrete event simulation cellular automata models to determine multi-mode travel times and routes of terrestrial suppression resources to wildland fires |
S0377221714007589 | This paper first introduces an original trajectory model using B-splines and a new semi-infinite programming formulation of the separation constraint involved in air traffic conflict problems. A new continuous optimization formulation of the tactical conflict-resolution problem is then proposed. It involves very few optimization variables in that one needs only one optimization variable to determine each aircraft trajectory. Encouraging numerical experiments show that this approach is viable on realistic test problems. Not only does one not need to rely on the traditional, discretized, combinatorial optimization approaches to this problem, but, moreover, local continuous optimization methods, which require relatively fewer iterations and thereby fewer costly function evaluations, are shown to improve the performance of the overall global optimization of this non-convex problem. | Solving air traffic conflict problems via local continuous optimization |
S0377221714007590 | Packaging links the entire supply chain and coordinates all participants in the process to give a flexible and effective response to customer needs in order to maximize satisfaction at optimal cost. This research proposes an optimization model to define the minimum total cost combination of outer packs in various distribution channels with the least opening ratio (the percentage of total orders requiring the opening of an outer pack to exactly meet the demand). A simple routine to define a feasible start point is proposed to reduce the complexity caused by the number of possible combinations. A Fast-Moving Consumer Goods company in an emerging economy (Colombia) is analyzed to test the proposed methodology. The main findings are useful for emerging markets in that they provide significant savings in the whole supply chain and insights into the packaging problem. | A cost-efficient method to optimize package size in emerging markets |
S0377221714007607 | Studies have widely used stochastic frontier models to assess financial efficiency; however, traditional static approaches do not take dynamic characteristics of financial systems into account. This article develops a dynamic stochastic frontier model to evaluate regional financial efficiency and provides an empirical test of the model by using panel data of 62 Chinese counties during the 2001–2010 period. The model measures the dynamic impact of the input–output variables and environmental variables on financial efficiency, and allows for the separation of technical change from the change in technical efficiency. The results show that the dynamic model provides a better fit to the data than the static model. In addition, a gradient difference emerges in the regional financial efficiency among the six major regions of China. The results offer practical implications for the development of regional financial services in China, as well as other developing countries and emerging economies. | A dynamic stochastic frontier model to evaluate regional financial efficiency: Evidence from Chinese county-level panel data |
S0377221714007619 | The study addresses a distributor's delivery strategy problem with consideration of carbon emissions, retailers’ time-dependent demands and demand–supply interactions. A mathematical programming model is formulated to solve for the optimal number and time-windows of service cycles. A case study is presented. The results show that the distributor would adopt a frequent delivery strategy if carbon taxes are insufficiently high. Providing the retailers with price discounts does not work, since retailers value the delays in receiving their ordered products more than the compensation for less frequent delivery offered by the distributor in response to carbon taxes. Models with demand–supply interactions can result in larger profit and market share than those without demand–supply interactions. This study provides new insights for distributors, retailers, and regulators on greening logistics transport. | Optimal delivery strategies considering carbon emissions, time-dependent demands and demand–supply interactions |
S0377221714007620 | In a bankruptcy situation individuals are not equally affected, since each one has their own specific characteristics. These aspects cannot be ignored and may justify an allocation bias in favor of or against some individuals. This paper develops a theory of differentiation in claims problems that considers not only the vector of claims, but also some justified differentiating criteria based on other characteristics (wealth, net income, GDP, etc.). Accordingly, we propose some progressive transfers from richer to poorer claimants with the purpose of distributing the damage as evenly as possible. Finally, we characterize our solution by means of the Lorenz criterion. Endogenous convex combinations between solutions are also considered. | Why and how to differentiate in claims problems? An axiomatic approach |
S0377221714007632 | In this paper we consider characterizations of the robust uncertainty sets associated with coherent and distortion risk measures. In this context we show that if we are willing to enforce the coherent or distortion axioms only on random variables that are affine or linear functions of the vector of random parameters, we may consider some new variants of the uncertainty sets determined by the classical characterizations. We also show that in the finite probability case these variants are simple transformations of the classical sets. Finally we present results of computational experiments that suggest that the risk measures associated with these new uncertainty sets can help mitigate estimation errors of the Conditional Value-at-Risk. | Restricted risk measures and robust optimization |
S0377221714007644 | A novel self-adaptive variable-size multi-objective differential evolution algorithm is presented to find the best reconfiguration of existing on-orbit satellites for particular targets on the ground when an emergent requirement arises in a short period. A main contribution of this study is that three coverage metrics are designed to assess the performance of the reconfiguration. The proposed algorithm utilizes a fixed-length chromosome encoding scheme combined with an expression vector, together with modified initialization, mutation, crossover and selection operators, to search for the optimal reconfiguration structure. Multi-subpopulation diversity initialization is adopted first; then mutation based on an estimation of distribution algorithm and adaptive crossover operators are defined to manipulate variable-length chromosomes; finally, a new selection mechanism is employed to generate well-distributed individuals for the next generation. The proposed algorithm is applied to three characteristically different case studies, with the objective of improving the performance with respect to specified targets while minimizing fuel consumption and maneuver time. The results show that the algorithm can effectively find approximate Pareto solutions under different topological structures. A comparative analysis demonstrates that the proposed algorithm outperforms two other related multi-objective evolutionary optimization algorithms in terms of quality, convergence and diversity metrics. | Reconfiguration of satellite orbit for cooperative observation using variable-size multi-objective differential evolution |
S0377221714007656 | Distribution companies that serve a very large number of customers, courier companies for example, often partition the geographical region served by a depot into zones. Each zone is assigned to a single vehicle and each vehicle serves a single zone. An alternative approach is to partition the distribution region into smaller microzones that are assigned to a preferred vehicle in a so-called tactical plan. The moment the workload in each microzone is known, the microzones can be reassigned to vehicles in such a way that the total distance traveled is minimized, the workload of the different vehicles is balanced, and as many microzones as possible are assigned to their preferred vehicle. In this paper we model the resulting microzone-based vehicle routing problem as a multi-objective optimization problem and develop a simple yet effective algorithm to solve it. We analyze this algorithm and discuss the results it obtains. | Multi-objective microzone-based vehicle routing for courier companies: From tactical to operational planning |
S0377221714007668 | This paper develops primal and dual versions of the dynamic Luenberger productivity growth measures that are based on the dynamic directional distance function and intertemporal cost minimization, respectively. The empirical application focuses on panel data of Dutch dairy farms over the period 1995–2005. Primal dynamic Luenberger productivity growth averages 1.5 percent annually in the period under investigation, with technical change being the main driver of annual change. Dual dynamic Luenberger productivity growth is −0.1 percent in the same period. Improvements in technical inefficiency and technical change are partly counteracted by deteriorations of allocative inefficiency, with large dairy farms presenting a slightly higher productivity growth than small dairy farms. | Primal and dual dynamic Luenberger productivity indicators |
S0377221714007681 | This paper analyzes three major asymmetries in stock markets, namely, asymmetry in return reversals, asymmetry in return persistency and asymmetry in return volatilities. It argues the case for return persistency, as stock returns do not always reverse, in theory and in practice. Patterns in return-volatility asymmetries are conjectured and investigated jointly, under different stock market conditions. Results from modeling the world's major stock return indexes lend support to the propositions of the paper. Return reversal asymmetry is illusory, arising from ambiguous parameter estimates and misleading interpretations of parameter signs. Asymmetry in return persistency, though still weak, is more prevalent. | Asymmetries in stock markets |
S0377221714007693 | The classical multi-level capacitated lot-sizing problem formulation is often not suitable to correctly capture resource requirements and precedence relations. Depending on lead time assumptions, either the model provides infeasible production plans or plans with costly needless inventory. We tackle this issue by explicitly modeling these two aspects and the synchronization of batches of products in the multi-level lot-sizing and scheduling formulation. Two models are presented; one considering batch production and the other one allowing lot-streaming. Comparisons with traditional models demonstrate the capability of the new approach in delivering more realistic results. The generated production plans are always feasible and cost savings of 30–40 percent compared to classical models are observed. | Lead time considerations for the multi-level capacitated lot-sizing problem |
S0377221714007711 | The working environment affects human health and performance. Human Factors (HF) scholars aim to elaborate on this effect. However, HF studies mostly focus on employee occupational health and safety elements and their consequences for employee health conditions. They do not take into account Work-related Ill Health (WIH) risk factor effects at the system level. In contrast, operations research studies usually assume that the operators involved in a system have identical performance and rarely consider WIH risk factor effects when optimizing system performance. This paper proposes a 2-state Markov chain model to quantify WIH risk factor effects and thereby estimate their economic impacts when optimizing a serial assembly line’s performance. The results of this research demonstrate an increase of between 0.52 percent and 8 percent in the total cost of the system as WIH risk levels change. This paper opens a new window on understanding the economic consequences of WIH effects, and on enhancing system performance by investigating working conditions. | Investigating work-related ill health effects in optimizing the performance of manufacturing systems |
S0377221714007723 | Building on recent work by the authors, we consider the problem of performing multiobjective optimization when the objective functions of a problem have differing evaluation times (or latencies). This has general relevance to applications since objective functions do vary greatly in their latency, and there is no reason to expect equal latencies for the objectives in a single problem. To deal with this issue, we provide a general problem definition and suitable notation for describing algorithm schemes that can use different evaluation budgets for each objective. We propose three schemes for the bi-objective version of the problem, including methods that interleave the evaluations of different objectives. All of these can be instantiated with existing multiobjective evolutionary algorithms (MOEAs). In an empirical study we use an indicator-based evolutionary algorithm (IBEA) as the MOEA platform to study performance on several benchmark test functions. Our findings generally show that the default approach of going at the rate of the slow objective is not competitive with our more advanced ones (interleaving evaluations) for most scenarios. | Multiobjective optimization: When objectives exhibit non-uniform latencies |
S0377221714007735 | We consider the induction of ordinal classification rules, which assign objects to preference-ordered decision classes, within the dominance-based rough set approach. In order to extract such rules, it is necessary to define dominance inconsistencies with respect to a set of condition attributes containing at least one ordinal condition attribute. Furthermore, it is also assumed that we know whether there exist increasing or decreasing monotonicity relationships between the values of ordinal condition and decision attributes. Very often, however, this information is unknown a priori. One solution to this issue is to transform the ordinal condition attributes with unknown directions of preference into pairs of attributes with supposed inverse monotonic relationships. Both local and global monotonicity relationships can be represented by decision rules induced from transformed decision tables. However, in some cases, transforming a decision table in this way is overly complex. In this paper, we propose inconsistency rates based on dominance and fuzzy preference relations that can discover monotonic relationships directly from data rather than from induced decision rules. Moreover, we propose a refined transformation method that introduces an additional monotonicity check using these inconsistency rates to determine whether an ordinal condition attribute should be cloned or not. Experiments are also provided to evaluate the usefulness of the refined transformation method. | Induction of ordinal classification rules from decision tables with unknown monotonicity |
S0377221714007759 | The bipartite Boolean quadratic programming problem (BBQP) is a generalization of the well-studied Boolean quadratic programming problem. The model has a variety of real-life applications; however, empirical studies of the model are not available in the literature, except in a few isolated instances. In this paper, we develop efficient heuristic algorithms based on tabu search, very large scale neighborhood (VLSN) search, and a hybrid algorithm that integrates the two. The computational study establishes that effective integration of simple tabu search with VLSN search results in superior outcomes, and suggests the value of such an integration in other settings. Complexity analysis and implementation details are provided along with conclusions drawn from experimental analysis. In addition, we obtain solutions better than the best previously known for almost all medium and large size benchmark instances. | Integrating tabu search and VLSN search to develop enhanced algorithms: A case study using bipartite boolean quadratic programs |
S0377221714007760 | We consider a family of composite bivariate distributions, or probability mass functions (pmfs), with uniform marginals for simulating optimization-problem instances. For every possible population correlation, except the extreme values, there are an infinite number of valid joint distributions in this family. We quantify the entropy for all member distributions, including the special cases under independence and both extreme correlations. Greater variety is expected across optimization-problem instances simulated based on a high-entropy pmf. We present a closed-form solution to the problem of finding the joint pmf that maximizes entropy for a specified population correlation, and we show that this entropy-maximizing pmf belongs to our family of pmfs. We introduce the entropy range as a secondary indicator of the variety of instances that may be generated for a particular correlation. Finally, we discuss how to systematically control entropy and correlation to simulate a set of synthetic problem instances that includes challenging examples and examples with realistic characteristics. | A family of composite discrete bivariate distributions with uniform marginals for simulating realistic and challenging optimization-problem instances |
S0377221714007772 | In the airline industry, the ticket price set for each flight directly affects the number of people who will later try to buy a ticket. Depending on the willingness-to-pay of the customers, the flight might take off with empty seats or with seats sold at a lower price. Therefore, based on the behavior of the customers, a price must be fixed for each type of product in each period. We propose a stochastic dynamic pricing model to solve this problem, applying phase-type distributions and renewal processes to model the inter-arrival time between two customers who book a ticket and the probability that a customer buys a ticket. We test this model on a real-world case where, as a result, revenue increases by 31 percent on average. | A stochastic dynamic pricing model for the multiclass problems in the airline industry |
S0377221714007784 | Environmental strains are causing consumers to trade up to greener alternatives, and many brown products are losing market coverage to premium-priced green rivals. In order to tackle this threat, many companies currently offering only brown products are contemplating the launch of a green product to complement their product portfolio. This paper provides strategic insights into and tactical ramifications of expanding a brown product line with a new green product. Our analysis explicitly incorporates a segmented consumer market where individual consumers may value the same product differently, the economies of scale and the learning effects associated with new green products, and capacity constraints for the current production system. It is shown that a single pricing scheme for the new green product limits a firm’s ability to appropriate the value different customers will relinquish in a segmented market and/or to avoid cannibalization. A two-level pricing structure can diminish and even completely avoid the salience of cannibalization. However, when resources are scarce, a firm can never protect its products from the threat of cannibalization merely by revising the pricing structure, which can spell the end of its brown product’s presence in the market or preclude the firm from launching the green product. At this point, the degree of cannibalization is higher for the brown product when the green product offers a sufficiently differentiated proposition to green-segment consumers. | Pricing, market coverage and capacity: Can green and brown products co-exist? |
S0377221714007796 | Technological innovations in warehouse automation systems, such as the Autonomous Vehicle based Storage and Retrieval System (AVS/RS), are geared towards achieving the greater operational efficiency and flexibility that will be necessary in warehouses of the future. An AVS/RS relies on autonomous vehicles and lifts for the horizontal and vertical transfer of unit-loads, respectively. To implement a new technology such as AVS/RS, the choice of design variable settings, the interactions among the design variables, and the design trade-offs need to be well understood. In particular, design decisions such as the choice of vehicle dwell-point and the location of cross-aisles could significantly affect the performance of an AVS/RS. Hence, we investigate the effect of these design decisions using customized analytical models based on multi-class semi-open queuing network theory. Numerical studies suggest that the average percentage reduction in storage and retrieval transaction times with an appropriate choice of dwell-point is about 8 percent and 4 percent, respectively. While the end-of-aisle location of the cross-aisle is commonly used in practice, our model suggests that there exists a better cross-aisle location within a tier (about 15 percent from the end of the aisle); however, the cycle time benefit of choosing the optimal cross-aisle location over the end-of-aisle location is marginal. Detailed simulations also indicate that the analytical model yields fairly accurate results. | Queuing models to analyze dwell-point and cross-aisle location in autonomous vehicle-based warehouse systems |
S0377221714007802 | The Linear Ordering Problem is a popular combinatorial optimisation problem which has been extensively addressed in the literature. However, in spite of its popularity, little is known about the characteristics of this problem. This paper studies a procedure to extract static information from an instance of the problem, and proposes a method to incorporate the obtained knowledge in order to improve the performance of local search-based algorithms. The procedure introduced identifies the positions where the indexes cannot generate local optima for the insert neighbourhood, and thus cannot generate globally optimal solutions. This information is then used to propose a restricted insert neighbourhood that discards the insert operations which move indexes to positions where optimal solutions are not generated. In order to measure the efficiency of the proposed restricted insert neighbourhood, two state-of-the-art algorithms for the LOP that include local search procedures have been modified. The experiments conducted confirm that the restricted versions of the algorithms systematically outperform the classical designs when a maximum number of function evaluations is considered as the stopping criterion. The statistical test included in the experimentation reports significant differences in all cases, which validates the efficiency of our proposal. Moreover, additional experiments comparing execution times reveal that the restricted approaches are faster than their counterparts on most of the instances. | The linear ordering problem revisited |
S0377221714007814 | The rectangle packing area minimization problem is a key sub-problem of floorplanning in VLSI design. This problem places a set of axis-aligned two-dimensional rectangular items of given sizes onto a rectangular plane such that no two items overlap and the area of the enveloping rectangle is minimized. This paper presents a dynamic reduction algorithm that transforms an instance of the original problem into a series of instances of the rectangle packing problem by dynamically determining the dimensions of the enveloping rectangle. We define an injury degree to evaluate the possible negative impact of candidate placements, and we propose a least-injury-first approach for solving the rectangle packing problem. Next, we incorporate a compacting approach to compact the resulting layout by alternately moving the items left and down toward a bottom-left corner, so that we may obtain a smaller enveloping rectangle. We also show the feasibility, compactness, non-inferiority, and halting properties of the compacting approach. Comprehensive experiments were conducted on 11 MCNC and GSRC benchmarks and 28 instances reported in the literature. The experimental results show the high efficiency and effectiveness of the proposed dynamic reduction algorithm, especially on large-scale instances with hundreds of items. | Dynamic reduction heuristics for the rectangle packing area minimization problem |
S0377221714007826 | In this paper, a survey of scheduling problems with due windows is presented. The due window of a job is a generalization of the classical due date and it is a time interval in which this job should be finished. If the job is completed before or after the due window, it incurs an earliness or a tardiness penalty. A review of an extensive literature concerning problems with various models of given due windows, due window assignment and job-independent and job-dependent earliness/tardiness penalties is presented. The article focuses mainly on the computational complexity of the problems, mentioning also their solution algorithms. Moreover, the practical applications of the reviewed problems are mentioned, giving the appropriate references to the scientific literature. One particularly interesting IT example is described in detail. | A survey on scheduling problems with due windows |
S0377221714007838 | We present a novel approach for practically tackling uncertainty in preference elicitation and predictive modeling to support complex multi-criteria decisions based on multi-attribute utility theory (MAUT). A simplified two-step elicitation procedure consisting of an online survey and face-to-face interviews is followed by an extensive uncertainty analysis. This covers uncertainty of the preference components (marginal value and utility functions, hierarchical aggregation functions, aggregation parameters) and the attribute predictions. Context uncertainties about future socio-economic developments are captured by combining MAUT with scenario planning. We perform a global sensitivity analysis (GSA) to assess the contribution of single uncertain preference parameters to the uncertainty of the ranking of alternatives. This is exemplified for sustainable water infrastructure planning in a case study in Switzerland. We compare 11 water supply alternatives ranging from conventional water supply systems to novel technologies and management schemes regarding 44 objectives. Their performance is assessed for four future scenarios and 10 stakeholders from different backgrounds and decision-making levels. Despite uncertainty in the ranking of alternatives, potential best and worst solutions could be identified. We demonstrate that a priori assumptions such as linear value functions or additive aggregation can result in misleading recommendations, unless thoroughly checked during preference elicitation and modeling. We suggest GSA to focus elicitation on most sensitive preference parameters. Our GSA results indicate that output uncertainty can be considerably reduced by additional elicitation of few parameters, e.g. the overall risk attitude and aggregation functions at higher-level nodes. Here, rough value function elicitation was sufficient, thereby substantially reducing elicitation time. | Tackling uncertainty in multi-criteria decision analysis – An application to water supply infrastructure planning |
S0377221714007851 | Value–income ratios, such as dividend yields in finance and price–rent ratios in housing and real estate markets, impact society in a variety of ways. This paper proposes a new type of present value model that features income growth with time-varying yields. It offers a new risk perspective, which may alleviate timid investor behavior in market downturns while cooling down the market in seemingly booming times. A binding relationship, the value–income ratio adjusted by yields of the asset and growth in income, is revealed. This has notable implications for empirical research, which examines value–income ratios time and again. Incorrectly perceived market behavior distorts the formation of investor behavior, and vice versa, with serious consequences for the functioning of the market and beyond. | A new approach to estimating value–income ratios with income growth and time-varying yields
S0377221714007863 | In this paper we provide a model which describes how voluntary disclosure impacts on the timing of a firm’s investment decisions. A manager chooses a time to invest in a project and a time to disclose the investment return in order to maximise his monetary payoff. We assume that this payoff is linked to the level of the firm’s stock price. Prior to investing, the profitability of the project and the market reaction to the disclosure of the investment return are uncertain, but the manager receives signals at random points in time which assist in resolving some of this uncertainty. We find that a manager whose objective can only be achieved through voluntarily disclosing the return is motivated to invest at a time that would be sub-optimal for an identical manager with a profit maximising objective. | The impact of voluntary disclosure on a firm’s investment policy |
S0377221714007887 | This paper considers returns policies under which consumers’ valuation depends on the refund amount they receive and the length of time they must wait after the item is returned. Consumers face an uncertain valuation before purchase, and the realization of that purchase's value occurs only after the return deadline has passed. Depending on the product lifecycle length and magnitude of return rate, a retailer decides on strategies for that product's return deadline, including return prohibition, life-cycle return, and fixed return deadline. In addition, the influence of the return deadline on consumers’ behavior and the pricing and inventory policies of the retailer are systematically investigated. Moreover, based on the analysis of consumer return behavior on a traditional buy-back contract, we present a new differentiated buy-back contract, contingent on return deadline, to coordinate a supply chain consisting of an upstream manufacturer and a downstream retailer. Finally, extensions on some specific behavioral factors such as moral hazard, inertia return, and external effect are investigated. | Consumer returns policies with endogenous deadline and supply chain coordination |
S0377221714007899 | We introduce new preference disaggregation modeling formulations for multiple criteria sorting with a set of additive value functions. The preference information supplied by the Decision Maker (DM) is composed of: (1) possibly imprecise assignment examples, (2) desired class cardinalities, and (3) assignment-based pairwise comparisons. The latter have the form of imprecise statements referring to the desired assignments for pairs of alternatives, but without specifying any concrete class. Additionally, we account for preferences concerning the shape of the marginal value functions and desired comprehensive values of alternatives assigned to a given class or class range. The exploitation of all value functions compatible with these preferences yields three types of results: (1) necessary and possible assignments, (2) extreme class cardinalities, and (3) necessary and possible assignment-based preference relations. These outputs correspond to the different types of admitted preference information. By exhibiting different outcomes, we encourage the DM in various ways to enrich her/his preference information interactively. The applicability of the framework is demonstrated on data involving the classification of cities into liveability classes. | Modeling assignment-based pairwise comparisons within integrated framework for value-driven multiple criteria sorting
S0377221714007905 | One of the main tasks of conjoint analysis is to identify consumer preferences about potential products or services. Accordingly, different estimation methods have been proposed to determine the corresponding relevant attributes. Most of these approaches rely on the post-processing of the estimated preferences to establish the importance of such variables. This paper presents new techniques that simultaneously identify consumer preferences and the most relevant attributes. The proposed approaches have two appealing characteristics. Firstly, they are grounded on a support vector machine formulation that has demonstrated strong predictive ability in operations management and marketing contexts; secondly, they obtain a more parsimonious representation of consumer preferences than traditional models. We report the results of an extensive simulation study which shows that, unlike existing methods, our approach can accurately recover the model parameters as well as the relevant attributes. Additionally, we use two conjoint choice experiments whose results show that the proposed techniques have better fit and predictive accuracy than traditional methods, and that they additionally provide an improved understanding of customer preferences. | Advanced conjoint analysis using feature selection via support vector machines
S0377221714007917 | This article analyzes the complexity of the modular tool switching problem arising in flexible manufacturing environments. A single, numerically controlled placement machine is equipped with an online tool magazine consisting of several changeable tool feeder modules. The modules can hold a number of tools necessary for the jobs. In addition to the online modules, there is a set of offline modules which can be swapped with online modules during a job change. A number of jobs are processed by the machine, each job requiring a certain set of tools. Tools between jobs can be switched individually, or a whole module containing multiple tools can be replaced. We consider the complexity of the problem of arranging tools into the modules so that the work for module and tool loading is minimized. Tools are of uniform size and have unit loading costs. We show that the general problem is NP-hard, whereas in the case of a fixed number of modules and a fixed module capacity the problem is solvable in polynomial time. | The modular tool switching problem
S0377221714007929 | In this paper we present complexity results for flow shop problems with synchronous movement, which are a variant of a non-preemptive permutation flow shop. Jobs have to be moved from one machine to the next by an unpaced synchronous transportation system, which implies that the processing is organized in synchronized cycles. This means that in each cycle the current jobs start at the same time on the corresponding machines and, after processing, have to wait until the last job is finished. Afterwards, all jobs are moved to the next machine simultaneously. Besides the general situation, we also investigate special cases involving machine dominance, which means that the processing times of all jobs on a dominating machine are at least as large as the processing times of all jobs on the other machines. In particular, we study flow shops with synchronous movement for a small number of dominating machines (one or two) and different objective functions. | Complexity results for flow shop problems with synchronous movement
S0377221714007930 | The Standard Quadratic Problem (StQP) is an NP-hard problem with many local minimizers (stationary points). In the literature, heuristics based on unconstrained continuous non-convex formulations have been proposed (Bomze & Palagi, 2005; Bomze, Grippo, & Palagi, 2012), but none dominates the others in terms of the best value found. Following Cassioli, Di Lorenzo, Locatelli, Schoen, and Sciandrone (2012), we propose to use Support Vector Machines (SVMs) to define a multistart global strategy which selects the "best" heuristic. We test our method on StQPs arising from the Maximum Clique Problem on a graph, which is a challenging combinatorial problem. As benchmark we use the clique problems from the DIMACS challenge. | Using SVM to combine global heuristics for the Standard Quadratic Problem
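For reference, a hedged restatement of the standard definitions involved (they are not spelled out in the abstract): the StQP minimizes a quadratic form over the standard simplex, and the Motzkin–Straus theorem connects it to the clique number ω(G) of a graph G with adjacency matrix A_G:

\[ \mathrm{StQP:}\quad \min \{\, \mathbf{x}^{\top} Q \mathbf{x} \;:\; \mathbf{e}^{\top}\mathbf{x} = 1,\ \mathbf{x} \ge \mathbf{0} \,\}, \qquad \max \{\, \mathbf{x}^{\top} A_G \mathbf{x} \;:\; \mathbf{e}^{\top}\mathbf{x} = 1,\ \mathbf{x} \ge \mathbf{0} \,\} = 1 - \frac{1}{\omega(G)}. \]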
S0377221714007942 | Businesses are increasingly subject to disruptions. It is almost impossible to predict their nature, time and extent. Therefore, organizations need a proactive approach equipped with a decision support framework to protect themselves against the outcomes of disruptive events. In this paper, a novel framework is proposed for integrated business continuity and disaster recovery planning for efficient and effective resuming and recovering of critical operations after being disrupted. The proposed model addresses decision problems at all strategic, tactical and operational levels. At the strategic level, the context of the organization is first explored and the main features of the organizational resilience are recognized. Then, a new multi-objective mixed integer linear programming model is formulated to allocate internal and external resources to both resuming and recovery plans simultaneously. The model aims to control the loss of resilience by maximizing recovery point and minimizing recovery time objectives. Finally, at the operational level, hypothetical disruptive events are examined to evaluate the applicability of the plans. We also develop a novel interactive augmented ε-constraint method to find the final preferred compromise solution. The proposed model and solution method are finally validated through a real case study. | Integrated business continuity and disaster recovery planning: Towards organizational resilience |
S0377221714007954 | Emergency medical services (EMS) assist different classes of patients according to their medical seriousness. In this study, we extended the well-known hypercube model, based on the theory of spatially distributed queues, to analyze systems with multiple priority classes and a queue for waiting customers. Then, we analyzed the computational results obtained when applying this approach to a case study from an urban EMS in the city of Ribeirão Preto, Brazil. We also investigated some scenarios for this system studying different periods of the day and the impact of increasing the demands of the patient classes. The results showed that relevant performance measures can be obtained to analyze such a system by using the analytical model extended to deal with queuing priority. In particular, it can accurately evaluate the average response time for each class of emergency calls individually, paying particular attention to high priority calls. | Incorporating priorities for waiting customers in the hypercube queuing model with application to an emergency medical service system in Brazil |
S0377221714007966 | ELECTRE TRI is a set of methods designed to sort alternatives evaluated on several criteria into ordered categories. The original method uses limiting profiles. A recently introduced method uses central profiles. We study the relations between these two methods. We do so by investigating if an ordered partition obtained with one method can also be obtained with the other method, after a suitable redefinition of the profiles. We also investigate a number of situations in which the original method using limiting profiles gives results that do not fit well our intuition. This leads us to propose a variant of ELECTRE TRI that uses limiting profiles. We show that this variant may have some advantages over the original method. | On the relations between ELECTRE TRI-B and ELECTRE TRI-C and on a new variant of ELECTRE TRI-B |
S0377221714007978 | Given a set N, a pairwise distance function d and an integer m, Dispersion Problems (DPs) require extracting from N a subset M of cardinality m, so as to optimize a suitable function of the distances between the elements in M. Different functions give rise to a whole family of combinatorial optimization problems. In particular, the max-sum DP and the max-min DP have received strong attention in the literature. Other problems (e.g., the max-minsum DP and the min-diffsum DP) have been recently proposed with the aim of modeling the optimization of equity requirements, as opposed to that of more classical efficiency requirements. Building on the main ideas which underlie some state-of-the-art methods for the max-sum DP and the max-min DP, this work proposes some constructive procedures and a Tabu Search algorithm for the new problems. In particular, we investigate the extension to the new context of key features such as initialization, tenure management and diversification mechanisms. The computational experiments show that the algorithms applying these ideas perform effectively on the publicly available benchmarks, but also that there are some interesting differences with respect to the DPs more studied in the literature. As a result of this investigation, we also provide optimal results and bounds as a useful reference for further studies. | Construction and improvement algorithms for dispersion problems
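As a hedged reminder of the four objectives mentioned (following the usual definitions in the dispersion literature; the paper's exact formulations may differ in detail), with M ⊆ N and |M| = m:

\[ \text{max-sum: } \max_M \sum_{\{i,j\} \subseteq M} d(i,j), \qquad \text{max-min: } \max_M \min_{i,j \in M,\, i \neq j} d(i,j), \]
\[ \text{max-minsum: } \max_M \min_{i \in M} \sum_{j \in M \setminus \{i\}} d(i,j), \qquad \text{min-diffsum: } \min_M \Bigl( \max_{i \in M} \sum_{j \in M \setminus \{i\}} d(i,j) - \min_{i \in M} \sum_{j \in M \setminus \{i\}} d(i,j) \Bigr). \]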
S0377221714007991 | We consider the optimization problem implementing current market rules for European day-ahead electricity markets. We propose improved algorithmic approaches for that problem. First, a new MIP formulation is presented which avoids the use of complementarity constraints to express market equilibrium conditions, and also avoids the introduction of auxiliary continuous or binary variables. Instead, we rely on strong duality theory for linear or convex quadratic optimization problems to recover equilibrium constraints. When so-called stepwise bid curves are considered to describe continuous bids, the new formulation makes it possible to take full advantage of state-of-the-art MILP solvers, and in most cases, an optimal solution including market prices can be computed for large-scale instances without any further algorithmic work. Second, the new formulation suggests a Benders-like decomposition procedure. This helps in the case of piecewise linear bid curves that yield quadratic primal and dual objective functions, leading to a dense quadratic constraint in the formulation. This procedure essentially strengthens classical Benders cuts locally. Computational experiments using 2011 historical instances for the Central Western Europe region show excellent results. In the linear case, both approaches are very efficient, while for quadratic instances, only the decomposition procedure is appropriate. Finally, when most orders are block orders, and instances are combinatorially very hard, the direct MILP approach is substantially more efficient. | Computationally efficient MIP formulation and algorithms for European day-ahead electricity market auctions
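A minimal sketch of the duality idea for a generic linear program (the paper also covers convex quadratic cases): rather than complementarity products, equilibrium for min{c^T x : Ax ≥ b, x ≥ 0} can be imposed through primal feasibility, dual feasibility and a single strong-duality equality,

\[ A\mathbf{x} \ge \mathbf{b},\quad \mathbf{x} \ge \mathbf{0}, \qquad A^{\top}\mathbf{y} \le \mathbf{c},\quad \mathbf{y} \ge \mathbf{0}, \qquad \mathbf{c}^{\top}\mathbf{x} = \mathbf{b}^{\top}\mathbf{y}, \]

which stays linear in (x, y) and avoids auxiliary binaries or big-M disjunctions.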
S0377221714008005 | We test the lower bound for a static remanufacturing system with returns proposed by Feng and Viswanathan (2014) against two heuristics proposed by Choi et al. (2007) and Schulz and Voigt (2014). A numerical study with 81,000 instances concludes that the lower bound always holds for the Choi et al. (2007) heuristic, but is violated by the Schulz and Voigt (2014) heuristic in 45 percent of all tested instances. In these instances, the average deviation from the lower bound is 4 percent, with a maximum deviation of 14.9 percent. The main difference between the two analyzed heuristics is that the Choi et al. heuristic applies equally sized manufacturing and remanufacturing batches (which is also assumed to hold in the proposed lower bound), while Schulz and Voigt present a heuristic in which the respective remanufacturing batch sizes may vary. In contrast to Feng and Viswanathan (2014), we conclude that management should be cautious about using overly simple policy structures when deriving a solution for a static remanufacturing system with product returns. | Note on “Heuristics with guaranteed performance bounds for a manufacturing system with product recovery”
S0377221714008017 | Under fairly general assumptions requiring neither a differentiable frontier nor a constant-returns-to-scale technology, this paper introduces a new definition of an optimal scale size based on the minimization of unit costs. The corresponding measure, average-cost efficiency, combines scale and allocative efficiency, and generalizes the measurement of scale economies in efficiency analysis while providing a performance criterion which is stricter than both cost efficiency and scale efficiency measurement. The average-cost efficiency is not reliant upon the uniformity of the firms’ input-price vector, and we supply procedures to compute it in both convex and non-convex production technologies. Empirical illustration of the theoretical results is given with reference to large sets of production units. Keywords: average-cost efficiency; constant returns to scale; data envelopment analysis; decision making unit; free disposal hull; most productive scale size; overall efficiency/cost efficiency; optimal scale size; technical and scale radial efficiency; variable returns to scale. | Average-cost efficiency and optimal scale sizes in non-parametric analysis
S0377221714008029 | In this paper, we study the problem of minimizing the maximum total completion time per machine on m parallel identical machines. We prove that the problem is strongly NP-hard if m is part of the input. When m is a given constant, a pseudo-polynomial-time dynamic programming algorithm is proposed. We also show that the worst-case ratio of SPT is at most 2.608 and at least 2.5366 when m is sufficiently large. We further present another algorithm which has a worst-case ratio of 2. | Scheduling to minimize the maximum total completion time per machine
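A minimal Python sketch of one natural SPT-style greedy for this objective (the exact SPT variant analyzed in the paper may differ): jobs are sorted shortest-first and each job is appended to the machine whose total completion time would grow the least.

```python
# Hedged sketch: greedy SPT-style list scheduling for min-max total
# completion time per machine; job data below is illustrative.
def spt_schedule(p, m):
    loads = [0.0] * m                    # completion time of the last job per machine
    totals = [0.0] * m                   # sum of completion times per machine
    assign = [[] for _ in range(m)]
    for pj in sorted(p):                 # SPT order: shortest processing time first
        # placing pj on machine k raises its total to totals[k] + loads[k] + pj
        i = min(range(m), key=lambda k: totals[k] + loads[k] + pj)
        loads[i] += pj
        totals[i] += loads[i]
        assign[i].append(pj)
    return max(totals), assign           # objective value and the job assignment

print(spt_schedule([3, 1, 4, 1, 5, 9, 2, 6], m=3))
```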
S0377221714008030 | The maximum clique problem (MCP) is to determine in a graph a clique (i.e., a complete subgraph) of maximum cardinality. The MCP is notable for its capability of modeling other combinatorial problems and real-world applications. As one of the most studied NP-hard problems, many algorithms are available in the literature and new methods are continually being proposed. Given that the two existing surveys on the MCP date back to 1994 and 1999 respectively, one primary goal of this paper is to provide an updated and comprehensive review on both exact and heuristic MCP algorithms, with a special focus on recent developments. To be informative, we identify the general framework followed by these algorithms and pinpoint the key ingredients that make them successful. By classifying the main search strategies and putting forward the critical elements of the most relevant clique methods, this review intends to encourage future development of more powerful methods and motivate new applications of the clique approaches. | A review on algorithms for maximum clique problems |
S0377221714008042 | With the ongoing trend of mass-customization and an increasing product variety, just-in-time part logistics more and more becomes one of the greatest challenges in today’s automobile production. Thousands of parts and suppliers, a multitude of different equipments, and hundreds of logistics workers need to be coordinated, so that the final assembly lines never run out of parts. This paper describes the elementary process steps of part logistics in the automotive industry starting with the initial call order to the return of empty part containers. In addition to a detailed process description, important decision problems are specified, existing literature is surveyed, and open research challenges are identified. | Part logistics in the automotive industry: Decision problems, literature review and research agenda |
S0377221714008054 | In this research, critical infrastructure protection against intentional attacks is modeled as a discrete simultaneous game between the protector and the attacker, capturing the situation in which both players keep their resource-allocation information secret. We prove that keeping the information regarding protection strategies secret can achieve a better effect of critical infrastructure protection than truthfully disclosing it. Solving a game-theoretic problem, even in the case of two players, is known to be intractable. To deal with this complexity, after proving that pure-strategy Nash equilibrium solutions do not exist for the proposed simultaneous game, a new approach is proposed to identify its mixed-strategy Nash equilibrium solution. | Critical infrastructure protection using secrecy – A discrete simultaneous game
S0377221714008066 | Parameter estimation based on uncertain data represented as belief structures is one of the latest problems in Dempster–Shafer theory. In this paper, a novel method is proposed for parameter estimation in the case where belief structures are uncertain and represented as interval-valued belief structures. Within our proposed method, the maximization of the likelihood criterion and the minimization of the estimated parameter’s uncertainty are taken into consideration simultaneously. As an illustration, the proposed method is employed to estimate parameters for deterministic and uncertain belief structures, which demonstrates its effectiveness and versatility. | Parameter estimation based on interval-valued belief structures
S0377221714008078 | We study an incremental network design problem, where in each time period of the planning horizon an arc can be added to the network and a maximum flow problem is solved, and where the objective is to maximize the cumulative flow over the entire planning horizon. After presenting two mixed integer programming (MIP) formulations for this NP-complete problem, we describe several heuristics and prove performance bounds for some special cases. In a series of computational experiments, we compare the performance of the MIP formulations as well as the heuristics. | Incremental network design with maximum flows |
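A minimal sketch (using networkx; the toy graph, node names and capacities are illustrative assumptions, not data from the paper) of a simple greedy heuristic in the spirit of those discussed: each period, build the candidate arc whose addition most increases the max flow, and accrue that period's flow.

```python
# Hedged sketch of a greedy heuristic for incremental network design
# with maximum flows; not the paper's MIP formulations.
import networkx as nx

def greedy_incremental(G, candidates, s, t, periods):
    """Build at most one candidate arc per period; return cumulative flow."""
    total = 0.0
    remaining = list(candidates)                 # candidate arcs (u, v, capacity)
    for _ in range(periods):
        best_arc, best_val = None, nx.maximum_flow_value(G, s, t)
        for (u, v, cap) in remaining:            # assumes (u, v) not already in G
            G.add_edge(u, v, capacity=cap)
            val = nx.maximum_flow_value(G, s, t)
            G.remove_edge(u, v)
            if val > best_val:
                best_arc, best_val = (u, v, cap), val
        if best_arc is None and remaining:       # no single arc helps yet: build one anyway
            best_arc = remaining[0]
        if best_arc is not None:
            u, v, cap = best_arc
            G.add_edge(u, v, capacity=cap)
            remaining.remove(best_arc)
            best_val = nx.maximum_flow_value(G, s, t)
        total += best_val                        # flow accrued in this period
    return total

G = nx.DiGraph()
G.add_edge('s', 'a', capacity=2)
G.add_edge('a', 't', capacity=2)
print(greedy_incremental(G, [('s', 'b', 3), ('b', 't', 3)], 's', 't', periods=3))
```

The tie-breaking step illustrates the myopia the heuristics must cope with: no single arc of the s–b–t path improves the flow on its own, yet building both is clearly worthwhile.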
S0377221714008108 | A VNS-based heuristic using both a facility-type and a customer-type neighbourhood structure is proposed to solve the p-centre problem in continuous space. Simple but effective enhancements to the original Elzinga–Hearn algorithm, as well as a powerful ‘locate–allocate’ local search used within VNS, are proposed. In addition, efficient implementations of both neighbourhood structures are presented. A learning scheme is also embedded into the search to produce a new variant of VNS that uses memory. The effect of incorporating strong intensification within the local search via a VND-type structure is also explored, with interesting results. Empirical results, based on several existing data sets (TSPLIB) with various values of p, show that the proposed VNS implementations outperform both a multi-start heuristic and the discrete-based optimal approach that use the same local search. | The continuous p-centre problem: An investigation into variable neighbourhood search with memory
S0377221714008121 | Direct current (DC) electricity distribution systems have been proposed as an alternative to traditional, alternating current (AC) distribution systems for commercial buildings. Partial replacement of AC distribution with DC distribution can improve service to DC loads and overall building energy efficiency. This article (i) develops a mixed-integer, nonlinear, nonconvex mathematical programming problem to determine maximally energy-efficient designs for mixed AC–DC electricity distribution systems in commercial buildings, and (ii) describes a tailored global optimization algorithm based on Nonconvex Generalized Benders Decomposition. The results of three case studies demonstrate the strength of the decomposition approach compared to state-of-the-art general-purpose global solvers. | Optimal design of mixed AC–DC distribution systems for commercial buildings: A Nonconvex Generalized Benders Decomposition approach
S0377221714008133 | This paper considers a contest setting in which a challenger chooses between one of two contests to enter after observing the level of defense at each. Despite the challenger’s chance of success being determined by a proportional contest success function, the defenders effectively find themselves in an all-pay auction that largely dissipates the value of the defended resources because the challenger will target the weaker defender. However, if the defenders form a protective alliance then their expected profits increase despite the fact that a successful challenge is theoretically more likely, given the overall reduction in defense. Controlled laboratory experiments designed to test the model’s predictions are also reported. Observed behavior is generally consistent with the comparative static predictions although challengers exhibit the familiar overbidding pattern. Defenders appear to anticipate this reaction and adjust their behavior accordingly. | Defense against an opportunistic challenger: Theory and experiments |
S0377221714008145 | The Vehicle Routing Problem with Simultaneous Pickup and Delivery with Time Limit (VRPSPDTL) is a variant of the basic Vehicle Routing Problem where the vehicles serve delivery as well as pick up operations of the clients under time limit restrictions. The VRPSPDTL determines a set of vehicle routes originating and terminating at a central depot such that the total travel distance is minimized. For this problem, we propose a mixed-integer mathematical optimization model and a perturbation based neighborhood search algorithm combined with the classic savings heuristic, variable neighborhood search and a perturbation mechanism. The numerical results show that the proposed method produces superior solutions for a number of well-known benchmark problems compared to those reported in the literature and reasonably good solutions for the remaining test problems. | A perturbation based variable neighborhood search heuristic for solving the Vehicle Routing Problem with Simultaneous Pickup and Delivery with Time Limit |
S0377221714008157 | We study the capacitated k-facility location problem, in which we are given a set of clients with demands, a set of facilities with capacities and a positive integer k. It costs f_i to open facility i, and c_ij for facility i to serve one unit of demand from client j. The objective is to open at most k facilities serving all the demands and satisfying the capacity constraints while minimizing the sum of service and opening costs. In this paper, we give the first fully polynomial time approximation scheme (FPTAS) for the single-sink (single-client) capacitated k-facility location problem. Then, we show that the capacitated k-facility location problem with uniform capacities is solvable in polynomial time if the number of clients is fixed by reducing it to a collection of transportation problems. Third, we analyze the structure of extreme point solutions, and examine the efficiency of this structure in designing approximation algorithms for capacitated k-facility location problems. Finally, we extend our results to obtain an improved approximation algorithm for the capacitated facility location problem with uniform opening costs. | Approximation algorithms for hard capacitated k-facility location problems
S0377221714008169 | Standard mean-variance analysis is based on the assumption of normal return distributions. However, a growing body of literature suggests that the market oscillates between two different regimes – one with low volatility and the other with high volatility. In such a case, even if the return distributions are normal in both regimes, the overall distribution is not – it is a mixture of normals. Mean-variance analysis is inappropriate in this framework, and one must either assume a specific utility function or, alternatively, employ the more general and distribution-free Second degree Stochastic Dominance (SSD) criterion. This paper develops the SSD rule for the case of mixed normals: the SSDMN rule. This rule is a generalization of the mean-variance rule. The cost of ignoring regimes and assuming normality when the distributions are actually mixed normal can be quite substantial – it is typically equivalent to an annual rate of return of 2–3 percent. | Portfolio selection in a two-regime world
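For concreteness (standard definitions, not restated in the abstract): under two regimes with mixing weight p, the return CDF is a mixture of normals, and SSD compares integrated CDFs,

\[ F(x) = p\,\Phi\!\left(\frac{x-\mu_1}{\sigma_1}\right) + (1-p)\,\Phi\!\left(\frac{x-\mu_2}{\sigma_2}\right), \qquad X \succeq_{\mathrm{SSD}} Y \iff \int_{-\infty}^{t} \bigl[ F_Y(u) - F_X(u) \bigr]\, du \ge 0 \ \ \forall t, \]

the latter being equivalent to E[u(X)] ≥ E[u(Y)] for all increasing concave utility functions u.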
S0377221714008170 | In this paper we introduce a time-dependent probabilistic location model for Emergency Medical Service (EMS) vehicles. The goal is to maximize the expected coverage throughout the day and at the same time minimize the number of opened facilities and the number of relocations. We apply our model to both a randomly generated test instance and to data from the city of Amsterdam, the Netherlands. We see that time-dependent models can result in better solutions than time-independent models. Furthermore, we see that the current set of base locations in Amsterdam is not optimal. We can obtain higher coverage with even less base locations. | Time-dependent MEXCLP with start-up and relocation cost |
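As background, a hedged sketch of the classical, time-independent MEXCLP objective that the paper extends (Daskin's formulation; the notation here is ours): with demand d_i at node i, a system-wide busy fraction q, and y_{ik} = 1 if node i is covered by at least k vehicles, the expected covered demand is

\[ \max \; \sum_{i \in I} \sum_{k=1}^{K} d_i \, (1-q) \, q^{k-1} \, y_{ik}, \]

since the k-th covering vehicle contributes only when the first k−1 are busy, an event of probability q^{k-1}.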
S0377221714008352 | Benchmarking and target setting should identify best practices that are not only technically achievable but also desirable in the light of prior knowledge and expert opinion. The possibility of finding targets by minimizing the gap between actual and efficient performance should also be considered, so that the units under evaluation can achieve those targets with less effort. We extend here the DEA models that provide closest targets to the case in which expert preferences are incorporated into the analysis. This approach is illustrated by applying the proposed model to the evaluation of the educational performance of public Spanish universities. | Benchmarking and target setting with expert preferences: An application to the evaluation of educational performance of Spanish universities
S0377221714008364 | We consider a single-echelon continuous review inventory system for spare parts with two parallel locations. Each location faces independent Poisson demand and backorders are allowed. In this paper we consider the possibility of lateral transshipments between the locations. The transshipment leadtime is positive and deterministic, and there is an additional cost for making a transshipment. We suggest a transshipment policy which is based on the timing of all outstanding orders, and develop and solve a heuristic model by using theory and concepts from doubly stochastic Poisson processes and also partial differential equations. A simulation study indicates that our heuristic works very well, and that the relative cost increase of disregarding the transshipment leadtime may be quite high. Our results also indicate that it is, in general, worth the effort of reducing the transshipment leadtime, even if it is already relatively short. | Emergency lateral transshipments in a two-location inventory system with positive transshipment leadtimes |
S0377221714008376 | We address the Flight and Maintenance Planning (FMP) problem, i.e., the problem of deciding which available aircraft to fly and for how long, and which grounded aircraft to perform maintenance operations on in a group of aircraft that comprise a unit. The aim is to maximize the unit fleet availability over a multi-period planning horizon, while also ensuring that certain flight and maintenance requirements are satisfied. Heuristic approaches that are used in practice to solve the FMP problem often perform poorly, generating solutions that are far from the optimum. On the other hand, the exact optimization models that have been developed to tackle the problem handle small problems effectively, but tend to be computationally inefficient for larger problems, such as the ones that arise in practice. With these in mind, we develop an exact solution algorithm for the FMP problem, which is capable of identifying the optimal solution of considerably large realistic problems in reasonable computational times. The algorithm solves suitable relaxations of the original problem, utilizing valid cuts that guide the search towards the optimal solution. We present extensive experimental results, which demonstrate that the algorithm's performance on realistic problems is superior to that of two popular commercial optimization software packages, whereas the opposite is true for a class of problems with special characteristics that deviate considerably from those of realistic problems. The important conclusion of this research is that the proposed algorithm, complemented by generic optimization software, can handle effectively a large variety of FMP problem instances. | An exact solution algorithm for maximizing the fleet availability of a unit of aircraft subject to flight and maintenance requirements |
S0377221714008388 | The construction of composite indicators (CIs) is useful for synthesizing complex social and economic phenomena, but some underlying assumptions of “classical methods”, in particular compensability among indicators, are very strict. The aim of this paper is to propose an original approach that addresses the non-compensatory issue by introducing “directional” penalties in a Benefit-of-the-Doubt model in order to account for the preference structure among simple indicators. Principal component analysis on the simple-indicator hyperplane allows estimating both the direction and the intensity of the average rates of substitution. From an empirical point of view, our method has been tested on both simulated data and infrastructural endowment data for European regions. | Enhancing non-compensatory composite indicators: A directional proposal
S0377221714008406 | This paper uses Monte Carlo Data Envelopment Analysis (Monte Carlo DEA) to evaluate the relative technical efficiency of small health care areas in probabilistic terms, with respect to both mental health care and the efficiency of the whole system. Taking into account that the number of areas did not permit maximum discrimination to be achieved, all the scenarios of non-correlated inputs and outputs of a specific size were designed using Monte Carlo Pearson to maximize the discrimination of Monte Carlo DEA and the information included in the models. A knowledge base was included in the simulation engine in order to guide the dynamic interpretation of non-standard inputs and outputs. Results show the probability of each DMU, and of the whole system, being efficient, as well as the specific inputs and outputs that make the areas or the system efficient or inefficient, along with a classification of the areas into four groups according to their efficiency (k-means cluster analysis). This final classification was compared with an expert-based classification to validate both the knowledge base and the Monte Carlo DEA model. Both classifications showed very similar, although not identical, results, basically due to the difficulty experts experience in recognizing “intermediately-inefficient” DMU. We propose this methodology as an instrument that could help health care managers to assess relative technical efficiency in complex systems under uncertainty. | Evaluation of system efficiency using the Monte Carlo DEA: The case of small health areas
S0377221714008418 | Facility dispersion problems involve placing a number of facilities as far apart from each other as possible. Four different criteria of facility dispersal have been proposed in the literature (Erkut & Neuman, 1991). Despite their formal differences, these four classic dispersion objectives can be expressed in a unified model called the partial-sum dispersion model (Lei & Church, 2013). In this paper, we focus on the unweighted partial sum dispersion problem and introduce an efficient formulation for this generalized dispersion problem based on a construct by Ogryczak and Tamir (2003). We also present a fast branch-and-bound based exact algorithm. | On the unified dispersion problem: Efficient formulations and exact algorithms |
S0377221714008431 | A bullwhip measure for a two-stage supply chain with an order-up-to inventory policy is derived for a general, stationary SARMA(p, q)×(P, Q)_s demand process. Explicit expressions for several SARMA models are obtained to illustrate the key relationship between lead-time and seasonal lag. It is found that the bullwhip effect can be reduced considerably by shortening the lead-time in relation to the seasonal lag value. | Measuring the bullwhip effect for supply chains with seasonal demand components
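A minimal simulation sketch of the measured quantity (assuming AR(1) demand and a moving-average forecast, not the paper's closed-form SARMA expressions): the bullwhip ratio is Var(orders)/Var(demand) under an order-up-to policy.

```python
# Hedged sketch: bullwhip ratio under an order-up-to policy; all
# parameters (horizon, lead time, AR coefficient, window) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, L, phi, mu, sigma = 20000, 2, 0.6, 100.0, 10.0   # horizon, lead time, AR(1) coeff.

# AR(1) demand: d_t = mu + phi*(d_{t-1} - mu) + eps_t
d = np.empty(T)
d[0] = mu
for t in range(1, T):
    d[t] = mu + phi * (d[t-1] - mu) + rng.normal(0.0, sigma)

# Order-up-to level S_t = (L+1)*forecast (constant safety stock drops out of
# the variance); orders o_t = S_t - S_{t-1} + d_t.
w = 8                                                # moving-average forecast window
f = np.convolve(d, np.ones(w) / w, mode='valid')     # forecast of per-period demand
S = (L + 1) * f
o = S[1:] - S[:-1] + d[w:]                           # align orders with observed demand
print("bullwhip ratio:", o.var() / d.var())          # > 1 indicates amplification
```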
S0377221714008443 | Economic and environmental benefits are the central issues in remanufacturing. Whereas extant remanufacturing research focuses primarily on such issues in remanufacturing technologies, production planning, inventory control and competitive strategies, we provide an alternative yet somewhat complementary approach and consider both issues in relation to different channel structures for marketing remanufactured products. Specifically, based on observations from current practice, we consider a manufacturer that sells new units through an independent retailer but has two options for marketing remanufactured products: (1) marketing through its own e-channel (Model M) or (2) subcontracting the marketing activity to a third party (Model 3P). A central result we obtain is that although Model M is always greener than Model 3P, firms have less incentive to adopt it, because both the manufacturer and the retailer may be worse off when the manufacturer sells remanufactured products through its own e-channel rather than subcontracting to a third party. Extending both models to cases in which the manufacturer interacts with multiple retailers further reveals that the more retailers in the market, the greener Model M is relative to Model 3P. | Bricks vs. clicks: Which is better for marketing remanufactured products?
S0377221714008455 | Pairwise comparison (PC) is a well-established method to assist decision makers in estimating their preferences. In PCs, the acquired judgments are used to construct a PC matrix (PCM) that is used to check whether the inconsistency in judgments is acceptable or requires revision. The use of Consistency Ratio (CR)—a widely used measure for inconsistency—has been widely debated and the literature survey has identified a need for a more appropriate measure. Considering this need, a new measure, termed congruence, is proposed in this paper. The measure is shown to be useful in finding the contribution of individual judgments toward overall inconsistency of a PCM and, therefore, can be used to detect and correct cardinally inconsistent judgments. The proposed measure is applicable to incomplete sets of PC judgments without modification, unlike CR which requires a complete set of PC judgments. To address ordinal inconsistency, another measure termed dissonance, is proposed as a supplement to the congruence measure. The two measures appear useful in detecting both outliers and the phenomenon of consistency deadlock where all judgments equally contribute toward the overall inconsistency. | Contribution of individual judgments toward inconsistency in pairwise comparisons |
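For concreteness, a short sketch of the widely debated CR measure that motivates the congruence and dissonance proposals (Saaty's standard definition; the 4×4 matrix is an illustrative example, not data from the paper).

```python
# Hedged sketch: Saaty's Consistency Ratio for a complete pairwise
# comparison matrix. RI holds the standard random indices.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)    # principal (Perron) eigenvalue
    CI = (lam_max - n) / (n - 1)                # consistency index
    return CI / RI[n]                           # CR = CI / random index

A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 2.0, 1/2],
              [1/5, 1/2, 1.0, 1/3],
              [1/2, 2.0, 3.0, 1.0]])
print("CR =", consistency_ratio(A))             # CR < 0.1 is conventionally acceptable
```

Note that CR requires a complete matrix and a full eigenvalue computation, which is exactly the limitation the congruence measure is designed to avoid for incomplete judgment sets.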
S0377221714008467 | This article presents a goal programming framework to solve group decision making problems where decision-makers’ judgments are provided as incomplete interval additive reciprocal comparison matrices (IARCMs). New properties of multiplicative consistent IARCMs are put forward and used to define consistent incomplete IARCMs. A two-step goal programming method is developed to estimate missing values for an incomplete IARCM. The first step minimizes the inconsistency of the completed IARCMs and controls uncertainty ratios of the estimated judgments within an acceptable threshold, and the second step finds the most appropriate estimated missing values among the optimal solutions obtained from the previous step. A weighted geometric mean approach is proposed to aggregate individual IARCMs into a group IARCM by employing the lower bounds of the interval additive reciprocal judgments. A two-step procedure consisting of two goal programming models is established to derive interval weights from the group IARCM. The first model is devised to minimize the absolute difference between the logarithm of the group preference and that of the constructed multiplicative consistent judgment. The second model is developed to generate an interval-valued priority vector by maximizing the uncertainty ratio of the constructed consistent IARCM and incorporating the optimal objective value of the first model as a constraint. Two numerical examples are furnished to demonstrate validity and applicability of the proposed approach. | A multi-step goal programming approach for group decision making with incomplete interval additive reciprocal comparison matrices |
S0377221714008479 | Many inventory control studies consider either continuous review and continuous ordering, or periodic review and periodic ordering. Mixtures of the two are hardly ever studied. However, the model with periodic review and continuous ordering is highly relevant in practice, as information on the actual inventory level is not always up to date while making ordering decisions. This paper will therefore treat this model. Assuming zero fixed ordering costs, and allowing for a non-negative lead time and a general demand process, we first consider a one-period decision problem without salvage cost for inventory remaining at the end of the period. In this setting we derive a base-line optimal order path, described by a simple newsvendor solution with safety stocks increasing towards the end of a review period. We then show that for the general, multi-period problem, the optimal policy in a period is to first arrive at this path by not ordering until the excess buffer stock from the previous review period is depleted, then follow the path by continuous ordering, and stop ordering towards the end to limit excess stocks for the next review period. An important managerial insight is that, typically, no order should be placed at a review moment, although this may seem intuitive and is also the standard assumption in periodic review models. We illustrate that adhering to the optimal ordering path instead can lead to cost reductions of 30–60 percent compared to pure periodic ordering. | Periodic review and continuous ordering |
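For reference, the classical newsvendor quantile that underlies the base-line order path described above (a standard result; here c_u and c_o denote unit underage and overage costs and F the demand CDF):

\[ q^{*} = F^{-1}\!\left( \frac{c_u}{c_u + c_o} \right). \]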
S0377221714008480 | In this paper we examine multi-objective linear programming problems in the face of data uncertainty both in the objective function and the constraints. First, we derive a formula for the radius of robust feasibility guaranteeing constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for any possible instance of the objective matrix within a specified uncertainty set, providing lower bounds for the radius of highly robust efficiency guaranteeing the existence of this type of solutions under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions. | Robust solutions to multi-objective linear programs with uncertain data |
S0377221714008492 | This paper presents two new procedures for ranking and selection (R&S) problems where the best system designs are selected from a set of competing ones based on multiple performance measures evaluated through stochastic simulation. In the procedures, the performance measures are aggregated with a multi-attribute utility function, and incomplete preference information regarding the weights that reflect the relative importance of the measures is taken into account. A set of feasible weights is determined according to preference statements that are linear constraints on the weights given by a decision-maker. Non-dominated designs are selected using two dominance relations, referred to as pairwise and absolute dominance, based on estimates of the expected utilities of the designs over the feasible weights. The procedures allocate a limited number of simulation replications among the designs such that the probabilities of correctly selecting the pairwise and absolutely non-dominated designs are maximized. The new procedures make eliciting the weights easier compared with existing R&S procedures that aggregate the performance measures using unique weights. Moreover, computational advantages are provided over existing procedures that identify non-dominated designs based on the expected values of the performance measures. The new procedures make it possible to obtain a smaller number of non-dominated designs. They also identify these designs correctly with a higher probability, or require a smaller number of replications for correct selection. Finally, the new procedures allocate a larger number of replications to the non-dominated designs, which are therefore evaluated with greater accuracy. These computational advantages are illustrated through several numerical experiments. | Ranking and selection for multiple performance measures using incomplete preference information
S0377221714008509 | One of the most critical barriers to widespread adoption of electric cars is the lack of charging station infrastructure. Although it is expected that a sufficient number of charging stations will be constructed eventually, due to various practical reasons they may have to be introduced gradually over time. In this paper, we formulate a multi-period optimization model based on a flow-refueling location model for strategic charging station location planning. We also propose two myopic methods and develop a case study based on the real traffic flow data of the Korean Expressway network in 2011. We discuss the performance of the three proposed methods. | Multi-period planning for electric car charging station locations: A case of Korean Expressways |
S0377221714008510 | The work reported in this paper addresses the problem of transmission expansion planning under uncertainty in an electric energy system. We consider different sources of uncertainty, including future demand growth and the availability of generation facilities, which are characterized for different regions within the electric energy system. An adaptive robust optimization model is used to derive the investment decisions that minimize the system’s total costs by anticipating the worst-case realization of the uncertain parameters within an uncertainty set. The proposed formulation materializes in a mixed-integer three-level optimization problem whose lower-level problem can be replaced by its KKT optimality conditions. The resulting mixed-integer bilevel model is efficiently solved by decomposition using a cutting plane algorithm. A realistic case study is used to illustrate the working of the proposed technique, and to analyze the relationship between the optimal transmission investment plans, the investment budget and the level of supply security at the different regions of the network. | Robust transmission expansion planning
S0377221714008522 | We analyze the impact of managerial compensation structure in publicly-traded banks on their risk taking behavior, specifically the changes in risk taking through the changing regulatory environment for these banks. We perform a simulation analysis to study the impact of the interaction between regulatory changes and competitiveness in banking on managerial compensation, and in turn their joint impact on a bank's riskiness. The three hypotheses we examine using the simulation analysis are: (1) an increase in competitiveness after deregulation results in higher levels of risk for banks; (2) regulatory changes can result in a change in the composition of managerial compensation, which creates an environment of incentives for enhanced risk taking; and (3) regulatory changes accompanied by certain governance or managerial compensation controls can bring prudence to the risk taking behavior. The simulation model allows isolating each factor for its impact on a particular bank's riskiness due to the regulatory changes. This impact is then correlated with the governance characteristics of the bank. We observe that competition uniformly increases the risk in firm value and shareholder equity of all the banks, more severely for some than others. Its effect on the change of firm value through regulatory changes is opposite from its effect on shareholder equity for some banks. Change in competition combined with change in managerial compensation captures significantly more of the increased risk in firm value and shareholder equity. Lastly, the governance characteristics show that the risk differential between competition alone and competition combined with compensation is low for banks with good governance. | Impact of compensation structure and managerial incentives on bank risk taking
S0377221714008534 | Model risk has a huge impact on any risk measurement procedure and its quantification is therefore a crucial step. In this paper, we introduce three quantitative measures of model risk when choosing a particular reference model within a given class: the absolute measure of model risk, the relative measure of model risk and the local measure of model risk. Each of the measures has a specific purpose and so allows for flexibility. We illustrate the various notions by studying some relevant examples, so as to emphasize the practicability and tractability of our approach. | Assessing financial model risk |
S0377221714008546 | The wheels are one of the most worn components on a train. When the wear is unacceptable, re-profiling can restore the shape of the wheel flange at the cost of decreasing the wheel diameter. The decision of re-profiling has serious implications for the life span of wheels. In this paper, based on the analysis of the wear and re-profiling characteristics of metro wheels, a data-driven model of the relationship between the wheel diameter, the flange thickness, their wear rates, and the re-profiling gain is built for the wheels of Guangzhou Metro Line One. An (S_dP, S_dR) re-profiling strategy is proposed, where S_dP is the wheel flange thickness threshold that triggers a preventive re-profiling and S_dR is the wheel flange thickness after the preventive re-profiling. Then the Monte Carlo simulation model of the re-profiling strategy is described in this paper. To find out when a re-profiling should be performed in terms of the flange wear-out level, and what value the flange thickness should be restored to by re-profiling, the simulation results for optimizing the decision variables (S_dP, S_dR) of the re-profiling strategy are given in this paper. Those having longer life spans are listed as the preferred re-profiling strategies. The study in this paper reveals that the wear rate of the flange thickness is correlated with the flange thickness, while the diameter wear rate could be considered independent of the flange thickness in terms of the wheels of Guangzhou Metro Line One. On the other hand, based on the observation and analysis of an available sample set from Guangzhou Metro Line One, the re-profiling gain is dependent on the flange thickness before or after re-profiling. Based on the simulation, the preferred re-profiling strategies suggested by this study can increase the life span compared with the existing re-profiling strategies. The models and methods presented in this paper could benefit both city metro companies and inter-city rail companies by prolonging the life span of rolling stock wheels. | Optimizing the re-profiling strategy of metro wheels based on a data-driven wear model
S0377221714008558 | In this paper, we study a multi-periodic production planning problem in agriculture. This problem belongs to the class of crop rotation planning problems, which have received considerable attention in the literature in recent years. Crop cultivation and fallow periods must be scheduled on land plots over a given time horizon so as to minimize the total surface area of land used, while satisfying crop demands every period. This problem is proven strongly NP-hard. We propose a 0-1 linear programming formulation based on crop-sequence graphs. An extended formulation is then provided with a polynomial-time pricing problem, and a Branch-and-Price-and-Cut (BPC) algorithm is presented with adapted branching rules and cutting planes. The numerical experiments on instances varying the number of crops, periods and plots show the effectiveness of the BPC for the extended formulation compared to solving the compact formulation, even though these two formulations have the same linear relaxation bound. | A branch-and-price-and-cut approach for sustainable crop rotation planning |
S0377221714008571 | This paper proposes an agent-based simulation model to study and analyse the performance of various procurement and production policies in the recycled paper industry. The proposed model includes the recycled pulp production process, as well as the waste paper inventory and procurement processes. A detailed simulation model was developed in partnership with a large recycled pulp producer in North America in order to emulate the procurement manager's behaviour. Based on the observed behaviour of the procurement manager, a procurement behaviour model, which takes both market price and inventory requirements into account, is introduced. This paper also introduces a waste paper market model that simulates a market price and enables the control of price forecast accuracy. Two series of experiments were carried out in order to study the performance of procurement and production policies in several production contexts. Results show that production Volume Flexibility has a negative impact on costs, inventory and quality. However, it is possible to partially reduce these issues with the introduction of contracts with Volume Flexibility, although only a limited effect has been observed in our experiments. A more significant strategy to improve costs consists in reducing the production rate to the minimum level required to meet demand. | Waste paper procurement optimization: An agent-based simulation approach
S0377221714008583 | U-shaped assembly lines are an important configuration of modern manufacturing systems due to their flexibility to adapt to varying market demands. In U-shaped lines, tasks are assigned after their predecessors or successors. Some MILP models have been proposed to formulate the U-shaped assembly line balancing problem using either–or constraints to express precedence relationships. We show that this modeling approach reported in the literature may often find optimal solutions that are infeasible and verify this on a large set of benchmark problems. We present a revision to this model to accurately express the precedence relationships without introducing additional variables or constraints. We also illustrate on the same benchmark problems that our revision always reports solutions that are feasible. | On the MILP model for the U-shaped assembly line balancing problems |
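As a hedged illustration of the modeling device at issue (a generic big-M encoding; the paper's revision concerns how such constraints express U-line precedence): the disjunction "g_1(x) ≤ 0 or g_2(x) ≤ 0" is enforced with a binary y and a sufficiently large constant M via

\[ g_1(\mathbf{x}) \le M\,(1-y), \qquad g_2(\mathbf{x}) \le M\,y, \qquad y \in \{0,1\}, \]

so that y = 1 activates the first constraint and y = 0 the second; infeasibilities of the kind the paper reports can arise when such pairs are wired to the wrong precedence direction.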
S0377221714008595 | Although linear programming and duality have been correctly incorporated in algorithms to compute the nucleolus, we have found mistakes in how they have been used in a broad range of applications. Overlooking the fact that a linear program can have multiple optimal solutions and neglecting the relevance of duality appear to be crucial sources of mistakes in computing the nucleolus. We discuss these issues and illustrate them with five mistaken examples from this and other journals. The purpose of this note is to prevent these mistakes from propagating further by clarifying how linear programming and duality can be correctly used for computing the nucleolus. | Common mistakes in computing the nucleolus