Columns: FileName (stringlengths 17–17), Abstract (stringlengths 163–6.01k), Title (stringlengths 12–421)
S0377221714010601
We introduce and study the range contract, which allows a buyer to procure from a supplier at a prescribed price any amount within a specified range. In return, the supplier is compensated up front for the width of the range with a range fee. This fee can be viewed as the buyer trading monetary value for reduced uncertainty. The range contract generalizes and unifies many common contracts, such as fixed-price, JIT, option, and quantity-flexibility contracts. The parameters that maximize the expected profit of the centralized supply chain are derived and shown to depend crucially on production flexibility. We also study the buyer’s expected profit-maximizing range endpoints as a function of the pricing parameters of the contract. Using the buyer’s optimal range, we demonstrate how the supplier can set the contract’s pricing parameters so as to maximize the supplier’s expected profit for a uniform distribution of demand. We provide computational evidence, for uniformly distributed demand, that the range contract allows the optimal decentralized supply chain to attain significant reductions in the standard deviation of profit in exchange for moderate reductions in expected profit. We further demonstrate computationally that both the buyer and the supplier can benefit simultaneously, attaining higher risk-adjusted profits than the centralized supply chain.
Range contracts: Risk sharing and beyond
S0377221714010613
This paper offers insights into how the bullwhip effect in two parallel supply chains with interacting price-sensitive demands differs from that in a serial supply chain with a single product. In particular, this research studies two parallel supply chains, each consisting of a manufacturer and a retailer, in which the external demand for each product depends on its own price and on the other product's price, and each price follows a first-order autoregressive process. We propose an analytical framework that incorporates the two parallel supply chains and explore their interactions to determine the bullwhip effect. We identify the conditions under which the bullwhip effect is amplified or lessened by interacting price-sensitive demands relative to the situation without interaction.
Analysis of the bullwhip effect in two parallel supply chains with interacting price-sensitive demands
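Illustrative note (not part of the abstract above): a minimal Python sketch of the kind of setting the paper analyses, assuming a linear demand function, AR(1) price processes and a textbook order-up-to policy with a moving-average forecast. All coefficients, the forecast rule and the bullwhip measure Var(orders)/Var(demand) are assumptions made for illustration only, not the paper's analytical model.

import numpy as np

rng = np.random.default_rng(0)
T, L = 20_000, 2            # horizon and replenishment lead time (assumed)
phi1, phi2 = 0.7, 0.5       # AR(1) coefficients of the two product prices (assumed)
a, b, c = 100.0, 2.0, 0.8   # demand intercept, own-price and cross-price sensitivities (assumed)

# First-order autoregressive price processes for the two products
p1, p2 = np.full(T, 10.0), np.full(T, 10.0)
for t in range(1, T):
    p1[t] = 10 + phi1 * (p1[t - 1] - 10) + rng.normal(0, 1)
    p2[t] = 10 + phi2 * (p2[t - 1] - 10) + rng.normal(0, 1)

# Demand for product 1 depends on its own price and on the other product's price
d1 = a - b * p1 + c * p2 + rng.normal(0, 1, T)

# Retailer orders under a simple order-up-to policy with a moving-average forecast
w = 5
forecast = np.array([d1[max(0, t - w + 1): t + 1].mean() for t in range(T)])
base_stock = (L + 1) * forecast
orders = np.maximum(0, base_stock[1:] - base_stock[:-1] + d1[:-1])

print("bullwhip ratio Var(orders)/Var(demand):", orders[w:].var() / d1[w:].var())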
S0377221714010625
In this paper, firm profit loss is decomposed as the sum of two terms related to the output price uncertainty (price expectation error and risk preference), plus one extra term expressing technical inefficiency. We then describe the implementation of our theoretical model in a robust data envelopment analysis (DEA) framework, which allows an effective and separate estimation of each term of the decomposition. In addition, we offer an operational tool to reveal producers’ risk preferences. A 2009 database of French fattening pig farms is used as an illustration. Our results indicate that risk preference and technical inefficiency are the main sources of profit loss.
A decomposition of profit loss under output price uncertainty
S0377221714010637
We study a discontinuous mispricing model of a risky asset under asymmetric information where jumps in the asset price and mispricing are modelled by Lévy processes. By contracting the filtration of the informed investor, we obtain optimal portfolios and maximum expected utilities for the informed and uninformed investors. We also discuss their asymptotic properties, which can be estimated using the instantaneous centralized moments of return. We find that jumps in mispricing increase the optimal and asymptotic utilities of the uninformed investor, yet the informed investor still retains excess utility, provided the mispricing is neither too small nor too large.
A discontinuous mispricing model under asymmetric information
S0377221714010649
Periodic fluctuations appear in coal mine safety supervision and in other fields of government safety supervision. This paper provides a theoretical explanation by building an evolutionary game model between the coal mining industry and governmental supervisory institutions. Moreover, the paper provides a numerical example to demonstrate how the initial state and the costs (or gains) influence the fluctuation amplitude and the equilibrium position. We find that the initial state and the payoffs of the different strategies are the two main determinants of the periodic fluctuation phenomenon. The successful experience of coal mine safety supervision in China shows the importance of highly efficient government safety governance in developing countries.
Historical evolution and benefit–cost explanation of periodical fluctuation in coal mine safety supervision: An evolutionary game analysis framework
S0377221714010650
With increasing global concerns regarding energy management, the concept of the smart grid has become a particularly important interdisciplinary research topic. In order to continually monitor a power utility system and efficiently observe all of the states of electric nodes and branches on a smart grid, PMUs (phasor measurement units) placed at selected nodes can monitor the operating conditions of the entire power grid. This study investigates methods for minimizing the high installation costs of PMUs, in order to monitor the entire system using a set of PMUs according to the power observation rules. Notably, this problem of monitoring a power grid can be transformed into the OPP (optimal PMU placement) problem. The objective is to simultaneously minimize the number of PMUs and ensure the complete observability of the whole power grid. This combinatorial optimization problem has been shown to be NP-complete. In this paper, we propose a hybrid two-phase algorithm for this problem. The first phase of the algorithm quickly identifies a set of candidate locations of PMUs based on a graph-theoretic decomposition approach for the power domination problem in tree-type graphs. Then, we use a local search heuristic method to derive the minimum number of PMUs in the second phase. In addition to the practical model, this study also considers the ideal model, in which all load nodes are assumed to be zero injection. The numerical studies on various IEEE power test systems demonstrate the superior performance of the proposed algorithm in both models with regard to computational time and solution quality. In particular, in the ideal model, the number of PMUs required for the test systems can be significantly reduced. We also provide theoretical lower bounds on the number of installed PMUs in the ideal model and show that the derived solution can achieve the bound for the test systems.
Hybrid search for the optimal PMU placement problem on a power grid
S0377221714010662
This paper addresses the capacitated vehicle routing problem with two-dimensional loading constraints (2L-CVRP), a generalization of the capacitated vehicle routing problem in which customer demand consists of two-dimensional, rectangular, weighted items. The objective is to design the route set of minimum cost for a homogeneous fleet of vehicles, starting and terminating at a central depot, to serve all the customers. All the items packed in one vehicle must satisfy the two-dimensional orthogonal packing constraints. A variable neighborhood search is proposed to address the routing aspect, and a skyline heuristic is adapted to examine the loading constraints. To speed up the search process, an efficient data structure (a Trie) is utilized not only to record the loading feasibility information of routes, but also to control the computational effort the skyline heuristic spends on the same route. The effectiveness of our approach is verified through experiments on widely used benchmark instances involving two distinct versions of loading constraints (unrestricted and sequential). Numerical experiments show that the proposed method outperforms all existing methods and improves or matches the majority of best known solutions for both problem versions.
A variable neighborhood search for the capacitated vehicle routing problem with two-dimensional loading constraints
S0377221714010674
This paper examines the sailing speed of containerships and bunker refueling in a liner shipping network while considering that the real speed may deviate from the planned one. It develops a mixed-integer nonlinear optimization model to minimize the total cost, consisting of ship cost, bunker cost, and inventory cost, under the worst-case bunker consumption scenario. A closed-form expression for the worst-case bunker consumption is derived, and three linearization techniques are proposed to transform the nonlinear model into a mixed-integer linear programming formulation. A case study based on the Asia–Europe–Oceania network of a global liner shipping company demonstrates the applicability of the proposed model, and interesting managerial insights are obtained.
Robust bunker management for liner shipping networks
S0377221714010686
This paper describes a limited-memory quasi-Newton method in which the initial inverse Hessian approximation is constructed based on the concept of equilibration of the inverse Hessian matrix. Curvature information about the objective function is stored in the form of a diagonal matrix, which plays the dual role of providing an initial matrix and of equilibrating the limited memory BFGS (LBFGS) iterations. Extensive numerical testing shows that the proposed diagonal scaling strategy is very effective.
Dynamic scaling on the limited memory BFGS method
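Illustrative note (not part of the abstract above): a sketch of the standard L-BFGS two-loop recursion with a diagonal initial inverse Hessian supplied as a vector d0. How that diagonal is built and updated from curvature information (the equilibration idea) is the paper's contribution and is not reproduced here; the code only shows where such a diagonal enters the recursion.

import numpy as np

def lbfgs_direction(grad, s_list, y_list, d0):
    """Two-loop L-BFGS recursion with a diagonal initial inverse Hessian d0.
    grad: current gradient; s_list: recent steps x_{k+1} - x_k;
    y_list: recent gradient differences; d0: diagonal entries (vector)."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    r = d0 * q                                   # apply the diagonal initial matrix
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += s * (a - beta)
    return -r                                    # quasi-Newton search direction

# toy usage on f(x) = 0.5 x'Ax with a hypothetical diagonal d0
A = np.diag([1.0, 10.0, 100.0, 1000.0])
x0, x1 = np.ones(4), 0.9 * np.ones(4)
g0, g1 = A @ x0, A @ x1
print(lbfgs_direction(g1, [x1 - x0], [g1 - g0], np.ones(4)))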
S0377221714010698
We consider restoring multiple interdependent infrastructure networks after a disaster damages components in them and disrupts the services provided by them. Our particular focus is on interdependent infrastructure restoration (IIR) where both the operations and the restoration of the infrastructures are linked across systems. We provide new mathematical formulations of restoration interdependencies in order to incorporate them into an interdependent integrated network design and scheduling (IINDS) problem. The IIR efforts resulting from solving this IINDS problem model a centralized decision-making environment where a single decision-maker controls the resources of all infrastructures. In reality, individual infrastructures often determine their restoration efforts in an independent, decentralized manner with little communication among them. We provide algorithms to model various levels of decentralization in IIR efforts. These algorithms are applied to realistic damage scenarios for interdependent infrastructure systems in order to determine the loss in restoration effectiveness resulting from decentralized decision-making. Our computational tests demonstrate that this loss can be greatly mitigated by having infrastructures share information about their planned restoration efforts.
Interdependent network restoration: On the value of information-sharing
S0377221715000028
The purpose of this study is to (1) assess the feasibility of predicting increases in Facebook usage frequency, (2) evaluate which algorithms perform best, and (3) determine which predictors are most important. We benchmark the performance of Logistic Regression, Random Forest, Stochastic Adaptive Boosting, Kernel Factory, Neural Networks and Support Vector Machines using five times twofold cross-validation. The results indicate that it is feasible to create models with high predictive performance. The top performing algorithm was Stochastic Adaptive Boosting with a cross-validated AUC of 0.66 and an accuracy of 0.74. The most important predictors include deviation from regular usage patterns, frequencies of likes of specific categories and group memberships, average photo album privacy settings, and recency of comments. Facebook and other social networks could use predictions of increases in usage frequency to customize their services, for example by pacing the rate of advertisements and friend recommendations, or by adapting News Feed content altogether. The main contribution of this study is that it is the first to assess the prediction of increases in usage frequency in a social network.
CRM in social media: Predicting increases in Facebook usage frequency
S0377221715000041
We study various two-agent scheduling problems on a single machine with equal job processing times. The equal processing time assumption enables us to design new polynomial-time or faster-than-known optimization algorithms for many problems. We prove, however, that there exists a subset of problems for which the computational complexity remains NP-hard. The set of hard problems includes variations in which the objective function of each agent is either the weighted sum of completion times or the weighted number of tardy jobs. For these problems, we present pseudo-polynomial time algorithms.
Single machine scheduling with two competing agents and equal job processing times
S0377221715000053
Branch and Fix Coordination is an algorithm intended to solve large scale multi-stage stochastic mixed integer problems, based on the particular structure of such problems, so that they can be broken down into smaller subproblems. With this in mind, it is possible to use distributed computation techniques to solve the several subproblems in a parallel way, almost independently. To guarantee non-anticipativity in the global solution, the values of the integer variables in the subproblems are coordinated by a master thread. Scenario ‘clusters’ lend themselves particularly well to parallelisation, allowing us to solve some problems noticeably faster. Thanks to the decomposition into smaller subproblems, we can also attempt to solve otherwise intractable instances. In this work, we present details on the computational implementation of the Branch and Fix Coordination algorithm.
A parallelised distributed implementation of a Branch and Fix Coordination algorithm
S0377221715000065
Despite the fact that Capacitated Arc Routing Problems (CARPs) have received substantial attention in the literature, most of the research concentrates on the symmetric, single-depot version of the problem. In this paper, we fill this gap by proposing an approach to solving a more general version of the problem and analysing its properties. We present an MILP formulation that accommodates the asymmetric multi-depot case and consider valid inequalities that may be used to tighten its LP relaxation. A symmetry breaking scheme for the single-depot case is also proposed. An extensive numerical study is carried out to investigate the properties of the problem and the proposed solution approach.
An approach to the asymmetric multi-depot capacitated arc routing problem
S0377221715000077
We propose an exact method, based on Generalized Benders Decomposition, to select the best M features during induction. We provide details of the method and highlight some interesting parallels between the technique proposed here and some of those published in the literature. We also propose a relaxation of the problem where selecting too many features is penalized. The original method performs well on a variety of data sets. The relaxation, though competitive, is sensitive to the penalty parameter.
Feature selection for support vector machines using Generalized Benders Decomposition
S0377221715000272
This paper studies the optimal investment strategies of an incumbent and a potential entrant that can both choose between a product-flexible and a dedicated technology, in a two-product market characterized by uncertain demand. The product-flexible production technology has certain advantages, especially when the economic environment is uncertain. On the other hand, the dedicated production technology allows a firm to commit to production quantities. This gives strategic advantages, which can outweigh the ‘value of flexibility’. It turns out that, for some scenarios, both firms prefer the dedicated production technology. However, we find that in a game with sequential technology choices, both firms investing in the dedicated technology cannot be an equilibrium. Especially when the economic environment is more uncertain, the incumbent overinvests in product-flexible capacity to force the entrant to choose the dedicated technology. Then, the incumbent is the only firm with the product-flexible production technology, which results in a high payoff.
Dedicated vs product flexible production technology: Strategic capacity investment choice
S0377221715000284
The redundancy allocation problem is the problem of finding an optimal allocation of redundant components subject to a set of resource constraints. The problem studied in this paper refers to a series-parallel system configuration and allows for component mixing. We propose a new modeling/solution approach, in which the problem is transformed into a multiple choice knapsack problem and solved to optimality via a branch and cut algorithm. The algorithm is tested on well-known sets of benchmark instances. All instances have been solved to optimality in milliseconds or very few seconds on a normal workstation.
An exact algorithm for the reliability redundancy allocation problem
S0377221715000296
This study investigates the look-ahead control of a conveyor-serviced production station (CSPS), viewed as a production center, which is connected to a sales center. The production station is equipped with a buffer to temporarily store the parts that will flow into the product bank of the sales center after processing. The whole two-center system is characterized by random parts arrival, random customer demand, random processing time and limited buffer or bank capacities. Thus, the decision-making on the look-ahead range of such a demand-driven CSPS is subject to the constraints of production and sales levels. In this paper, we focus on modeling the stochastic control problem and providing solutions for finding the optimal look-ahead control policy under either average- or discounted-cost criteria. We first establish a detailed semi-Markov decision process for the look-ahead control of the demand-driven CSPS by combining the vacancies of both the buffer and the bank into one state, which can be solved by policy iteration or value iteration if the system parameters are known precisely. Then, to avoid the curse of dimensionality and the need for an explicit system model in the numerical optimization methods, we also propose a Q-learning algorithm combined with a simulated annealing technique to derive approximate solutions. Simulation results are finally presented to show that, with the established model and the proposed optimization methods, the system can achieve an optimal or suboptimal look-ahead control policy once the capacities of both the buffer and the bank are designed appropriately.
Modeling and optimization control of a demand-driven, conveyor-serviced production station
S0377221715000302
The global minimum variance portfolio computed using the sample covariance matrix is known to be negatively affected by parameter uncertainty, an important component of model risk. Using a robust approach, we introduce a portfolio rule for investors who wish to invest in the global minimum variance portfolio because of its strong historical track record, but who seek a rule that is robust to parameter uncertainty. Our robust portfolio corresponds theoretically to the global minimum variance portfolio in the worst-case scenario, with respect to a set of plausible alternative estimators of the covariance matrix in the neighbourhood of the sample covariance matrix. Hence, it provides protection against errors in the reference sample covariance matrix. Monte Carlo simulations illustrate the dominance of the robust portfolio over its non-robust counterpart in terms of portfolio stability, variance and risk-adjusted returns. Empirically, we compare the out-of-sample performance of the robust portfolio to various competing minimum variance portfolio rules in the literature. We observe that the robust portfolio often has lower turnover and variance and higher Sharpe ratios than the competing minimum variance portfolios.
Global minimum variance portfolio optimisation under some model risk: A robust regression-based approach
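Illustrative note (not part of the abstract above): the plain sample-covariance global minimum variance portfolio that serves as the non-robust benchmark, w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). The paper's robust rule replaces the sample covariance by a worst-case estimator from a neighbourhood of it; that construction is not reproduced in this sketch.

import numpy as np

def gmv_weights(returns):
    """Global minimum variance weights from the sample covariance matrix
    (the non-robust benchmark; rows are observations, columns are assets)."""
    sigma = np.cov(returns, rowvar=False)
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

# usage with hypothetical daily returns: 500 days x 5 assets
rng = np.random.default_rng(1)
w = gmv_weights(rng.normal(0.0005, 0.01, size=(500, 5)))
print(w, w.sum())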
S0377221715000314
The analytic network process (ANP) is a methodology for multi-criteria decision making used to derive priorities of the compared elements in a network hierarchy, where the dependences and feedback within and between the elements can be considered. However, the ANP requires input preferences to be expressed as crisp judgments, which is often unfavorable in practical applications. As an extension of the ANP, a generalized analytic network process (G-ANP) is developed to allow multiple forms of preferences, such as crisp (fuzzy) judgments, interval (interval fuzzy) judgments, hesitant (hesitant fuzzy) judgments and stochastic (stochastic fuzzy) judgments. In the G-ANP, a concept of complex comparison matrices (CCMs) is developed to collect decision makers’ preferences in these multiple forms. From a stochastic point of view, we develop an eigenvector method based stochastic preference method (EVM-SPM) to derive priorities from CCMs. The main steps of the G-ANP are summarized, and the implementation of the G-ANP in Matlab and Excel environments is given in detail, which also serves as a prototype for a decision support system. A real-life example of piracy risk assessment for the energy channels of China is presented to demonstrate the G-ANP.
Generalized analytic network process
S0377221715000326
Retail activities are increasingly exposed to unseasonal weather causing lost sales and profits, as climate change is aggravating climate variability. Although research has provided insights into the role of weather on consumption, little is known about the precise relationship between weather and sales for strategic and financial decision-making. Using apparel as an illustration, for all seasons, we estimate the impact on sales caused by unexpected deviations of daily temperature from seasonal patterns. We apply Seasonal Trend decomposition using Loess to isolate changes in sales volumes. We use a linear regression to find the relationship between temperature and sales anomalies and construct the historical distribution to determine sales-at-risk due to unseasonal weather. We show how to use weather derivatives to offset the potential loss. Our contribution is twofold. We provide a new general method for managers to understand how their performance is weather-related. We lay out a blueprint for tailor-made weather derivatives to mitigate this risk.
Assessing and hedging the cost of unseasonal weather: Case of the apparel sector
S0377221715000338
In this paper, we conduct a study of the job-shop scheduling problem with reverse flows. This NP-hard problem is characterized by two flows of jobs that cover the same machines in opposite directions. The objective is to minimize the maximal completion time of the jobs (i.e., the makespan). We start by analyzing the complexity and identifying particular cases of the problem. Then, we provide a mathematical model that we use in conjunction with a solver to determine the computational times. These times are often too long because the problem is NP-hard. Thus, in this paper, we present a new heuristic method for solving the NP-hard 3-machine case. We evaluate the performance of this heuristic by computing several lower bounds and conducting tests on a Taillard-based benchmark. These tests give satisfying results and show that the heuristic ensures good performance when the two flows have comparable numbers of jobs. Then, we suggest a hybrid method that consists of a combination between a heuristic and a solver-based procedure to address the m-machine problem.
Job-shop production scheduling with reverse flows
S0377221715000351
Endogeneity, and the distortions it causes in the estimation of economic models, is a common problem in the econometrics literature. Although non-parametric methods like Data Envelopment Analysis (DEA) are among the most used techniques for measuring technical efficiency, the effects of such a problem on efficiency estimates have received little attention. The aim of this paper is to alert DEA practitioners to the accuracy of their estimates under the presence of endogeneity. To this end, we first illustrate the endogeneity problem, its causes in production processes and its implications for efficiency measurement from a conceptual perspective. Second, using synthetic data generated in a Monte Carlo experiment, we evaluate how different levels of positive and negative endogeneity can impair DEA estimations. We conclude that, although DEA is robust to negative endogeneity, a high positive endogeneity level, i.e., the existence of a high positive correlation between one input and the true efficiency level, might severely bias DEA estimates.
Testing the accuracy of DEA estimates under endogeneity through a Monte Carlo simulation
S0377221715000363
Food product contamination has potentially devastating effects on companies and supply chains. However, the impact of contamination has still not been thoroughly studied from a supply chain planning perspective. This paper models a contamination event in a generic food supply chain consisting of suppliers, processing centers, and retailers. Contamination is detected through either company or government agency sampling tests or through reports of a food borne illness. In this research, we analyze the impact of origin and choice of sampling strategies, and product and supply chain attributes on a contamination event. We also simulate a real-world tomato contamination case to gain further insights.
Product contamination in a multi-stage food supply chain
S0377221715000375
In this paper, a continuous multi-product model is developed to represent the shop floor dynamics of a job shop, based on dynamic modeling and on analogies to electrical components. This approach allows the mathematical formulation of the model (state representation) and the analysis of its dynamic response via simulation. A real case application in the textile industry is presented. Thus, this research contributes in the following ways: first, by proposing a model that is suitable for multi-product systems with an intricate job shop configuration and that is generalizable to various manufacturing systems; second, by presenting a real case application of the proposed model. As practical implications, it provides production managers and practitioners with a prescriptive decision model that considers the dynamics of the production systems and the interdependencies of the decisions made on the shop floor. From the academic perspective, it contributes to the existing literature by presenting the application of an alternative modeling methodology, and by extending this methodology to manufacturing systems with multiple products, instead of single-product systems. Continuous models such as the one proposed can benefit from a wide range of tools for system analysis and control design coming from control theory. Although these tools have been extensively applied to model the supply chain, applications devoted to the plant level seem to have been neglected over the past years. This model also aims to contribute in this direction.
Modeling the dynamics of a multi-product manufacturing system: A real case application
S0377221715000387
In this paper we deal with fusion functions, i.e., mappings from [0, 1]^n into [0, 1]. As a generalization of the standard monotonicity and recently introduced weak monotonicity, we introduce and study the directional monotonicity of fusion functions. For distinguished fusion functions the sets of all directions in which they are increasing are determined. Moreover, in the paper the directional monotonicity of piecewise linear fusion functions is completely characterized. These results cover, among others, weighted arithmetic means, OWA operators, the Choquet, Sugeno and Shilkret integrals.
Directional monotonicity of fusion functions
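Illustrative note (not part of the abstract above): the usual definition of directional monotonicity, written out in LaTeX for reference; the notation is assumed rather than quoted from the paper. Standard monotonicity corresponds to being r-increasing for every coordinate direction, and weak monotonicity to being (1, ..., 1)-increasing.

% Directional monotonicity of a fusion function F : [0,1]^n -> [0,1]
% (standard definition; notation assumed, not quoted from the paper)
\text{Let } \vec r \in \mathbb{R}^n,\ \vec r \neq \vec 0. \quad
F \text{ is } \vec r\text{-increasing if }\;
F(\mathbf{x} + c\,\vec r) \;\ge\; F(\mathbf{x})
\quad \text{for all } \mathbf{x} \in [0,1]^n \text{ and } c > 0
\text{ such that } \mathbf{x} + c\,\vec r \in [0,1]^n.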
S0377221715000399
In this paper we define a new problem, the aim of which is to find a set of k dissimilar solutions for a vehicle routing problem (VRP) on a single instance. This problem has several practical applications in the cash-in-transit sector and in the transportation of hazardous materials. A min–max mathematical formulation is proposed which requires a maximum similarity threshold between VRP solutions, and the number k of dissimilar VRP solutions that need to be generated. An index to measure similarities between VRP solutions is defined based on the edges shared between pairs of alternative solutions. An iterative metaheuristic to generate k dissimilar alternative solutions is also presented. The solution approach is tested using large and medium size benchmark instances for the capacitated vehicle routing problem.
The k-dissimilar vehicle routing problem
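Illustrative note (not part of the abstract above): one simple edge-based similarity index between two VRP solutions, sketched to make "edges shared between pairs of alternative solutions" concrete. The instance, the depot convention and the normalisation by the smaller edge set are assumptions; the paper's exact index may be defined differently.

def route_edges(routes, depot=0):
    """Undirected edge set of a VRP solution given as a list of routes,
    each route being a sequence of customer ids visited between depot stops."""
    edges = set()
    for route in routes:
        stops = [depot] + list(route) + [depot]
        for a, b in zip(stops, stops[1:]):
            edges.add(frozenset((a, b)))
    return edges

def similarity(sol_a, sol_b):
    """Fraction of edges shared by the two solutions (one possible variant)."""
    ea, eb = route_edges(sol_a), route_edges(sol_b)
    return len(ea & eb) / min(len(ea), len(eb))

# hypothetical 6-customer instance with two routes per solution
s1 = [[1, 2, 3], [4, 5, 6]]
s2 = [[1, 2, 6], [4, 5, 3]]
print(similarity(s1, s2))   # 0.75: six of the eight edges coincide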
S0377221715000405
In this paper we introduce the discrete time window assignment vehicle routing problem (DTWAVRP), which can be viewed as a two-stage stochastic optimization problem. Given a set of customers that must be visited regularly on the same day within some period of time, the first-stage decisions are to assign to each customer a time window from a set of candidate time windows before demand is known. In the second stage, when demand is revealed for each day of the time period, vehicle routes satisfying vehicle capacity and the assigned time windows are constructed. The objective of the DTWAVRP is to minimize the expected total transportation cost. To solve this problem, we develop an exact branch-price-and-cut algorithm and derive from it five column generation heuristics that allow us to solve larger instances than the exact algorithm can handle. We illustrate the performance of these algorithms by means of computational experiments performed on randomly generated instances.
The discrete time window assignment vehicle routing problem
S0377221715000417
We present a column generation algorithm for solving the bi-objective multi-commodity minimum cost flow problem. This method is based on the bi-objective simplex method and Dantzig–Wolfe decomposition. The method is initialised by optimising the problem with respect to the first objective, a single objective multi-commodity flow problem, which is solved using Dantzig–Wolfe decomposition. Then, similar to the bi-objective simplex method, our algorithm iteratively moves from one non-dominated extreme point to the next one by finding entering variables with the maximum ratio of improvement of the second objective over deterioration of the first objective. Our method reformulates the problem into a bi-objective master problem over a set of capacity constraints and several single objective linear fractional sub-problems each over a set of network flow conservation constraints. The master problem iteratively updates cost coefficients for the fractional sub-problems. Based on these cost coefficients an optimal solution of each sub-problem is obtained. The solution with the best ratio objective value out of all sub-problems represents the entering variable for the master basis. The algorithm terminates when there is no entering variable which can improve the second objective by deteriorating the first objective. This implies that all non-dominated extreme points of the original problem are obtained. We report on the performance of the algorithm on several directed bi-objective network instances with different characteristics and different numbers of commodities.
A bi-objective column generation algorithm for the multi-commodity minimum cost flow problem
S0377221715000429
Dynamic inventory rationing is considered for systems with multiple demand classes, stationary stochastic demands, and backordering. In the literature, dynamic programming has often been applied to address this type of problem. However, due to the curse of dimensionality, computation is a critical challenge for dynamic programming. In this paper, an innovative two-step approach is proposed based on an idea similar to the certainty equivalence principle. First, the deterministic inventory rationing problem is studied, in which the future demands are set to the expectation of the stochastic demand processes. The important properties obtained from solving this problem with the KKT conditions are then used to develop effective dynamic rationing policies for stochastic demands, which gives closed-form expressions for the dynamic rationing thresholds. These expressions are easy to calculate and are applicable to any number of demand classes. Numerical results show that the expressions are close to, and provide a lower bound for, the optimal dynamic thresholds. They also shed light on important managerial insights, for example, the relation between different parameters and the rationing thresholds.
Multi-class dynamic inventory rationing with stochastic demands and backordering
S0377221715000430
We consider a discrete-time inventory system for a perishable product where demand exists for product of different ages; an example of such a product is blood platelets. In addition to the classical costs for inventory holding, outdating, and shortage, our model includes substitution (mismatch) costs incurred when a demand for a certain-aged item is satisfied by a different-aged item. We propose a simple inventory replenishment and allocation heuristic to minimize the expected total cost over an infinite time horizon. In our heuristic, inventory of the newest items is replenished in fixed quantities and the newest items are protected for future use by limiting some substitutions when making allocation decisions according to a critical-level policy. We model our problem as a Markov Decision Process (MDP), derive the costs of our heuristic policy, and computationally compare this policy to extant “near optimal” policies in the literature. Our extensive computational study shows that our policy leads to superior performance compared to existing heuristics in the literature, particularly when supplies are limited.
Blood platelet inventory management with protection levels
S0377221715000442
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches we develop both Continuous-Time Markov Chain and Discrete Event Simulation models, to provide an insightful analysis of the public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications by hospital planners and strategic decision makers.
Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy
S0377221715000454
The portfolio optimization literature has shed only a little light on the dependence structure among financial returns and on the fat-tailed distributions associated with them. This study addresses this shortcoming by exploiting stable distributions as the marginal distributions together with a dependence structure based on a copula function. We formulate the portfolio optimization problem as a multi-objective mixed integer program. Value-at-Risk (VaR) is specified as the risk measure due to its intuitive appeal and importance in financial regulations. In order to enhance the model's applicability, we take into account cardinality and quantity constraints. Imposing such practical constraints results in a non-continuous feasible region. Hence, we propose two variants of multi-objective particle swarm optimization (MOPSO) algorithms to tackle this issue. Finally, a comparative study among the proposed MOPSOs, NSGA-II and SPEA2 is carried out to determine which algorithm performs best. The empirical results reveal that one of the proposed MOPSOs is superior to the other salient algorithms in terms of performance metrics.
Multi-objective portfolio optimization considering the dependence structure of asset returns
S0377221715000466
In this paper, we consider issues of sustainability in the context of joint trade credit and inventory management in which the demand depends on the length of the credit period offered by the retailer to its customers. We quantify the impacts of the credit period and environmental regulations on the inventory model. Starting with some mild assumptions, we first analyze the model with generalized demand and default risk rates under the Carbon Cap-and-Trade policy, and then we make some extensions to the model with the Carbon Offset policy. We further analytically examine the effects of carbon emission parameters on the retailer’s trade credit and replenishment strategies. Finally, a couple of numerical examples and sensitivity analysis are given to illustrate the features of the proposed model, which is followed by concluding remarks.
Sustainable trade credit and replenishment decisions with credit-linked demand under carbon emission constraints
S0377221715000478
Cloud computing promises the flexible delivery of computing services in a pay-as-you-go manner. It allows customers to easily scale their infrastructure and save on the overall cost of operation. However, Cloud service offerings can only thrive if customers are satisfied with service performance. Allowing instantaneous access and flexible scaling while maintaining service levels and offering competitive prices poses a significant challenge to Cloud computing providers. Furthermore, services will remain available in the long run only if this business generates a stable revenue stream. To address these challenges, we introduce novel policy-based service admission control models that aim at maximizing the revenue of Cloud providers while taking informational uncertainty regarding resource requirements into account. Our evaluation shows that policy-based approaches statistically significantly outperform first-come-first-served approaches, which are still the state of the art. Furthermore, the results give insights into how and to what extent uncertainty has a negative impact on revenue.
Revenue management for Cloud computing providers: Decision models for service admission control under non-probabilistic uncertainty
S0377221715000491
Patriksson (2008) provided a then up-to-date survey on the continuous, separable, differentiable and convex resource allocation problem with a single resource constraint. Since the publication of that paper the interest in the problem has grown: several new applications have arisen where the problem at hand constitutes a subproblem, and several new algorithms have been developed for its efficient solution. This paper therefore serves three purposes. First, it provides an up-to-date extension of the survey of the literature of the field, complementing the survey in Patriksson (2008) with more than 20 books and articles. Second, it contributes improvements of some of these algorithms, in particular an improvement of the pegging (that is, variable fixing) process in the relaxation algorithm, and an improved means to evaluate subsolutions. Third, it numerically evaluates several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies, as well as a quasi-Newton method. Our conclusion is that our modification of the relaxation algorithm performs the best. At least for problem sizes up to 30 million variables, the practical time complexity of the breakpoint and relaxation algorithms is linear.
Algorithms for the continuous nonlinear resource allocation problem—New implementations and numerical studies
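Illustrative note (not part of the abstract above): a tiny stand-in for the dual/breakpoint idea on the simplest member of this problem class, the continuous quadratic knapsack min 0.5*sum (x_i - a_i)^2 s.t. sum x_i = b, l_i <= x_i <= u_i, solved by bisection on the Lagrange multiplier. The surveyed relaxation and breakpoint algorithms, and the pegging improvements discussed in the paper, are more elaborate and are not reproduced here.

import numpy as np

def quad_knapsack(a, l, u, b, tol=1e-10):
    """Bisection on the multiplier for the continuous quadratic knapsack
    (assumes feasibility, i.e. sum(l) <= b <= sum(u))."""
    a, l, u = (np.asarray(v, dtype=float) for v in (a, l, u))

    def x_of(lam):                        # componentwise minimizer for a fixed multiplier
        return np.clip(a - lam, l, u)

    lo, hi = (a - u).min(), (a - l).max() # multiplier range covering all breakpoints
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x_of(mid).sum() > b:           # resource over-used -> raise the multiplier
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

x = quad_knapsack(a=[3.0, 1.0, 2.0], l=[0, 0, 0], u=[2, 2, 2], b=4.0)
print(x, x.sum())                          # -> [2.0, 0.5, 1.5], 4.0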
S0377221715000508
In a multi-component system, the assumption of failure independence among components is seldom valid, especially for those complex systems with complicated failure mechanism. For such systems, warranty cost is subject to all the factors including system configuration, quality of each component and the extent of failure dependence among components. In this paper, a model is developed based on renewing free-replacement warranty by considering failure interaction among components. It is assumed that whenever a component (subsystem) fails, it can induce a failure of one or more of the remaining components (subsystems). Cost models for series and parallel system configurations are presented, followed by numerical examples with sensitivity analysis. The results show that, compared with series systems, warranty cost for parallel systems is more sensitive to failure interaction.
Cost analysis for multi-component system with failure interaction under renewing free-replacement warranty
S0377221715000521
Process planning and jobshop scheduling problems are both crucial functions in manufacturing. In reality, dynamic disruptions such as machine breakdown or rush order will affect the feasibility and optimality of the sequentially-generated process plans and machining schedules. With the approach of integrated process planning and scheduling (IPPS), the actual process plan and the schedule are determined dynamically in accordance with the order details and the status of the manufacturing system. In this paper, an object-coding genetic algorithm (OCGA) is proposed to resolve the IPPS problems in a jobshop type of flexible manufacturing systems. An effective object-coding representation and its corresponding genetic operations are suggested, where real objects like machining operations are directly used to represent genes. Based on the object-coding representation, customized methods are proposed to fulfill the genetic operations. An unusual selection and a replacement strategy are integrated systematically for the population evolution, aiming to achieve near-optimal solutions through gradually improving the overall quality of the population, instead of exploring neighborhoods of good individuals. Experiments show that the proposed genetic algorithm can generate outstanding outcomes for complex IPPS instances.
An object-coding genetic algorithm for integrated process planning and scheduling
S0377221715000533
Small and Medium-sized Enterprises (SMEs) face many obstacles when they try to access the credit market. These obstacles increase if the SMEs are innovative. In this case, financial data are insufficient or even unreliable. Thus, building a judgmental rating model, mainly based on qualitative criteria (soft information), is very important for financing SMEs’ activities. Until now, there has not been a multicriteria credit risk model based on soft information for innovative SMEs. In this paper, we try to fill this gap by presenting a multicriteria credit risk model based on the ELECTRE-TRI method. A SMAA-TRI analysis is also implemented to obtain robust assignments of SMEs to the risk classes. SMAA-TRI incorporates ELECTRE-TRI by considering different sets of preference parameters and uncertainty in the data via Monte Carlo simulations. Finally, we carry out a real case study to illustrate the multicriteria credit risk model.
The financing of innovative SMEs: A multicriteria credit rating model
S0377221715000545
This paper presents a novel approach for solving an integrated production planning and scheduling problem. In theory as well as in practice, because of their complexity, these two decision levels are most of the time treated sequentially. Scheduling largely depends on the production quantities (lot sizes) computed at the production planning level, and ignoring scheduling constraints in planning leads to inconsistent decisions. Integrating production planning and scheduling is therefore important for efficiently managing operations. An integrated model and an iterative solution procedure were proposed in earlier research papers; however, that approach has limitations, in particular when solving the planning problem. In this paper, a new formulation is proposed to determine a feasible optimal production plan, i.e. lot sizes, for a fixed sequence of operations on the machines when setup costs and times are taken into account. Capacity constraints correspond to paths of the conjunctive graph associated with the sequence. An original Lagrangian relaxation approach is proposed to solve this NP-hard problem. A lower bound is derived and an upper bound is calculated using a novel constructive heuristic. The quality of the approach is tested on numerous problem instances.
A Lagrangian heuristic for an integrated lot-sizing and fixed scheduling problem
S0377221715000557
The paper deals with the definition and the computation of surrogate upper bound sets for the bi-objective bi-dimensional binary knapsack problem. It introduces the Optimal Convex Surrogate Upper Bound set, which is the tightest possible definition based on the convex relaxation of the surrogate relaxation. Two exact algorithms are proposed: an enumerative algorithm and its improved version. This second algorithm results from an accurate analysis of the surrogate multipliers and of the dominance relations between bound sets. Based on the improved exact algorithm, an approximate version is derived. The proposed algorithms are benchmarked on a dataset composed of three groups of numerical instances. The performances are assessed through a comparative analysis in which the exact algorithms are compared with each other and the approximate algorithm is compared with an algorithm introduced in a recent research work.
Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems
S0377221715000569
Resource loading appears in many variants in tactical (mid-term) capacity planning in multi-project environments. It develops a rough sketch of the resource usage and timing of the work packages of a portfolio of orders. The orders need to be executed within a time horizon organized into periods, each of which has a known number of workers available. Each order has a time window during which it must be executed, as well as an upper and lower bound on the number of workers that can work on this order in a period. The duration of the order is not fixed beforehand, but depends on the number of workers (intensity) with which it is executed. In this article we define three fundamental variants of resource loading and study six special cases that are common to the three variants. We present algorithms for those cases that can be solved either in polynomial time or in pseudo-polynomial time. The remaining cases are proven to be NP-complete in the strong sense, and we discuss the existence of approximation algorithms for some of these cases. Finally, we comment on the validity of our results when orders must be executed without preemption. Although inspired by a number of practical applications, this work focuses on the properties of the underlying generic combinatorial problems. Our findings contribute to a better understanding of these problems and may also serve as a reference work for authors looking to design efficient algorithms for similar problems.
Resource loading with time windows
S0377221715000570
It has recently been shown that the incorporation of weight restrictions in models of data envelopment analysis (DEA) may induce free or unlimited production of output vectors in the underlying production technology, which is expressly disallowed by standard production assumptions. This effect may either result in an infeasible multiplier model with weight restrictions or remain undetected by normal efficiency computations. The latter is potentially troubling because even if the efficiency scores appear unproblematic, they may still be assessed in an erroneous model of production technology. Two approaches to testing the existence of free and unlimited production have recently been developed: computational and analytical. While the latter is more straightforward than the former, its application is limited only to unlinked weight restrictions. In this paper we develop several new analytical conditions for a larger class of unlinked and linked weight restrictions.
Consistent weight restrictions in data envelopment analysis
S0377221715000582
Generalized geometric programming (GGP) problems are converted to mixed-integer linear programming (MILP) problems using piecewise-linear approximations. Our approach is to approximate a multiple-term log-sum function of the form log(x_1 + x_2 + ... + x_n) in terms of a set of linear equalities or inequalities of log x_1, log x_2, ..., and log x_n, where x_1, ..., x_n are strictly positive. The advantage of this approach is its simplicity and readiness to implement and solve using commercial MILP solvers. While MILP problems in general are no easier than GGP problems, this approach is justified by the phenomenal progress of computing power of both personal computers and commercial MILP solvers. The limitation of this approach is discussed along with numerical tests.
A MILP formulation for generalized geometric programming using piecewise-linear approximations
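Illustrative note (not part of the abstract above): a numerical sketch of one way a two-term log-sum can be handled with piecewise-linear pieces, using the identity log(x_1 + x_2) = log x_1 + softplus(log x_2 - log x_1) and tangent cuts of the convex softplus function. This only illustrates the approximation building block; the binary/SOS machinery of an actual MILP encoding, and the paper's exact construction, are not reproduced.

import numpy as np

def softplus(t):
    return np.log1p(np.exp(t))

def tangent_cuts(lo, hi, k):
    """Tangent lines of softplus at k points; since softplus is convex,
    each tangent is a valid linear under-estimator."""
    ts = np.linspace(lo, hi, k)
    slopes = 1.0 / (1.0 + np.exp(-ts))           # softplus'(t) = sigmoid(t)
    intercepts = softplus(ts) - slopes * ts
    return slopes, intercepts

def approx_log_sum(x1, x2, slopes, intercepts):
    """Approximate log(x1 + x2) as log x1 + max_j (a_j * (log x2 - log x1) + b_j)."""
    d = np.log(x2) - np.log(x1)
    return np.log(x1) + np.max(slopes * d + intercepts)

slopes, intercepts = tangent_cuts(-8.0, 8.0, 15)
x1, x2 = 3.0, 5.0
print(approx_log_sum(x1, x2, slopes, intercepts), np.log(x1 + x2))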
S0377221715000594
This article presents a practicable algorithm for globally solving the sum of linear ratios problem (SLR). The algorithm works by globally solving a bilinear programming problem (EQ) that is equivalent to problem (SLR). In the algorithm, by utilizing the convex and concave envelopes of the bilinear function, the initial nonconvex programming problem is reduced to a sequence of linear relaxation programming problems. In order to improve the computational efficiency of the algorithm, a new accelerating technique is introduced, which makes it possible to delete a large part of the investigated region that contains no globally optimal solution of (EQ). By combining this technique with branch and bound operations, a global optimization algorithm is designed for solving problem (SLR). Finally, numerical experimental results show the feasibility and efficiency of the proposed algorithm.
A practicable branch and bound algorithm for sum of linear ratios problem
S0377221715000600
This paper proposes a variant of the well-known capacitated vehicle routing problem that models the routing of vehicles in the cash-in-transit industry by introducing a risk constraint. In the Risk-constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP), the risk of being robbed, which is assumed to be proportional both to the amount of cash being carried and the time or the distance covered by the vehicle carrying the cash, is limited by a risk threshold. A library containing two sets of instances for the RCTVRP, some with known optimal solution, is generated. A mathematical formulation is developed and small instances of the problem are solved by using IBM CPLEX. Four constructive heuristics as well as a local search block composed of six local search operators are developed and combined using two different metaheuristic structures: a multistart heuristic and a perturb-and-improve structure. In a statistical experiment, the best parameter settings for each component are determined, and the resulting heuristic configurations are compared in their best possible setting. The resulting metaheuristics are able to obtain solutions of excellent quality in very limited computing times.
Metaheuristics for the risk-constrained cash-in-transit vehicle routing problem
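Illustrative note (not part of the abstract above): a small sketch of the risk measure described in the abstract, computed for a single collection route as the sum over legs of (cash carried on the leg) x (leg length) and compared against a threshold. The instance data and the threshold value are hypothetical.

def route_risk(route, pickup, dist, depot=0):
    """Risk of one cash-in-transit collection route: cash accumulates at each
    stop, and every travelled leg contributes (cash on board) * (leg length)."""
    risk, cash, prev = 0.0, 0.0, depot
    for customer in route:
        risk += cash * dist[prev][customer]
        cash += pickup[customer]
        prev = customer
    risk += cash * dist[prev][depot]          # return leg carries all collected cash
    return risk

# hypothetical 3-customer instance
dist = {0: {1: 4, 2: 6, 3: 5}, 1: {0: 4, 2: 3, 3: 7},
        2: {0: 6, 1: 3, 3: 2}, 3: {0: 5, 1: 7, 2: 2}}
pickup = {1: 100.0, 2: 250.0, 3: 80.0}
T_max = 4000.0                                # risk threshold (assumed)
r = route_risk([1, 2, 3], pickup, dist)
print(r, r <= T_max)                          # 3150.0 True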
S0377221715000612
The failure processes of repairable systems may be impacted by operational and environmental stress factors. To accommodate such factors, reliability can be modelled using a multiplicative intensity function. In the proportional intensity model, the failure intensity is the product of the failure intensity function of the baseline system that quantifies intrinsic factors and a function of covariates that quantify extrinsic factors. The existing literature has extensively studied the failure processes of repairable systems using general repair concepts such as age-reduction when no covariate effects are considered. This paper investigates different approaches for modelling the failure and repair process of repairable systems in the presence of time-dependent covariates. We derive probabilistic properties of the failure processes for such systems.
Decline and repair, and covariate effects
S0377221715000624
Real-world applications for vehicle collection or delivery along streets usually lead to arc routing problems with additional and complicating constraints. In this paper we focus on arc routing with an additional constraint that limits the number of nodes shared by different vehicle service routes, i.e. vehicle service routes with a limited number of intersections. This constraint leads to solutions that are better shaped for real application purposes. We propose a new problem, the bounded overlapping MCARP (BCARP), which is defined as the mixed capacitated arc routing problem (MCARP) with an additional constraint imposing an upper bound on the number of nodes that are common to different routes. The best feasible upper bound is obtained from a modified MCARP in which the minimization criterion is the overlapping of the routes. We show how to compute this bound by solving a simpler problem. Heuristics are also proposed to obtain feasible solutions for the larger BCARP instances. Computational results on two well-known instance sets show that, with only a small increase in total travel time, the BCARP model produces solutions that are more attractive to implement in practice than those produced by the MCARP model.
The mixed capacitated arc routing problem with non-overlapping routes
S0377221715000636
Systems of systems are collections of independent systems which interact and share information to provide services. To communicate, systems can opportunistically make use of contacts that occur when two entities are close enough to each other. In this paper, it is assumed that reliable predictions can be made about the sequence of such contacts for each system. An information item (a datum) is split into several datum units which are to be delivered to recipient systems. During a contact between two systems, a sending system can transfer one stored datum unit to a receiving system. Source systems initially store some of the datum units. The data transfer problem consists in searching for a valid transfer plan, i.e. a transfer plan allowing the datum units to be transmitted from their source systems to the recipient systems. The dissemination problem consists in searching for a valid transfer plan which minimizes the dissemination length, i.e. the number of contacts needed to deliver all the datum units to the recipient nodes. To our knowledge, there is no previous work attempting to determine the theoretical complexity of these problems. The aim of this paper is to determine the frontier between easy and hard problems. We show that the problems are strongly NP-Hard when the number of recipients is equal to 2 or more (while the number of datum units is unbounded) or the number of datum units is equal to 2 or more (while the number of recipients is unbounded). We also show that these problems are polynomially solvable when the number of datum units or the number of recipient nodes is equal to 1, or when these parameters are all upper bounded by given positive numbers. The complexity of two related problems is also studied: it is shown that deciding whether there exist k mutually arc-disjoint branchings in an evolving graph, and whether there exist k arc-disjoint Steiner trees in a directed graph without circuits, are both strongly NP-Complete.
The data transfer problem in a system of systems
S0377221715000648
Population growth creates a challenge to food availability and access. To balance supply with growing demand, more food has to move from production to consumption sites. Moreover, demand for locally-grown food is increasing and the U.S. Department of Agriculture (USDA) seeks to develop and strengthen regional and local food systems. This article examines wholesale facility (hub) locations in food supply chain systems on a national scale to facilitate the efficient transfer of food from production regions to consumption locations. It designs an optimal national wholesale or hub location network to serve food consumption markets through efficient connections with production sites. The mathematical formulation is a mixed integer linear programming (MILP) problem that minimizes total network costs which include costs of transporting goods and locating facilities. A scenario study is used to examine the model's sensitivity to parameter changes, including travel distance, hub capacity, transportation cost, etc. An application is made to the U.S. fruit and vegetable industry. We demonstrate how parameter changes affect the optimal locations and number of wholesale facilities.
Optimal wholesale facilities location within the fruit and vegetables supply chain with bimodal transportation options: An LP-MIP heuristic approach
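The hub location model sketched in the abstract above is, at its core, a fixed-charge facility location MILP. The following sketch shows how such a location-allocation core can be written with the open-source PuLP library; the hub names, costs, demands and capacities are toy assumptions for illustration and the bimodal-transport features of the paper are not modeled.

```python
# Minimal fixed-charge facility (hub) location sketch in PuLP.
# Toy data and names are illustrative assumptions, not the paper's model.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

hubs = ["H1", "H2", "H3"]                 # candidate hub sites
markets = ["M1", "M2", "M3", "M4"]        # consumption locations
open_cost = {"H1": 100, "H2": 120, "H3": 90}           # fixed cost of opening a hub
ship_cost = {                                          # unit cost hub -> market
    ("H1", "M1"): 4, ("H1", "M2"): 6, ("H1", "M3"): 9, ("H1", "M4"): 5,
    ("H2", "M1"): 7, ("H2", "M2"): 3, ("H2", "M3"): 4, ("H2", "M4"): 8,
    ("H3", "M1"): 6, ("H3", "M2"): 8, ("H3", "M3"): 5, ("H3", "M4"): 3,
}
demand = {"M1": 10, "M2": 15, "M3": 8, "M4": 12}
capacity = {"H1": 30, "H2": 25, "H3": 20}

m = LpProblem("hub_location", LpMinimize)
y = {h: LpVariable(f"open_{h}", cat=LpBinary) for h in hubs}
x = {(h, k): LpVariable(f"flow_{h}_{k}", lowBound=0) for h in hubs for k in markets}

# total cost = fixed opening costs + transportation costs
m += lpSum(open_cost[h] * y[h] for h in hubs) + \
     lpSum(ship_cost[h, k] * x[h, k] for h in hubs for k in markets)

for k in markets:                                  # every market's demand is met
    m += lpSum(x[h, k] for h in hubs) == demand[k]
for h in hubs:                                     # flow only through open hubs
    m += lpSum(x[h, k] for k in markets) <= capacity[h] * y[h]

m.solve()
print([h for h in hubs if value(y[h]) > 0.5], value(m.objective))
```

Scenario analyses such as those reported in the abstract amount to re-solving this kind of model after perturbing distances, capacities or cost parameters.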
S0377221715000661
Discrete event simulation (DES) studies in healthcare are thought to benefit from stakeholder participation during the study lifecycle. This paper reports on a multi-methodology framework, called PartiSim, that is intended to support participative simulation studies. PartiSim combines DES, a traditionally hard OR approach, with soft systems methodology (SSM) in order to incorporate stakeholder involvement in the study lifecycle. The framework consists of a number of prescribed activities and outputs as part of the stages involved in the simulation lifecycle, which include study initiation, finding out about the problem, defining a conceptual model, model coding, experimentation and implementation. In PartiSim four of these stages involve facilitated workshops with a group of stakeholders. We explain the organisation of workshops, the key roles assigned to analysts and stakeholders, and how facilitation is embedded in the framework. We discuss our experience of using the framework, provide guidance on when to use it and conclude with future research directions.
PartiSim: A multi-methodology framework to support facilitated simulation modelling in healthcare
S0377221715000673
In order to stimulate or subdue the economy, banking regulators have sought to impose caps or floors on an individual bank's lending to certain types of borrowers. This paper shows that the resultant decision problem for a bank of which potential borrowers to accept is a variant of the marriage/secretary problem in which one can accept several applicants. The paper solves the decision problem using dynamic programming. We give results on the form of the optimal lending policy and counterexamples to further “reasonable” conjectures which do not hold in the general case. By solving numerical examples we show the potential loss of profit and the inconsistency in the lending decisions that are caused by introducing floors and caps on lending. The paper also describes some other situations where the same decision problem occurs.
Lending decisions with limits on capital available: The polygamous marriage problem
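As a rough illustration of the sequential accept/reject structure behind the abstract above (not the paper's exact model), here is a toy finite-horizon dynamic program: in each period one applicant with a random profitability arrives, the bank may fund at most C applicants in total, and it maximizes expected total profit. The profit distribution, horizon and cap are hypothetical.

```python
# Toy dynamic program for sequentially accepting applicants under a lending cap.
# Illustrative assumptions only: discrete i.i.d. profit values, known horizon.
from functools import lru_cache

profits = [(-1.0, 0.25), (0.5, 0.50), (2.0, 0.25)]  # (profit, probability) of an applicant
T = 10   # number of applicants that will arrive
C = 3    # at most C loans can be granted (the cap on capital available)

@lru_cache(maxsize=None)
def value(t, c):
    """Expected future profit before seeing applicant t, with c acceptances left."""
    if t == T or c == 0:
        return 0.0
    ev = 0.0
    for p, prob in profits:
        accept = p + value(t + 1, c - 1)
        reject = value(t + 1, c)
        ev += prob * max(accept, reject)
    return ev

def accept(t, c, p):
    """Accept the current applicant iff doing so is at least as good as waiting."""
    return c > 0 and p + value(t + 1, c - 1) >= value(t + 1, c)

print(round(value(0, C), 3), accept(0, C, 0.5))
```

Comparing the optimal value with and without the cap C gives a simple numerical handle on the profit loss caused by lending limits, in the spirit of the numerical examples mentioned in the abstract.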
S0377221715000685
In this paper, we develop a set of mathematical models to examine and compare different pricing and launch strategies for electronic books (e-books) under two types of copyright arrangements, namely the royalty and buyout arrangements. We conduct a sensitivity analysis to assess how various market structure parameters influence the publisher's pricing options under different copyright arrangements, launch modes, and channels of distribution. Aimed at gaining managerial insights into the complex issues in pricing and launch strategies involving e-books, we recommend optimal launch strategies and pricing decisions for the e-book supply chain.
“Do the electronic books reinforce the dynamics of book supply chain market?”–A theoretical analysis
S0377221715000697
In this paper, we propose the Electric Vehicle Routing Problem with Time Windows and Mixed Fleet (E-VRPTWMF) to optimize the routing of a mixed fleet of electric commercial vehicles (ECVs) and conventional internal combustion commercial vehicles (ICCVs). Contrary to existing routing models for ECVs, which assume energy consumption to be a linear function of traveled distance, we utilize a realistic energy consumption model that incorporates speed, gradient and cargo load distribution. This is highly relevant in the context of ECVs because energy consumption determines the maximal driving range of ECVs and the recharging times at stations. To address the problem, we develop an Adaptive Large Neighborhood Search algorithm that is enhanced by a local search for intensification. In numerical studies on newly designed E-VRPTWMF test instances, we investigate the effect of considering the actual load distribution on the structure and quality of the generated solutions. Moreover, we study the influence of different objective functions on solution attributes and on the contribution of ECVs to the overall routing costs. Finally, we demonstrate the performance of the developed algorithm on benchmark instances of the related problems VRPTW and E-VRPTW.
Routing a mixed fleet of electric and conventional vehicles
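The key modelling point in the abstract above is that an ECV's range depends on a load- and gradient-dependent energy consumption rather than on distance alone. The sketch below uses a generic longitudinal-dynamics power model, a common textbook form rather than the paper's calibrated model, to show how speed, road gradient and carried load enter the consumption of a single arc; all constants are illustrative assumptions.

```python
# Generic arc energy consumption for an electric vehicle (illustrative constants).
# This is a standard physics-style approximation, not the paper's calibrated model.
import math

def arc_energy_kwh(distance_km, speed_kmh, gradient_rad, curb_mass_kg, cargo_kg,
                   cd=0.7, frontal_area_m2=7.0, rolling_coeff=0.01,
                   air_density=1.2, drivetrain_eff=0.85):
    m = curb_mass_kg + cargo_kg
    v = speed_kmh / 3.6                      # m/s
    g = 9.81
    # Tractive power: aerodynamic drag + rolling resistance + grade resistance
    p_aero = 0.5 * air_density * cd * frontal_area_m2 * v ** 3
    p_roll = rolling_coeff * m * g * math.cos(gradient_rad) * v
    p_grade = m * g * math.sin(gradient_rad) * v
    p_wheel = max(p_aero + p_roll + p_grade, 0.0)   # ignore recuperation for simplicity
    hours = distance_km / speed_kmh
    return (p_wheel / drivetrain_eff) * hours / 1000.0   # kWh

# Same arc, empty vs. loaded on a mild climb: the load changes consumption (and thus range).
print(arc_energy_kwh(10, 60, 0.0, 6000, 0))
print(arc_energy_kwh(10, 60, 0.02, 6000, 3000))
```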
S0377221715000703
This paper analyses farming efficiency by means of partial frontiers and Multi-Directional Efficiency Analysis (MEA). In particular, we apply the idea of the conditional efficiency framework to the MEA approach to ensure that observations are compared to their homogeneous counterparts. Moreover, this paper shows that combining the traditional one-directional and multi-directional efficiency frameworks yields valuable insights. It allows one to identify which factors matter in terms of output production and input consumption. The application deals with Lithuanian family farms, for which we have a rich dataset. The results indicate that output efficiency correlates positively with a time trend and negatively with the subsidy share in total output. The MEA-based analysis further suggests that the time trend has positively affected productive efficiency due to an increase in labour use efficiency. Meanwhile, the increasing subsidy rate has a negative influence upon the MEA efficiencies associated with all the inputs.
One- and multi-directional conditional efficiency measurement – Efficiency in Lithuanian family farms
S0377221715000715
This paper investigates constraint handling techniques (CHTs) in algorithms for the constrained multi-criteria optimization problem (CMOP). The CHT is an important research topic in constrained multi-criteria optimization (MO). In this paper, two simple and practicable CHTs are proposed: one is a nonequivalent relaxation approach that is well suited to the constrained multi-criteria discrete optimization problem (MDOP), and the other is an equivalent relaxation approach for the general CMOP. By using these CHTs, a CMOP (i.e., the primal problem) can be transformed into an unconstrained multi-criteria optimization problem (MOP) (i.e., the relaxation problem). Based on the first CHT, it is theoretically proven that the efficient set of the primal CMOP is a subset of the strictly efficient set Ē of the relaxation problem and can be extracted from Ē by simply checking the dominance relations between the solutions in Ē. Following these theoretical results, a three-phase based idea is given to effectively utilize existing algorithms for the unconstrained MDOP to solve the constrained MDOP. In the second CHT, the primal CMOP is equivalently transformed into an unconstrained MOP by a special relaxation approach. Based on this CHT, it is proven that the primal problem and its relaxation problem have the same efficient set and, therefore, general CMOPs can be solved by utilizing any of the existing algorithms for unconstrained MOPs. The implementation of the second CHT, a two-phase based idea, is illustrated by embedding it in a known MOEA. Finally, the two-phase based idea is applied to some early MOEAs and its performance is comprehensively tested on CMOP benchmarks.
Generic constraints handling techniques in constrained multi-criteria optimization and its application
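The extraction step described for the first CHT above relies on pairwise dominance checks among candidate solutions. The helper below is a generic Pareto-dominance filter (minimization assumed); it only illustrates that building block and does not reproduce the paper's specific relaxation approaches.

```python
# Generic Pareto-dominance filter (minimization), illustrating the extraction step.
def dominates(a, b):
    """True if objective vector a dominates b (all components <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
print(nondominated(candidates))   # (3.0, 3.0) is dominated by (2.0, 3.0) and is removed
```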
S0377221715000727
This paper studies the inventory system of a retailer who orders his products from two supply sources, a local one that is responsive and reliable, but expensive, and a global one that is low-cost but less reliable. The deliveries from the global source only partially satisfy the quality requirements. We model this situation with a dual-sourcing inventory model with positive lead times and random yield. We propose a dual-index order-up-to policy (DOP) based on approximating the inventory model with an unreliable supplier by a sequence of dual-sourcing models with reliable suppliers and suitably modified demand distributions. Numerical results show that the performance of this heuristic is close to that of the optimal DOP. Moreover, we extend the heuristic to models with advance yield information and study its impact on the total inventory costs.
An approximate policy for a dual-sourcing inventory model with positive lead times and binomial yield
S0377221715000739
Combinatorial auctions allow allocation of bundles of items to the bidders who value them the most. The NP-hardness of the winner determination problem (WDP) poses serious computational challenges when designing efficient solution algorithms. This paper analytically studies the Lagrangian relaxation of the WDP and expounds a novel technique for efficiently solving the relaxation problem. Moreover, we introduce a heuristic algorithm that adjusts any infeasibilities in the Lagrangian optimal solution to reach an optimal or near-optimal solution. Extensive numerical experiments illustrate the class of problems on which this technique provides near-optimal solutions in much less time, as little as one-thousandth of the time required by the CPLEX solver.
A Lagrangian approach to the winner determination problem in iterative combinatorial reverse auctions
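To make the Lagrangian idea from the abstract above concrete, the sketch below relaxes the item-availability constraints of a toy (forward) winner determination problem with non-negative multipliers and applies a plain subgradient update, followed by a greedy repair of infeasibilities. The bids, step-size rule and repair rule are illustrative assumptions, not the authors' specific technique.

```python
# Lagrangian relaxation of a toy winner determination problem with subgradient updates.
# Bids, step-size rule and greedy repair are illustrative assumptions.
bids = [({"a", "b"}, 10.0), ({"b", "c"}, 8.0), ({"c"}, 5.0), ({"a", "c"}, 9.0)]
items = {"a", "b", "c"}
lam = {i: 0.0 for i in items}          # multipliers for "each item sold at most once"

for it in range(1, 101):
    # The relaxed problem separates by bid: take a bid iff its reduced value is positive.
    x = [1 if v - sum(lam[i] for i in s) > 0 else 0 for s, v in bids]
    upper = sum((v - sum(lam[i] for i in s)) * xb for (s, v), xb in zip(bids, x)) \
            + sum(lam.values())        # Lagrangian dual value (an upper bound)
    # Subgradient of the dual: 1 - (number of accepted bids containing the item).
    g = {i: 1 - sum(xb for (s, _), xb in zip(bids, x) if i in s) for i in items}
    step = 1.0 / it
    lam = {i: max(0.0, lam[i] - step * g[i]) for i in items}

# Greedy repair: keep accepted bids in decreasing value order while their items stay free.
chosen, used = [], set()
for (s, v), xb in sorted(zip(bids, x), key=lambda t: -t[0][1]):
    if xb and not (s & used):
        chosen.append((s, v))
        used |= s
print(round(upper, 2), chosen)
```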
S0377221715000740
Drastic changes in consumer markets over the last decades have increased the pressure and challenges for the trade exhibition industry. Exhibiting organizations demand higher levels of justification for involvement and expect returns on trade show investments. This study proposes an RFID-enabled track and traceability framework to improve information visibility at the trade site. The identification information can potentially create detailed, accurate, and complete visibility of attendees’ movements and purchasing behaviors and consequently lead to considerable analytical benefits. Leveraging the wealth of information made available by RFID is challenging; thus, the objective of this study is to outline how to incorporate RFID data into existing enterprise data to deliver analytical solutions to the trade show and exhibition industry. The results show that the exhibitor can use RFID to gather visitor intelligence and the key findings of this study provide valuable feedback to business analysts to promote follow-up marketing strategies.
Integration of RFID and business analytics for trade show exhibitors
S0377221715000752
The problem of sharing a cost M among n individuals, identified by some characteristic c_i ∈ R_+, appears in many real situations. Two important proposals on how to share the cost are the egalitarian and the proportional solutions. In different situations a combination of both distributions provides an interesting approach to the cost sharing problem. In this paper we obtain a family of (compromise) solutions associated with the Perron eigenvectors of Levinger’s transformations of a characteristics matrix A. This family includes both the egalitarian and proportional solutions, as well as a set of suitable intermediate proposals, which we analyze in some specific contexts, such as claims problems and inventory cost games.
Cost sharing solutions defined by non-negative eigenvectors
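A small numerical illustration of the eigenvector machinery behind the abstract above: assuming Levinger's transformation in its usual form L_t(A) = (1 - t)A + tAᵀ for t in [0, 1], the sketch computes the Perron eigenvector of L_t(A) by power iteration and scales it so that the shares add up to the cost M. How the characteristics matrix A is actually built from the c_i follows the paper; the rank-one placeholder used here is only an assumption, chosen so that the endpoints t = 0 and t = 1 happen to return equal and proportional shares.

```python
# Perron eigenvector of a Levinger-type transform, scaled to share a cost M.
# The form L_t(A) = (1 - t) A + t A^T and the placeholder matrix A are assumptions.
import numpy as np

def perron_vector(m, iters=1000):
    """Power iteration for the dominant (Perron) eigenvector of a nonnegative matrix."""
    v = np.ones(m.shape[0])
    for _ in range(iters):
        v = m @ v
        v /= np.linalg.norm(v)
    return v

c = np.array([2.0, 3.0, 5.0])          # individual characteristics c_i
A = np.outer(np.ones_like(c), c)       # placeholder characteristics matrix
M = 100.0                              # total cost to be shared

for t in (0.0, 0.5, 1.0):
    shares = perron_vector((1 - t) * A + t * A.T)
    shares = M * shares / shares.sum()
    print(t, np.round(shares, 2))      # t = 0 gives equal shares, t = 1 proportional ones
```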
S0377221715000764
This paper presents a discrete-time single-server finite-buffer queue with Markovian arrival process and generally distributed batch-size-dependent service time. Given that infinite service time is not commonly encountered in practical situations, we suppose that the distribution of the service time has a finite support. Recently, a similar continuous-time system with Poisson input process was discussed by Banerjee and Gupta (2012). Unfortunately, their method is hard to apply to the analysis of the discrete-time case with a versatile Markovian point process, because the difference equation governing the boundary state probabilities is more complex than its continuous counterpart. If we follow their ideas, we eventually find that some important joint queue length distributions cannot be computed and thus some key performance measures cannot be derived. In this paper, replacing the finite-support renewal distribution with an appropriate phase-type distribution, the joint state probabilities at various time epochs (arbitrary, pre-arrival and departure) are obtained by using the matrix analytic method and the embedded Markov chain technique. Furthermore, the UL-type RG-factorization is employed in the numerical computation of block-structured Markov chains with finitely many levels. Some numerical examples are presented to demonstrate the feasibility of the proposed algorithm for several service time distributions. Moreover, the impact of the correlation factor on loss probability and mean sojourn time is also investigated.
Algorithm for computing the queue length distribution at various time epochs in DMAP/G(1, a, b)/1/N queue with batch-size-dependent service time
S0377221715000776
This paper proposes and compares different techniques for maintenance optimization based on Genetic Algorithms (GAs), when the parameters of the maintenance model are affected by uncertainty and the fitness values are represented by Cumulative Distribution Functions (CDFs). The main issues addressed to tackle this problem are the development of a method to rank the uncertain fitness values, and the definition of a novel Pareto dominance concept. The GA-based methods are applied to a practical case study concerning the setting of a condition-based maintenance policy on the degrading nozzles of a gas turbine operated in an energy production plant.
Genetic algorithms for condition-based maintenance optimization under uncertainty
S0377221715000788
This paper argues that if OR is to prosper it needs to more closely reflect the needs of organisations and its practitioners. Past research has highlighted a gap between theoretical research developments, applications and the methods most frequently used in organisations. The scope of OR applications has also been contested with arguments as to the expanding boundaries of OR. But despite this, anecdotal evidence suggests that OR has become marginalised in many contexts. In order to understand these changes, IFORS (International Federation of OR Societies) in 2009 sponsored a survey of global OR practice. The aim was to provide current evidence on the usage of OR tools, areas of application, and the barriers to OR's uptake, as well as the educational background of OR practitioners. Results presented here show practitioners falling into three segments, which can be loosely characterised as those practicing ‘traditional’ OR, those adopting a range of softer techniques including Problem Structuring Methods (PSMs), and a Business Analytics cluster. When combined with other recent survey evidence, the use of PSMs and Business Analytics is apparently extending the scope of OR practice. In particular, the paper considers whether the Business Analytics movement, with an overlapping skill set to traditional OR but with a fast-growing organisational base, offers a route to diminishing the gap between academic research and practice. The paper concludes with an exploration of whether this represents not just an opportunity for OR but also a serious challenge to its established practices. “In theory, theory and practice are the same. In practice, they are not.” (multiple attributions, including Einstein and Yogi Berra). This paper is stimulated by a survey of global OR practice sponsored by IFORS. In placing the new results in the context of other practice surveys it has become clear that the gap between academic research and practice – the ‘natural drift’ for professions identified by Corbett and Van Wassenhove (1993) – still exists and that changes in the organisational environment make the need to address the gap more urgent. Further, although the scope of OR practice continues to extend and evolve, particularly arising from Problem Structuring Methods (PSMs), sometimes referred to as Soft OR, a new and distinct movement, ‘Business Analytics’, has recently emerged, capitalizing on the ‘Big Data revolution’. These developments present both a threat to and an opportunity for the OR community. This paper reviews these developments from an OR practice perspective and suggests how the OR community should respond in order to ensure long-term survival.
Reassessing the scope of OR practice: The Influences of Problem Structuring Methods and the Analytics Movement
S0377221715000806
In selecting the preferred course of action, decision makers are often uncertain about one or more probabilities of interest. The experimental literature has ascertained that this uncertainty (ambiguity) might affect decision makers’ preferences. Then, the decision maker might wish to incorporate ambiguity aversion in the analysis. We investigate the modeling of ambiguity attitudes in the solution of decision analysis problems through functionals well-established in the decision theory literature. We obtain the multiple-event problems for subjective expected utility, smooth ambiguity and maximin decision makers. This allows us to establish the conditions under which these alternative decision makers face equivalent problems. Results for certainty equivalents and risk premia in the presence of both risk and ambiguity aversion are obtained. A recent generalization of the classical Arrow–Pratt quadratic approximation allows us to quantify the portions of a premium due to risk-aversion and to ambiguity-aversion. The numerical implementation of the objective functions is addressed, showing that all functionals can be estimated at no additional burden through Monte Carlo simulation. The well-known Carter Racing case study is addressed quantitatively to demonstrate the findings. “Management decisions occur in a behavioral context in which the nature and causes of problems are ambiguous, and critical pieces of information are unavailable or suppressed” (Brittain and Sitkin, 1989, p. 62).
Decision analysis under ambiguity
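The abstract above notes that the ambiguity functionals can be estimated by Monte Carlo simulation at no additional burden. A minimal nested-simulation sketch of the smooth-ambiguity (Klibanoff–Marinacci–Mukerji style) evaluation E_mu[ phi( E_pi[ u(W) ] ) ] is given below; the utility u, the ambiguity function phi, the second-order distribution over scenarios and all parameters are illustrative assumptions, not those of the Carter Racing case.

```python
# Nested Monte Carlo estimate of a smooth-ambiguity (KMM-style) evaluation.
# Utility u, ambiguity function phi and the scenario structure are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def u(w, ra=1.0):            # CARA utility (risk aversion ra)
    return -np.exp(-ra * w) / ra

def phi(x, amb=2.0):         # concave ambiguity attitude applied to expected utility
    return -np.exp(-amb * x) / amb

M, N = 2000, 2000            # outer (ambiguity) and inner (risk) sample sizes
mu_of_p = rng.uniform(0.3, 0.7, size=M)     # uncertain success probability p ~ mu

inner = np.empty(M)
for m, p in enumerate(mu_of_p):
    wealth = np.where(rng.random(N) < p, 1.5, 0.5)   # binary payoff under scenario p
    inner[m] = u(wealth).mean()                      # E_pi[u(W)] given p

smooth_value = phi(inner).mean()                     # E_mu[ phi(E_pi[u(W)]) ]
neutral_value = phi(inner.mean())                    # ambiguity-neutral benchmark
print(round(smooth_value, 5), round(neutral_value, 5))
```

The gap between the two printed values is the Jensen gap induced by the concave phi, i.e. the part of the evaluation attributable to ambiguity aversion rather than risk aversion.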
S0377221715000818
Data Envelopment Analysis (DEA) is a methodology for evaluating the relative efficiencies of a set of decision-making units (DMUs). The original model is based on the assumption that in a multiple input, multiple output setting, all inputs impact all outputs. In many situations, however, this assumption may not apply, such as would be the case in manufacturing environments where some products may require painting while others would not. In earlier work by the authors, the conventional DEA methodology was extended to allow for efficiency measurement in such situations where partial input-to-output interactions exist. In that methodology all DMUs have identical input/output profiles. It is often the case, however, that these profiles can be different for some DMUs than is true of others. This phenomenon can be prevalent, for example, in manufacturing settings where some plants may use robots for spot welding while other plants may use human resources for that task. Consider a highway maintenance application where consulting services for safety corrections may not be employed on low traffic roadways, but are commonly used on high traffic, multilane highways. Thus, input to output links in the case of some DMUs are missing in the case of others. To address this, the current paper extends the methodology presented earlier by the authors to allow for efficiency measurement in situations where some DMUs have different input/output profiles than is true of others. The new methodology is then applied to the problem of evaluating the efficiencies of a set of road maintenance patrols.
Partial input to output impacts in DEA: The case of DMU-specific impacts
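For readers less familiar with DEA, the envelopment form of the standard input-oriented, constant-returns CCR model that the work above builds on can be solved DMU by DMU as a small linear program. The sketch below uses toy data and implements only the classical model, not the partial input-to-output or DMU-specific extensions discussed in the abstract.

```python
# Standard input-oriented CCR DEA model (envelopment form) solved per DMU with SciPy.
# Toy data; this is the classical model, not the paper's partial-impact extension.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])   # inputs (DMU x input)
Y = np.array([[10.0], [12.0], [11.0], [14.0]])                   # outputs (DMU x output)
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    # Decision vector z = [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    print(f"DMU {o + 1}: efficiency = {res.x[0]:.3f}")
```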
S0377221715000831
This paper investigates the powerplay in one-day cricket. The rules concerning the powerplay have been tinkered with over the years, and therefore the primary motivation of the paper is the assessment of the impact of the powerplay with respect to scoring. The form of the analysis takes a “what if” approach where powerplay outcomes are substituted with what might have happened had there been no powerplay. This leads to a paired comparisons setting consisting of actual matches and hypothetical parallel matches where outcomes are imputed during the powerplay period. Some of our findings include (a) the various forms of the powerplay which have been adopted over the years have different effects, (b) recent versions of the powerplay provide an advantage to the batting side, (c) more wickets also occur during the powerplay than had there been no powerplay and (d) there is some effect in run production due to the over where the powerplay is initiated. We also investigate individual batsmen and bowlers and their performances during the powerplay.
A study of the powerplay in one-day cricket
S0377221715000843
University rankings are the subject of a paradox: the more they are criticized by social scientists and experts on methodological grounds, the more attention they receive in policy making and the media. In this paper we attempt to contribute to the birth of a new generation of rankings, one that might improve on the current state of the art, by integrating new kinds of information and using new ranking techniques. Our approach tries to overcome four main criticisms of university rankings, namely: monodimensionality; statistical robustness; dependence on university size and subject mix; and lack of consideration of the input–output structure. We provide an illustration on European universities and conclude by pointing out the importance of investing in data integration and open data at the European level, both for research and for policy making.
Rankings and university performance: A conditional multidimensional approach
S0377221715000855
We consider a decentralized supply chain comprised of one manufacturer and one retailer, where the manufacturer has random yield and the retailer faces uncertain demand. To guarantee product availability, the retailer requires a service level for the product supply from the manufacturer. However, we find that a high service level benefits the retailer whereas it causes a profit loss for the manufacturer. Therefore, to promote high-service-level cooperation, the retailer has to provide incentives, such as bonuses, for the manufacturer. We consider two common bonus contracts: a unit bonus and a flat (or lump-sum) bonus. The primary question we address is whether the service-level based bonus contracts can achieve a Pareto improvement for the two firms in both service level and profits, which is a prerequisite for the retailer to implement them with the manufacturer. The results show that both bonus contracts can achieve a Pareto improvement. While the unit bonus contract is simpler for the retailer to implement, the retailer can achieve a higher service level and higher profits under the flat bonus contract.
Incentives to improve the service level in a random yield supply chain: The role of bonus contracts
S0377221715001046
We consider an OEM who is responsible for the availability of her systems in the field through performance-based contracts. She detects that a critical reparable component in her systems has a poor reliability performance. She decides to improve its reliability by a redesign of that component and an upgrade of the systems by replacing the old components with the improved ones. We introduce a model for studying the following two upgrading policies that she may implement after the redesign: (1) Upgrade all systems preventively just after the redesign (at time 0), (2) Upgrade systems one-by-one correctively; i.e., only when an old component fails. Under Policy 2, the OEM decides on an initial supply quantity of the improved components. Once this initial supply is depleted, she can procure improved components in fixed-sized batches with a higher unit price. Per policy, we derive total cost functions, which include procurement/replenishment costs of the new components, upgrading costs, repair costs of the new components, inventory holding costs and downtime costs. We perform exact analysis and provide an efficient optimization algorithm for Policy 2. Through a numerical study, we derive insights on which of the two policies is the best one and we show how this depends on the lifetime of the systems, the reliability of the old components, the improvement level in the reliability, the increase in the unit price, downtime costs, the size of installed base, and the batch size.
On the upgrading policy after the redesign of a component for reliability improvement
S0377221715001058
This paper investigates minimization of both the makespan and delivery costs in on-line supply chain scheduling for single-machine and parallel-machine configurations in a transportation system with a single customer. The jobs are released as they arrive, which implies that no information on upcoming jobs, such as the release time, processing time, and quantity, is known beforehand to the scheduler. The jobs are processed on one machine or parallel machines and delivered to the customer. The primary objective of the scheduling is time, which is makespan in this case. The delivery cost, which varies with the number of delivery batches (the cost of each batch is assumed to be the same), is also considered. The goal of scheduling is thus to minimize both the makespan and the total delivery cost. This scheduling involves deciding when to process jobs, which machine to process each job on, when to deliver jobs, and which batch to include each job in. We define 10 problems in terms of (1) the machine configuration, (2) preemption of job processing, (3) the number of vehicles, and (4) the capacity of the vehicles. These problems (P1, P2, …, P10) have never been studied before in the literature. The lower bound for each problem is first proved by constructing a series of intractable instances. Algorithms for these problems, denoted by H1, H2, …, H10, respectively, are then designed and a theoretical analysis is performed. The results show that H1, H2, H6, and H7 are optimal according to the competitive ratio criterion, while the other algorithms deviate slightly from the optimum. We also design the optimal algorithm for a special case of P5. A case study is provided to illustrate the performance of H5 and to demonstrate the practicality of the algorithms.
On-line supply chain scheduling for single-machine and parallel-machine configurations with a single customer: Minimizing the makespan and delivery cost
S0377221715001071
Evaluating an industrial opportunity often means engaging in financial modeling, which results in the estimation of a large amount of economic and accounting data, which are then gathered in an economically rational framework: the pro forma financial statements. While the standard net present value (NPV) condenses all the available pieces of information into a single metric, we make full use of the crucial information supplied in the pro forma financial statements and give a more detailed account of how economic value is created. In particular, we construct a general model, allowing for varying interest rates, which decomposes the project into an investment side and a financing side and quantifies the value created by either side; an equity/debt decomposition is also accomplished, which enables one to appreciate the role of debt in adding or subtracting value for equityholders. Further, the major role of accounting rates of return as value drivers is highlighted, and new relative measures of worth are introduced: the project ROA and the project WACC, which aggregate information deriving from the period rates of return. To achieve these results, we make use of the recently introduced Average-Internal-Rate-of-Return (AIRR) approach, which rests on capital-weighted arithmetic means and establishes a direct relation between holding period rates and the NPV.
Investment, financing and the role of ROA and WACC in value creation
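For orientation, the constant-rate version of the AIRR relation, as it is commonly stated in the AIRR literature (the paper itself works with varying interest rates), links the NPV to a capital-weighted arithmetic mean of the holding period rates. Here c_{t-1} denotes the capital invested in period t, i_t the period rate of return, r the cost of capital, and n the number of periods; the formulas below are a reminder of that standard form rather than the paper's generalized model.

```latex
% Constant-rate AIRR relation (as commonly stated; the paper allows varying rates).
\[
\text{AIRR} \;=\; \frac{\sum_{t=1}^{n} i_t\, c_{t-1}\,(1+r)^{-(t-1)}}
                       {\sum_{t=1}^{n} c_{t-1}\,(1+r)^{-(t-1)}},
\qquad
\text{NPV} \;=\; \frac{(\text{AIRR}-r)\,C}{1+r},
\qquad
C \;=\; \sum_{t=1}^{n} c_{t-1}\,(1+r)^{-(t-1)},
\]
```

so a project creates value exactly when its AIRR exceeds the cost of capital r, with the invested capital base C scaling the amount of value created.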
S0377221715001083
In this paper, we take a novel approach to address the dilemma of innovation sharing versus protection among supply chain partners. The paper conducts an exploratory study that introduces factors affecting a firm's optimum supply chain innovation strategy. We go beyond the conventional Prisoners’ Dilemma, with its limiting assumptions of players’ preferences and symmetry, to explore a larger pool of 2 × 2 games that may effectively model the problem. After classifying firm types according to collaboration motive and relative power, we use simulation to explore the effects of firm type, opponent type, and payoff structure on repeated innovation interactions (or, equivalently, long-term relations) and optimality of ‘niceness’. Surprisingly, we find that opponent type is essentially irrelevant in long-term innovation interactions, and focal firm type is only conditionally relevant. The paper contributes further by introducing reciprocation of strategy type (nice versus mean), showing that reciprocation is recommended, while identifying and explaining the exceptions to this conclusion.
Strategizing niceness in co-opetition: The case of knowledge exchange in supply chain innovation projects
S0377221715001095
As bunker fuel cost constitutes a major portion of a shipping liner's operating cost, it is imperative for liners to minimize the bunkering cost to remain competitive. A service contract with a fuel supplier is one strategy they use to reduce this cost. Typically, liner operators enter into a contract with fuel suppliers in which the contract is specified by a fixed fuel price and amount, to mitigate the fluctuating spot prices and uncertain fuel consumption between the ports. In this paper, we study such bunkering service contracts with known parameters and determine the liner's optimal bunkering strategy. We propose a bunker-up-to-level policy for refueling, where the up-to level is dynamic based on the observed spot price, and determine the bunkering decisions (where to bunker and how much to bunker) at the ports. A dynamic programming model is formulated to minimize the total bunkering cost. Due to the inherent complexity in determining the gradient of the cost-to-go function, we estimate it by Monte Carlo simulation. Numerical experiments suggest that all the contract parameters must be considered together in determining the optimal bunkering strategy. Contracting an amount smaller than the average consumption for the entire voyage, at a contract price lower than the average spot price, is found to be beneficial. The insights derived from this study can be helpful in designing these types of service contracts.
Bunkering decisions for a shipping liner in an uncertain environment with service contract
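Stripped of the contract terms and of price uncertainty (both central to the paper above but omitted here for brevity), the where-and-how-much-to-bunker decision is a small dynamic program over ports and fuel levels. The deterministic sketch below illustrates that backbone with hypothetical prices, consumptions and tank data.

```python
# Deterministic toy DP for where/how much to bunker along a fixed port rotation.
# Contract terms and spot-price uncertainty from the paper are deliberately omitted.
price = [600, 550, 700, 580]        # spot price per unit of fuel at each port
burn = [40, 35, 50, 30]             # fuel burned on the leg leaving each port
CAP, START = 120, 50                # tank capacity and initial fuel on board
P = len(price)

INF = float("inf")
V = [[INF] * (CAP + 1) for _ in range(P + 1)]
V[P] = [0.0] * (CAP + 1)            # no cost after the last leg

for p in range(P - 1, -1, -1):
    for f in range(CAP + 1):
        for q in range(CAP - f + 1):             # bunker q units at port p
            after = f + q - burn[p]
            if after >= 0:                       # must cover the next leg
                cost = price[p] * q + V[p + 1][after]
                V[p][f] = min(V[p][f], cost)

print("minimum bunkering cost:", V[0][START])
```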
S0377221715001101
A decision support tool is proposed for optimising the expansion planning of a semi-liberalised electricity market, whilst the underlying long-run interaction of the generating mix with electricity prices is investigated. A nonlinear stochastic programming algorithm is used for handling multiple uncertainties, optimising the power sector characteristics in terms of both financial and environmental performance. Two endogenous models and an exogenous one are analysed and compared. The endogenous model results indicate that consumers might benefit from moderate electricity prices in case the optimal loads and capacity orders are rendered. The exogenous model is insensitive to generating mix variations. The long-term actions suggested for system operators comprise the permits issued for new entries. These are affected by the evolution of electricity prices, since the permits granted for conventional technologies are maintained when their profits are rising. The permits granted for renewable technologies are also maintained, thus allowing cleaner electricity production to be brought onto the grid. The optimal bid strategies of generators interact with their dispatching schedule and the diversification of their load curves. The relevant bids are primarily driven by the merit order in which the plants are dispatched. The assigned load levels may be raised for profitable producers so that their profit is maximised; they might be restrained instead when there are no significant prospects for individual profits. The lognormal distribution of electricity price results is characterised by increasing variance over time, indicating that the model is more robust in the most imminent solutions.
The effect of long-term expansion on the evolution of electricity price: numerical analysis of a theoretically optimised electricity market
S0377221715001113
A network needs to be constructed by a server (construction crew) that has a constant construction speed which is incomparably slower than the server’s travel speed within the already constructed part of the network. A vertex is recovered when it becomes connected to the depot by an already constructed path. Due dates for recovery times are associated with vertices. The problem is to obtain a construction schedule that minimizes the maximum lateness of vertices, or the number of tardy vertices. We introduce these new problems, discuss their computational complexity, and present mixed-integer linear programming formulations, heuristics, a branch-and-bound algorithm, and results of computational experiments.
Network construction problems with due dates
S0377221715001125
Operations management aims to match the supply with demand of material flows, whereas corporate finance seeks to match the supply with demand of monetary flows. These two supply–demand matching processes are connected by real investment and revenue management in a “closed-loop” of resources. We propose a risk management framework for multidimensional integration of operations–finance interface models. Ten aspects are examined to specify conditions under which firms should integrate operations and finance. We present categorizations of operational hedging and financial flexibility. By linking relationship analysis (complements or substitutes) and approach choice (centralization or decentralization) of integrated risk management, we find that: (i) Zero interaction effects between operations and finance lead to decentralization. (ii) Operations and finance should be centralized even if they are partial substitutes.
Operations–finance interface models: A literature review and framework
S0377221715001137
Managerial flexibility can have a significant impact on the value of new product development projects. We investigate how the market environment in which a firm operates influences the value and use of development flexibility. We characterize the market environment according to two dimensions, namely (i) its intensity, and (ii) its degree of innovation. We show that these two market characteristics can have a different effect on the value of flexibility. In particular, we show that more intense or innovative environments may increase or decrease the value of flexibility. For instance, we demonstrate that the option to defer a product launch is typically most valuable when there is little competition. We find, however, that under certain conditions defer options may be highly valuable in more competitive environments. We also consider the value associated with the flexibility to switch development strategies, from a focus on incremental innovations to more risky ground-breaking products. We find that such a switching option is most valuable when the market is characterized by incremental innovations and by relatively intense competition. Our insights can help firms understand how managerial flexibility should be explored, and how it might depend on the nature of the environment in which they operate.
New product development flexibility in a competitive environment
S0377221715001186
We introduce a new family of coalitional values designed to take into account players’ attitudes with regard to cooperation. This new family of values applies to cooperative games with a coalition structure by combining the Shapley value and the multinomial probabilistic values, thus generalizing the symmetric coalitional binomial semivalues. Besides an axiomatic characterization, a computational procedure is provided in terms of the multilinear extension of the game and an application to the Catalonia Parliament, Legislature 2003–2007, is shown.
Coalitional multinomial probabilistic values
S0377221715001198
In this article we investigate the job Sequencing and tool Switching Problem (SSP), an NP-hard combinatorial optimization problem arising from computer and manufacturing systems. Starting from the results described in Tang and Denardo (1987), Crama et al. (1994) and Laporte et al. (2004), we develop new integer linear programming formulations for the problem that are provably better than the alternative ones currently described in the literature. Computational experiments show that the lower bounds obtained by the linear relaxation of the considered formulations improve, on average, upon those currently described in the literature and suggest, at the same time, new directions for the development of future exact solution approaches.
Improved integer linear programming formulations for the job Sequencing and tool Switching Problem
S0377221715001204
We investigate the Minimum Evolution Problem (MEP), an NP-hard network design problem arising from computational biology. The MEP consists in finding a weighted unrooted binary tree having n leaves, minimal length, and such that the sum of the edge weights belonging to the unique path between each pair of leaves is greater than or equal to a prescribed value. We study the polyhedral combinatorics of the MEP and investigate its relationships with the Balanced Minimum Evolution Problem. We develop an exact solution approach for the MEP based on a nontrivial combination of a parallel branch-price-and-cut scheme and a non-isomorphic enumeration of all possible solutions to the problem. Computational experiments show that the new solution approach outperforms the best mixed integer linear programming formulation for the MEP currently described in the literature. Our results give a perspective on the combinatorics of the MEP and suggest new directions for the development of future exact solution approaches that may turn out useful in practical applications. We also show that the MEP is statistically consistent.
A branch-price-and-cut algorithm for the minimum evolution problem
S0377221715001216
We develop a model for a one-time special purchasing opportunity where there is uncertainty with respect to the materialization of the discounted purchasing offer. Our model captures the phenomenon of an anticipated future event that may or may not lead to a discounted offer. We analyze the model and show that the optimal solution results from a tradeoff between preparing for the special offer and staying with the regular ordering policy. We quantify the tradeoff and find that the optimal solution is one of four intuitive policies. We present numerical illustrations that provide additional insights on the relationship between the different ordering policies.
Optimal ordering for a probabilistic one-time discount
S0377221715001228
We consider the economic lot-sizing problem with perishable items (ELS-PI), where each item has a deterministic expiration date. Although all items in stock are equivalent regardless of procurement or expiration date, we allow for an allocation mechanism that defines an order in which the items are allocated to the consumers. In particular, we consider the following allocation mechanisms: First Expiration, First Out (FEFO), Last Expiration, First Out (LEFO), First In, First Out (FIFO) and Last In, First Out (LIFO). We show that the ELS-PI can be solved in polynomial time under all four allocation mechanisms in case of no procurement capacities. This result still holds in case of time-invariant procurement capacities under the FIFO and LEFO allocation mechanisms, but the problem becomes NP-hard under the FEFO and LIFO allocation mechanisms.
The economic lot-sizing problem with perishable items and consumption order preference
S0377221715001241
Minmax regret optimization aims at finding robust solutions that perform best in the worst case, compared to the respective optimum objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-complete minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only few such guarantees exist. A popular heuristic for combinatorial optimization problems is to compute the midpoint solution of the original problem. It is a well-known result that the regret of the midpoint solution is at most 2 times the optimal regret. Besides some academic instances showing that this bound is tight, most instances reveal a much better approximation ratio. We introduce a new lower bound for the optimal value of the minmax regret problem. Using this lower bound we state an algorithm that gives an instance-dependent performance guarantee for the midpoint solution that is at most 2. The computational complexity of the algorithm depends on the minmax regret problem under consideration; we show that our sharpened guarantee can be computed in strongly polynomial time for several classes of combinatorial optimization problems. To illustrate the quality of the proposed bound, we use it within a branch and bound framework for the robust shortest path problem. In an experimental study comparing this approach with a bound from the literature, we find a considerable improvement in computation times.
A new bound for the midpoint solution in minmax regret optimization with an application to the robust shortest path problem
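The midpoint heuristic and the exact max-regret evaluation it is compared against are easy to state for the robust shortest path problem with interval costs: solve a shortest path under midpoint costs, then evaluate its regret in the classical worst-case scenario (arcs on the path at their upper bounds, all other arcs at their lower bounds). The sketch below shows exactly that on a toy graph; it does not implement the paper's sharpened instance-dependent bound.

```python
# Midpoint heuristic and exact max-regret evaluation for interval-cost shortest paths.
# Toy instance; the paper's sharpened instance-dependent bound is not implemented here.
import heapq

def dijkstra(graph, cost, s, t):
    """Return (length, arc list) of a shortest s-t path for the given arc costs."""
    dist, prev, heap = {s: 0.0}, {}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            nd = d + cost[u, v]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], t
    while node != s:
        path.append((prev[node], node))
        node = prev[node]
    return dist[t], list(reversed(path))

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
interval = {("s", "a"): (1, 5), ("a", "t"): (1, 5), ("s", "b"): (2, 4), ("b", "t"): (2, 4)}

mid = {e: (lo + hi) / 2 for e, (lo, hi) in interval.items()}
_, mid_path = dijkstra(graph, mid, "s", "t")

# Worst-case scenario for a fixed path: its own arcs at upper, all others at lower bounds.
worst = {e: (hi if e in mid_path else lo) for e, (lo, hi) in interval.items()}
path_cost = sum(worst[e] for e in mid_path)
best_cost, _ = dijkstra(graph, worst, "s", "t")
print("midpoint path:", mid_path, "max regret:", path_cost - best_cost)
```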
S0377221715001253
The main aim of this paper is to measure the social welfare loss for a continuous moral hazard model when a set of minimal assumptions are fulfilled. By using a new approach, we are able to reproduce the results of Balmaceda, Balseiro, Correa, and Stier-Moses (2010) pertaining to the social welfare loss for discrete and continuous models respectively. Previous studies rely on the validity of the first-order approach at the expense of strong assumptions, in particular the convexity of the distribution function condition while we do not make such a restrictive assumption in our developments. In addition, we obtain new bounds for the social welfare loss that are both tight and easy to compute.
Quantifying the social welfare loss in moral hazard models
S0377221715001265
The complex multi-attribute large-group decision-making problems that are based on interval-valued intuitionistic fuzzy information have become a common topic of research in the field of decision-making. Due to the complexity of this kind of problem, alternatives are usually described by multiple attributes that exhibit a high degree of interdependence or interactivity. In addition, decision makers tend to come from different interest groups, which causes the assumption of independence among decision maker preferences within the same interest group to be violated. Because traditional aggregation operators are proposed based on the independence axiom, directly applying these operators to the information aggregation process in the complex multi-attribute large-group decision-making problem is not appropriate. Although these operators can obtain the overall evaluation value of each alternative, the results may be biased. Therefore, drawing on the conventional principal component analysis model, we propose the interval-valued intuitionistic fuzzy principal component analysis model. Based on this new model, we provide a decision-making method for the complex multi-attribute large-group decision-making problem. First, we treat the attributes and the decision makers as interval-valued intuitionistic fuzzy variables, and we transform these two types of variables into several independent variables using the proposed principal component analysis model. We then obtain each alternative's overall evaluation value by utilizing conventional information aggregation operators. Moreover, we obtain the optimal alternative(s) based on the ranks of the alternatives' overall evaluation values. An illustrative example is provided to demonstrate the proposed technique and evaluate its feasibility and validity.
An interval-valued intuitionistic fuzzy principal component analysis model-based method for complex multi-attribute large-group decision-making
S0377221715001277
Strong uncertainty is a key challenge for the application of scheduling algorithms in real-world production environments, since the schedule that is optimal at a given time often deteriorates or even becomes infeasible during its execution due to a large number of unexpected events. This paper studies the uncertain scheduling problem arising from the steelmaking-continuous casting (SCC) process and develops a soft-decision based two-layered approach (SDA) to cope with this challenge. In our approach, traditional scheduling decisions, i.e. the beginning time and assigned machine for each job at each stage, are replaced with soft scheduling decisions in order to provide more flexibility towards unexpected events. Furthermore, all unexpected events are classified into two categories in terms of their impact on scheduling: critical events and non-critical events. In the two-layered solution framework, the upper layer is the offline optimization layer for handling critical events, in which a particle swarm optimization algorithm is proposed for generating soft scheduling decisions; the lower layer is the online dispatching layer for handling non-critical events, in which a dispatching heuristic is designed to decide in real time which charge to process, and when, after a machine becomes available, guided by the soft schedule given by the upper layer. Computational experiments on randomly generated SCC scheduling instances and practical production data demonstrate that the proposed soft-decision based approach obtains significantly better solutions than other methods under strongly uncertain SCC production environments.
A soft-decision based two-layered scheduling approach for uncertain steelmaking-continuous casting process
S0377221715001289
This paper considers the loading optimization problem for a set of containers and pallets transported in a cargo aircraft that serves multiple airports. Because of pickup and delivery operations that occur at intermediate airports, this problem is simultaneously a Weight and Balance Problem and a Sequencing Problem. Our objective is to minimize fuel and handling operation costs. This problem is shown to be NP-hard. We resort to a mixed integer linear program. Based on real-world data from a professional partner (TNT Airways), we perform numerical experiments using a standard B&C library. This approach yields better solutions than traditional manual planning, which results in substantial cost savings.
The Airline Container Loading Problem with pickup and delivery
S0377221715001290
The Capacitated Vehicle Routing Problem is a much-studied (and strongly NP-hard) combinatorial optimization problem, for which many integer programming formulations have been proposed. We present two new multi-commodity flow (MCF) formulations, and show that they dominate all of the existing ones, in the sense that their continuous relaxations yield stronger lower bounds. Moreover, we show that the relaxations can be strengthened, in pseudo-polynomial time, in such a way that all of the so-called knapsack large multistar (KLM) inequalities are satisfied. The only other relaxation known to satisfy the KLM inequalities, based on set partitioning, is strongly NP-hard to solve. Computational results demonstrate that the new MCF relaxations are significantly stronger than the previously known ones.
Stronger multi-commodity flow formulations of the Capacitated Vehicle Routing Problem
S0377221715001307
Firms maintain a capital charge to manage the risk of low-frequency, high-impact operational disruptions. The loss distribution approach (LDA) measures the capital charge using two inputs: the frequency and severity of operational disruptions. In this study, we investigate whether or not capital charge could be combined with process improvement, an approach predominantly employed for managing high-frequency, low-impact operational disruptions. Using the categorization of events defined by the Basel Accord for different types of operational risk events, we verify three propositions. First, we test whether classification of operational disruptions is warranted to manage the risk. Second, we posit that classification of operational disruptions will display different statistical properties in manufacturing and in the financial services sector. Finally, we test whether risk of operational disruptions can be managed through a combination of process improvement and capital adequacy. We obtained data on 5442 operational disruptions and ran Monte Carlo simulations spanning both these sectors and seven event types. The results reveal that process improvement can be a first line of defense to manage certain types of operational risk events.
Managing operational disruptions through capital adequacy and process improvement
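The loss distribution approach mentioned above combines a frequency and a severity distribution; a common textbook instantiation (Poisson frequency, lognormal severity, capital read off as a high quantile of the annual aggregate loss) can be simulated in a few lines. The parameters below are illustrative and are not those estimated from the 5442 disruptions in the study.

```python
# Toy loss distribution approach (LDA): compound Poisson-lognormal annual losses.
# Parameters are illustrative; capital charge read as the 99.9 percent quantile.
import numpy as np

rng = np.random.default_rng(42)
years = 100_000
lam = 25.0                 # expected number of operational loss events per year
mu, sigma = 8.0, 2.0       # lognormal severity parameters (log scale)

counts = rng.poisson(lam, size=years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

expected_loss = annual_loss.mean()
var_999 = np.quantile(annual_loss, 0.999)      # regulatory-style high quantile
print(f"expected annual loss: {expected_loss:,.0f}")
print(f"99.9% quantile (capital charge proxy): {var_999:,.0f}")
```

Rerunning such a simulation with a reduced event frequency or a thinner severity tail is one simple way to quantify how much a process improvement could lower the required capital, in the spirit of the propositions tested in the paper.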
S0377221715001319
This paper investigates the modal split and trip scheduling decisions with consideration of the commuters’ uncertainty expectation in the morning commute problem. Two physically separated modes for transportation, the auto mode on highway and the transit mode on subway, are available for commuters to choose for traveling from home to workplace. The travel time uncertainty is assumed to only occur on highway. Every commuter is faced with the joint choice of transport mode and trip scheduling for minimizing her/his generalized travel cost. The study reveals that uncertainty expectation can significantly influence the travel decisions and lead to a distinctive flow pattern. We also examine the effects of transit headway and fare on modal split and equilibrium cost by numerical examples.
Modeling the modal split and trip scheduling with commuters’ uncertainty expectation
S0377221715001320
Day ahead electricity prices in European markets are determined by double-blind auctions; that is, both buyers and sellers may place anonymous orders with different prices and quantities. The market operator has to solve an optimization problem within an hour to clear the auction and determine the prices for the Day Ahead Market (DAM), which are used as a reference point for other electricity contracts. All electricity traded in the same time period is traded at the same price, called the market clearing price. The market operator has to end the algorithm with a feasible solution if the algorithm cannot find the optimal solution within the time limit. In this paper, we develop an optimization model to determine day ahead electricity prices, including all the practical considerations of the Turkish DAM. We present a mixed integer formulation and provide methods based on aggregation techniques and variable elimination to solve the problem within the limits of the practical requirements. Using real market data, we show that aggregation reduces the problem size by approximately 60 percent, whereas variable elimination provides another 30 percent reduction. We also propose an IP-based large neighborhood search to obtain an initial solution. Empirical evidence from the Turkish DAM data indicates that the heuristic delivers substantial solution quality and that the overall suggestions deliver remarkable improvements in solution time. This is the first paper to formulate the DAM problem in Turkey, to develop new approaches for solving it within the time limits of the market, and to use real data.
On the determination of European day ahead electricity prices: The Turkish case
S0377221715001332
This paper presents a model and solution methodology for scheduling patients in a multi-class, multi-resource surgical system. Specifically, given a master schedule that provides a cyclic breakdown of total OR availability into specific daily allocations to each surgical specialty, the model provides a scheduling policy for all surgeries that minimizes a combination of the lead time between patient request and surgery date, overtime in the operating room and congestion in the wards. To the best of our knowledge, this paper is the first to determine a surgical schedule based on making efficient use of both the operating rooms and the recovery beds. Such a problem can be formulated as a Markov Decision Process model, but the size of any realistic problem makes traditional solution methods intractable. We develop a version of the Least Squares Approximate Policy Iteration algorithm and test our model on data from a local hospital to demonstrate the success of the resulting policy.
A simulation based approximate dynamic programming approach to multi-class, multi-resource surgical scheduling
S0377221715001344
Multistage stochastic programs show time-inconsistency in general, if the objective is neither the expectation nor the maximum functional. This paper considers distortion risk measures (in particular the Average Value-at-Risk) at the final stage of a multistage stochastic program. Such problems are not time consistent. However, it is shown that by considering risk parameters at random level and by extending the state space appropriately, the value function corresponding to the optimal decisions evolves as a martingale and a dynamic programming principle is applicable. In this setup the risk profile has to be accepted to vary over time and to be adapted dynamically. Further, a verification theorem is provided, which characterizes optimal decisions by sub- and supermartingales. These enveloping martingales constitute a lower and an upper bound of the optimal value function. The basis of the analysis is a new decomposition theorem for the Average Value-at-Risk, which is given in a time consistent formulation.
Time-inconsistent multistage stochastic programs: Martingale bounds
S0377221715001368
There are many applications across a broad range of business problem domains in which equity is a concern and many well-known operational research (OR) problems such as knapsack, scheduling or assignment problems have been considered from an equity perspective. This shows that equity is both a technically interesting concept and a substantial practical concern. In this paper we review the operational research literature on inequity averse optimization. We focus on the cases where there is a tradeoff between efficiency and equity. We discuss two equity related concerns, namely equitability and balance. Equitability concerns are distinguished from balance concerns depending on whether an underlying anonymity assumption holds. From a modeling point of view, we classify three main approaches to handle equitability concerns: the first approach is based on a Rawlsian principle. The second approach uses an explicit inequality index in the mathematical model. The third approach uses equitable aggregation functions that can represent the DM’s preferences, which take into account both efficiency and equity concerns. We also discuss the two main approaches to handle balance: the first approach is based on imbalance indicators, which measure deviation from a reference balanced solution. The second approach is based on scaling the distributions such that balance concerns turn into equitability concerns in the resulting distributions and then one of the approaches to handle equitability concerns can be applied. We briefly describe these approaches and provide a discussion of their advantages and disadvantages. We discuss future research directions focussing on decision support and robustness.
Inequity averse optimization in operational research
S0377221715001575
In this paper, we deal with the bi-objective non-convex combined heat and power (CHP) planning problem. A medium and long term planning problem decomposes into thousands of single period (hourly) subproblems and dynamic constraints can usually be ignored in this context. The hourly subproblem can be formulated as a mixed integer linear programming (MILP) model. First, an efficient two phase approach for constructing the Pareto Frontier (PF) of the hourly subproblem is presented. Then a merging algorithm is developed to approximate the PF for the multi-period planning problem. Numerical results with real CHP plants demonstrate the effectiveness and efficiency of the solution approach using the Cplex based ɛ-constraint method as benchmark.
A two phase approach for the bi-objective non-convex combined heat and power production planning problem
S0377221715001587
This paper extends the results for a particular capacitated vehicle routing problem with pickups and deliveries (see Pandelis et al., 2013b) to the case in which the demands for a material delivered to N customers and the demands for a material collected from the customers are continuous random variables instead of discrete ones. The customers are served according to a predefined order. The optimal policy that serves all customers has a specific threshold-type structure, and it is computed by a suitable, efficient dynamic programming algorithm that operates over all policies having this structure. The structural result is illustrated by a numerical example.
A single vehicle routing problem with pickups and deliveries, continuous random demands and predefined customer order
S0377221715001599
Blending biofuels into fossil fuels allows for emission reductions in the transportation sector. However, biofuels are not yet competitive because of high production costs and investment requirements, and thus legal requirements such as blending quotas or emission thresholds must be issued if biofuels are to contribute to European CO2-reduction goals. The aim is thereby to establish Pareto-efficient long-term legal requirements. Against this background, we develop an optimization model that simultaneously minimizes life cycle greenhouse gas emissions and maximizes discounted net present value in order to analyze the overall system of biomass cultivation, (bio)fuel production and blending. The applied ε-constraint approach allows Pareto-efficient solutions to be calculated. The model is applied to the German (bio)diesel market. We show that current and past European Union directives are not Pareto-efficient and cause unintended side effects. As results, information about the trade-offs between the two objectives (minimizing life cycle greenhouse gas emissions and maximizing discounted net present value) as well as recommendations on the design of the regulation are derived.
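Schematically, the ε-constraint approach mentioned above converts the bi-objective problem into a family of single-objective problems; here NPV(·) and GHG(·) stand in for the discounted net present value and life cycle emission functions over a decision set X (notation assumed for illustration, not taken from the paper):

\[
\max_{x \in X}\ \mathrm{NPV}(x) \quad \text{subject to} \quad \mathrm{GHG}(x) \le \varepsilon ,
\]

where sweeping ε between the minimal attainable emissions and the emissions of the unconstrained NPV-maximizer traces out (weakly) Pareto-efficient solutions.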
Pareto-efficient legal regulation of the (bio)fuel market using a bi-objective optimization model
S0377221715001605
The British government is fully implementing its novel Electricity Market Reform (GB EMR). Its objective, in line with European directives, is to replace existing nuclear and coal plant with low-carbon systems while delivering reliable and affordable power. Although the GB EMR has proposed several policy instruments for meeting its objectives, and the academic literature has discussed the main issues, no known report includes a comprehensive and dynamic simulation exercise that assesses the extent of this profound and important initiative. This paper presents a system dynamics model that supports analysis of the long-term effects of the various policy instruments proposed in the GB EMR, focusing on environmental quality, security of supply and economic sustainability. Using lessons learned from simulation, the paper concludes that effectively achieving the GB EMR objectives requires this comprehensive intervention, or a similar one, that includes the promotion of low-carbon electricity generation through the simultaneous implementation of various direct and indirect incentives, such as a carbon price floor, a feed-in tariff (FIT) and a capacity mechanism.
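To give a flavour of the stock-and-flow structure from which system dynamics models such as the one described above are built, here is a deliberately tiny sketch; the investment rule, the role of the carbon price floor and every numerical value are illustrative assumptions, not the paper's GB EMR model.

```python
# Toy stock-and-flow sketch in the spirit of a system dynamics capacity model.
# All parameters and the investment rule are illustrative assumptions.
dt, years = 0.25, 30
capacity = 60.0        # GW of low-carbon capacity (the stock)
demand = 70.0          # GW peak demand, held constant for simplicity
carbon_floor = 20.0    # assumed carbon price floor, boosts the profitability signal
lifetime = 25.0        # years; retirement flow = capacity / lifetime

path, t = [], 0.0
while t < years:
    margin = (demand - capacity) + 0.1 * carbon_floor   # crude profitability signal
    investment = max(0.0, 0.5 * margin)                  # GW/year, responds to the signal
    retirement = capacity / lifetime                     # GW/year
    capacity += dt * (investment - retirement)           # Euler integration of the stock
    path.append((round(t, 2), round(capacity, 2)))
    t += dt
print(path[-1])   # capacity at the end of the horizon
```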
Simulating the new British Electricity-Market Reform
S0377221715001617
This paper introduces a novel two-phase supplier selection procedure. Unlike most supplier selection studies, in which decisions are based on supplier eligibility at the time of decision making, the proposed method is based on the long-term trends in the value, stability, and relationships of potential suppliers. In the first phase, suppliers are evaluated and assigned a comparable value based on a set of criteria. This value is then analyzed over the long run and fed into a multi-objective portfolio optimization model in the second phase. The model determines a supplier portfolio by maximizing the expected value and development of suppliers and minimizing their correlated risk. The novelty of this procedure lies in introducing a new view of the supplier selection problem. Numerical tests show that the proposed approach selects higher-value suppliers with a lower risk of failure compared to available methods.
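A minimal sketch of the portfolio idea in the second phase, cast as a single-objective mean–variance trade-off rather than the paper's multi-objective model; the supplier values, covariance matrix and risk-aversion weight below are invented for illustration.

```python
# Mean-variance style supplier portfolio sketch: maximize expected supplier value
# minus a penalty on correlated risk. All numbers are made up.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.8, 0.6, 0.7])              # expected long-run supplier value
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.05]])        # correlated risk between suppliers
risk_aversion = 3.0

def neg_objective(w):
    return -(mu @ w - risk_aversion * w @ cov @ w)

n = len(mu)
res = minimize(neg_objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(np.round(res.x, 3))   # share of purchases allocated to each supplier
```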
Selecting a supplier portfolio with value, development, and risk consideration
S0377221715001629
This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-à-vis pesticide use in order to measure the allocative efficiency of pesticide use relative to other productive inputs. We employ the data envelopment analysis (DEA) framework and marginal cost techniques to estimate technical efficiency and the shadow values of each input. A bootstrap technique is applied to overcome the limitations of DEA and to estimate the means and 95 percent confidence intervals of the estimated quantities. The methods are applied to a sample of vegetable producers in Benin over the period 2009–2010. Results indicate that bias-corrected technical efficiency scores are lower than the initial measures and that the bias-corrected estimates are statistically significant. The application results show that vegetable producers are less efficient with respect to pesticide use than with respect to other inputs. The results also suggest that pesticides, land and fertilizers are overused.
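For readers unfamiliar with DEA, the sketch below computes input-oriented efficiency scores under constant returns to scale (the standard CCR model) for a made-up data set; the paper's actual application additionally applies a bootstrap to bias-correct such scores and to obtain confidence intervals.

```python
# Input-oriented CCR (constant returns to scale) DEA efficiency scores,
# written as one linear program per decision-making unit. Data are illustrative.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],     # inputs:  rows = inputs,  cols = DMUs
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.5, 2.0]])    # outputs: rows = outputs, cols = DMUs

def ccr_input_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(1 + n); c[0] = 1.0
    # X @ lam <= theta * x_o   ->   -theta * x_o + X @ lam <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Y @ lam >= y_o           ->   -Y @ lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

print([round(ccr_input_efficiency(o), 3) for o in range(X.shape[1])])
```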
Estimating shadow prices and efficiency analysis of productive inputs and pesticide use of vegetable production