Columns: FileName (string, 17 characters), Abstract (string, 163–6.01k characters), Title (string, 12–421 characters)
S0377221715008899
Distribution planning is crucial for most companies since goods are rarely produced and consumed at the same place. Distribution costs, in addition, can be an important component of the final cost of the products. In this paper, we study a VRP variant inspired by a real case of a large distribution company. In particular, we consider a VRP with a heterogeneous fleet of vehicles that are allowed to perform multiple trips. The problem also includes docking constraints in which some vehicles are unable to serve some particular customers, and a realistic objective function with vehicles’ fixed and distance-based costs and a cost per customer visited. We design a trajectory search heuristic called GILS-VND that combines Iterated Local Search (ILS), Greedy Randomized Adaptive Search Procedure (GRASP) and Variable Neighborhood Descent (VND) procedures. This method obtains competitive solutions and improves on the company’s solutions, leading to significant savings in transportation costs.
An ILS-based algorithm to solve a large-scale real heterogeneous fleet VRP with multi-trips and docking constraints
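A minimal sketch of how a GRASP-style construction, VND local search and ILS perturbation can be chained, assuming generic problem-specific callables (`cost`, `construct`, `neighborhoods`, `perturb` are placeholders for illustration, not the paper's actual operators):

```python
import random

def gils_vnd(cost, construct, neighborhoods, perturb, max_iters=100, seed=0):
    """Minimal GRASP + ILS + VND trajectory-search skeleton.

    cost(sol) -> float; construct(rng) -> greedy-randomized initial solution;
    neighborhoods: list of functions sol -> best neighbor in that neighborhood;
    perturb(sol, rng) -> perturbed copy of sol.  All are problem-specific."""
    rng = random.Random(seed)

    def vnd(sol):
        # Variable Neighborhood Descent: whenever any neighborhood improves the
        # incumbent, restart the descent from the first neighborhood.
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](sol)
            if cost(cand) < cost(sol):
                sol, k = cand, 0
            else:
                k += 1
        return sol

    best = current = vnd(construct(rng))          # GRASP-style randomized start
    for _ in range(max_iters):
        candidate = vnd(perturb(current, rng))    # ILS: perturb, then descend again
        if cost(candidate) < cost(current):       # simple improvement acceptance
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best
```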
S0377221715008905
This paper reports in-depth behavioural operational research to explore how individual clients learned to resolve dynamically complex problems in system dynamics model-based engagements. Consultant-client dyads involved in ten system dynamics consulting engagements were interviewed to identify individual clients' Critical Learning Incidents—defined as the moment of surprise caused after one's mental model produces unexpected failure and a change in one's mental model produces the desired result. The cases, which are reprised from interviews, include assessments of the nature of the engagement problem, the form of system dynamics model, and the methods employed by consultants during each phase of the engagement. Reported Critical Learning Incidents are noted by engagement phase and consulting method and constructivist learning theory is used to describe a pattern of learning. Research outcomes include descriptions of: the role of different methods applied in engagement phases (for example, the role of concept models to commence problem identification and to introduce iconography and jargon to the engagement participants); how model form associates with the timing of Critical Learning Incidents; and the role of social mediation and negotiation in the learning process.
Critical Learning Incidents in system dynamics modelling engagements
S0377221715008917
In this paper, we address a truck dock assignment problem with an operational time constraint which has to be faced in the management of cross docks. More specifically, this problem is a subproblem of more involved problems with additional constraints and criteria. We propose a new integer programming model for this problem. The dimension of the polytope associated with the proposed model is identified by introducing a systematic way of generating linearly independent feasible solutions. Several classes of valid inequalities are also introduced. Some of them are proved to be facet-defining. Then, exact separation algorithms are described for separating cuts for classes with an exponential number of constraints, and an efficient branch-and-cut algorithm solving real-life size instances in a reasonable time is provided. In most cases, the optimal solution is identified at the root node without requiring any branching.
A branch-and-cut algorithm for the truck dock assignment problem with operational time constraints
S0377221715008929
In general, a portfolio problem minimizes risk (or negative utility) of a portfolio of financial assets with respect to portfolio weights subject to a budget constraint. The inverse portfolio problem then arises when an investor assumes that his/her risk preferences have a numerical representation in the form of a certain class of functionals, e.g. in the form of expected utility, a coherent risk measure or a mean-deviation functional, and aims to identify such a functional, whose minimization results in a portfolio, e.g. a market index, that he/she is most satisfied with. In this work, the portfolio risk is determined by a coherent risk measure, and the rate of return of the investor’s preferred portfolio is assumed to be known. The inverse portfolio problem then recovers the investor’s coherent risk measure either through finding a convex set of feasible probability measures (risk envelope) or in the form of either mixed CVaR or negative Yaari’s dual utility. It is solved in single-period and multi-period formulations and is demonstrated in a case study with the FTSE 100 index.
Inverse portfolio problem with coherent risk measures
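For reference, the Rockafellar–Uryasev representation of CVaR and the mixed-CVaR form mentioned above can be written as follows (standard definitions, stated here for a loss random variable X, not specific to this paper):

```latex
% Conditional Value-at-Risk at confidence level \alpha (Rockafellar--Uryasev form)
\mathrm{CVaR}_\alpha(X) \;=\; \min_{c \in \mathbb{R}}
  \left\{ c + \frac{1}{1-\alpha}\,\mathbb{E}\big[(X - c)_+\big] \right\},
\qquad
\text{mixed CVaR:}\quad
\rho(X) \;=\; \sum_{i} \lambda_i \,\mathrm{CVaR}_{\alpha_i}(X),
\quad \lambda_i \ge 0,\ \ \sum_i \lambda_i = 1 .
```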
S0377221715008930
Feature selection methods are used in machine learning and data analysis to select a subset of features that may be successfully used in the construction of a model for the data. These methods are applied under the assumption that often many of the available features are redundant for the purpose of the analysis. In this paper, we focus on a particular method for feature selection in supervised learning problems, based on a linear programming model with integer variables. For the solution of the optimization problem associated with this approach, we propose a novel robust metaheuristic algorithm that relies on a Greedy Randomized Adaptive Search Procedure, extended with the adoption of short memory and a local search strategy. The performance of our heuristic algorithm compares favourably with that of well-established feature selection methods, both on simulated and real data from biological applications. The obtained results suggest that our method is particularly suited for problems with a very large number of binary or categorical features.
Integer programming models for feature selection: New extensions and a randomized solution algorithm
S0377221715008942
This paper defines and models time-to-profit for the first time for credit acceptance decisions within the context of revolving credit. This requires the definition of a time-related event: a customer is profitable when monthly cumulative return is at least one (i.e. cumulative profits cover the outstanding balance). Time-to-profit scorecards were produced for a data set of revolving credit from a Colombian lending institution which included socio-demographic and first-purchase individual characteristics. Results show that it is possible to obtain good classification accuracy and improve portfolio returns, which are continuous by definition, through the use of survival models for binary events (i.e. either being profitable or not). It is also shown how predicting time-to-profit can be used for investment planning purposes of credit programmes. It is possible to identify the earliest point in time at which a customer is profitable and hence generates internal (organic) funds for a credit programme to continue growing and become sustainable. For the survival models, the effect of segmentation on loan duration was explored. Results were similar in terms of classification accuracy and identifying organic growth opportunities. In particular, loan duration and credit limit usage have a significant economic impact on time-to-profit. This paper confirms that high risk credit programmes can be profitable at different points in time depending on loan duration. Furthermore, existing customers may provide internal funds for the credit programme to continue growing.
Time-to-profit scorecards for revolving credit
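As a toy illustration of the time-to-profit event defined above (a customer becomes profitable once monthly cumulative return reaches one), the following sketch uses a made-up vector of monthly cumulative returns; it is not the paper's scorecard, only the event definition:

```python
def time_to_profit(monthly_cumulative_return):
    """Return the first month (1-based) in which cumulative return >= 1,
    i.e. cumulative profits cover the outstanding balance; None if censored."""
    for month, cum_ret in enumerate(monthly_cumulative_return, start=1):
        if cum_ret >= 1.0:
            return month
    return None  # still unprofitable at the end of the observation window

# Illustrative customer: cumulative return first covers the balance in month 4.
print(time_to_profit([0.2, 0.5, 0.9, 1.1, 1.3]))  # -> 4
```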
S0377221715008954
We extend the literature on risk preferences of a representative bettor by including odds-dependent bet sizes in our estimations. Accounting for different bet sizes largely reduces the standard errors of all coefficients. Substituting the coefficients from the model with equal bet sizes into the model with odds-dependent sizes leads to a sharp decline in the likelihood, which shows that accounting for different amounts is important. Our estimations strongly reject the hypothesis that the overbetting of outcomes with low probabilities (favorite-longshot bias) can be explained by risk-seeking bettors. Depending on the exact specification within cumulative prospect theory, the data can best be described by an overweighting of small probabilities which is more pronounced in the gain domain. Models allowing for two probability-weighting parameters each in the gain domain and in the loss domain are superior.
Estimating risk preferences of bettors with different bet sizes
S0377221715008966
The problem of regulating natural gas procurement has become a huge burden to regulators, especially due to the plethora of complicated financial contracts that are now being used by local distribution companies (LDCs) for risk management purposes. Muthuraman, Aouam, and Rardin (2008) proposed a new benchmarking scheme, called policy benchmarks, and showed that these benchmarks do not suffer from the usual criticisms that are made against existing regulatory methods. Such policy-benchmark-based regulation has however faced hurdles in being adopted. One of the primary reasons has been concerns over its robustness. We demonstrate in this paper that when modeling errors are present, the policy benchmarks proposed earlier can backfire and are hence, as suspected, not well suited for regulation. We begin our analysis with a more general model than the one that has been used earlier by accommodating the LDC’s ability to reduce cost by exerting effort, as in classical economics. We derive solutions to the LDC’s problem, find closed form solutions for the regulator’s optimal fee fraction along with risk sharing implications, and provide insights into the policy benchmark selection. We then construct a robust-optimization based policy benchmarking mechanism that inherits all the original benefits. We further demonstrate that these, unlike the earlier benchmarks, are robust against modeling errors.
Robust optimization policy benchmarks and modeling errors in natural gas
S0377221715008978
In their recent paper, Hämäläinen, Luoma, and Saarinen (2013) have made a strong case for the importance of Behavioural OR. With the motivation to contribute to a broad academic outlook in this emerging discipline, this rather programmatic paper intends to further the discussion by describing three types of research tasks that should play an important role in Behavioural OR, namely a descriptive, a methodological and a technological task. Moreover, by relating Behavioural OR to similar academic endeavours, three potential pitfalls are presented that Behavioural OR should avoid: (1) a too narrow understanding of what “behavioural” means, (2) ignorance of interdisciplinary links, and (3) a development without close connection with the core disciplines of OR. The paper concludes by suggesting a definition of Behavioural OR that sums up all points addressed.
An outlook on behavioural OR – Three tasks, three pitfalls, one definition
S0377221715008991
The paper develops the estimation of three parameters of banking risk based on an explicit model of expected utility maximization by financial institutions subject to the classical technology restrictions of neoclassical production theory. The parameters are risk aversion, prudence or downside risk aversion, and generalized risk resulting from a factor model of loan prices. The model can be estimated using standard econometric techniques, such as GMM for dynamic panel data and latent factor analysis for the estimation of covariance matrices. An explicit functional form for the utility function is not needed and we show how measures of risk aversion and prudence (downside risk aversion) can be derived and estimated from the model. The model is estimated using data for Eurozone countries and we focus particularly on (i) the use of the modeling approach as a device close to an “early warning mechanism”, (ii) the bank- and country-specific estimates of risk aversion and prudence (downside risk aversion), and (iii) the derivation of a generalized measure of risk that relies on loan-price uncertainty. Moreover, the model provides estimates of loan price distortions and thus of allocative efficiency.
Parameters measuring bank risk and their estimation
S0377221715009005
The concept of Conditional Value-at-Risk (CVaR) is used in various applications in an uncertain environment. This paper introduces the CVaR (superquantile) norm for a random variable, which is by definition the CVaR of the absolute value of this random variable. It is proved that the CVaR norm is indeed a norm in the space of random variables. The CVaR norm is defined in two variations: scaled and non-scaled. The L1 and L-infinity norms are limiting cases of the CVaR norm. In the continuous case, the scaled CVaR norm is a conditional expectation of the random variable. A similar representation of the CVaR norm is valid for discrete random variables. Several properties of the scaled and non-scaled CVaR norm, as a function of the confidence level, are proved. The dual norm of the CVaR norm is proved to be the maximum of the L1 and scaled L-infinity norms. The CVaR norm, as a Measure of Error, is related to a Regular Risk Quadrangle. The trimmed L1-norm, which is a non-convex extension of the CVaR norm, is introduced analogously to the Lp function for p < 1. Linear regression problems are solved by minimizing the CVaR norm of regression residuals.
CVaR (superquantile) norm: Stochastic case
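A small sketch of the scaled CVaR norm for a vector treated as an equally likely sample, computed through the Rockafellar–Uryasev formula (whose minimum over c is attained at one of the sample atoms). The non-scaled variant differs only by a scaling factor, which is left out here; the numbers are illustrative:

```python
import numpy as np

def scaled_cvar_norm(x, alpha):
    """Scaled CVaR (superquantile) norm of x: CVaR_alpha of |X|, where X takes
    the values of x with equal probability."""
    a = np.abs(np.asarray(x, dtype=float))
    # Rockafellar-Uryasev objective c + E[(|X|-c)_+]/(1-alpha), minimized over atoms.
    candidates = [c + np.mean(np.maximum(a - c, 0.0)) / (1.0 - alpha) for c in a]
    return min(candidates)

x = [3.0, -1.0, 0.5, -2.0]
print(scaled_cvar_norm(x, 0.0))    # mean of |x| (scaled L1 limiting case): 1.625
print(scaled_cvar_norm(x, 0.999))  # approaches max |x| (L-infinity limiting case): 3.0
```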
S0377221715009017
We consider a selective vehicle routing problem, in which customers belonging to different partners in a logistic coalition are served in a single logistic operation with multiple vehicles. Each partner determines a cost of non-delivery (CND) for each of its customers, and a central algorithm creates an operational plan, including the decision on which customers to serve and in which trip. The total transportation cost of the coalition is then divided back to the partners through a cost allocation mechanism. This paper investigates the effect on the cost allocation of a partner’s strategy on non-delivery penalties (high/low) and the properties of its customer locations (distance to the depot, degree of clustering). The effect of the cost allocation method used by the coalition is also investigated. We compare the well-known Shapley value cost allocation method to our novel problem-specific method: the CND-weighted cost allocation method. We prove that an adequate cost allocation method can provide an incentive for each partner to behave in a way that benefits the coalition. Further, we develop a transformation that is able to transform any cost allocation into an individually rational one without losing this incentive.
The selective vehicle routing problem in a collaborative environment
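The CND-weighted allocation is problem-specific and not reproduced here; the sketch below only shows the standard Shapley value computation that the paper uses as a comparison baseline, applied to a made-up three-partner coalition cost function:

```python
from itertools import permutations

def shapley_allocation(players, coalition_cost):
    """Shapley value of a cost game: average marginal cost of each player over
    all orders in which the coalition can form (exact, exponential-time; fine
    for the small illustrative coalition below)."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        present = set()
        for p in order:
            before = coalition_cost(frozenset(present))
            present.add(p)
            shapley[p] += coalition_cost(frozenset(present)) - before
    return {p: v / len(orders) for p, v in shapley.items()}

# Illustrative routing coalition: standalone costs 10, 12, 14 with pooling
# savings for larger coalitions (all numbers are made up).
costs = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 12, frozenset("C"): 14,
         frozenset("AB"): 18, frozenset("AC"): 20, frozenset("BC"): 22,
         frozenset("ABC"): 27}
print(shapley_allocation("ABC", lambda s: costs[s]))
```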
S0377221715009029
Production plans often span a whole week or month, even when independent production lots are completed every day and service performance is tallied daily. Such policies are said to use staggered deliveries, meaning that the production rates for multiple days are determined at a single point in time. Assuming autocorrelated demand, and linear inventory holding and backlog costs, we identify the optimal replenishment policy for order cycles of length P. With the addition of a once-per-cycle audit cost, we optimize the order cycle length P* via an inverse-function approach. In addition, we characterize periodic inventory costs, availability, and fill rate. As a consequence of staggering deliveries, the inventory level becomes cyclically heteroskedastic. This manifests itself as ripples in the expected cost and service levels. Nevertheless, the cost-optimal replenishment policy achieves a constant availability by using time-varying safety stocks; this is not the case with suboptimal constant safety stock policies, where the availability fluctuates over the cycle.
Inventory performance under staggered deliveries and autocorrelated demand
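A small simulation, with made-up parameters and a deliberately simple constant order-up-to rule (not the paper's optimal policy), illustrating the cyclic heteroskedasticity claim: under AR(1) demand and one delivery per P-period cycle, the variance of the inventory level grows with the position in the cycle:

```python
import numpy as np

rng = np.random.default_rng(1)
P, periods = 5, 50_000           # order cycle length, simulated periods
mu, phi, sigma = 20.0, 0.6, 4.0  # AR(1) demand: d_t = mu + phi*(d_{t-1}-mu) + eps_t
order_up_to = P * mu + 30.0      # simple constant order-up-to level (illustrative)

d, inv = mu, order_up_to
inv_by_position = [[] for _ in range(P)]
for t in range(periods):
    if t % P == 0:                       # staggered delivery: replenish once per cycle
        inv = order_up_to
    d = mu + phi * (d - mu) + rng.normal(0.0, sigma)
    inv -= d                             # demand consumed (negative inv = backlog)
    inv_by_position[t % P].append(inv)

# Per-position variance of the inventory level rises over the cycle:
print([round(float(np.var(v)), 1) for v in inv_by_position])
```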
S0377221715009030
Current sourcing paradigms are described by greater complexity in managing suppliers' performance in purchasing cycle execution, product quality and logistics competence. When we combine the stages of the order fulfillment cycle with the buyer's information sharing factors, a more complete view of the buyer-supplier exchange becomes useful for performance evaluation of both the suppliers and the buyers. Using operational data obtained from a large telecommunications firm, which placed strategic importance on measuring the performance in the purchase order fulfillment cycle stages, this study offers a new context for supplier evaluation. This context also conceptualizes several on-time delivery disruption scenarios as supplier delivery performance risk, and investigates the sensitivity of both the buyer's information sharing and the suppliers’ performance capabilities to on-time delivery disruptions. We evaluate supplier performance and segment suppliers through recourse to chance-constrained data envelopment analysis at high and low discriminatory power levels. Our results show that there are statistically significant relationships between the dimensions of information sharing by the buying firm and the classification of supplier firms at varying levels of on-time delivery performance risk. We also identify robust suppliers demonstrating the capability to remain efficient across disruption levels despite poor buyer performance on information sharing factors, and provide managerial insights on buyer-supplier exchange.
Exploring supplier performance risk and the buyer's role using chance-constrained data envelopment analysis
S0377221715009042
This paper proposes a way to optimally regulate bargaining for risk redistributions. We discuss the strategic interaction between two firms, who trade risk Over-The-Counter in a one-period model. Novel to the literature, we focus on an incomplete set of possible risk redistributions. This keeps the set of feasible contracts simple. We consider catastrophe and longevity risk as two key examples. The reason is that the trading of these risks typically occurs Over-The-Counter, and that there are no given pricing functions. If the set of feasible strategies is unconstrained, we show that all Nash equilibria are such that no firm benefits from trading. A way to avoid this is to restrict the strategy space a priori. In this way, a Nash equilibrium that is interesting for both firms may exist. The intervention of a regulator is possible by restricting the set of feasible strategies; for instance, a firm has to keep a deductible on its prior risk. We characterize optimal regulation by means of Nash bargaining solutions.
Nash equilibria of Over-The-Counter bargaining for insurance risk redistributions: The role of a regulator
S0377221715009054
In the traditional analytic hierarchy process (AHP), decision makers (DMs) are required to provide crisp judgments over paired comparisons of objectives to construct comparison matrices. To enhance the modeling ability of traditional AHP, we propose hesitant AHP (H-AHP), which can consider the hesitancy experienced by the DMs in decision making. H-AHP is characterized by hesitant judgments, where each hesitant judgment can be represented by several possible values. Different probability distributions can be used to further describe hesitant judgments according to the DMs’ preferences. Based on a hesitant comparison matrix (HCM) that consists of hesitant judgments, we define two indices to measure the consistency degree and the consensus degree of the HCM, respectively. From a stochastic point of view, a new prioritization method is developed to derive priorities from HCMs, where the results have probabilistic interpretations. We provide a step-by-step procedure for H-AHP, and demonstrate this new method with a real-life decision making problem.
Hesitant analytic hierarchy process
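One plausible stochastic prioritization consistent with the description above, sketched here for illustration only (sampling crisp reciprocal matrices from the hesitant judgments and averaging row geometric-mean priorities; not necessarily the paper's exact method):

```python
import numpy as np

def sample_priorities(hesitant_matrix, n_samples=2000, seed=0):
    """Monte Carlo prioritization for a hesitant comparison matrix.

    hesitant_matrix[i][j] (i < j) is a list of possible judgments a_ij; each
    sample draws one value per pair, builds a reciprocal crisp matrix, and
    derives priorities by the row geometric-mean method."""
    rng = np.random.default_rng(seed)
    n = len(hesitant_matrix)
    draws = np.empty((n_samples, n))
    for s in range(n_samples):
        A = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                a = rng.choice(hesitant_matrix[i][j])
                A[i, j], A[j, i] = a, 1.0 / a
        w = np.exp(np.log(A).mean(axis=1))
        draws[s] = w / w.sum()
    return draws.mean(axis=0), draws.std(axis=0)  # priorities with spread

# Three criteria; e.g. the (1,2) comparison hesitates between 2 and 4.
H = [[None, [2, 4], [3]],
     [None, None, [1, 2]],
     [None, None, None]]
mean_w, std_w = sample_priorities(H)
print(np.round(mean_w, 3), np.round(std_w, 3))
```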
S0377221715009066
In the last few years, the Italian healthcare system has been coping with radical changes, aimed at guaranteeing more efficiency while containing costs. Starting from the actual service network organization, we discuss the problem faced by the Italian authorities of reorganizing the healthcare service network and we propose some optimization models to support the decision-making process. In the first part of the work, we compare the existing health care service network of the northern area of Calabria (Italy) with the configurations determined by solving well-known facility location models. In the second part, taking into account the healthcare reorganization plans imposed by local governments, we consider the problem of reorganizing the public health care service network of the northern area of Calabria. Indeed, we propose two ad-hoc optimization models that consider national and regional guidelines and constraints. The behavior of the proposed models, in terms of solution quality, is evaluated on the basis of an extensive computational study on real data.
Location and reorganization problems: The Calabrian health care system case
S0377221715009091
The talent scheduling problem is a simplified version of the real-world film shooting problem, which aims to determine a shooting sequence so as to minimize the total cost of the actors involved. In this article, we first formulate the problem as an integer linear programming model. Next, we devise a branch-and-bound algorithm to solve the problem. The branch-and-bound algorithm is enhanced by several accelerating techniques, including preprocessing, dominance rules and caching search states. Extensive experiments over two sets of benchmark instances suggest that our algorithm is superior to the current best exact algorithm. Finally, the impacts of different parameter settings, algorithm components and instance generation distributions are disclosed by some additional experiments.
An enhanced branch-and-bound algorithm for the talent scheduling problem
S0377221715009108
This paper studies a k-median Steiner forest problem that jointly optimizes the opening of at most k facility locations and their connections to the client locations, so that each client is connected by a path to an open facility, with the total connection cost minimized. The problem has wide applications in the telecommunication and transportation industries, but is strongly NP-hard. In the literature, only a 2-approximation algorithm is known, it being based on a Lagrangian relaxation of the problem and using a sophisticated primal-dual schema. In this study, we have developed an improved approximation algorithm using a simple transformation from an optimal solution of a minimum spanning tree problem. Compared with the existing 2-approximation algorithm, our new algorithm not only achieves a better approximation ratio that is easier to prove, but also guarantees to produce solutions of equal or better quality, up to 50 percent improvement in some cases. In addition, for two non-trivial special cases, where either every location contains a client, or all the locations are in a tree-shaped network, we have developed, for the first time in the literature, new algorithms that can solve the problem to optimality in polynomial time.
Improved algorithms for joint optimization of facility locations and network connections
S0377221715009121
We study the operations scheduling problem encountered in the process of making and distributing emergency supplies. The lead times of a multi-echelon process, including shipping time, assembly time, and waiting time for raw materials, must be explicitly modeled. The optimization problem is to find an inventory allocation and a production/assembly plan together with a shipping schedule for inbound supplies and outbound deliveries so that the total tardiness in customer order fulfillment is minimized. We define the problem as a mixed integer programming model, perform a structural analysis of the problem, and then propose a new search heuristic for the problem. The proposed heuristic finds a feasible solution to the problem by solving a series of linear programming relaxation problems, and is able to terminate quickly. Observations from an extensive empirical study are reported.
A heuristic for emergency operations scheduling with lead times and tardiness penalties
S0377221715009133
We introduce a new binary quadratic program that subsumes the well known quadratic assignment problem. This problem is shown to be NP-hard when some associated parameters are restricted to simple values, but solvable in polynomial time when some other parameter values are restricted. Three different neighborhood structures are introduced. A best solution in all these neighborhoods can be identified in polynomial time even though two of these neighborhoods are of exponential size. Different local search algorithms and their enhancements using tabu search are developed using these neighborhoods. Experimental analysis shows that two hybrid algorithms obtained by combining these neighborhoods in specific ways yield results that are superior to the algorithms that use these neighborhoods separately. Extensive computational results and statistical analysis of the resulting data are presented.
The bipartite quadratic assignment problem and extensions
S0377221715009145
This paper addresses a new steelmaking-continuous casting (SCC) scheduling problem from iron and steel production processing. We model the problem as a combination of two coupled sub-problems. One sub-problem is a charge scheduling problem in a hybrid flowshop, and the other is a cast scheduling problem in parallel machines. To solve this SCC problem, we present a novel cooperative co-evolutionary artificial bee colony (CCABC) algorithm that has two sub-swarms, with each addressing a sub-problem. Problem-specific knowledge is used to construct an initial population, and an exploration strategy is introduced to guide the CCABC to promising regions during the search. To adapt the search operators in the classical artificial bee colony (ABC) to the cooperative co-evolution paradigm, an enhanced strategy for onlookers and a self-adaptive neighbourhood operator have been suggested. Extensive experiments based on both synthetic and real-world instances from an SCC process show the effectiveness of the proposed CCABC in solving the SCC scheduling problem.
An effective co-evolutionary artificial bee colony algorithm for steelmaking-continuous casting scheduling
S0377221715009157
Real-world problems in the supply-chain domain are generally constrained and combinatorial in nature. Several nature-/bio-/socio-inspired metaheuristic methods have been proposed so far for solving such problems. An emerging metaheuristic methodology referred to as Cohort Intelligence (CI) in the socio-inspired optimization domain is applied in order to solve three selected combinatorial optimization problems. The problems considered include a new variant of the assignment problem which has applications in healthcare and inventory management, a sea-cargo mix problem and a cross-border shipper selection problem. In each case, we use two benchmarks for evaluating the effectiveness of the CI method in identifying optimal solutions. To assess the quality of solutions obtained by using CI, we compare its performance against solutions generated by using CPLEX. Furthermore, we also compare the performance of the CI method to that of specialized multi-random-start local search optimization methods that can be used to find solutions to these problems. The results are robust, with reasonable computational time and accuracy.
Application of the cohort-intelligence optimization method to three selected combinatorial optimization problems
S0377221715009182
Usually, in order to summarize various opinions about a particular situation (mainly product or service valuation on the Internet), a process called aggregation is used. This process basically consists of determining the appropriate value to represent the majority's opinion, and many strategies and operators can be used for this purpose. The simple arithmetic mean is widely used to summarize several opinions in a single value, but this value is generally not representative or it is affected by extreme values. An alternative for aggregating opinions is the family of Ordered Weighted Averaging (OWA) operators. Nevertheless, they have distribution problems when applied to aggregates with cardinalities. These problems may be solved by using the Majority Additive OWA (MA-OWA) operator, a sort of arithmetic mean of arithmetic means. The MA-OWA operator works adequately but, in some cases, discards the minority's opinion, specifically when it does not coincide with the largest cardinality value. In order to generalize the usage of the MA-OWA operator, the remaining opinions are taken into account using a Cardinality Relevance Factor. This paper introduces a Selective Majority Additive OWA (SMA-OWA) operator which manages the significance of all opinions by varying the Cardinality Relevance Factor. The mathematical extension of SMA-OWA, its properties and some illustrative examples are presented in this article.
Selective majority additive ordered weighting averaging operator
S0377221715009194
In this paper, we consider the online strip packing problem, in which a list of rectangles arriving online has to be packed without overlap or rotation into a strip of width 1 and infinite length so as to minimize the required height of the packing. We derive a new improved lower bound of (3 + √5)/2 ≈ 2.618 on the competitive ratio for this problem. This result improves the best known lower bound of 2.589.
A new lower bound for online strip packing
S0377221715009200
Strategies for investing in renewable energy projects present high risks associated with generation and price volatility and dynamics. Existing approaches for determining optimal strategies are based on real options theory, which often simplifies the uncertainty process, or on stochastic programming approaches, which simplify the dynamic aspects. In this paper, we bridge the gap between these approaches by developing a multistage stochastic programming approach that includes real options such as postponing, hedging with fixed (forward) contracts and combining with other sources. The proposed model is solved by a procedure based on the Stochastic Dual Dynamic Programming (SDDP) method. The framework is extended to the risk averse setting. A specific case study of investment in hydro and wind projects in the Brazilian market is used to illustrate that the investment strategies generated by the proposed approach are efficient.
Risk neutral and risk averse approaches to multistage renewable investment planning under uncertainty
S0377221715009212
Retailers face the important but challenging task of optimizing their product assortments. The challenge is to find, for every category in every store, the assortment that maximizes (expected) category profit. Adding to the complexity of this 0–1 knapsack problem, retailers should also consider the risk associated with every assortment. While every product in the assortment offers an expected return, there is also uncertainty around its expected demand and profit contribution. Therefore, retailers face the difficult task of designing a portfolio of products that balances risk and return. In this paper, we develop a robust approach to optimize retail assortments that offers this balance. Since the dimensionality of this robust 0–1 knapsack problem in practice often precludes full enumeration, we propose a novel, efficient and real-time heuristic that solves this problem. The heuristic constructs an approximation of the risk-return Efficient Frontier of assortments. We find that the robust solutions offer the retailer a considerable reduction in risk (variance), yet only imply a small reduction in expected return. The constructed approximations contain assortments that are optimal solutions to the robust assortment optimization problem. Moreover, they represent insightful visualizations of the solution space, allowing for interactivity (“what risk premium should the retailer pay?”) in real-time (matter of seconds).
Robust optimization of the 0–1 knapsack problem: Balancing risk and return in assortment optimization
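As a toy illustration of a risk-return efficient frontier of assortments (full enumeration on made-up product data, assuming independent profit contributions; the paper's heuristic avoids exactly this enumeration for realistic sizes):

```python
from itertools import combinations

# Illustrative products: (expected profit, profit variance); numbers are made up.
products = {"A": (10, 4), "B": (8, 1), "C": (12, 16), "D": (6, 0.5), "E": (9, 9)}

def assortment_frontier(products, max_size):
    """Score every feasible assortment by (risk, expected return), assuming
    independent product profits, and keep the non-dominated ones."""
    points = []
    for r in range(1, max_size + 1):
        for combo in combinations(sorted(products), r):
            ret = sum(products[p][0] for p in combo)
            risk = sum(products[p][1] for p in combo)
            points.append((risk, ret, combo))
    points.sort(key=lambda t: (t[0], -t[1]))   # ascending risk, best return first on ties
    efficient, best_ret = [], float("-inf")
    for risk, ret, combo in points:
        if ret > best_ret:                     # strictly more return than any lower-risk point
            efficient.append((combo, ret, risk))
            best_ret = ret
    return efficient

for combo, ret, risk in assortment_frontier(products, max_size=3):
    print(combo, "expected return:", ret, "risk (variance):", risk)
```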
S0377221715009224
We generalize the idea of semi-self-financing strategies, originally discussed in Ehrbar (1990), and later formalized in Cui et al. (2012), for the pre-commitment mean-variance (MV) optimal portfolio allocation problem. The proposed semi-self-financing strategies are built upon a numerical solution framework for Hamilton–Jacobi–Bellman equations, and can be readily employed in a very general setting, namely continuous or discrete re-balancing, jump-diffusions with finite activity, and realistic portfolio constraints. We show that if the portfolio wealth exceeds a threshold, an MV optimal strategy is to withdraw cash. These semi-self-financing strategies are generally non-unique. Numerical results confirming the superiority of the efficient frontiers produced by the strategies with positive cash withdrawals are presented. Tests based on estimation of parameters from historical time series show that the semi-self-financing strategy is robust to estimation ambiguities.
Better than pre-commitment mean-variance portfolio allocation strategies: A semi-self-financing Hamilton–Jacobi–Bellman equation approach
S0377221715009236
In this paper we study the profitable windy rural postman problem. This is an arc routing problem with profits defined on a windy graph, in which a profit is associated with some of the edges of the graph, and it consists of finding a route that maximizes the difference between the total profit collected and the total cost. This problem generalizes the rural postman problem and other well-known arc routing problems and has real-life applications, mainly in snow removal operations. We propose here a formulation for the problem and study its associated polyhedron. Several families of facet-inducing inequalities are described and used in the design of a branch-and-cut procedure. The algorithm has been tested on a large set of benchmark instances and compared with other existing algorithms. The results obtained show that the branch-and-cut algorithm is able to solve large-sized instances optimally in reasonable computing times.
A branch-and-cut algorithm for the profitable windy rural postman problem
S0377221715009248
Traditionally, both researchers and practitioners rely on standard Erlang queueing models to analyze call center operations. Going beyond such simple models has strong implications, as is evidenced by theoretical advances in the recent literature. However, there is very little empirical research to support that body of theoretical work. In this paper, we carry out a large-scale data-based investigation of service times in a call center with many heterogeneous agents and multiple call types. We observe that, for a given call type: (a) the service-time distribution depends strongly on the individual agent, (b) it changes with time, and (c) average service times are correlated across successive days or weeks. We develop stochastic models that account for these facts. We compare our models to simpler ones, commonly used in practice, and find that our proposed models have a better goodness-of-fit, both in-sample and out-of-sample. We also perform simulation experiments to show that the choice of model can have a significant impact on the estimates of common measures of quality of service in the call center.
Inter-dependent, heterogeneous, and time-varying service-time distributions in call centers
S0377221715009455
In this paper, we propose a semiparametric version of the zero-inefficiency stochastic frontier model of Kumbhakar, Parmeter, and Tsionas (2013) by allowing the proportion of firms that are fully efficient to depend on a set of covariates via an unknown smooth function. We propose an (iterative) backfitting local maximum likelihood estimation procedure that achieves the optimal convergence rates of both the frontier parameters and the nonparametric function of the probability of being efficient. We derive the asymptotic bias and variance of the proposed estimator and establish its asymptotic normality. In addition, we discuss how to test for a parametric specification of the proportion of firms that are fully efficient, as well as how to test for the presence of fully inefficient firms, based on sieve likelihood ratio statistics. The finite sample behavior of the proposed estimation procedure and tests is examined using Monte Carlo simulations. An empirical application is further presented to demonstrate the usefulness of the proposed methodology.
Zero-inefficiency stochastic frontier models with varying mixing proportion: A semiparametric approach
S0377221715009467
The financial lease is an important financing tool by which the lessee can acquire ownership of equipment upon the expiration of the lease after making a series of rent payments for the use of the equipment. In this paper, we consider an online version of this financial lease decision problem in which the decision maker (the lessee) does not know how long he/she will use the equipment. Assuming that the lessee can use the equipment through two options, financial lease or ordinary lease, we define and solve this online financial lease decision problem using the competitive analysis method. The optimal online strategies are discussed in each financial lease case, with or without a down payment. Finally, the optimal strategies are summarized as simple decision rules.
Competitive analysis of the online financial lease problem
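The classic break-even (ski-rental style) argument underlying such competitive analyses can be sketched as follows, with made-up rent and purchase prices; this is the textbook baseline, not the paper's exact strategies (which also account for down payments):

```python
def breakeven_strategy_cost(rent, buy_cost, usage_periods):
    """Rent while cumulative rent stays below the purchase (financial-lease)
    price, then buy.  Returns (online cost, offline optimal cost) for a given
    usage duration that the online decision maker does not know in advance."""
    switch_after = buy_cost // rent              # rent for this many periods, then buy
    if usage_periods <= switch_after:
        online = rent * usage_periods
    else:
        online = rent * switch_after + buy_cost
    offline = min(rent * usage_periods, buy_cost)
    return online, offline

rent, buy = 10, 100
worst_ratio = max(breakeven_strategy_cost(rent, buy, t)[0] /
                  breakeven_strategy_cost(rent, buy, t)[1]
                  for t in range(1, 500))
print(worst_ratio)   # the well-known factor-2 bound for the break-even rule
```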
S0377221715009479
This paper addresses the Berth Allocation Problem under Time-Dependent Limitations. Its goal is to allocate and schedule the available berthing positions for the container vessels arriving at a maritime container terminal under water depth and tidal constraints. As we discuss, the only optimization model found in the literature does not guarantee the feasibility of the solutions reported in all cases and is limited to a two-period planning horizon, i.e., one low tide and one high tide period. In this work, we propose an alternative mathematical formulation based upon the Generalized Set Partitioning Problem, which considers a multi-period planning horizon and includes constraints related to berth and vessel time windows. The performance of our optimization model is compared with that of the mathematical model reported in the related literature. In this regard, the computational experiments indicate that our model outperforms the previous one from the literature in several respects: (i) it guarantees the feasibility and optimality of the solutions reported in all cases, (ii) it reduces the computational times by about 88 percent on average on the problem instances from the literature, and (iii) it presents reasonable computational times on new large problem instances.
A Set-Partitioning-based model for the Berth Allocation Problem under Time-Dependent Limitations
S0377221715009480
The internal complexity of lifeline systems and their interdependencies amplify their vulnerability to external disruptions. We consider lifeline infrastructures as a network system with supply, transshipment and demand nodes, and arcs constructed between node pairs for conveying service flows. The complex interactive network system can be modeled as multi-layered graphs, whereby the power network depends on the gas network linked through the gas-fired power plants. Similarly, the water network depends on both the quality and quantity of the power supply. A successful emergency rescue can make lifeline infrastructures more resilient against natural disasters and unexpected accidents. This study focuses on a resource allocation and scheduling problem to restore the most critical components quickly in multiple interdependent lifeline infrastructures under disruptions. The key objectives of the quick response model are to reduce the overall losses caused by the accidents and to restore system functions as quickly as possible. The Resource Allocation Model (RAM) for rescue is formulated as a two-stage mixed-integer program, in which the first-stage problem aims to minimize the total losses, while the second-stage problem optimizes the resource allocation for rescue service within the rescue time horizon using the proposed heuristic algorithm of polynomial complexity. Meanwhile, the tasks/components to be repaired are selected by the proposed vulnerability analysis method to guarantee optimal whole-network efficiency and are then passed to the Resource Allocation Model. The simulation results demonstrate that the proposed approaches are both efficient and effective in solving the real-life post-disaster resource allocation problem.
A two-stage resource allocation model for lifeline systems quick response with vulnerability analysis
S0377221715009492
The problem of designing a wavelength routed optical transport network without wavelength conversion at intermediate nodes is considered. A class of valid inequalities for wavelength routing and assignment is reported and is used to augment traditional network design formulations. The resulting network cost provides a lower bound on the cost of a network that permits wavelength routing. The resulting network is shown to be optimal for a majority of the problem instances tested, and in those cases where it is not, a trial-and-error method is proposed that is able to find near-optimal solutions within a relatively short period of time. This is achieved by developing efficient and effective heuristics that attempt to provide a feasible wavelength routing. Computational tests are reported on relatively larger problem sizes than have been reported in the literature on the wavelength routing problem.
Near optimal design of wavelength routed optical networks
S0377221715009509
We discuss two types of contracts for energy-saving products in a monopoly with the government’s budget constraint. These contracts specify the subsidy, either a fixed amount (referred to as F-type contracts) or a discount (referred to as D-type contracts), and the threshold of the energy consumption level of products, below which the products are labeled as certified products and qualified for the subsidy. We model a three-stage game and derive the optimal design of the contracts under two different objectives of the government: minimizing the total energy consumption and minimizing the average energy consumption. Our results show that: 1) the optimal contract designs are the same under the two objectives; 2) if the subsidy budget is relatively low, the F-type contract is preferable to the D-type contract, and vice versa; 3) the contracts’ certification function enables the government to take advantage of consumers’ environmental awareness to improve environmental performance even when the government’s subsidy budget is zero; 4) both contracts benefit the environment. Under the F-type contract, only the government pays for the environmental improvement. In contrast, under the D-type contract, both the government and consumers pay.
Contract designs for energy-saving product development in a monopoly
S0377221715009510
This paper addresses an assembly line balancing problem in which the length of the workpieces is larger than the width of the workstations. The problem differs from traditional variants of assembly line balancing in the sense that only a portion of the workpiece, or portions of two consecutive workpieces, can be reached from any workstation. Consequently, at any stationary stage of the cycle, each workstation can only process a portion of the tasks, namely, those which are inside the area of a workpiece that is reachable from the workstation. The objective is to find a (cyclic) movement scheme of the workpieces along the line and a task assignment to stationary stages of the production process, while minimizing the cycle time. We propose three hybrid approaches combining metaheuristics and mathematical programming: one based on simulated annealing and the other two based on tabu search, relying on different neighborhood definitions. The first two approaches make use of a classical neighborhood, obtained by applying local changes to a current solution. The third approach, in contrast, draws ideas from the corridor method to define a corridor around the current solution, via the imposition of exogenous constraints on the solution space of the problem. An extensive computational experiment is carried out to test the performance of the proposed approaches, improving the best results published to date.
Hybrid metaheuristics for the Accessibility Windows Assembly Line Balancing Problem Level 2 (AWALBP-L2)
S0377221715009522
We design and implement a dynamic program for valuing corporate securities, seen as derivatives on a firm’s assets, and computing the term structure of yield spreads and default probabilities. Our setting is flexible for it accommodates an extended balance-sheet equality, arbitrary corporate debts, multiple seniority classes, and a reorganization process. This flexibility comes at the expense of a minor loss of efficiency. The analytical approach proposed in the literature is exchanged here for a quasi-analytical approach based on dynamic programming coupled with finite elements. To assess our construction, which shows flexibility and efficiency, we carry out a numerical investigation along with a complete sensitivity analysis.
A dynamic program for valuing corporate securities
S0377221715009534
We propose an interactive multiobjective evolutionary algorithm that attempts to discover the most preferred part of the Pareto-optimal set. Preference information is elicited by asking the user to compare some solutions pairwise. This information is then used to narrow the set of value functions compatible with the user’s preferences, and the multiobjective evolutionary algorithm is run to simultaneously search for all solutions that could potentially be the most preferred. Compared to previous similar approaches, we implement a much more efficient way of determining potentially preferred solutions, that is, solutions that are best for at least one value function compatible with the preference information provided by the decision maker. For the first time in the context of evolutionary computation, we apply the Choquet integral as the user’s preference model, allowing us to capture interactions between objectives. As there is a trade-off between the flexibility of the value function model and the complexity of learning a faithful model of the user’s preferences, we propose to start the interactive process with a simple linear model and then to switch to the Choquet integral as soon as the preference information can no longer be represented by the linear model. An experimental analysis demonstrates the effectiveness of the approach.
Using Choquet integral as preference model in interactive evolutionary multiobjective optimization
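For readers unfamiliar with the operator, this is the standard discrete Choquet integral on a small made-up two-criterion example (the capacity values are illustrative, not learned preferences from the paper):

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of `values` (criterion -> score) with respect
    to a set function `capacity` on frozensets of criteria.  Scores are sorted
    ascending and each increment is weighted by the capacity of the criteria
    whose score reaches at least that level."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, score in items:
        total += (score - prev) * capacity(frozenset(remaining))
        prev = score
        remaining.discard(crit)
    return total

# Toy capacity with positive interaction (synergy) between cost and quality:
# mu({cost}) + mu({quality}) = 0.9 < mu({cost, quality}) = 1.0.
cap = {frozenset(): 0.0, frozenset({"cost"}): 0.4,
       frozenset({"quality"}): 0.5, frozenset({"cost", "quality"}): 1.0}
print(choquet_integral({"cost": 0.3, "quality": 0.8}, lambda s: cap[s]))  # -> 0.55
```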
S0377221715009546
This paper develops a stochastic modeling framework to determine the location and capacities of distribution centers for emergency stockpiles to improve preparedness in the event of a disaster for which there is little to no forewarning. The proposed framework is applicable to emergency planning that must incorporate multiple sources of uncertainty, including the timing and severity of a potential event, as well as the resulting impact, while taking into consideration both disaster and region specific characteristics. To demonstrate the modeling approach, we apply it to a region prone to earthquakes. The model incorporates various uncertainties such as facility damage and casualty losses, based upon their severity and remaining survivability time, as a function of the magnitude of the earthquake. Given the computational complexity of the problem of interest, we develop an evolutionary optimization heuristic aided by an innovative mixed integer programming model that generates time efficient high quality solutions. We demonstrate the effectiveness of the heuristic via a case study featuring the HAZUS-MH software from the Federal Emergency Management Agency (FEMA). Finally, given the uncertainty associated with the magnitude of the earthquake, we use a decision analysis approach to develop robust solutions while taking into account the geological characteristics of the region.
Location and capacity allocation decisions to mitigate the impacts of unexpected disasters
S0377221715009558
Hospitals in Germany have been required to have an internal quality management system (QMS) since 2000. Although formal certification of such systems is voluntary, the number of certifications has increased steadily. The most common standards in Germany are ISO 9001, which is also widely used internationally, and KTQ (Kooperation für Transparenz und Qualität im Gesundheitswesen), which was developed specifically for the German health care sector. While a large body of literature has investigated the impact of QMS certification on performance in many industries, there is only scarce evidence on the causal link between QMS certification and technical efficiency. In the present study, we seek to elucidate this relationship using administrative data from all German hospitals from 2000 through 2010 combined with information on certification. Our analysis has three steps: First, we calculated efficiency scores for each hospital using a bootstrapped data envelopment analysis. Second, we used genetic matching to ensure that any differences observed could be attributed to certification and were not due to differences in sample characteristics between the intervention and control groups. Third, we employed a difference-in-difference specification within a truncated regression to examine whether certification had an impact on hospital efficiency. To shed light on a potential time lag between certification and efficiency gains, we used various periods for comparison. Our results indicate that hospital efficiency was negatively related to ISO 9001 certification and positively related to KTQ certification. Moreover, coefficients were always larger in the period between first certification and recertification.
Changes in technical efficiency after quality management certification: A DEA approach using difference-in-difference estimation with genetic matching in the hospital industry
S0377221715009571
Developing a cost-effective annual delivery program (ADP) is a challenging task for liquefied natural gas (LNG) suppliers, especially for LNG supply chains with a large number of vessels and customers. Given the significant operational costs in LNG delivery operations, cost-effective ADPs can yield substantial savings, adding up to millions. Providing an extensive account of supply chain operations and contractual terms, this paper aims to consider a realistic ADP problem faced by large LNG suppliers, suggest alternative delivery options, such as split delivery, and propose an efficient heuristic solution which outperforms commercial optimizers. The comprehensive numerical study in this research demonstrates that, contrary to the common belief in practice, split delivery may generate substantial cost reductions in LNG supply chains.
A comprehensive annual delivery program for upstream liquefied natural gas supply chain
S0377221715009583
Convex risk measures for European contingent claims are studied in a non-Markovian jump-diffusion modeling framework using functional Itô’s calculus. Two representations for a convex risk measure are considered, one based on a nonlinear g-expectation and another one based on a representation theorem. Functional Itô’s calculus for càdlàg processes, backward stochastic differential equations (BSDEs) with jumps and stochastic optimal control theory are used to discuss the evaluation of convex risk measures. FPDIEs and PDIEs for convex risk measures are derived in the Markovian and non-Markovian situations, respectively. An entropic risk measure, which is a particular case of a convex risk measure, is discussed.
A functional Itô’s calculus approach to convex risk measures with jump diffusion
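For reference, the entropic risk measure mentioned above as a particular case has the standard closed form and relative-entropy dual representation (standard definitions, not specific to this paper's functional Itô setting):

```latex
% Entropic risk measure of a position X with risk-aversion parameter \gamma > 0,
% a prototypical convex (but not coherent) risk measure:
\rho_{\mathrm{ent}}(X)
\;=\; \frac{1}{\gamma}\,\log \mathbb{E}\!\left[e^{-\gamma X}\right]
\;=\; \sup_{Q \ll P}\Big\{ \mathbb{E}_Q[-X] - \tfrac{1}{\gamma}\, H(Q \,\|\, P) \Big\},
\qquad H(Q \,\|\, P) \text{ the relative entropy of } Q \text{ with respect to } P .
```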
S0377221715009595
A presale program is popular with manufacturers who wish to reduce the risk posed by uncertain demand. We introduce a new price mechanism in which the manufacturer does not disclose during the presale period the exact regular price of the sale period, although customers are guaranteed that it will be higher than the presale price. As a positive leadtime is largely overlooked in presale models, we analyze the rationale for including one. The numerical results in this paper show that both the specific price mechanism and the positive leadtime have significant effects on the manufacturer’s policy (production quantity, presale price, regular price), the expected profit, and customer behavior. The optimal discount rate should be greater than 50 percent. This conclusion is consistent with existing results of surveys on saturation points. The manufacturer can take advantage of the latest demand information gathered in the presale period to update its policy and increase its expected profit.
The effects of an undisclosed regular price and a positive leadtime in a presale mechanism
S0377221715009613
The paper investigates in a dynamic context the effect of Chief Executive Officer (CEO) bonus and salary payments on banks’ technical efficiency levels. Our methodological framework incorporates the latest developments on the probabilistic approach of efficiency measurement as introduced by Bădin et al. (2012). We apply time-dependent conditional efficiency estimates to analyse a sample of 37 US banks for the period from 2003 to 2012. The empirical evidence reveals a non-linear relationship between CEO bonus and salary payments and banks’ efficiency levels. More specifically it is reported that salary and bonus payments affect differently banks’ technological change and technological catch-up levels. Finally, the empirical evidence suggests that higher salary and bonus payments are not always aligned with higher technical efficiency levels.
CEO compensation and bank efficiency: An application of conditional nonparametric frontiers
S0377221715009625
Collier, Johnson and Ruggiero (2011) deal with the problem of estimating technical efficiency using regression analysis that allows multiple inputs and outputs. This revives an old problem in the analysis of production. In this note we provide an alternative maximum likelihood estimator that addresses these concerns. A Monte Carlo experiment shows that the technique works well in practice. A test for homotheticity, a critical assumption in Collier, Johnson and Ruggiero (2011), is constructed and its behavior is examined using a Monte Carlo simulation and an empirical application to European banking.
Notes on technical efficiency estimation with multiple inputs and outputs
S0377221715009637
A capacity acquisition process is resource dependent when the existing resources impact the valuation of new resources and thereby influence the investment decision. Following a formal analysis of resource dependency, we show that uncertainty and aversion to risks are sufficient conditions for resource dependent capacity acquisition. Distinct from the technology lock-in effects of increasing returns to scale or learning, risk aversion can induce diversity. We develop a stochastic programming framework and solve the optimization problem by decomposing the problem into investment and operational horizon subproblems. Our computational results for an application to the electricity sector show, inter alia, that technology choices between low carbon and fossil fuel technologies, as well as their investment timings, are dependent upon the resource bases of the companies, with scale, debt leverage and uncertainty effects increasing resource dependency. Particularly, we show that resource dependency can significantly impact the optimal investment decisions and we argue that it should be evaluated at both company and policy levels of analysis.
Risk induced resource dependency in capacity investments
S0377221715009649
This paper introduces the Weighted Uncapacitated Planned Maintenance Problem (WUPMP). Based on guaranteed maximum service intervals, the WUPMP seeks a maintenance schedule that minimizes the resulting total fixed and variable costs. One contribution is the derivation of significant polyhedral attributes of its solution space; among them, quasi-integrality, which allows for applying an integral simplex algorithm, is proven. Moreover, we prove strong NP-hardness and propose an exact solution procedure that is polynomial if the number of considered maintenance activities or the number of periods is constant. Since at least one of these restrictions applies to most real-world applications, the algorithm provides practical decision support. Furthermore, the complexity status of various polynomial special cases of the WUPMP is resolved.
The weighted uncapacitated planned maintenance problem: Complexity and polyhedral properties
S0377221715009650
Network formation among individuals constitutes an important part of many OR processes, but relatively little is known about how individuals make their linking decisions in networks. This article provides an investigation of heuristic effects in individual linking decisions for network formation in an incentivized lab-experimental setting. Our mixed logit analysis demonstrates that the inherent complexity of the network linking setting causes individuals’ choices to be systematically less guided by payoff but more guided by simpler heuristic decision cues, and that this shift is systematically stronger for social payoff than for own payoff. Furthermore, we show that the specific complexity factors value transferability and social tradeoff aggravate the former effect. These heuristic effects have important research and policy implications in areas that involve network formation.
Heuristic decision making in network linking
S0377221715009662
This paper deals with optimal control points of M(t)/M/c/c queues with periodic arrival rates and two levels of the number of servers. We use the results of this model to build a Markov decision process (MDP). The problem arose from a case study at the Kelowna General Hospital (KGH). The KGH uses surge beds when the emergency room is overcrowded, which results in two levels for the number of beds. The objective is to minimize a cost function. The findings of this work are not limited to healthcare; they may be used in any stochastic system with fluctuating arrival rates and/or two levels of the number of servers, e.g., call centers, transportation, and internet services. We model the situation and define a cost function which needs to be minimized. To evaluate the cost function, we need transient solutions of the M(t)/M/c/c queue. We modify the fourth-order Runge–Kutta method to calculate the transient solutions and obtain better solutions than the existing Runge–Kutta method. We show that the periodic variation of arrival rates makes the control policies time-dependent and periodic. We also study how fast the policies converge to a periodic pattern and obtain a criterion for independence of policies in two sequential cycles.
Optimal policies of M(t)/M/c/c queues with two different levels of servers
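To make the transient analysis above concrete, the sketch below integrates the Kolmogorov forward equations of an M(t)/M/c/c loss system with the classic fourth-order Runge–Kutta scheme. It is a minimal illustration only: the arrival-rate function, service rate, number of servers and step size are assumed values rather than figures from the case study, and the paper's modified Runge–Kutta variant is not reproduced.

```python
import numpy as np

def rates(t, p, lam, mu, c):
    """Kolmogorov forward equations for an M(t)/M/c/c loss system (states 0..c)."""
    dp = np.zeros(c + 1)
    l = lam(t)
    for k in range(c + 1):
        if k > 0:
            dp[k] += l * p[k - 1]                 # arrival moves k-1 -> k
        if k < c:
            dp[k] += (k + 1) * mu * p[k + 1]      # departure moves k+1 -> k
        out = (l if k < c else 0.0) + k * mu      # total outflow rate from state k
        dp[k] -= out * p[k]
    return dp

def rk4_transient(lam, mu, c, T, h=0.01):
    """Classic fourth-order Runge-Kutta integration of the state probabilities."""
    p = np.zeros(c + 1); p[0] = 1.0               # system starts empty
    t = 0.0
    while t < T:
        k1 = rates(t, p, lam, mu, c)
        k2 = rates(t + h / 2, p + h / 2 * k1, lam, mu, c)
        k3 = rates(t + h / 2, p + h / 2 * k2, lam, mu, c)
        k4 = rates(t + h, p + h * k3, lam, mu, c)
        p = p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return p

# illustrative periodic arrival rate (24-hour cycle), c = 5 beds, service rate mu = 1
probs = rk4_transient(lam=lambda t: 3 + 2 * np.sin(2 * np.pi * t / 24), mu=1.0, c=5, T=48)
print(probs, probs.sum())   # probabilities should still sum to ~1
```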
S0377221715009674
This paper concerns the innovative use of a blend of systems thinking ideas in the ‘Munro Review of Child Protection’, a high-profile examination of child protection activities in England, conducted for the Department for Education. We go ‘behind the scenes’ to describe the OR methodologies and processes employed. The circumstances that led to the Review are outlined. Three specific contributions that systems thinking made to the Review are then described. First, the systems-based analysis and visualisation of how a ‘compliance culture’ had grown up. Second, the creation of a large, complex systems map of current operations and the effects of past policies on them. Third, how the map gave shape to the range of issues the Review addressed and acted as an organising framework for the systemically coherent set of recommendations made. The paper closes with an outline of the main implementation steps taken so far to create a child protection system with the critically reflective properties of a learning organisation, and methodological reflections on the benefits of systems thinking to support organisational analysis.
Blending systems thinking approaches for organisational analysis: Reviewing child protection in England
S0377221715009686
Group ranking problems involve aggregating individual rankings to generate group ranking which represents consolidated group preference. Group ranking problems are commonly applied in real-world decision-making problems; however, supporting a group decision-making process is difficult due to the existence of multiple decision-makers, each with his/her own opinions. Hence, determining how to best aid the group ranking process is an important consideration. This study aims to determine a total ranking list which meets group consensus preferences for group ranking problems. A new group consensus mining approach based on the concept of tournament matrices and directed graphs is first developed; an optimization model involving maximum consensus sequences is then constructed to achieve a total ranking list. Compared to previous methods, the proposed approach can generate a total ranking list involving group consensus preferences. It can also determine maximum consensus sequences without the need for tedious candidate generation processes, while also providing flexibility in solving ranking problems using different input preferences that vary in format and completeness. In addition, consensus levels are adjustable.
A new group ranking approach for ordinal preferences based on group maximum consensus sequences
S0377221715009698
This paper presents a differential evolution (DE) algorithm, namely SLADE, with a self-adaptive strategy and control parameters for unconstrained optimization problems. In SLADE, the population is initialized by symmetric Latin hypercube design (SLHD) to increase the diversity of the initial population. Moreover, the trial vector generation strategy assigned to each target individual is adaptively selected from the strategy candidate pool to match different stages of the evolution according to their previous successful experience. SLADE employs the Cauchy distribution and the normal distribution to update the control parameters CR and F to appropriate values during the evolutionary process. A large number of simulation experiments and comparisons have been carried out on a set of 25 benchmark functions. Experimental results show that SLADE is better than, or at least comparable to, other classic or adaptive DE algorithms, and SLHD is effective for improving the performance of SLADE.
A differential evolution algorithm with self-adaptive strategy and control parameters based on symmetric Latin hypercube design for unconstrained optimization problems
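As an illustration of the population initialization mentioned above, the following sketch builds a symmetric Latin hypercube design in which the levels of rows i and n+1-i sum to n+1 in every dimension. This is one common SLHD construction and is only assumed to resemble the one used in SLADE; the population size, dimension and search bounds are placeholders.

```python
import numpy as np

def slhd(n_points, dim, lower, upper, rng=None):
    """Symmetric Latin hypercube design: levels in rows i and n+1-i sum to n+1."""
    rng = np.random.default_rng(rng)
    assert n_points % 2 == 0, "use an even population size"
    half = n_points // 2
    levels = np.empty((n_points, dim), dtype=int)
    for j in range(dim):
        perm = rng.permutation(half) + 1            # levels 1..n/2 in random order
        flip = rng.random(half) < 0.5               # randomly mirror each pair
        top = np.where(flip, n_points + 1 - perm, perm)
        levels[:half, j] = top
        levels[half:, j] = (n_points + 1 - top)[::-1]   # symmetric counterparts
    unit = (levels - 0.5) / n_points                # centre each level in its cell
    return lower + unit * (upper - lower)           # map to the search box

pop = slhd(20, 5, lower=np.full(5, -100.0), upper=np.full(5, 100.0), rng=0)
print(pop.shape)
```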
S0377221715009704
In order to establish a production plan, an open-pit mine is partitioned into a three-dimensional array of blocks. The order in which blocks are extracted and processed has a dramatic impact on the economic value of the exploitation. Since realistic models have millions of blocks and constraints, the combinatorial optimization problem of finding the extraction sequence that maximizes the profit is computationally intractable. In this work, we present a procedure, based on innovative aggregation and disaggregation heuristics, that allows us to get feasible and nearly optimal solutions. The method was tested on the public reference library MineLib and improved the best known results in the literature in 9 of the 11 instances of the library. Moreover, the overall procedure is very scalable, which makes it a promising tool for large size problems.
Aggregation heuristic for the open-pit block scheduling problem
S0377221715009716
The main objective of this paper is to address, in a continuous-time framework, the issue of using storable commodity futures as vehicles for hedging purposes when, in particular, the convenience yield as well as the market prices of risk evolve randomly over time. Following the martingale route and operating a suitable change of numéraire specific to a constant relative risk aversion (CRRA) utility function, we solve the investor's dynamic optimization program to obtain quasi-analytical solutions for the optimal demands, which can be expressed in terms of two discount bonds (traded and synthetic). Contrary to the existing literature, we explicitly derive the individual optimal proportions invested in the spot commodity, in a discount bond and in the futures contracts, which can be computed in a simple recursive way. We suggest various decompositions allowing an investor to assess the sensitivity of the optimal demands to the state variables and to specify the role played by each risky asset. Empirical evidence shows that the convenience yield has a strong impact on the speculation and hedging positions and that the interaction among time-varying risk premia determines the magnitude and the sign of these positions.
Dynamic speculation and hedging in commodity futures markets with a stochastic convenience yield
S0377221715009728
This paper proposes models and algorithms for the pickup and delivery vehicle routing problem with time windows and multiple stacks. Each stack is rear-loaded and is operated in a last-in-first-out (LIFO) fashion, meaning that when an item is picked up, it is positioned at the rear of a stack. An item can only be delivered if it is in that position. This problem arises in the transportation of heavy or dangerous material where unnecessary handling should be avoided, such as in the transportation of cars between car dealers and the transportation of livestock from farms to slaughterhouses. To solve this problem, we propose two different branch-price-and-cut algorithms. The first solves the shortest path pricing problem with the multi-stack policy, while the second incorporates this policy partly in the shortest path pricing problem and generates additional inequalities to the master problem when infeasible multi-stack routes are encountered. Computational results obtained on instances derived from benchmark instances for the pickup and delivery traveling salesman problem with multiple stacks are reported, and reveal the advantage of incorporating the multi-stack policy in the pricing problem. Instances with up to 75 requests and with one, two and three stacks can be solved optimally within 2 hours of computational time.
Branch-price-and-cut algorithms for the pickup and delivery problem with time windows and multiple stacks
S0377221715009741
The bin packing problem with precedence constraints (BPP-P) is a recently proposed variation of the classical bin packing problem (BPP), which corresponds to a basic model featuring many underlying characteristics of several scheduling and assembly line balancing problems. The formulation builds upon the BPP by incorporating precedence constraints among items, which force successor items to be packed into later bins than their predecessors. In this paper we propose a dynamic programming based heuristic, and a modified exact enumeration procedure to solve the problem. These methods make use of several new lower bounds and dominance rules tailored for the problem in hand. The results of a computational experiment show the effectiveness of the proposed methods, which are able to close all of the previous open instances from the benchmark instance set within very reduced running times.
Procedures for the bin packing problem with precedence constraints
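To illustrate the precedence constraints described above, the sketch below packs items in a topological order and forces each item into a strictly later bin than all of its predecessors. It is a simple greedy baseline written for this summary, not the dynamic-programming heuristic or the exact enumeration proposed in the paper, and the instance data are invented.

```python
from collections import defaultdict

def greedy_bpp_p(sizes, capacity, precedences):
    """Greedy heuristic for the BPP-P: pack items in topological order; every item
    must be placed in a strictly later bin than all of its predecessors."""
    n = len(sizes)
    preds, succs, indeg = defaultdict(list), defaultdict(list), [0] * n
    for a, b in precedences:                    # a must precede b
        preds[b].append(a)
        succs[a].append(b)
        indeg[b] += 1
    order, queue = [], [i for i in range(n) if indeg[i] == 0]   # Kahn's algorithm
    while queue:
        i = queue.pop()
        order.append(i)
        for j in succs[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    bins, residual, item_bin = [], [], {}
    for i in order:
        earliest = max((item_bin[p] + 1 for p in preds[i]), default=0)
        b = earliest
        while b < len(bins) and residual[b] < sizes[i]:
            b += 1
        if b == len(bins):                      # open a new bin if needed
            bins.append([]); residual.append(capacity)
        bins[b].append(i); residual[b] -= sizes[i]; item_bin[i] = b
    return bins

print(greedy_bpp_p([4, 3, 5, 2, 6], capacity=10, precedences=[(0, 2), (1, 2), (2, 4)]))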
S0377221715009753
Data envelopment analysis (DEA) is an approach for measuring the performance of a set of homogeneous decision making units (DMUs). Recently, DEA has been extended to processes with two stages. Two-stage processes usually have undesirable intermediate outputs, which are normally considered to be unrecoverable final outputs. In many real situations like industrial production, however, many first-stage waste products can be used or processed immediately in the second stage to produce new resources, which can be fed back immediately to the first stage. The objective of this paper is to provide an approach for analyzing the reuse of undesirable intermediate outputs in a two-stage production process with a shared resource. Shared resources are input resources that are not only used by both the first and second stages but also have the property that the proportion used by each stage cannot be conveniently split up and allocated to the operations of the two stages. Additive efficiency measures and non-cooperative efficiency measures are proposed to illustrate the overall efficiency of each DMU and the respective efficiency of each sub-DMU. In the non-cooperative framework, a heuristic algorithm is suggested to transform the nonlinear model into a parametric linear one. A real case of the industrial production processes of 30 provincial-level regions in mainland China in 2010 was analyzed to verify the applicability of the proposed approaches.
Two-stage network processes with shared resources and resources recovered from undesirable outputs
S0377221715009765
This paper introduces an original methodology, derived from the robust order-m model, to estimate technical efficiency with spatially autocorrelated data using a nonparametric approach. The methodology aims to identify potential competitors within a subset of productive units that are identified through spatial dependence, thus focusing on peers located in close proximity to the productive unit. The proposed method is illustrated in a simulation setting that verifies the territorial differences between the nonparametric unconditioned and conditioned estimates. A firm-level application to the Italian industrial districts is proposed in order to highlight the ability of the new method to separate the global intangible spatial effect from the efficiency term on real data.
Controlling for spatial heterogeneity in nonparametric efficiency models: An empirical proposal
S0377221715009777
Our paper presents a dynamic model of entrepreneurial venture financing under uncertainty based on option exercise games between an entrepreneur and a venture capitalist (VC). In particular, we analyze the impact of multi-staged financing and both economic and technological uncertainty on optimal contracting in the context of VC-financing. Our novel approach combines compound option pricing with sequential non-cooperative contracting, allowing us to determine whether renegotiation will improve the probability of coming to an agreement and proceed with the venture. It is shown that both sources of uncertainty positively impact the VC-investor's optimal equity share. Specifically, higher uncertainty leads to a larger stake in the venture, and renegotiation may result in a dramatic shift of control rights in the venture, preventing the venture from failure. Moreover, given ventures with low volatility, situations might occur where the VC-investor loses his first-mover advantage. Based on a comparative-static analysis, new testable hypotheses for further empirical studies are derived from the model.
Venture capital, staged financing and optimal funding policies under uncertainty
S0377221715009789
In container liner shipping, bunker cost is an important component of the total operating cost, and bunker consumption increases dramatically when the sailing speed of containerships increases. A higher speed implies higher bunker consumption (higher bunker cost), shorter transit time (lower inventory cost), and larger shipping capacity per ship per year (lower ship cost). Therefore, a container shipping company aims to determine the optimal sailing speed of containerships in a shipping network to minimize the total cost. We derive analytical solutions for sailing speed optimization on a single ship route with a continuous number of ships. The advantage of analytical solutions lies in that it unveils the underlying structure and properties of the problem, from which a number of valuable managerial insights can be obtained. Based on the analytical solution and the properties of the problem, the optimal integer number of ships to deploy on a ship route can be obtained by solving two equations, each in one unknown, using a simple bi-section search method. The properties further enable us to identify an optimality condition for network containership sailing speed optimization. Based on this optimality condition, we propose a pseudo-polynomial-time solution algorithm that can efficiently obtain an epsilon-optimal solution for sailing speed of containerships in a liner shipping network.
Fundamental properties and pseudo-polynomial-time algorithm for network containership sailing speed optimization
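The trade-off described above can be illustrated with a toy single-route model in which bunker consumption per unit time grows with the cube of speed, so fuel cost per round trip grows with the square of speed while ship and inventory costs fall with speed. The cost function, its parameter values and the cubic-consumption assumption below are illustrative assumptions and not the paper's exact formulation, but they show why bisection on the first-order condition suffices for a convex speed cost.

```python
# Illustrative weekly-cost model (an assumption, not the paper's formulation):
#   fuel cost   ~ k * D * v^2                    (cubic consumption per hour, D/v hours at sea)
#   ship cost   ~ c_ship * (D/v + t_port) / 168  (ships needed for a weekly service)
#   inventory   ~ c_inv * D / v                  (cargo value tied up in transit)
D, t_port = 8000.0, 48.0              # nautical miles per round trip, port hours
k, c_ship, c_inv = 0.002, 5.0e5, 30.0 # made-up cost coefficients

def weekly_cost(v):
    return k * D * v**2 + (c_ship / 168.0 + c_inv) * D / v + c_ship * t_port / 168.0

def d_cost(v):
    return 2 * k * D * v - (c_ship / 168.0 + c_inv) * D / v**2

# weekly_cost is convex in v > 0, so bisection on its derivative finds the optimum
lo, hi = 8.0, 25.0                    # search interval in knots
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if d_cost(mid) < 0 else (lo, mid)
print(f"optimal speed ~ {lo:.2f} knots, weekly cost ~ {weekly_cost(lo):,.0f}")
```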
S0377221715009790
We address in this article the multi-commodity pickup-and-delivery traveling salesman problem, which is a routing problem for a capacitated vehicle that has to serve a set of customers that provide or require certain amounts of m different products. Each customer must be visited exactly once by the vehicle, and it is assumed that a unit of a product collected from a customer can be supplied to any other customer that requires that product. Each product is allowed to have several sources and several destinations. The objective is to minimize the total travel distance. We propose a hybrid three-stage heuristic approach that combines a procedure to generate initial solutions with several local search operators and shaking procedures, one of them based on solving an integer programming model. Extensive computational experiments on randomly generated instances with up to 400 locations and 5 products show the effectiveness of the approach.
A hybrid heuristic approach for the multi-commodity pickup-and-delivery traveling salesman problem
S0377221715009807
We consider the One Warehouse Multi-Retailer (OWMR) problem with deterministic time-varying demand in the case where shortages are allowed. Demand may be either backlogged or lost. We present a simple combinatorial algorithm to build an approximate solution from a decomposition of the system into single-echelon subproblems. We establish that the algorithm has a performance guarantee of 3 for the OWMR with backlog under mild assumptions on the cost structure. In addition, we improve this guarantee to 2 in the special case of the Joint-Replenishment Problem (JRP) with backlog. As a by-product of our approach, we show that our decomposition provides a new lower bound of the optimal cost. A similar technique also leads to a 2-approximation for the OWMR problem with lost-sales. In all cases, the complexity of the algorithm is linear in the number of retailers and quadratic in the number of time periods, which makes it a valuable tool for practical applications. To the best of our knowledge, these are the first constant approximations for the OWMR with shortages.
Constant approximation algorithms for the one warehouse multiple retailers problem with backlog or lost-sales
S0377221715009819
This paper describes a case study for a medical diagnostic laboratory service provider to model the behavior of patients when choosing a patient service centre for their medical tests and to estimate future demand volume. A tool developed based on our methodology allows the management of the diagnostic services to experiment with locations and capacities for locating or relocating service centres. In addition to the focal firm, the methodology considers the impact of decisions on another service provider and hospital laboratories located in the same area. The methodology identifies the most significant service centre attractiveness factors. Our models are validated from different perspectives and show good predictive capability. This case study is used to draw a number of lessons for applying these types of models to other similar services in order to assist other applications.
Patient choice analysis and demand prediction for a health care diagnostics company
S0377221715009820
In response to strict regulations and increased environmental awareness, firms are striving to reduce the global warming impact of their operations. Cold supply chains have high levels of greenhouse gas emissions due to the high energy consumption and refrigerant gas leakages. We model the cold supply chain design problem as a mixed-integer concave minimization problem with dual objectives of minimizing the total cost - including capacity, transportation, and inventory costs - and the global warming impact. Demand is modeled as a general distribution, whereas inventory is managed using a known policy but without explicit formulas for the inventory cost and maximum level functions. We propose a novel hybrid simulation-optimization approach to solve the problem. Lagrangian decomposition is used to decompose the model into an integer programming subproblem and sets of single-variable concave minimization subproblems that are solved using simulation-optimization. We provide closed-form expressions for the Lagrangian multipliers so that the Lagrangian bound is obtained in a single iteration. Furthermore, since the solution of the integer subproblem is feasible to the original problem, an upper bound is obtained immediately. To close the optimality gap, the Lagrangian approach is embedded in a branch-and-bound framework. The approach is verified through extensive numerical testing on two realistic case studies from different industries, and some managerial insights are drawn. Notation: annual fixed cost for opening a warehouse; annual fixed CO2-equivalent emissions from a warehouse; unit shipping cost from a plant to a warehouse; expected annual product demand from a retailer; CO2-equivalent emissions for shipping a product between a plant and a warehouse; average CO2 emissions from shipping a product between a plant and a warehouse; average HFC gas leakage for shipping a product between a plant and a warehouse; volume-dependent capacity cost function; annual CO2-equivalent emissions from a warehouse as a function of its volume; global-warming potential of an HFC gas; number of units shipped of a product; number of shipments using a truck type; annual CO2-equivalent emissions for serving a retailer from a warehouse; annual cost of serving a customer from a warehouse; maximum inventory function; inventory cost function; product volume; volumetric capacity of a truck type; binary decision variables for assigning retailers to warehouses; number of products shipped from a plant to a warehouse; binary decision variables for locating warehouses.
Cold supply chain design with environmental considerations: A simulation-optimization approach
S0377221715009832
In this paper we discuss the polyhedral structure of the integer single node flow set with two possible values for the upper bounds on the arc flows. Such mixed integer sets arise as substructures in complex mixed integer programs for real application problems. This work builds on results for the integer single node flow polytope with two arcs given by Agra and Constantino (2006a). Valid inequalities are extended to a new family, the lifted Euclidean inequalities, and a complete description of the convex hull is given. All the coefficients of the facet-defining inequalities can be computed in polynomial time. We report on computational experiments for three problems: an inventory distribution problem, a facility location problem and a multi-item production planning model.
Lifted Euclidean inequalities for the integer single node flow set with upper bounds
S0377221715009844
The annual dairy transportation problem involves designing the routes that collect milk from farms and deliver it to processing plants. The demands of these plants can change from one week to the next, but the collection is fixed by contract and must remain the same throughout the year. While the routes are currently designed using the historical average demand from the plants, we show that including the information about plant demands leads to significant savings. We propose a two-stage method based on an adaptive large neighborhood search (ALNS). The first phase solves the transportation problem, and the second phase optimizes the assignment of plants. An additional analysis based on period clustering is conducted to speed up the solution process.
A two-stage solution method for the annual dairy transportation problem
S0377221715009856
As one of the most challenging combinatorial optimization problems in scheduling, the resource-constrained project scheduling problem (RCPSP) has attracted numerous scholars’ interest, resulting in considerable research in the past few decades. However, most of these papers focused on the single-objective RCPSP; only a few concentrated on the multi-objective resource-constrained project scheduling problem (MORCPSP). Inspired by a procedure called electromagnetism (EM), which can help a generic population-based evolutionary search algorithm to obtain good results for the single-objective RCPSP, in this paper we attempt to extend EM and integrate it into three reputable state-of-the-art multi-objective evolutionary algorithms (MOEAs), i.e., the non-dominated sorting based multi-objective evolutionary algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D), for the MORCPSP. We aim to optimize makespan and total tardiness. Empirical analyses based on standard benchmark datasets are conducted by comparing the versions that integrate EM into NSGA-II, SPEA2 and MOEA/D with the original algorithms without EM. The results demonstrate that EM can improve the performance of NSGA-II and SPEA2, especially for NSGA-II.
Integration of electromagnetism with multi-objective evolutionary algorithms for RCPSP
S0377221715009868
The objective of this paper is to propose an approach to support group multicriteria classification. The approach is composed of three phases. The first phase exploits the knowledge provided by each decision maker to individually approximate the decision classes using rough approximation. The second phase seeks to combine the outputs of the individual approximation phase into a collective decision table by using an appropriate aggregation procedure. The third phase uses the collective decision table in order to infer a set of collective decision rules, which synthesize the judgements and perspectives of the different decision makers and permit the classification of all decision objects. The proposed approach relies on the Dominance-based Rough Set Approach (DRSA), which is used at two different levels. First, the DRSA is used during the first phase to approximate the input data relative to each decision maker. Second, the DRSA is used during the third phase to approximate the collective decision table and generate the collective decision rules. This paper presents the theoretical foundation of the proposed approach, three case studies using real-world data and a comparative study of recent similar proposals.
Dominance-based rough set approach for group decisions
S037722171500987X
This note takes up a shortcoming of Coelli et al.’s (2007) popular environmental efficiency measure and its extension to economic-environmental trade-off analysis (see Van Meensel et al. (2010)), namely that they do not reward emission reductions by pollution control. A new environmental efficiency measure that overcomes this issue and - similar to Coelli et al.’s efficiency measure - is in line with the materials balance principle is proposed and further decomposed into “technical environmental efficiency” and “material and nonmaterial allocative environmental efficiencies”. The new efficiency measure collapses into Coelli et al.’s efficiency measure if none of the considered Decision Making Units control pollutants. A numerical example using Data Envelopment Analysis is provided to further explore the properties of the new efficiency measure.
Environmental efficiency measurement and the materials balance condition reconsidered
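For readers unfamiliar with the Data Envelopment Analysis machinery referred to above, the sketch below solves the standard input-oriented CCR envelopment linear program for each decision making unit. It computes only the conventional efficiency score, not the materials-balance-consistent measure proposed in the note, and the single-input/single-output data are made up.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Standard input-oriented CCR efficiency of DMU o (not the paper's extended measure).
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Decision vector: [theta, lambda_1..n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0                       # minimize theta
    A_ub = np.zeros((m + s, 1 + n)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]; A_ub[:m, 1:] = X              # X @ lam <= theta * x_o
    A_ub[m:, 1:] = -Y;      b_ub[m:] = -Y[:, o]           # Y @ lam >= y_o
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0, 4.0, 3.0, 5.0]])   # one input, four DMUs (illustrative data)
Y = np.array([[1.0, 2.0, 3.0, 2.0]])   # one output
print([round(ccr_input_efficiency(X, Y, o), 3) for o in range(4)])   # ~[0.5, 0.5, 1.0, 0.4]
```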
S0377221715009881
In this paper we consider the problem of packing unequal circles in a fixed size circular container, where the objective is to maximise the value of the circles packed. We consider two different objectives: maximise the number of circles packed; maximise the area of the circles packed. For the particular case when the objective is to maximise the number of circles packed we prove that the optimal solution is of a particular form. We present a heuristic for the problem based upon formulation space search. Computational results are given for a number of publicly available test problems involving the packing of up to 40 circles. We also present computational results, for test problems taken from the literature, relating to packing both equal and unequal circles.
A formulation space search heuristic for packing unequal circles in a fixed size circular container
S0377221715009893
In order to limit climate change from greenhouse gas emissions, governments have introduced renewable portfolio standards (RPS) to incentivise renewable energy production. While the response of industry to exogenous RPS targets has been addressed in the literature, setting RPS targets from a policymaker’s perspective has remained an open question. Using a bi-level model, we prove that the optimal RPS target for a perfectly competitive electricity industry is higher than that for a benchmark centrally planned one. Allowing for market power by the non-renewable energy sector within a deregulated industry lowers the RPS target vis-à-vis perfect competition. Moreover, to our surprise, social welfare under perfect competition with RPS is lower than that when the non-renewable energy sector exercises market power. In effect, by subsidising renewable energy and taxing the non-renewable sector, RPS represents an economic distortion that over-compensates damage from emissions. Thus, perfect competition with RPS results in “too much” renewable energy output, whereas the market power of the non-renewable energy sector mitigates this distortion, albeit at the cost of lower consumer surplus and higher emissions. Hence, ignoring the interaction between RPS requirements and the market structure could lead to sub-optimal RPS targets and substantial welfare losses.
Are targets for renewable portfolio standards too low? The impact of market structure on energy policy
S037722171500990X
The existence of positive and negative externalities ought to be considered in a productivity analysis in order to obtain unbiased measures of efficiency. In this research we present an additive-style data envelopment analysis model that considers the production of both negative and positive externalities and permits a limited increase in input utilisation where relevant. The directional economic environmental distance (DEED) function is a unified approach based on a linear program that evaluates the relative inefficiency of the units under examination with respect to a unique reference technology. We discuss the impact of disposability assumptions in depth and demonstrate how different versions of the DEED model improve on models presented in the literature to date.
Accounting for externalities and disposability: A directional economic environmental distance function
S0377221715009911
In this paper we propose a general methodology for solving a broad class of continuous multifacility location problems, in any dimension and with ℓτ-norms, via two different approaches: (1) a new second-order cone mixed-integer programming formulation, and (2) a sequence of semidefinite programs that converges to the solution of the problem, each of these relaxed problems being solvable with SDP solvers in polynomial time.
Continuous multifacility ordered median location problems
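As a minimal illustration of how the norm terms in such location problems become second-order cone constraints, the sketch below solves a single-facility weighted Weber problem with Euclidean distances via an epigraph reformulation. This is a drastic simplification of the multifacility ordered median setting (no ordered weights, no facility-to-client assignment), and the demand points and weights are arbitrary.

```python
import cvxpy as cp
import numpy as np

# Simplified illustration (not the paper's ordered-median model): locate one new
# facility minimizing the weighted sum of Euclidean distances to fixed demand points.
rng = np.random.default_rng(1)
A = rng.uniform(0, 10, size=(8, 2))      # demand points
w = rng.uniform(1, 3, size=8)            # demand weights

x = cp.Variable(2)                       # facility coordinates
d = cp.Variable(8)                       # epigraph variable for each distance term
cons = [cp.norm(x - A[i], 2) <= d[i] for i in range(8)]   # second-order cone constraints
prob = cp.Problem(cp.Minimize(w @ d), cons)
prob.solve()                             # handled by any SOCP-capable solver
print(x.value, prob.value)
```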
S0377221715009923
We discuss customers’ inter-temporal repeat-purchasing behavior when their valuations for a product are subject to transitory satiety. The satiety can be increased by purchases and gradually decays in between. Repeat-purchasing customers optimize their inter-temporal purchasing schedules, including the purchasing time and the purchasing quantity, to maximize their time-average payoffs, and the monopoly seller chooses the optimal cyclic pricing policy to maximize its time-average profit. We derive the conditions under which firms earn the same profit under the cyclic pricing policy as under the fixed-price pricing policy. When the inter-temporal price discrimination is more profitable, we show that the optimal pricing policy depends on the relation between customer valuation and customer satiety. Our results imply that overlooking the effect of transitory satiety tends to cause firms to underprice. We also consider extensions in which there exists a fixed purchasing cost or in which purchasing and consumption decisions are separate.
Inter-temporal price discrimination and satiety-driven repeat purchases
S0377221715009935
The inventory routing problem (IRP) is a very challenging optimization task that couples two of the most important components of supply chain management, i.e., inventory control and transportation. Routes of vehicles are to be determined to repeatedly resupply multiple customers with constant demand rates from a single depot. We alter this basic IRP setting by two aspects: (i) only cyclic tours are allowed, i.e., each vehicle continuously tours its dedicated route, and (ii) all customers are located along a line. Both characteristics occur, for instance, in liner shipping (when feeder ships service inland ports along a stream) and in facility logistics (when tow trains deliver part bins to the stations of an assembly line). We formalize the resulting problem setting, identify NP-hard as well as polynomially solvable cases, and develop suited solution procedures.
Cyclic inventory routing in a line-shaped network
S0377221715009947
The sequential ordering problem (SOP) is the generalisation of the asymmetric travelling salesman problem in which there are precedence relations between pairs of nodes. Hernández & Salazar introduced a multi-commodity flow (MCF) formulation for a generalisation of the SOP in which the vehicle has a limited capacity. We strengthen this MCF formulation by fixing variables and adding valid equations. We then use polyhedral projection, together with some known results on flows, cuts and metrics, to derive new families of strong valid inequalities for both problems. Finally, we give computational results, which show that our findings yield good lower bounds in practice.
Stronger multi-commodity flow formulations of the (capacitated) sequential ordering problem
S0377221715009959
With the rapid growth of global trade, used durable goods from wealthy countries increasingly find their way into the secondary market of less wealthy countries. Exporting used products to a physically separate market not only removes cannibalization for new products at home, but also fetches additional revenue. In this paper, we investigate the implications of exporting used products to international secondary markets in the durable goods industry. We find that such a practice may significantly stimulate new product lease on the home market, an effect in which market attractiveness and product quality are mutually reinforcing. We discover that removing cannibalization pressure is more of a priority than generating additional revenue when exporting used products. If the export is carried out by an agent who exports used products bought from the OEM (Original Equipment Manufacturer), we observe the disadvantage of double marginalization in a channel structure, which slows down export, causes quantity distortion, and also reduces the effectiveness of government stimulus. However, if the agent and the OEM set the export price based on supply and demand equilibrium, the quantity distortion is reduced. One special characteristic of used products trade across borders is the involvement of governments on both sides of the border. The government measures include penalties imposed on aging durable goods and trade barriers. We find that legislation penalizing used products on the domestic market can stimulate export, but it does not have the intended effect of stimulating new products produced at home. The channel structure worsens the problem.
Durable goods leasing in the presence of exporting used products to an international secondary market
S0377221715010164
We consider the problem of matching capacity and demand in build-to-order manufacturing. Our objective is to assess the ability of upgrade auctions to efficiently allocate excess option capacity to customer orders. We model the Stackelberg game of one manufacturer and a set of loss-averse customers. The manufacturer's decision to extend the existing fixed-price channel by applying upgrade auctions is evaluated in terms of contribution margin and planning reliability. When the manufacturer offers an option in the upgrade auction, customers seek to maximize utility by buying the option instantly at the fixed price or leaving a bid in the hope of a better auction price. Their optimal participation and bidding strategy is explained by a gain-loss utility model with two-dimensional preferences and expectation-based reference levels. Due to missing information on capacity and bids, customers’ decisions are based on estimations of auction and winning probability. We conclude that applying upgrade auctions can significantly improve contribution margin while maintaining planning reliability. In particular, loss aversion prevents potential fixed-price buyers from bidding in the auction. The results suggest that further effort to implement upgrade auctions in build-to-order manufacturing will likely pay off.
Upgrade auctions in build-to-order manufacturing with loss-averse customers
S0377221715010176
Inventory inaccuracy is the mismatch between the recorded inventory and the physical inventory, which is severe and widespread in industry. A few studies have investigated the bullwhip effect with the existence of inventory inaccuracy. The development of information technologies has provided companies with access to accurate inventory information in real time. This will surely affect the bullwhip effect and supply chain costs. The aim of this paper is to build an analytical model to systematically investigate these effects. We consider a retailer–manufacturer supply chain in which the retailer faces inventory shrinkage, which is the main cause of inventory inaccuracy. Both the situations with accurate real-time inventory information, i.e., high-quality information, and the situation with statistical inventory information, i.e., low-quality information are studied. We examine the relationships between the bullwhip effect, information distortion and supply chain costs with different levels of information quality. The results of our analysis enrich the existing literature on the bullwhip effect. First, we show that the bullwhip effect is magnified along the chain when higher-quality information on inventory shrinkage – specifically real-time rather than statistical data – is obtained. Second, we show that the magnification of the bullwhip effect does not necessarily result in higher costs. Third, we demonstrate that higher-quality information increases the benefits of information sharing. Our paper provides new insights into the causes, extent and economic dynamics of order variability in the presence of inventory inaccuracy, and may thus suggest ways of more effectively managing the bullwhip effect and inventory.
Bullwhip effect and supply chain costs with low- and high-quality information on inventory shrinkage
S0377221715010188
We consider the one-dimensional skiving stock problem, which is strongly related to the dual bin packing problem: find the maximum number of items with minimum length L that can be constructed by connecting a given supply of m ∈ ℕ smaller item lengths l_1, …, l_m with availabilities b_1, …, b_m. For this optimization problem, we present three new models (the arcflow model, the onestick model, and a model of Kantorovich type) and investigate their relationships, especially regarding their respective continuous relaxations. To this end, numerical computations are provided. As a main result, we prove the equivalence between the arcflow model, the onestick approach and the existing pattern-oriented standard model. In particular, this equivalence is shown to hold for the corresponding continuous relaxations, too.
Integer linear programming models for the skiving stock problem
S037722171501019X
We analyze a time-fenced planning system where both expediting and canceling are allowed inside the time fence, but only with a penalty. Previous research has allowed only for the case of expediting inside the time fence and has overlooked the opportunity for additional improvement by also allowing for cancelations. Some researchers also have found that for traditional time-fenced models, the choice of the more complex stochastic linear programming approach versus the simpler deterministic approach is not justified. We formulate both the deterministic and stochastic problems as dynamic programs and develop analytic bounds that limit the search space (and reduce the complexity) of the stochastic approach. We run extensive simulations and numerical experiments to understand better the benefit of adding cancelation and to compare the performance of the stochastic model with the more common deterministic model when they are employed as heuristics in a rolling-horizon setting. Across all experiments, we find that allowing expediting (but not canceling) lowered costs by 11.3% using the deterministic approach, but costs were reduced by 27.8% if both expediting and canceling are allowed. We find that the benefit of using the stochastic model versus the deterministic model varies widely across demand distributions and levels of recourse—the ratio of stochastic average costs to deterministic average costs ranged from 43.3% to 78.5%.
Fenced in? Stochastic and deterministic planning models in a time-fenced, rolling-horizon scheduling system
S0377221715010206
The ever increasing product variety in grocery retailing is in conflict with the limited shelf space. Managing the assortment and regular order quantity for perishable and non-perishable products is therefore a core task for retailers. Retailers want to maximize profit by considering potential revenues, purchase costs, diminishing profits for products and opportunity costs for unfulfilled demand when determining assortment sizes and order volumes to meet stochastic consumer demand. We consider a newsvendor-based decision model that jointly optimizes assortment size and the associated order volumes for listed items. The model considers out-of-assortment (OOA) and out-of-stock (OOS) substitution effects. The model is structurally equivalent to the model described in Kök and Fisher (2007), which is solved there via a heuristic procedure. We develop an optimal solution procedure and a time-efficient heuristic that solves the model. The suggested heuristic is applicable to practically relevant, large-scale problem instances. Numerical tests reveal that the heuristic procedure produces close-to-optimal solutions and outperforms the Kök and Fisher heuristic with regard to both computational time and solution quality. A simulation study with discrete demand justifies our use of continuous demand distributions. The numerical analyses show that considering substitution effects in a decision-based model has a significant impact on the total profit and solution structure. Further managerial insights are presented with regard to assortment heterogeneity when shelf size, substitution and demand variability are varied.
An efficient algorithm for capacitated assortment planning with stochastic demand and substitution
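The newsvendor logic underlying the model above can be illustrated, for a single item without substitution, by the classical critical-fractile quantity. The price, cost, salvage value and normal demand parameters below are placeholders; the paper's joint assortment-and-quantity optimization with OOA/OOS substitution is not reproduced.

```python
from scipy.stats import norm

# Building block only: the single-item newsvendor quantity without substitution.
price, cost, salvage = 10.0, 6.0, 2.0        # illustrative economics
mu, sigma = 100.0, 30.0                      # normally distributed demand

underage = price - cost                      # margin lost per unit of unmet demand
overage = cost - salvage                     # loss per unsold unit
critical_fractile = underage / (underage + overage)
q_star = norm.ppf(critical_fractile, loc=mu, scale=sigma)
print(f"critical fractile = {critical_fractile:.3f}, order quantity = {q_star:.1f}")
```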
S0377221715010218
Systems that require maintenance typically consist of multiple components. In case of economic dependencies, maintaining several of these components simultaneously can be more cost efficient than performing maintenance on each component separately, while in case of redundancy, postponing maintenance on some failed components is possible without reducing the availability of the system. Condition-based maintenance (CBM) is known as a cost-minimizing strategy in which the maintenance actions are based on the actual condition of the different components. No research has been performed yet on clustering CBM tasks for systems with both economic dependencies and redundancy. We develop a dynamic programming model to find the optimal maintenance strategy for such systems, and show numerically that it can indeed considerably outperform previously considered policies (failure-based, age-based, block replacement, and more restricted (opportunistic) CBM policies). Moreover, our numerical investigation provides insights into the optimal policy structure. Notation: binary variable indicating whether or not a replacement (preventive or corrective) should be performed on component i; deterioration parameter of component i; cost c_c^i of a corrective replacement on component i; cost c_p^i of a preventive replacement on component i, with c_p^i < c_c^i; fixed set-up cost for maintenance; optimum cumulative cost from period t on; pdf of the deterioration increments of component i; number of components in the system that need to function for the system to function; fixed failure level of component i; number of components N in the system; penalty for a system failure; system reliability, given deterioration levels x_1, x_2, …, x_N for components 1, 2, …, N, respectively; reliability of component i, given a deterioration level x_i; set of all component labels, S_N := {1, 2, …, N}; condition X_t^i of component i at time t, with X_0^i = 0; condition X̄_t^i of component i after possible maintenance has been performed; increase in deterioration Y_t^i on component i during period t, with X_{t+1}^i = X̄_t^i + Y_t^i.
Clustering condition-based maintenance for systems with redundancy and economic dependencies
S037722171501022X
Faced with a short turn-around request to characterise several hand-held mine detection systems, the authors developed and applied an analytical methodology that was sufficiently robust and pragmatic to satisfy the needs of the various military stakeholders involved, yet appropriately rigorous and transparent to bear external scrutiny. The methodology can be applied in situations where data collection and analysis must be done quickly while preserving scientific veracity. For mine detection systems, considerable uncertainties existed that needed to be characterised, including application, location, operational situation and the involvement of human operators. Constraints on the time and expertise available implied there would be difficulties ensuring a sufficient number of trials could be conducted to levels of statistical confidence that would assure appropriate credibility across all of the parameters. This problem was effectively rectified through experimental design and by heavily involving the sponsor stakeholders and subject matter experts throughout the study, thus boosting the credibility and acceptance of its results. The process followed involved: liaison with the sponsor, identification of critical issues, measurements in field environments, reporting mechanisms and discussion on implementation and further development. The critical focus was operational capability rather than specific equipment characteristics. A robust data presentation technique was developed to deal with the complexities associated with the different needs of multiple stakeholders. This technique enabled the results to be reviewed from different stakeholders’ perspectives, the formation of a common understanding and the results to be reusable in future analyses.
A data collection and presentation methodology for decision support: A case study of hand-held mine detection devices
S0377221715010231
Supply Chain Forecasting (SCF) goes beyond the operational task of extrapolating demand requirements at one echelon. It involves complex issues such as supply chain coordination and sharing of information between multiple stakeholders. Academic research in SCF has tended to neglect some issues that are important in practice. In areas of practical relevance, sound theoretical developments have rarely been translated into operational solutions or integrated in state-of-the-art decision support systems. Furthermore, many experience-driven heuristics are increasingly used in everyday business practices. These heuristics are not supported by substantive scientific evidence; however, they are sometimes very hard to outperform. This can be attributed to the robustness of these simple and practical solutions such as aggregation approaches for example (across time, customers and products). This paper provides a comprehensive review of the literature and aims at bridging the gap between theory and practice in the existing knowledge base in SCF. We highlight the most promising approaches and suggest their integration in forecasting support systems. We discuss the current challenges both from a research and practitioner perspective and provide a research and application agenda for further work in this area. Finally, we make a contribution in the methodology underlying the preparation of review articles by means of involving the forecasting community in the process of deciding both the content and structure of this paper.
Supply chain forecasting: Theory, practice, their gap and the future
S0377221715010243
In the asset allocation problem, the distribution of the assets is usually assumed to be known in order to identify the optimal portfolio. In practice, we need to estimate their distribution. The estimates are not necessarily accurate, and this is known as the uncertainty problem. Many studies show that most people are uncertainty averse, and this affects their investment strategy. In this article, we consider risk and information uncertainty under a common asset allocation framework. The effects of risk premium and covariance uncertainty are demonstrated via the worst scenario in a set of measures generated by a relative entropy constraint. The nature of the uncertainty and its impacts on the asset allocation are discussed.
Optimal asset allocation: Risk and information uncertainty
S0377221715010255
In this paper we propose a quantile-based risk measure which is defined using the modified loss distribution according to the decision maker’s risk and loss aversion. The properties related to different classes of disutility functions are established. A portfolio selection model in the Mean-Risk framework is proposed and equivalent formulations of the model generating the same efficient frontier are given. The advantages of this approach are investigated using real world data from NYSE. The differences between the efficient frontier of the proposed model and the classical Mean-Variance and Mean-CVaR are quantified and interpreted. Extensive experiments show that the efficient portfolios obtained by using the proposed model exhibit lower risk levels and an increased satisfaction compared to the other two Mean-Risk models.
Portfolio optimization with disutility-based risk measure
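As background for the Mean-CVaR benchmark mentioned above, the following sketch computes empirical Value-at-Risk and Conditional Value-at-Risk from simulated scenario losses. The scenario generator and confidence level are assumptions, and the paper's disutility-modified risk measure is not implemented here.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR: CVaR is the mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return var, tail.mean()

rng = np.random.default_rng(7)
scenario_returns = rng.normal(0.0005, 0.012, size=5000)   # simulated daily returns
portfolio_losses = -scenario_returns                      # losses are negative returns
var95, cvar95 = cvar(portfolio_losses, alpha=0.95)
print(f"VaR95 = {var95:.4f}, CVaR95 = {cvar95:.4f}")
```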
S0377221715010267
In this paper we present a financial economy in the case when the financial volumes depend on time and on the expected solution, in order to take into account the influence of the expected equilibrium distribution for assets and liabilities on the investments on all financial instruments. We derive the quasi-variational formulation which characterizes the equilibrium of the dynamical financial model. The main result of the paper is a general existence theorem for quasi-variational inequalities under general assumptions, which is also applied to the financial model. We also recall some concepts on the infinite dimensional duality and study some numerical examples.
New existence theorems for quasi-variational inequalities and applications to financial models
S0377221715010279
We study the impact of foresight in a transboundary pollution game; i.e., the ability of a country to control its emissions taking into account the relationship between current emissions and future levels of pollution, and thus future damages. We show that when all countries are myopic, i.e., choose the ‘laissez-faire’ policy, their payoffs are smaller than when all countries are farsighted, i.e., non-myopic. However, in the case where one myopic country becomes farsighted, we show that the welfare impact of foresight on that country is ambiguous. Foresight may be welfare reducing for the country that acquires it. This is due to the reaction of the other farsighted countries to that country’s acquisition of foresight. The country that acquires foresight reduces its emissions while the other farsighted countries extend their emissions. The overall impact on total emissions is ambiguous. Moreover, our results suggest that incentive mechanisms that involve a very small (possibly zero) present value of transfers can play an important role in inducing a country to adopt a farsighted behavior and diminishing the number of myopic countries. These incentives would compensate the myopic country for the short-run losses incurred from the acquisition of foresight and can be reimbursed by that country from the gains from foresight that it enjoys in the long run.
The impact of foresight in a transboundary pollution game
S0377221715010280
We study a dual-mode production planning problem with emission constraints, where a manufacturer produces a single product with two optional technologies. The manufacturer is equipped with the regular and green technologies to comply with emission limitations, and either one or both can be adopted for production. We first investigate the problem under a mandatory emission-cap policy and then extend it to consider emission trading under an emission cap-and-trade scheme. Based on the structural properties of the problem and a multi-level decomposition approach, a polynomial dynamic programming algorithm is developed to solve the models optimally. Our analysis shows that the manufacturer should only use a mix of both technologies when the emission cap is a binding constraint. Numerical results show that the manufacturer’s decisions and benefits are significantly affected by the emission cap under the mandatory emission-cap policy, especially when the cap is at a relatively low level. However, the carbon price may not remarkably affect the manufacturer’s cost because its influence could be abated through the flexible technology switch under the emission cap-and-trade scheme.
Dual-mode production planning for manufacturing with emission constraints
S0377221715010498
The fuzzy rating scale was introduced as a tool to measure intrinsically ill-defined/imprecisely-valued attributes in a free way. Thus, users do not have to choose a value from a class of prefixed ones (as happens when a fuzzy semantic representation of a linguistic term set is considered), but simply draw the fuzzy number that best represents their valuation or measurement. The freedom inherent to the fuzzy rating scale process allows users to collect data with a high level of richness, accuracy, expressiveness, diversity and subjectivity, which is especially valuable for statistical purposes. This paper presents an inferential approach to analyze data obtained by using the fuzzy rating scale. More concretely, the paper focuses on testing different hypotheses about means, on the basis of a sound methodology that has been established in recent years. All the procedures that have been developed to this aim will be presented in an algorithmic way adapted to the usual generic fuzzy rating scale-based data, and they will be illustrated by means of a real-life example.
Hypothesis testing for means in connection with fuzzy rating scale-based data: algorithms and applications
S0377221715010668
Despite the importance and value of the pharmaceutical market, a significant portion of procurement spending, including spending on pharmaceuticals, is lost. Coupling poor and reactive management practices with the inevitable national drug shortages leads to a lack of medicines, causing patient suffering and direct life-or-death consequences. In this paper, we propose a stochastic model to find the optimal inventory policy for a healthcare facility to proactively minimize the effect of drug shortages in the presence of uncertain disruptions and demand.
Mitigating the impact of drug shortages for a healthcare facility: An inventory management approach
S037722171501067X
We introduce and solve the Vehicle Routing Problem with Simultaneous Pick-ups and Deliveries and Two-Dimensional Loading Constraints (2L-SPD). The 2L-SPD model covers cases where customers raise delivery and pick-up requests for transporting non-stackable rectangular items. 2L-SPD belongs to the class of composite routing-packing optimization problems. However, it is the first such problem to consider bi-directional material flows dictated in practice by reverse logistics policies. The aspect of simultaneously satisfying deliveries and pick-ups has a major impact on the underlying loading constraints: feasible loading patterns must be identified for every arc traveled in the routing plan. This implies that 2L-SPD generalizes previous routing problem variants with two-dimensional loading constraints which call for one feasible loading per route. From a managerial perspective, the simultaneous service of deliveries and pick-ups may bring substantial cost-savings, but the generalized loading constraints are very hard to tackle in reasonable computational times. To this end, we propose an optimization framework which employs memorization techniques designed for the 2L-SPD model, to accelerate the solution methodology. To assess the performance of our routing and packing algorithmic components, we have solved the Vehicle Routing Problem with Simultaneous Pick-ups and Deliveries (VRPSPD) and the Vehicle Routing Problem with Two-Dimensional Constraints (2L-CVRP). Computational results are also reported on newly constructed 2L-SPD benchmark problems. Apart from the basic 2L-SPD version, we introduce the 2L-SPD with LIFO constraints which prohibit item rearrangements along the routes. Computational experiments are conducted to understand the impact of the LIFO constraints on the routing plans obtained.
The Vehicle Routing Problem with Simultaneous Pick-ups and Deliveries and Two-Dimensional Loading Constraints
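To convey the arc-wise flavour of the loading constraints (an independent illustration, not the authors' algorithm), the sketch below applies a simple necessary condition on every arc of a route: the total area of the items on board, i.e. undelivered delivery items plus already-collected pick-up items, must not exceed the loading surface. A genuine 2L-SPD check would additionally require a feasible two-dimensional packing on each arc; the data layout here is assumed.

# Sketch (assumed data model, not the paper's code): an area-based necessary
# feasibility check applied on every arc of a mixed delivery/pick-up route.

def area(items):
    return sum(w * h for (w, h) in items)

def arcwise_area_feasible(route, delivery, pickup, vehicle_w, vehicle_h):
    """route: customer ids in visiting order; delivery/pickup: id -> [(w, h), ...]."""
    capacity = vehicle_w * vehicle_h
    on_board = sum(area(delivery[c]) for c in route)   # all deliveries loaded at the depot
    if on_board > capacity:                            # arc from the depot to the first customer
        return False
    for c in route:                                    # state on the arc leaving customer c
        on_board -= area(delivery[c])                  # c's delivery items dropped off
        on_board += area(pickup[c])                    # c's pick-up items collected
        if on_board > capacity:
            return False
    return True

delivery = {1: [(4, 3)], 2: [(2, 2)]}
pickup   = {1: [(5, 4)], 2: [(1, 1)]}
print(arcwise_area_feasible([1, 2], delivery, pickup, vehicle_w=6, vehicle_h=5))  # -> True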
S0377221715010681
In this paper we address the problem of measuring the degree of consensus/dissensus in a context where experts or agents express their opinions on alternatives or issues by means of cardinal evaluations. To this end we propose a new class of distance-based consensus models, the family of Mahalanobis dissensus measures for profiles of cardinal values. We set forth some meaningful properties of the Mahalanobis dissensus measures. Finally, an application to a real empirical example is presented and discussed.
A cardinal dissensus measure based on the Mahalanobis distance
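As a rough sketch of the idea (the paper's exact definition and weighting may differ), dissensus in a group of experts who each give a cardinal profile over the alternatives can be measured by averaging pairwise Mahalanobis distances between the profiles, with the covariance-type matrix capturing relationships between alternatives; the identity matrix recovers a plain Euclidean disagreement measure.

# Rough sketch of a Mahalanobis-type dissensus measure (illustrative, not the
# paper's exact definition): average pairwise Mahalanobis distance between the
# experts' cardinal evaluation profiles.
import numpy as np
from itertools import combinations

def mahalanobis_dissensus(profiles, cov):
    """profiles: (n_experts, n_alternatives) array; cov: (n_alt, n_alt) SPD matrix."""
    inv_cov = np.linalg.inv(cov)
    pairs = list(combinations(range(len(profiles)), 2))
    total = 0.0
    for i, j in pairs:
        d = profiles[i] - profiles[j]
        total += float(np.sqrt(d @ inv_cov @ d))
    return total / len(pairs)

profiles = np.array([[7.0, 3.0, 5.0],
                     [6.0, 4.0, 5.0],
                     [2.0, 8.0, 6.0]])
cov = np.identity(3)          # identity matrix: plain Euclidean disagreement
print(mahalanobis_dissensus(profiles, cov))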
S0377221715010693
In this study, we propose a constraint programming (CP) model and logic-based Benders decomposition algorithms in order to make the best decisions for scheduling non-identical jobs with availability intervals and sequence-dependent setup times on unrelated parallel machines over a fixed planning horizon. In this problem, each job has a profit and a cost and must be assigned to at most one machine in such a way that total profit is maximized. In addition, the total cost has to be less than or equal to a budget level. Computational tests are performed on a real-life case study prepared in collaboration with the U.S. Army Corps of Engineers (USACE). Our initial investigations show that the pure CP model is very efficient in obtaining good-quality feasible solutions but fails to report the optimal solution for the majority of the problem instances. On the other hand, the two logic-based Benders decomposition algorithms are able to obtain near-optimal solutions for 86 of the 90 instances examined. For the remaining instances, they provide a feasible solution. Further investigations confirm the high quality of the solutions obtained by the pure CP model.
Analysis of a parallel machine scheduling problem with sequence dependent setup times and job availability intervals
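To give a flavour of the CP side of such a problem (a simplified, independent sketch with toy data using the OR-Tools CP-SAT solver; it omits sequence-dependent setup times and is not the authors' model), one can select and schedule profitable jobs on parallel machines subject to a budget with optional interval variables and a no-overlap constraint per machine.

# Simplified CP-SAT sketch of the job-selection/scheduling structure
# (illustrative; setup times omitted, toy data). Requires the ortools package.
from ortools.sat.python import cp_model

# (duration, profit, cost, release, deadline) per job; two machines, one budget.
jobs = [(4, 10, 3, 0, 12), (6, 14, 5, 2, 14), (5, 9, 4, 0, 10)]
machines, budget = range(2), 8

model = cp_model.CpModel()
assign, machine_intervals = {}, {m: [] for m in machines}
for j, (dur, _, _, rel, dl) in enumerate(jobs):
    for m in machines:
        lit = model.NewBoolVar(f"x_{j}_{m}")
        start = model.NewIntVar(rel, dl - dur, f"s_{j}_{m}")
        end = model.NewIntVar(rel, dl, f"e_{j}_{m}")
        machine_intervals[m].append(
            model.NewOptionalIntervalVar(start, dur, end, lit, f"iv_{j}_{m}"))
        assign[j, m] = lit
    model.Add(sum(assign[j, m] for m in machines) <= 1)    # at most one machine per job
for m in machines:
    model.AddNoOverlap(machine_intervals[m])                # one job at a time per machine
model.Add(sum(jobs[j][2] * assign[j, m] for j in range(len(jobs))
              for m in machines) <= budget)                 # budget on total cost
model.Maximize(sum(jobs[j][1] * assign[j, m] for j in range(len(jobs))
                   for m in machines))                      # maximize total profit

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("profit:", solver.ObjectiveValue())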
S037722171501070X
In this paper, the resource-constrained project scheduling problem with general temporal constraints is extended by the concept of break calendars in order to incorporate the possible absence of renewable resources. Three binary linear model formulations are presented that use start-based, changeover-based, or execution-based binary decision variables. In addition, a priority-rule method as well as three different versions of a scatter search procedure are proposed in order to solve the problem heuristically. All exact and heuristic solution procedures use a new and powerful time planning method, which identifies all time- and calendar-feasible start times for activities as well as all corresponding absolute time lags between activities. In a comprehensive performance analysis, small- and medium-scale instances are solved with CPLEX 12.6. Furthermore, large-scale instances of the problem are tackled with scatter search, where the results of the three versions are compared to each other and to the priority-rule method.
Models and solution procedures for the resource-constrained project scheduling problem with general temporal constraints and calendars
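The calendar aspect can be illustrated with a small, self-contained example (not the paper's time planning method): given a binary work calendar, the calendar-feasible finish time of an activity that may be interrupted by breaks is obtained by counting working periods from its start.

# Illustration only: finish time of an activity that may be interrupted by
# calendar breaks. calendar[t] is True if period t is a working period for
# the activity's resources.

def calendar_finish(start, duration, calendar):
    remaining, t = duration, start
    while remaining > 0:
        if t >= len(calendar):
            raise ValueError("activity does not fit into the planning horizon")
        if calendar[t]:          # work one period unless it is a break
            remaining -= 1
        t += 1
    return t

# 5 working periods, a 2-period break, then 5 more working periods.
cal = [True] * 5 + [False] * 2 + [True] * 5
print(calendar_finish(start=3, duration=4, calendar=cal))   # -> 9 (the activity spans the break)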
S0377221715010711
This paper addresses the Nurse Rerostering Problem (NRRP) that arises when a roster is disrupted by unexpected circumstances. The objective is to find a feasible roster with the minimum number of changes with respect to the original one. The problem is solved by a parallel algorithm executed on a graphics processing unit (GPU) to significantly accelerate its performance. To the best of our knowledge, this is the first parallel algorithm solving the NRRP on a GPU. The core concept is a unique problem decomposition allowing efficient parallelization. Two parallel algorithms, a homogeneous and a heterogeneous one, are proposed (available online), and their performance is evaluated on benchmark datasets in terms of solution quality relative to state-of-the-art results and in terms of speedup. In general, higher acceleration was obtained by the homogeneous model, with speedups of 12.6 and 17.7 on the NRRP datasets with 19 and 32 nurses, respectively. These results encourage further research on parallel algorithms for solving Operational Research problems.
A novel approach for nurse rerostering based on a parallel algorithm
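The objective of minimizing the number of changes with respect to the original roster can be made concrete with a small sketch (the roster encoding below is assumed, not taken from the paper).

# Sketch of the NRRP objective (assumed roster encoding): count how many
# nurse/day assignments in a repaired roster differ from the original one.

def number_of_changes(original, repaired):
    """Rosters map (nurse, day) -> shift code (e.g. 'D', 'N', '-')."""
    return sum(1 for key, shift in original.items() if repaired[key] != shift)

original = {("n1", 1): "D", ("n1", 2): "N", ("n2", 1): "-", ("n2", 2): "D"}
repaired = {("n1", 1): "D", ("n1", 2): "-", ("n2", 1): "N", ("n2", 2): "D"}
print(number_of_changes(original, repaired))   # -> 2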
S0377221715010723
The recent failure of major PC and smartphone makers to launch new generations of high-tech products on time shows that analyzing and capturing the timing of technology disruption is an important yet under-explored research area. This paper conducts theoretical and empirical analyses for ex-ante quantitative evaluation of the timing of technology disruption. We conceptualize the ease and network factors as key determinants of performance improvement for a disruptive technology. A dynamic consumer model is developed to identify two critical times, termed D-Day and V-Day, of technology disruption. We also show that, if the network factor dominates the performance improvement process, there may exist some “bleak days” during which a firm would discontinue a “promising” technology that will eventually disrupt. Empirical tests are conducted with data on hard disk drives, semiconductor technologies, and CPU performance for mobile devices to verify key model assumptions and to show how to estimate the ease and network factors. We also perform a numerical experiment to demonstrate how to forecast the timing of technology disruption. A decision tree and a systematic framework are also developed to operationalize key model parameters and analytical results from a decision-support perspective. This paper contributes to the literature by presenting a novel analytical tool and new insights for high-tech companies to forecast and manage the timing of technology disruption.
The D-Day, V-Day, and bleak days of a disruptive technology: A new model for ex-ante evaluation of the timing of technology disruption
S0377221715010735
In the mixed capacitated general routing problem, one seeks to determine a minimum-cost set of vehicle routes serving segments of a mixed network consisting of nodes, edges, and arcs. We study a bi-objective variant of the problem in which, in addition to seeking a set of routes of low cost, one simultaneously seeks a set of routes in which the workload is balanced. Due to the conflict between the objectives, finding a solution that simultaneously optimizes both objectives is usually impossible. Thus, we seek to generate many or all efficient, or Pareto-optimal, solutions, i.e., solutions in which it is impossible to improve the value of one objective without deteriorating the value of the other. Route balance can be modeled in different ways, and a computational study using small benchmark instances of the mixed capacitated general routing problem demonstrates that the choice of route balance modeling has a significant impact on the number and diversity of Pareto-optimal solutions. The results of the computational study suggest that modeling route balance in terms of the difference between the longest and shortest route in a solution is a robust choice that performs well across a variety of instances.
The bi-objective mixed capacitated general routing problem with different route balance criteria
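To make the alternative balance criteria concrete (an illustration; the paper's exact formulations may differ), route balance can for instance be measured by the spread between the longest and shortest route, or by the length of the longest route alone, alongside the usual total-cost objective; the sketch below evaluates both for a given solution.

# Illustration of two route balance criteria (simplified): range between the
# longest and shortest route versus the longest route alone.

def route_lengths(routes, dist):
    """routes: node sequences starting/ending at depot 0; dist: dict of arc costs."""
    return [sum(dist[a, b] for a, b in zip(r[:-1], r[1:])) for r in routes]

def balance_range(lengths):      # min-max spread: 0 means perfectly balanced
    return max(lengths) - min(lengths)

def balance_makespan(lengths):   # alternative criterion: longest route only
    return max(lengths)

dist = {(0, 1): 4, (1, 2): 3, (2, 0): 5, (0, 3): 6, (3, 0): 6}
routes = [[0, 1, 2, 0], [0, 3, 0]]
lengths = route_lengths(routes, dist)
print(sum(lengths), balance_range(lengths), balance_makespan(lengths))  # -> 24 0 12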