FileName, Abstract, Title
S0377221715001630
We propose a new mixed integer programming formulation and solution algorithm for a multi-mode resource-constrained project scheduling problem with availability constraints (calendars) and the objective to minimize the resource availability cost. Our model exploits the problem structure and has an exponential number of variables, which necessitates a column generation approach. The linear programming relaxation is strengthened by adding valid inequalities that need to be carefully separated in order to show the desired effect. Integer optimal solutions are obtained by an exact state-of-the-art branch-price-and-cut algorithm. Classical time-indexed mixed integer programming formulations for similar problems quite often fail already on instances with only 30 jobs, depending on the network complexity and the total freedom of arranging jobs. A reason is the typically very weak linear programming relaxation. Our model can be interpreted as a non-trivial Dantzig–Wolfe reformulation of a time-indexed formulation. In particular, for larger instances our reformulation gives significantly tighter dual bounds, enabling us to optimally solve instances with 50 multi-mode jobs. This outperforms the classical formulation by far.
A branch-price-and-cut algorithm for multi-mode resource leveling
S0377221715001642
The Steiner Travelling Salesman Problem (STSP) is a variant of the TSP that is suitable for instances defined on road networks. We consider an extension of the STSP in which the road traversal costs are both stochastic and correlated. This happens, for example, when vehicles are prone to delays due to rush hours, road works or accidents. Following the work of Markowitz on portfolio selection, we model this problem as a bi-objective mean-variance problem. Then, we show how to approximate the efficient frontier via integer programming. We also show how to exploit certain special structures in the correlation matrices. Computational results indicate that our approach is viable for instances with up to 100 nodes. It turns out that minimum variance tours can look rather different from minimum expected cost tours.
The Steiner travelling salesman problem with correlated costs
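For reference, the bi-objective mean-variance formulation suggested by the abstract above can be sketched as follows (notation ours, not the paper's): with binary edge variables x_e, mean traversal costs \mu_e and cost covariance matrix \Sigma,
\[
\min_{x \in X} \; \sum_e \mu_e x_e
\qquad \text{and} \qquad
\min_{x \in X} \; x^{\top} \Sigma x = \sum_e \sum_f \Sigma_{ef}\, x_e x_f ,
\]
where X is the set of feasible Steiner TSP tours; the efficient frontier is then traced by optimizing suitable combinations of the two objectives.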
S0377221715001654
High logistics and handling costs prevent the bioenergy industry from making a greater contribution to the present energy market. Therefore, a mathematical model, OPTIMASS, is presented to optimise strategic (e.g. facility location and type) and tactical (e.g. allocation) decisions in all kinds of biomass-based supply chains. In contrast to existing models, OPTIMASS evaluates changes in biomass characteristics due to handling operations, which is needed to meet the requirements set for biomass products delivered at a conversion facility. OPTIMASS also considers the re-injection of by-products from conversion facilities, which can play a decisive role in the determination of a sustainable supply chain. The scenario analysis illustrates the functionalities of OPTIMASS in the optimisation of an existing supply chain, the definition of the optimal location of new conversion facilities and the definition of the optimal configuration of a supply chain. OPTIMASS, as a deterministic model, does not consider variability related to, e.g., seasonal changes, which can be a major obstacle; a thorough sensitivity analysis of influencing factors can, however, give insight into the induced changes in the supply chain. The sensitivity analysis in this paper investigates the influence of uncertainty in biomass production and energy demand, and of changes in transport distance. The analysis demonstrates that OPTIMASS can be used as an inspiring tool to investigate the possible effects of governmental decisions, of considering new biomass materials or new facilities, of technology changes, etc. The coupling with GIS allows characterisation and visualisation of problems in advance and visualisation of results in an interpretable way.
A generic mathematical model to optimise strategic and tactical decisions in biomass-based supply chains (OPTIMASS)
S0377221715001666
In this study, we focus on improving parameter estimation in the Phase I study in order to construct more accurate Phase II control limits for monitoring multivariate quality characteristics. For a multivariate normal distribution with unknown mean vector, the usual mean estimator is known to be inadmissible under the squared error loss function when the dimension of the variables is greater than 2. Shrinkage estimators, such as the James–Stein estimators, have been shown in the literature to perform better than the conventional estimators. We utilize the James–Stein estimators to improve the Phase I parameter estimation. Multivariate control limits for Phase II monitoring based on the improved estimators are proposed in this study. The resulting control charts, JS-type charts, are shown to offer substantial performance improvements over existing ones.
Multivariate control charts based on the James–Stein estimator
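As background for the abstract above, the classical James–Stein shrinkage result (a textbook fact, not specific to this paper): for a single observation X \sim N_p(\theta, I) with p \ge 3, the estimator
\[
\hat{\theta}_{JS} = \left(1 - \frac{p - 2}{\|X\|^{2}}\right) X
\]
dominates the maximum likelihood estimator \hat{\theta} = X under squared error loss; shrinkage of this kind applied to the Phase I mean estimate is what underlies the JS-type charts described above.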
S0377221715001678
We propose a hierarchical Bayesian semiparametric approach to account simultaneously for heterogeneity and functional flexibility in store sales models. To estimate own- and cross-price response flexibly, a Bayesian version of P-splines is used. Heterogeneity across stores is accommodated by embedding the semiparametric model into a hierarchical Bayesian framework that yields store-specific own- and cross-price response curves. More specifically, we propose multiplicative store-specific random effects that scale the nonlinear price curves while their overall shape is preserved. Estimation is fully Bayesian and based on novel MCMC techniques. In an empirical study, we demonstrate a higher predictive performance of our new flexible heterogeneous model over competing models that capture heterogeneity or functional flexibility only (or neither of them) for nearly all brands analyzed. In particular, allowing for heterogeneity in addition to functional flexibility can improve the predictive performance of a store sales model considerably, while incorporating heterogeneity alone only moderately improved or even decreased predictive validity. Taking into account model uncertainty, we show that the proposed model leads to higher expected profits as well as to materially different pricing recommendations.
Accommodating heterogeneity and nonlinearity in price effects for predicting brand sales and profits
S0377221715001691
This paper is concerned with the Multi-Row Facility Layout Problem. Given a set of rectangular departments, a fixed number of rows, and weights for each pair of departments, the problem consists of finding an assignment of departments to rows and the positions of the departments in each row so that the total weighted sum of the center-to-center distances between all pairs of departments is minimized. We show how to extend our recent approach for the Space-Free Multi-Row Facility Layout Problem to general Multi-Row Facility Layout as well as some special cases thereof. To the best of our knowledge this is the first global optimization approach for multi-row layout that is applicable beyond the double-row case. A key aspect of our proposed approach is a model for multi-row layout that expresses the problem as a discrete optimization problem, and thus makes it possible to exploit the underlying combinatorial structure. In particular we can explicitly control the number and size of the spaces between departments. We construct a semidefinite relaxation of the discrete optimization formulation and present computational results showing that the proposed approach gives promising results for several variants of multi-row layout problems on a variety of benchmark instances.
A semidefinite optimization-based approach for global optimization of multi-row facility layout
S0377221715001708
The notion of imperfect maintenance has spawned a large body of literature, and many imperfect maintenance models have been developed. However, there is very little work on developing suitable imperfect maintenance models for systems outfitted with sensors. Motivated by the practical need of such imperfect maintenance models, the broad objective of this paper is to propose an imperfect maintenance model that is applicable to systems whose sensor information can be modeled by stochastic processes. The proposed imperfect maintenance model is founded on the intuition that maintenance actions will change the rate of deterioration of a system, and that each maintenance action should have a different degree of impact on the rate of deterioration. The corresponding parameter-estimation problem can be divided into two parts: the estimation of fixed model parameters and the estimation of the impact of each maintenance action on the rate of deterioration. The quasi-Monte Carlo method is utilized for estimating fixed model parameters, and the filtering technique is utilized for dynamically estimating the impact from each maintenance action. The competence and robustness of the developed methods are evidenced via simulated data, and the utility of the proposed imperfect maintenance model is revealed via a real data set.
Degradation-based maintenance decision using stochastic filtering for systems under imperfect maintenance
S0377221715001721
The flexible job shop scheduling problem is challenging due to its high complexity and the huge number of applications it has in real production environments. In this paper, we propose effective neighborhood structures for this problem, including feasibility and non-improving conditions, as well as procedures for fast estimation of the neighbors' quality. These neighborhoods are embedded into a scatter search algorithm which uses tabu search and path relinking in its core. To develop these metaheuristics we define a novel dissimilarity measure which deals with flexibility. We conducted an experimental study to analyze the proposed algorithm and to compare it with the state of the art on standard benchmarks. In this study, our algorithm compared favorably to other methods and established new upper bounds for a number of instances.
Scatter search with path relinking for the flexible job shop scheduling problem
S0377221715001940
Congestion is a widely observed economic phenomenon in which outputs are reduced due to excessive amounts of inputs. Previous approaches to identifying congestion in nonparametric analysis only consider desirable outputs. In the production process, however, undesirable outputs are usually jointly produced with desirable outputs. In this paper, we propose an approach for measuring congestion in the presence of desirable and undesirable outputs simultaneously. The proposed approach can discriminate between congested DMUs and truly efficient DMUs, which are all efficient according to the scores calculated by the directional distance function. Finally, an empirical example is used to illustrate the approach.
Congestion measurement in nonparametric analysis under the weakly disposable technology
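For context, the directional distance function mentioned above is standardly defined (generic notation, not taken from the paper) as
\[
\vec{D}(x, y, b; g) = \max\bigl\{\beta \ge 0 : (x - \beta g_x,\; y + \beta g_y,\; b - \beta g_b) \in T\bigr\},
\]
where x are inputs, y desirable outputs, b undesirable outputs, g = (g_x, g_y, g_b) a direction vector and T the production technology; several distinct DMUs can attain \vec{D} = 0, which is why an additional criterion is needed to separate truly efficient units from congested ones.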
S0377221715001952
In this paper, we study the optimal procurement and operation of an oil refinery. The crude oil prices follow geometric Brownian motion processes with correlation. We build a multiperiod inventory problem where each period involves an operation problem such as separation or blending. The decisions are the amount of crude oils to purchase and the amount of oil products to produce. We employ approximate dynamic programming methods to solve this multiperiod multiproduct optimization problem. Numerical results reveal that this complex problem can be approximately solved with little loss of optimality. Further, we find that the approximate solution significantly outperforms a set of myopic policies that are currently used.
Optimal crude oil procurement under fluctuating price in an oil refinery
S0377221715001964
This paper proposes a flexible stochastic cost frontier panel data model where the technology parameters are unknown smooth functions of firm- and time-effects, which non-neutrally shift the cost frontier. The model decomposes inefficiency into firm and time-specific components and productivity change into inefficiency change, technical change and scale change. We then apply the proposed methodology to the Norwegian salmon production data and analyze technical efficiency as well as productivity changes.
Productivity and efficiency estimation: A semiparametric stochastic cost frontier approach
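A generic stochastic cost frontier of the kind referred to above has the standard form (illustrative notation)
\[
\ln C_{it} = c(y_{it}, w_{it}) + v_{it} + u_{it}, \qquad u_{it} \ge 0,
\]
with observed cost C_{it}, outputs y_{it}, input prices w_{it}, two-sided noise v_{it} and one-sided inefficiency u_{it}; the semiparametric model above additionally lets the technology parameters vary smoothly with firm and time effects and decomposes u_{it} into firm- and time-specific components.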
S0377221715001976
In this paper we consider the MMAP/PH/1 priority queue, both the case of preemptive resume and the case of non-preemptive service. The main idea of the presented analysis procedure is that the sojourn time of the low priority jobs in the preemptive case (and the waiting time distribution in the non-preemptive case) can be represented by the duration of the busy period of a special Markovian fluid model. By making use of the recent results on the busy period analysis of Markovian fluid models it is possible to calculate several queueing performance measures in an efficient way including the sojourn time distribution (both in the time domain and in the Laplace transform domain), the moments of the sojourn time, the generating function of the queue length, the queue length moments and the queue length probabilities.
Efficient analysis of the MMAP[K]/PH[K]/1 priority queue
S0377221715001988
In this paper, we suggest a new multi-objective artificial bee colony (ABC) algorithm by introducing an elitism strategy. The algorithm uses a fixed-size archive, maintained on the basis of crowding distance, to store non-dominated solutions found during the search process. In the proposed algorithm, an improved artificial bee colony algorithm with an elitism strategy is adopted to avoid premature convergence. Specifically, the elites in the archive are selected and used to generate new food sources in both the employed and onlooker bee phases in each cycle. To maintain diversity, a member located in the most crowded region is removed when the archive overflows. The algorithm is very easy to implement and employs only a few control parameters. The proposed algorithm is tested on a wide range of multi-objective problems and compared with other state-of-the-art algorithms in terms of commonly used quality indicators, with the help of a nonparametric test. The test procedure reveals that the algorithm produces better or comparable results compared with other well-known algorithms, and it can be used as a promising alternative tool for solving multi-objective problems, with the advantage of being simple and effective.
An elitism based multi-objective artificial bee colony algorithm
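The crowding-distance-based archive maintenance mentioned above follows the familiar NSGA-II-style computation; below is a minimal Python sketch (illustrative only, not the authors' implementation) of removing the member in the most crowded region when the archive overflows:

import math

def crowding_distances(objs):
    # objs: list of objective vectors of equal length (minimization).
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = math.inf  # keep boundary solutions
        span = (hi - lo) or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1]][k] - objs[order[j - 1]][k]) / span
    return dist

def prune_archive(archive, max_size):
    # archive: list of (solution, objective_vector) pairs.
    while len(archive) > max_size:
        d = crowding_distances([objs for _, objs in archive])
        archive.pop(d.index(min(d)))  # drop the most crowded member
    return archive

Boundary solutions receive infinite distance so they are never removed first, which matches the usual convention for preserving the extremes of the front.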
S0377221715002003
In this study, a mobile blood collection system is designed with the primary objective of increasing blood collection levels. This design also takes into account operational costs to aim for collection of large amounts of blood at reasonable cost. Bloodmobiles perform direct tours to certain activities to collect blood, but at the end of each day, they bring the collected blood to a designated depot to prevent its spoilage. The proposed system consists of the bloodmobiles and a new vehicle called the shuttle that visits the bloodmobiles in the field on each day and transfers the collected blood to the depot. Consequently, bloodmobiles can continue their tours without having to make daily returns to the depot. We propose a mathematical model and a 2-stage IP based heuristic algorithm to determine the tours of the bloodmobiles and the shuttle, and their lengths of stay at each stop. This new problem is defined as an extension of the Selective Vehicle Routing Problem and is referred to as the SVRP with Integrated Tours. The performances of the solution methodologies are tested first on a real data set obtained from past blood donation activities of Turkish Red Crescent in Ankara, and then on a constructed data set based on GIS data of the European part of Istanbul. The Pareto set of optimum solutions is generated based on blood amounts and logistics costs, and finally a sensitivity analysis on some important design parameters is conducted.
Selective vehicle routing for a mobile blood donation system
S0377221715002015
When introducing a new product into a market, substantial amounts of resources are put at stake. Innovation managers therefore seek reliable predictions of the respective innovation diffusion process. Making such predictions, however, is challenging, because the diffusion trajectory is affected by various factors such as the type of innovation, its perceived attributes, marketing activities and their impact, or consumers' individual communication and adoption behaviors. Modeling the diffusion of innovations accordingly is of interest for both practitioners and management scholars. An agent-based model can overcome many limitations of traditional approaches. It accounts for heterogeneity in consumer preferences as well as in the social structure of their interactions, and allows for modeling consumers as boundedly rational agents who make decisions under uncertainty and are influenced by micro-level drivers of adoption. We introduce an agent-based model that deals with repeat purchase decisions, addresses the competitive diffusion of multiple products, and takes into consideration both the temporal and the spatial dimension of innovation diffusion. The corresponding simulation tool can support decision makers in analyzing the prospective diffusion of an innovation in scenarios that differ in pricing strategy, distribution strategy, and/or communication strategy. Its applicability is illustrated by means of an empirically grounded example for a second-generation biofuel.
Innovation diffusion of repeat purchase products in a competitive market: An agent-based simulation approach
S0377221715002027
We develop a model for optimal location of retail stores on a network. The objective is to maximize the total profit of the network subject to a minimum ROI (or ROI threshold) required at each store. Our model determines the location and number of stores, allocation of demands to the stores, and total investment. We formulate a store’s profit as a jointly concave function in demand and investment, and show that the corresponding ROI function is unimodal. We demonstrate an application of our model to location of retail stores operating as an M/M/1/K queue and show the joint concavity of a store’s profit. To this end, we prove the joint concavity of the throughput of an M/M/1/K queue. Parametric analysis is performed on an illustrative example for managerial implications. We introduce an upper bound of an optimal value of the problem and develop three heuristic algorithms based on the structural properties of the profit and ROI functions. Computational results are promising.
Return-on-investment (ROI) criteria for network design
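For reference, the throughput whose joint concavity is established above has the standard M/M/1/K form (a textbook result): with arrival rate \lambda, service rate \mu and \rho = \lambda/\mu \neq 1,
\[
TH = \lambda\,(1 - P_K), \qquad P_K = \frac{(1 - \rho)\,\rho^{K}}{1 - \rho^{K+1}},
\]
where P_K is the probability that an arriving customer finds the buffer full and is lost.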
S0377221715002039
The pairwise comparisons method is a convenient tool when the relative order among different concepts (alternatives) needs to be determined. One popular implementation of the method is based on solving an eigenvalue problem for the pairwise comparisons matrix. In such cases the ranking result is taken from the principal eigenvector of the pairwise comparisons matrix, while the principal eigenvalue is used to determine an index of inconsistency. A lot of research has been devoted to the critical analysis of the eigenvalue-based approach; one example is the work of Bana e Costa and Vansnick (2008). In that work, the authors define the conditions of order preservation (COP) and show that, even for sufficiently consistent pairwise comparisons matrices, these conditions may fail to be met. The present work defines more precise criteria for determining when the COP is met. To formulate the criteria, an error factor is used describing how far the input to the ranking procedure is from the ranking result. The relationship between the Saaty consistency index and the COP is also discussed.
Notes on order preservation and consistency in AHP
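For background, the eigenvalue-based quantities discussed above are the standard AHP ones: for an n \times n pairwise comparisons matrix A with principal eigenpair (\lambda_{\max}, w), the ranking is read off w and Saaty's consistency index is
\[
A w = \lambda_{\max} w, \qquad CI = \frac{\lambda_{\max} - n}{n - 1},
\]
with CI = 0 exactly when A is fully consistent (a_{ij} = w_i / w_j for all i, j); the paper's criteria relate quantities of this kind to whether the conditions of order preservation hold.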
S0377221715002040
This paper presents an indicator-based multi-objective local search (IBMOLS) to solve a multi-objective optimization problem. The problem concerns the selection and scheduling of observations for an agile Earth observing satellite. The mission of an Earth observing satellite is to obtain photographs of the Earth surface to satisfy user requirements. Requests from several users have to be managed before transmitting an order, which is a sequence of selected acquisitions, to the satellite. The obtained sequence has to optimize two objectives under operation constraints. The objectives are to maximize the total profit of the selected acquisitions and simultaneously to ensure the fairness of resource sharing by minimizing the maximum profit difference between users. Experiments are conducted on realistic instances. Hypervolumes of the approximate Pareto fronts are computed and the results from IBMOLS are compared with the results from the biased random-key genetic algorithm (BRKGA).
A multi-objective local search heuristic for scheduling Earth observations taken by an agile satellite
S0377221715002052
Cluster analysis refers to finding subsets of vertices of a graph (called clusters) which are more likely to be joined pairwise than vertices in different clusters. In recent years this topic has been studied by many researchers, and several methods have been proposed. One of the most popular is to maximize the modularity, which represents the fraction of edges within clusters minus the expected fraction of such edges in a random graph with the same degree distribution. However, this criterion presents some issues, for example the resolution limit, i.e., the difficulty of detecting clusters of small size. In this paper we focus on a recent measure, called modularity density, which alleviates the resolution limit issue of modularity. The problem of maximizing the modularity density can be described by means of a 0–1 NLP formulation. We derive some properties of the optimal solution which are used to tighten the formulation, and we propose some MILP reformulations which yield an improvement in solution time.
MILP formulations for the modularity density maximization problem
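As background for the abstract above: for a graph with m edges and a clustering \mathcal{C}, classical modularity and modularity density are commonly written (the latter is stated here in its usual form and may differ in details from the paper's exact definition) as
\[
Q = \sum_{c \in \mathcal{C}} \left[ \frac{m_c}{m} - \left( \frac{d_c}{2m} \right)^{2} \right],
\qquad
D = \sum_{c \in \mathcal{C}} \frac{2 m_c - e_c}{|c|},
\]
where m_c is the number of edges inside cluster c, d_c the total degree of its vertices, e_c the number of edges leaving c and |c| its size; normalizing by the cluster size |c| rather than by 2m is what mitigates the resolution limit.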
S0377221715002064
In applications of data envelopment analysis (DEA) data about some inputs and outputs is often available only in the form of ratios such as averages and percentages. In this paper we provide a positive answer to the long-standing debate as to whether such data could be used in DEA. The problem arises from the fact that ratio measures generally do not satisfy the standard production assumptions, e.g., that the technology is a convex set. Our approach is based on the formulation of new production assumptions that explicitly account for ratio measures. This leads to the estimation of production technologies under variable and constant returns-to-scale assumptions in which both volume and ratio measures are native types of data. The resulting DEA models allow the use of ratio measures “as is”, without any transformation or use of the underlying volume measures. This provides theoretical foundations for the use of DEA in applications where important data are reported in the form of ratios.
Efficiency analysis with ratio measures
S0377221715002076
We analyze the effect of demand uncertainty, as measured by entropy, on expected costs in a stochastic inventory model. Existing models studying demand variability’s impact use either stochastic ordering techniques or use variance as a measure of uncertainty. Due to both axiomatic appeal and recent use of entropy in the operations management literature, this paper develops entropy’s use as a demand uncertainty measure. Our key contribution is an insightful proof quantifying how costs are non-increasing when entropy is reduced.
On the relationship between entropy, demand uncertainty, and expected loss
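For reference, the uncertainty measure used above is the standard Shannon entropy: for a discrete demand distribution with probabilities p_1, \dots, p_n,
\[
H(p) = -\sum_{i=1}^{n} p_i \log p_i ,
\]
which is zero for a deterministic demand and maximal for the uniform distribution; the paper's result states that expected costs do not increase when H is reduced.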
S0377221715002088
Retailers often conduct sequential, single-unit auctions and need to decide the minimum bid in each auction. To reduce inventory costs, it may be optimal to scrap some of the inventory rather than holding it until it is auctioned off. In some auctions, the seller may be uncertain about the market response and hence may want to dynamically learn the demand by observing the number of posted bids. We formulate a Markov decision process (MDP) to study this dynamic auction-design problem under the Vickrey mechanism. We first develop a clairvoyant model where the seller knows the demand distribution. We prove that it is optimal to scrap all inventory above a certain threshold and then auction the remaining units. We derive a first order necessary condition whereby the bidders’ virtual value at an optimal minimum bid equals the seller’s marginal profit. This is a generalization of Riley and Samuelson’s result from the one, single-unit auction case. When the virtual value is strictly increasing, this necessary condition is also sufficient and leads to a structured value iteration algorithm. We then assume that the number of bidders is Poisson distributed but the seller does not know its mean. The seller uses a mixture-of-Gamma prior on this mean and updates this belief over several auctions. This results in a high-dimensional Bayesian MDP whose exact solution is intractable. We therefore propose and compare two approximation methods called certainty equivalent control (CEC) and Q-function approximation. Numerical experiments suggest that Q-function approximation can attain higher revenues than CEC.
Optimal minimum bids and inventory scrapping in sequential, single-unit, Vickrey auctions with demand learning
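For background, the virtual value referred to above is the standard Myerson virtual valuation: if bidders' valuations have cdf F and density f, then
\[
\varphi(v) = v - \frac{1 - F(v)}{f(v)} ,
\]
and the classical Riley–Samuelson reserve price r^* for one single-unit auction with seller's value c solves \varphi(r^*) = c; the first-order condition in the paper generalizes this by equating the virtual value at the optimal minimum bid to the seller's marginal profit in the inventory MDP.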
S0377221715002246
The determination of security returns is closely tied to the validity of the corresponding portfolio selection models. The complexity of real financial markets inevitably leads to a diversity of types of security returns: for example, returns are treated as random variables when sufficient data are available, or as uncertain variables when data are lacking. This paper is devoted to solving such a hybrid portfolio selection problem in the simultaneous presence of random and uncertain returns. The variances of portfolio returns are first derived and proved based on uncertainty theory. Then the corresponding mean-variance models are introduced, and analytical solutions are obtained in the case of no more than two newly listed securities. In the general case, the proposed models can be effectively solved with Matlab, and a numerical experiment is presented.
Mean-variance model for portfolio optimization problem in the simultaneous presence of random and uncertain returns
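For context, the classical mean-variance model that the hybrid formulation above extends is (standard Markowitz form, generic notation)
\[
\min_{x} \; x^{\top} \Sigma x \quad \text{s.t.} \quad \mu^{\top} x \ge r_0, \;\; \mathbf{1}^{\top} x = 1, \;\; x \ge 0,
\]
with mean return vector \mu, covariance matrix \Sigma and target return r_0; in the hybrid setting the portfolio variance must instead be derived jointly from probability theory (for the random returns) and uncertainty theory (for the uncertain returns), which is what the paper establishes first.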
S0377221715002258
The objective of this study is to assess the importance of short- and long-run liquidity or debt risk for technical inefficiency and productivity. An alternative panel estimator of the normal-gamma stochastic frontier model is proposed using a simulated maximum likelihood estimation technique. Empirical estimates indicate a difference in the parameter coefficients of the gamma stochastic production function, and of the heterogeneity function variables, between the pooled and the Swamy–Arora panel models. The results show that short- and long-run risk, i.e., variations in the liquidity or debt-servicing ratio, play an important role in explaining the variance in efficiency and productivity.
Impact of liquidity risk on variations in efficiency and productivity: A panel gamma simulated maximum likelihood estimation
S0377221715002271
Railway capacity determination and expansion are very important topics. In prior research, however, the competition between different entities, such as train services and train types, on different network corridors has been ignored, poorly modelled, or else assumed to be static. In response, a comprehensive set of multi-objective models is formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of railway networks as the most equitable solution according to a clearly defined set of competing objectives. The models also support a sensitivity analysis of capacity with respect to those competing objectives. The models have been extensively tested on a case study and their significant worth is shown. The models were solved using a variety of techniques; an adaptive ε-constraint method, however, proved superior. In order to identify only the best solution, a simulated annealing metaheuristic was implemented and tested. A linearization technique based upon separable programming was also developed and shown to be superior in terms of solution quality, though far less so in terms of computational time.
Multi-objective models and techniques for analysing the absolute capacity of railway networks
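For reference, the ε-constraint scalarization on which the adaptive method mentioned above builds is standard: for objectives f_1, \dots, f_p one repeatedly solves
\[
\min \; f_1(x) \quad \text{s.t.} \quad f_k(x) \le \varepsilon_k \;\; (k = 2, \dots, p), \quad x \in X,
\]
varying the bounds \varepsilon_k systematically to trace the trade-offs between the competing capacity objectives.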
S0377221715002283
Random simulations from complicated combinatorial sets are often needed in many classes of stochastic problems. This is particularly true in the analysis of complex networks, where researchers are usually interested in assessing whether an observed network feature is expected to be found within families of networks under some hypothesis (named conditional random networks, i.e., networks satisfying some linear constraints). This work presents procedures to generate networks with specified structural properties which rely on the solution of classes of integer optimization problems. We show that, for many of them, the constraints matrices are totally unimodular, allowing the efficient generation of conditional random networks by specialized interior-point methods. The computational results suggest that the proposed methods can represent a general framework for the efficient generation of random networks even beyond the models analyzed in this paper. This work also opens the possibility for other applications of mathematical programming in the analysis of complex networks.
Mathematical programming approaches for classes of random network problems
S0377221715002295
In the Team Orienteering Arc Routing Problem (TOARP) the potential customers are located on the arcs of a directed graph and are to be chosen on the basis of an associated profit. A limited fleet of vehicles is available to serve the chosen customers. Each vehicle has to satisfy a maximum route duration constraint. The goal is to maximize the profit of the served customers. We propose a matheuristic for the TOARP and test it on a set of benchmark instances for which the optimal solution or an upper bound is known. The matheuristic finds the optimal solution on all but one of the instances in one of the four classes of tested instances (with up to 27 vertices and 296 arcs). The average error over all instances for which the optimal solution is available is 0.67 percent.
A matheuristic for the Team Orienteering Arc Routing Problem
S0377221715002301
Maintenance scheduling for high value assets has been studied for decades and is still a crucial area of research with new technological advancements. The main dilemma of maintenance scheduling is to avoid failures while preventing unnecessary maintenance. Technological advancements in real-time monitoring and computational science make tracking asset health and forecasting asset failures possible. The usage and maintenance of assets can be planned more efficiently with the forecasted failure probability and remaining useful life (i.e., prognostic information). The prognostic information is time sensitive. Geographically distributed assets such as off-shore wind farms and railway switches add another complexity to the maintenance scheduling problem through the travel time required to reach these assets. Thus, the travel time between geographically distributed assets should be incorporated in the maintenance scheduling when one technician (or team) is responsible for the maintenance of multiple assets. This paper presents a methodology to schedule the maintenance of geographically distributed assets using their prognostic information. A Genetic Algorithm based solution incorporating the daily work duration of the maintenance team is also presented in the paper. (Nomenclature: expected failure, maintenance and travel costs; number of distributed assets and schedule time period; cumulative and expected failure probabilities from prognostics and from reliability analysis; expected downtime and downtime cost per unit time per asset; direct failure and fixed maintenance costs; travel cost per unit distance and travel times between visited assets; maintenance durations, time of last maintenance, daily work-duration limit and its exceedance cost; visit-order variables and binary variables indicating whether asset i is scheduled for maintenance at time t.)
Maintenance scheduling of geographically distributed assets with prognostics information
S0377221715002313
An economic measure of scale efficiency is the ratio of the minimum average cost to the average cost at the actual output level of a firm. It is easily measured by the ratio of the total cost of this output under the constant and variable returns to scale assumptions. This procedure does not identify the output level where the average cost reaches a minimum. This paper proposes a nonparametric method of measuring this output level using DEA. The relation between this efficient production scale, the short run physical capacity output, and the most productive scale size (MPSS) is also discussed. An empirical application using state level data from U.S. manufacturing is used to illustrate the procedure. The DEA findings are further analyzed using a smoothed bootstrap procedure.
Nonparametric measures of scale economies and capacity utilization: An application to U.S. manufacturing
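In the cost formulation used in the abstract above, the scale-efficiency measure can be written as
\[
SE(y) = \frac{AC_{\min}}{AC(y)} = \frac{C^{CRS}(y, w)}{C^{VRS}(y, w)} ,
\]
the ratio of the total cost of producing output y under constant returns to scale to that under variable returns to scale; the paper's added contribution is a DEA procedure that also identifies the output level at which the average cost AC attains its minimum.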
S0377221715002337
This paper examines variance swap pricing using a model that integrates three major features of financial assets, namely the mean reversion in asset price, multi-factor stochastic volatility (SV) and simultaneous jumps in prices and volatility factors. Closed-form solutions are derived for vanilla variance swaps and gamma swaps while the solutions for corridor variance swaps and conditional variance swaps are expressed in a one-dimensional Fourier integral. The numerical tests confirm that the derived solution is accurate and efficient. Furthermore, empirical studies have shown that multi-factor SV models better capture the implied volatility surface from option data. The empirical results of this paper also show that the additional volatility factor contributes significantly to the price of variance swaps. Hence, the results favor multi-factor SV models for pricing variance swaps consistent with the implied volatility surface.
Variance swap with mean reversion, multifactor stochastic volatility and jumps
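For reference, the vanilla variance swap priced above has the standard payoff at maturity
\[
N_{\mathrm{var}} \left( \sigma^{2}_{\mathrm{realized}} - K_{\mathrm{var}} \right),
\]
where \sigma^{2}_{\mathrm{realized}} is the annualized realized variance of the underlying over the life of the swap, K_{\mathrm{var}} the variance strike and N_{\mathrm{var}} the notional per variance point; pricing therefore amounts to computing the risk-neutral expectation of realized variance under the mean-reverting, multi-factor SV model with jumps described above.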
S0377221715002349
This paper analyzes a capacity management problem in which two service providers utilize a common facility to serve two separate markets with time-sensitive demands. The facility provider has a fixed capacity and all parties maximize demand rates. When the service providers share the facility, they play a frequency competition game with a unique Nash equilibrium. When the service providers have dedicated facilities, the facility provider leads two separate Stackelberg games. A centralized system with the first-best outcome is also examined. Based on closed-form solutions under all three scenarios, we find that facility capacity competition is a prerequisite condition for not pooling the service providers. Moreover, we establish the rankings of preferred strategies for all parties with respect to the ratio of the service providers’ demand loss rates, which are proportional to the time sensitivity of demand and the potential market size. Interestingly a triple-agreement situation for the pooling strategy exists if the rates are close, and the facility provider permits a request for dedicated facilities only if the service provider has an overwhelming dominance at the demand loss rate. We connect these managerial insights with strategic seaport capacity management.
Capacity reservation for time-sensitive service providers: An application in seaport management
S0377221715002362
It is well established that multiple reference sets may occur for a decision making unit (DMU) in the non-radial DEA (data envelopment analysis) setting. As our first contribution, we differentiate between three types of reference set. First, we introduce the notion of unary reference set (URS) corresponding to a given projection of an evaluated DMU. The URS includes efficient DMUs that are active in a specific convex combination producing the projection. Because of the occurrence of multiple URSs, we introduce the notion of maximal reference set (MRS) and define it as the union of all the URSs associated with the given projection. Since multiple projections may occur in non-radial DEA models, we further define the union of the MRSs associated with all the projections as unique global reference set (GRS) of the evaluated DMU. As the second contribution, we propose and substantiate a general linear programming (LP) based approach to identify the GRS. Since our approach makes the identification through the execution of a single primal-based LP model, it is computationally more efficient than the existing methods for its easy implementation in practical applications. Our last contribution is to measure returns to scale using a non-radial DEA model. This method effectively deals with the occurrence of multiple supporting hyperplanes arising either from multiplicity of projections or from non-full dimensionality of minimum face. Finally, an empirical analysis is conducted based on a real-life data set to demonstrate the ready applicability of our approach.
On the identification of the global reference set in data envelopment analysis
S0377221715002374
Facing severe budgetary constraints, public transport companies are forced to efficiently manage staff, one of the most expensive resources. The driver rostering problem in a public transit company consists of defining rosters, that is, assigning daily crew duties to the company's drivers for a pre-defined time horizon, ensuring transport demand in a specific area, at low operating costs, while complying with legal restrictions and agreements between the company and the driver unions. The objective of this paper is the study of mathematical models and optimization techniques that lead to new computational tools able to solve the bus driver rostering problem with days-off patterns by producing solutions that increase efficiency and reduce operating costs, while improving or maintaining the service quality and balancing the drivers’ workload. Three mixed integer linear programming formulations are presented and compared from a theoretical point of view: an assignment/cover model, a multi-commodity flow model and a new multi-commodity flow/assignment model. Based on a hierarchy of the decisions made during the resolution of the problem, a new decompose-and-fix heuristic is developed by exploring the underlying structure of the multi-commodity flow models. The heuristic solves the sequence of sub-problems identified by the hierarchy, while fixing or bounding the value of some variables to incorporate previous decisions. Computational experiments were carried out with instances, derived from real world data and from benchmark data, characterized by a days-off pattern in use at two Portuguese public bus transport companies. Computational results confirm the good performance of the decompose-and-fix heuristic.
A decompose-and-fix heuristic based on multi-commodity flow models for driver rostering with days-off pattern
S0377221715002386
Given a finite set N of feasible points of a multi-objective optimization (MOO) problem, the search region corresponds to the part of the objective space containing all the points that are not dominated by any point of N, i.e. the part of the objective space which may contain further nondominated points. In this paper, we consider a representation of the search region by a set of tight local upper bounds (in the minimization case) that can be derived from the points of N. Local upper bounds play an important role in methods for generating or approximating the nondominated set of an MOO problem, yet few works in the field of MOO address their efficient incremental determination. We relate this issue to the state of the art in computational geometry and provide several equivalent definitions of local upper bounds that are meaningful in MOO. We discuss the complexity of this representation in arbitrary dimension, which yields an improved upper bound on the number of solver calls in epsilon-constraint-like methods to generate the nondominated set of a discrete MOO problem. We analyze and enhance a first incremental approach which operates by eliminating redundancies among local upper bounds. We also study some properties of local upper bounds, especially concerning the issue of redundant local upper bounds, that give rise to a new incremental approach which avoids such redundancies. Finally, the complexities of the incremental approaches are compared from the theoretical and empirical points of view.
On the representation of the search region in multi-objective optimization
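A minimal Python sketch of the basic incremental update of local upper bounds discussed above (a simplified illustration of the first, redundancy-eliminating approach; variable names are ours and the brute-force filtering is deliberately naive):

def update_local_upper_bounds(U, z):
    # U: current local upper bounds (lists of length p); z: a new feasible point
    # (minimization). Returns the updated, redundancy-filtered set of bounds.
    p = len(z)
    affected = [u for u in U if all(zk < uk for zk, uk in zip(z, u))]
    candidates = [u for u in U if u not in affected]
    for u in affected:                      # split each affected bound into p children
        for k in range(p):
            child = list(u)
            child[k] = z[k]
            candidates.append(child)
    # de-duplicate, then drop bounds whose search zone is contained in another's
    uniq = [list(t) for t in {tuple(u) for u in candidates}]
    return [u for u in uniq
            if not any(u != w and all(uk <= wk for uk, wk in zip(u, w)) for w in uniq)]

# illustrative use: start from a single "infinite" bound and insert two points
M = 10 ** 6
U = [[M, M]]
for z in [[3, 7], [5, 4]]:
    U = update_local_upper_bounds(U, z)
# U now contains [3, M], [5, 7] and [M, 4] (in some order)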
S0377221715002398
This paper deals with the blocks relocation problem (also called container relocation problem). First, it corrects the binary linear programming model BRP-II presented in Caserta et al. (2012). Second, it improves the initial model formulation by removing superfluous variables, tightening some constraints, introducing a new upper bound and applying a pre-processing step to fix several variables. Computational results show the efficiency of the improved model for small and medium sized instances.
An improved mathematical formulation for the blocks relocation problem
S0377221715002404
When procuring a product from a supplier, a buyer faces the problem of designing a payment scheme to screen the supplier’s quality level and cost. We explore an instalment payment (contract) consisting of an initial payment to the supplier as soon as the product is put in use, followed by a deferred payment that is contingent upon the product in normal operation within a certain period. We find that when the high quality supplier has a higher cost than the low quality supplier, and the suppliers’ financing costs are lower than a certain threshold, the optimal instalment payment has two options: an initial-payment-only option preferred by the low quality supplier and a deferred-payment-only option preferred by the high quality supplier; otherwise, the optimal contract degenerates into an initial-payment-only option. Thus, our research complements past work on moral hazard where no initial payment is proposed. Moreover, we show that the buyer has an incentive to assist with the supplier’s financing. Finally, we compare the instalment payment with the rental contract and show that when the supplier’s financing cost is low or the quality difference among different supplier types is small, the rental contract is more likely to be preferred by the buyer than the instalment payment.
Optimal payment scheme when the supplier’s quality level and cost are unknown
S0377221715002416
This paper discusses a two-level hierarchical time minimization transportation problem, in which the whole set of source–destination links consists of two disjoint partitions, namely Level-I and Level-II links. Some quantity of a homogeneous product is first shipped from sources to destinations by the Level-I decision makers using only Level-I links, and on its completion the Level-II decision maker transports the remaining quantity of the product in an optimal fashion using only Level-II links. The objective is to find the feasible Level-I solution for which the corresponding optimal Level-II solution minimizes the sum of the Level-I and Level-II shipment times. A polynomial time iterative algorithm is proposed to solve the two-level hierarchical time minimization transportation problem. At each iteration, a lexicographic optimal solution of a restricted version of a related standard time minimization transportation problem is examined to generate a pair of Level-I and Level-II shipment times, and finally the global optimal solution is obtained by selecting the best of these generated pairs. A numerical illustration is included in support of the theory.
An iterative algorithm for two level hierarchical time minimization transportation problem
S0377221715002428
Crop rotation plays an important role in agricultural production models with sustainability considerations. Commonly associated strategies include the alternation of botanical families in the plots, the use of fallow periods and the inclusion of green manure crops. In this article, we address the problem of scheduling vegetable production in this context. Vegetables crop farmers usually manage a large number of crop species with different planting periods and growing times. These crops present multiple and varied harvesting periods and productivities. The combination of such characteristics makes the generation of good vegetable crop rotation schedules a hard combinatorial task. We approach this problem while considering two additional important practical aspects: standard plot sizes (multiples of a base area) and total area minimisation. We propose an integer programming formulation for this problem and develop a branch-price-and-cut algorithm that includes several performance-enhancing characteristics, such as the inclusion of a family of subadditive valid inequalities, two primal heuristics and a strong branching rule. Extensive computational experiments over a set of instances based on real-life data validate the efficiency and robustness of the proposed method.
A branch-price-and-cut method for the vegetable crop rotation scheduling problem with minimal plot sizes
S0377221715002593
A common problem arising in many domains is how to value the benefits of projects in a project portfolio. Recently, some attention has been given to a decision analysis practice whereby analysts define a value function on some criterion by setting 0 as the value of the worst project. In particular, Clemen and Smith have argued that this practice is not sound, as it gives different results from the case where projects are "priced out" and makes a strong implicit assumption about the value of not doing a project. In this paper we underscore the criticism of this way of using value functions by showing that it can lead to a rank reversal. We provide a measurement-theoretic account of the phenomenon, showing that the problem arises from evaluating projects on an interval scale (such as a value scale), whereas to guard against such rank reversals, benefits must be measured on at least a ratio scale. Seen from this perspective, we discuss how the solution proposed by Clemen and Smith, of reformulating the underlying optimisation problem to allow for explicitly non-zero values, addresses the issue, and we explore in what sense it may be open to similar problems. In closing we discuss what lessons for practice may be drawn from this analysis, focussing on settings where the Clemen and Smith proposal may not be the most natural way of modelling.
Measurement issues in the evaluation of projects in a project portfolio
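A small worked example of the kind of rank reversal at issue (our illustration, not the paper's): on an interval scale the origin is arbitrary, so all values may be shifted by a constant, yet portfolio totals over different numbers of projects are not invariant to such shifts. With benefit values v(A) = 8 and v(B) = v(C) = 5, the portfolio \{B, C\} beats \{A\}; after subtracting 4 from every value, which any interval scale permits, the ranking reverses:
\[
5 + 5 = 10 > 8, \qquad \text{but} \qquad 1 + 1 = 2 < 4 .
\]
Measuring benefits on a ratio scale, where zero is fixed and meaningful, rules out such shifts and hence the reversal.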
S0377221715002611
In this paper we present an exact solution method for the transportation problem with piecewise linear costs. This problem is fundamental within supply chain management and is a straightforward extension of the fixed-charge transportation problem. We consider two Dantzig–Wolfe reformulations and investigate their relative strength with respect to the linear programming (LP) relaxation, both theoretical and practical, through tests on a number of instances. Based on one of the proposed formulations we derive an exact method by branching and adding generalized upper bound constraints from violated cover inequalities. The proposed solution method is tested on a set of randomly generated instances and compares favorably to solving the model using a standard formulation solved by a state-of-the-art commercial solver.
A branch-cut-and-price algorithm for the piecewise linear transportation problem
S0377221715002623
We propose, in this paper, a new method to initialize the simplex algorithm. This approach does not involve any artificial variables, and it can also detect redundant constraints or infeasibility, if any. Generally, the basis found by this approach is not feasible; to achieve feasibility, the algorithm appeals to the nonfeasible basis method (NFB). Furthermore, we propose a new pivoting rule for the NFB method, which proves beneficial in terms of both numerical behaviour and time complexity. When solving a linear program, we develop an efficient criterion to decide in advance which of the NFB and formal nonfeasible basis methods is likely to be faster. A comparative analysis is carried out on a set of standard test problems from Netlib. Our computational results indicate that the proposed algorithm is more advantageous than the two-phase and perturbation algorithms in terms of number of iterations, number of involved variables, and computational time.
Algebraic simplex initialization combined with the nonfeasible basis method
S0377221715002635
This paper considers a firm that introduces multiple generations of a product to the market at regular intervals. We assume that the firm has only a single product generation in the market at any time. To maximize the total profit within a given planning horizon, the firm needs to decide the optimal frequency at which to introduce new product generations, taking into account the trade-off between sales revenues and product development costs. We model the sales quantity of each generation as a function of technical decay and installed base effects. We analytically examine the optimal frequency for introducing new product generations as a function of these parameters.
On the optimal frequency of multiple generation product introductions
S0377221715002647
Industrial practice and experience highlight that demand is dynamic and non-stationary. Research, however, has historically taken the perspective that stochastic demand is stationary, limiting its impact for practitioners. Manufacturers require schedules for multiple products that decide the quantity to be produced over a required time span. This work investigates the challenges of production in the framework of a single manufacturing line with multiple products and varying demand. The nature of varying demand for numerous products lends itself naturally to an agile manufacturing approach. We propose a new algorithm that iteratively refines production windows and adds products. This algorithm controls parallel genetic algorithms (pGA) that find production schedules while minimizing costs. The configuration of such a pGA is essential in influencing the quality of results; in particular, providing initial solutions is an important factor. Two novel methods are proposed that generate initial solutions by transforming a production schedule into one with refined production windows: the first is called the factorial generation method and the second the fractional generation method. A case study compares the two methods and shows that the factorial method outperforms the fractional one in terms of costs.
Agile factorial production for a single manufacturing line with multiple products
S0377221715002659
Bicycle sharing systems can significantly reduce traffic, pollution, and the need for parking spaces in city centers. One of the keys to success for a bicycle sharing system is the efficiency of rebalancing operations, where the number of bicycles in each station has to be restored to its target value by a truck through pickup and delivery operations. The Static Bicycle Rebalancing Problem aims to determine a minimum cost sequence of stations to be visited by a single vehicle as well as the amount of bicycles to be collected or delivered at each station. Multiple visits to a station are allowed, as well as using stations as temporary storage. This paper presents an exact algorithm for the problem and results of computational tests on benchmark instances from the literature. The computational experiments show that instances with up to 60 stations can be solved to optimality within 2 hours of computing time.
An exact algorithm for the static rebalancing problem arising in bicycle sharing systems
S0377221715002660
We consider the stowage planning problem of a container ship, where the ship visits a series of ports sequentially and containers can only be accessed from the top of the stacks. At some ports, certain containers will be unloaded temporarily and loaded back later for various purposes. Such unproductive movements of containers are called shifts, and they are both time- and money-consuming. The literature shows that binary linear programming formulations for such problems are impracticable for real-life instances due to the large number of binary variables and constraints. Therefore, we develop a heuristic algorithm which can generate stowage plans with a reasonable number of shifts for such problems. The algorithm, verified by extensive computational experimentation, performs better than the Suspensory Heuristic Procedure (SH algorithm) proposed in Avriel et al. (1998), which, to the best of our knowledge, is one of the leading heuristic algorithms for this stowage planning problem.
Stowage planning for container ships: A heuristic algorithm to reduce the number of shifts
S0377221715002672
Within the class of performance ratios, the Sharpe measure can lead to misleading evaluations, and various modifications have been investigated. As a starting point, we consider the axiomatic approach based on the notion of an acceptability index of performance. Our goal is to show how the promising properties possessed by alternative measures, such as the Gain–Loss ratio or the Average-Value-at-Risk ratio, are not compatible with the statistical robustness of their estimated counterparts. This clearly affects the ranking of funds and consequently performance persistence. We study the qualitative robustness along with the quantitative resistance of the corresponding estimators in a nonparametric setting. We also include the Value-at-Risk ratio, which is not an acceptability index of performance. These measures do not possess qualitative robustness; nonetheless, we show how some degree of resistance to data contamination restricted to bounded intervals can be recovered. Using the relationship between the influence function of estimators and their bias for large samples, we suggest the Average-Value-at-Risk ratio and the Value-at-Risk ratio as the least sensitive to outliers. As a consequence, acceptability is no longer a prerequisite for performance evaluation. To limit the alteration of a given ranking among alternative investment funds, one can use the non-acceptable Value-at-Risk ratio as well. Finally, we propose a modified ratio of either the α-trimmed mean or the median to the Value-at-Risk.
Ranking of investment funds: Acceptability versus robustness
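For reference, the performance ratios compared above are commonly defined as follows (standard formulations; the paper's exact conventions may differ) for an excess return X:
\[
\mathrm{Sharpe}(X) = \frac{\mathbb{E}[X]}{\sigma(X)}, \qquad
\mathrm{GL}(X) = \frac{\mathbb{E}[X^{+}]}{\mathbb{E}[X^{-}]}, \qquad
\mathrm{AVaR\ ratio}(X) = \frac{\mathbb{E}[X]}{\mathrm{AVaR}_{\alpha}(X)}, \qquad
\mathrm{VaR\ ratio}(X) = \frac{\mathbb{E}[X]}{\mathrm{VaR}_{\alpha}(X)},
\]
where X^{+} and X^{-} denote the positive and negative parts of X and AVaR_{\alpha} is the Average Value-at-Risk (expected shortfall) at level \alpha; the paper studies how robustly the estimated counterparts of such ratios rank funds when return data are contaminated.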
S0377221715002684
In stochastic optimal control, one deals with sequential decision-making under uncertainty; with dynamic risk measures, one assesses stochastic processes (costs) as time goes on and information accumulates. Under the same vocable of time-consistency (or dynamic-consistency), both theories coin two different notions: the latter is consistency between successive evaluations of a stochastic processes by a dynamic risk measure (a form of monotonicity); the former is consistency between solutions to intertemporal stochastic optimization problems. Interestingly, both notions meet in their use of dynamic programming, or nested, equations. We provide a theoretical framework that offers (i) basic ingredients to jointly define dynamic risk measures and corresponding intertemporal stochastic optimization problems (ii) common sets of assumptions that lead to time-consistency for both. We highlight the role of time and risk preferences — materialized in one-step aggregators — in time-consistency. Depending on how one moves from one-step time and risk preferences to intertemporal time and risk preferences, and depending on their compatibility (commutation), one will or will not observe time-consistency. We also shed light on the relevance of information structure by giving an explicit role to a state control dynamical system, with a state that parameterizes risk measures and is the input to optimal policies.
Building up time-consistency for risk measures and dynamic optimization
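As a hedged illustration of the nested, dynamic programming structure referred to above (the notation is generic and does not reproduce the paper's aggregators), a dynamic risk measure built from one-step conditional mappings ρ_t and the associated Bellman-type recursion for a controlled system with dynamics f_t and stage costs c_t can be written as

$$\rho_{t,T}(X_t,\dots,X_T)=X_t+\rho_t\Bigl(X_{t+1}+\rho_{t+1}\bigl(\cdots+\rho_{T-1}(X_T)\cdots\bigr)\Bigr),$$
$$V_t(x)=\min_{u\in U_t(x)}\ \rho_t\Bigl(c_t(x,u,\xi_t)+V_{t+1}\bigl(f_t(x,u,\xi_t)\bigr)\Bigr).$$

Time-consistency, in both senses discussed above, hinges on whether such nested decompositions are legitimate for the chosen preferences.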
S0377221715002696
Modeling dynamic biological networks from their time-course response datasets is an important issue, as it is critical to understanding system behaviors. This task includes two sub-tasks: network structure identification and associated parameter estimation. In most existing methods, the two sub-tasks are dealt with step by step, which may result in inconsistency between them and hence inaccuracy of the final model. Another challenge is how to transparently understand the derived model, which cannot be achieved by traditional black-box methods. A human-readable fuzzy rule-based model, denoted MoPath, is developed for the simultaneous identification of both the structural topology and the associated parameters of a biological network within an optimization framework. MoPath encodes the fuzzy rules into particles of a convergent heterogeneous particle swarm optimization (CHPSO) algorithm to generate the optimal model. Theoretically, we demonstrate that the cooperation in CHPSO can maintain a balance between exploration and exploitation to guarantee that the particles converge to stable points, which greatly helps in finding the optimal model consisting of both the network topology and the parameters. We demonstrate MoPath on two dynamic biological networks and successfully generate a few human-readable rules that represent the networks with high accuracy and good robustness.
Modeling nonlinear dynamic biological systems with human-readable fuzzy rules optimized by convergent heterogeneous particle swarm
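As background for readers unfamiliar with particle swarm optimization, the canonical velocity and position update that PSO variants such as CHPSO build upon is sketched below in Python; the inertia and acceleration constants are common default choices, and the cooperation mechanism specific to CHPSO is not reproduced here.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update.

    pos, vel, pbest: arrays of shape (n_particles, n_dims) holding current
    positions, velocities and personal best positions; gbest: array of shape
    (n_dims,) with the global best position. Returns updated positions and
    velocities.
    """
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```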
S0377221715002702
The purpose of this paper is to assess the business values of information technology (IT) and electronic commerce (EC) independently and simultaneously as measured by productive efficiency, in order to provide new insights into IT and EC investments and, consequently, lead to better decisions on IT and EC investments. The paper analyzes a panel data set at the country level based on the theory of production and the associated time-varying stochastic production frontier (SPF) approaches, with one-equation and two-equation models estimated by a two-step nonlinear maximum-likelihood method. The performance metric, productive efficiency, is built into these approaches. The empirical evidence strongly suggests that the presence of EC may strengthen or weaken IT value, and vice versa, which provides a good explanation for the disappearance or existence of the so-called productivity paradox, and that the paradox may exist in a country regardless of whether it is a developed or a developing country, contrary to conventional wisdom claiming that the paradox exists only in developing countries. The findings imply that it is imperative to carefully assess the values of IT and EC so that prudent rather than blind IT and EC investment decisions are made, and that the values of IT and EC must be evaluated jointly rather than separately. The findings add significant contributions to the literature and serve as a catalytic agent in stimulating further comparative research in these important areas linking IT investments and EC developments when their business values are the major concern.
Assessing the business values of information technology and e-commerce independently and jointly
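For orientation, a time-varying stochastic production frontier of the kind referred to above can be written in the generic Battese–Coelli form below; this is a textbook specification given only as background, not necessarily the exact one-equation or two-equation model estimated in the paper.

$$\ln y_{it}=\beta_0+\sum_{k}\beta_k \ln x_{kit}+v_{it}-u_{it},\qquad u_{it}=\exp\bigl[-\eta\,(t-T)\bigr]\,u_i,\qquad \mathrm{TE}_{it}=\exp(-u_{it}),$$

where $v_{it}$ is symmetric noise, $u_{it}\ge 0$ captures productive inefficiency, and $\mathrm{TE}_{it}$ is the productive efficiency score used as the performance metric.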
S0377221715002714
This paper deals with a discrete facility location model where service is provided at the facility sites. It is assumed that facilities can fail and customers do not have information on failures before reaching them. As a consequence, they may need to visit more than one facility, following an optimized search scheme, in order to get service. The goal of the problem is to locate p facilities in order to minimize the expected total travel cost. The paper presents two alternative mathematical programming formulations for this problem and proposes a matheuristic based on a network flow model to provide solutions to it. The computational burden of the presented formulations is tested and compared on a test-bed of instances.
The reliable p-median problem with at-facility service
S0377221715002738
We propose in this work a new multistage risk averse strategy based on Time Stochastic Dominance (TSD) along a given horizon. It can be considered as a mixture of the two risk averse measures based on first- and second-order stochastic dominance constraints induced by mixed integer-linear recourse, respectively. Given the dimensions that even medium-sized problems reach once they are augmented with the new variables and constraints required by this risk measure, it is unrealistic to solve the problem to optimality in reasonable computing time by plain use of MIP solvers. Instead, decomposition algorithms of some type should be used. We present an extension of our Branch-and-Fix Coordination algorithm, named BFC-TSD, where a special treatment is given to cross scenario group constraints that link variables from different scenario groups. A broad computational experience is presented by comparing the risk neutral approach and the tested risk averse strategies. The performance of the new version of the BFC algorithm versus the plain use of a state-of-the-art MIP solver is also reported.
On time stochastic dominance induced by mixed integer-linear recourse in multistage stochastic programs
S0377221715002751
Camanho and Dyson (2005) extended Shephard's (1974) revenue-indirect cost efficiency approach to a cost-effectiveness framework, which helps to assess the ability of a firm to achieve the current revenue (expressed in the firm's own prices and quantities) at minimum cost. The degree of cost-effectiveness is quantified as the ratio of the minimum cost to the observed cost of the evaluated firm, where the minimum cost is computed by simultaneously adjusting the output levels at the current revenue. In this paper, we develop two cost-effectiveness approaches based on convex data envelopment analysis and nonconvex free disposable hull technologies. The objectives of this paper are threefold. Firstly, we develop a convex cost-effectiveness (CCE) measure which is equivalent to the Camanho–Dyson CCE measure under the constant returns-to-scale assumption. Secondly, we introduce three nonconvex cost-effectiveness (NCCE) measures which are shown to be equivalent with respect to each returns-to-scale nonconvex technology. Finally, we apply our framework to a real data set.
Cost-effectiveness measures on convex and nonconvex technologies
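The cost-effectiveness ratio described above can be sketched, under a constant returns-to-scale convex technology, as the following linear program; this is an illustrative generic form, not necessarily identical to the CCE and NCCE measures developed in the paper.

$$\mathrm{CE}_o=\frac{C^{*}_o}{w_o^{\top}x_o},\qquad C^{*}_o=\min_{x,\,y,\,\lambda\ge 0}\ \Bigl\{\,w_o^{\top}x\ :\ X\lambda\le x,\ \ Y\lambda\ge y,\ \ p_o^{\top}y\ge p_o^{\top}y_o\,\Bigr\},$$

where the columns of X and Y are the observed input and output vectors, and $w_o$, $p_o$, $x_o$, $y_o$ are the input prices, output prices, inputs and outputs of the evaluated firm. A free disposable hull variant would replace the convex combination $\lambda$ with the selection of a single observed unit.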
S0377221715002763
Scheduling involving setup times/costs plays an important role in today's modern manufacturing and service environments for the delivery of reliable products on time. The setup process is not a value-added factor, and hence setup times/costs need to be explicitly considered while scheduling decisions are made in order to increase productivity, eliminate waste, improve resource utilization, and meet deadlines. However, the vast majority of the existing scheduling literature, more than 90 percent, ignores this fact. Interest in scheduling problems where setup times/costs are explicitly considered began in the mid-1960s and has been increasing, though not at the anticipated level. The first comprehensive review paper (Allahverdi et al., 1999) on scheduling problems with setup times/costs appeared in 1999, covering about 200 papers from the mid-1960s to mid-1998, while the second comprehensive review paper (Allahverdi et al., 2008) covered about 300 papers which were published from mid-1998 to mid-2006. This paper is the third comprehensive survey paper and provides an extensive review of about 500 papers that have appeared from mid-2006 to the end of 2014, including static, dynamic, deterministic, and stochastic environments. This review paper classifies scheduling problems based on shop environments as single machine, parallel machine, flowshop, job shop, or open shop. It further classifies the problems as family and non-family as well as sequence-dependent and sequence-independent setup times/costs. Given that so many papers have been published in a relatively short period of time, different researchers have addressed the same problem independently, sometimes even using the same methodology. Throughout the survey, the independently addressed problems are identified, and the need to compare these results is emphasized. Moreover, based on performance measures and on shop and setup times/costs environments, the less studied problems are identified and the need to address them is specified. The current survey paper, along with those of Allahverdi et al. (1999, 2008), constitutes an up-to-date survey of scheduling problems involving static, dynamic, deterministic, and stochastic problems for different shop environments with setup times/costs since the first research on the topic appeared in the mid-1960s.
The third comprehensive survey on scheduling problems with setup times/costs
S0377221715002775
In this article we compare five alternative projects for the requalification of an abandoned quarry. The starting point for this paper was a request made by a decision maker. It was not for help in making a decision as such, but rather for a comparison of different projects. In particular, we are interested in ranking the considered projects on the basis of six different criteria. An extension of the Electre III method with interactions between pairs of criteria was applied in the research. A focus group of experts (in economic evaluation, environmental engineering, and landscape ecology) was formed to be in charge of the process leading to the assignment of numerical values to the weights and interaction coefficients. We report on the way the process evolved and on the difficulties we encountered in obtaining consensual sets of values. Taking into account these difficulties, we considered other sets of weights and interaction coefficients. Our aim was also to study the impact on the final ranking of the fact that these numerical values, assigned to the parameters, were not perfectly defined. This allowed us to formulate robust conclusions which were presented to the members of the focus group.
Dealing with a multiple criteria environmental problem with interaction effects between criteria through an extension of the Electre III method
S0377221715002787
We show that a number of scheduling problems with competing agents and earliness minimization objectives are equivalent to the corresponding problems with tardiness minimization objectives.
A note on scheduling problems with competing agents and earliness minimization objectives
S0377221715002799
The aim of this work is to present the practical applications of an integrated use of soft and hard methodologies applied in a case study of the Surgical Centre of the University Hospital Clementino Fraga Filho, where the low volume of surgeries is of major concern. The proposed approach is particularly appropriate in situations where there is limited time, financial resources, and institutional cooperation. Cognitive maps were used to elicit the perspectives of health professionals, which supported simulation experiments and guided the model's execution. Human-resource, patient-related, room-schedule, material, and structural constraints were found to affect the number of surgeries performed. The major contribution of this paper is the proposal of a multi-methodological approach with a committed focus on problem solving that incorporates specialists' views in simulation experiments; these specialists’ collaborative work highlights actions that can lead to the resolution (or improvement) of real-world problems.
Integrating soft and hard operational research to improve surgical centre management at a university hospital
S0377221715002805
Surgical scheduling is a challenging problem faced by hospital managers. It is subject to a wide range of constraints depending upon the particular situation within any given hospital. We deal with the simultaneous employment of specialised human resources, which must be assigned to surgeries according to their skills as well as the time windows of the staff. A particular feature is that they can be assigned to two surgeries simultaneously if the rooms are compatible. The objective is to maximise the use of the operating rooms. We propose an integer model and integer programming based heuristics to address the problem. Computational experiments were conducted on a number of scenarios inspired by real data to cover different practical problem solving situations. Numerical results show that relaxations provide tight upper bounds, and relax-and-fix heuristics are successful in finding optimal or near optimal solutions.
Surgical scheduling with simultaneous employment of specialised human resources
S0377221715002817
We present an algorithm based on an ant colony system to deal with a broad range of Dynamic Capacitated Vehicle Routing Problems with Time Windows, (partial) Split Delivery and Heterogeneous fleets (DVRPTWSD). We address the important case of responsiveness. Responsiveness is defined here as completing a delivery as soon as possible, within the time window, such that the client or the vehicle may restart its activities. We develop an interactive solution to allow dispatchers to take new information into account in real-time. The algorithm and its parametrization were tested on real and artificial instances. We first illustrate our approach with a problem submitted by Liege Airport, the 8th biggest cargo airport in Europe. The goal is to develop a decision system to optimize the journey of the refueling trucks. We then consider some classical VRP benchmarks with extensions to the responsiveness context.
An ant colony system for responsive dynamic vehicle routing
S0377221715002829
In this study, biobjective mixed 0–1 integer linear programming problems are considered and two heuristic approaches are presented to find the Pareto frontier of these problems. The first heuristic is a variant of the variable neighborhood search and explores the k-neighbors of a feasible solution (in terms of binary variables) to find the extreme supported Pareto points. The second heuristic is adapted from the local branching method, which is well-known in single objective mixed 0–1 integer linear programming. Finally, an algorithm is proposed to find Pareto segments of outcome line segments of these heuristics. A computational analysis is performed by using some test problems from the literature and the results are presented.
Heuristic approaches for biobjective mixed 0–1 integer linear programming problems
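For context, the single-objective local branching constraint that the second heuristic adapts (following Fischetti and Lodi) restricts the search to a Hamming neighbourhood of an incumbent binary vector $\bar{x}$: with $\bar{S}=\{j:\bar{x}_j=1\}$,

$$\Delta(x,\bar{x})=\sum_{j\in\bar{S}}(1-x_j)+\sum_{j\notin\bar{S}}x_j\ \le\ k,$$

so at most k binary variables may flip their value; how this neighbourhood is explored in the biobjective case is specific to the paper.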
S0377221715002830
The Minimum Number of Branch Vertices Spanning Tree Problem is to minimize the number of vertices of degree greater than two in a spanning tree. We present a branch-and-cut algorithm based on an enforced Integer Programming formulation, which can solve many more instances than previous methods. Since the problem is NP-hard, very large instances cannot be solved exactly. For such cases, a new heuristic two-stage method that gives very good approximate solutions is developed.
Exact and heuristic solutions for the Minimum Number of Branch Vertices Spanning Tree Problem
S0377221715002842
Point Pattern Matching (PPM) is the task of pairing up the points in two images of the same scene. There are many existing approaches in the literature for point pattern matching. However, their drawback lies in the high complexity of the algorithms. To overcome this drawback, an Ant Colony Optimization based Binary Search Point Pattern Matching (ACOBSPPM) algorithm is proposed. According to this approach, the edges of the image are stored in the form of point patterns. To match an incoming image with the stored images, the ant agent chooses a point value in the incoming image point pattern and employs a binary search method to find a match with the point values in the stored image point pattern chosen for comparison. Once a match occurs, the ant agent finds a match for the next point value in the incoming image point pattern by searching between the matching position and the end of the stored image point pattern. The stored image point pattern having the maximum number of matches corresponds to the image matching the incoming image. Experimental results show that the ACOBSPPM algorithm is efficient compared to existing point pattern matching approaches in terms of time complexity and precision.
Ant colony optimization based binary search for efficient point pattern matching in images
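The binary-search matching step described above can be illustrated with a short Python sketch; the sorted storage of point values, the numeric tolerance, and the simple match count are illustrative assumptions, and the ACO pheromone mechanics are deliberately omitted.

```python
from bisect import bisect_left

def count_matches(incoming, stored_sorted, tol=1e-6):
    """Count values of the incoming point pattern that find a match, via
    binary search, in a stored point pattern kept in sorted order.

    Assumes `incoming` is processed in ascending order; after a match, the
    search continues only to the right of the matching position.
    """
    matches, lo = 0, 0
    for value in incoming:
        i = bisect_left(stored_sorted, value - tol, lo)
        if i < len(stored_sorted) and abs(stored_sorted[i] - value) <= tol:
            matches += 1
            lo = i + 1
    return matches

# The stored pattern with the largest count would be declared the match.
```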
S0377221715002854
A company's assortment of products and corresponding inventory levels are constrained by available resources, such as production capacity, storage space, and capital to acquire the inventory. Thus, customers may not always be able to find a most preferred product at the time of purchase; this unsatisfied demand is often substituted with an alternative. In the extant literature, there has been an increasing number of studies that consider product substitution when planning product assortment, inventory, and capacity, in conjunction with pricing. In this paper we classify the literature on the planning of substitutable products published in the major OM and marketing journals during the past four decades (1974–2013) and present a comprehensive taxonomy of the literature. One criterion is adopted to discuss modeling objectives, and three major criteria are provided to define the nature of product substitution, including substitution mechanism, substitution decision maker, and direction of substitutability. We also identify research gaps to provide guidance for related research in the future.
A classification of the literature on the planning of substitutable products
S0377221715002866
A number of factors are causing changes in container logistics, particularly for imports into the United States. These include growth and volatility in demands, expansion of new routes (Panama Canal and Northwest Passage), and development of new ports (Prince Rupert and ports on the Mexican west coast). Uncertainty is a major factor confronting logistics for container shipping. A stochastic network-flow model is developed in this study to analyze risk in port throughput as a result of randomness in critical variables in the logistics system for container imports into the United States. The results illustrate the stochastic distribution of container shipments at ports and routes serving the U.S. container market. The derived distributions for port throughput have important implications for port management.
Risk analysis in port competition for containerized imports
S0377221715002878
In many application areas such as airlines and hotels a large number of bookings are typically cancelled. Explicitly taking cancellations into account creates an opportunity for increasing revenue. Motivated by this, we propose a revenue management model based on Talluri and van Ryzin (2004) that takes cancellations into account in addition to customer choice behaviour. Moreover, we consider overbooking limits as these are influenced by cancellations. We model the problem as a Markov decision process and propose three dynamic programming formulations to solve the problem, each appropriate in a different setting. We show that in certain settings the problem can be solved exactly using a tractable solution method. For other settings we propose tractable heuristics, since the problem faces the curse of dimensionality. Numerical results show that the heuristics perform almost as well as the exact solution. However, the model without cancellations can lead to a revenue loss of up to 20 percent. Lastly, we provide a parameter estimation method based on Newman et al. (2014). This estimation method is fast and provides good parameter estimates. The combination of the model, the tractable and well-performing solution methods, and the parameter estimation method ensures that the model can be efficiently applied in practice.
Revenue management under customer choice behaviour with cancellations and overbooking
S0377221715003094
In this paper we present an efficient two-stage hierarchical decomposition algorithm aiming at determining economically improved operation schedules for residential proton exchange membrane fuel cell micro-combined heat and power (PEMFC micro-CHP) units and optimizing local charging of electric vehicles (EV) in the same household. Based on an individual short-term load forecasting (STLF) approach (imperfect forecast) for households implemented as an adaptive network-based fuzzy inference system (ANFIS), a mixed-integer linear program (MILP) and a two-stage greedy algorithm are used for determining optimized schedules based on a rolling-window approach. The results of the case study performed for eight variants in exemplary German households reveal that with both the MILP and the algorithmic approach, significant economic savings can be achieved compared to the standard heat-led strategy. Compared to the MILP, however, the two-stage algorithm has the additional advantage of a reduced computing time of only about 1/15. Deviations from the MILP solutions are mostly smaller than 3 percent regarding the annual supply costs. Moreover, the comparison between the use of perfect and imperfect demand forecasts quantifies additional average losses due to forecasting errors of 2 percent and 3.3 percent at the maximum. Altogether, the algorithmic approach seems to be convincing for real applications in households due to its good results, high reliability, easy implementation, and short computing times. The combination of a micro-CHP unit and an EV is highly synergetic.
An efficient two-stage algorithm for decentralized scheduling of micro-CHP units
S0377221715003100
In this paper we study a hub location problem in which the hubs to be located must form a set of interconnecting lines. The objective is to minimize the total weighted travel time between all pairs of nodes while taking into account a budget constraint on the total set-up cost of the hub network. A mathematical programming formulation, a Benders-branch-and-cut algorithm and several heuristic algorithms, based on variable neighborhood descent, greedy randomized adaptive search, and adaptive large neighborhood search, are presented and compared to solve the problem. Numerical results on two sets of benchmark instances with up to 70 nodes and three lines confirm the efficiency of the proposed solution algorithms.
Exact and heuristic algorithms for the design of hub networks with multiple lines
S0377221715003112
In practice, in many call centers customers often perform redials (i.e., reattempt after an abandonment) and reconnects (i.e., reattempt after an answered call). In the literature, call center models usually do not cover these features, while real data analysis and simulation results show that ignoring them inevitably leads to inaccurate estimation of the total inbound volume. Therefore, in this paper we propose a performance model that includes both features. In our model, the total volume consists of three types of calls: (1) fresh calls (i.e., initial call attempts), (2) redials, and (3) reconnects. In practice, the total volume is used to make forecasts, although, according to the simulation results, this can lead to large forecast errors and, subsequently, to wrong staffing decisions. However, most call center data sets do not have customer-identity information, which makes it difficult to identify how many calls are fresh and what fractions of the calls are redials and reconnects. Motivated by this, we propose a model to estimate the number of fresh calls, and the redial and reconnect probabilities, using real call center data that has no customer-identity information. We show that these three variables cannot be estimated simultaneously. However, it is empirically shown that if one variable is given, the other two variables can be estimated accurately with relatively small bias. We show that our estimations of the redial and reconnect probabilities and the number of fresh calls are close to the real ones, both via real data analysis and via simulation.
On the estimation of the true demand in call centers with redials and reconnects
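The way redials and reconnects inflate the total volume can be made explicit under a simple branching-process assumption that is only illustrative and not the paper's model: suppose every attempt is abandoned with probability $p_a$, an abandoned attempt is redialed with probability $r$, and an answered attempt is followed by a reconnect with probability $q$, with these probabilities constant across attempts. Then each fresh call generates on average

$$\mathbb{E}[\text{attempts per fresh call}]=\frac{1}{1-m},\qquad m=p_a\,r+(1-p_a)\,q\ (<1),$$

so the total volume equals the fresh volume multiplied by $1/(1-m)$; forecasting the total rather than the fresh volume therefore mixes genuine demand with these feedback effects, which is the source of the forecast errors mentioned above.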
S0377221715003124
This paper presents an effective branch-and-price (B&P) algorithm for multiple-runway aircraft sequencing problems. This approach improves the tractability of the problem by several orders of magnitude when compared with solving a classical 0–1 mixed-integer formulation over a set of computationally challenging instances. Central to the computational efficacy of the B&P algorithm is solving the column generation subproblem as an elementary shortest path problem with aircraft time-windows and non-triangular separation times using an enhanced dynamic programming procedure. We underscore in our computational study the algorithmic features that contribute, in our experience, to accelerating the proposed dynamic programming procedure and, hence, the overall B&P algorithm.
An accelerated branch-and-price algorithm for multiple-runway aircraft sequencing problems
S0377221715003136
In an electric power system, demand fluctuations may result in significant ancillary cost to suppliers. Furthermore, in the near future, deep penetration of volatile renewable electricity generation is expected to exacerbate the variability of demand on conventional thermal generating units. We address this issue by explicitly modeling the ancillary cost associated with demand variability. We argue that a time-varying price equal to the suppliers’ instantaneous marginal cost may not achieve social optimality, and that consumer demand fluctuations should be properly priced. We propose a dynamic pricing mechanism that explicitly encourages consumers to adapt their consumption so as to offset the variability of demand on conventional units. Through a dynamic game-theoretic formulation, we show that (under suitable convexity assumptions) the proposed pricing mechanism achieves social optimality asymptotically, as the number of consumers increases to infinity. Numerical results demonstrate that compared with marginal cost pricing, the proposed mechanism creates a stronger incentive for consumers to shift their peak load, and therefore has the potential to reduce the need for long-term investment in peaking plants.
Pricing of fluctuations in electricity markets
S0377221715003148
The issues of carbon emission and global warming have increasingly aroused worldwide attention in recent years. Despite huge progress in carbon abatement, few research studies have reported on the impacts of carbon emission reduction mechanisms on manufacturing optimisation, which often leads to environmentally unsustainable operational decisions and misestimation of performance. This paper attempts to explore carbon management under the carbon emission trading mechanism for the optimisation of lot sizing production planning in stochastic make-to-order manufacturing, with the objective of maximising shareholder wealth. We are concerned not only about the economic benefits of investors, but also about the environmental impacts associated with production planning. Numerical experiments illustrate the significant influences of carbon emission trading, pricing, and caps on the dynamic decisions of the lot sizing policy. The result highlights the critical roles of carbon management in production planning for achieving both environmental and economic benefits. It also provides managerial insights into operations management to help mitigate environmental deterioration arising from carbon emission, as well as to improve shareholder wealth.
Stochastic lot sizing manufacturing under the ETS system for maximisation of shareholder wealth
S0377221715003161
The recently published equilibrium efficient frontier data envelopment analysis (EEFDEA) approach (Yang et al., 2014) represents a step forward in evaluating decision-making units (DMUs) with fixed-sum outputs when compared to prior approaches such as the FSODEA (fixed-sum outputs DEA) approach (Yang et al., 2011) and the ZSG-DEA (zero-sum gains DEA) approach (Lins et al., 2003), among others. Based on the EEFDEA approach, in this paper we propose a generalized equilibrium efficient frontier data envelopment analysis (GEEFDEA) approach which improves and strengthens the EEFDEA approach. Compared to the EEFDEA approach, this approach makes several improvements in evaluation, namely that (1) it is not necessary to determine the evaluation order in advance, which overcomes the limitation that different evaluation orders will lead to different results; (2) the equilibrium efficient frontier can be achieved in only one step no matter how many DMUs there are, which greatly simplifies the procedure to reach the equilibrium efficient frontier, especially when the number of DMUs is large; and (3) the constraint in prior approaches that the signs of each DMU's output adjustments must be the same (all non-positive or all non-negative) has been relaxed. In this sense, the result obtained by the proposed approach is more consistent with the demands of practical applications. Finally, the proposed approach, combined with assurance regions (AR), is applied to the data set of the 2012 London Olympic Games.
A generalized equilibrium efficient frontier data envelopment analysis approach for evaluating DMUs with fixed-sum outputs
S0377221715003173
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a globally optimal subset using the branch and bound (BB) algorithm is limited to problems in very low dimension, typically d ≤ 5, as the complexity of the problem increases exponentially with d. We introduce a bold pruning strategy in the BB algorithm that results in a significant reduction in computing time, at the price of a negligible loss of accuracy. The novelty of our algorithm is that the bounds at nodes of the BB tree come from pseudo-convexifications derived using a linearization technique with approximate bounds for the nonlinear terms. The approximate bounds are computed by solving an auxiliary semidefinite optimization problem. We show through a computational study that our algorithm performs well on a wide set of the most difficult instances of the LTSE problem.
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
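For reference, the Least Trimmed Squares estimator behind this subset-selection problem is the usual one: with residuals $r_i(\beta)=y_i-x_i^{\top}\beta$ and squared residuals ordered as $r^2_{(1)}(\beta)\le\cdots\le r^2_{(n)}(\beta)$,

$$\hat{\beta}_{\mathrm{LTS}}=\arg\min_{\beta\in\mathbb{R}^{d}}\ \sum_{i=1}^{h} r^{2}_{(i)}(\beta),$$

i.e. the regression is fitted to the best-fitting subset of h of the n observations, which is precisely the coupling of continuous coefficients and binary subset choices described above.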
S0377221715003185
We study the constructive variant of the control problem for Condorcet voting, where control is done by deleting voters. We prove that this problem remains NP-hard if instead of Condorcet winners the alternatives in the uncovered set win. Furthermore, we present a relation-algebraic model of Condorcet voting and relation-algebraic specifications of the dominance relation and the solutions of the control problem. All our relation-algebraic specifications immediately can be translated into the programming language of the OBDD-based computer system RelView. Our approach is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other voting rules and control problems.
Control of Condorcet voting: Complexity and a Relation-Algebraic approach
S0377221715003197
Duality results on countably infinite linear programs are scarce. Subspaces that admit an interior point, which is a sufficient condition for a zero duality gap, yield a dual where the constraints cannot be expressed using the ordinary transpose of the primal constraint matrix. Subspaces that permit a dual with this transpose do not admit an interior point. This difficulty has stumped researchers for a few decades; it has recently been called the Slater conundrum. We find a way around this hurdle. We propose a pair of primal-dual spaces with three properties: the series in the primal and dual objective functions converge; the series defined by the rows and columns of the primal constraint matrix converge; and the order of sums in a particular iterated series of a double sequence defined by the primal constraint matrix can be interchanged so that the dual is defined by the ordinary transpose. Weak duality and complementary slackness are then immediate. Instead of using interior point conditions to establish a zero duality gap, we call upon the planning horizon method. When the series in the primal and dual constraints are continuous, we prove that strong duality holds if a sequence of optimal solutions to finite-dimensional truncations of the primal and dual CILPs has an accumulation point. We show by counterexample that the requirement that such an accumulation point exist cannot be relaxed. Our results are illustrated using several examples, and are applied to countable-state Markov decision processes and to a problem in robust optimization.
Circumventing the Slater conundrum in countably infinite linear programs
S0377221715003203
Interactive multiobjective optimization methods cannot necessarily be easily used when (industrial) multiobjective optimization problems are involved. There are at least two important factors to be considered with any interactive method: computationally expensive functions and aspects of human behavior. In this paper, we propose a method based on the existing NAUTILUS method and call it the Enhanced NAUTILUS (E-NAUTILUS) method. This method borrows the motivation of NAUTILUS along with the human aspects related to avoiding trading-off and anchoring bias and extends its applicability for computationally expensive multiobjective optimization problems. In the E-NAUTILUS method, a set of Pareto optimal solutions is calculated in a pre-processing stage before the decision maker is involved. When the decision maker interacts with the solution process in the interactive decision making stage, no new optimization problem is solved, thus, avoiding the waiting time for the decision maker to obtain new solutions according to her/his preferences. In this stage, starting from the worst possible objective function values, the decision maker is shown a set of points in the objective space, from which (s)he chooses one as the preferable point. At successive iterations, (s)he always sees points which improve all the objective values achieved by the previously chosen point. In this way, the decision maker remains focused on the solution process, as there is no loss in any objective function value between successive iterations. The last post-processing stage ensures the Pareto optimality of the final solution. A real-life engineering problem is used to demonstrate how E-NAUTILUS works in practice.
E-NAUTILUS: A decision support system for complex multiobjective optimization problems based on the NAUTILUS method
S0377221715003215
Network operation and rehabilitation are major concerns for water utilities due to their impact on providing a reliable and efficient service. Solving the optimization problems that arise in water networks is challenging mainly due to the nonlinearities inherent in the physics and the often binary nature of decisions. In this paper, we consider the operational problem of pump scheduling and the design problem of leaky pipe replacement. New approaches for these problems based on simulation-optimization are proposed as solution methodologies. For the pump scheduling problem, a novel decomposition technique uses solutions from a simulation-based sub-problem to guide the search. For the leaky pipe replacement problem a knapsack-based heuristic is applied. The proposed solution algorithms are tested and detailed results for two networks from the literature are provided.
Simulation-optimization approaches for water pump scheduling and pipe replacement problems
S0377221715003227
This paper presents a biased random-key genetic algorithm (BRKGA) for the unequal area facility layout problem (UA-FLP), where a set of rectangular facilities with given area requirements has to be placed, without overlapping, on a rectangular floor space. The objective is to find the location and the dimensions of the facilities such that the sum of the weighted distances between the centroids of the facilities is minimized. A hybrid approach is developed, combining a BRKGA, to determine the order of placement and the dimensions of each facility, a novel placement strategy, to position each facility, and a linear programming model, to fine-tune the solutions. The proposed approach is tested on 100 random datasets and 28 benchmark datasets taken from the literature, and compared with 21 other benchmark approaches. The quality of the approach is validated by improving the best known solutions for 19 of the 28 extensively studied benchmark datasets.
A biased random-key genetic algorithm for the unequal area facility layout problem
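The defining ingredient of a BRKGA is the decoder that maps a chromosome of random keys in [0, 1] to a feasible solution. The sketch below is a simplified illustration, not the paper's decoder: one key per facility is sorted to obtain the placement order, and a second key per facility is rescaled into an aspect ratio within assumed bounds.

```python
import numpy as np

def decode(keys, ratio_bounds):
    """Decode 2*n random keys into a placement order and facility shapes.

    keys: array of length 2*n with entries in [0, 1];
    ratio_bounds: list of (lo, hi) aspect-ratio bounds, one per facility.
    """
    n = len(ratio_bounds)
    order = np.argsort(keys[:n])                    # placement sequence
    lo = np.array([b[0] for b in ratio_bounds])
    hi = np.array([b[1] for b in ratio_bounds])
    ratios = lo + keys[n:] * (hi - lo)              # rescale keys to ratios
    return order, ratios

# Example with 3 facilities whose aspect ratios must lie in [0.5, 2.0]:
order, ratios = decode(np.random.rand(6), [(0.5, 2.0)] * 3)
```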
S0377221715003239
The paper studies the incumbent-entrant problem in a fully dynamic setting. We find that under an open-loop information structure the incumbent anticipates entry by overinvesting, whereas in the Markov perfect equilibrium the incumbent slightly underinvests in the period before the entry. The entry cost level where entry accommodation passes into entry deterrence is lower in the Markov perfect equilibrium. Further we find that the incumbent’s capital stock level needed to deter entry is hump shaped as a function of the entry time, whereas the corresponding entry cost, where the entrant is indifferent between entry and non-entry, is U-shaped.
Optimal firm growth under the threat of entry
S0377221715003240
In this paper, we develop an optimal shelf-space stocking policy when demand, in addition to the exogenous uncertainty, is influenced by the amount of inventory displayed (supply) on the shelves. Our model exploits a stochastic dominance condition: we assume that the distribution of realized demand with a higher stocking level stochastically dominates the distribution of realized demand with a lower stocking level. We show that the critical fractile with endogenous demand may not exceed the critical fractile of the classical newsvendor model. Our computational results validate the optimality of the number of units stocked on the retail shelves.
Optimal shelf-space stocking policy using stochastic dominance under supply-driven demand uncertainty
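The classical benchmark invoked above is the textbook newsvendor quantile: with unit underage cost $c_u$, unit overage cost $c_o$, and a demand distribution $F$ that does not depend on the stocking level, the optimal order quantity $Q^{*}$ satisfies

$$F(Q^{*})=\frac{c_u}{c_u+c_o}.$$

The result stated above says that, once displayed inventory stimulates demand in the stochastically dominant sense assumed, the optimal service level cannot exceed this classical fractile.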
S0377221715003252
Risk prices are calculated as the certainty equivalents of risky assets, using a recently developed non-expected utility (non-EU) approach to quantitative risk assessment. The present formalism for the pricing of risk is computationally simple, realistic in the sense of behavioural economics and straightforward to apply in operational research and risk and decision analyses.
Risk pricing in a non-expected utility framework
S0377221715003264
We consider a firm facing stochastic demand for two products with downward, supplier-driven substitution and customer service objectives. We assume both products are perishable or prone to obsolescence, hence the firm faces a single period problem. The fundamental challenge facing the firm is to determine in advance of observing demand the profit maximizing inventory levels of both products that will meet given service level objectives. Note that while we speak of inventory levels, the products may be either goods or services. We characterize the firm’s optimal inventory policy with and without customer service objectives. Results of a numerical study reveal the benefits obtained from substitution and show how optimal inventory levels are impacted by customer service objectives.
Optimal inventory policy for two substitutable products with customer service objectives
S0377221715003276
As supply chain risk management has transitioned from an emerging topic to a growing research area, there is a need to classify different types of research and examine the general trends of this research area. This helps identify fertile research streams with great potential for further examination. This paper presents a systematic review of the quantitative and analytical models (i.e. mathematical, optimization and simulation modeling efforts) for managing supply chain risks. We use bibliometric and network analysis tools to generate insights that have not been captured in previous reviews on the topic. In particular, we complete a systemic mapping of the literature that identifies the key research clusters/topics, interrelationships, and generative research areas that have provided the field with the foundational knowledge, concepts, theories, tools, and techniques. Some of our findings include: (1) quantitative analysis of supply chain risk is expanding rapidly; (2) European journals are the more popular research outlets for the dissemination of the knowledge developed by researchers in the United States and Asia; and (3) sustainability risk analysis is an emerging and fast evolving research topic.
Quantitative models for managing supply chain risks: A review
S0377221715003288
One of the most important factors shaping world outcomes is where investment dollars are placed. In this regard, there is the rapidly growing area called sustainable investing where environmental, social, and corporate governance (ESG) measures are taken into account. With people interested in this type of investing rarely able to gain exposure to the area other than through a mutual fund, we study a cross section of U.S. mutual funds to assess the extent to which ESG measures are embedded in their portfolios. Our methodology makes heavy use of points on the nondominated surfaces of many tri-criterion portfolio selection problems in which sustainability is modeled, after risk and return, as a third criterion. With the mutual funds acting as a filter, the question is: How effective is the sustainable mutual fund industry in carrying out its charge? Our findings are that the industry has substantial leeway to increase the sustainability quotients of its portfolios at even no cost to risk and return, thus implying that the funds are unnecessarily falling short on the reasons why investors are investing in these funds in the first place.
Tri-criterion modeling for constructing more-sustainable mutual funds
S0377221715003306
In the field of multicriteria decision aid, the Simos method is considered an effective tool to assess the criteria importance weights. Nevertheless, the method's input data do not lead to a single weighting vector but to infinitely many, which often exhibit great diversity and threaten the stability and acceptability of the results. This paper proves that the feasible weighting solutions of both the original and the revised Simos procedures are vectors of a non-empty convex polyhedral set, and it therefore proposes a set of complementary robustness analysis rules and measures, integrated into a Robust Simos Method. This framework supports analysts and decision makers in gaining insight into the degree of variation of the multiple acceptable sets of weights and their impact on the stability of the final results. In addition, the proposed measures determine whether, and which, actions should be implemented prior to reaching an acceptable set of criteria weights and forming a final decision. Two numerical examples are provided to illustrate the paper's evidence and to demonstrate the significance of consistently analyzing the robustness of the Simos method results, for both the original and the revised versions of the method.
Elicitation of criteria importance weights through the Simos method: A robustness concern
S0377221715003318
Principal Component Analysis (PCA) is the most common nonparametric method for estimating the volatility structure of Gaussian interest rate models. One major difficulty in the estimation of these models is the fact that forward rate curves are not directly observable from the market so that non-trivial observational errors arise in any statistical analysis. In this work, we point out that the classical PCA analysis is not suitable for estimating factors of forward rate curves due to the presence of measurement errors induced by market microstructure effects and numerical interpolation. Our analysis indicates that the PCA based on the long-run covariance matrix is capable to extract the true covariance structure of the forward rate curves in the presence of observational errors. Moreover, it provides a significant reduction in the pricing errors due to noisy data typically found in forward rate curves.
A noisy principal component analysis for forward rate curves
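A minimal sketch of the estimator advocated here, under the assumption of a Bartlett-kernel (Newey–West type) long-run covariance with bandwidth q; the input data (e.g. forward-rate changes), the bandwidth, and the number of factors retained are placeholders.

```python
import numpy as np

def long_run_pca(X, q=5, n_factors=3):
    """PCA on a Bartlett-weighted long-run covariance of T x N data X,
    instead of the ordinary sample covariance."""
    Xc = X - X.mean(axis=0)
    T = Xc.shape[0]
    S = Xc.T @ Xc / T                               # lag-0 covariance
    for k in range(1, q + 1):
        w = 1.0 - k / (q + 1.0)                     # Bartlett weight
        G = Xc[k:].T @ Xc[:-k] / T                  # lag-k autocovariance
        S += w * (G + G.T)
    vals, vecs = np.linalg.eigh(S)
    keep = np.argsort(vals)[::-1][:n_factors]       # largest eigenvalues first
    return vals[keep], vecs[:, keep]
```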
S0377221715003331
The Stand Allocation Problem (SAP) consists in assigning aircraft activities (arrival, departure and intermediate parking) to aircraft stands (parking positions) with the objective of maximizing the number of passengers/aircraft at contact stands and minimizing the number of towing movements, while respecting a set of operational and commercial requirements. We first prove that the problem of assigning each operation to a compatible stand is NP-complete by a reduction from the circular arc graph coloring problem. As a corollary, this implies that the SAP is NP-hard. We then formulate the SAP as a Mixed Integer Program (MIP) and strengthen the formulation in several ways. Additionally, we introduce two heuristic algorithms based on a spatial and time decomposition leading to smaller MIPs. The methods are tested on realistic scenarios based on actual data from two major European airports. We compare the performance and the quality of the solutions with state-of-the-art algorithms. The results show that our MIP-based methods provide significant improvements over the solutions obtained by previously published approaches. Moreover, their low computation time makes them very practical.
Exact and heuristic approaches to the airport stand allocation problem
S0377221715003343
One of the most significant and common techniques to accelerate user queries in multidimensional databases is view materialization. The problem of choosing an appropriate part of the data structure for materialization under limited resources is known as the view selection problem. In this paper, the problem of minimizing the mean query execution time under limited storage space is studied. Different heuristics based on a greedy method are examined, proofs regarding their performance are presented, and modifications for them are proposed, which not only improve the solution cost but also shorten the running time. Additionally, the heuristics and a widely used Integer Programming solver are experimentally compared with respect to the running time and the cost of the solution. What distinguishes this comparison is its comprehensiveness, which is obtained by the use of performance profiles. Two computational effort reduction schemas, which significantly accelerate the heuristics as well as the optimal algorithms without increasing the value of the cost function, are also proposed. The presented experiments were done on a large dataset with special attention to large problems, which are rarely considered in previous experiments. The main disadvantage of the greedy method indicated in the literature is its long running time. The results of the conducted experiments show that the modification of the greedy algorithm, together with the computational effort reduction schemas presented in this paper, results in a method that finds a solution in a short time, even for large lattices.
Methods for solving the mean query execution time minimization problem
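The baseline greedy method discussed above selects, at each step, the view with the largest benefit per unit of storage that still fits in the budget. The Python sketch below is generic: the benefit function is left abstract because it depends on the view lattice and the query workload, and the paper's modifications and effort-reduction schemas are not reproduced.

```python
def greedy_view_selection(views, sizes, benefit, budget):
    """views: candidate view ids; sizes[v]: storage needed by view v;
    benefit(v, selected): query-time saving of adding v given `selected`;
    budget: available storage space."""
    selected, used = [], 0
    while True:
        best, best_ratio = None, 0.0
        for v in views:
            if v in selected or used + sizes[v] > budget:
                continue
            ratio = benefit(v, selected) / sizes[v]   # benefit per unit space
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break
        selected.append(best)
        used += sizes[best]
    return selected
```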
S0377221715003355
This work addresses the early phases of the elicitation of multiattribute value functions proposing a practical method for assessing interactions and monotonicity. We exploit the link between multiattribute value functions and the theory of high dimensional model representations. The resulting elicitation method does not state any a-priori assumption on an individual’s preference structure. We test the approach via an experiment in a riskless context in which subjects are asked to evaluate mobile phone packages that differ on three attributes.
Elicitation of multiattribute value functions through high dimensional model representations: Monotonicity and interactions
S0377221715003367
It is commonly accepted in the literature that, when facing a strategic terrorist, the government can be better off manipulating the terrorist's target selection by exposing her defense levels and thus moving first. However, the terrorist's private information may significantly affect this first-mover advantage, an issue that has not been extensively studied in the literature. To explore the impact on the defense equilibrium of asymmetry between the government's and the terrorist's knowledge of the terrorist's attributes, we propose a model in which the government chooses between disclosure (sequential game) and secrecy (simultaneous game) of her defense system. Our analysis shows that the government's first-mover advantage in a sequential game is considerable only when the government and the terrorist share relatively similar valuations of targets. In contrast, we interestingly find that the government no longer benefits from the first-mover advantage by exposing her defense levels when the degree of divergence between the government's and the terrorist's valuations of targets is high. This is due to the robustness of the defense system under secrecy, in the sense that all targets should be defended in equilibrium irrespective of how much the terrorist's valuation of targets differs from the government's. We identify two phenomena that lead to this result. First, when the terrorist holds a significantly higher valuation of targets than the government believes, the government may waste her budget in a sequential game by over-investing in the high-valued targets. Second, when the terrorist holds a significantly lower valuation of targets, the government may incur a higher expected damage in a sequential game because of not defending the low-valued targets. Finally, we believe that this paper provides some novel insights into homeland security resource allocation problems.
On the value of exposure and secrecy of defense system: First-mover advantage vs. robustness
S0377221715003379
In this study, we investigate a two-dimensional cutting stock problem in the thin film transistor liquid crystal display industry. Given the lack of an efficient and effective mixed production method that can produce various sizes of liquid crystal display panels from a glass substrate sheet, thin film transistor liquid crystal display manufacturers have relied on the batch production method, which only produces one size of liquid crystal display panel from a single substrate. However, batch production is not an effective or flexible strategy because it increases production costs by using an excessive number of glass substrate sheets and causes wastage costs from unused liquid crystal display panels. A number of mixed production approaches or algorithms have been proposed. However, these approaches cannot solve the industrial-scale two-dimensional cutting stock problem efficiently because of its computational complexity. We propose an efficient and effective genetic algorithm that incorporates a novel placement procedure, called a corner space algorithm, and a mixed integer programming model to resolve the problem. The key objectives are to reduce the total production costs and to satisfy the requirements of customers. Our computational results show that, in terms of solution quality and computation time, the proposed method significantly outperforms the existing approaches.
An efficient genetic algorithm with a corner space algorithm for a cutting stock problem in the TFT-LCD industry
S0377221715003653
We propose a new moment-matching method to build scenario trees that rule out arbitrage opportunities when describing the dynamics of financial assets. The proposed scenario generator is based on the monomial method, a technique to solve systems of algebraic equations. Extensive numerical experiments show the accuracy and efficiency of the proposed moment-matching method when solving financial problems in complete and incomplete markets.
A moment-matching method to generate arbitrage-free scenarios
S0377221715003665
Ouyang et al. (2009) consider an economic order quantity (EOQ) model for deteriorating items with a partially permissible delay in payments linked to order quantity. Their inventory model is practical, but it has some defects from a mathematical and logical viewpoint. In this paper, the functional behaviors of the annual total relevant costs are explored by rigorous mathematical methods. A complete solution procedure is also developed to make up for the shortcomings of Ouyang et al. (2009). Numerical examples show that the new solution procedure avoids wrong decisions and the resulting cost penalties.
Comments on the EOQ model for deteriorating items with conditional trade credit linked to order quantity in the supply chain management
S0377221715003677
This paper analyzes the bullwhip effect in multi-echelon supply chains under a general class of nonlinear ordering policies. A describing-function approach from control theory is used to derive closed-form formulas to predict amplification of order fluctuations along the supply chain. It is proven that with consideration of nonlinearity in the ordering policy, the magnitude of the bullwhip effect will eventually become bounded after growing through the first few stages of the supply chain. It is also proven that the average customer demand as well as the demand fluctuation frequency would directly affect the bounded magnitude, while the suppliers’ demand forecasting method has no effect at all. For illustration, analytical results for a class of order-up-to policies are derived and verified by numerical simulations. The proposed modeling framework holds the promise to not only explain empirical observations, but also serve as the basis for developing counteracting strategies against the bullwhip effect.
Bounded growth of the bullwhip effect under a class of nonlinear ordering policies
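Two quantities used informally above can be stated in their standard forms (the notation is generic, not the paper's): the bullwhip ratio at echelon k, and the describing-function gain of a nonlinear ordering policy facing a sinusoidal demand $d_t=A\sin(\omega t)$,

$$\mathrm{BW}_k=\frac{\operatorname{Var}\bigl(o^{(k)}_t\bigr)}{\operatorname{Var}(d_t)},\qquad N(A,\omega)=\frac{A_1(A,\omega)}{A},$$

where $o^{(k)}_t$ is the order stream placed by echelon k and $A_1$ is the amplitude of the first harmonic of the policy's response. The boundedness result above says that, with nonlinear ordering, the cascade of such gains stops amplifying fluctuations after the first few echelons.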
S0377221715003689
Handling weather uncertainty during the harvest season is an indispensable aspect of seed gathering activities. More precisely, this study focuses on the multi-period wheat quality control problem during the crop harvest season under meteorological uncertainty. In order to alleviate the problem's curse of dimensionality and to faithfully reflect exogenous uncertainties revealed progressively over time, we propose a multi-step joint chance-constrained model rolled forward step by step. This model is subsequently solved by a proactive dynamic approach specially conceived for this purpose. Based on real-world derived instances, the obtained computational results exhibit proactive and accurate harvest scheduling solutions for the wheat crop quality control problem.
A multi-step rolled forward chance-constrained model and a proactive dynamic approach for the wheat crop quality control problem
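As orientation, a joint chance constraint over the harvest periods takes the generic form below; the notation is illustrative and does not reproduce the paper's multi-step rolled-forward formulation.

$$\mathbb{P}\Bigl(g_t(x_t,\xi_t)\le 0\ \ \text{for all}\ t=1,\dots,T\Bigr)\ \ge\ 1-\varepsilon,$$

i.e. the quality and capacity requirements must hold simultaneously in all periods with probability at least $1-\varepsilon$, which is more demanding than imposing an individual chance constraint period by period.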
S0377221715003707
In this paper, we consider the problem of optimizing the portfolio of an aggregator that interacts with the energy grid via bilateral contracts. The purpose of the contracts is to achieve the pointwise procurement of energy to the grid. The challenge raised by the coordination of scattered resources and the securing of obligations over the planning horizon is addressed through a twin-time scale model, where robust short term operational decisions are contingent on long term resource usage incentives that embed the full extent of contract specifications. A contract is said to be valid at week t if week t is within the contract's validity period and if it has a positive number of request tokens left at the end of week t − 1. A resource contract that has announced a maintenance for week t is not valid at week t.
Optimal design of bilateral contracts for energy procurement
S0377221715003719
Let G = (V, E) be an undirected graph with costs associated with its edges and K pre-specified root vertices. The K-rooted mini-max spanning forest problem asks for a spanning forest of G defined by exactly K mutually disjoint trees. Each tree must contain a different root vertex, and the cost of the most expensive tree must be minimum. This paper introduces a branch-and-cut algorithm for the problem. It involves a multi-start linear programming heuristic and the separation of some new optimality cuts. Extensive computational tests indicate that the new algorithm significantly improves on the results available in the literature: improvements are reflected in lower CPU times, smaller enumeration trees, and optimality certificates for previously unattainable K = 2 instances with as many as 200 vertices. Furthermore, for the first time, instances of the problem with K ∈ {3, 4} are solved to proven optimality.
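For intuition about the mini-max objective, a simple greedy construction heuristic is sketched below: it grows one tree per root and always extends the currently cheapest tree, so the cost of the most expensive tree stays low. This is only an illustrative baseline, not the branch-and-cut algorithm or the multi-start LP heuristic of the paper.

```python
import heapq

def kroot_minimax_forest(n, edges, roots):
    # Greedy sketch: Prim-like growth of one tree per root; the tree with the
    # smallest current cost is extended next, balancing the maximum tree cost.
    adj = {v: [] for v in range(n)}
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    K = len(roots)
    owner = {r: k for k, r in enumerate(roots)}          # vertex -> tree index
    cost = [0.0] * K
    frontier = [[(c, w) for c, w in adj[r]] for r in roots]
    for f in frontier:
        heapq.heapify(f)
    forest = [[] for _ in range(K)]                      # chosen (vertex, cost) per tree
    while len(owner) < n:
        growable = [k for k in range(K) if frontier[k]]
        if not growable:
            raise ValueError("graph is disconnected for the given roots")
        k = min(growable, key=lambda idx: cost[idx])     # cheapest tree extends next
        c, w = heapq.heappop(frontier[k])
        if w in owner:
            continue                                     # vertex already claimed
        owner[w] = k
        cost[k] += c
        forest[k].append((w, c))
        for c2, x in adj[w]:
            if x not in owner:
                heapq.heappush(frontier[k], (c2, x))
    return cost, forest

costs, forest = kroot_minimax_forest(
    5, [(0, 2, 1.0), (1, 3, 2.0), (2, 3, 5.0), (2, 4, 1.5), (3, 4, 2.5)], roots=[0, 1])
print(max(costs), forest)
```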
Optimality cuts and a branch-and-cut algorithm for the K-rooted mini-max spanning forest problem
S0377221715003720
Mining complexes contain multiple sequential activities that are strongly interrelated. Extracting the material from different sources may be seen as the first main activity, and any change in the sequence of extraction of the mining blocks modifies the activities downstream, including blending, processing and transporting the processed material to final stocks or ports. Similarly, modifying the operating conditions of a given processing path or the transportation systems implemented may affect the suitability of a previously optimized mining sequence. This paper presents a method to generate mining, processing and transportation schedules that account simultaneously for the previously mentioned activities (or stages) of the mining complex. The method uses an initial solution generated by conventional optimizers and improves it by means of perturbations associated with three different levels of decision: block-based perturbations, operating-alternative-based perturbations and transportation-system-based perturbations. The method accounts for the geological uncertainty of several deposits by considering scenarios originated from combinations of their respective stochastic orebody simulations. The implementation of the method in a multipit copper operation shows its ability to reduce deviations from capacity and blending targets while improving the expected NPV (cumulative discounted cash flows), which highlights the importance of stochastic optimizers given their ability to generate more value with less risk.
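A compressed sketch of the perturbation-and-acceptance loop described above, written as a generic simulated-annealing-style improver over a scenario-averaged objective. The three move types mirror the three decision levels, but the schedule encoding, the toy NPV and the zero penalty are placeholders, not the authors' model.

```python
import copy, math, random

def expected_objective(schedule, scenarios, npv, penalty):
    # expected NPV over geological scenarios minus expected deviation penalties
    return sum(npv(schedule, s) - penalty(schedule, s) for s in scenarios) / len(scenarios)

def perturb(schedule):
    # three illustrative perturbation levels: block swap, operating alternative,
    # transportation system (the actual method uses richer, problem-aware moves)
    new = copy.deepcopy(schedule)
    level = random.choice(["block", "processing", "transport"])
    if level == "block":
        i, j = random.sample(range(len(new["sequence"])), 2)
        new["sequence"][i], new["sequence"][j] = new["sequence"][j], new["sequence"][i]
    elif level == "processing":
        p = random.choice(list(new["mode"]))
        new["mode"][p] = random.choice(new["alternatives"][p])
    else:
        new["transport"] = random.choice(new["transport_options"])
    return new

def improve(schedule, scenarios, npv, penalty, iters=500, temp=1.0, cooling=0.995):
    best = cur = schedule
    best_val = cur_val = expected_objective(cur, scenarios, npv, penalty)
    for _ in range(iters):
        cand = perturb(cur)
        val = expected_objective(cand, scenarios, npv, penalty)
        if val > cur_val or random.random() < math.exp((val - cur_val) / temp):
            cur, cur_val = cand, val
            if cur_val > best_val:
                best, best_val = cur, cur_val
        temp *= cooling
    return best, best_val

toy = {"sequence": list(range(6)), "mode": {"plant": "A"},
       "alternatives": {"plant": ["A", "B"]}, "transport": "truck",
       "transport_options": ["truck", "conveyor"]}
scenarios = [0.9, 1.0, 1.1]     # stand-ins for stochastic orebody simulations
npv = lambda sch, s: s * sum((len(sch["sequence"]) - t) * b
                             for t, b in enumerate(sch["sequence"]))
penalty = lambda sch, s: 0.0
print(improve(toy, scenarios, npv, penalty)[1])
```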
Optimizing mining complexes with multiple processing and transportation alternatives: An uncertainty-based approach
S0377221715003732
In projects with a flexible project structure, the activities that must be scheduled are not completely known in advance. Scheduling such projects includes deciding whether to perform particular activities. This decision also affects precedence constraints among the implemented activities. However, established model formulations and solution approaches for the resource-constrained project scheduling problem (RCPSP) assume that the project structure is provided in advance. In this paper, the traditional RCPSP is extended using a highly general model-endogenous decision on this flexible project structure. This extension is illustrated using the example of the aircraft turnaround process at airports. We present a genetic algorithm to solve this type of scheduling problem and evaluate it in an extensive numerical study.
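As an illustration of a model-endogenous structure decision, the sketch below encodes a chromosome with a selection gene (which cleaning variant of a turnaround is performed) plus an activity list, and decodes it with a serial schedule-generation scheme on a single renewable resource. Activity data, the single optional branch and the decoding rule are assumed for illustration and are not the paper's exact encoding.

```python
import random

ACTS = {                    # activity: (duration, resource demand, predecessors)
    "start": (0, 0, []),
    "deboard": (10, 2, ["start"]),
    "clean_full": (25, 3, ["deboard"]),   # performed only if gene "full_clean" == 1
    "clean_quick": (10, 2, ["deboard"]),  # otherwise the quick variant is performed
    "board": (15, 2, ["clean_full", "clean_quick"]),
}
CAPACITY = 3

def implemented(selection):
    # model-endogenous structure decision: exactly one cleaning variant is implemented
    return {"start", "deboard", "board",
            "clean_full" if selection["full_clean"] else "clean_quick"}

def serial_sgs(order, selection):
    chosen = implemented(selection)
    finish, usage = {}, {}                # usage[t] = resource units busy at time t
    for a in [a for a in order if a in chosen]:
        dur, dem, preds = ACTS[a]
        est = max([finish[p] for p in preds if p in chosen], default=0)
        t = est
        while any(usage.get(t + d, 0) + dem > CAPACITY for d in range(dur)):
            t += 1                        # shift right until the resource fits
        for d in range(dur):
            usage[t + d] = usage.get(t + d, 0) + dem
        finish[a] = t + dur
    return max(finish.values())           # project makespan

chromosome = {"selection": {"full_clean": random.randint(0, 1)},
              "order": ["start", "deboard", "clean_full", "clean_quick", "board"]}
print(serial_sgs(chromosome["order"], chromosome["selection"]))
```

Crossover and mutation of such a chromosome would act on both the selection gene and the activity list; only the decoding step is sketched here.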
Scheduling resource-constrained projects with a flexible project structure
S0377221715003744
Used products collected for value recovery are characterized by higher uncertainty regarding their quality condition compared to raw materials used in forward supply chains. Because of the need for timely information regarding their quality, a common business practice is to establish procedures for the classification of used products (returns), a classification that is not always error-free. The existence of a multitude of sites where used products can be collected further increases the complexity of reverse supply chain design and management. In this paper we formulate the objective function for a reverse supply chain with multiple collection sites and the possibility of returns sorting, assuming general distributions of demand and returns quality in a single-period context. We derive conditions for the determination of the optimal acquisition and remanufacturing lot-sizing decisions under alternative locations of the unreliable classification/sorting operation. We provide closed-form expressions for the selection of the optimal sorting location in the special case of identical collection sites and guidelines for tackling the decision-making problem in the general case. Furthermore, we examine analytically the effect of the cost and accuracy of the classification procedure on the profitability of the alternative supply chain configurations. Our analysis, which is accompanied by a brief numerical investigation, offers insights regarding the impact of yield variability, number of collection sites, and location and characteristics of the returns classification operation both on the acquisition decisions and on the profitability of the reverse supply chain.
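A Monte Carlo sketch of the kind of single-period acquisition decision analyzed above: acquired returns have a random good-quality fraction, the sorting step accepts good and bad units with different probabilities, and only truly good accepted units can satisfy demand. All parameter names, distributions and cost values are illustrative assumptions, not the paper's closed-form results.

```python
import random

def expected_profit(Q, n_sims=1000, price=20.0, c_acq=2.0, c_sort=0.5,
                    c_reman=6.0, salvage=1.0, alpha=0.9, beta=0.1):
    # Q      acquired returns; returns quality fraction drawn from a Beta distribution
    # alpha  P(classified good | truly good); beta  P(classified good | truly bad)
    # demand is normal; remanufacturing a truly bad unit yields no sellable product
    total = 0.0
    for _ in range(n_sims):
        good_frac = random.betavariate(4, 2)
        good = sum(random.random() < good_frac for _ in range(Q))
        accepted_good = sum(random.random() < alpha for _ in range(good))
        accepted_bad = sum(random.random() < beta for _ in range(Q - good))
        accepted = accepted_good + accepted_bad
        demand = max(0, int(random.gauss(0.5 * Q, 0.1 * Q)))
        sold = min(demand, accepted_good)           # misclassified bad units cannot be sold
        profit = (price * sold - c_acq * Q - c_sort * Q - c_reman * accepted
                  + salvage * (accepted - sold))
        total += profit
    return total / n_sims

# crude grid search over acquisition quantities
best_Q = max(range(0, 101, 10), key=expected_profit)
print(best_Q, round(expected_profit(best_Q), 1))
```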
Reverse supply chains: Effects of collection network and returns classification on profitability
S0377221715003756
The literature reveals a contradiction between theoretical results (the superiority of a uniform policy under a concave advertising response function) and empirical results (concavity of the advertising response function together with the superiority of a pulsation policy). To reconcile this difference, this paper offers a resolution based on (1) the concavity of the advertising response function; (2) the convexity of the firm's cost function; and (3) over-advertising. The resolution is reached by maximizing the net profit per unit time over the infinite planning horizon subject to an exogenous advertising budget constraint. The theoretical results for monopolistic markets are found to generalize, for the most part, to competitive markets. A numerical example is introduced to gain more insight into the theoretical findings, and an approach is introduced and implemented to empirically assess the shape of a firm's cost function and the advertising policy to be employed.
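The comparison at the heart of the paper can be made concrete with a small harness that evaluates the net profit per unit time of a uniform policy and of a pulsing policy spending the same budget on average. The functional forms below (a concave response and a convex cost) are placeholders, not the paper's estimated functions, and the harness makes no claim about which policy wins; that depends on the functions supplied.

```python
import math

def profit_rate(a, response, cost, margin=1.0):
    # net profit per unit time at advertising rate a
    return margin * response(a) - cost(a)

def uniform_policy(budget, response, cost):
    # spend the per-period budget evenly over time
    return profit_rate(budget, response, cost)

def pulsing_policy(budget, a_high, response, cost):
    # alternate between rate a_high and zero; the "on" fraction p is chosen so
    # that the average spend equals the budget
    p = budget / a_high
    return (p * profit_rate(a_high, response, cost)
            + (1 - p) * profit_rate(0.0, response, cost))

# illustrative placeholders, not the paper's functions:
response = lambda a: 10.0 * math.sqrt(a)   # concave advertising response
cost = lambda a: 0.4 * a ** 2              # convex firm's cost
B = 5.0                                    # exogenous budget (over-advertising level)
print(uniform_policy(B, response, cost))
print(max(pulsing_policy(B, a, response, cost) for a in [6.0, 8.0, 10.0, 12.0]))
```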
Pulsation in a competitive model of advertising-firm's cost interaction