Columns: FileName (string, length 17), Abstract (string, 163 to 6.01k characters), Title (string, 12 to 421 characters)
S0377221716000795
Operational decisions for crude oil scheduling activities are determined on a daily basis and have a strong impact on the overall supply chain cost. The challenge is to develop a feasible, low-cost schedule that carries a high level of confidence. This paper presents a framework to support decision making in terminal-refinery systems under supply uncertainty. The proposed framework comprises a stochastic optimization model based on mixed-integer linear programming for scheduling a crude oil pipeline connecting a marine terminal to an oil refinery and a method for representing oil supply uncertainty. The scenario generation method aims to generate a minimal number of scenarios while preserving the uncertainty characteristics as far as possible. The proposed framework was evaluated considering real-world data. The numerical results suggest that the framework is efficient at providing solutions that remain feasible in the face of the inherent uncertainty.
A framework for crude oil scheduling in an integrated terminal-refinery system under supply uncertainty
S0377221716000813
In this paper, we address a multi-activity tour scheduling problem with time-varying demand. The objective is to compute a team schedule for a fixed roster of employees in order to minimize the over-coverage and the under-coverage of different parallel activity demands along a planning horizon of one week. Numerous complicating constraints are present in our problem: all employees are different and can perform several different activities during the same day-shift; lunch breaks and pauses are flexible; and demand is given in 15-minute periods. Employees have feasibility and legality rules to be satisfied, but the objective function does not account for any quality measure associated with each individual’s schedule. More precisely, the problem simultaneously mixes days-off scheduling, shift scheduling, shift assignment, activity assignment, and pause and lunch break assignment. To solve this problem, we developed four methods: a compact Mixed Integer Linear Programming model, a branch-and-price-like approach with a nested dynamic program to heuristically solve the subproblems, a diving heuristic, and a greedy heuristic based on our subproblem solver. The computational results, based on both real cases and instances derived from real cases, demonstrate that our methods are able to provide good quality solutions in a short computing time. Our algorithms are now embedded in commercial software, which is already in use in a mini-mart company.
Column generation based approaches for a tour scheduling problem with a multi-skill heterogeneous workforce
S0377221716000825
Ridge analysis allows the analyst to explore the optimal operating conditions of the experimental factors. A confidence region is desirable for the estimated ridge path. Most of the literature concentrates on the univariate response situation. Little is known about the confidence region of the ridge path for the multivariate response; only a large-sample confidence interval for the ridge path is available. The simultaneous coverage rate for the existing interval is typically too conservative in practice, especially for small sample sizes. In this paper, the ridge path (via a desirability function) is estimated based on the seemingly unrelated regression (SUR) model as well as the standard multivariate regression (SMR) model, and a conservative confidence interval suitable for small sample sizes is proposed. It is shown that the proposed method outperforms the existing methods. Real-life examples and a simulation study are given for illustration.
A confidence region for the ridge path in multiple response surface optimization
S0377221716000837
Due to new regulations and further technological progress in the field of electric vehicles, the research community faces the new challenge of incorporating electric-energy-based restrictions into vehicle routing problems. One of these restrictions is the limited battery capacity, which makes detours to recharging stations necessary and thus requires efficient tour planning mechanisms in order to sustain the competitiveness of electric vehicles compared to conventional vehicles. We introduce the Electric Fleet Size and Mix Vehicle Routing Problem with Time Windows and Recharging Stations (E-FSMFTW) to model decisions to be made with regard to fleet composition and the actual vehicle routes, including the choice of recharging times and locations. The available vehicle types differ in their transport capacity, battery size and acquisition cost. Furthermore, we consider time windows at customer locations, which is a common and important constraint in real-world routing and planning problems. We solve this problem by means of branch-and-price and also propose a hybrid heuristic, which combines an Adaptive Large Neighbourhood Search with an embedded local search and labeling procedure for intensification. By solving a newly created set of benchmark instances for the E-FSMFTW and the existing single vehicle type benchmark using an exact method as well, we show the effectiveness of the proposed approach.
The Electric Fleet Size and Mix Vehicle Routing Problem with Time Windows and Recharging Stations
S0377221716000849
We consider a class of multi-objective probabilistically constrained programs (MOPCP) with a joint probabilistic constraint and a variable risk level. We consider two cases, with only a random right-hand side vector or with a multi-row random technology matrix, and propose a Boolean modeling framework to derive new mixed-integer linear programs (MILP) that are either equivalent reformulations or inner approximations of MOPCP, respectively. By testing randomly generated MOPCP instances, we demonstrate modeling insights pertaining to the most suitable MILP, to the trade-offs between the conflicting objectives of cost/revenue and reliability, and to the scalarization parameter determining the relative importance of each objective. We then focus on several MOPCP variants of a multi-portfolio financial optimization problem to implement a downside risk measure, which can be used in a centralized or decentralized investment context. We study the impact of modeling parameters on the portfolios, show, via a cross-validation study, the robustness of MOPCP, and perform a comparative analysis of the optimal investment decisions.
Multi-objective probabilistically constrained programs with variable risk: Models for multi-portfolio financial optimization
S0377221716000850
This paper develops estimators for the market penetration level and arrival rate when finding queue lengths from probe vehicles at isolated traffic intersections. Closed-form analytical expressions for the expectations and variances of these estimators are formulated. The derived estimators are compared based on squared error losses. The effect of the number of cycles (i.e., short-term and long-term performance), estimation at low penetration rates, and the impact of combinations of the derived estimators on the queue length problem are also addressed. Fully analytical formulas with unknown parameters are derived to evaluate how queue length estimation errors change with respect to the percentage of probe vehicles in the traffic stream. The developed models can be used for real-time cycle-to-cycle estimation of queue lengths by inputting some of the fundamental information that probe vehicles provide (e.g., location, time, and count). The models are evaluated using VISSIM microscopic simulations with different arrival patterns. Numerical experiments show that the developed estimators are able to pinpoint the true arrival rate values at a 5% probe penetration level with 10 cycles of data. For low penetrations such as 0.1%, a large number of cycles of data is required by the arrival rate estimators, which are essential for overflow queues and volume-to-capacity ratios. Queue length estimation with the tested parameter estimators is able to provide cycle-to-cycle errors within a ±5% coefficient of variation with fewer than 5 cycles of probe data at 0.1% penetration for all arrival rates used.
Queue length estimation from probe vehicles at isolated intersections: Estimators for primary parameters
S0377221716000862
We provide a novel adversarial risk analysis approach to security resource allocation decision processes for an organization which faces multiple threats over multiple sites. We deploy a Sequential Defend-Attack model for each type of threat and site, under the assumption that different attackers are uncoordinated, although cascading effects are contemplated. The models are related by resource constraints and results are aggregated over the sites for each participant and, for the Defender, by value aggregation across threats. We illustrate the model with a case study in which we support a railway operator in allocating resources to protect from two threats: fare evasion and pickpocketing. Results suggest considerable expected savings due to the proposed investments.
Multithreat multisite protection: A security case study
S0377221716000874
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes–no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognized incorrectly by the yes-filter (that is, to recognize the false positives of the yes-filter). By querying the no-filter after an object has been recognized by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that the no-filter recognizes as many false positives as possible but no true positives, thus producing the most accurate yes–no Bloom filter among all yes–no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognized by the no-filter, under the constraint that it recognizes no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Considering the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is therefore recommended for use in yes–no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example showing how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
An approximate dynamic programming approach for improving accuracy of lossy data compression by Bloom filters
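The yes–no filter described in the abstract above is easy to prototype. Below is a minimal Python sketch of the two-filter data structure; the greedy step that feeds observed false positives into the no-filter is only a stand-in for the paper's ILP/ADP selection, and the sizes, hash counts and toy data are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, [False] * m

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores selected false positives."""
    def __init__(self, m_yes, m_no, k):
        self.yes, self.no = BloomFilter(m_yes, k), BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def __contains__(self, item):
        # Accept only if the yes-filter matches and the no-filter does not.
        return item in self.yes and item not in self.no

# Greedy illustration: store 1,000 items, then add every observable false
# positive to the no-filter.  The paper instead optimizes this selection
# (ILP/ADP) so that the no-filter never rejects a true positive.
stored = {f"item{i}" for i in range(1000)}
universe = {f"item{i}" for i in range(5000)}
ynbf = YesNoBloomFilter(m_yes=8000, m_no=2000, k=4)
for x in stored:
    ynbf.add(x)
for x in universe - stored:
    if x in ynbf.yes:
        ynbf.no.add(x)
accuracy = sum((x in ynbf) == (x in stored) for x in universe) / len(universe)
print(f"accuracy over the probed universe: {accuracy:.3f}")
```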
S0377221716000886
One of the important aspects of increasing sugarcane mechanical harvesting efficiency is the path planning of the harvester, involving direction and field accessibility constraints. Moreover, in real-life applications, the two objective functions pertaining to minimization of harvested distance and maximization of sugarcane yield are conflicting and must be considered simultaneously. This paper presents a multi-objective variant of particle swarm optimization that combines gbest, lbest and nbest social structures (MO-GLNPSO) to solve mechanical harvester route planning (MHRP) for sugarcane field operations. A new particle encoding/decoding scheme has been devised for combining the path planning with the accessibility and split harvesting constraints. Numerical computation results on several networks with sugarcane field topologies illustrate the efficiency of the proposed MO-GLNPSO method for computation of MHRP, which is compared with other methods such as the traditional particle swarm optimization (PSO) and the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) using the values of the C̃ metric indicator. The solutions found in this work can offer a decision maker a choice of trade-off solutions, providing sufficient options to give planners the power to make an informed choice that balances the important objectives.
Multi-objective particle swarm optimization for mechanical harvester route planning of sugarcane field operations
S0377221716000898
Public investment into risk reduction infrastructure plays an important role in facilitating adaptation to climate-impacted hazards and natural disasters. In this paper, we provide an economic framework to incorporate investment timing and insurance market risk preferences when evaluating projects related to reducing climate-impacted risks. The model is applied to a case study of bushfire risk management. We find that optimal timing of the investment may increase the net present value (NPV) of an adaptation project for various levels of risk aversion. Assuming risk neutrality, while the market is risk averse, is found to result in an unnecessary delay of the investment into risk reduction projects. The optimal waiting time is shorter when the insurance market is more risk averse or when a more serious scenario for climatic change is assumed. A higher investment cost or a higher discount rate will increase the optimal waiting time. We also find that a stochastic discount rate results in higher NPVs of the project than a discount rate that is assumed fixed at the long-run average level.
It’s not now or never: Implications of investment timing and risk aversion on climate adaptation to extreme events
S0377221716000904
We consider a scheduling problem where a set of jobs has already been assigned to identical parallel machines that are subject to disruptions with the objective of minimizing the total completion time. When machine disruptions occur, the affected jobs need to be rescheduled with a view to not causing excessive schedule disruption with respect to the original schedule. Schedule disruption is measured by the maximum time deviation or the total virtual tardiness, given that the completion time of any job in the original schedule can be regarded as an implied due date for the job concerned. We focus on the trade-off between the total completion time of the adjusted schedule and schedule disruption by finding the set of Pareto-optimal solutions. We show that both variants of the problem are NP-hard in the strong sense when the number of machines is considered to be part of the input, and NP-hard when the number of machines is fixed. In addition, we develop pseudo-polynomial-time solution algorithms for the two variants of the problem with a fixed number of machines, establishing that they are NP-hard in the ordinary sense. For the variant where schedule disruption is modeled as the total virtual tardiness, we also show that the case where machine disruptions occur only on one of the machines admits a two-dimensional fully polynomial-time approximation scheme. We conduct extensive numerical studies to evaluate the performance of the proposed algorithms.
Rescheduling on identical parallel machines with machine disruptions to minimize total completion time
S0377221716000916
In this paper, we approach the Examination-Timetabling Problem (ETP) from a student-centric point of view. We allow for multiple versions of an exam to be scheduled to increase the spreading of exams for students. We propose two Column Generation (CG) algorithms. In the first approach, a column is defined as an exam schedule for every unique student group, and a Pricing Problem (PP) is developed to generate these columns. The Master Program (MP) then selects an exam schedule for every unique student group. Instead of using branch-and-price, we heuristically select columns. In the second approach, a column consists of a mask schedule for every unique student group, and a PP is developed to generate the masks. The MP then selects the masks and schedules exams in the mask slots. We compare both models and perform a computational experiment. We solve the ETP at KU Leuven campus Brussels (Belgium) for the business engineering degree program and apply the models to two existing datasets from the literature.
A column generation approach for solving the examination-timetabling problem
S0377221716000928
The Obnoxious p-Median problem consists of selecting a subset of p facilities from a given set of possible locations, in such a way that the sum of the distances between each customer and its nearest facility is maximized. The problem is NP-hard and can be formulated as an integer linear program. It was introduced in the 1990s, and a branch-and-cut method coupled with a tabu search has recently been proposed. In this paper, we propose a heuristic method – based on the Greedy Randomized Adaptive Search Procedure (GRASP) methodology – for finding approximate solutions to this optimization problem. In particular, we consider an advanced GRASP design in which a filtering mechanism avoids applying the local search method to low-quality constructed solutions. Empirical results indicate that the proposed implementation compares favorably to previous methods. This fact is confirmed with non-parametric statistical tests.
Advanced Greedy Randomized Adaptive Search Procedure for the Obnoxious p-Median problem
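For readers unfamiliar with the metaheuristic, the sketch below shows a GRASP skeleton for the obnoxious p-median objective (maximize the summed distance from each customer to its nearest open facility), including a simple filtering rule that skips local search on weak constructions. The RCL parameter `alpha`, the filtering threshold `filter_frac` and the toy instance are assumptions made for illustration, not the settings used in the paper.

```python
import random

def obj(sol, dist):
    """Sum over customers of the distance to their nearest open facility."""
    return sum(min(dist[c][f] for f in sol) for c in range(len(dist)))

def grasp_opm(dist, candidates, p, iters=100, alpha=0.3, filter_frac=0.9, seed=0):
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        # Greedy randomized construction: draw each facility from a restricted
        # candidate list (RCL) of the highest-scoring insertions.
        sol = set()
        while len(sol) < p:
            gains = sorted(((obj(sol | {f}, dist), f)
                            for f in candidates if f not in sol), reverse=True)
            rcl = gains[:max(1, int(alpha * len(gains)))]
            sol.add(rng.choice(rcl)[1])
        val = obj(sol, dist)
        # Filtering: only spend local-search effort on promising constructions.
        if best is not None and val < filter_frac * best_val:
            continue
        # Local search: first-improving swap of an open and a closed facility.
        improved = True
        while improved:
            improved = False
            for out in list(sol):
                for inc in candidates:
                    if inc in sol:
                        continue
                    cand = (sol - {out}) | {inc}
                    cand_val = obj(cand, dist)
                    if cand_val > val:
                        sol, val, improved = cand, cand_val, True
                        break
                if improved:
                    break
        if val > best_val:
            best, best_val = set(sol), val
    return best, best_val

# Toy instance: 8 customers, 6 candidate sites, p = 2 (random distances).
rng0 = random.Random(42)
dist = [[rng0.randint(1, 20) for _ in range(6)] for _ in range(8)]
print(grasp_opm(dist, candidates=list(range(6)), p=2))
```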
S037722171600093X
We propose a Hybrid Scenario Cluster Decomposition (HSCD) heuristic for solving a large-scale multi-stage stochastic mixed-integer programming (MS-MIP) model corresponding to a supply chain tactical planning problem. The HSCD algorithm decomposes the original scenario tree into smaller sub-trees that share a certain number of predecessor nodes. The MS-MIP model is then decomposed into smaller scenario-cluster multi-stage stochastic sub-models coordinated by Lagrangian terms in their objective functions, in order to compensate for the lack of non-anticipativity corresponding to the common ancestor nodes of the sub-trees. A sub-gradient algorithm is then implemented in order to guide the scenario-cluster sub-models towards an implementable solution. Moreover, a Variable Fixing Heuristic is embedded into the sub-gradient algorithm in order to accelerate its convergence rate. Besides being amenable to parallelization, the HSCD algorithm allows various heuristics to be embedded for solving the scenario-cluster sub-models. The algorithm is specialized to lumber supply chain tactical planning under demand and supply uncertainty. An ad-hoc heuristic, based on Lagrangian Relaxation, is proposed to solve each scenario-cluster sub-model. Our experimental results on a set of realistic-scale test cases reveal the efficiency of HSCD in terms of solution quality and computation time.
A hybrid scenario cluster decomposition algorithm for supply chain tactical planning under uncertainty
S0377221716000941
A natural way to avoid the injection of potentially dangerous or illicit products into a country is protection by means of a strict port-of-entry inspection policy. A naive exhaustive manual inspection is the most secure policy. However, the volume of incoming containers means that only a limited number of containers can be checked per day. As a consequence, a smart port-of-entry selection policy must trade off inspection cost against security, in order to fit into the dynamic operation of a port. We explore the design of port-of-entry container inspection policies under imperfect information (unavailable or untrusted data). Starting from an a-priori classification provided by the port-of-entry customs operator, a combinatorial optimization problem is introduced. The goal is to match an a-priori container classification with a logically coherent one, subject to a given level of container inspection. Inspired by the related literature, a novel Multi-Tree Committee is introduced in order to find a solution to this combinatorial problem. It combines the strength of binary decision trees and the minimization of logical functions. The algorithm is easy to handle and suitable for online production use. We highlight the effectiveness of our proposal using real traces from the port of Montevideo. The results show the capability to detect the riskiest containers and the conservative nature of the approach, which respects any desired level of inspection.
A Multi-Tree Committee to assist port-of-entry inspection decisions
S0377221716000953
While the literature on dynamic portfolio selection with stochastic interest rates has, up to now, confined its investigation to the continuous-time setting, this paper studies a multi-period mean-variance portfolio selection problem with a stochastic interest rate, where the movement of the interest rate is governed by the discrete-time Vasicek model. Invoking the dynamic programming approach and Lagrange duality theory, we derive analytical expressions for both the efficient investment strategy and the efficient mean-variance frontier of the model formulation. We then extend our model to the situation with an uncontrollable liability.
Multi-period mean-variance portfolio selection with stochastic interest rate and uncontrollable liability
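As a small illustration of the interest-rate dynamics referenced in the abstract above, the snippet below simulates a discrete-time Vasicek short rate, r_{t+1} = r_t + kappa*(theta - r_t) + sigma*eps_{t+1}. This is a common textbook parameterization; the parameter values are purely illustrative and are not taken from the paper.

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, T, n_paths, seed=0):
    """Simulate discrete-time Vasicek short-rate paths:
       r_{t+1} = r_t + kappa*(theta - r_t) + sigma*eps_{t+1},  eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    r = np.empty((n_paths, T + 1))
    r[:, 0] = r0
    for t in range(T):
        eps = rng.standard_normal(n_paths)
        r[:, t + 1] = r[:, t] + kappa * (theta - r[:, t]) + sigma * eps
    return r

# Example: 1,000 monthly rate paths over a 5-year horizon (illustrative values).
paths = simulate_vasicek(r0=0.02, kappa=0.15, theta=0.03, sigma=0.005, T=60, n_paths=1000)
print(paths.mean(axis=0)[-1])  # average terminal short rate
```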
S0377221716000965
We present numerical approximations of optimal portfolios in mispriced Lévy markets under asymmetric information for informed and uninformed investors having logarithmic preference. We apply our numerical scheme to Kou (2002) jump-diffusion markets by deriving analytic formulas for the first two derivatives of the underlying portfolio objective function which depend only on the Lévy measure of the jump-generating process. Optimal portfolios are then simulated using the Box–Muller algorithm, Newton’s method and incomplete Beta functions. Convergence dynamics and trajectories of sample paths of optimal portfolios for both investors are presented at different levels of information asymmetry, mispricing, horizon, asymmetry in the Kou density, jump intensity, volatility, mean-reversion speed, and Sharpe ratios. We also apply the proposed Newton’s algorithm to compute optimal portfolios for investors in Variance Gamma markets via instantaneous centralized moments of returns.
Numerical approximations of optimal portfolios in mispriced asymmetric Lévy markets
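The preceding abstract mentions Newton's method applied to the derivatives of a log-utility portfolio objective. The sketch below is a Monte Carlo stand-in: it maximizes E[log(1 + pi*R)] over a sampled return distribution built from a Gaussian diffusion plus occasional asymmetric double-exponential (Kou-style) jumps, whereas the paper works with analytic derivatives obtained from the Lévy measure. All parameter values are assumptions.

```python
import numpy as np

def newton_log_optimal_fraction(returns, pi0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration for the fraction pi maximizing E[log(1 + pi*R)]
    over a sample of simple returns R."""
    pi = pi0
    for _ in range(max_iter):
        wealth = 1.0 + pi * returns
        if np.any(wealth <= 0):       # keep wealth strictly positive
            pi *= 0.5
            continue
        grad = np.mean(returns / wealth)            # first derivative
        hess = -np.mean((returns / wealth) ** 2)    # second derivative (< 0)
        pi_new = pi - grad / hess
        if abs(pi_new - pi) < tol:
            return pi_new
        pi = pi_new
    return pi

# Illustrative Kou-style returns: Gaussian diffusion plus an occasional
# asymmetric double-exponential jump (all parameters are assumptions).
rng = np.random.default_rng(1)
n = 100_000
diffusion = 0.03 + 0.20 * rng.standard_normal(n)
has_jump = rng.random(n) < 0.3
up = rng.random(n) < 0.6
jump = np.where(up, rng.exponential(0.02, n), -rng.exponential(0.03, n))
returns = diffusion + np.where(has_jump, jump, 0.0)
print(newton_log_optimal_fraction(returns))
```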
S0377221716000977
Policies for managing multi-echelon supply chains can be considered mathematically as large-scale dynamic programs, affected by uncertainty and incomplete information. Except for a few special cases, optimal solutions are computationally intractable for systems of realistic size. This paper proposes a novel approximation scheme using scenario-based model predictive control (SCMPC), based on recent results in scenario-based optimization. The presented SCMPC approach can handle supply chains with stochastic planning uncertainties from various sources (demands, lead times, prices, etc.) and of a very general nature (distributions, correlations, etc.). Moreover, it guarantees a specified customer service level, when applied in a rolling horizon fashion. At the same time, SCMPC is computationally efficient and able to tackle problems of a similar scale as manageable by deterministic optimization. For a large class of supply chain models, SCMPC may therefore offer substantial advantages over robust or stochastic optimization.
Scenario-based model predictive control for multi-echelon supply chain management
S0377221716000989
The popularity of business intelligence (BI) systems to support business analytics has increased tremendously in the last decade. The determination of the data items that should be stored in the BI system is vital to ensure the success of an organisation’s business analytics strategy. Expanding conventional BI systems often leads to high costs of internally generating, cleansing and maintaining new data items, whilst the additional data storage costs are in many cases of minor concern, which is a conceptual difference from big data systems. Thus, the potential additional insights resulting from a new data item in the BI system need to be balanced against the often high costs of data creation. While the literature acknowledges this decision problem, no model-based approach to inform this decision has hitherto been proposed. The present research describes a prescriptive framework to prioritise data items for business analytics and applies it to human resources. To achieve this goal, the proposed framework captures core business activities in a comprehensive process map and assesses their relative importance and possible data support with multi-criteria decision analysis.
Prioritising data items for business analytics: Framework and application to human resources
S0377221716000990
DEA production games have recently been introduced in a paper by Lozano (2013). In the present paper we further investigate these cooperative games. We establish the links between the class of DEA production games and the classes of linear programming games and linear production games. We also analyse the Owen set of DEA production games and discuss the interpretation of these allocations for different levels of cooperation between agents.
DEA production games and Owen allocations
S0377221716001004
The Basel II and III Accords allow banks to calculate regulatory capital using their own internally developed models under the advanced internal ratings-based approach (AIRB). The Exposure at Default (EAD) is a core parameter modelled for revolving credit facilities with variable exposure. The credit conversion factor (CCF), the proportion of the current undrawn amount that will be drawn down at the time of default, is used to calculate the EAD and poses modelling challenges with its bimodal distribution bounded between zero and one. There has been debate on the suitability of the CCF for EAD modelling. We explore alternative EAD models which ignore the CCF formulation and target the EAD distribution directly. We propose a mixture model with the zero-adjusted gamma distribution and compare its performance to three variants of CCF models and a utilization change model which are used in industry and academia. Additionally, we assess credit usage – the percentage of the committed amount that has currently been drawn – as a segmentation criterion to combine direct EAD and CCF models. The models are applied to a dataset from a credit card portfolio of a UK bank. The performance of these models is compared using cross-validation on a series of measures. We find the zero-adjusted gamma model to be more accurate in calibration than the benchmark models and that segmented approaches offer further performance improvements. These results indicate that direct EAD models without the CCF formulation can be an alternative to CCF-based models, or that the two can be combined.
Exposure at default models with and without the credit conversion factor
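To make the "mixture with a zero-adjusted gamma distribution" idea concrete, here is a deliberately simplified two-part fit in Python: a point mass at zero plus a gamma distribution for the strictly positive exposures. The paper's model is a regression (covariate-dependent) version of this idea; the data below are simulated and all parameter values are assumptions.

```python
import numpy as np
from scipy import stats

def fit_zero_adjusted_gamma(ead):
    """Fit a two-part model: a point mass at zero plus a gamma distribution
    for the strictly positive exposures."""
    ead = np.asarray(ead, dtype=float)
    p_zero = np.mean(ead == 0)
    positives = ead[ead > 0]
    shape, loc, scale = stats.gamma.fit(positives, floc=0)  # fix location at 0
    return p_zero, shape, scale

def expected_ead(p_zero, shape, scale):
    """Unconditional mean EAD under the fitted two-part model."""
    return (1 - p_zero) * shape * scale

# Illustrative data: 30% of defaulted facilities have zero drawn balance.
rng = np.random.default_rng(0)
sample = np.where(rng.random(5000) < 0.3, 0.0, rng.gamma(2.0, 1500.0, 5000))
print(expected_ead(*fit_zero_adjusted_gamma(sample)))
```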
S0377221716001016
The blocks relocation problem is a classic combinatorial optimisation problem that occurs in daily operations at facilities that use block stacking systems. In the block stacking method, blocks can be stored on top of each other in order to utilise the limited surface of a storage area. When there is a predetermined pickup order among the blocks, this stacking method inevitably leads to reshuffling moves for blocks stored above the target block, and minimising such unproductive reshuffling moves is of primary concern to industry practitioners. A container terminal is a typical place where this problem arises, and the problem is therefore also referred to as the container relocation problem. In this study, we consider departure time windows for containers, which are usually revealed by the truck appointment system in port container terminals. We propose a stochastic dynamic programming model to calculate the minimum expected number of reshuffles for a stack of containers which all have departure time windows. The model is solved with a search-based algorithm in a tree search space, and an abstraction heuristic is proposed to improve the time performance. To overcome the computational limitation of exact methods, we develop a heuristic called the expected reshuffling index (ERI) and evaluate its performance.
Container relocation problem with time windows for container departure
S0377221716001028
In the operations management literature, traditional revenue management has focused on pricing and capacity allocation strategies in a two-period model with stochastic demand. Inspired by the travel and lodging industries, we examine a two-period model in which each seller may also adopt an overselling strategy towards customers whose valuations are differentiated by their timing of arrival. Overselling is widely seen as a popular hedge against consumers skipping their reservations; we extend the stylized approaches of Biyalogorsky, Carmon, Fruchter, and Gerstner (1999) and Lim (2009) to understand the value of overselling under various market structures. We find that, contrary to the existing literature, the impact of period-two pricing competition from overselling spills over to period one, so that overselling may not always be a (weakly) dominant strategy once unlimited early demand ceases to hold in a duopoly regime. We provide some numerical studies on the existence of multiple equilibria at the capacity allocation level which actually lead to different selling strategies at the equilibrium despite identical market conditions and firm characteristics.
Market structure and the value of overselling under stochastic demands
S037722171600103X
We consider a Cournot duopoly under general demand and cost functions, where an incumbent patentee has a cost reducing technology that it can license to its rival by using combinations of royalties and upfront fees (two-part tariffs). We show that for drastic technologies: (a) licensing occurs and both firms stay active if the cost function is superadditive and (b) licensing does not occur and the patentee monopolizes the market if the cost function is additive or subadditive. For non-drastic technologies, licensing takes place provided the average efficiency gain from the cost reducing technology is higher than the marginal gain computed at the licensee’s reservation output. Optimal licensing policies have both royalties and fees for significantly superior technologies if the cost function is superadditive. By contrast, for additive and certain subadditive cost functions, optimal licensing policies have only royalties and no fees.
Licensing under general demand and cost functions
S0377221716300017
Robust Portfolio Modeling (RPM) supports multi-attribute project portfolio selection with uncertain project scores and decision maker preferences. By determining non-dominated portfolios for all possible realizations of uncertain parameters, decision recommendations produced by RPM may prove too conservative for real-life decision problems. We develop a methodology to reduce the set of possible realizations by limiting the number of project scores that may simultaneously deviate from their most likely value. By adjusting this limit, decision makers can choose desired levels of conservatism. Our approach also makes it possible to capture dependencies among project scores as well as uncertainty in portfolio constraints.
Adjustable robustness for multi-attribute project portfolio selection
S0377221716300029
Pairwise comparison is a key component in multi-criteria decision making. The probability of rank reversal is a useful measure for evaluating the impact of uncertainty on the final outcome. In the context of this paper, the type of uncertainty considered is related to the fact that experts have different opinions or that they may perform inconsistent pairwise comparisons. We provide a theoretical model for estimating the probability of the consequent rank reversal using the multivariate normal cumulative distribution function. The model is applied to two alternative weight extraction methods frequently used in the literature: the geometric mean and the eigenvalue method. We introduce a reasonable framework for incorporating uncertainty in the decision making process and calculate the mean value and cross-correlation of the average weights, which are required in the application of the model. The theoretical results are compared against numerical simulations and a very good agreement is observed. We further show how our model can be extended to applications of a full multi-criteria decision making analysis, such as the analytic hierarchy process. We also discuss how the theoretical model can be used in practice when the statistical properties of the uncertainty-induced perturbations are unknown and the only information provided is the pairwise comparison matrices of a small group of experts. The methodology presented here can be used to extend the pairwise comparison framework in order to provide some information on the credibility of its outcome.
Theoretical estimation of the probability of weight rank reversal in pairwise comparisons
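A quick way to sanity-check a theoretical rank-reversal probability of this kind is a Monte Carlo experiment. The sketch below perturbs the upper-triangular judgments of a pairwise comparison matrix with log-normal noise (the noise model and its standard deviation are assumptions, not the paper's), recomputes geometric-mean weights, and counts how often the top two alternatives swap ranks.

```python
import numpy as np

def geometric_mean_weights(A):
    """Priority weights from a pairwise comparison matrix via row geometric means."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def rank_reversal_probability(A, sigma=0.15, n_sim=20000, seed=0):
    """Monte Carlo estimate of the probability that the top two alternatives
    swap ranks under log-normal perturbations of the upper-triangular judgments
    (sigma is the standard deviation on the log scale, an assumption)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    base = geometric_mean_weights(A)
    i, j = np.argsort(base)[-1], np.argsort(base)[-2]   # best and runner-up
    reversals = 0
    for _ in range(n_sim):
        P = A.copy()
        for r in range(n):
            for c in range(r + 1, n):
                P[r, c] = A[r, c] * np.exp(sigma * rng.standard_normal())
                P[c, r] = 1.0 / P[r, c]
        w = geometric_mean_weights(P)
        if w[j] > w[i]:
            reversals += 1
    return reversals / n_sim

A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]])
print(rank_reversal_probability(A))
```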
S0377221716300200
A Just-In-Time (JIT) system involves frequent shipments of smaller batch sizes from the supplier to the buyer. For the buyer, this results in a reduction of the inventory holding cost. However, it is often accompanied by an increase in the setup cost for the supplier. Thus, the supplier may be reluctant to switch to the JIT mode unless he is assured of some form of compensation. In this paper, we introduce a pricing scheme in which the buyer offers the supplier an increase in the wholesale price to encourage the supplier to switch to the JIT mode. Such a pricing scheme may be termed a surcharge. We develop the economics of surcharge pricing as a supply chain coordinating mechanism in a JIT environment. We also establish the equivalence of surcharge pricing with other common coordination mechanisms such as the Quantity Discount (QD) and Joint Economic Lot Sizing (JELS) models.
A surcharge pricing scheme for supply chain coordination under JIT environment
S0377221716300212
In this study, we focus on the quality of Condorcet and Approval Voting winners, using the Median and Maximum Coverage problems as benchmarks. We assess the quality of solutions produced by democratic processes assuming many dimensions for evaluating candidates. We use different norms to map multidimensional preferences into single values. We perform extensive numerical experiments. The Condorcet winner, when he/she exists, may have very high quality measured by the Median objective function, but poor quality measured by the Maximum Coverage problem. We show that the Approval Voting winner is optimal when quality is measured by the Maximum Coverage objective and fares well when the Median objective is employed. The analyses further indicate that the number of voters and the distance norm may increase, while the number of candidates and dimensions may decrease, the quality of democratic methods.
Democratic elections and centralized decisions: Condorcet and Approval Voting compared with Median and Coverage locations
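The comparison described above is easy to reproduce on synthetic data. The sketch below places voters and candidates in a low-dimensional space, derives preferences from Euclidean distance, and computes a Condorcet winner (if one exists) and an Approval Voting winner; the approval threshold and the uniform spatial model are assumptions made only for illustration.

```python
import numpy as np

def condorcet_winner(ranks):
    """ranks[v][c] = position of candidate c in voter v's ranking (lower is better).
    Returns the candidate beating every other in pairwise majority, or None."""
    n_voters, n_cands = ranks.shape
    for c in range(n_cands):
        if all(np.sum(ranks[:, c] < ranks[:, d]) > n_voters / 2
               for d in range(n_cands) if d != c):
            return c
    return None

def approval_winner(utilities, threshold):
    """Each voter approves candidates whose utility exceeds the threshold;
    the candidate with the most approvals wins."""
    approvals = (utilities > threshold).sum(axis=0)
    return int(np.argmax(approvals))

# Illustrative multidimensional setting: voters and candidates are points in
# the unit square, and preferences derive from Euclidean distance.
rng = np.random.default_rng(0)
voters, cands = rng.random((50, 2)), rng.random((5, 2))
dist = np.linalg.norm(voters[:, None, :] - cands[None, :, :], axis=2)
ranks = dist.argsort(axis=1).argsort(axis=1)
print(condorcet_winner(ranks), approval_winner(-dist, threshold=-np.median(dist)))
```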
S0377221716300224
We consider the flowshop problem on two machines with sequence-independent setup times to minimize total completion time. Large scale network flow formulations of the problem are suggested together with strong Lagrangian bounds based on these formulations. To cope with their size, filtering procedures are developed. To solve the problem to optimality, we embed the Lagrangian bounds into two branch-and-bound algorithms. The best algorithm is able to solve all 100-job instances of our testbed with setup times and all 140-job instances without setup times, thus significantly outperforming the best algorithms in the literature.
The two-machine flowshop total completion time problem: Branch-and-bound algorithms based on network-flow formulation
S0377221716300248
This paper considers a multi-product short sea inventory-routing problem in which a heterogeneous fleet of ships transports multiple products from production sites to consumption sites in a continuous time framework. A many-to-many distribution structure is taken into account, which makes it extremely hard to even compute feasible solutions. We propose an iterative two-phase hybrid matheuristic called Hybrid Cargo Generating and Routing (HCGR) to solve the problem. In the first phase the inventory-routing problem is converted into a ship routing and scheduling problem by generating cargoes subject to inventory limits through the use of mathematical programming. In the second phase, an adaptive large neighborhood search solves the resulting ship routing and scheduling problem. The HCGR heuristic iteratively modifies the generated cargoes based on information obtained during the process. The proposed heuristic is compared with an exact algorithm on small size instances; computational results are also presented on larger and more realistic instances.
An iterative two-phase hybrid matheuristic for a multi-product short sea inventory-routing problem
S037722171630025X
This paper addresses an integrated framework for supplier selection in the processed food industry under uncertainty. The relevance of including tactical production and distribution planning in this procurement decision is assessed. The contribution of this paper is three-fold. Firstly, we propose a new two-stage stochastic mixed-integer programming model for supplier selection in the processed food industry that maximizes profit and minimizes the risk of low customer service. Secondly, we reiterate the importance of considering the main complexities of food supply chain management, such as perishability of both raw materials and final products, uncertainty in both downstream and upstream parameters, and age-dependent demand. Thirdly, we develop a solution method based on a multi-cut Benders decomposition and generalized disjunctive programming. Results indicate that sourcing and branding actions vary significantly between an integrated and a decoupled approach. The proposed multi-cut Benders decomposition algorithm improved the solutions of the larger instances of this problem when compared with a classical Benders decomposition algorithm and with the solution of the monolithic model.
Supplier selection in the processed food industry under uncertainty
S0377221716300261
Companies operating global supply chains in various industries struggle with parallel importers diverting goods from authorized channels to gray markets. While the existing gray market literature mainly focuses on pricing, in this paper we develop a model to examine the role of demand enhancing services as non-price mechanisms for coping with gray markets. We consider a manufacturer that sells a product in two markets and a parallel importer that transfers the product from the low-price market to the high-price market and competes with the manufacturer on price and service. We show that parallel importation forces the manufacturer to provide more service in both markets. We explore the value of service and the effects of competition intensity and market responsiveness to service on the manufacturer’s policy. We find that a little service can go a long way in boosting the profit of the manufacturer. Investing in service enables the manufacturer to differentiate herself from the parallel importer and to achieve the ideal price discrimination. In addition, service increases the value of strategic price discrimination when facing parallel importation. We also analyze the case when the manufacturer sells through a retailer in the high price market and can delegate service provision to the retailer or provide service herself. We find that delegating service to the retailer reduces double marginalization and can simultaneously benefit the manufacturer and the retailer, even if the retailer is not as efficient as the manufacturer.
Beyond price mechanisms: How much can service help manage the competition from gray markets?
S0377221716300273
Complex multi-state warm standby systems subject to different types of failures and preventive maintenance are modelled by considering discrete marked Markovian arrival processes. The system is composed of K units, one online and the rest in warm standby, and of an indefinite number of repairpersons, R. The online unit passes through several performance states, which are partitioned into two types: minor and major. This unit can fail due to wear or to an external shock. In both cases, the failure can be repairable or non-repairable. Warm standby units can only undergo repairable failures due to wear. Derived systems are modelled from the basic one according to the type of failure (repairable or non-repairable) and preventive maintenance. When a unit undergoes a repairable failure, it goes to the repair facility for corrective repair, and if the failure is non-repairable, the unit is replaced by a new, identical one. Preventive maintenance is carried out in response to random inspections. When an inspection takes place, the online unit is observed, and if its performance state is major, the unit is sent to the repair facility for preventive maintenance. Preventive maintenance and corrective repair times follow different distributions according to the type of failure. The systems are modelled in the transient regime, relevant performance measures are obtained, and rewards and costs are calculated. All results are expressed in algorithmic form and implemented computationally in Matlab. A numerical example shows the versatility of the proposed model.
Complex multi-state systems modelled through marked Markovian arrival processes
S0377221716300285
Issues related to decision making based on dispersed knowledge are discussed in this paper. A system that combines classifiers into coalitions is used. In this article, clusters generated using an approach with a negotiation stage are employed. Such clusters are more complex and are better able to reconstruct the agents' views on the classifications. However, a significant improvement is not obtained when these clusters are used without an additional enhancement to the method of conflict analysis. In order to take full advantage of the clustering method, the size and structure of the clusters should be taken into account. Therefore, the main aim of this paper is to examine the impact of the method of conflict analysis, and of the methods used to determine the individual weights of the clusters, on the effectiveness of inference in a system that has a negotiation stage. Four new methods for determining the strength of a coalition are proposed and compared. Tests performed on data from the University of California, Irvine Repository are presented. The results obtained are much better than in the case in which the strength of the clusters was not calculated. The approach that computes individual weights from the judgments of each cluster allows the size and structure of the clusters to be taken into account. This in turn allows full advantage to be taken of a clustering method with a negotiation stage.
The strength of coalition in a dispersed decision support system with negotiations
S0377221716300297
This paper presents an exact algorithm for solving the knapsack sharing problem with common items. In the literature, this problem is also referred to as the Generalized Knapsack Sharing Problem (GKSP). The GKSP is NP-hard because it generalizes both the 0–1 knapsack problem and the knapsack sharing problem. The proposed exact method is based on a rigorous decomposition technique which leads to a substantial simplification of the solution procedure for the GKSP. Furthermore, in order to accelerate the procedure for finding the optimal solution, an upper bound and several reduction strategies are considered. Computational results on two sets of benchmark instances from the literature show that the proposed method outperforms the other approaches on most instances.
An exact decomposition algorithm for the generalized knapsack sharing problem
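The decomposition idea can be illustrated on the plain two-class knapsack sharing problem (no common items), whose max-min objective separates once the split of the shared capacity is fixed: each class then solves an ordinary 0–1 knapsack. The brute-force sketch below enumerates the split; it deliberately omits the common items that make the GKSP harder, and the instance data are assumptions.

```python
def knapsack_profits(weights, profits, capacity):
    """Classic 0-1 knapsack DP; returns best[c] = max profit within capacity c."""
    best = [0] * (capacity + 1)
    for w, p in zip(weights, profits):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best

def two_class_knapsack_sharing(class1, class2, capacity):
    """Max-min profit over two classes sharing one capacity: enumerate how much
    capacity each class receives and combine the per-class knapsack DPs."""
    f1 = knapsack_profits(*zip(*class1), capacity) if class1 else [0] * (capacity + 1)
    f2 = knapsack_profits(*zip(*class2), capacity) if class2 else [0] * (capacity + 1)
    return max(min(f1[c], f2[capacity - c]) for c in range(capacity + 1))

# (weight, profit) pairs per class; shared capacity of 10 (illustrative data).
class1 = [(3, 10), (4, 14), (2, 7)]
class2 = [(5, 12), (3, 9), (4, 11)]
print(two_class_knapsack_sharing(class1, class2, 10))
```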
S0377221716300303
In the present work, we develop a multi-class multi-server queuing model with heterogeneous servers under the accumulating priority queuing discipline, where customers accumulate priority credits as a linear function of their waiting time in the queue, at rates that are specific to the class to which they belong. At a service completion instant, the customer with the greatest accumulated priority commences service. When the system has more than one idle server, the so-called r-dispatch policy is implemented to determine which of the idle servers is selected to serve a newly arriving customer. We establish the waiting time distribution for each class of customers. We also present a conservation law for the mean waiting time in M/M_i/c systems, and study a cost function in relation to the conservation law to optimize the level of heterogeneity among the service times in M/M_i/2 systems. Numerical investigations through simulation are carried out to validate our model.
Multi-server accumulating priority queues with heterogeneous servers
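A small discrete-event simulation is a convenient companion to analytical waiting-time results of the kind described above. The sketch below simulates a two-class M/M/c queue under the accumulating-priority rule (a waiting class-k customer gains priority at rate b[k] per unit of waiting time); for simplicity it uses homogeneous servers and omits the r-dispatch policy, and all rates are illustrative assumptions.

```python
import heapq, random

def simulate_apq(lam, mu, b, c, horizon=100_000, seed=0):
    """Mean waiting time per class in a multi-class M/M/c queue where a waiting
    class-k customer accumulates priority b[k] * (time waited)."""
    rng = random.Random(seed)
    K = len(lam)
    events = []                      # (time, kind, class); kind 0=arrival, 1=completion
    for k in range(K):
        heapq.heappush(events, (rng.expovariate(lam[k]), 0, k))
    queue, busy = [], 0              # queue holds (arrival_time, class)
    waits = [[] for _ in range(K)]
    while events:
        t, kind, k = heapq.heappop(events)
        if t > horizon:
            break
        if kind == 0:                # arrival of a class-k customer
            heapq.heappush(events, (t + rng.expovariate(lam[k]), 0, k))
            if busy < c:
                busy += 1
                waits[k].append(0.0)
                heapq.heappush(events, (t + rng.expovariate(mu), 1, -1))
            else:
                queue.append((t, k))
        else:                        # a server finished service
            if queue:
                # serve the waiting customer with the largest accumulated priority
                idx = max(range(len(queue)),
                          key=lambda i: b[queue[i][1]] * (t - queue[i][0]))
                arrival, kk = queue.pop(idx)
                waits[kk].append(t - arrival)
                heapq.heappush(events, (t + rng.expovariate(mu), 1, -1))
            else:
                busy -= 1
    return [sum(w) / len(w) for w in waits]

print(simulate_apq(lam=[0.4, 0.6], mu=1.0, b=[2.0, 1.0], c=2))
```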
S0377221716300315
Despite many proposed alternatives, the predominant model in portfolio selection is still mean–variance. However, the main weakness of the mean–variance model lies in the specification of the expected returns of the individual securities involved. If this process is not accurate, the allocations of capital to the different securities will almost certainly be incorrect. If, however, this process can be made accurate, then correct allocations can be made, and the additional expected return following from this is the value of information. This paper thus proposes a methodology to calculate the value of information. A related notion, the level of disappointment, is also presented. The importance of value-of-information calculations in helping a mutual fund decide how much to set aside for research is discussed with reference to an illustrative Taiwan Stock Exchange application, in which the value of information appears to be substantial. Heavy use is made of parametric quadratic programming to keep computation times down for the methodology.
Value of information in portfolio selection, with a Taiwan stock market application illustration
S0377221716300327
This paper presents an exact algorithm, a constructive heuristic algorithm, and a metaheuristic for the Hamiltonian p-Median Problem (HpMP). The exact algorithm is a branch-and-cut algorithm based on an enhanced p-median based formulation, which is proved to dominate an existing p-median based formulation. The constructive heuristic is a giant tour heuristic, based on a dynamic programming formulation to optimally split a given sequence of vertices into cycles. The metaheuristic is an iterated local search algorithm using 2-exchange and 1-opt operators. Computational results show that the branch-and-cut algorithm outperforms the existing exact solution methods.
Exact and heuristic algorithms for the Hamiltonian p-median problem
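The constructive heuristic mentioned in the abstract above relies on a dynamic program that optimally cuts a fixed "giant tour" into p consecutive cycles. The sketch below implements that splitting step on its own (the construction of the giant tour and the local search are omitted); the minimum cycle size of three vertices and the toy instance are assumptions.

```python
import math

def cycle_cost(seq, i, j, dist):
    """Cost of the cycle visiting seq[i:j] in order and returning to seq[i]."""
    cost = dist[seq[j - 1]][seq[i]]
    for k in range(i, j - 1):
        cost += dist[seq[k]][seq[k + 1]]
    return cost

def split_giant_tour(seq, p, dist, min_len=3):
    """DP that optimally cuts a fixed vertex sequence into p consecutive cycles,
    each with at least min_len vertices (assumes len(seq) >= p * min_len)."""
    n = len(seq)
    INF = math.inf
    f = [[INF] * (n + 1) for _ in range(p + 1)]
    cut = [[None] * (n + 1) for _ in range(p + 1)]
    f[0][0] = 0.0
    for k in range(1, p + 1):
        for j in range(k * min_len, n + 1):
            for i in range((k - 1) * min_len, j - min_len + 1):
                if f[k - 1][i] == INF:
                    continue
                val = f[k - 1][i] + cycle_cost(seq, i, j, dist)
                if val < f[k][j]:
                    f[k][j], cut[k][j] = val, i
    # recover the cycles from the stored cut points
    cycles, j = [], n
    for k in range(p, 0, -1):
        i = cut[k][j]
        cycles.append(seq[i:j])
        j = i
    return f[p][n], cycles[::-1]

# Small illustrative instance: 6 points on a line, split into 2 cycles.
pts = [0, 1, 2, 10, 11, 12]
dist = [[abs(a - b) for b in pts] for a in pts]
print(split_giant_tour(list(range(6)), p=2, dist=dist))
```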
S0377221716300339
In this paper we analyze a particular aspect of capacity planning that is concerned with the active trading of production facilities. For a homogeneous product market, we provide a theoretical rationale for the valuation and trading of these assets based on a metric of strategic slack. We show that trading production assets with non-additive portfolio profitability involves complex coordination with multiple equilibria, and that these equilibria depend on the foresight in the planning horizon. Using the concept of strategic slack, we analyze the dynamics of market structure and the impact of asset trading on the industry's level of production, and derive bounds on the value of the traded assets. Moreover, through computational learning, the formulation is applied to a large oligopolistic electricity market, showing that plant trading tends to lead to increased market concentration, higher prices, lower production and a decrease in consumer surplus.
Dynamic capacity planning using strategic slack valuation
S0377221716300340
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be in the form of countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g., carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured with crisp input and output data (desirable and undesirable). However, the input and output data, such as CO2 emissions, in real-world evaluation problems are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
Carbon efficiency evaluation: An analytical framework using fuzzy DEA
S0377221716300352
The capacitated arc routing problem (CARP) is a difficult combinatorial optimization problem that has been intensively studied in the last decades. We present a hybrid metaheuristic approach (HMA) to solve this problem which incorporates an effective local refinement procedure, coupling a randomized tabu thresholding procedure with an infeasible descent procedure, into the memetic framework. Other distinguishing features of HMA include a specially designed route-based crossover operator for solution recombination and a distance-and-quality based replacement criterion for pool updating. Extensive experimental studies show that HMA is highly scalable and is able to quickly identify either the best known results or improved best known results for almost all currently available CARP benchmark instances. In particular, it discovers an improved best known result for 15 benchmark instances (6 classical instances and 9 large-sized instances whose optima are unknown). Furthermore, we analyze some key elements and properties of the HMA-CARP algorithm to better understand its behavior.
A hybrid metaheuristic approach for the capacitated arc routing problem
S0377221716300364
Recent research reveals that pro-active real-time routing approaches that use stochastic knowledge about future requests can significantly improve solution quality compared to approaches that simply integrate new requests upon arrival. Many of these approaches assume that request arrivals on different days follow an identical pattern. Thus, they define and apply a single profile of past request days to anticipate future request arrivals. In many real-world applications, however, different days may follow different patterns. Moreover, the pattern of the current day may not be known beforehand, and may need to be identified in real-time during the day. In such cases, applying approaches that use a single profile is not promising. In this paper, we propose a new pro-active real-time routing approach that applies multiple profiles. These profiles are generated by grouping together days with a similar pattern of request arrivals. For each combination of identified profiles, stochastic knowledge about future request arrivals is derived in an offline step. During the day, the approach repeatedly evaluates characteristics of request arrivals and selects a suitable combination of profiles. The performance of the new approach is evaluated in computational experiments in direct comparison with a previous approach that applies only a single profile. Computational results show that the proposed approach significantly outperforms the previous one. We analyze further potential for improvement by comparing the approach with an omniscient variant that knows the actual pattern in advance. Based on the results, managerial implications that allow for a practical application of the new approach are provided.
Pro-active real-time routing in applications with multiple request patterns
S0377221716300376
Patients who fail to attend their appointments complicate appointment scheduling systems. The accurate prediction of no-shows may assist a clinic in developing operational mitigation strategies, such as overbooking appointment slots or special management of patients who are predicted to be highly likely not to attend. We present a new model for predicting no-show behavior based solely on the binary representation of a patient's historical attendance record. Our model is a parsimonious, purely predictive analytics technique, which combines regression-like modeling and functional approximation, using a sum of exponential functions, to produce probability estimates. It estimates parameters that can give insight into the way in which past behavior affects future behavior, which is important for clinic planning and scheduling decisions to improve patient service. Additionally, our choice of exponential functions for modeling leads to tractable analysis that is proved to produce optimal and unique solutions. We illustrate our approach using data on patients’ attendance and non-attendance at Veterans Health Administration (VHA) outpatient clinics.
Predictive analytics model for healthcare planning and scheduling
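To give a flavour of a "sum of exponentials" predictor on binary attendance histories, the sketch below fits a hypothetical three-parameter model: a base rate plus an exponentially decaying memory of past no-shows, calibrated by least squares on toy data. This is an illustrative stand-in under assumed functional form and data, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize

def predict(params, history):
    """Probability the next appointment is missed, given a binary history
    (1 = past no-show): a base rate plus exponentially decaying memory of
    past no-shows, clipped to a valid probability."""
    base, weight, decay = params
    ages = np.arange(len(history), 0, -1)        # most recent visit has age 1
    score = base + weight * np.sum((decay ** ages) * np.asarray(history))
    return float(np.clip(score, 1e-6, 1 - 1e-6))

def fit(histories, outcomes):
    """Least-squares fit of (base, weight, decay) on observed next-visit outcomes."""
    def loss(params):
        return sum((predict(params, h) - y) ** 2 for h, y in zip(histories, outcomes))
    res = minimize(loss, x0=[0.1, 0.5, 0.7], bounds=[(0, 1), (0, 1), (0, 1)])
    return res.x

# Toy data: past attendance records (1 = no-show) and the next-visit outcome.
histories = [[0, 0, 1, 1], [0, 0, 0, 0], [1, 1, 1, 0], [0, 1, 0, 1]]
outcomes = [1, 0, 1, 1]
params = fit(histories, outcomes)
print(params, predict(params, [0, 0, 1, 1]))
```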
S0377221716300388
The Single-Picker Routing Problem deals with the determination of the sequence according to which article locations have to be visited in a distribution warehouse and the identification of the corresponding paths which have to be traveled by human operators (order pickers) in order to collect a set of items requested by internal or external customers. The Single-Picker Routing Problem (SPRP) represents a special case of the classic Traveling Salesman Problem (TSP) and, therefore, can also be modeled as a TSP. Standard TSP formulations applied to the SPRP, however, neglect that in distribution warehouses article locations are arranged in a specifically structured way. When arranged according to a block layout, articles are located in parallel picking aisles, and order pickers can only change over to another picking aisle at certain positions by means of so-called cross aisles. In this paper, for the first time a mathematical programming formulation is proposed which takes into account this specific property. By means of extensive numerical experiments it is shown that the proposed formulation is superior to standard TSP formulations.
A new mathematical programming formulation for the Single-Picker Routing Problem
S037722171630039X
We consider storage loading problems where items with uncertain weights have to be loaded into a storage area, taking into account stacking and payload constraints. Following the robust optimization paradigm, we propose strict and adjustable optimization models for finite and interval-based uncertainties. To solve these problems, exact decomposition and heuristic solution algorithms are developed. For strict robustness, we also propose a compact formulation based on a characterization of worst-case scenarios. Computational results for randomly generated data with up to 300 items are presented showing that the robustness concepts have different potential depending on the type of data being used.
Robust storage loading problems with stacking and payload constraints
S0377221716300406
Preventive maintenance over the warranty period has a crucial effect on the warranty servicing cost. Numerous papers on the optimal maintenance strategies for warranted items have focused on the case of items from homogeneous populations. However, most real-life populations are heterogeneous. In this paper, we assume that an item is randomly selected from a mixed population composed of two stochastically ordered subpopulations and that the subpopulation from which the item is chosen is unknown. As the operational history of an item contains information on the chosen subpopulation, we utilize this information to develop and justify a new information-based warranty policy. For illustration of the proposed model, we provide and discuss relevant numerical examples.
On information-based warranty policy for repairable products from heterogeneous population
S0377221716300583
We study the optimal production planning for an assembly system consisting of n components in a single period setting. Demand for the end-product is random and production and assembly capacities are uncertain due to unexpected breakdowns, repairs and reworks, etc. The cost-minimizing firm (she) plans components production before the production capacities are realized, and after the outputs of components are observed, she decides the assembly amount before the demand realization. We start with a simplified system of selling two complementary products without an assembly stage and find that the firm's best choices can only be: (a) producing no products or producing only the product of less stock such that its target amount is not higher than the other product's initial stock level, or (b) producing both products such that their target amounts are equal. Leveraging on these findings, the two-dimensional optimization problem is reduced to two single-dimensional sub-problems and the optimal solution is characterized. For a general assembly system with n components, we show that if initially the firm has more end-products than a certain level, she will neither produce any component nor assemble end-product; if she does not have that many end-products but does have enough mated components, she will produce nothing and assemble up to that level; otherwise she will try to assemble all mated components and plan production of components accordingly. We characterize the structure of optimal solutions and find the solutions analytically.
Optimal production planning for assembly systems with uncertain capacities and random demand
S0377221716300595
Prior experimental research shows that, in aggregate, decision makers acting as suppliers to a newsvendor do not set the wholesale price to maximize supplier profits. However, these deviations from optimal have rarely been examined at an individual level. In this study, presented with scenarios that differ in terms of how profit is shared between retailer and supplier, suppliers set wholesale price contracts which deviate from profit-maximization in ways that are either generous or spiteful. On an individual basis, these deviations were found to be consistent with how the profit-maximizing contract compares to the subject's idea of a fair contract. Suppliers moved nearer to self-reported ideal allocations when they indicated a high degree of concern for fairness, consistent with previously proposed fairness models, and were found to be more likely to act upon generous inclinations than spiteful ones.
Generous, spiteful, or profit maximizing suppliers in the wholesale price contract: A behavioral study
S0377221716300601
We run a choice-based conjoint (CBC) analysis for term life insurance on a sample of 2017 German consumers using data from web-based experiments. Individual-level part-worth profiles are estimated by means of a hierarchical Bayes model. Drawing on the elicited preference structures, we then compute relative attribute importances and different willingness to pay measures. In addition, we present comprehensive simulation results for a realistic competitive setting that allows us to assess product switching as well as market expansion effects. On average, brand, critical illness cover, and underwriting procedure turn out to be the most important nonprice product attributes. Hence, if a policy comprises their favored specifications, customers accept substantial markups in the monthly premium. Furthermore, preferences vary considerably across the sample. While some individuals are prepared to pay relatively high monthly premiums, a large fraction exhibits no willingness to pay for term life insurance at all, presumably due to the absence of a need for mortality risk coverage. We also illustrate that utility-driven product optimization is well-suited to gain market shares, avoid competitive price pressure, and access additional profit potential. Finally, based on estimated demand sensitivities and a set of cost assumptions, it is shown that insurers require an in-depth understanding of preferences to identify the profit-maximizing price.
On consumer preferences and the willingness to pay for term life insurance
S0377221716300613
Over the last decades, speculative investors in the FX market have profited from the well-known currency carry trade strategy (CT). However, during currency or global financial crashes, CT produces substantial losses. In this work we present a methodology that enhances CT performance significantly. Backtests of our final strategy show that the mean-semivolatility ratio can be more than doubled with respect to the benchmark CT. To achieve this, we first identify and classify CT returns according to their behavior in different regimes, using a Hidden Markov Model (HMM). The model helps to determine when to open and close positions, depending on whether the regime is favorable to CT or not. Finally, we employ a mean-semivariance allocation model to improve allocations when positions are opened.
Dynamic allocations for currency futures under switching regimes signals
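To make the regime-switching idea in the abstract above concrete, the following sketch fits a two-state Gaussian hidden Markov model to synthetic carry-trade returns and gates positions on the decoded regime. It assumes the third-party hmmlearn package is available; the synthetic data, the two-state choice and the simple open/close rule are illustrative assumptions, not the paper's calibration or allocation model.
```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is installed

rng = np.random.default_rng(0)
# Synthetic daily carry-trade returns: a calm regime and a crash-prone regime
calm = rng.normal(0.0004, 0.004, size=750)
crisis = rng.normal(-0.002, 0.015, size=250)
returns = np.concatenate([calm, crisis]).reshape(-1, 1)

# Fit a two-state Gaussian HMM and decode the most likely regime per day
hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(returns)
states = hmm.predict(returns)

# Illustrative gating rule: hold the carry position only in the regime with the
# higher estimated mean return (a stand-in for the paper's open/close signal)
favourable = int(np.argmax(hmm.means_.ravel()))
position = (states == favourable).astype(float)
strategy_returns = position * returns.ravel()
print("days in position:", int(position.sum()),
      "mean daily return:", strategy_returns.mean())
```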
S0377221716300637
Given a collection of n items (elements) and an associated symmetric distance dij between each pair of items i and j, we seek a subset P of these items (with a given cardinality p) so that the minimum pairwise distance among the selected items is maximized. This problem is known as the max–min diversity problem or the p-dispersion problem, and it is known to be NP-hard. We define a collection of node packing problems associated with each instance of this problem and employ a binary search among these node packing problems to devise an effective procedure for solving the original problem. We employ existing integer programming techniques, i.e., branch-and-bound and strong valid inequalities, to solve these node packing problems. Through a computational experiment we show that this approach can be used to solve relatively large instances of the p-dispersion problem, i.e., instances with more than 1000 items. We also discuss an application of this problem in the context of locating traffic sensors in a highway network.
An integer programming approach for solving the p-dispersion problem
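The abstract above outlines a binary search over node packing feasibility problems. The toy sketch below reproduces that outer logic on a small instance; the inner packing check is done by brute-force enumeration here purely for illustration, whereas the paper solves the node packing problems with branch-and-bound and strong valid inequalities.
```python
import itertools
import numpy as np

def p_dispersion(dist, p):
    """Toy solver: binary-search the candidate threshold values and, for each,
    check whether p items can be packed with all pairwise distances >= threshold.
    The packing check is brute force here, used only for illustration."""
    n = len(dist)
    thresholds = sorted({dist[i][j] for i in range(n) for j in range(i + 1, n)})

    def packable(t):
        # Is there a subset of size p whose minimum pairwise distance is >= t?
        for subset in itertools.combinations(range(n), p):
            if all(dist[i][j] >= t for i, j in itertools.combinations(subset, 2)):
                return subset
        return None

    lo, hi, best = 0, len(thresholds) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        subset = packable(thresholds[mid])
        if subset is not None:
            best = (thresholds[mid], subset)
            lo = mid + 1          # feasible: try a larger minimum distance
        else:
            hi = mid - 1
    return best

# Small example: 6 random points in the plane, choose p = 3
pts = np.random.default_rng(1).random((6, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
print(p_dispersion(d, 3))
```
The binary search is valid because feasibility is monotone in the threshold: if p items can be packed at distance t, they can also be packed at any smaller distance.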
S0377221716300649
Due to the development of sensor technologies nowadays, condition-based maintenance (CBM) programs can be established and optimized based on the data collected through condition monitoring. The CBM activities can significantly increase the uptime of a machine. However, they should be conducted in a coordinated way with the production plan to reduce the interruptions. On the other hand, the production lot size should also be optimized by taking the CBM activities into account. Relatively fewer works have been done to investigate the impact of the CBM policy on production lot-sizing and to propose joint optimization models of both the economic manufacturing quantity (EMQ) and CBM policy. In this paper, we evaluate the average long-run cost rate of a degrading manufacturing system using renewal theory. The optimal EMQ and CBM policy can be obtained by minimizing the average long-run cost rate that includes setup cost, inventory holding cost, lost sales cost, predictive maintenance cost and corrective maintenance cost. Unlike previous works on this topic, we allow the use of continuous time and continuous state degradation processes, which broadens the application area of this model. Numerical examples are provided to illustrate the utilization of our model.
Joint optimization of condition-based maintenance and production lot-sizing
S0377221716300650
We introduce a parallel machine scheduling problem in which the processing times of jobs are not given in advance but are determined by a system of linear constraints. The objective is to minimize the makespan, i.e., the maximum job completion time among all feasible choices. This novel problem is motivated by various real-world application scenarios. We discuss the computational complexity and algorithms for various settings of this problem. In particular, we show that if there is only one machine with an arbitrary number of linear constraints, or there is an arbitrary number of machines with no more than two linear constraints, or both the number of machines and the number of linear constraints are fixed constants, then the problem is polynomial-time solvable via solving a series of linear programming problems. If both the number of machines and the number of constraints are inputs of the problem instance, then the problem is NP-hard. We further propose several approximation algorithms for the latter case.
Scheduling under linear constraints
S0377221716300662
We study incentive issues seen in a firm performing global planning and manufacturing, and local demand management. The stochastic demands in local markets are best observed by the regional business units, and the firm relies on the business units’ forecasts for planning of global manufacturing operations. We propose a class of performance evaluation schemes that induce the business units to reveal their private demand information truthfully by turning the business units’ demand revelation game into a potential game with truth telling being a potential maximizer, an appealing refinement of Nash equilibrium. Moreover, these cooperative performance evaluation schemes satisfy several essential fairness notions. After analyzing the characteristics of several performance evaluation schemes in this class, we extend our analysis to include the impact of effort on demand.
Setting the right incentives for global planning and operations
S0377221716300674
Low carbon manufacturing has become a strategic objective for many developed and developing economies. This study examines the role of co-opetition in achieving this objective. We investigate the pricing and emissions reduction policies for two rival manufacturers with different emission reduction efficiencies under the cap-and-trade policy. We assume that the product demand is price and emission sensitive. Based on non-cooperative and cooperative games, the optimal solutions for the two manufacturers are derived in the purely competitive and co-opetitive market environments, respectively. Through the discussion and numerical analysis, we find that in both the pure competition and co-opetition models, the two manufacturers' optimal prices depend on the unit price of carbon emission trading. In addition, higher emission reduction efficiency leads to lower optimal unit carbon emissions and higher profit in both the pure competition and co-opetition models. Interestingly, compared to pure competition, co-opetition leads to more profit and less total carbon emissions. However, the improvement in economic and environmental performance is based on higher product prices and unit carbon emissions.
The role of co-opetition in low carbon manufacturing
S0377221716300686
A precursor question to increasing the capacity of an airspace is to determine the minimum distance separation required to make this airspace safe. A methodology to answer this question is proposed in this paper. The methodology takes sector volume, the number of crossings and crossing angles of routes, and the number of aircraft as input, and generates air traffic scenarios that satisfy the input values. A stochastic multi-objective optimization algorithm is then used to optimize separation values. The algorithm outputs the set of non-dominated solutions representing the trade-off between separation values and the best attainable target level of safety. The results show that the proposed methodology is successful in determining the minimum distance separation values required to make an air traffic scenario safe from a collision risk perspective, and in illustrating how minimum separation values are affected by different sector/traffic characteristics.
A multiobjective distance separation methodology to determine sector-level minimum separation for safe air traffic scenarios
S0377221716300698
In this paper, we address a basic production planning problem with price-dependent demand and stochastic production yield. We use price and target quantity as decision variables to control the risk of low production yield. The value of risk control is especially important for products with short life cycles, where high losses are unbearable in the short run. In such cases, optimizing a single scalar function of profit is not sufficient to control the risk. We apply the Conditional Value at Risk (CVaR) measure to model the risk preferences of the producer. The producer is interested in shaping the risk by bounding from below the means of the α-tail distributions of profit for different values of α. The resulting model is nonconvex. We propose an efficient solution algorithm and present a sufficient optimality condition.
Risk shaping in production planning problem with pricing under random yield
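For readers unfamiliar with the risk-shaping constraints mentioned above, the block below states the standard Rockafellar–Uryasev representation of the mean of the α-tail of a profit distribution and the resulting lower-bound constraints. The symbols Π(q, p) for the profit under target quantity q and price p, and the bounds v_k, are illustrative and do not follow the paper's notation.
```latex
% Mean of the alpha-tail (worst alpha-fraction) of the profit \Pi in the
% standard Rockafellar--Uryasev form, and illustrative shaping constraints.
\mathrm{CVaR}_{\alpha}(\Pi) \;=\; \max_{\eta \in \mathbb{R}}
  \Big\{ \eta \;-\; \tfrac{1}{\alpha}\, \mathbb{E}\big[(\eta - \Pi)_{+}\big] \Big\},
\qquad
\mathrm{CVaR}_{\alpha_k}\!\big(\Pi(q,p)\big) \;\ge\; v_k, \quad k = 1,\dots,K .
```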
S0377221716300704
In technology-related industries, such as smartphone manufacturing, two phenomena can be observed for the products: the first is the obsolescence of an existing product, usually due to the appearance of a new product which decreases the attractiveness of the existing one; the second is the stochastic nature of the market demand and its price sensitivity. These two imply that demand decreases over a product's lifecycle, and thus the manufacturer and/or retailer may need to lower the retail price of the product during its lifecycle. In this paper, we assume that a dominant manufacturer wholesales a technological product to a retailer who has single or two buying opportunities. For either the single- or two-buying-opportunity setting, we consider two models: (1) the retailer decreases the retail price at the product's midlife with no compensation from the manufacturer; (2) the manufacturer provides a rebate to the retailer for the retail price reduction at the midlife. For the two-buying-opportunity setting, the rebate is that the manufacturer decreases the wholesale price at the midlife. The decision variables include the manufacturer's wholesale price and rebate, and the retailer's order quantities and retail prices. We also compare the performance of the proposed models to the wholesale-price-only and the buyback policies.
A two-price policy for a newsvendor product supply chain with time and price sensitive demand
S0377221716300716
This paper studies cost uncertainty in services. Despite the fact that the service sector has become the largest component of gross domestic products in most developed economies, cost uncertainty and its impact on pricing decisions have not received much attention in the literature. In this paper, we first identify the root causes of cost uncertainty in services. Using the distinctive characteristics of services defined in the literature, we show why cost uncertainty, which has been widely neglected in the manufacturing dominated literature, is pervasive in services. Next, we investigate how cost uncertainty affects a risk-averse service provider’s pricing decisions in a make-to-order setting. Using the expected utility theory framework, we show that cost uncertainty increases the optimal price, whereas demand uncertainty reduces it. As a result of the countervailing impacts, the optimal price under risk aversion may be larger or smaller than the optimal risk-neutral price. Next, we study the problem of optimizing cost contingency in service contract pricing. We show that the optimal cost contingency decreases as the profit of the contract increases even when the utility function exhibits an increasing absolute risk aversion. Finally, we introduce various strategies to mitigate the risk of cost uncertainty observed in practice, and propose new research problems.
Impact of cost uncertainty on pricing decisions under risk aversion
S0377221716300728
Validation of Operations Research/Management Science (OR/MS) decision support models is usually performed with the aim of providing the decision makers with sufficient confidence to utilize the model's recommendations to support their decision-making. OR/MS models for investigation and improvement provide a particularly challenging task as far as validation is concerned. Their complex nature often relies on a wide variety of data sources and empirical estimates used as parameters in the model, as well as on multiple conceptual and computational modeling techniques. In this paper, we performed an extensive literature review of validation techniques for healthcare models in OR/MS. Despite calls for systematic approaches to validation of complex OR/MS decision support models; we identified a clear lack of appropriate application of validation techniques reported in published healthcare models. The “Save a minute – save a day” model for evaluation of long-term benefits of faster access to thrombolysis therapy in acute ischemic stroke is used as a case to demonstrate how multiple aspects of data validity, conceptual model validity, computerized verification, and operational model validity can be systematically addressed when developing a complex OR/MS decision support model for investigation and improvement in health services.
Validation of a decision support model for investigation and improvement in stroke thrombolysis
S037722171630073X
The characterization of a technology, from an economic point of view, often uses the first derivatives of either the transformation or the production function. In a parametric setting, these quantities are readily available as they can be easily deduced from the first derivatives of the specified function. In the standard framework of data envelopment analysis (DEA) models, these quantities are not so easily obtained. The difficulty resides in the fact that marginal changes of inputs and outputs might affect the position of the frontier itself, while the calculation of first derivatives for economic purposes assumes that the frontier is held constant. We develop here a procedure to recover first derivatives of transformation functions in DEA models and we show how the problem of the (marginal) shift of the frontier can be circumvented. We show how the knowledge of the first derivatives of the frontier estimated by DEA can be used to deduce and compute marginal products, marginal rates of substitution, and returns to scale for each decision making unit (DMU) in the sample.
From partial derivatives of DEA frontiers to marginal products, marginal rates of substitution, and returns to scale
S0377221716300741
We present a new algorithm for optimizing a linear function over the set of efficient solutions of a multiobjective integer program (MOIP). The algorithm's success relies on the efficiency of a new algorithm for enumerating the nondominated points of a MOIP, which is the result of employing a novel criterion space decomposition scheme which (1) limits the number of subspaces that are created, and (2) limits the number of sets of disjunctive constraints required to define the single-objective IP that searches a subspace for a nondominated point. An extensive computational study demonstrates the efficacy of the algorithm. Finally, we show that the algorithm can be easily modified to efficiently compute the nadir point of a multiobjective integer program.
A new method for optimizing a linear function over the efficient set of a multiobjective integer program
S0377221716300753
Container terminals are facing great challenges in order to meet the shipping industry's requirements. An important fact within the industry is the increasing vessel sizes. Indeed, within the last decade the ship size in the Asia–Europe trade has effectively doubled. However, port productivity has not doubled along with the larger vessel sizes. This has led to increased vessel turnaround times at ports, which is a severe problem. In order to meet the industry targets, a game-changer in container handling is required. The indented berth structure is one important opportunity to handle this issue. This novel berth structure requires new models and solution techniques for scheduling the quay cranes serving the indented berth. Accordingly, in this paper, we approach the quay crane scheduling problem at an indented berth structure. We focus on the challenges and constraints related to the novel architecture. We model the quay crane scheduling problem under this special structure and develop a solution technique based on branch-and-price. Extensive experiments are conducted to validate the efficiency of the proposed algorithm.
Scheduling cranes at an indented berth
S0377221716300765
Bensolve is an open source implementation of Benson’s algorithm and its dual variant. Both algorithms compute primal and dual solutions of vector linear programs (VLP), which include the subclass of multiple objective linear programs (MOLP). The recent version of Bensolve can treat arbitrary vector linear programs whose upper image does not contain lines. This article surveys the theoretical background of the implementation. In particular, the role of VLP duality for the implementation is pointed out. Some numerical examples are provided. In contrast to the existing literature we consider a less restrictive class of vector linear programs.
The vector linear program solver Bensolve – notes on theoretical background
S0377221716300777
We consider the logistics capacity planning problem arising in the context of supply-chain management. We address the tactical-planning problem of determining the quantity of capacity units, hereafter called bins, of different types to secure for the next period of activity, given the uncertainty on future needs in terms of demand for loads (items) to be moved or stored, and the availability and costs of capacity for these movements or storage activities. We propose a modeling framework introducing a new class of bin packing problems, the Stochastic Variable Cost and Size Bin Packing Problem. The resulting two-stage stochastic formulation with recourse assigns to the first stage the tactical capacity-planning decisions of selecting bins, while the second stage models the subsequent adjustments to the plan, securing extra bins and packing the items into the selected bins, performed each time the plan is applied and new information becomes known. We propose a new meta-heuristic based on progressive hedging ideas that includes advanced strategies to accelerate the search and efficiently address the symmetry strongly present in the problem considered due to the presence of several equivalent bins of each type. Extensive computational results for a large set of instances support the claim of validity for the model, efficiency for the solution method proposed, and quality and robustness for the solutions obtained. The method is also used to explore the impact on the capacity plan and the recourse to spot-market capacity of a quite wide range of variations in the uncertain parameters and the economic environment of the firm.
Logistics capacity planning: A stochastic bin packing formulation and a progressive hedging meta-heuristic
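As background to the progressive hedging meta-heuristic mentioned above, the block below shows the generic progressive hedging iteration of Rockafellar and Wets for a scenario-decomposed problem. The notation is generic; the acceleration and symmetry-handling strategies the paper layers on top of this basic scheme are not reflected here.
```latex
% Generic progressive hedging iteration: scenario s with probability p_s,
% scenario cost f_s, penalty parameter rho > 0, iteration counter k.
x^{s,k+1} \in \arg\min_{x}\; f_s(x) + (w^{s,k})^{\top} x
           + \tfrac{\rho}{2}\,\lVert x - \bar{x}^{k}\rVert^{2},
\qquad
\bar{x}^{k+1} = \sum_{s} p_s\, x^{s,k+1},
\qquad
w^{s,k+1} = w^{s,k} + \rho\,\big(x^{s,k+1} - \bar{x}^{k+1}\big).
```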
S0377221716300959
This article presents a modified version of the Differential Evolution (DE) algorithm for solving Dynamic Optimization Problems (DOPs) efficiently. The algorithm, referred to as Modified DE with Locality induced Genetic Operators (MDE-LiGO), incorporates changes in the three basic stages of a standard DE framework. The mutation phase has been entrusted to a locality-induced operation that retains traits of the Euclidean distance-based closest individuals around a potential solution. Diversity maintenance is further enhanced by the inclusion of a local-best crossover operation that empowers the algorithm with an explorative ability without directional bias. An exhaustive dynamic detection technique has been introduced to effectively sense changes in the landscape. An even distribution of solutions over different regions of the landscape calls for a solution retention technique that adapts this algorithm to dynamism by using the previously stored information in diverse search domains. MDE-LiGO has been compared with seven state-of-the-art evolutionary dynamic optimizers on a set of benchmarks known as the Generalized Dynamic Benchmark Generator (GDBG) used in the competition on evolutionary computation in dynamic and uncertain environments held under the 2009 IEEE Congress on Evolutionary Computation (CEC). The experimental results clearly indicate that MDE-LiGO can outperform the other algorithms for most of the tested DOP instances in a statistically meaningful way.
Modified Differential Evolution with Locality induced Genetic Operators for dynamic optimization
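For context on the Differential Evolution framework that MDE-LiGO modifies, the sketch below implements the classic DE/rand/1/bin baseline (standard mutation, binomial crossover, greedy selection) on a toy objective. It is not the locality-induced variant proposed in the paper; the population size and the F and CR values are arbitrary illustrative choices.
```python
import numpy as np

def de_rand_1_bin(objective, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Classic DE/rand/1/bin baseline (not the paper's locality-induced variant).
    bounds: sequence of (lower, upper) limits, one pair per dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # mutation: difference of two random members added to a third
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), bounds[:, 0], bounds[:, 1])
            # binomial crossover, forcing at least one coordinate from the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)], fitness.min()

# Example: minimise the sphere function in 5 dimensions
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
print(best_f)
```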
S0377221716300960
In an algorithm for a problem whose candidate solutions are selections of objects, an ejection chain is a sequence of moves from one solution to another that begins by removing an object from the current solution. The quadratic multiple knapsack problem extends the familiar 0–1 knapsack problem both with several knapsacks and with values associated with pairs of objects. A hybrid algorithm for this problem extends a local search algorithm through an ejection chain mechanism to create more powerful moves. In addition, adaptive perturbations enhance the diversity of the search process. The resulting algorithm produces results that are competitive with the best heuristics currently published for this problem. In particular, it improves the best known results on 34 out of 60 test problem instances and matches the best known results on all but 6 of the remaining instances.
An ejection chain approach for the quadratic multiple knapsack problem
S0377221716300972
Multimethodology interventions are being increasingly employed by operational researchers to cope with the complexity of real-world problems. In keeping with recent calls for more research into the ‘realised’ impacts of multimethodology, we present a detailed account of an intervention to support the planning of business ideas by a management team working in a community development context. Drawing on the rich stream of data gathered during the intervention, we identify a range of cognitive, task and relational impacts experienced by the management team during the intervention. These impacts are the basis for developing a process model that accounts for the personal, social and material changes reported by those involved in the intervention. The model explains how the intervention's analytic and relational capabilities incentivise the interplay of participants’ decision making efforts and integrative behaviours underpinning reported intervention impacts and change. Our findings add much needed empirical case material to further enrich our understanding of the realised impacts of operational research interventions in general, and of multimethodology interventions in particular.
Unpacking multimethodology: Impacts of a community development intervention
S0377221716300984
In this paper, we propose a general agent-based distributed framework where each agent is implementing a different metaheuristic/local search combination. Moreover, an agent continuously adapts itself during the search process using a direct cooperation protocol based on reinforcement learning and pattern matching. Good patterns that make up improving solutions are identified and shared by the agents. This agent-based system aims to provide a modular flexible framework to deal with a variety of different problem domains. We have evaluated the performance of this approach using the proposed framework which embodies a set of well known metaheuristics with different configurations as agents on two problem domains, Permutation Flow-shop Scheduling and Capacitated Vehicle Routing. The results show the success of the approach yielding three new best known results of the Capacitated Vehicle Routing benchmarks tested, whilst the results for Permutation Flow-shop Scheduling are commensurate with the best known values for all the benchmarks tested.
A multi-agent based cooperative approach to scheduling and routing
S0377221716300996
The field of “Sustainable Operations” and the term itself have arisen only in the last ten to twenty years in the context of sustainable development. Even though the term is frequently used in practice and research, it has hardly been characterized and defined precisely in the literature so far. For reasons of clarity and unambiguity, we present terms and definitions before we demarcate Sustainable Operations from its neighboring topics. We especially focus on the interactions between economic, social and ecological aspects as part of Sustainable Operations, but exclude the development of a normative ethics, instead focusing on the use of quantitative methods from Operations Research. Then the broad subject of Sustainable Operations is structured into various areas arising from the typical structure of an enterprise. For each area, we present examples of applications and refer to the existing literature. The paper concludes with future research directions.
Sustainable Operations
S037722171630100X
Demand-based pricing is often used to moderate demand fluctuations so as to level resource utilization and increase profitability. However, such pricing policies may not be effective when customers’ purchase decisions are influenced by social interactions. This paper investigates the demand dynamics, under a demand-based pricing policy, of a frequently purchased service when social interactions are at work. Customers are heterogeneous and adaptively forward-looking. Existing customers’ re-purchase decisions are based on adaptively formed price expectations and reservation prices. Potential customers are attracted through social interactions with existing customers. The demand process is characterized by a two-dimensional dynamical system. It is shown that the equilibrium demand can be unstable. For a given reservation price distribution, we first analyze the stability of the equilibrium demand under various scenarios of social interactions and customers’ adaptively forward-looking behavior, and then characterize their dynamics using the bifurcation plots, Lyapunov exponents and return maps. The results indicate that the demand process can be stable, periodic or chaotic. The study shows that the intended effect of a demand-based pricing policy may be offset by customers’ adaptively forward-looking behavior under the influence of social interactions. In fact, the interplay of these factors may even lead to chaotic demand dynamics. The result highlights the complex dynamics produced by a simple demand-price mechanism under social interactions. For a demand-based pricing strategy to be effective, companies must take social interactions into account.
Stability and chaos in demand-based pricing under social interactions
S0377221716301011
In this work, we present a Lagrangean relaxation of the hull-reformulation of discrete-continuous optimization problems formulated as linear generalized disjunctive programs (GDP). The proposed Lagrangean relaxation has three important properties. The first property is that it can be applied to any linear GDP. The second property is that the solution to its continuous relaxation always yields 0–1 values for the binary variables of the hull-reformulation. Finally, it is simpler to solve than the continuous relaxation of the hull-reformulation. The proposed Lagrangean relaxation can be used in different GDP solution methods. In this work, we explore its use as primal heuristic to find feasible solutions in a disjunctive branch and bound algorithm. The modified disjunctive branch and bound is tested with several instances with up to 300 variables. The results show that the proposed disjunctive branch and bound performs faster than other versions of the algorithm that do not include this primal heuristic.
Lagrangean relaxation of the hull-reformulation of linear generalized disjunctive programs and its use in disjunctive branch and bound
S0377221716301023
Composite measures calculated from individual performance indicators increasingly are used to profile and reward health care providers. We illustrate an innovative way of using Data Envelopment Analysis (DEA) to create a composite measure of quality for profiling facilities, informing consumers, and pay-for-performance programs. We compare DEA results to several widely used alternative approaches for creating composite measures: opportunity-based-weights (OBW, a form of equal weighting) and a Bayesian latent variable model (BLVM, where weights are driven by variances of the individual measures). Based on point estimates of the composite measures, to a large extent the same facilities appear in the top decile. However, when high performers are identified because the lower limits of their interval estimates are greater than the population average (or, in the case of the BLVM, the upper limits are less), there are substantial differences in the number of facilities identified: OBWs, the BLVM and DEA identify 25, 17 and 5 high-performers, respectively. With DEA, where every facility is given the flexibility to set its own weights, it becomes much harder to distinguish the high performers. In a pay-for-performance program, the different approaches result in very different reward structures: DEA rewards a small group of facilities a larger percentage of the payment pool than the other approaches. Finally, as part of the DEA analyses, we illustrate an approach that uses Monte Carlo resampling with replacement to calculate interval estimates by incorporating uncertainty in the data generating process for facility input and output data. This approach, which can be used when data generating processes are hierarchical, has the potential for wider use than in our particular application.
A DEA based composite measure of quality and its associated data uncertainty interval for health care provider profiling and pay-for-performance
S0377221716301047
This paper assesses 27 alternative natural gas supply corridors for the case of Greece, according to a multicriteria analysis approach based on three main pillars: (1) economics of supply, (2) security of supply, and (3) cooperation between countries. The alternatives include onshore and offshore pipeline corridors and LNG shipping, determined after exhaustive investigation of all possible existing and future routes, taking into consideration all possible natural gas infrastructure development projects around Greece. A multicriteria additive value system is assessed via the robust ordinal regression methodology, aiming to support the national energy policy makers to devise favorable strategies, concerning both long-term national natural gas supplies and infrastructure developments. The obtained ranking shows that noticeable alternative corridors for gas passage to Greece do exist both in terms of maritime transport of LNG and in terms of potential future pipeline infrastructure projects.
Multicriteria decision support to evaluate potential long-term natural gas supply alternatives: The case of Greece
S0377221716301059
We consider a situation in which a home improvement project contractor has a team of regular crew members who receive compensation even when they are idle. Because both project arrivals and the completion time of each project are uncertain, the contractor needs to manage the utilization of his crews carefully. One common approach adopted by many home improvement contractors is to accept multiple projects to keep his crew members busy working on projects to generate positive cash flows. However, this approach has a major drawback because it causes "intentional" (or foreseeable) project delays. Intentional project delays can inflict explicit and implicit costs on the contractor when frustrated customers abandon their projects and/or file complaints or lawsuits. In this paper, we present a queueing model to capture uncertain customer (or project) arrivals and departures, along with the possibility of customer abandonment. Also, associated with each admission policy (i.e., the maximum number of projects that the contractor will accept), we model the underlying tradeoff between accepting too many projects (that can increase customer dissatisfaction) and accepting too few projects (that can reduce crew utilization). We examine this tradeoff analytically so as to determine the optimal admission policy and the optimal number of crew members. We further apply our model to analyze other issues including worker productivity and project pricing. Finally, our model can be extended to allow for multiple classes of projects with different types of crew members.
A queueing model for managing small projects under uncertainties
S0377221716301060
Previous work has studied the classical joint economic lot size model as an adverse selection problem with asymmetric cost information. Solving this problem is challenging due to the presence of countervailing incentives and two-dimensional information asymmetry, under which the classical single-crossing condition need not hold. In the present work we advance the existing knowledge about the problem at hand by conducting an optimality analysis, which leads to a better informed and easier solution of the problem: First, we refine the existing closed-form solution, which simplifies problem solving and its analysis. Second, we prove that Karush–Kuhn–Tucker conditions are necessary for optimality, and demonstrate that the problem may, in general, possess non-optimal stationary points due to non-convexity. Third, we prove that certain types of stationary points are always dominated, which eases the analytical solution of the problem. Fourth, we derive a simple optimality condition stating that a weak Pareto efficiency of the buyer’s possible cost structures implies optimality of any stationary point. It simplifies the analytical solution approach and ensures a successful solution of the problem by means of conventional numerical techniques, e.g. with a general-purpose solver. We further establish properties of optimal solutions and indicate how these are related to the classical results on adverse selection.
Optimal contract design in the joint economic lot size problem with multi-dimensional asymmetric information
S0377221716301072
One of the fundamental features of policy processes in contemporary societies is complexity. It follows from the plurality of points of view actors adopt in their interventions, and from the plurality of criteria upon which they base their decisions. In this context, collaborative multicriteria decision processes seem to be appropriate to address part of the complexity challenge. This study discusses a decision support framework that guides policy makers in their strategic decisions by using a multi-method approach based on the integration of three tools, i.e., (i) stakeholders analysis, to identify the multiple interests involved in the process, (ii) cognitive mapping, to define the shared set of objectives for the analysis, and (iii) Multi-Attribute Value Theory, to measure the level of achievement of the previously defined objectives by the policy options under investigation. The integrated decision support framework has been tested on a real world project concerning the location of new parking areas in a UNESCO site in Southern Italy. The purpose of this study was to test the operability of an integrated analytical approach to support policy decisions by investigating the combined and synergistic effect of the three aforementioned tools. The ultimate objective was to propose policy recommendations for a sustainable parking area development strategy in the region under consideration. The obtained results illustrate the importance of integrated approaches for the development of accountable public decision processes and consensus policy alternatives. The proposed integrated methodological framework will, hopefully, stimulate the application of other collaborative decision processes in public policy making.
From stakeholders analysis to cognitive mapping and Multi-Attribute Value Theory: An integrated approach for policy support
S0377221716301084
Catalog firms mail billions of catalogs each year. To stay competitive, catalog managers need to maximize the return on these mailings by deciding who should receive a mail-order catalog. In this paper, we propose a two-step approach that allows firms to address the dynamic implications of mailing decisions, and to make efficient mailing decisions by maximizing the long-term value generated by customers. Specifically, we first propose a nonhomogeneous hidden Markov model (HMM) to capture the interactive dynamics between customers and mailings. In the second step, we use the parameters obtained from the HMM to determine the optimal mailing decisions using a Partially Observable Markov Decision Process (POMDP). Both the immediate and the long-term effects of mailings are accounted for. The mailing endogeneity that may result in biased parameter estimates is also corrected. We conduct an empirical study using six years of quarterly solicitation data derived from the well-known DMEF donation data set. All metrics used suggest that the proposed model fits the data well in terms of correct predictions and outperforms all other benchmark models. The simulation results show that the proposed method for optimizing total accrued benefits outperforms the usual targeted-marketing methodology of optimizing each promotion in isolation. We also find that the sequential targeting rules derived by our proposed methods are more cost-containment oriented in nature compared with the corresponding single-event targeting rules.
A nonhomogeneous hidden Markov model of response dynamics and mailing optimization in direct marketing
S0377221716301096
Forecasting stock market returns is a challenging task due to the complex nature of the data. This study develops a generic methodology to predict daily stock price movements by deploying and integrating three data analytical prediction models: adaptive neuro-fuzzy inference systems, artificial neural networks, and support vector machines. The proposed approach is tested on the Borsa Istanbul BIST 100 Index over an 8 year period from 2007 to 2014, using accuracy, sensitivity, and specificity as metrics to evaluate each model. Using a ten-fold stratified cross-validation to minimize the bias of random sampling, this study demonstrates that the support vector machine outperforms the other models. For all three predictive models, accuracy in predicting down movements in the index outweighs accuracy in predicting the up movements. The study yields more accurate forecasts with fewer input factors compared to prior studies of forecasts for securities trading on Borsa Istanbul. This efficient yet also effective data analytic approach can easily be applied to other emerging market stock return series.
A data analytic approach to forecasting daily stock returns in an emerging market
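Below is a minimal sketch of the evaluation protocol described above, i.e., a support vector machine assessed with ten-fold stratified cross-validation, using scikit-learn on synthetic stand-in data. The features, labels and hyper-parameters are placeholders, not the Borsa Istanbul inputs or the tuned models of the study.
```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for technical-indicator features and up/down labels of
# daily index moves (the study uses BIST 100 data instead).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # e.g. lagged returns, indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```
Stratification keeps the proportion of up and down days roughly constant across folds, which reduces the bias of random sampling mentioned in the abstract.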
S0377221716301102
The problem of minimizing the sum of nonsmooth, convex objective functions defined on a real Hilbert space over the intersection of fixed point sets of nonexpansive mappings, onto which the projections cannot be efficiently computed, is considered. The use of proximal point algorithms that use the proximity operators of the objective functions and incremental optimization techniques is proposed for solving the problem. With the focus on fixed point approximation techniques, two algorithms are devised for solving the problem. One blends an incremental subgradient method, which is a useful algorithm for nonsmooth convex optimization, with a Halpern-type fixed point iteration algorithm. The other is based on an incremental subgradient method and the Krasnosel’skiĭ–Mann fixed point algorithm. It is shown that any weak sequential cluster point of the sequence generated by the Halpern-type algorithm belongs to the solution set of the problem and that there exists a weak sequential cluster point of the sequence generated by the Krasnosel’skiĭ–Mann-type algorithm, which also belongs to the solution set. Numerical comparisons of the two proposed algorithms with existing subgradient methods for concrete nonsmooth convex optimization show that the proposed algorithms achieve faster convergence.
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
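The two fixed-point iterations referred to above are standard; for reference, their usual forms and step-size conditions are given below (T a nonexpansive mapping, x_0 an anchor point). The paper interleaves them with incremental subgradient steps for the objective components, a combination whose exact form is not reproduced here.
```latex
% Standard Halpern and Krasnosel'skii--Mann iterations with their usual
% step-size conditions (not necessarily the paper's exact choices).
\text{Halpern:}\quad x_{n+1} = \alpha_n x_0 + (1 - \alpha_n)\, T(x_n),
  \qquad \alpha_n \to 0,\ \ \textstyle\sum_n \alpha_n = \infty ,
\\[4pt]
\text{Krasnosel'ski\u{\i}--Mann:}\quad x_{n+1} = (1 - \lambda_n)\, x_n + \lambda_n\, T(x_n),
  \qquad \lambda_n \in (0,1),\ \ \textstyle\sum_n \lambda_n (1 - \lambda_n) = \infty .
```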
S0377221716301114
Corner Polyhedra are a natural intermediate step between linear programming and integer programming. This paper first describes how the concept of Corner Polyhedra arose unexpectedly from a practical operations research problem, and then describes how it evolved to shed light on fundamental aspects of integer programming and to provide a great variety of cutting planes for integer programming.
Origin and early evolution of corner polyhedra
S0377221716301126
Benders is one of the most famous decomposition tools for Mathematical Programming, and it is the method of choice e.g., in mixed-integer stochastic programming. Its hallmark is the capability of decomposing certain types of models into smaller subproblems, each of which can be solved individually to produce local information (notably, cutting planes) to be exploited by a centralized “master” problem. As its name suggests, the power of the technique comes essentially from the decomposition effect, i.e., the separability of the problem into a master problem and several smaller subproblems. In this paper we address the question of whether the Benders approach can be useful even without separability of the subproblem, i.e., when its application yields a single subproblem of the same size as the original problem. In particular, we focus on the capacitated facility location problem, in two variants: the classical linear case, and a “congested” case where the objective function contains convex but non-separable quadratic terms. We show how to embed the Benders approach within a modern branch-and-cut mixed-integer programming solver, addressing explicitly all the ingredients that are instrumental for its success. In particular, we discuss some computational aspects that are related to the negative effects derived from the lack of separability. Extensive computational results on various classes of instances from the literature are reported, with a comparison with the state-of-the-art exact and heuristic algorithms. The outcome is that a clever but simple implementation of the Benders approach can be very effective even without separability, as its performance is comparable and sometimes even better than that of the most effective and sophisticated algorithms proposed in the previous literature.
Benders decomposition without separability: A computational study for capacitated facility location problems
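As a reminder of the machinery discussed above, the block below gives the textbook Benders template: a master problem with optimality and feasibility cuts generated from the dual extreme points and rays of an LP subproblem. The notation is generic, and the paper's point is precisely that the "subproblem" may be a single problem as large as the original model, so these cuts are generated without any decomposition effect.
```latex
% Textbook Benders template for  min { c^T y + d^T x : A y + B x >= b, x >= 0, y in Y }.
% u_k: dual extreme points, r_j: dual extreme rays of the subproblem below.
\text{Master:}\quad \min_{y \in Y,\ \theta}\; c^{\top} y + \theta
  \quad \text{s.t.}\quad \theta \ge u_k^{\top}(b - A y)\ \ \forall k,
  \qquad r_j^{\top}(b - A y) \le 0\ \ \forall j ,
\\[4pt]
\text{Subproblem for fixed } \hat{y}:\quad
  \min_{x \ge 0}\; \big\{\, d^{\top} x \;:\; B x \ge b - A \hat{y} \,\big\}.
```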
S0377221716301138
A batch of expensive items, such as IC chips, is often inspected multiple times in a sequential manner to discover more conforming items. After several rounds of screening, we need to estimate the number of conforming items that still remain in the batch. We propose in this paper an empirical Bayes estimation method and compare its performance with that of the traditional maximum likelihood method. In the repetitive screening procedure, another important decision problem is when to stop the screening process and salvage the remaining items. We propose various types of stopping rules and illustrate their procedures with simulated inspection data. Finally, we explore various extensions of our empirical Bayes estimation method to multiple inspection plans.
Designing repetitive screening procedures with imperfect inspections: An empirical Bayes approach
S037722171630114X
Today’s power systems are experiencing a transition from primarily fossil fuel based generation toward greater shares of renewable energy sources. It becomes increasingly costly to manage the resulting uncertainty and variability in power system operations solely through flexible generation assets. Incorporating demand side flexibility through appropriately designed incentive structures can add an additional lever to balance demand and supply. Based on a supply model using empirical wind generation data and a discrete model of flexible demand with temporal constraints, we design and evaluate a local online market mechanism for matching flexible load and uncertain supply. Under this mechanism, truthful reporting of flexibility is a dominant strategy for consumers reducing payments and increasing the likelihood of allocation. Suppliers, during periods of scarce supply, benefit from elevated critical-value payments as a result of flexibility-induced competition on the demand side. We find that, for a wide range of the key parameters (supply capacity, flexibility level), the cost of ensuring incentive compatibility in a smart grid market, relative to the welfare-optimal matching, is relatively small. This suggests that local matching of demand and supply can be organized in a decentral manner in the presence of a sufficiently flexible demand side. Extending the stylized demand model to include complementary demand structures, we demonstrate that decentral matching induces only minor efficiency losses if demand is sufficiently flexible. Furthermore, by accounting for physical grid limitations we show that flexibility and grid capacity exhibit complementary characteristics.
Local matching of flexible load in smart grids
S0377221716301151
In the aftermath of a mass-casualty incident, effective policies for timely evaluation and prioritization of patients can mean the difference between life and death. While operations research methods have been used to study the patient prioritization problem, prior research has either proposed decision rules that only apply to very simple cases, or proposed formulating and solving a mathematical program in real time, which may be a barrier to implementation in an urgent situation. We connect these two regimes by proposing a general decision support rule that can handle survival probability functions and an arbitrary number of patient classifications. The proposed survival lookahead policy generalizes not only a myopic policy and a cμ type rule, but also the optimal solution to a version of the problem with two priority classes. This policy has other desirable properties, including index policy structure. Using simple heuristic parameterizations, the survival lookahead policy yields an expected number of survivors that is almost as large as published methods that require mathematical programming, while having the advantage of an intuitive structure and requiring minimal computational support.
A simple yet effective decision support policy for mass-casualty triage
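To illustrate the kind of index rule the survival lookahead policy generalizes, the sketch below ranks patient classes by a myopic cμ-style index (survival decline rate divided by expected treatment time). The class names and numbers are hypothetical, and this is a simplified stand-in rather than the lookahead policy proposed in the paper.
```python
def myopic_triage_index(classes):
    """Rank patient classes by a cmu-style myopic index:
    (rate of survival-probability decline) / (expected treatment time).
    `classes` maps a label to (decline_rate_per_hour, mean_treatment_hours).
    Simplified stand-in, not the survival lookahead policy itself."""
    return sorted(classes, key=lambda k: classes[k][0] / classes[k][1], reverse=True)

# Hypothetical classes: immediate patients deteriorate fast but treat quickly
example = {
    "immediate": (0.20, 0.5),   # 20% survival loss per hour, 30 min treatment
    "delayed":   (0.05, 0.75),
    "minimal":   (0.01, 0.25),
}
print(myopic_triage_index(example))  # -> ['immediate', 'delayed', 'minimal']
```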
S0377221716301163
The reduction in carbon dioxide levels by using hybrid electric vehicles is a currently ongoing endeavor. Although this development is quite advanced for hybrid electric passenger cars, small transporters and trucks are far behind. We try to address this challenge by introducing a new optimization problem that describes the delivery of goods with a hybrid electric vehicle to a set of customer locations. The Hybrid Electric Vehicle – Traveling Salesman Problem extends the well-known Traveling Salesman Problem by adding different modes of operation for the vehicle, causing different costs and driving times for each arc within a delivery network. As the use of different modes of operation immensely increases the complexity of the problem, we present a heuristic solution approach, based mainly on a Tabu Search, to solve this optimization problem. Additionally, we provide a set of realistic benchmark instances based on real-world delivery tours to test and evaluate our solution approach. We also implemented a mathematical problem formulation and are able to solve small instances with the IBM ILOG CPLEX Optimization Studio, which allows us to prove the quality of the solutions, provided by our heuristic.
The Hybrid Electric Vehicle – Traveling Salesman Problem
S0377221716301357
Scheduling production in open-pit mines is characterized by uncertainty about the metal content of the orebody (the reserve) and leads to a complex large-scale mixed-integer stochastic optimization problem. In this paper, a two-phase solution approach based on Rockafellar and Wets’ progressive hedging algorithm (PH) is proposed. PH is used in phase I where the problem is first decomposed by partitioning the set of scenarios modeling metal uncertainty into groups, and then the sub-problems associated with each group are solved iteratively to drive their solutions to a common solution. In phase II, a strategy exploiting information obtained during the PH iterations and the structure of the problem under study is used to reduce the size of the original problem, and the resulting smaller problem is solved using a sliding time window heuristic based on a fix-and-optimize scheme. Numerical results show that this approach is efficient in finding near-optimal solutions and that it outperforms existing heuristics for the problem under study.
Progressive hedging applied as a metaheuristic to schedule production in open-pit mines accounting for reserve uncertainty
S0377221716301369
Ensemble techniques such as bagging or boosting, which are based on combinations of classifiers, make it possible to design models that are often more accurate than those made up of a single prediction rule. However, the performance of an ensemble relies solely on the diversity of its components and, ultimately, on the algorithm that is used to create this diversity. This means that such models, when they are designed to forecast corporate bankruptcy, do not incorporate or use any explicit knowledge about this phenomenon that might supplement or enrich the information they are likely to capture. For this reason, we propose a method that is explicitly based on knowledge governing bankruptcy, using the concept of “financial profiles”, and we show how the complementarity between this technique and ensemble techniques can improve forecasts.
A two-stage classification technique for bankruptcy prediction
S0377221716301370
Two-dimensional irregular strip packing problems are cutting and packing problems in which small pieces have to be cut from a larger object, involving non-trivial handling of geometry. Increasingly sophisticated and complex heuristic approaches have been developed to address these problems but, despite the apparently good quality of the solutions, there is no guarantee of optimality. Therefore, mixed-integer linear programming (MIP) models have started to be developed. However, these models are heavily limited by the complexity of the geometry-handling algorithms needed for the piece non-overlapping constraints, which has led to simplifications of the pieces in order to specialize the mathematical models. In this paper, to overcome these limitations, two robust MIP models are proposed. In the first model (DTM) the non-overlapping constraints are stated based on direct trigonometry, while in the second model (NFP-CM) pieces are first decomposed into convex parts and the non-overlapping constraints are then written based on the nofit polygons of the convex parts. Both approaches are robust in terms of the type of geometries they can address, handling any kind of non-convex polygon with or without holes. They are also simpler to implement than previous models. This simplicity allowed us to consider, for the first time, a variant of the models that deals with piece rotations. Computational experiments with benchmark instances show that NFP-CM outperforms both DTM and the best exact model published in the literature. New instances based on real-world data with more complex geometries are proposed and used to verify the robustness of the new models.
Robust mixed-integer linear programming models for the irregular strip packing problem
S0377221716301382
We used a multi-method, repeated elicitation approach across different stakeholder groups to explore possible differences in the outcome of an environmental decision. We compared different preference elicitation procedures based on Multi Criteria Decision Analysis (MCDA) over time for a water infrastructure decision in Switzerland. We implemented the SWING and SMART/SWING weight elicitation methods and also compared the results with earlier stakeholder interviews. In all procedures, the weights for environmental protection and well-functioning (waste-)water systems were higher than those for cost reduction. The SMART/SWING variant produced statistically significantly different weights from SWING. Weights changed over time with both elicitation methods, but were more stable with the SWING method, which was also perceived as slightly more difficult than the SMART/SWING variant. We checked whether the difference in weights produced by the two elicitation methods and the difference in their stability affect the ranking of six alternatives. Overall, an unconventional decentralized alternative ranked first or second in 92 percent of all elicitation procedures (online surveys or interviews). For practical decision making, using multiple methods across different stakeholder groups and repeating the elicitation can increase our confidence that the results reflect the true opinions of the decision makers and stakeholders.
Preference stability over time with multiple elicitation methods to support wastewater infrastructure decision-making
S0377221716301394
We investigate TU-game solutions that are neutral to collusive agreements among players. A collusive agreement binds collusion members to act as a single player and is feasible when they are connected on a network. Collusion neutrality requires that no feasible collusive agreement can change the total payoff of the collusion members. We show that, on the domain of network games, there is a solution satisfying collusion neutrality, efficiency and the null-player property if and only if the network is a tree. For tree networks, we show that affine combinations of hierarchical outcomes (Demange, 2004; van den Brink, 2012) are the only solutions satisfying the three axioms together with linearity. As corollaries, we establish characterizations of the average tree solution (the equally weighted average of hierarchical outcomes): one established earlier in the literature and the others new.
Hierarchical outcomes and collusion neutrality on networks
S0377221716301400
In this paper, we explore the use of static risk measures from the mathematical finance literature to assess the performance of some standard nonstationary queueing systems. To do this we study two important queueing models, namely the infinite-server queue and the multi-server queue with abandonment. We derive exact expressions for the value of many standard risk measures for the Mt/M/∞, Mt/G/∞, and Mt/Mt/∞ queueing models. We also derive Gaussian-based approximations for the value of risk measures for the Erlang-A queueing model. Unlike more traditional approaches to performance analysis, risk measures offer the ability to satisfy the unique and specific risk preferences or tolerances of service operations managers. We also show how risk measures can be used for staffing nonstationary systems with different risk preferences and assess the impact of these staffing policies via simulation.
Risk measures and their application to staffing nonstationary service systems
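As background to the record above (S0377221716301424), a standard result states that in an Mt/G/∞ queue that starts empty, the number in system Q(t) is Poisson with mean m(t) = ∫₀ᵗ λ(u) P(S > t − u) du. The sketch below uses that fact to compute a quantile-based (VaR-style) measure and a simple tail expectation for Q(t); the arrival-rate function and parameters are hypothetical, and the paper's exact risk-measure expressions may differ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

# Illustration only: Q(t) ~ Poisson(m(t)) for an Mt/G/infinity queue starting
# empty, with m(t) = int_0^t lambda(u) * P(S > t - u) du. Here service times
# are exponential with rate mu, so P(S > x) = exp(-mu * x).

lam = lambda t: 50 + 20 * np.sin(2 * np.pi * t / 24)  # hypothetical arrival rate (per hour)
mu = 2.0                                              # service rate (per hour)
t = 12.0                                              # evaluation time (hours)

m_t, _ = quad(lambda u: lam(u) * np.exp(-mu * (t - u)), 0.0, t)

alpha = 0.95
var_alpha = int(poisson.ppf(alpha, m_t))              # smallest q with P(Q <= q) >= alpha

# one simple convention for a discrete tail expectation: E[Q | Q >= VaR_alpha]
ks = np.arange(var_alpha, var_alpha + 200)            # truncate the light Poisson tail
pk = poisson.pmf(ks, m_t)
tail_expectation = float((ks * pk).sum() / pk.sum())

print(f"m(t) = {m_t:.2f}, VaR_{alpha} = {var_alpha}, tail expectation = {tail_expectation:.2f}")
```

A risk-averse manager could staff to the quantile or tail expectation rather than the mean m(t), which is the kind of preference-dependent staffing the abstract describes.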
S0377221716301424
The market share of buyers and the influence of supply chain structure on the choice of supply contracts have received scant attention in the literature. This paper focuses on this gap and examines a network consisting of one supplier and two buyers under complete and partial decentralization. In the completely decentralized setting both buyers are independent of the supplier; in the partially decentralized setting the supplier and one of the buyers form a vertically integrated entity. Both buyers order from the single supplier and produce similar products to sell in the same market. The supplier charges the buyer through a contract. We investigate the influence of supply chain structure, market share and information asymmetry on the supplier's choice of contracts. We demonstrate that both the linear two-part tariff and the quantity discount contract can coordinate the supply chain irrespective of the supply chain structure. By comparing the profit levels of supply chain agents across different supply chain structures, we show that if a buyer possesses a minimum threshold market potential, the supplier has an incentive to collude with her. We calculate the cut-off policies for wholesale price and two-part tariff contracts by incorporating the reservation profit levels of the individual agents. The managerial implications of the analyses and directions for future research are presented in the conclusion.
Impact of structure, market share and information asymmetry on supply contracts for a single supplier multiple buyer network
S0377221716301436
A kind of personalized quantifier, the so-called SEVSI-induced quantifier (an acronym for Subjective Expected Value of Sample Information), is developed in this paper by introducing higher-degree Bernstein polynomials. This provides a novel way to improve the final representation of the quantifier, which generally performed poorly in our previous work, thereby enhancing the quality of the global approximation of functions and improving the operability of this kind of quantifier for practical use. We show some properties of the developed quantifier and prove the consistency of OWA aggregation under the guidance of this type of quantifier. Finally, we experimentally show that the developed quantifier outperforms the one based on piecewise linear interpolation in many aspects of its geometrical characteristics and operability. It could thus be considered an effective analytical tool for handling the complex cases in which people's personalities or behavioral intentions have to be taken into account in decision making under uncertainty.
Quantifiers induced by subjective expected value of sample information with Bernstein polynomials
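To illustrate the Bernstein-polynomial machinery behind the record above (S0377221716301436): the degree-n Bernstein approximation of a function on [0, 1] is B_n(f)(x) = Σ_{k=0}^{n} f(k/n) C(n, k) x^k (1 − x)^{n−k}, it keeps the endpoint values f(0) and f(1) and preserves monotonicity, so a regular increasing monotone (RIM) quantifier stays a valid quantifier. The sketch below builds such a quantifier from hypothetical node values (not taken from the paper) and derives the usual quantifier-guided OWA weights.

```python
import numpy as np
from math import comb

def bernstein(f_vals, x):
    """Evaluate the Bernstein polynomial defined by f(k/n), k = 0..n, at x."""
    n = len(f_vals) - 1
    x = np.asarray(x, dtype=float)
    return sum(f_vals[k] * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

# hypothetical elicited values of a RIM quantifier Q at the nodes k/n, n = 6
q_nodes = [0.0, 0.05, 0.15, 0.40, 0.75, 0.93, 1.0]

def Q(x):
    return bernstein(q_nodes, x)

# OWA weights induced by a quantifier for m arguments: w_i = Q(i/m) - Q((i-1)/m)
m = 5
weights = np.array([Q(i / m) - Q((i - 1) / m) for i in range(1, m + 1)])
print("OWA weights:", np.round(weights, 4), "sum =", round(float(weights.sum()), 6))

# aggregating m scores with those weights (the largest score receives w_1, etc.)
scores = np.array([0.9, 0.4, 0.7, 0.6, 0.2])
owa_value = float(np.sort(scores)[::-1] @ weights)
print("OWA aggregate:", round(owa_value, 4))
```

Because Q(0) = 0 and Q(1) = 1 are preserved, the induced weights always sum to one, which is what makes the Bernstein-smoothed quantifier directly usable for OWA aggregation.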
S0377221716301448
We consider robust stochastic optimization problems for risk-averse decision makers, where there is ambiguity about both the decision maker’s risk preferences and the underlying probability distribution. We propose and analyze a robust optimization problem that accounts for both types of ambiguity. First, we derive a duality theory for this problem class and identify random utility functions as the Lagrange multipliers. Second, we turn to the computational aspects of this problem. We show how to evaluate our robust optimization problem exactly in some special cases, and then we consider some tractable relaxations for the general case. Finally, we apply our model to both the newsvendor and portfolio optimization problems and discuss its implications.
Ambiguity in risk preferences in robust stochastic optimization
S0377221716301461
The aim of this paper is to investigate model risk aspects of variance swaps and forward-start options in a realistic market setup in which the underlying asset price process exhibits stochastic volatility and jumps. We devise a general framework to provide evidence of the model uncertainty attached to variance swaps and forward-start options. In our study, both variance swaps and forward-start options can be valued by means of analytic methods. We measure model risk using a set of 21 models embedding various dynamics with both continuous and discontinuous sample paths. To conduct our empirical analysis, we work with two major equity indices (S&P 500 and Eurostoxx 50) under different market conditions. Our results put model risk at between 50 and 200 basis points, with an average value slightly above 100 basis points of the contract notional.
An investigation of model risk in a market with jumps and stochastic volatility
S0377221716301473
In multi-product, multi-plant manufacturing systems, process flexibility is the ability to produce different types of products in the same manufacturing plant or production line. While several design methods and flexibility indices have been proposed in the literature on designing process flexibility, most of the insights generated focus on identical production systems in which all plants have the same capacity and all products have identically distributed demands. In this paper, we examine the process flexibility design problem for non-identical systems. We first study the effect of non-identical demand distributions on the performance of the well-known long chain design and discover three interesting insights: (1) products with a low demand mean create a bottleneck effect, (2) products with a low demand variance result in inefficient utilization of flexibility links, and (3) long chain efficiency decreases in the demand variance of any product, hence the need to provide such a product with access to more capacity. Using these insights, we develop the variance-based hub-and-chain method (VHC), a simple and graphically intuitive method that decomposes the long chain into smaller chains, one of which serves as a hub to which the other chains are connected. Numerical tests show that VHC outperforms the long chain by 15% on average and the constraint sampling method by 38% on average. Lastly, we implement VHC on a case study in the edible oil industry in China and find substantial benefits. We conclude with some managerial insights.
Hub and Chain: Process Flexibility Design in Non-Identical Systems Using Variance Information
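A flexibility design such as the long chain discussed in the record above (S0377221716301473) is typically scored by its expected fulfilled demand. The sketch below estimates that score by Monte Carlo, solving a small transportation LP per demand scenario; it only illustrates how designs are evaluated and compared, it is not the VHC construction itself, and all capacities and demand parameters are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Evaluate a process-flexibility design by simulation: for each demand scenario,
# maximize fulfilled demand subject to plant capacities and the design's
# product-plant links (a transportation LP). Hypothetical data throughout.

rng = np.random.default_rng(0)
n = 4                                    # number of products and plants
capacity = np.full(n, 100.0)
dem_mean = np.full(n, 100.0)
dem_std = np.full(n, 30.0)

dedicated = [(i, i) for i in range(n)]                       # no flexibility
long_chain = dedicated + [(i, (i + 1) % n) for i in range(n)]  # classic long chain

def expected_sales(links, n_scenarios=500):
    total = 0.0
    for _ in range(n_scenarios):
        demand = np.maximum(rng.normal(dem_mean, dem_std), 0.0)
        c = -np.ones(len(links))                   # maximize total shipped flow
        A_ub = np.zeros((2 * n, len(links)))
        b_ub = np.concatenate([demand, capacity])
        for l, (prod, plant) in enumerate(links):
            A_ub[prod, l] = 1.0                    # flow out of a product <= its demand
            A_ub[n + plant, l] = 1.0               # flow into a plant <= its capacity
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        total += -res.fun
    return total / n_scenarios

print("dedicated :", round(expected_sales(dedicated), 1))
print("long chain:", round(expected_sales(long_chain), 1))
```

Replacing the identical means and variances above with heterogeneous ones reproduces the kind of non-identical systems the paper studies, where the long chain's efficiency degrades and an alternative structure such as VHC can do better.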
S0377221716301485
Pumped-storage hydroelectric plants are very valuable assets on the electric grid and in electric markets as they are able to pump and store water for generation, thus allowing for grid-level storage. Within the realm of short-term energy markets, we present a model for determining forward-looking thresholds for making generation and pumping decisions at such plants. A multistage stochastic programming framework is developed to optimize the thresholds with uncertain system prices over the next three days. Tractability issues are discussed and a novel method based on an implementation of the scatter search algorithm is proposed. Given the size of the multistage stochastic programming formulation, we argue that this novel method is a more accurate representation of the decision process. We demonstrate model stability and quality, and show that the forward thresholds obtained using a stochastic programming framework outperform the forward thresholds from a deterministic model, and thus can lead to efficiency gains for both the generation unit owner and the overall system in the real-time market.
Forward thresholds for operation of pumped-storage stations in the real-time energy market
S0377221716301497
In the literature on dynamic logistics, dynamism was originally defined as the proportion of online versus offline orders. Such a definition, however, loses meaning when considering purely dynamic problems in which all customer requests arrive dynamically. Existing measures of dynamism are limited to either (1) measuring the proportion of online versus offline orders or (2) measuring urgency, a concept that is orthogonal to dynamism. The present paper provides separate, independent formal definitions of dynamism and urgency applicable to purely dynamic problems. Using these formal definitions, instances of a dynamic logistic problem with varying levels of dynamism and urgency were constructed and several route scheduling algorithms were executed on these instances. Contrary to previous findings, the results indicate that dynamism is positively correlated with route quality, whereas urgency is negatively correlated with route quality. The paper contributes the theory that dynamism and urgency are two distinct concepts that deserve to be treated separately.
Measures of dynamism and urgency in logistics
S0377221716301503
Over the past few decades, Six Sigma has diffused to a wide array of organizations across the globe, fueled by its reported financial benefits. Implementing Six Sigma entails carrying out a series of Six Sigma projects that improve business processes. Scholars have investigated some of the mechanisms that influence project success, such as setting challenging goals and adhering to the Six Sigma method. However, these mechanisms have been studied in a piecemeal fashion, which does not provide a deeper understanding of their interrelationships. Developing such an understanding helps identify the contingency and boundary conditions that influence Six Sigma project execution. Drawing on Sociotechnical Systems theory, this research conceptualizes and empirically examines the interrelationships among the key mechanisms that influence project execution. Specifically, we examine the interrelationship between Six Sigma project goals (Social System), adherence to the Six Sigma method (Technical System), and knowledge creation. The analysis uses a mediation-moderation approach, which helps to examine these relationships empirically. The data come from a survey of 324 employees in 102 Six Sigma projects at two organizations. The findings show that project goals and the Six Sigma method can compensate for one another. They also suggest that adherence to the Six Sigma method becomes more beneficial for projects that create a great deal of knowledge; otherwise, the method becomes less important. Prior research has not examined these contingencies and boundary conditions, which ultimately influence project success.
The influence of challenging goals and structured method on Six Sigma project performance: A mediated moderation analysis