FileName | Abstract | Title |
---|---|---|
S0377221714001118 | Directional distance functions provide very flexible tools for investigating the performance of Decision Making Units (DMUs). Their flexibility relies on their ability to handle undesirable outputs and to account for non-discretionary inputs and/or outputs by fixing zero values in some elements of the directional vector. Simar and Vanhems (2012) and Simar, Vanhems, and Wilson (2012) indicate how the statistical properties of Farrell–Debreu type of radial efficiency measures can be transferred to directional distances. Moreover, robust versions of these distances are also available, for conditional and unconditional measures. Bădin, Daraio, and Simar (2012) have shown how conditional radial distances are useful to investigate the effect of environmental factors on the production process. In this paper we develop the operational aspects for computing conditional and unconditional directional distances and their robust versions, in particular when some of the elements of the directional vector are fixed at zero. After that, we show how the approach of Bădin et al. (2012) can be adapted in a directional distance framework, including bandwidth selection and two-stage regression of conditional efficiency scores. Finally, we suggest a procedure, based on bootstrap techniques, for testing the significance of environmental factors on directional efficiency scores. The procedure is illustrated through simulated and real data. | Directional distances and their robust versions: Computational and testing issues |
S0377221714001131 | New theoretical foundations for analyzing the newsboy problem under incomplete information about the probability distribution of random demand are presented. Firstly, we reveal that the distribution-free newsboy problem under the worst-case and best-case demand scenarios actually reduces to the standard newsboy problem with demand distributions that bound the allowable distributions in the sense of increasing concave order. Secondly, we provide a theoretical tool for seeking the best-case and worst-case order quantities when merely the support and the first k moments of the demand are known. Using this tool we derive closed-form formulas for such quantities in the case of known support, mean and variance, i.e., k = 2. Consequently, we generalize all results presented so far in the literature for the worst-case and best-case scenarios, and present some new ones. Extensions of our findings to the cases of the known mode of a unimodal demand distribution, the known median, and to other stochastic inventory problems are indicated. | The distribution-free newsboy problem under the worst-case and best-case scenarios |
S0377221714001143 | This paper analyzes risk management contracts used to handle currency risk in a decentralized supply chain that consists of risk-averse divisions in a multinational firm. Particular contracts of interest involve transferring risk to a third party by using risk-transfer contracts such as currency options and re-arranging risk between supply chain members using risk-sharing contracts. Due to decentralization, operational and risk management decisions are made locally; however, a headquarter who is interested in total supply chain profit has some controllability over those activities. We question if each kind of risk management contract can improve the utility of all supply chain members compared to the utility without any of those, and how the conditions to achieve such improvements are different. Further structural differences are investigated via sensitivity analysis with respect to the transfer price, the variability of exchange rates, and the location of the headquarter. We also find that using the two kinds of contracts jointly does not necessarily result in better outcomes. | Transferring and sharing exchange-rate risk in a risk-averse supply chain of a multinational firm |
S0377221714001155 | Numerical preference relations (NPRs) consisting of numerical judgments can be considered as a general form of the existing preference relations, such as multiplicative preference relations (MPRs), fuzzy preference relations (FPRs), interval MPRs (IV-MPRs) and interval FPRs (IV-FPRs). On the basis of NPRs, we develop a stochastic preference analysis (SPA) method to aid the decision makers (DMs) in decision making. The numerical judgments in NPRs can also be characterized by different probability distributions in accordance with practice. By exploring the judgment space of NPRs, SPA produces several outcomes including the rank acceptability index, the expected priority vector, the expected rank and the confidence factor. The outcomes are obtained by Monte Carlo simulation with at least 95% confidence degree. Based on the outcomes, the DMs can choose some of them which they find most useful to make reliable decisions. | Stochastic preference analysis in numerical preference relations |
S0377221714001167 | To analyze the input/output behavior of simulation models with multiple responses, we may apply either univariate or multivariate Kriging (Gaussian process) metamodels. In multivariate Kriging we face a major problem: the covariance matrix of all responses should remain positive-definite; we therefore use the recently proposed “nonseparable dependence” model. To evaluate the performance of univariate and multivariate Kriging, we perform several Monte Carlo experiments that simulate Gaussian processes. These Monte Carlo results suggest that the simpler univariate Kriging gives smaller mean square error. | Multivariate versus univariate Kriging metamodels for multi-response simulation models |
S0377221714001179 | By mixing concepts from both game theoretic analysis and real options theory, an investment decision in a competitive market can be seen as a “game” between firms, as firms implicitly take into account other firms’ reactions to their own investment actions. We review two decades of real option game models, suggesting which critical problems have been “solved” by considering game theory, and which significant problems have not been yet adequately addressed. We provide some insights on the plausible empirical applications, or shortfalls in applications to date, and suggest some promising avenues for future research. | Developing real option game models |
S0377221714001180 | While the application service provider (ASP) market continues to grow, it is fiercely competitive, and ASPs encounter difficulties in retaining customers and achieving long-term profitability. One stream of prior literature suggests that customer loyalty is driven by service quality, while another argues that loyalty is driven by partnerships between the firms. However, to date these competing explanations have not been tested together in the ASP context. This empirical study contributes to the literature by unifying these two previously separate streams of research on customer loyalty. Using a survey of 135 ASP clients, we find a significant relationship between the service quality perspective and the partnership perspective. We thus argue that service loyalty models ought to include both of these constructs in order to effectively explain service loyalty. | Exploring two explanations of loyalty in application service provision |
S0377221714001192 | Batching plays an important role in performance evaluation of manufacturing systems. Three types of batching are commonly seen: transfer batches, parallel batches and serial batches. To model the batching behavior correctly, a comprehensive classification of batching is proposed. Eight types of batching behavior are classified and corresponding queueing models are given. The newly proposed models are validated by simulation. | Taxonomy of batch queueing models in manufacturing systems |
S0377221714001209 | In this research, a two-stage batch production–inventory system is introduced. In this system, the production may be disrupted, for a given period of time, either at one or both stages. In this paper, firstly, a mathematical model has been developed to suggest a recovery plan for a single occurrence of disruption at either stage. Secondly, multiple disruptions have been considered, for which a new disruption may or may not affect the recovery plan of earlier disruptions. We propose a new approach that deals with a series of disruptions over a period of time, which can be implemented for disruption recovery on a real time basis. In this approach, the model formulated for single disruption has been integrated to generate initial solutions for individual disruptions and the solutions have been revised for multiple dependent disruptions with changed parameters. With the proposed approach, an optimal recovery plan can be obtained in real time, whenever the production system experiences either a sudden disruption or a series of disruptions, at different points in time. Some numerical examples and a real-world case study are presented to explain the benefits of our proposed approach. | Real time disruption management for a two-stage batch production–inventory system with reliability considerations |
S0377221714001210 | This article analyzes the fleet management problem faced by a firm when deciding which vehicles to add to its fleet. Such a decision depends not only on the expected mileage and tasks to be assigned to the vehicle but also on the evolution of fuel and CO2 emission prices and on fuel efficiency. This article contributes to the literature on fleet replacement and sustainable operations by proposing a general decision support system for the fleet replacement problem using stochastic programming and conditional value at risk (CVaR) to account for uncertainty in the decision process. The article analyzes how the CVaR associated with different types of vehicle is affected by the parameters in the model by reporting on the results of a real-world case study. | A risk management system for sustainable fleet replacement |
S0377221714001222 | Warehouses play a vital role in mitigating variations in supply and demand, and in providing value-added services in a supply chain. However, our observation of supply chain practice reveals that warehousing decisions are not included when developing a distribution plan for the supply chain. This lack of integration has resulted in a substantial variation in workload (42–220%) at our industry partner’s warehouse costing them millions of dollars. To address this real-world challenge, we introduce the warehouse-inventory-transportation problem (WITP) of determining an optimal distribution plan from vendors to customers via one or more warehouses in order to minimize the total distribution cost. We present a nonlinear integer programming model for the WITP considering supply chains with multiple vendors, stores, products, and time-periods, and one warehouse. The model also considers worker congestion at the warehouse that could affect worker productivity. A heuristic based on iterative local search is developed to solve industry-sized problems with up to 500 stores and 1000 products. Our experiments indicate that the distribution plans obtained via the WITP, as compared to a sequential approach, result in a substantial reduction in workload variance at the warehouse, while considerably reducing the total distribution cost. These plans, however, are sensitive to aisle configuration and technology at the warehouse, and the level and productivity of temporary workers. | The warehouse-inventory-transportation problem for supply chains |
S0377221714001234 | In this paper, a novel and fast algorithm for identifying the Minimum Size Instance (MSI) of the equivalence class of the Pallet Loading Problem (PLP) is presented. The new algorithm is based on the fact that the PLP instances of the same equivalence class have the property that the aspect ratios of their items belong to an open interval of real numbers. This interval characterises the PLP equivalence classes and is referred to as the Equivalence Ratio Interval (ERI) by the authors of this paper. The time complexity of the new algorithm is two polynomial orders lower than that of the best known algorithm. The authors of this paper also suggest that the concept of MSI and its identifying algorithm can be used to transform the non-integer PLP into its equivalent integer MSI. | A fast algorithm for identifying minimum size instances of the equivalence classes of the Pallet Loading Problem |
S0377221714001246 | A key issue in applying multi-attribute project portfolio models is specifying the baseline value – a parameter which defines how valuable not implementing a project is relative to the range of possible project values. In this paper we present novel baseline value specification techniques which admit incomplete preference statements and, unlike existing techniques, make it possible to model problems where the decision maker would prefer to implement a project with the least preferred performance level in each attribute. Furthermore, we develop computational methods for identifying the optimal portfolios and the value-to-cost-based project rankings for all baseline values. We also show how these results can be used to (i) analyze how sensitive project and portfolio decision recommendations are to variations in the baseline value and (ii) provide project decision recommendations in a situation where only incomplete information about the baseline value is available. | Baseline value specification and sensitivity analysis in multiattribute project portfolio selection |
S0377221714001258 | A typical railway crew scheduling problem consists of two phases: a crew pairing problem to determine a set of crew duties and a crew rostering problem. The crew rostering problem aims to find a set of rosters that forms the workforce assignment of crew duties and rest periods while satisfying several working regulations. In this paper, we present a two-level decomposition approach to solve the railway crew rostering problem with the objective of fair working conditions. To reduce computational effort, the original problem is decomposed into an upper-level master problem and a lower-level subproblem. The subproblem can be further decomposed into several subproblems for each roster. These problems are iteratively solved by incorporating cuts into the master problem. We show that the relaxed problem of the master problem can be formulated as a uniform parallel machine scheduling problem to minimize makespan, which is NP-hard. An efficient branch-and-bound algorithm is applied to solve the master problem. Effective valid cuts are developed to reduce the feasible search space and tighten the duality gap. Using data provided by the railway company, we demonstrate the effectiveness of the proposed method compared with that of constraint programming techniques for large-scale problems through computational experiments. | Two-level decomposition algorithm for crew rostering problems with fair working condition |
S0377221714001271 | We propose an extension to the flow shop scheduling problem named Heterogeneous Flow Shop Scheduling Problem (Het-FSSP), where two simultaneous issues have to be resolved: finding the best worker assignment to the workstations, and solving the corresponding scheduling problem. This problem is motivated by Sheltered Work centers for Disabled, whose main objective is the labor integration of persons with disabilities, an important aim not only for these centers but for any company desiring to overcome the traditional standardized vision of the workforce. In such a scenario the goal is to maintain high productivity levels by minimizing the maximum completion time, while respecting the diverse capabilities and paces of the heterogeneous workers, which increases the complexity of finding an optimal schedule. We present a mathematical model that extends a flow shop model to admit a heterogeneous worker assignment, and propose a heuristic based on scatter search and path relinking to solve the problem. Computational results show that this approach finds good solutions within a short time, providing the production managers with practical approaches for this combined assignment and scheduling problem. | Flow shop scheduling with heterogeneous workers |
S0377221714001283 | We consider project scheduling where the project manager’s objective is to minimize the time from when an adversary discovers the project until the completion of the project. We analyze the complexity of the problem identifying both polynomially solvable and NP-hard versions of the problem. The complexity of the problem is seen to be dependent on the nature of renewable resource constraints, precedence constraints, and the ability to crash activities in the project. | On the complexity of project scheduling to minimize exposed time |
S0377221714001295 | We formulate the multiple knapsack assignment problem (MKAP) as an extension of the multiple knapsack problem (MKP), as well as of the assignment problem. Except for small instances, MKAP is hard to solve to optimality. We present a heuristic algorithm to solve this problem approximately but very quickly. We first discuss three approaches to evaluate its upper bound, and prove that these methods compute an identical upper bound. In this process, reference capacities are derived, which enables us to decompose the problem into mutually independent MKPs. These MKPs are solved heuristically, and in total give an approximate solution to MKAP. Through numerical experiments, we evaluate the performance of our algorithm. Although the algorithm is weak for small instances, we find it promising for large instances. Indeed, for instances with more than a few thousand items we usually obtain solutions with relative errors less than 0.1% within one CPU second. | Upper and lower bounding procedures for the multiple knapsack assignment problem |
S0377221714001301 | When we use a PSM, what is it we are actually doing? An answer to this question would enable the PSM community to considerably enlarge the available source of case studies by the inclusion of examples of non-codified PSM use. We start from Checkland’s own proposal for a “constitutive definition” of SSM, which originated from trying to answer the question of knowing when a claim of SSM use was legitimate. Extending this idea to a generic constitutive definition for all PSMs leads us to propose a self-consistent labelling schema for observed phenomena arising from PSMs in action. This consists of a set of testable propositions, which, through observation of putative PSM use, can be used to assess the validity of claims of PSM use. Such evidential support for the propositions as may be found in putative PSM use can then make it back into a broader axiomatic formulation of PSMs through the use of a set-theoretic approach, which enables our method to scale to large data sets. The theoretical underpinning to our work is in causal realism and middle range theory. We illustrate our approach through the analysis of three case studies drawn from engineering organisations, a rich source of possible non-codified PSM use. The combination of a method for judging cases of non-codified PSM use, sound theoretical underpinning, and scalability to large data sets, we believe, leads to a demystification of PSMs and should encourage their wider use. | The non-codified use of problem structuring methods and the need for a generic constitutive definition |
S0377221714001313 | In a financial market composed of n risky assets and a riskless asset, where short sales are allowed and mean–variance investors can be ambiguity averse, i.e., diffident about mean return estimates where confidence is represented using ellipsoidal uncertainty sets, we derive a closed form portfolio rule based on a worst case max–min criterion. Then, in a market where all investors are ambiguity-averse mean–variance investors with access to given mean return and variance–covariance estimates, we investigate conditions regarding the existence of an equilibrium price system and give an explicit formula for the equilibrium prices. In addition to the usual equilibrium properties that continue to hold in our case, we show that the diffidence of investors in a homogeneously diffident (with bounded diffidence) mean–variance investors’ market has a deflationary effect on equilibrium prices with respect to a pure mean–variance investors’ market in equilibrium. Deflationary pressure on prices may also occur if one of the investors (in an ambiguity-neutral market) with no initial short position decides to adopt an ambiguity-averse attitude. We also establish a CAPM-like property that reduces to the classical CAPM in case all investors are ambiguity-neutral. | Equilibrium in an ambiguity-averse mean–variance investors market |
S0377221714001325 | Estimation of efficiency of firms in a non-competitive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In this scenario, a value-based technology becomes an appropriate reference technology against which efficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA set up to develop new directional cost- and revenue-based measures of efficiency, which are then decomposed into their respective directional value-based technical and allocative efficiencies. These new directional value-based measures are more general, and include the existing value-based measures as special cases. These measures satisfy several desirable properties of an ideal efficiency measure. These new measures are advantageous over the existing ones in terms of (1) their ability to satisfy the most important property of translation invariance; (2) choices over the use of suitable direction vectors in handling negative data; and (3) flexibility in providing the decision makers with the option of specifying preferable direction vectors to incorporate their preferences. Finally, under the condition of no prior unit price information, a directional value-based measure of profit inefficiency is developed for firms whose underlying objectives are profit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of the banking industry. | Cost, revenue and profit efficiency measurement in DEA: A directional distance function approach |
S0377221714001337 | Empty repositions are a major problem for car rental companies that deal with special types of vehicles whose number of units is small. In order to meet reservation requirements concerning time and location, companies are forced to transfer cars between rental stations, bearing significant costs and increasing the environmental impact of their activity due to the fuel consumption and CO2 emission. In this paper, this problem is tackled under a vehicle-reservation assignment framework as a network-flow model in which the profit is maximized. The reservations are allocated considering the initial and future availability of each car, interdependencies between rental groups, and different reservation priorities. To solve this model, a relax-and-fix heuristic procedure is proposed, including a constraint based on local branching that enables and controls modifications between iterations. Using real instances, the value of this approach is established and an improvement of 33% was achieved when compared to the company’s current practices. | A relax-and-fix-based algorithm for the vehicle-reservation assignment problem in a car rental company |
S0377221714001349 | This research studies the performance of circular unidirectional chaining – a “lean” configuration of lateral inventory sharing among retailers or warehouses – and compares its performance to that of no pooling and complete pooling in terms of expected costs and optimal order quantities. Each retailer faces uncertain demand, and we wish to minimize procurement, shortage and transshipment costs. In a circular unidirectional chain all retailers are connected in a closed loop, so that each retailer can cooperate with exactly two others as follows: receive units (if needed/available) from the left “neighbor” and send units (if needed/available) to the right, and a retailer who receives units from one neighbor is not allowed to send any units to its other neighbor. If the chain consists of at least three nodes and demands across nodes are i.i.d., its performance turns out to be independent of the number of nodes. The optimal stocking is therefore solved analytically. Analytical comparative statics with respect to cost parameters and demand distributions are provided. We also examine thoroughly the cases of uniform demand distribution (analytically) and normal demand distribution (numerically). In the uniform case with free transshipment, a unidirectional chain can save up to 1/3 of the expected cost of separate newsvendors caused by uncertainty. For three nodes, the advantage of complete pooling over unidirectional chaining does not exceed 19%. | Inventory sharing via circular unidirectional chaining |
S0377221714001350 | The Single-Vehicle Cyclic Inventory Routing Problem (SV-CIRP) belongs to the class of Inventory Routing Problems (IRP) in which the supplier optimises both the distribution costs and the inventory costs at the customers. The goal of the SV-CIRP is to minimise both kinds of costs and to maximise the collected rewards, by selecting a subset of customers from a given set and determining the quantity to be delivered to each customer and the vehicle routes, while avoiding stockouts. A cyclic distribution plan should be developed for a single vehicle. We present an iterated local search (ILS) metaheuristic that exploits typical characteristics of the problem and opportunities to reduce the computation time. Experimental results on 50 benchmark instances show that our algorithm improves the results of the best available algorithm by 16.02% on average. Furthermore, 32 new best known solutions are obtained. A sensitivity analysis demonstrates that the performance of the algorithm is not influenced by small changes in the parameter settings of the ILS. | An iterated local search algorithm for the single-vehicle cyclic inventory routing problem |
S0377221714001362 | The goal of factor screening is to find the really important inputs (factors) among the many inputs that may be changed in a realistic simulation experiment. A specific method is sequential bifurcation (SB), which is a sequential method that changes groups of inputs simultaneously. SB is most efficient and effective if the following assumptions are satisfied: (i) second-order polynomials are adequate approximations of the input/output functions implied by the simulation model; (ii) the signs of all first-order effects are known; and (iii) if two inputs have no important first-order effects, then they have no important second-order effects either (heredity property). This paper examines SB for random simulation with multiple responses (outputs), called multi-response SB (MSB). This MSB selects groups of inputs such that—within a group—all inputs have the same sign for a specific type of output, so no cancellation of first-order effects occurs. To obtain enough replicates (replications) for correctly classifying a group effect or an individual effect as being important or unimportant, MSB applies Wald’s sequential probability ratio test (SPRT). The initial number of replicates in this SPRT is also selected efficiently by MSB. Moreover, MSB includes a procedure to validate the three assumptions of MSB. The paper evaluates the performance of MSB through extensive Monte Carlo experiments that satisfy all MSB assumptions, and through a case study representing a logistic system in China; the results are very promising. | Factor screening for simulation with multiple responses: Sequential bifurcation |
S0377221714001374 | Sales forecasting at the UPC level is important for retailers to manage inventory. In this paper, we propose more effective methods to forecast retail UPC sales by incorporating competitive information including prices and promotions. The impact of these competitive marketing activities on the sales of the focal product has been extensively documented. However, competitive information has been surprisingly overlooked by previous studies in forecasting UPC sales, probably because of the problem of too many competitive explanatory variables. That is, each FMCG product category typically contains a large number of UPCs and is consequently associated with a large number of competitive explanatory variables. Under such a circumstance, time series models can easily become over-fitted and thus generate poor forecasting results. Our forecasting methods consist of two stages. In the first stage, we refine the competitive information. We identify the most relevant explanatory variables using variable selection methods, or alternatively, pool information across all variables using factor analysis to construct a small number of diffusion indexes. In the second stage, we specify the Autoregressive Distributed Lag (ADL) model following a general to specific modelling strategy with the identified most relevant competitive explanatory variables and the constructed diffusion indexes. We compare the forecasting performance of our proposed methods with the industrial practice method and the ADL model specified exclusively with the price and promotion information of the focal product. The results show that our proposed methods generate substantially more accurate forecasts across a range of product categories. | The value of competitive information in forecasting FMCG retail product sales and the variable selection problem |
S0377221714001386 | In this paper, an original approach to formulate and solve a multi-period and multi-commodity distribution (re)planning problem for a multi-stage centralized upstream network with structure dynamics considerations is proposed. The first original idea of this study is the description of the supply chain as a non-stationary dynamic system along with a linear programming (LP) model. This allows the design and control variables to be distributed between the dynamic and static models. The second original idea is to transition from the classical LP model to a maximal flow problem by excluding the demand constraint from the LP model. The first contribution of this study is a multi-objective problem formulation that opens additional perspectives for decision-making beyond cost-oriented optimization. Second, the maximal flow LP model allows a feasible solution to be found even for unbalanced supply and demand cases without relaxing hard capacity constraints. Third, this improves the service level at the strategic inventory holding point. Fourth, structure dynamics and the ripple effect can be taken into account. Structure dynamics allows considering different execution scenarios and developing suggestions on replanning in the case of disturbances. The graph of structural reliability allows identifying the optimistic and pessimistic scenarios. These scenarios are used for computational experiments with the developed model and the industrial models. With the developed model, the practical issues of scenario-based risk identification strategy and operational distribution planning can be interlinked. | Optimal distribution (re)planning in a centralized multi-stage supply network under conditions of the ripple effect and structure dynamics |
S0377221714001398 | This paper presents a general and numerically accurate lattice methodology to price risky corporate bonds. It can handle complex default boundaries, discrete payments, various asset sales assumptions, and early redemption provisions for which closed-form solutions are unavailable. Furthermore, it can price a portfolio of bonds that accounts for their complex interaction, whereas traditional approaches can only price each bond individually or a small portfolio of highly simplistic bonds. Because of the generality and accuracy of our method, it is used to investigate how credit spreads are influenced by the bond provisions and the change in a firm’s liability structure due to bond repayments. | Evaluating corporate bonds with complicated liability structures and bond provisions |
S0377221714001404 | We study the strategic behavior of two countries facing transboundary CO2 pollution under a differential game setting. In our model, the reduction of CO2 concentration occurs through the carbon capture and storage process, rather than through the adoption of cleaner technologies. Furthermore, we first provide the explicit short-run dynamics for this dynamic game with symmetric open-loop and a special Markovian Nash strategy. Then, we compare these strategies at the games’ steady states and along some balanced growth paths. Our results show that if the initial level of CO2 is relatively high, state dependent emissions reductions can lead to higher overall environmental quality, hence, feedback strategy leads to less social waste. | Carbon capture and storage and transboundary pollution: A differential game approach |
S0377221714001611 | In this article, we derive a solution for a linear stochastic model on a complex time domain. In this type of models, the time domain can be any collection of points along the real number line, so these models are suitable for problems where events do not occur at evenly-spaced time intervals. We present examples based on well-known results from economics and finance to illustrate how our model generalizes and extends conventional dynamic models. | Cagan type rational expectation model on complex discrete time domains |
S0377221714001623 | When modeling optimal product mix under emission restrictions produces a solution with an unacceptable level of profit, the analyst is moved to investigate the cause(s). Interior analysis (IA) is proposed for this purpose. With IA, the analyst can investigate the impact of accommodating emission controls in a step-by-step, one-at-a-time manner and, in doing so, track how profit and other important features of the product mix degrade and to which emission control enforcements this diminution may be attributed. In this way, the analyst can assist the manager in identifying implementation strategies. Although IA is presented within the context of a linear programming formulation of the green product mix problem, its methodology may be applied to other modeling frameworks. Quantity-dependent penalty rates and transformations of emissions to forms with or without economic value are included in the modeling and illustrations of IA. | Interior analysis of the green product mix solution |
S0377221714001635 | Nowadays, due to social, legal, and economic reasons, dealing with the reverse supply chain is an unavoidable issue in many industries. In addition, the volatility of real-world parameters leads us to use stochastic optimization techniques. In location–allocation problems (such as the design and planning problem presented here), two-stage stochastic optimization techniques are the most appropriate and popular approaches. Nevertheless, traditional two-stage stochastic programming is risk neutral, as it considers only the expectation of random variables in its objective function. In this paper, a risk-averse two-stage stochastic programming approach is used to design and plan a reverse supply chain network. We specify the conditional value at risk (CVaR) as a risk evaluator, which is a linear, convex, and mathematically well-behaved risk measure. We first consider return amounts and prices of second products as two stochastic parameters. Then, the optimum point is obtained in a two-stage stochastic structure with a mean-risk (mean-CVaR) objective function. Appropriate numerical examples are designed and solved in order to compare the classical and the proposed approaches. We comprehensively discuss the effectiveness of incorporating a risk measure in a two-stage stochastic model. The results demonstrate the capabilities and acceptability of the developed risk-averse approach and the effects of the risk parameters on the model’s behavior. | Reverse logistics network design and planning utilizing conditional value at risk |
S0377221714001660 | For a repairable k-out-of-n:G system consisting of line-replaceable units, its operational availability depends on component reliability, its redundancy level, and spare parts availability. As a result, it is important to consider redundancy allocation and spare parts provisioning simultaneously in maximizing the system’s operational availability. In prior studies, however, these important aspects are often handled separately in the areas of reliability engineering and spare parts logistics. In this paper, we study a collection of operational availability maximization problems, in which the component redundancy and the spares stocking quantities are to be determined simultaneously under economic and physical constraints. To solve this type of problem, continuous-time Markov chain models are developed first for a single repairable k-out-of-n:G system under different shut-off rules, and some important properties of the corresponding operational availability and spare parts availability are derived. Then, we extend the models to series systems consisting of multiple repairable k-out-of-n:G subsystems. The related optimization problems are reformulated as binary integer linear programs and solved using a branch-and-bound method. Numerical examples, including a real-world application of automatic test equipment, are presented to illustrate this integrated product-service solution and to offer valuable managerial insights. | Maximizing system availability through joint decision on component redundancy and spares inventory |
S0377221714001672 | With modern data-acquisition equipment and on-line computers used during production, it is now common to monitor several correlated quality characteristics simultaneously in multivariate processes. Multivariate control charts (MCC) are important tools for monitoring multivariate processes. One difficulty encountered with multivariate control charts is the identification of the variable or group of variables that cause an out-of-control signal. Expert knowledge, either in combination with a wrapper-based supervised classifier or with a pre-filter and wrapper, is the standard approach to detecting the sources of an out-of-control signal. However, gathering expert knowledge for source identification is costly and may introduce human error. Individual univariate control charts (UCC) and decomposition of T² statistics are also often used simultaneously to identify the sources, but these either ignore the correlations between the sources or take more time as the dimension increases. The aim of this paper is to develop a source identification approach that does not need any expert knowledge and can detect out-of-control signals with lower computational complexity. We propose a hybrid wrapper–filter based source identification approach that hybridizes a Mutual Information (MI) based Maximum Relevance (MR) filter ranking heuristic with an Artificial Neural Network (ANN) based wrapper. The Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA) has been combined with MR (MR-ANNIGMA) to utilize the knowledge about the intrinsic pattern of the quality characteristics computed by the filter for directing the wrapper search process. To compute the optimal ANNIGMA score, we also propose a Global MR-ANNIGMA using the non-functional relationship between variables, which is independent of the derivative of the objective function and has the potential to overcome the local optimization problem of ANN training. The novelty of the proposed approaches is that they combine the advantages of both filter and wrapper approaches and do not require any expert knowledge about the sources of the out-of-control signals. The heuristic-score-based subset generation process also reduces the search space to polynomial growth, which in turn reduces computational time. The proposed approaches were tested by exhaustive experiments using both simulated and real manufacturing data and compared to existing methods including independent filter, wrapper and Multivariate EWMA (MEWMA) methods. The results indicate that the proposed approaches can identify the sources of out-of-control signals more accurately than existing approaches. | A hybrid wrapper–filter approach to detect the source(s) of out-of-control signals in multivariate manufacturing process |
S0377221714001684 | In DEA, there are two frameworks for efficiency assessment and targeting: the greatest and the least distance framework. The greatest distance framework provides us with the efficient targets that are determined by the farthest projections to the assessed decision making unit via maximization of the p-norm relative to either the strongly efficient frontier or the weakly efficient frontier. Non-radial measures belonging to the class of greatest distance measures are the slacks-based measure (SBM) and the range-adjusted measure (RAM). Whereas these greatest distance measures have traditionally been utilized because of their computational ease, least distance projections are quite often more appropriate than greatest distance projections from the perspective of managers of decision-making units because closer efficient targets may be attained with less effort. In spite of this desirable feature of the least distance framework, the least distance (in)efficiency versions of the additive measure, SBM and RAM do not even satisfy weak monotonicity. In this study, therefore, we introduce and investigate least distance p-norm inefficiency measures that satisfy strong monotonicity over the strongly efficient frontier. In order to develop these measures, we extend a free disposable set and introduce a tradeoff set that implements input–output substitutability. | Input–output substitutability and strongly monotonic p-norm least distance DEA measures |
S0377221714001696 | The waste disposal charging fee (WDCF) has long been adopted for stimulating major project stakeholders’ (particularly project clients and contractors) incentives to minimize solid waste and increase the recovery of wasted materials in the construction industry. However, the present WDCFs applied in many regions of China are mostly determined based on a rule of thumb. Consequently the effectiveness of implementing these WDCFs is very limited. This study aims at addressing this research gap through developing a system dynamics based model to determine an appropriate WDCF in the construction sector. The data used to test and validate the model was collected from Shenzhen of south China. By using the model established, two types of simulations were carried out. One is the base run simulation to investigate the status quo of waste generation in Shenzhen; the other is policy analysis simulation, with which an appropriate WDCF could be determined to reduce waste generation and landfilling, maximize waste recycling, and minimize the waste dumped inappropriately. The model developed can function as a tool to effectively determine an appropriate WDCF in Shenzhen. Further, it can also be used by other regions intending to stimulate construction waste minimization and recycling through implementing an optimal WDCF. | A system dynamics model for determining the waste disposal charging fee in construction |
S0377221714001702 | Agents are connected to each other through a tree. Each link of the tree has an associated cost and the total cost of the tree must be divided among the agents. In this paper we assume that agents are asymmetric (think of countries that use aqueducts to bring water from the rainy regions to the dry regions, for example). We suppose that each agent is endowed with a production and a demand of a good that can be sent through the tree. This heterogeneity implies that the links are not equally important for all the agents. In this work we propose, and characterize axiomatically, two rules for sharing the cost of the tree when asymmetries apply. | Cost allocation in asymmetric trees |
S0377221714001714 | Forecasting as a scientific discipline has progressed a lot in the last 40 years, with Nobel prizes being awarded for seminal work in the field, most notably to Engle, Granger and Kahneman. Despite these advances, even today we are unable to answer a very simple question, the one that is always the first tabled during discussions with practitioners: “what is the best method for my data?”. In essence, as there are horses for courses, there must also be forecasting methods that are more tailored to some types of data, and, therefore, enable practitioners to make informed method selection when facing new data. The current study attempts to shed light on this direction via identifying the main determinants of forecasting accuracy, through simulations and empirical investigations involving 14 popular forecasting methods (and combinations of them), seven time series features (seasonality, trend, cycle, randomness, number of observations, inter-demand interval and coefficient of variation) and one strategic decision (the forecasting horizon). Our main findings dictate that forecasting accuracy is influenced as follows: (a) for fast-moving data, cycle and randomness have the biggest (negative) effect and the longer the forecasting horizon, the more accuracy decreases; (b) for intermittent data, the inter-demand interval has a bigger (negative) impact than the coefficient of variation; and (c) for all types of data, increasing the length of a series has a small positive effect. | ‘Horses for Courses’ in demand forecasting |
S0377221714001726 | Most previous related studies on warehouse configurations and operations only investigated single-level storage rack systems, where neither the height of the storage racks nor the vertical movement of the picking operations is considered. However, in order to utilize the space efficiently, high-level storage systems are often used in warehouses in practice. This paper presents a travel time estimation model for a high-level picker-to-part system with consideration of a class-based storage policy and various routing policies. The results indicate that the proposed model appears to be sufficiently accurate for practical purposes. Furthermore, the effects of storage and routing policies on the travel time and the optimal warehouse layout are discussed in the paper. | A travel time estimation model for a high-level picker-to-part system with class-based storage policies |
S0377221714001738 | The strategic design of a robust supply chain has to determine the configuration of the supply chain so that its performance remains of a consistently high quality for all possible future conditions. The current modeling techniques often only consider either the efficiency or the risk of the supply chain. Instead, we define the strategic robust supply chain design as the set of all Pareto-optimal configurations considering simultaneously the efficiency and the risk, where the risk is measured by the standard deviation of the efficiency. We model the problem as the Mean–Standard Deviation Robust Design Problem (MSD-RDP). Since the standard deviation has a square root expression, which makes standard maximization algorithms based on mixed-integer linear programming non-applicable, we show the equivalency to the Mean–Variance Robust Design Problem (MV-RDP). The MV-RDP yields an infinite number of mixed-integer programming problems with quadratic objective (MIQO) when considering all possible tradeoff weights. In order to identify all Pareto-optimal configurations efficiently, we extend the branch-and-reduce algorithm by applying optimality cuts and upper bounds to eliminate parts of the infeasible region and the non-Pareto-optimal region. We show that all Pareto-optimal configurations can be found within a prescribed optimality tolerance with a finite number of iterations of solving the MIQO. Numerical experience for a metallurgical case is reported. | Strategic robust supply chain design based on the Pareto-optimal tradeoff between efficiency and risk |
S0377221714001751 | This study uses multivariate regression analysis to examine the effects of asset specificity on the financial performance of both external and internal governance structures for medical device maintenance, and investigates how the financial performance of external governance structures differs depending on whether a hospital is private or public. The hypotheses were tested using information on 764 medical devices and 62 maintenance service providers, resulting in 1403 maintenance transactions. As such, our data sample is significantly larger than those used in previous studies in this area. The results empirically support our core theoretical argument that governance financial performance is influenced by assets specificity. | The effects of asset specificity on maintenance financial performance: An empirical application of Transaction Cost Theory to the medical device maintenance field |
S0377221714001763 | In this paper we examine the various effects that workstations and rework loops with identical parallel processors and stochastic processing times have on the performance of a mixed-model production line. Of particular interest are issues related to sequence scrambling. In many production systems (especially those operating on just-in-time or in-line vehicle sequencing principles), the sequence of orders is selected carefully to optimize line efficiency while taking into account various line balancing and product spacing constraints. However, this sequence is often altered due to stochastic factors during production. This leads to significant economic consequences, due to either the degraded performance of the production line, or the added cost of restoring the sequence (via the use of systems such as mix banks or automated storage and retrieval systems). We develop analytical formulas to quantify both the extent of sequence scrambling caused by a station of the production line, and the effects of this scrambling on downstream performance. We also develop a detailed Markov chain model to analyze related issues regarding line stoppages and throughput. We demonstrate the usefulness of our methods on a range of illustrative numerical examples, and discuss the implications from a managerial point of view. | Modeling sequence scrambling and related phenomena in mixed-model production lines |
S0377221714001775 | Customer requirements play a vital and important role in the design of products and services. Quality Function Deployment (QFD) is a popular, widely used method that helps translate customer requirements into design specifications. Thus, the foundation for a successful QFD implementation lies in the accurate capturing and prioritization of these requirements. This paper proposes and tests the use of an alternative framework for prioritizing students’ requirements within QFD. More specifically, Fuzzy Analytic Hierarchy Process (Fuzzy-AHP) and the linear programming method (LP-GW-AHP) based on Data Envelopment Analysis (DEA) are embedded into QFD (QFD-LP-GW-Fuzzy AHP) in order to account for the inherent subjectivity of human judgements. The effectiveness of the proposed framework is assessed in capturing and prioritizing students’ requirements regarding courses’ learning outcomes within the process of an academic course design. Sensitivity analysis evaluates the robustness of the prioritization solution and implications for course design specifications are discussed. | Capturing and prioritizing students’ requirements for course design by embedding Fuzzy-AHP and linear programming in QFD |
S0377221714001787 | For small resource-rich developing economies, specialization in raw exports is usually considered to be detrimental to growth and Resource-Based Industrialization (RBI) is often advocated to promote export diversification. This paper develops a new methodology to assess the performance of these RBI policies. We first formulate an adapted mean-variance portfolio model that explicitly takes into consideration: (i) a technology-based representation of the set of feasible export combinations and (ii) the cost structure of the resource processing industries. Second, we provide a computationally tractable reformulation of the resulting mixed-integer nonlinear optimization problem. Finally, we present an application to the case of natural gas, comparing current and efficient export-oriented industrialization strategies of nine gas-rich developing countries. | Export diversification through resource-based industrialization: The case of natural gas |
S0377221714001799 | In this paper we study the economic lot sizing problem with cost discounts. In the economic lot sizing problem a facility faces known demands over a discrete finite horizon. At each period, the ordering cost function and the holding cost function are given and they can be different from period to period. There are no constraints on the quantity ordered in each period and backlogging is not allowed. The objective is to decide when and how much to order so as to minimize the total ordering and holding costs over the finite horizon without any shortages. We study two different cost discount functions. The modified all-unit discount cost function alternates increasing and flat sections, starting with a flat section that indicates a minimum charge for small quantities. While in general the economic lot sizing problem with modified all-unit discount cost function is known to be NP-hard, we assume that the cost functions do not vary from period to period and identify a polynomial case. Then we study the incremental discount cost function which is an increasing piecewise linear function with no flat sections. The efficiency of the solution algorithms follows from properties of the optimal solution. We computationally test the polynomial algorithms against the use of CPLEX. | Polynomial cases of the economic lot sizing problem with cost discounts |
S0377221714001805 | Computing optimal capacity allocations in network revenue management is computationally hard. The problem of computing exact Nash equilibria in non-zero-sum games is computationally hard, too. We present a fast heuristic that, in case it cannot converge to an exact Nash equilibrium, computes an approximation to it in general network revenue management problems under competition. We also investigate the question whether it is worth taking competition into account when making (network) capacity allocation decisions. Computational results show that the payoffs in the approximate equilibria are very close to those in exact ones. Taking competition into account never leads to a lower revenue than ignoring competition, no matter what the competitor does. Since we apply linear continuous models, computation time is very short. | Computing approximate Nash equilibria in general network revenue management games |
S0377221714001817 | In this paper, the anchor points in DEA, as an important subset of the set of extreme efficient points of the production possibility set (PPS), are studied. A basic definition, utilizing the multiplier DEA models, is given. Then, two theorems are proved which provide necessary and sufficient conditions for characterization of these points. The main results of the paper lead to a new interesting connection between DEA and sensitivity analysis in linear programming theory. By utilizing the established theoretical results, a successful procedure for identification of the anchor points is presented. | Identifying the anchor points in DEA using sensitivity analysis in linear programming |
S0377221714001829 | The purpose of this paper is to develop an early warning system to predict currency crises. In this study, a data set covering the period of January 1992–December 2011 of the Turkish economy is used, and an early warning system is developed with artificial neural networks (ANN), decision trees, and logistic regression models. The Financial Pressure Index (FPI) is an aggregated value, composed of the percentage changes in the dollar exchange rate, gross foreign exchange reserves of the Central Bank, and the overnight interest rate. In this study, FPI is the dependent variable, and thirty-two macroeconomic indicators are the independent variables. The three models, tested on Turkish crisis cases, have given clear signals that predicted the 1994 and 2001 crises 12 months earlier. Considering all three prediction model results, Turkey’s economy is not expected to have a currency crisis (ceteris paribus) until the end of 2012. This study is unique in that the decision support model developed here uses basic macroeconomic indicators to predict crises up to a year before they actually happened, with an accuracy rate of approximately 95%. It also ranks the leading factors of currency crisis with regard to their importance in predicting the crisis. | Developing an early warning system to predict currency crises
S0377221714001830 | The paper addresses restaurant revenue management from both a strategic and an operational point of view. Strategic decisions in restaurants are mainly related to defining the most profitable combination of tables that will constitute the restaurant. We propose new formulations of the so-called “Tables Mix Problem” by taking into account several features of the real setting. We compare the proposed models in a computational study showing that restaurants, with the capacity of managing tables as renewable resources and of combining different-sized tables, can improve expected revenue performances. Operational decisions are mainly concerned with the more profitable assignment of tables to customers. Indeed, the “Parties Mix Problem” consists of deciding on accepting or denying a booking request from different groups of customers, with the aim of maximizing the total expected revenue. A dynamic formulation of the “Parties Mix Problem” is presented together with a linear programming approximation, whose solutions can be used to define capacity control policies based on booking limits and bid prices. Computational results compare the proposed policies and show that they lead to higher revenues than the traditional strategies used to support decision makers. | Strategic and operational decisions in restaurant revenue management |
S0377221714001842 | Although splitting shipments across multiple delivery modes typically increases total shipping costs as a result of diseconomies of scale, it may offer certain benefits that can more than offset these costs. These benefits include a reduction in the probability of stockout and in the average inventory costs. We consider a single-stage inventory replenishment model that includes two delivery modes: a cheaper, less reliable mode, and another, more expensive but perfectly reliable mode. The high-reliability mode is only utilized in replenishment intervals in which the lead time of the less-reliable mode exceeds a certain value. This permits substituting the high-reliability mode for safety stock, to some degree. We characterize optimal replenishment decisions with these two modes, as well as the potential benefits of simultaneously using two delivery modes. | A (Q, R) inventory replenishment model with two delivery modes |
S0377221714001854 | Integrated device manufacturers (IDMs) and foundries are two types of manufacturers in the semiconductor industry. IDMs integrate both design and manufacturing functions whereas foundries solely focus on manufacturing. Since foundries often have cost advantage over IDMs due to their specialization and economies of scale, IDMs have incentives to source from foundries for the purpose of avoiding excessive capacity investment risk. As the IDM is also a potential capacity source, the IDM and foundry are in a horizontal setting rather than a purely vertical setting. In the absence of sophisticated contracts, the benchmark contract for the IDM and foundry is a wholesale price contract. We define “coordinating” contracts as those that improve both the IDM’s and foundry’s expected profits over the benchmark wholesale price contract and also lead to the maximum system profit. This paper examines if there exist coordinating capacity reservation contracts. It is found that wholesale price contracts in the horizontal setting cannot achieve the maximum system profit due to either double marginalization effect, or “misalignment of capacity-usage-priority”. In contrast, if the IDM’s capacity investment risk is not too low, there always exist coordinating capacity reservation contracts. Furthermore, under coordinating contracts, the IDM’s sourcing structure, either sole sourcing from the foundry or dual sourcing, is contingent on the firms’ cost structures. | Horizontal coordinating contracts in the semiconductor industry |
S0377221714001866 | We examine a supply chain in which a manufacturer participates in a sealed-bid lowest price procurement auction through a distributor. This form of supply chain is common when a manufacturer is active in an overseas market without establishing a local subsidiary. To gain a strategic advantage in the division of profit, the manufacturer and distributor may intentionally conceal information about the underlying cost distribution of the competition. In this environment of information asymmetry, we determine the equilibrium mark-up, the ex-ante expected mark-up and expected profit of the manufacturer and the equilibrium bid of the distributor. In unilateral communication, we demonstrate the informed agent’s advantage, which results in a higher mark-up. Under information sharing, we show that profit is equally shared among the supply chain partners and we explicitly derive the mark-up when the underlying cost distribution is uniform in [0,1]. The model and findings are illustrated by a numerical example. | Pricing in a supply chain for auction bidding under information asymmetry
S0377221714001878 | This paper presents a composite model in which two simulation approaches, discrete-event simulation (DES) and system dynamics (SD), are used together to address a major healthcare problem, the sexually transmitted infection Chlamydia. The paper continues an on-going discussion in the literature about the potential benefits of linking DES and SD. Previous researchers have argued that DES and SD are complementary approaches and many real-world problems would benefit from combining both methods. In this paper, a DES model of the hospital outpatient clinic which treats Chlamydia patients is combined with an SD model of the infection process in the community. These two models were developed in commercial software and linked in an automated fashion via an Excel interface. To our knowledge this is the first time such a composite model has been used in a healthcare setting. The model shows how the prevalence of Chlamydia at a community level affects (and is affected by) operational level decisions made in the hospital outpatient department. We discuss the additional benefits provided by the composite model over and above the benefits gained from the two individual models. | Combining discrete-event simulation and system dynamics in a healthcare setting: A composite model for Chlamydia infection |
S0377221714001891 | The paper presents a simulation–optimization modeling framework for the evacuation of large-scale pedestrian facilities with multiple exit gates. The framework integrates a genetic algorithm (GA) and a microscopic pedestrian simulation–assignment model. The GA searches for the optimal evacuation plan, while the simulation model guides the search through evaluating the quality of the generated evacuation plans. Evacuees are assumed to receive evacuation instructions in terms of the optimal exit gates and evacuation start times. The framework is applied to develop an optimal evacuation plan for a hypothetical crowded exhibition hall. The obtained results show that the model converges to a superior optimal evacuation plan within an acceptable number of iterations. In addition, the obtained evacuation plan outperforms conventional plans that implement nearest-gate immediate evacuation strategies. | Modeling framework for optimal evacuation of large-scale crowded pedestrian facilities
S0377221714001908 | This paper considers the Red–Blue Transportation Problem (Red–Blue TP), a generalization of the transportation problem where supply nodes are partitioned into two sets and so-called exclusionary constraints are imposed. We encountered a special case of this problem in a hospital context, where patients need to be assigned to rooms. We establish the problem’s complexity, and we compare two integer programming formulations. Furthermore, a maximization variant of Red–Blue TP is presented, for which we propose a constant-factor approximation algorithm. We conclude with a computational study on the performance of the integer programming formulations and the approximation algorithms, by varying the problem size, the partitioning of the supply nodes, and the density of the problem. | The Red–Blue transportation problem |
S0377221714001921 | In the paper a new deterministic continuum-strategy two-player discrete-time dynamic Stackelberg game is proposed with fixed finite time duration and closed-loop information structure. The considered payoff functions can be widely used in different applications (mainly in conflicts over the consumption of a limited resource, where one player, called the leader, is a superior authority choosing its strategy first, and the other player, called the follower, chooses afterwards). In case of convex payoff functions and certain parameter values, we give a new particular backward induction algorithm, which can be easily implemented to find a (leader–follower) equilibrium of the game (in a certain sequential equilibrium realization from the last step towards the first one with respect to the current strategy choices of the players). Considerations on uniqueness and game regulation (i.e. setting parameters of the game to achieve a predefined equilibrium) are also provided. The finite version of the game (with finite strategy sets) is also given along with its simplification and solution method. Several practical examples are shown to illustrate the comprehensive application possibilities of the results. | Backward induction algorithm for a class of closed-loop Stackelberg games
S0377221714002100 | We consider the problem of evaluating and constructing appointment schedules for patients in a health care facility where a single physician treats patients in a consecutive manner, as is common for general practitioners, clinics and for outpatients in hospitals. Specifically, given a fixed-length session during which a physician sees K patients, each patient has to be given an appointment time during this session in advance. Optimising a schedule with respect to patient waiting times, physician idle times, session overtime, etc. usually requires a heuristic search method involving a huge number of repeated schedule evaluations. Hence, our aim is to obtain accurate predictions at very low computational cost. This is achieved by (1) using Lindley’s recursion to allow for explicit expressions and (2) choosing a discrete-time (slotted) setting to make those expressions easy to compute. We assume general, possibly distinct, distributions for the patients’ consultation times, which allows to account for multiple treatment types, emergencies and patient no-shows. The moments of waiting and idle times are obtained and the computational complexity of the algorithm is discussed. Additionally, we calculate the schedule’s performance in between appointments in order to assist a sequential scheduling strategy. | Computationally efficient evaluation of appointment schedules in health care |
S0377221714002112 | Cutting and packing problems have been extensively studied in the literature in recent decades, mainly due to their numerous real-world applications while at the same time exhibiting intrinsic computational complexity. However, a major limitation has been the lack of problem generators that can be widely and commonly used by all researchers in their computational experiments. In this paper, a problem generator for every type of two-dimensional rectangular cutting and packing problems is proposed. The problems are defined according to the recent typology for cutting and packing problems proposed by Wäscher, Haußner, and Schumann (2007) and the relevant problem parameters are identified. The proposed problem generator can significantly contribute to the quality of the computational experiments run with cutting and packing problems and therefore will help improve the quality of the papers published in this field. | 2DCPackGen: A problem generator for two-dimensional rectangular cutting and packing problems |
S0377221714002124 | We present in this paper a new model for robust combinatorial optimization with cost uncertainty that generalizes the classical budgeted uncertainty set. We suppose here that the budget of uncertainty is given by a function of the problem variables, yielding an uncertainty multifunction. The new model is less conservative than the classical model and better approximates Value-at-Risk objective functions, especially for vectors with few non-zero components. An example of a budget function is constructed from the probabilistic bounds computed by Bertsimas and Sim. We provide an asymptotically tight bound for the cost reduction obtained with the new model. We then turn to the tractability of the resulting optimization problems. We show that when the budget function is affine, the resulting optimization problems can be solved by solving n + 1 deterministic problems. We propose combinatorial algorithms to handle problems with more general budget functions. We also adapt existing dynamic programming algorithms to solve the robust counterparts of optimization problems faster, which can be applied both to the traditional budgeted uncertainty model and to our new model. We evaluate numerically the reduction in the price of robustness obtained with the new model on the shortest path problem and on a survivable network design problem. | Robust combinatorial optimization with variable cost uncertainty
S0377221714002136 | In this paper we address the issue of vendor managed inventory (VMI) by considering a two-echelon single vendor/multiple buyer supply chain network. We try to find the optimal sales quantity by maximizing profit, given as a nonlinear and non-convex objective function. For such complicated combinatorial optimization problems, exact algorithms and commercial optimization software such as LINGO are inefficient, especially on practical-size problems. In this paper we develop a hybrid genetic/simulated annealing algorithm to deal with this nonlinear problem. Our results demonstrate that the proposed hybrid algorithm outperforms previous methodologies and achieves more robust solutions. | Hybrid algorithm for a vendor managed inventory system in a two-echelon supply chain
S0377221714002148 | Using a unique data set with 168 ski resorts located in France, this paper investigates the relationship between lift ticket prices and supply-related characteristics of ski resorts. A non-parametric analysis combined with a principal component analysis is used to identify the set of efficient ski resorts, defined as those where the lift ticket price is the cheapest for a given level of quality. Results show that the average inefficiency per lift ticket price is less than 1.5 euros for resorts located in the Pyrenees and the Southern Alps. The average inefficiency is three times higher for ski resorts located in the Northern Alps, which is explained by the presence of large connected ski areas offering many more runs for a small surcharge. | Lift ticket prices and quality in French ski resorts: Insights from a non-parametric analysis
S0377221714002161 | We introduce an optimization-based production planning tool for the biotechnology industry. The industry’s planning problem is unusually challenging because the entire production process is regulated by multiple external agencies – such as the US Food and Drug Administration – representing countries where the biopharmaceutical is to be sold. The model is structured to precisely capture the constraints imposed by current and projected regulatory approvals of processes and facilities, as well as capturing the outcomes of quality testing and processing options, facility capacities and initial status of work-in-process. The result is a supply chain “Planning Engine” that generates capacity-feasible batch processing schedules for each production facility within the biomanufacturing supply chain and an availability schedule for finished product against a known set of demands and regulations. Developing the formulation based on distinct time grids tailored for each facility, planning problems with more than 27,000 boolean variables, more than 130,000 linear variables and more than 80,000 constraints are automatically formulated and solved within a few hours. The Planning Engine’s development and implementation at Bayer Healthcare’s Berkeley, CA manufacturing site is described. | An automated planning engine for biopharmaceutical production |
S0377221714002173 | New city logistics approaches are needed to ensure efficient urban mobility for both people and goods. Usually, these are handled independently in dedicated networks. This paper considers conceptual and mathematical models in which people and parcels are handled in an integrated way by the same taxi network. From a city perspective, this system has a potential to alleviate urban congestion and environmental pollution. From the perspective of a taxi company, new benefits from the parcel delivery service can be obtained. We propose two multi-commodity sharing models. The Share-a-Ride Problem (SARP) is discussed and defined in detail. A reduced problem based on the SARP is proposed: the Freight Insertion Problem (FIP) starts from a given route for handling people requests and inserts parcel requests into this route. We present MILP formulations and perform a numerical study of both static and dynamic scenarios. The obtained numerical results provide valuable insights into successfully implementing a taxi sharing service. | The Share-a-Ride Problem: People and parcels sharing taxis |
S0377221714002185 | Colorectal cancer (CRC) is notoriously hard to combat for its high incidence and mortality rates. However, with improved screening technology and better understanding of disease pathways, CRC is more likely to be detected at early stage and thus more likely to be cured. Among the available screening methods, colonoscopy is most commonly used in the U.S. because of its capability of visualizing the entire colon and removing the polyps it detected. The current national guideline for colonoscopy screening recommends an observation-based screening strategy. Nevertheless, there is scant research studying the cost-effectiveness of the recommended observation-based strategy and its variants. In this paper, we describe a partially observable Markov chain (POMC) model which allows us to assess the cost-effectiveness of both fixed-interval and observation-based colonoscopy screening strategies. In our model, we consider detailed adenomatous polyp states and estimate state transition probabilities based on longitudinal clinical data from a specific population cohort. We conduct a comprehensive numerical study which investigates several key factors in screening strategy design, including screening frequency, initial screening age, screening end age, and screening compliance rate. We also conduct sensitivity analyses on the cost and quality of life parameters. Our numerical result demonstrates the usability of our model in assessing colonoscopy screening strategies with consideration of partial observation of true health states. This research facilitates future design of better colonoscopy screening strategies. | Using a partially observable Markov chain model to assess colonoscopy screening strategies – A cohort study |
S0377221714002197 | In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches. | Integrating stochastic time-dependent travel speed in solution methods for the dynamic dial-a-ride problem |
S0377221714002203 | We consider a real problem faced by a large company providing repair services of office machines in Santiago, Chile. In a typical day about twenty technicians visit seventy customers in a predefined service area in Santiago. We design optimal routes for technicians by considering travel times, soft time windows for technician arrival times at client locations, and fixed repair times. A branch-and-price algorithm was developed, using a constraint branching strategy proposed by Ryan and Foster along with constraint programming in the column generation phase. The column generation takes advantage of the fact that each technician can satisfy no more than five to six service requests per day. Different instances of the problem were solved to optimality in a reasonable computational time, and the results obtained compare favorably with the current practice. | Branch-and-price and constraint programming for solving a real-life technician dispatching problem |
S0377221714002215 | We investigate a newsvendor-type sourcing problem in which a retailer facing demand uncertainty has the option to source from multiple suppliers. The suppliers’ manufacturing costs are private information. A widely used mechanism to find the least costly supplier under asymmetric information is a sealed-bid reverse auction. We compare the combinations of different simple auction formats (first- and second-price) and risk sharing supply contracts (push and pull) under full contract compliance, both for risk-neutral and risk-averse retailer and suppliers. We show the superiority of a first-price push auction for a risk-neutral retailer. However, only the pull contracts lead to supply chain coordination. If the retailer is sufficiently risk-averse, the pull is preferred over the push contract. If suppliers are risk-averse, the first-price push auction remains the choice for the retailer. Numerical examples illustrate the allocation of benefits between the retailer and the (winning) supplier for different numbers of bidders, demand uncertainty, cost uncertainty, and degrees of risk-aversion. | First- and second-price sealed-bid auctions applied to push and pull supply contracts
S0377221714002227 | Extended Producer Responsibility (EPR) initiatives may require a manufacturer to be responsible in the future for taking back the products it produces today. A ramification of EPR is that take back costs may influence firms’ decisions regarding product durability. In the absence of EPR, prior literature has shown that a firm may intentionally lower durability, yielding planned obsolescence. We use a two period model to examine the impact of take back costs on a manufacturer’s product durability and pricing decisions, under both selling and leasing scenarios. We show that compared to selling, leasing provides a greater incentive to raise durability, thus extending a classic insight to a setting with product take backs. Interestingly, we also show that it is possible for the optimal product durability to decrease if the stipulated take back fraction increases. In such situations, were the take back fraction tied to durability rather than a fixed fraction, we demonstrate durability can increase. We explore the impact of take backs on profits and surplus by alternatively considering products for which take back costs are either increasing or decreasing functions of durability. When increasing durability implies higher take back costs, our results demonstrate that leasing can increase durability, profits, and surplus significantly compared to selling. In contrast, when increasing durability implies a lower take back cost, there is a built-in incentive for the firm to increase durability, which can make selling more efficient (i.e., surplus enhancing) than leasing. | Take back costs and product durability |
S0377221714002239 | In a supplier-retailer-buyer supply chain, the supplier frequently offers the retailer a trade credit of S periods, and the retailer in turn provides a trade credit of R periods to her/his buyer to stimulate sales and reduce inventory. From the seller’s perspective, granting trade credit increases sales and revenue but also increases opportunity cost (i.e., the capital opportunity loss during credit period) and default risk (i.e., the percentage that the buyer will not be able to pay off her/his debt obligations). Hence, how to determine credit period is increasingly recognized as an important strategy to increase seller’s profitability. Also, many products such as fruits, vegetables, high-tech products, pharmaceuticals, and volatile liquids not only deteriorate continuously due to evaporation, obsolescence and spoilage but also have their expiration dates. However, only a few researchers take the expiration date of a deteriorating item into consideration. This paper proposes an economic order quantity model for the retailer where: (a) the supplier provides an up-stream trade credit and the retailer also offers a down-stream trade credit, (b) the retailer’s down-stream trade credit to the buyer not only increases sales and revenue but also opportunity cost and default risk, and (c) deteriorating items not only deteriorate continuously but also have their expiration dates. We then show that the retailer’s optimal credit period and cycle time not only exist but also are unique. Furthermore, we discuss several special cases including for non-deteriorating items. Finally, we run some numerical examples to illustrate the problem and provide managerial insights. | Optimal credit period and lot size for deteriorating items with expiration dates under two-level trade credit financing |
S0377221714002240 | Self-explicated approaches are popular preference measurement approaches for products with many attributes. This article classifies previous self-explicated approaches according to their evaluation types, i.e. trade-off- versus non-trade-off-based, and outlines their advantages and disadvantages. In addition, it proposes a new method, the presorted adaptive self-explicated approach that is based on Netzer and Srinivasan’s (2011) adaptive self-explicated approach and that combines trade-off- and non-trade-off-based evaluation types. Two empirical studies compare this new method with the most popular existing self-explicated approaches, including the adaptive self-explicated approach and paired comparison preference measurement. The new method overcomes the insufficient discrimination between importance weights, as usually found in non-trade-off-based evaluation types; discourages respondents’ simplification strategies, as are frequently encountered in trade-off evaluation types; is easy to implement; and yields high predictive validity compared with other popular self-explicated approaches. | Measurement of preferences with self-explicated approaches: A classification and merge of trade-off- and non-trade-off-based evaluation types |
S0377221714002252 | Problems of loading, unloading and premarshalling of stacks as well as combinations thereof appear in several practical applications, e.g. container terminals, container ship stowage planning, tram depots or steel industry. Although these problems seem to be different at first sight, they hold plenty of similarities. To precisely unite all aspects, we suggest a classification scheme and show how problems existing in literature can be described with it. Furthermore, we give an overview of known complexity results and solution approaches. | Loading, unloading and premarshalling of stacks in storage areas: Survey and classification |
S0377221714002264 | Cross-docking is a distribution strategy that enables the consolidation of less-than-truckload shipments into full truckloads without long-term storage. Due to the absence of a storage buffer inside a cross-dock, local and network-wide cross-docking operations need to be carefully synchronized. This paper proposes a framework specifying the interdependencies between different cross-docking problem aspects with the aim to support future research in developing decision models with practical and scientific relevance. The paper also presents a new general classification scheme for cross-docking research based on the inputs and outputs for each problem aspect. After classifying the existing cross-docking research, we conclude that the overwhelming majority of papers fail to consider the synchronization of local and network-wide cross-docking operations. Lastly, to highlight the importance of synchronization in cross-docking networks, two real-life illustrative problems are described that are not yet addressed in the literature. | Synchronization in cross-docking networks: A research classification and framework |
S0377221714002276 | In real-world applications of optimization, optimal solutions are often of limited value, because disturbances of or changes to input data may diminish the quality of an optimal solution or even render it infeasible. One way to deal with uncertain input data is robust optimization, the aim of which is to find solutions which remain feasible and of good quality for all possible scenarios, i.e., realizations of the uncertain data. For single objective optimization, several definitions of robustness have been thoroughly analyzed and robust optimization methods have been developed. In this paper, we extend the concept of minmax robustness (Ben-Tal, Ghaoui, & Nemirovski, 2009) to multi-objective optimization and call this extension robust efficiency for uncertain multi-objective optimization problems. We use ingredients from robust (single objective) and (deterministic) multi-objective optimization to gain insight into the new area of robust multi-objective optimization. We analyze the new concept and discuss how robust solutions of multi-objective optimization problems may be computed. To this end, we use techniques from both robust (single objective) and (deterministic) multi-objective optimization. The new concepts are illustrated with some linear and quadratic programming instances. | Minmax robustness for multi-objective optimization problems |
S0377221714002288 | In this paper we simultaneously consider three extensions to the standard Orienteering Problem (OP) to model characteristics that are of practical relevance in planning reconnaissance missions of Unmanned Aerial Vehicles (UAVs). First, travel and recording times are uncertain. Secondly, the information about each target can only be obtained within a predefined time window. Due to the travel and recording time uncertainty, it is also uncertain whether a target can be reached before the end of its time window. Finally, we consider the appearance of new targets during the flight, so-called time-sensitive targets, which need to be visited immediately if possible. We tackle this online stochastic UAV mission planning problem with time windows and time-sensitive targets using a re-planning approach. To this end, we introduce the Maximum Coverage Stochastic Orienteering Problem with Time Windows (MCS-OPTW). It aims at constructing a tour with maximum expected profit of targets that were already known before the flight. Secondly, it directs the planned tour to predefined areas where time-sensitive targets are expected to appear. We have developed a fast heuristic that can be used to re-plan the tour, each time before leaving a target. In our computational experiments we illustrate the benefits of the MCS-OPTW planning approach with respect to balancing the two objectives: the expected profits of foreseen targets, and expected percentage of time-sensitive targets reached on time. We compare it to a deterministic planning approach and show how it deals with uncertainty in travel and recording times and the appearance of time-sensitive targets. | Online stochastic UAV mission planning with time windows and time-sensitive targets |
S0377221714002306 | Copulas offer a useful tool in modelling the dependence among random variables. In the literature, most of the existing copulas are symmetric while data collected from the real world may exhibit asymmetric nature. This necessitates developing asymmetric copulas that can model such data. In the meantime, existing methods of modelling two-dimensional reliability data are not able to capture the tail dependence that exists between the pair of age and usage, which are the two dimensions designated to describe product life. This paper proposes a new method of constructing asymmetric copulas, discusses the properties of the new copulas, and applies the method to fit two-dimensional reliability data that are collected from the real world. | Construction of asymmetric copulas and its application in two-dimensional reliability modelling |
S0377221714002318 | We consider a make-to-stock system served by an unreliable machine that produces one type of product, which is sold to customers at one of two possible prices depending on the inventory level at the time when a customer arrives (i.e., the decision point). The system manager must determine the production level and selling price at each decision point. We first show that the optimal production and pricing policy is a threshold control, which is characterized by three threshold parameters under both the long-run discounted profit and long-run average profit criteria. We then establish the structural relationships among the three threshold parameters that production is off when inventory is above the threshold, and that the optimal selling price should be low when inventory is above the threshold under the scenario where the machine is down or up. Finally we provide some numerical examples to illustrate the analytical results and gain additional insights. | Production planning and pricing policy in a make-to-stock system with uncertain demand subject to machine breakdowns |
S0377221714002331 | In this paper we develop the partial adjustment valuation approach, in which the speeds of (partial) adjustment are assumed to be dynamic and variable rather than fixed or constant, for assessing the value of information technology (IT). The speeds of adjustment are a function of a set of macroeconomic and/or microeconomic variables, observed and unobserved, and hence become time-varying or dynamic and variable over time. The approach is illustrated by a practical application. The results imply that constant speeds of adjustment may overestimate or underestimate the actual speeds of adjustment and, accordingly, may miscalculate the values of performance metrics. Thus, the partial adjustment valuation approach with dynamic and variable speeds of adjustment is more realistic and, more importantly, captures the changing patterns and trends of the adjustment speeds and the performance measures as well. In contrast, the partial adjustment valuation approach with constant speeds of adjustment fails to adequately explain the dynamic production process of a decision making unit. The empirical evidence also conflicts with the lopsided view that the productivity paradox does not exist in developed countries. | The partial adjustment valuation approach with dynamic and variable speeds of adjustment to evaluating and measuring the business value of information technology
S0377221714002495 | We present a new piecewise linear approximation of non-linear optimization problems. It can be seen as a variant of classical triangulations that leaves more degrees of freedom to define any point as a convex combination of the samples. For example, in the traditional Union Jack approach a (two-dimensional) variable domain is split by a rectangular grid, and one has to select the diagonals that induce the triangles used for the approximation. For a hyper-rectangular domain U ⊂ R^L, partitioned into hyper-rectangular subdomains through a grid defined by n_l points on the l-axis (l = 1, …, L), the number of potential simplexes is L!·∏_{l=1}^{L}(n_l − 1), and an MILP model incorporating it without complicated encoding strategies must have the same number of additional binary variables. In the proposed approach the choice of the simplexes is optimistically guided by one of two approximating objective functions (one convex, one concave), and the number of additional binary variables needed by a straightforward implementation drops to only ∑_{l=1}^{L}(n_l − 1). The method generalizes to the splitting of U into L-dimensional bounded polytopes in R^L in which samples can be taken not only at the vertices of the polytopes but also inside them, thus allowing, for example, off-grid oversampling of interesting regions. When addressing polytopes that are regularly spaced hyper-rectangles, the method allows modeling of the domain partition with a logarithmic number of constraints and binary variables. The simultaneous use of both convex and concave piecewise linear approximations is reminiscent of global optimization techniques, which are, on the one side, stronger because they lead to convex relaxations and not only approximations of the problem at hand, but, on the other hand, significantly more arduous from a computational standpoint. We show theoretical properties of the approximating functions, and provide computational evidence of the impact of their use within MILP models approximating non-linear problems. | Optimistic MILP modeling of non-linear optimization problems
S0377221714002501 | Deriving accurate interval weights from interval fuzzy preference relations is key to successfully solving decision making problems. Xu and Chen (2008) proposed a number of linear programming models to derive interval weights, but the definitions for the additive consistent interval fuzzy preference relation and the linear programming model still need to be improved. In this paper, a numerical example is given to show how these definitions and models can be improved to increase accuracy. A new additive consistency definition for interval fuzzy preference relations is proposed and novel linear programming models are established to demonstrate the generation of interval weights from an interval fuzzy preference relation. | Note on “Some models for deriving the priority weights from interval fuzzy preference relations” |
S0377221714002513 | This paper studies vertical integration in serial supply chains with a wholesale price contract. We consider a business environment where the contracting leader may be endogenously changed before and after forming the integration. A cooperative game is formulated to normatively analyze the stable and fair profit allocations under the grand coalition in such an environment. Our main result demonstrates that vertical integration is stable when all members are pessimistic in the sense that they are sure that they will not become the contracting leader if they deviate from the grand coalition. We find that in this case, the grand coalition’s profit must be allocated more to the retailer and the members with higher costs. Nevertheless, we also show the conditions under which the upstream manufacturer can have strong power as in traditional supply chains. | Vertical integration with endogenous contract leadership: Stability and fair profit allocation |
S0377221714002525 | A multiobjective binary integer programming model for R&D project portfolio selection with competing objectives is developed when problem coefficients in both objective functions and constraints are uncertain. Robust optimization is used in dealing with uncertainty while an interactive procedure is used in making tradeoffs among the multiple objectives. Robust nondominated solutions are generated by solving the linearized counterpart of the robust augmented weighted Tchebycheff programs. A decision maker’s most preferred solution is identified in the interactive robust weighted Tchebycheff procedure by progressively eliciting and incorporating the decision maker’s preference information into the solution process. An example is presented to illustrate the solution approach and performance. The developed approach can also be applied to general multiobjective mixed integer programming problems. | Robust optimization for interactive multiobjective programming with imprecise information applied to R&D project portfolio selection |
S0377221714002537 | Carbon emissions are an increasingly important consideration in sustainable environmental development. In the green building industry, green construction cost controls and low-carbon construction methods are considered to be the key barriers encountered. Based on Corporate Social Responsibility (CSR) policy, management of carbon emissions from green building projects contributes to the acquisition of accurate building cost information and to a reduction in the environmental impact of these projects. This study focuses on the CO2 emission costs and low-carbon construction methods, and proposes a 0–1 mixed integer programming (0–1 MIP) decision model for integrated green building projects, using an Activity-Based Cost (ABC) and life cycle assessment (LCA) approach. The major contributions of this study are as follows: (1) the integrated model can help construction company managers to more accurately understand how to allocate resources and funding for energy saving activities to each green building through appropriate cost drivers; (2) this model provides a pre-construction decision-making tool which will assist management in bidding on environmentally-friendly construction projects; and (3) this study contributes to the operations research (OR) literature on innovation, especially in regard to incorporating life cycle assessment measurement into construction cost management by utilizing a mixed decision model for green building projects. | An Activity-Based Costing decision model for life cycle assessment in green building projects
S0377221714002549 | Lease expiration management (LEM) in the apartment industry aims to control the number of lease expirations and thus achieve maximal revenue growth. We examine rental rate strategies in the context of LEM for apartment buildings that offer a single lease term and face demand uncertainty. We show that the building may incur a significant revenue loss if it fails to account for LEM in the determination of the rental rate. We also show that the use of LEM is a compromise approach between a limited optimization, where no future demand information is available, and a global optimization, where complete future demand information is available. We show that the use of LEM can enhance the apartment building’s revenue by as much as 8% when the desired number of expirations and associated costs are appropriately estimated. Numerical examples are included to illustrate the major results derived from our models and the impact on the apartment’s revenue of sensitivity to the desired number of expirations and associated costs. | Lease expiration management for a single lease term in the apartment industry |
S0377221714002550 | We consider parallel machine scheduling problems where the processing of the jobs on the machines involves two types of objectives. The first type is one of two classical objective functions in scheduling theory: either the total completion time or the makespan. The second type involves an actual cost associated with the processing of a specific job on a given machine; each job-machine combination may have a different cost. Two bi-criteria scheduling problems are considered: (1) minimize the maximum machine cost subject to the total completion time being at its minimum, and (2) minimize the total machine cost subject to the makespan being at its minimum. Since both problems are strongly NP-hard, we propose fast heuristics and establish their worst-case performance bounds. (Notation: ā† denotes the case a_1 = ⋯ = a_{m−1} = 1, a_m > 1; ρ‡ denotes an approximation ratio for the parallel machine scheduling problem to minimize the makespan.) | Fast approximation algorithms for bi-criteria scheduling with machine assignment costs
S0377221714002562 | We discuss cutting stock problems (CSPs) from the perspective of the paper industry and the financial impact they make. Exact solution approaches and heuristics have been used for decades to support cutting stock decisions in that industry. We have developed polylithic solution techniques integrated in our ERP system to solve a variety of cutting stock problems occurring in real world problems. Among them is the simultaneous minimization of the number of rolls and the number of patterns while not allowing any overproduction. For two cases, CSPs minimizing underproduction and CSPs with master rolls of different widths and availability, we have developed new column generation approaches. The methods are numerically tested using real world data instances. An assembly of currently solved and unsolved standard and non-standard CSPs at the forefront of research is put in perspective. | Solving real-world cutting stock problems in the paper industry: Mathematical approaches, experience and challenges
S0377221714002574 | During a mass casualty incident (MCI), to which one of several area hospitals should each victim be sent? These decisions depend on resource availability (both transport and care) and the survival probabilities of patients. This paper focuses on the critical time period immediately following the onset of an MCI and is concerned with how to effectively evacuate victims to the different area hospitals in order to provide the greatest good to the greatest number of patients while not overwhelming any single hospital. This resource-constrained triage problem is formulated as a mixed-integer program, which we call the Severity-Adjusted Victim Evacuation (SAVE) model. It is compared with a model in the extant literature and also against several current policies commonly used by the so-called incident commander. The experiments indicate that the SAVE model provides a marked improvement over the commonly used ad-hoc policies and an existing model. Two possible implementation strategies are discussed along with managerial conclusions. | Mass-casualty triage: Distribution of victims to multiple hospitals using the SAVE model |
S0377221714002586 | This paper proposes a new nonlinear interval programming method that can be used to handle uncertain optimization problems when there are dependencies among the interval variables. The uncertain domain is modeled using a multidimensional parallelepiped interval model. The model depicts single-variable uncertainty using a marginal interval and depicts the degree of dependencies among the interval variables using correlation angles and correlation coefficients. Based on the order relation of interval and the possibility degree of interval, the uncertain optimization problem is converted to a deterministic two-layer nesting optimization problem. The affine coordinate is then introduced to convert the uncertain domain of a multidimensional parallelepiped interval model to a standard interval uncertain domain. A highly efficient iterative algorithm is formulated to generate an efficient solution for the multi-layer nesting optimization problem after the conversion. Three computational examples are given to verify the effectiveness of the proposed method. | A new nonlinear interval programming method for uncertain problems with dependent interval variables |
S0377221714002690 | We investigate the value of accounting for demand seasonality in inventory control. Our problem is motivated by discussions with retailers who admitted to not taking perceived seasonality patterns into account in their replenishment systems. We consider a single-location, single-item periodic review lost sales inventory problem with seasonal demand in a retail environment. Customer demand has seasonality with a known season length, the lead time is shorter than the review period and orders are placed as multiples of a fixed batch size. The cost structure comprises a fixed cost per order, a cost per batch, and a unit variable cost to model retail handling costs. We consider four different settings which differ in the degree of demand seasonality that is incorporated in the model: with or without within-review-period variations and with or without across-review-period variations. In each case, we calculate the policy which minimizes the long-run average cost and compute the optimality gaps of the policies which ignore part or all of the demand seasonality. We find that not accounting for demand seasonality can lead to substantial optimality gaps, yet incorporating only some form of demand seasonality does not always lead to cost savings. We apply the problem to a real life setting, using Point-of-Sale data from a European retailer. We show that a simple distinction between weekday and weekend sales can lead to major cost reductions without greatly increasing the complexity of the retailer’s automatic store ordering system. Our analysis provides valuable insights on the tradeoff between the complexity of the automatic store ordering system and the benefits of incorporating demand seasonality. | Demand seasonality in retail inventory management
S0377221714002707 | Most previous optimization models on technology adoption assume perfect foresight over the long term. In reality, decision-makers do not have perfect foresight, and the endogenous driving force of technology adoption is uncertain. With a stylized optimization model, this paper explores the adoption of a new technology, its associated cost dynamics, and technological bifurcations with limited foresight and uncertain technological learning. The study shows that when modeling with limited foresight and technological learning, (1) the longer the decision period, the earlier the adoption of a new technology, and the value of foresight can be amplified by a high learning rate. However, when the decision period is beyond a certain length, further extending its length has little influence on adopting the new technology; (2) with limited foresight, decisions aiming at minimizing the total cost of each decision period will commonly result in a non-optimal solution from the perspective of the entire decision horizon; and (3) the range of technological bifurcation is much larger than that with perfect foresight, but uncertainty in technological learning tends to reduce the range by removing the early adoption paths of a new technology. | Technology adoption with limited foresight and uncertain technological learning
S0377221714002719 | A Pairwise Comparison Matrix (PCM) has been used to compute the relative priorities of elements and is an integral component in widely applied decision making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, PCMs suffer from several issues limiting their application to large-scale decision problems. These limitations can be attributed to the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker. This issue results in inconsistent preferences due to the limited cognitive powers of decision makers. To address these limitations, this research proposes a PCM decomposition methodology that reduces the number of elicited pairwise comparisons. A binary integer program is proposed to intelligently decompose a PCM into several smaller subsets using interdependence scores among elements. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets to derive the global weights of the elements from the original PCM. As a result, the number of pairwise comparisons is reduced and the consistency of the comparisons is improved. The proposed decomposition methodology is applied to both AHP and ANP to demonstrate its advantages. | An intelligent decomposition of pairwise comparison matrices for large-scale decisions
S0377221714002720 | Cross-training of nursing staff has been used in hospitals to reduce labor cost, provide scheduling flexibility, and meet patient demand effectively. However, cross-trained nurses may not be as productive as regular nurses in carrying out their tasks because of a new work environment and unfamiliar protocols in the new unit. This leads to the research question: what is the impact of productivity on optimal staffing decisions (both regular and cross-trained) in two-unit and multi-unit systems? We investigate the effect of mean demand, cross-training cost, contract nurse cost, and productivity on a two-unit, full-flexibility configuration and on three-unit, partial flexibility and chaining (minimal complete chain) configurations under centralized and decentralized decision making. Under centralized decision making, the optimal staffing and cross-training levels are determined simultaneously, while under decentralized decision making, the optimal staffing levels are determined without any knowledge of future cross-training programs. We use two-stage stochastic programming to derive closed form equations and determine the optimal number of cross-trained nurses for two units facing stochastic demand following general, continuous distributions. We find that there exists a productivity level (threshold) beyond which the optimal number of cross-trained nurses declines, as fewer cross-trained nurses are sufficient to obtain the benefit of staffing flexibility. When we account for productivity variations, the chaining configuration provides on average 1.20% cost savings over the partial flexibility configuration, while centralized decision making averages 1.13% cost savings over decentralized decision making. | Impact of productivity on cross-training configurations and optimal staffing decisions in hospitals
S0377221714002732 | The awareness of importance of product recovery has grown swiftly in the past few decades. This paper focuses on a problem of inventory control and production planning optimisation of a generic type of an integrated Reverse Logistics (RL) network which consists of a traditional forward production route, two alternative recovery routes, including repair and remanufacturing and a disposal route. It is assumed that demand and return quantities are uncertain. A quality level is assigned to each of the returned products. Due to uncertainty in the return quantity, quantity of returned products of a certain quality level is uncertain too. The uncertainties are modelled using fuzzy trapezoidal numbers. Quality thresholds are used to segregate the returned products into repair, remanufacturing or disposal routes. A two phase fuzzy mixed integer optimisation algorithm is developed to provide a solution to the inventory control and production planning problem. In Phase 1, uncertainties in quantity of product returns and quality of returns are considered to calculate the quantities to be sent to different recovery routes. These outputs are inputs into Phase 2 which generates decisions on component procurement, production, repair and disassembly. Finally, numerical experiments and sensitivity analysis are carried out to better understand the effects of quality of returns and RL network parameters on the network performance. These parameters include quantity of returned products, unit repair costs, unit production cost, setup costs and unit disposal cost. | Optimisation of integrated reverse logistics networks with different product recovery routes |
S0377221714002744 | As evidenced by the many over-runs reported both historically and today, managing projects can be a risky business. Managers are faced with the need to work effectively with a multitude of parties and deal with a wealth of interlocking uncertainties. This paper describes a modelling process developed to assist managers facing such situations. The process helps managers develop a comprehensive appreciation of risks and gain an understanding of the impact of the interactions between these risks, by explicitly engaging a wide stakeholder base using a group support system and a causal mapping process. Using a real case, the paper describes the modelling process and its outcomes along with their implications, before reflecting on the insights, limitations and future research. | Systemic risk elicitation: Using causal maps to engage stakeholders and build a comprehensive view of risks
S0377221714002756 | We introduce a new distance measure between two preorders that captures indifference, strict preference, weak preference and incomparability relations. This measure is the first to capture weak preference relations. We illustrate how this distance measure affords decision makers greater modeling power to capture their preferences, or uncertainty and ambiguity around them, by using our proposed distance measure in a multiple criteria aggregation procedure for mixed evaluations. | A new distance measure including the weak preference relation: Application to the multiple criteria aggregation procedure for mixed evaluations |
S0377221714002768 | In this paper we consider the convergence of a sequence {M_n} of models of discounted continuous-time constrained Markov decision processes (MDPs) to the “limit” one, denoted by M_∞. For models with denumerable states and unbounded transition rates, under reasonably mild conditions we prove that the (constrained) optimal policies and the optimal values of {M_n} converge to those of M_∞, respectively, using a technique of occupation measures. As an application of the convergence result developed here, we show that an optimal policy and the optimal value for a countable-state continuous-time MDP can be approximated by those of finite-state continuous-time MDPs. Finally, we further illustrate such finite-state approximation by numerically solving a controlled birth-and-death system, and we give the corresponding error bound of the approximation. | Convergence of controlled models and finite-state approximation for discounted continuous-time Markov decision processes with constraints
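As a rough illustration of the finite-state approximation idea mentioned in this abstract (not the authors' algorithm), the sketch below truncates a controlled birth-and-death process at a finite state, uniformises it, and runs discrete-time value iteration for the discounted cost; all rates, costs and the truncation level are hypothetical.

```python
# Minimal sketch: finite-state truncation + uniformisation + value iteration
# for a discounted controlled birth-and-death (queueing) system. Illustrative only.
import numpy as np

N = 50                      # truncation level of the state space {0, ..., N}
alpha = 0.1                 # continuous-time discount rate
arrival = 1.0               # birth (arrival) rate
actions = [0.5, 1.5, 3.0]   # admissible service (death) rates
service_cost = {0.5: 0.2, 1.5: 1.0, 3.0: 2.5}
holding_cost = 1.0          # cost per unit time per customer in the system

Lam = arrival + max(actions)            # uniformisation constant
beta = Lam / (alpha + Lam)              # equivalent discrete-time discount factor

V = np.zeros(N + 1)
for _ in range(2000):                   # value iteration on the uniformised chain
    V_new = np.empty_like(V)
    for i in range(N + 1):
        best = np.inf
        for mu in actions:
            cost = (holding_cost * i + service_cost[mu]) / (alpha + Lam)
            up = arrival if i < N else 0.0          # birth (blocked at N)
            down = mu if i > 0 else 0.0             # death
            stay = Lam - up - down                  # fictitious self-transition
            ev = (up * V[min(i + 1, N)] + down * V[max(i - 1, 0)] + stay * V[i]) / Lam
            best = min(best, cost + beta * ev)
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("approximate optimal discounted cost starting from the empty state:", round(V[0], 4))
```

Increasing the truncation level N tightens the approximation, which is the spirit of the error bound the abstract refers to.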
S0377221714002781 | In this paper, we consider coordinated production and interstage batch delivery scheduling problems, where a third-party logistics provider (3PP) delivers semi-finished products in batches from one production location to another production location belonging to the same manufacturer. A batch cannot be delivered until all jobs of the batch are completed at the upstream stage. The 3PP is required to deliver each product within a time T from its release at the upstream stage. We consider two transportation modes: regular transportation, for which delivery departure times are fixed at the beginning, and express transportation, for which delivery departure times are flexible. We analyze the problems faced by the 3PP when either the manufacturer dominates or the 3PP dominates. In this context, we investigate the complexity of several problems, providing polynomiality and NP-completeness results. | Coordination of production and interstage batch delivery with outsourced distribution |
S0377221714002793 | This paper examines location assignment for outbound containers in container terminals. It extends the previous modeling work of Kim et al. (2000) and Zhang et al. (2010). The previous model took an “optimistic” handling approach and imposed only a moderate penalty for placing a lighter container on top of a stack already loaded with heavier containers. Because the original model neglected the stack height and the magnitude of state changes when interpreting the penalty parameter, and hid too much information about the specific configurations behind a given stack representation, we propose two new “conservative” allocation models in this paper. One considers the stack height and the magnitude of state changes by reinterpreting the penalty parameter, and the other further considers the specific configurations for a given stack representation. Solution qualities of the “optimistic” and the two “conservative” allocation models are compared on two performance indicators. The numerical experiments indicate that both the first and second “conservative” allocation models outperform the original model in terms of the two performance indicators. In addition, to overcome the computational difficulties encountered by the dynamic programming algorithm for large-scale problems, an approximate dynamic programming algorithm is presented as well. | Conservative allocation models for outbound containers in container terminals
S0377221714002811 | The travelling salesman problem (TSP) is one of the most prominent NP-hard combinatorial optimisation problems. After over fifty years of intense study, the TSP continues to be of broad theoretical and practical interest. Using a novel approach to empirical scaling analysis, which in principle is applicable to solvers for many other problems, we demonstrate that some of the most widely studied types of TSP instances tend to be much easier than expected from previous theoretical and empirical results. In particular, we show that the empirical median run-time required for finding optimal solutions to so-called random uniform Euclidean (RUE) instances – one of the most widely studied classes of TSP instances – scales substantially better than Θ(2^n) with the number n of cities to be visited. The Concorde solver, for which we achieved this result, is the best-performing exact TSP solver we are aware of, and has been applied to a broad range of real-world problems. Furthermore, we show that even when applied to a broad range of instances from the prominent TSPLIB benchmark collection for the TSP, Concorde exhibits run-times that are surprisingly consistent with our empirical model of Concorde’s scaling behaviour on RUE instances. This result suggests that the behaviour observed for the simple random structure underlying RUE is very similar to that obtained on the structured instances arising in various applications. | On the empirical scaling of run-time for finding optimal solutions to the travelling salesman problem
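To illustrate the flavour of empirical scaling analysis described above (this is not the authors' methodology or data), the sketch below fits exponential and root-exponential models to hypothetical median run-times in log space and compares the fit quality.

```python
# Minimal sketch: compare exponential vs. root-exponential scaling models
# fitted to (made-up) median run-times over instance size n.
import numpy as np

n = np.array([500, 1000, 1500, 2000, 2500, 3000], dtype=float)
t = np.array([2.1, 9.8, 31.0, 80.0, 190.0, 420.0])   # hypothetical median CPU seconds

log_t = np.log(t)

# exponential model  t ~ a * b**n          <=>  log t = log a + n * log b
slope_e, intercept_e = np.polyfit(n, log_t, 1)
rmse_e = np.sqrt(np.mean((intercept_e + slope_e * n - log_t) ** 2))

# root-exponential model  t ~ a * b**sqrt(n)  <=>  log t = log a + sqrt(n) * log b
slope_r, intercept_r = np.polyfit(np.sqrt(n), log_t, 1)
rmse_r = np.sqrt(np.mean((intercept_r + slope_r * np.sqrt(n) - log_t) ** 2))

print(f"exponential     : b = {np.exp(slope_e):.5f}, log-space RMSE = {rmse_e:.3f}")
print(f"root-exponential: b = {np.exp(slope_r):.4f}, log-space RMSE = {rmse_r:.3f}")
```

The lower log-space RMSE indicates which functional form better matches the observed scaling; bootstrap resampling of the run-times would give confidence bands around the fitted parameters.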
S0377221714002823 | A scheduling strategy to determine the starting times of surgeries in multiple operating rooms (ORs) is presented. The constraints are the resource limit of a downstream facility, the post-anesthesia care unit (PACU), and service time uncertainties. Given the set of surgeries to be performed on a day, the problem is formulated as a flexible job shop model with fuzzy sets. Patient waiting times in the process flow, clinical resource idling, and total completion times are considered for evaluation. This multi-objective problem is solved by a two-stage decision process. A genetic algorithm determines the relative order of surgeries in the first stage, and definite starting times for all surgical cases are obtained by a decision heuristic in the second stage. The resulting schedule is evaluated by a Monte-Carlo simulation. Its performance is shown to be better than our previous approach, a simulation-based scheduling method that already outperforms simple scheduling rules in regional hospitals. Additionally, the ratio of PACU to OR is examined using the proposed scheduling strategy. | Reducing patient-flow delays in surgical suites through determining start-times of surgical cases
S0377221714002835 | We give an axiomatization of the Aumann–Shapley cost-sharing method in the discrete case by means of monotonicity and no merging or splitting (Sprumont, 2005). Monotonicity has not yet been employed to characterize this method in such a case, by contrast with the case in which goods are perfectly divisible, for which Monderer and Neyman (1988) and Young (1985b) characterize the Aumann–Shapley price mechanism. | Monotonicity and the Aumann–Shapley cost-sharing method in the discrete case |
S0377221714002847 | The analytic hierarchy process (AHP) is a widely-used method for multicriteria decision support based on the hierarchical decomposition of objectives, evaluation of preferences through pairwise comparisons, and a subsequent aggregation into global evaluations. The current paper integrates the AHP with stochastic multicriteria acceptability analysis (SMAA), an inverse-preference method, to allow the pairwise comparisons to be uncertain. A simulation experiment is used to assess how the consistency of judgements and the ability of the SMAA-AHP model to discern the best alternative deteriorate as uncertainty increases. Across a range of simulated problems, the results indicate that, according to conventional benchmarks, judgements are likely to remain consistent unless uncertainty is severe, but that the presence of uncertainty in almost any degree is sufficient to make the choice of best alternative unclear. | The analytic hierarchy process with stochastic judgements
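As a hedged illustration of how AHP judgements can be treated as uncertain in a SMAA-style simulation (a simplified stand-in for the paper's procedure, not its exact model), the sketch below perturbs each pairwise comparison multiplicatively, derives weights with the geometric-mean method, and records how often each alternative ranks first; the matrix and the uncertainty factor are assumptions.

```python
# Minimal sketch: Monte-Carlo perturbation of AHP pairwise comparisons and
# rank-1 acceptability indices in the spirit of SMAA. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1,   2,   4],      # hypothetical PCM over three alternatives
              [1/2, 1,   3],
              [1/4, 1/3, 1]], dtype=float)
uncertainty = 1.5                 # each judgement may be off by up to this factor
n, runs = A.shape[0], 10_000
first_rank = np.zeros(n)

for _ in range(runs):
    B = np.ones_like(A)
    for i in range(n):
        for j in range(i + 1, n):
            noise = uncertainty ** rng.uniform(-1.0, 1.0)   # multiplicative noise
            B[i, j] = A[i, j] * noise
            B[j, i] = 1.0 / B[i, j]                         # preserve reciprocity
    w = np.prod(B, axis=1) ** (1.0 / n)                     # geometric-mean weights
    w /= w.sum()
    first_rank[np.argmax(w)] += 1

print("rank-1 acceptability:", np.round(first_rank / runs, 3))
```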