FileName | Abstract | Title |
---|---|---|
S0377221714008601 | This paper presents a review and classification of the literature on workforce planning problems that incorporate skills. In many cases, technical research on workforce planning focuses heavily on the mathematical model and neglects the real-life implications of the simplifications needed for the model to perform well. On the other hand, many managerial studies give an extensive description of the human implications of certain management decisions in particular cases, but fail to provide useful mathematical models for solving workforce planning problems. This review guides the operations researcher in the search for useful papers and information on workforce planning problems incorporating skills. We not only discuss the differences and similarities between papers, but also give an overview of the managerial insights. The objective is to present a combination of technical and managerial knowledge to encourage the production of more realistic and useful solution techniques. | Workforce planning incorporating skills: State of the art |
S0377221714008613 | We view the administrative activity of issuing parking tickets in a dense city street setting, such as downtown Philadelphia or New York City, as a revenue collection activity. The task of designing parking permit inspection routes is modeled as a revenue-collecting Chinese Postman Problem. After demonstrating that our design of inspection routes maximizes the expected revenue, we investigate decision rules that allow officers to adjust their inspection routes online in response to the observed parking permit times. A simple simulation study tests the sensitivity of expected revenues with respect to the problem's parameters and underscores the main conclusion: allowing an officer to selectively wait by parked cars for the expiration of their permits increases the expected revenues by between 10% and 69%. | City streets parking enforcement inspection decisions: The Chinese postman’s perspective |
S0377221714008625 | This paper studies planning problems for a group of heating systems that supply the hot-water demand for domestic use in houses. These systems (e.g. gas or electric boilers, heat pumps or microCHPs) use an external energy source to heat water and store the hot water for supplying domestic demand. The storage allows, to some extent, a decoupling of heat production from heat demand. We focus on the situation where each heating system has its own demand and buffer, and the supply of the heating systems comes from a common source. In practice, the common source may couple the planning of the group of heating systems. On the one hand, the external energy for heating the water may have to be bought by an energy supplier on, e.g., a day-ahead market. As the price of energy on such markets varies over time, this supplier is interested in a planning which minimizes the total cost of supplying the heating systems with energy. On the other hand, the bottleneck for supplying the energy may also be the capacity of the distribution system (e.g. the electricity network or the gas network). As this capacity has to be dimensioned for the maximal consumption, in this case it is important to minimize the maximal peak. These two coupling constraints lead to two different objectives for planning the group of heating systems: minimizing cost and minimizing the maximal peak. In this paper, we study the algorithmic complexity of the two resulting planning problems. For minimizing costs, a classical dynamic programming approach is given which solves the problem in polynomial time. On the other hand, we prove that minimizing the maximal peak is NP-hard and discuss why this problem is hard. Based on this, we show that the problem becomes polynomial if all heating systems have the same energy consumption when turned on. Finally, we present a fixed-parameter tractable (FPT) algorithm for minimizing the maximal peak which is linear in the number of time intervals. | Minimizing costs is easier than minimizing peaks when supplying the heat demand of a group of houses |
S0377221714008637 | The increase in the cost of supplies and services is outpacing the increase in revenues at many hospitals. To address this cost increase, hospitals are seeking more efficient ways to store and manage vast inventories of medical supplies. A parsimonious and efficient inventory system, which we call 2Bin, is becoming increasingly popular in North American hospitals. Under the 2Bin system, inventory is stored in two equal-sized bins; the system is reviewed periodically and empty bins are replenished. In recent years, the adoption of RFID technology for 2Bin systems has allowed continuous-time tracking of empty bins, increasing inventory visibility. In this paper we model the 2Bin inventory system under periodic and continuous review. For periodic review, we show that the long-run average cost per unit time is quasi-convex, enabling a simple search for the optimal review cycle. For continuous review, we present a semi-Markov decision model, characterize the optimal replenishment policy, and provide a solution approach to obtain the long-run average cost per unit time. Using data obtained from hospitals currently using RFID-enabled 2Bin systems, we estimate the economic benefits of using the best periodic review length (i.e., parameter optimization) and of using a continuous review inventory policy (i.e., policy improvement). We characterize the system conditions (the number of medical supplies used, replenishment costs, stock-out costs, etc.) that favor each option, and provide insights to hospital management on system design considerations that favor the use of periodic or continuous review. | The 2Bin system for controlling medical supplies at point-of-use |
S0377221714008649 | The complexity of air traffic flow management originates at the airport and increases with the number of daily aircraft departures and arrivals. To contribute toward accelerated air traffic flow management (ATfM), multivariate statistical models were developed based on airport utility. The utility functions were derived from daily probabilities of airport delay and inefficiency computed using parameterized statistical models. The estimates were based on logistic and stochastic frontier models, from which distribution functions were derived to estimate daily airport utilities. The data for testing and model simulations are daily aggregates spanning a five-year period, collected from Entebbe International Airport. The utility models show a 2 percent difference between daily aircraft operations at departure (92 percent) and at arrival (94 percent). These findings confirm that events leading to departures are more rigid than those observed at arrivals. Simulation results further confirmed that lowering delays at departure and arrival would result in higher airport utility. Airport utility was found to decrease consistently as the air-to-ground cost ratio increases. Airport utility analyses were most stable at a delay threshold of 60 percent and an air-to-ground cost ratio of 1.6 for both departures and arrivals. For better outcomes of airport utility studies, this study therefore recommends treating departure and arrival analyses separately. The models developed are flexible and easily replicable, with small adjustments to reflect airport-specific characteristics. | Airport utility stochastic optimization models for air traffic flow management |
S0377221714008650 | Operational Research (OR) techniques have been applied, from the early stages of the discipline, to a wide variety of issues in education. At the government level, these include questions of what resources should be allocated to education as a whole and how these should be divided amongst the individual sectors of education and the institutions within the sectors. Another pertinent issue concerns the efficient operation of institutions, how to measure it, and whether resource allocation can be used to incentivise efficiency savings. Local governments, as well as being concerned with issues of resource allocation, may also need to make decisions regarding, for example, the creation and location of new institutions or closure of existing ones, as well as the day-to-day logistics of getting pupils to schools. Issues of concern for managers within schools and colleges include allocating the budgets, scheduling lessons and the assignment of students to courses. This survey provides an overview of the diverse problems faced by government, managers and consumers of education, and the OR techniques which have typically been applied in an effort to improve operations and provide solutions. | Operational Research in education |
S0377221714008662 | Two methods for ranking solutions of multi-objective optimization problems are proposed in this paper. The methods can be used, e.g., by metaheuristics to select good solutions from a set of non-dominated solutions. They are suitable for population-based metaheuristics to limit the size of the population. It is shown theoretically that the ranking methods possess some interesting properties for such applications. In particular, it is shown that both methods form a total preorder and that both are refinements of the Pareto dominance relation. An experimental investigation of a multi-objective flow shop problem shows that the use of the new ranking methods in a population-based Ant Colony Optimization algorithm and in a genetic algorithm leads to good results when compared to other methods. | Refined ranking relations for selection of solutions in multi objective metaheuristics |
S0377221714008674 | Non-profit organizations like the Meals On Wheels (MOW) Association of America prepare and deliver meals, typically daily, to approximately one million homebound individuals in the United States alone. However, many MOW agencies are facing a steadily increasing number of clients requesting meal service without an increase in resources (either financial or human). One strategy for accommodating these requests is to deliver multiple (frozen) meals at a time and thus make fewer deliveries. However, many of the stakeholders (funders, volunteers, meal recipients) value the relationships that are developed by having a client receive daily deliveries from the same volunteer. Further, meal recipients may be concerned with the quality of food delivered in a frozen meal. In this paper, we develop a method for introducing consolidation into home meal delivery while minimizing operational disruptions and maintaining client satisfaction. Through an extensive computational study, the savings associated with various levels and types of disruptions are detailed. The question of whether delivering frozen meals will enable an agency to both save money and deliver meals to a larger client base is also studied. | Consolidating home meal delivery with limited operational disruption |
S0377221714008686 | We consider a discrete-time version of the popular optimal dividend payout problem in risk theory. The novel aspect of our approach is that we allow for a risk-averse insurer; i.e., instead of maximising the expected discounted dividends until ruin, we maximise the expected utility of discounted dividends until ruin. This task was proposed as an open problem in Gerber and Shiu (2004). The model in a continuous-time Brownian motion setting with the exponential utility function was analysed in Grandits et al. (2007); nevertheless, a complete solution was not provided. In this work, we instead solve the problem in the discrete-time setup for the exponential and power utility functions and give the structure of optimal history-dependent dividend policies. We make use of ideas studied earlier in Bäuerle and Rieder (2011), where Markov decision processes with general utility functions were treated. Our analysis, however, includes new aspects, since the reward functions in this case are not bounded. | Risk-sensitive dividend problems |
S0377221714008698 | In many supply chains, the manufacturer sells not only through an independent retailer, but also through its own direct channel. This work studies the pricing and assortment decisions in such a supply chain in the presence of inventory costs. In our model, the retailer offers a subset of the assortment that the manufacturer offers through its direct channel. We model the customer demand by building on the nested-logit model, which captures the customer’s choice between the manufacturer and the retailer. This model produces several insights into the optimal pricing strategies of the manufacturer. For example, we find that variants with high demand variability will carry a lower wholesale price. Furthermore, we characterize scenarios in which the manufacturer’s and retailer’s assortment preferences are in conflict. In particular, the manufacturer may prefer the retailer to carry items with high demand variability while the retailer prefers items with low demand variability. | Pricing and assortment decisions for a manufacturer selling through dual channels |
S0377221714008704 | We propose a generalization of the multi-depot capacitated vehicle routing problem where the assumption of visiting each customer does not hold. In this problem, called the Multi-Depot Covering Tour Vehicle Routing Problem (MDCTVRP), the demand of each customer could be satisfied in two different ways: either by visiting the customer along the tour or by “covering” it. When a customer is visited, the corresponding demand is delivered at its location. A customer is instead covered when it is located within an acceptable distance from at least one visited customer from which it can receive its demand. For this problem we develop two mixed integer programming formulations and a hybrid metaheuristic combining GRASP, iterated local search and simulated annealing. Extensive computational tests on this problem and some of its variants clearly indicate the effectiveness of the developed solution methods. | A hybrid metaheuristic algorithm for the multi-depot covering tour vehicle routing problem |
S0377221714008716 | The minimum common string partition problem is an NP-hard combinatorial optimization problem with applications in computational biology. In this work we propose the first integer linear programming model for solving this problem. Moreover, on the basis of the integer linear programming model, we develop a deterministic two-phase heuristic that is applicable to larger problem instances. The results show that provably optimal solutions can be obtained for problem instances of small and medium size from the literature by solving the proposed integer linear programming model with CPLEX. Furthermore, new best-known solutions are obtained for all considered problem instances from the literature. Concerning the heuristic, we show that it outperforms heuristic competitors from the related literature. | Mathematical programming strategies for solving the minimum common string partition problem |
S0377221714008728 | In this paper we analyze the applicability of the TOPSIS method to support the process of building a scoring system for negotiation offers in ill-structured negotiations. When discussing the ill-structured negotiation problem we consider two major issues: the imprecisely defined negotiation space, and the vagueness of the negotiator's preferences, which cannot be defined by means of crisp values. First we introduce the traditional fuzzy TOPSIS model, showing alternative ways of normalizing the data and measuring the distances, which makes it possible to avoid the problem of rank reversals. Then we formalize ill-structured negotiations using a model in which the negotiation problem is defined in a simplified way, by means of the aspiration and reservation levels only. Such a definition requires changes to the traditional fuzzy TOPSIS algorithm and the development of a mechanism for scoring offers that fall outside the negotiation space defined independently and subjectively by the negotiator. We propose three different approaches to handling this problem that keep the scoring system stable and unchanged throughout the whole negotiation process. | Application of fuzzy TOPSIS to scoring the negotiation offers in ill-structured negotiation problems |
S0377221714008741 | Reverse factoring—a financial arrangement where a corporation facilitates early payment of its trade credit obligations to suppliers—is increasingly popular in industry. Many firms use the scheme to induce their suppliers to grant them more lenient payment terms. By means of a periodic review base stock model that includes alternative sources of financing, we explore the following question: what extensions of payment terms allow the supplier to benefit from reverse factoring? We obtain solutions by means of simulation optimisation. We find that an extension of payment terms induces a non-linear financing cost for the supplier, beyond the opportunity cost of carrying additional receivables. Furthermore, we find that the size of the payment term extension that a supplier can accommodate depends on demand uncertainty and the cost structure of the supplier. Overall, our results show that the financial implications of an extension of payment terms need careful assessment in stochastic settings. | The price of reverse factoring: Financing rates vs. payment delays |
S0377221714008753 | We address a generalization of the asymmetric Traveling Salesman Problem in which routes have to be constructed to satisfy customer requests that involve either the pickup or the delivery of a single commodity. A vehicle is to be routed such that the demand and the supply of the customers are satisfied, with the objective of minimizing the total distance traveled. The commodities collected from the pickup customers can be used to accommodate the demand of the delivery customers. In this paper, we present three mathematical formulations for this problem class and apply branch-and-cut algorithms to solve the model formulations optimally. For two of the models we derive Benders cuts based on the classical and the generalized Benders decomposition. Finally, we analyze the different mathematical formulations and associated solution approaches on well-known data sets from the literature. | Mathematical formulations for a 1-full-truckload pickup-and-delivery problem |
S0377221714008765 | Technology-based loan default is related not only to technology-oriented attributes (management, technology, profitability and marketability) and firm-specific characteristics, but also to the economic situation after the loan is granted. However, models of default for technology-based loans have not reflected changes in the economic situation. We propose a framework that applies a time-varying Cox proportional hazards model in the context of technology-based credit scoring. The proposed model is used for stress tests under various scenarios of lending portfolios and economic situations. The results indicate that firms with above-average management scores have lower loan default rates than firms with above-average profitability or marketability scores, owing to the effect of managers' knowledge, experience and fund-supply ability when exposed to the same economic conditions. In the scenario tests, we found the highest default rate under a stable exchange rate with a high consumer price index. Moreover, firms with a high level of marketability factors turn out to be significantly affected by economic conditions in terms of technology credit risk. We expect the results of this study to provide valuable feedback for the management of technology credit funds for SMEs. | Behavioral technology credit scoring model with time-dependent covariates for stress test |
S0377221714008777 | In this paper, we consider an assemble-to-order manufacturing system producing a single end product, assembled from n components, and serving an after-sales market for individual components. Components are produced in a make-to-stock fashion, one unit at a time, on independent production facilities. Production times are exponentially distributed with finite production rates. The components are stocked ahead of demand and therefore incur a holding cost rate per unit. Demand for the end product, as well as for the individual components, occurs continuously over time according to independent Poisson streams. To characterize the optimal production and inventory rationing policies, we formulate the problem using a Markov decision process framework. In particular, we show that the optimal component production policy is a state-dependent base-stock policy, and that the optimal component inventory rationing policy has state-dependent rationing levels. Recognizing that such a policy is generally difficult both to obtain numerically and to implement in practice, we propose three heuristic policies that are easier to implement. We show that two of these heuristics are highly efficient compared to the optimal policy. In particular, we show that one of the two strikes a balance between high efficiency and computational effort and can thus be used as an effective substitute for the optimal policy. | Managing an assemble-to-order system with after sales market for components |
S0377221714008789 | Various preference-based multi-objective evolutionary algorithms have been developed to help a decision-maker search for his/her preferred solutions to multi-objective problems. In most of the existing approaches the decision-maker's preferences are formulated either by mathematical expressions such as a utility function or simply by numerical values such as aspiration levels and weights. However, a decision-maker may find it easier to specify preferences visually, by drawing rather than by using numbers. This paper introduces such a method, namely, the brushing technique. Using this technique the decision-maker can specify his/her preferences easily by drawing in the objective space. Combining the brushing technique with the existing algorithm PICEA-g, we present a novel approach named iPICEA-g for interactive decision-making. The performance of iPICEA-g is tested on a set of benchmark problems and is shown to be good. | The iPICEA-g: a new hybrid evolutionary multi-criteria decision making approach using the brushing technique |
S0377221714008960 | In this work, a spatial equilibrium problem is formulated for analyzing the impact of the application of the EU-ETS on the steel industry that has historically seen Europe as one of its major producers. The developed model allows us to simultaneously represent the interactions of several market players, to endogenously determine output and steel prices and to analyze the investment in the Carbon Capture and Storage (CCS) technology. In addition, the proposed model supports the evaluation of the CO2 emission costs on the basis of Directive 2009/29/EC, the “20-20-20” targets, and the Energy Roadmap 2050. In this light, two main processes for steelmaking have to be considered: integrated mills (BOF) and Electric Arc Furnace (EAF) in minimills. | The steel industry: A mathematical model under environmental regulations |
S0377221714008972 | The world faces major problems, not least climate change and the financial crisis, and business schools have been criticised for their failure to help address these issues and, in the case of the financial meltdown, for being causally implicated in it. In this paper we begin by describing the extent of what has been called the rigour/relevance debate. We then diagnose the nature of the problem in terms of the historical, structural and contextual mechanisms that initiated and now sustain an inability of business schools to engage with real-world issues. We then propose a combination of mutually reinforcing measures necessary to break this vicious circle: critical realism as an underpinning philosophy that supports and embodies the following points; holism and transdisciplinarity; multimethodology (mixed-methods research); and a critical and ethically committed stance. OR and management science have much to contribute in terms of both powerful analytical methods and problem structuring methods. | Helping business schools engage with real problems: The contribution of critical realism and systems thinking |
S0377221714008984 | The distribution of products using compartmentalized vehicles involves many decisions such as the allocation of products to vehicle compartments, vehicle routing and inventory control. These decisions often span several periods, yielding a difficult optimization problem. In this paper we define and compare four main categories of the Multi-Compartment Delivery Problem (MCDP). We propose two mixed-integer linear programming formulations for each case, as well as specialized models for particular versions of the problem. Known and new valid inequalities are introduced in all models. We then describe a branch-and-cut algorithm applicable to all variants of the MCDP. We have performed extensive computational experiments on single-period and multi-period cases of the problem. The largest instances that could be solved exactly for these two cases contain 50 and 20 customers, respectively. | Classification, models and exact algorithms for multi-compartment delivery problems |
S0377221714008996 | In humans, the relationship between experience and productivity, also known as learning (possibly also including forgetting), is non-linear. As a result, prescriptive planning models that seek to manage workforce development through task assignment are difficult to solve. To overcome this challenge we adapt a reformulation technique from non-convex optimization to model non-linear functions with a discrete domain with sets of binary and continuous variables and linear constraints. Further, whereas the original applications of this technique yielded approximations, we show that in our context the resulting mixed integer program is equivalent to the original non-linear problem. As a second contribution, we introduce a capacity scaling algorithm that exploits the structure of the reformulation model and reduces computation time. We demonstrate the effectiveness of the techniques on task assignment models wherein employee learning is a function of task repetition. | Integer programming techniques for solving non-linear workforce planning models with learning |
S0377221714009011 | Pareto Local Search (PLS) is a simple and effective local search method for tackling multi-objective combinatorial optimization problems. It is also a crucial component of many state-of-the-art algorithms for such problems. However, PLS may not be very effective when terminated before completion; in other words, PLS has poor anytime behavior. In this paper, we study the effect that various PLS algorithmic components have on its anytime behavior. We show that the anytime behavior of PLS can be greatly improved by using alternative algorithmic components. We also propose Dynagrid, a dynamic discretization of the objective space that helps PLS converge faster to a good approximation of the Pareto front and continue to improve it if more time is available. We perform a detailed empirical evaluation of the new proposals on the bi-objective traveling salesman problem and the bi-objective quadratic assignment problem. Our results demonstrate that the new PLS variants not only have significantly better anytime behavior than the original PLS, but may also obtain better results for longer computation times or upon completion. | Anytime Pareto local search |
S0377221714009023 | This paper studies the joint optimization of energy and delay in a multi-hop wireless network. The optimization variables are the transmission rates, which are adjustable according to the packet queue length in the buffer. The goal is to minimize the energy consumption of energy-critical nodes and the packet transmission delay throughout the network. In this paper, we aim to understand the well-known threshold-based decentralized algorithms from a different research angle. Using a simplified network model, we show that we can adopt the semi-open Jackson network model and study this optimization problem in closed form. This simplified network model further allows us to establish some significant optimality properties. We prove that the system performance is monotonic with respect to the transmission rate. We also prove that the threshold-type policy is optimal: when the number of packets in the buffer is larger than a threshold, transmit at the maximal rate (power); otherwise, do not transmit. With these optimality properties, we develop a heuristic algorithm to iteratively find the optimal threshold. Finally, we conduct simulation experiments to demonstrate the main ideas of the paper. | A Jackson network model and threshold policy for joint optimization of energy and delay in multi-hop wireless networks |
S0377221714009035 | This paper studies the multiple-runway Aircraft Landing Problem. The aim is to schedule arriving aircraft on the available runways at the airport. Landing times lie within predefined time windows, and safety separation constraints between two successive landings must be satisfied. We propose a new approach for solving the problem, based on an approximation of the separation-time matrix and on time discretization. The separation matrix is approximated by a rank-two matrix, which provides lower or upper bounds depending on the choice of the approximating matrix. These bounds are used in a constraint generation algorithm to solve the problem exactly or heuristically. Computational tests, performed on publicly available problems involving up to 500 aircraft, show the efficiency of the approach. | Solving the Aircraft Landing Problem with time discretization approach |
S0377221714009047 | The buffer allocation problem combines a dynamical description of the underlying production process with stochastic processing times. The aim is to find optimal buffer sizes, averaged over several samples. Starting from a time-discrete recursion, we derive a time-continuous model supplemented with a stochastic process. The new model is used for both simulation and optimization purposes. Numerical experiments show the efficiency of our approach compared to other optimization techniques. | A continuous buffer allocation model using stochastic processes |
S0377221714009059 | We focus on multicriteria preference elicitation by matching. In this widely employed task, the decision maker (DM) is presented with two multicriteria options, a and b, and must assess the performance value on one criterion of b, left blank, so that she is indifferent between the two options. A reverse matching, which is normatively equivalent, can be created by integrating the answer into the description of b and letting the DM adjust a performance value on the previously fully specified option a. Such a procedure is called a bi-matching. Consistency requires that the isopreferences resulting from the forward and backward matchings be identical, but empirically they differ in a systematic direction. In a matching task, multicriteria conflict refers to the magnitude of the advantage or disadvantage to be compensated. We investigate the effect of multicriteria conflict, or trade-off size, on the difference in judgement between forward and backward matchings. We observe that the difference in judgement is increased both by multicriteria conflict and by asking for deteriorating rather than improving judgements at both steps of the bi-matching. We derive some implications for the practice of preference elicitation. | The effect of bi-criteria conflict on matching-elicited preferences |
S0377221714009060 | This paper presents a state transition based formal framework for a new search method, called Evolutionary Ruin and Stochastic Recreate, which tries to learn and adapt to the changing environments during the search process. It improves the performance of the original Ruin and Recreate principle by embedding an additional phase of Evolutionary Ruin to mimic the survival-of-the-fittest mechanism within single solutions. This method executes a cycle of Solution Decomposition, Evolutionary Ruin, Stochastic Recreate and Solution Acceptance until a certain stopping condition is met. The Solution Decomposition phase first uses some problem-specific knowledge to decompose a complete solution into its components and assigns a score to each component. The Evolutionary Ruin phase then employs two evolutionary operators (namely Selection and Mutation) to destroy a certain fraction of the solution, and the next Stochastic Recreate phase repairs the “broken” solution. Last, the Solution Acceptance phase selects a specific strategy to determine the probability of accepting the newly generated solution. Hence, optimisation is achieved by an iterative process of component evaluation, solution disruption and stochastic constructive repair. From the state transitions point of view, this paper presents a probabilistic model and implements a Markov chain analysis on some theoretical properties of the approach. Unlike the theoretical work on genetic algorithm and simulated annealing which are based on state transitions within the space of complete assignments, our model is based on state transitions within the space of partial assignments. The exam timetabling problems are used to test the performance in solving real-world hard problems. | Search with evolutionary ruin and stochastic rebuild: A theoretic framework and a case study on exam timetabling |
S0377221714009072 | The Biobjective Shortest Path Problem (BSP) is the problem of finding (one-to-one) paths from a start node to an end node, while simultaneously minimizing two (conflicting) objective functions. We present an exact recursive method based on implicit enumeration that aggressively prunes dominated solutions. Our approach compares favorably against a top-performer algorithm on two large testbeds from the literature and efficiently solves the BSP on large-scale networks with up to 1.2 million nodes and 2.8 million arcs. Additionally, we describe how the algorithm can be extended to handle more than two objectives and prove the concept on networks with up to 10 objectives. | An exact method for the biobjective shortest path problem for large-scale road networks |
S0377221714009084 | We consider the problem of pricing and alliance selection that a dominant retailer in a two-echelon supply chain decides when facing a potential upstream entry. The two-echelon supply chain consists of a dominant retailer, an incumbent supplier and an “incursive” vendor, where both the incumbent supplier and “incursive” vendor sell substitutable products to the common market through the dominant retailer. Our objective is to discuss whether the dominant retailer should sell the “incursive” vendor's products and, if so, how the dominant retailer strategically selects the alliance structure to maximize his/her own profit. We also present how all the members make their pricing decisions and analyze the impact of competitive intensity between two products on their pricing strategies after the entry of the vendor in possible alliance settings. Our results show that: (1) the introduction of the upstream vendor always benefits the retailer, and more interestingly, benefits the incumbent supplier in many cases, too; (2) defining competitive ability as the price dominance of one player over another when both compete for the same customer market, if the price competition between the incumbent supplier and the “incursive” vendor is relatively fierce, the dominant retailer should ally with the one that has the relatively strong competitive ability rather than the one that has the relatively weak competitive ability; otherwise, he/she should ally with both upstream members. Finally, using numerical examples, we analyze the impact of different parameters and provide some management insights. | Pricing and alliance selection for a dominant retailer with an upstream entry
S0377221714009096 | The Mixed Capacitated General Routing Problem (MCGRP) is defined over a mixed graph, for which some vertices must be visited and some links must be traversed at least once. The problem consists of determining a set of least-cost vehicle routes that satisfy this requirement and respect the vehicle capacity. Few papers have been devoted to the MCGRP, in spite of interesting real-world applications, prevalent in school bus routing, mail delivery, and waste collection. This paper presents a new mathematical model for the MCGRP based on two-index variables. The approach proposed for the solution is a two-phase branch-and-cut algorithm, which uses an aggregate formulation to develop an effective lower bounding procedure. This procedure also provides strong valid inequalities for the two-index model. Extensive computational experiments over benchmark instances are presented. | Two-phase branch-and-cut for the mixed capacitated general routing problem |
S0377221714009102 | We consider a simple and altruistic multiagent system in which the agents are eager to perform a collective task but where their real engagement depends on the willingness to perform the task of other influential agents. We model this scenario by an influence game, a cooperative simple game in which a team (or coalition) of players succeeds if it is able to convince enough agents to participate in the task (to vote in favor of a decision). We take the linear threshold model as the influence model. We show first the expressiveness of influence games showing that they capture the class of simple games. Then we characterize the computational complexity of various problems on influence games, including measures (length and width), values (Shapley–Shubik and Banzhaf) and properties (of teams and players). Finally, we analyze those problems for some particular extremal cases, with respect to the propagation of influence, showing tighter complexity characterizations. | Cooperation through social influence |
S0377221714009114 | We deal with the multi-attribute decision problem with sequentially presented decision alternatives. Our decision model is based on the assumption that the decision-maker has a major attribute that must be “optimized” and minor attributes that must be “satisficed”. In the vendor selection problem, for example, the product price could be the major factor that should be optimized, while the product quality and delivery time could be the minor factors that should satisfy certain aspiration levels. We first derive the optimal selection strategy for the discrete-time case in which one alternative is presented at each time period. The discrete-time model is then extended to the continuous-time case in which alternatives are presented sequentially at random times. A numerical example is used to analyze the effects of the satisficing condition and the uncertainty on the optimal selection strategy. | Multi-attribute sequential decision problem with optimizing and satisficing attributes |
S0377221714009126 | Landelijke Thuiszorg is a “social profit” organisation that provides home care services in several Belgian regions. In this paper, the core optimisation component of a decision support system to support the planning of the organisation's home care service is described. Underlying this decision support system is an optimisation problem that aims to maximise the service level and to minimise the distance travelled by the caregivers of the organisation. This problem is formulated as a bi-objective mathematical program, based on a set partitioning problem formulation. A flexible two-stage solution strategy is designed to efficiently tackle the problem. Computational tests, as well as extensive pilot runs performed by the organisation’s personnel, show that this approach achieves excellent performance, both in terms of the service level and total travelled distance. Moreover, computational times are small, allowing for the weekly planning to be largely automated. The organisation is currently in the process of implementing our solution approach in collaboration with an external software company. | Home care service planning. The case of Landelijke Thuiszorg
S0377221714009138 | We study the Demand Adjustment Problem (DAP) associated to the urban traffic planning. The framework for the formulation of the DAP is mathematical programming with equilibrium constraints. In particular, if we consider the optimization problem equivalent to the equilibrium problem, the DAP becomes a bilevel optimization problem. In this work we present a descent scheme based on the approximation of the gradient of the objective function of DAP. | A heuristic for the OD matrix adjustment problem in a congested transport network |
S0377221714009151 | We present a method of hedging Conditional Value at Risk of a position in stock using put options. The result leads to a linear programming problem that can be solved to optimise risk hedging. | Hedging Conditional Value at Risk with options |
S0377221714009163 | This paper describes a methodology that aims to enhance statistical inference in data envelopment analysis (DEA). In order to incorporate statistical properties in a DEA analysis we propose a combined application of a chance constrained DEA (CCDEA) model that is integrated with a stochastic mechanism from Bayesian techniques. The proposed method is conducted in two basic steps. In a first step we make use of Bayesian techniques on the data set to generate a statistical model and to simulate a large number of alternative data sets that can be observed as realizations. In a second step we solve the CCDEA problem for each and every one of the alternative samples, compute efficiency measures, and use the sampling distribution of these measures as an approximation to the finite sample distribution. The paper discusses the statistical advantages of this method using cross-sectional data from a sample of 117 Greek public hospitals. In testing the model we use homogeneous groups of hospitals of various sizes according to the hierarchical structure of the Greek health system (primary, secondary and tertiary care). In order to measure the overall technical efficiency of hospitals that are classified into different groups we introduce the concept of metafrontier analysis on the developed model. The results show that the tertiary and secondary hospitals operate with similar production technologies while a large technology gap is observed between the primary care hospitals and the metafrontier. | Combining stochastic DEA with Bayesian analysis to obtain statistical properties of the efficiency scores: An application to Greek public hospitals
S0377221714009175 | We consider the online Steiner Traveling Salesman Problem. In this problem, we are given an edge-weighted graph G = (V, E) and a subset D⊆V of destination vertices, with the optimization goal to find a minimum weight closed tour that traverses every destination vertex of D at least once. During the traversal, the salesman could encounter at most k non-recoverable blocked edges. The edge blockages are real-time, meaning that the salesman knows about a blocked edge whenever it occurs. We first show a lower bound on the competitive ratio and present an online optimal algorithm for the problem. While this optimal algorithm has non-polynomial running time, we present another online polynomial-time near optimal algorithm for the problem. Experimental results show that our online polynomial-time algorithm produces solutions very close to the offline optimal solutions. | The Steiner Traveling Salesman Problem with online edge blockages |
S0377221714009187 | We investigate project scheduling with stochastic activity durations to maximize the expected net present value. Individual activities also carry a risk of failure, which can cause the overall project to fail. In the project planning literature, such technological uncertainty is typically ignored and project plans are developed only for scenarios in which the project succeeds. To mitigate the risk that an activity’s failure jeopardizes the entire project, more than one alternative may exist for reaching the project’s objectives. We propose a model that incorporates both the risk of activity failure and the possible pursuit of alternative technologies. We find optimal solutions to the scheduling problem by means of stochastic dynamic programming. Our algorithms prescribe which alternatives need to be explored, and how they should be scheduled. We also examine the impact of the variability of the activity durations on the project’s value. | Project planning with alternative technologies in uncertain environments |
S0377221714009199 | This paper generalizes the automated innovization framework using genetic programming in the context of higher-level innovization. Automated innovization is an unsupervised machine learning technique that can automatically extract significant mathematical relationships from Pareto-optimal solution sets. These resulting relationships describe the conditions for Pareto-optimality for the multi-objective problem under consideration and can be used by scientists and practitioners as thumb rules to understand the problem better and to innovate new problem solving techniques; hence the name innovization (innovation through optimization). Higher-level innovization involves performing automated innovization on multiple Pareto-optimal solution sets obtained by varying one or more problem parameters. The automated innovization framework was recently updated using genetic programming. We extend this generalization to perform higher-level automated innovization and demonstrate the methodology on a standard two-bar bi-objective truss design problem. The procedure is then applied to a classic case of inventory management with multi-objective optimization performed at both system and process levels. The applicability of automated innovization to this area should motivate its use in other avenues of operational research. | Generalized higher-level automated innovization with application to inventory management |
S0377221714009205 | The paper provides some guidelines to individuals with defined contribution (DC) pension plans on how to manage pension savings both before and after retirement. We argue that decisions regarding investment, annuity payments, and the size of death sum should not only depend on the individual’s age (or time left to retirement), nor should they solely depend on the risk preferences, but should also capture: (1) economical characteristics—such as current value on the pension savings account, expected pension contributions (mandatory and voluntary), and expected income after retirement (e.g. retirement state pension), and (2) personal characteristics—such as risk aversion, lifetime expectancy, preferable payout profile, bequest motive, and preferences on portfolio composition. Specifically, the decisions are optimal under the expected CRRA utility function and are subject to the constraints characterizing the individual. The problem is solved via a model that combines two optimization approaches: stochastic optimal control and multi-stage stochastic programming. The former method is common in financial and actuarial literature, but produces theoretical results. However, the latter, which is characteristic of operations research, has practical applications. We present the operations research methods which have the potential to stimulate new thinking and add to actuarial practice. | Optimal savings management for individuals with defined contribution pension plans
S0377221714009217 | The concept of delayed product differentiation has received considerable attention in the research literature in recent years. However, few analytical models explain and quantify the benefits of delayed product differentiation strategy with additional consideration of supplier delivery performance. This paper proposes a delayed product differentiation model in which a supply of raw materials is integrated at the beginning of the production process to match uncertain demand in a cost-effective way given the constraint of lead time delivery window. It develops insights regarding a delayed product differentiation strategy and shows that with respect to delivery windows, supplier delivery performance plays an important role in the determination of the optimal point of differentiation. This study also shows that when the “on-time” and the “late” portions of the delivery window are constant, the proposed cost function coincides with similar models found in the literature. An extension of this work also reveals that when the customer service level varies across various production stages, its choice affects the decision to delay or postpone the customization point. A mini industrial case involving the customization of a personal desktop computer is used to illustrate the applicability of the resulting framework. | A delayed product customization cost model with supplier delivery performance |
S0377221714009229 | In order to maintain a unit that is running successive works with cycle times, this paper classifies its replacement policies into three types: (Type I) Replacement with interrupted cycles; (Type II) replacement with complete works; and (Type III) replacement with incomplete works. Type I is typically done at a continuous time T while Type II is executed at a discrete number N of working cycles. Type III is proposed as an improvement of Type I, which can be done at discrete working cycles. For each type, age and periodic replacement models are respectively observed. It is shown that Type I is more flexible than Type II and costs less than Type III. However, modified replacement costs, i.e., without penalty of operational interruptions, are obtained for Types II and III as critical points at which their policies should be adopted. All discussions are presented analytically and numerical examples are given when each cycle time is exponential and the failure time has a Weibull distribution. | Which is better for replacement policies with continuous or discrete scheduled times? |
S0377221714009230 | One of the greatest challenges in managing product development projects is identifying an appropriate sequence of many coupled activities. The current study presents an effective approach for determining the activity sequence with minimum total feedback time in a design structure matrix (DSM). First, a new formulation of the optimization problem is proposed, which allows us to obtain optimal solutions in a reasonable amount of time for problems up to 40 coupled activities. Second, two simple rules are proposed, which can be conveniently used by management to reduce the total feedback time. We also prove that if the sequence of activities in a subproblem is altered, then the change of total feedback time in the overall problem equals the change in the subproblem. Because the optimization problem is NP-complete, we further develop a heuristic approach that is able to provide good solutions for large instances. To illustrate its application, we apply the presented approach to the design of balancing machines in an international firm. Finally, we perform a large number of random experiments to demonstrate that the presented approach outperforms existing state-of-the-art heuristics. | An effective approach for scheduling coupled activities in development projects
S0377221714009242 | We study a multiple period discount problem for products that undergo several price cuts over time. In the high-technology sector, electronic component suppliers are often able to offer pre-announced price cuts to buyers due to technological innovation that allows them to produce existing components at lower costs. In this context, suppliers are primarily concerned with the optimal pricing decisions for the components over their life spans in order to achieve the highest possible revenues. Accordingly, the buying firms (i.e., manufacturers or retailers) also need to identify the corresponding optimal retail prices and order quantity for the finished products that utilize the components for which discounts are offered frequently. In this research, we develop a multiple-period price discount model that addresses this issue. Extant research in the price discount literature focuses on supply chains’ pricing and inventory decisions in the presence of a single price discount. The proposed model, in contrast, offers a systematic decision tool for identifying the optimal strategies throughout a product's life span. Our results show that a decentralized supply chain characterized by multi-period discounts over a product's life typically achieves 75 percent supply chain efficiency. We undertake a series of numerical experiments based on the model and discuss their managerial implications. | Optimal pricing and inventory strategies with multiple price markdowns over time |
S0377221714009254 | The goal of this paper is to quantify the impact of Inventory Record Inaccuracy on the dynamics of collaborative supply chains, both in terms of operational performance (i.e. order and inventory stability), and customer service level. To do so, we model an Information Exchange Supply Chain under shrinkage errors in the inventory item recording activity of their nodes, present the mathematical formulation of such supply chain model, and conduct a numerical simulation assuming different levels of errors. Results clearly show that Inventory Record Inaccuracy strongly compromises supply chain stability, particularly when moving upwards in the supply chain. Important managerial insights can be extracted from this analysis, such as the role of ‘benefit-sharing’ strategies in order to guarantee the advantage of investments in connectivity technologies. | The effect of Inventory Record Inaccuracy in Information Exchange Supply Chains |
S0377221714009461 | The crude oil tanker routing and scheduling problem with split pickup and split delivery is a maritime transportation task where an industrial operator needs to ship different types of crude oil from production sites to oil refineries. The different crude oils are supplied and demanded in many ports in certain time windows. Pickup and delivery quantities are known in advance but no pairing of pickup and delivery needs to be predefined and can be decided together with shipment quantities during optimization. Pickup and delivery quantities may be split arbitrarily among the ships in the fleet. We compare two alternative path flow model approaches to investigate their degree of applicability in a column generation setup. For this purpose we apply route pregeneration prior to optimization. The first approach uses continuous decision variables for pickup and delivery to decide on shipment quantities. In the considerably shorter second formulation, cargo quantities are discretized and included into the paths. The second approach is capable of solving larger instances and is more efficient in terms of computational performance; however, solution quality may decrease due to the discretization. | Alternative approaches to the crude oil tanker routing and scheduling problem with split pickup and split delivery
S0377221714009473 | We propose a large neighborhood search (LNS) algorithm to solve the periodic location routing problem (PLRP). The PLRP combines location and routing decisions over a planning horizon in which customers require visits according to a given frequency and the specific visit days can be chosen. We use parallelization strategies that can exploit the availability of multiple processors. The computational results show that the algorithms obtain better results than previous solution methods on a set of standard benchmark instances from the literature. | Sequential and parallel large neighborhood search algorithms for the periodic location routing problem |
S0377221714009485 | In this work we deal with the Order Batching Problem (OBP) considering traversal, return and midpoint routing policies. For the first time, we introduce Mixed Integer Linear Programming (MILP) formulations for these three variants of the OBP. We also suggest an efficient Iterated Local Search Algorithm with Tabu Thresholding (ILST). According to our extensive computational experiments on standard and randomly generated instances we can say that the proposed ILST yields an outstanding performance in terms of both accuracy and efficiency. | MILP formulations and an Iterated Local Search Algorithm with Tabu Thresholding for the Order Batching Problem |
S0377221714009497 | Every year, natural and man-made disasters affect hundreds of thousands of people and cause extensive damage. OR has made substantial contributions to disaster response and these have been the subject of several recent literature reviews. However, these reviews have also identified research gaps for OR – two of which are (1) limited contribution from soft OR, and (2) a need to model communications during disasters where there are complex interactions between stakeholders. At the intersection of these gaps we apply the Viable System Model (VSM) to examine challenges of rapid communication viability during dynamic disasters. The data that inform this paper were collected in four case studies in Japan – three on its current capabilities (e.g. a local government disaster management office) and one on its response to a past disaster (the Great Hanshin-Awaji Earthquake in 1995). This paper shows how applying VSM identified generic gaps and opportunities for communication systems and shows how these case studies signal the utility of VSM structures for arranging communications in fast-paced and changing environments. This paper also contributes to VSM theory through developing two new concepts (1) environmental support mechanisms for viability; and (2) rapid implementation unit emergence. | Application of the Viable System Model to analyse communications structures: A case study of disaster response in Japan
S0377221714009503 | The capacity of a runway system represents a bottleneck at many international airports. The current practice at airports is to land approaching aircraft on a first-come, first-served basis. An active rescheduling of aircraft landing times increases runway capacity or reduces delays. The problem of finding an optimal schedule for aircraft landings is referred to as the “aircraft landing problem”. The objective is to minimize the total delay of aircraft landings or the respective cost. The necessary separation time between two operations must be met. Due to the complexity of this scheduling problem, recent research has been focused on developing heuristic solution approaches. This article presents a new algorithm that is able to create optimal landing schedules on multiple independent runways for aircraft with positive target landing times and limited time windows. Our numerical experiments show that problems with up to 100 aircraft can be optimally solved within seconds. | A dynamic programming approach for the aircraft landing problem with aircraft classes |
S0377221714009515 | Many practical optimization problems are dynamically changing, and require a tracking of the global optimum over time. However, tracking usually has to be quick, which excludes re-optimization from scratch every time the problem changes. Instead, it is important to make good use of the history of the search even after the environment has changed. In this paper, we consider Efficient Global Optimization (EGO), a global search algorithm that is known to work well for expensive black box optimization problems where only few function evaluations are possible. It uses metamodels of the objective function for deciding where to sample next. We propose and compare four methods of incorporating old and recent information in the metamodels of EGO in order to accelerate the search for the global optima of a noise-free objective function stochastically changing over time. As we demonstrate, exploiting old information as much as possible significantly improves the tracking behavior of the algorithm. | Tracking global optima in dynamic environments with efficient global optimization |
S0377221714009527 | Condition-based maintenance has been proven effective in reducing unexpected failures with minimum operational costs. This study considers an optimal condition-based replacement policy with optimal inspection interval when the degradation conforms to an inverse Gaussian process with random effects. The random effects parameter is used to account for heterogeneities commonly observed among a product population. Its distribution is updated when more degradation observations are available. The observed degradation level together with the unit’s age are used for the replacement decision. The structure of the optimal replacement policy is investigated in depth. We prove that the monotone control limit policy is optimal. We also provide numerical studies to validate our results and conduct sensitivity analysis of the model parameters on the optimal policy. | Condition-based maintenance using the inverse Gaussian degradation model |
S0377221714009539 | We present a survey that focuses on the response and recovery planning phases of the disaster lifecycle. Related mathematical models developed in this area of research are classified in terms of vehicle/network representation structures and their functionality. The relationships between these characteristics and model size are discussed. The review provides details on goals, constraints, and structures of available mathematical models as well as solution methods. In this review, information systems applications in humanitarian logistics are also surveyed, since humanitarian logistics models and their solutions need to be integrated with information technology to enable their use in practice. | Models, solutions and enabling technologies in humanitarian logistics |
S0377221714009540 | Sparse optimization refers to an optimization problem involving the zero-norm in objective or constraints. In this paper, nonconvex approximation approaches for sparse optimization have been studied with a unifying point of view in DC (Difference of Convex functions) programming framework. Considering a common DC approximation of the zero-norm including all standard sparse inducing penalty functions, we studied the consistency between global minimums (resp. local minimums) of approximate and original problems. We showed that, in several cases, some global minimizers (resp. local minimizers) of the approximate problem are also those of the original problem. Using exact penalty techniques in DC programming, we proved stronger results for some particular approximations, namely, the approximate problem, with suitable parameters, is equivalent to the original problem. The efficiency of several sparse inducing penalty functions has been fully analyzed. Four DCA (DC Algorithm) schemes were developed that cover all standard algorithms in nonconvex sparse approximation approaches as special versions. They can be viewed as an ℓ1-perturbed algorithm, a reweighted-ℓ1 algorithm, or a reweighted-ℓ2 algorithm. We offer a unifying nonconvex approximation approach, with solid theoretical tools as well as efficient algorithms based on DC programming and DCA, to tackle the zero-norm and sparse optimization. As an application, we implemented our methods for the feature selection in SVM (Support Vector Machine) problem and performed empirical comparative numerical experiments on the proposed algorithms with various approximation functions. | DC approximation approaches for sparse optimization
S0377221714009552 | Multi-objective optimization problems arise frequently in applications, but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a framework for evolutionary multi-objective optimization that allows one to work with a formal notion of approximation. This approximation-guided evolutionary algorithm (AGE) has a worst-case runtime linear in the number of objectives and works with an archive that is an approximation of the non-dominated objective vectors seen during the run of the algorithm. Our experimental results show that AGE finds competitive or better solutions not only regarding the achieved approximation, but also regarding the total hypervolume. For all considered test problems, even for many (i.e., more than ten) dimensions, AGE discovers a good approximation of the Pareto front. This is not the case for established algorithms such as NSGA-II, SPEA2, and SMS-EMOA. In this paper we compare AGE with two additional algorithms that use very fast hypervolume-approximations to guide their search. This significantly speeds up the runtime of the hypervolume-based algorithms, which now allows a comparison of the underlying selection schemes. | Efficient optimization of many objectives by approximation-guided evolution
S0377221714009564 | In this paper we deal with TU games in which cooperation is restricted by means of a weighted network. We admit several interpretations for the weight of a link: capacity of the communication channel, flow across it, intimacy or intensity in the relation, distance between both incident nodes/players, cost of building or maintaining the communication link, or even probability of the relation (as in Calvo, Lasaga, and van den Nouweland, 1999). Then, according to the different interpretations, we introduce several point solutions for these restricted games in a way parallel to the familiar environment of Myerson. Finally, we characterize these values in terms of the (adapted) component efficiency, fairness and balanced contributions properties, and we analyze the extent to which they satisfy a link/weight monotonicity property. | Values of games with weighted graphs
S0377221714009576 | This paper investigates the coordinated scheduling problem of production and transportation in a two-stage supply chain, where the actual job processing time is a linear function of its starting time. During the production stage the jobs are first processed in serial batches on a bounded serial batching machine at the manufacturer's site. Then, the batches are delivered to a customer by a single vehicle with limited capacity during the transportation stage, and the vehicle can only deliver one batch at a time. The objective of this proposed scheduling problem is to make decisions on job batching and batch sequencing so as to minimize the makespan. Moreover, we consider two different models. With regard to the scheduling model with a buffer for storing the processed batches before transportation, we develop an optimal algorithm to solve it. For the scheduling model without a buffer, we present some useful properties and develop a heuristic H for solving it. Then a novel lower bound is derived and two optimal algorithms are designed for solving two special cases. Furthermore, computational experiments with random instances of different sizes are conducted to evaluate the proposed heuristic H, and the results show that our proposed algorithm is superior to four other approaches in the literature. Moreover, in our experiments heuristic H can effectively and efficiently solve both small-size and large-size problems in a reasonable time. | Serial batching scheduling of deteriorating jobs in a two-stage supply chain to minimize the makespan
S0377221714009588 | In this study, we propose a mixed integer linear programming based methodology for selecting the location of temporary shelter sites. The mathematical model maximizes the minimum weight of open shelter areas while deciding on the location of shelter areas, the assigned population points to each open shelter area and controls the utilization of open shelter areas. We validate the mathematical model by generating a base case scenario using real data for Kartal, Istanbul, Turkey. Also, we perform a sensitivity analysis on the parameters of the mentioned mathematical model and discuss our findings. Lastly, we perform a case study using the data from the 2011 Van earthquake. | Locating temporary shelter areas after an earthquake: A case for Turkey |
S0377221714009606 | We first consider the three-machine proportionate open shop minimum makespan O3|prpt|C max problem, which is NP-hard in the ordinary sense, and show that it is solvable in O(n log n) time when certain conditions on the total machine load are met. When these conditions are not met, we derive an approximate solution by implementing the optimal solution for the O3|prpt|C max problem when the two longest jobs are of equal length. In that case, both absolute and ratio worst-case bounds are derived. We also consider the more general mixed shop problem M3|prpt|C max in which a given job subset must be processed as in a flow shop while the remaining jobs can be processed as in the O3|prpt|C max problem. We show that our open shop solution techniques can be implemented to derive exact and approximate solutions for the M3|prpt|C max problem. Finally, we discuss the applicability of our open shop results to the proportionate open shop problem with unequal machine speeds. | The three-machine proportionate open shop and mixed shop minimum makespan problems
S0377221714009618 | We study the Cournot competition between two supply chains that are subject to supply uncertainty. Each supply chain consists of a retailer and an exclusive supplier which has random yield. We examine how the levels of supply uncertainty and competition intensity affect the equilibrium decisions of ordering quantity, contract offering, and centralization choice. We show that a retailer should order more if its competing retailer’s supply becomes less reliable or if its own supply becomes more reliable. A supply chain with reliable supply can take great advantage of the high supply risk of its competing chain. Furthermore, for decentralized chains we characterize the optimal wholesale price contracts with linear penalty, under different supply risks and competition scenarios. Finally, we show that supply chain centralization is a dominant strategy, and it always makes the customers better off. Nevertheless, if the supply risk is low and the chain competition is intensive, centralization could actually decrease the supply chain profit, compared with the case where both chains do not choose centralization. This results in a prisoner’s dilemma. On the other hand, if the supply risk is high and/or the competition level is low, centralization always increases the supply chain profit. The desirability of supply chain centralization is enhanced by high supply uncertainty or low chain competition. | Managing supply uncertainty under supply chain Cournot competition |
S0377221714009825 | This paper uses the setting of a volleyball game and an exotic sports bet on the point difference of volleyball games to test whether people correctly understand the probabilities related to outcomes of a process which follows a binomial distribution. We find that people consistently underestimate the probabilities of outcomes that correspond to the extreme ends of the distribution. This is consistent with the extremeness aversion bias documented in decision-making studies. Whereas previous studies on the extremeness aversion bias find the existence of the bias in a consumer choice setting, we document that this bias also exists in an investment setting. We find evidence of learning behavior over time; however, it is not sufficient to eliminate the bias. | Misunderstanding of the binomial distribution, market inefficiency, and learning behavior: Evidence from an exotic sports betting market
S0377221714009837 | In this paper, we extend the vehicle routing problem with clustered backhauls (VRPCB) to an integrated routing and three-dimensional loading problem, called VRPCB with 3D loading constraints (3L-VRPCB). In the VRPCB each customer is either a linehaul or a backhaul customer and in each route all linehaul customers must be visited before any backhaul customer. In the 3L-VRPCB, each customer demand is given as a set of 3D rectangular items (boxes) and the vehicle capacity is replaced by a 3D loading space. Moreover, some packing constraints, e.g. concerning stacking of boxes, are also integrated. A set of routes of minimum total length has to be determined such that each customer is visited once. For each route two packing plans have to be provided that stow all boxes of all visited linehaul and backhaul customers, respectively, taking into account the additional packing constraints. We propose two hybrid algorithms for solving the 3L-VRPCB, each of them consisting of a routing and a packing procedure. The routing procedures follow different metaheuristic strategies (large vs. variable neighborhood search) and in both algorithms a tree search heuristic is responsible for packing boxes. Extensive computational experiments were carried out using 95 3L-VRPCB benchmark instances that were derived from well-known VRPCB instances. Good results are also achieved for the capacitated vehicle routing problem with 3D loading constraints as a special case of the 3L-VRPCB. | Hybrid algorithms for the vehicle routing problem with clustered backhauls and 3D loading constraints |
S0377221714009849 | This paper addresses the problem of minimizing the total tardiness of a set of jobs to be scheduled on identical parallel machines where jobs can only be delivered at certain fixed delivery dates. Scheduling problems with fixed delivery dates are frequent in industry, for example when a manufacturer has to rely on the timetable of a logistics provider to ship its products to customers. We develop and empirically evaluate both optimal and heuristic solution procedures to solve the problem. As the problem is NP-hard, only relatively small instances can be optimally solved in reasonable computational time using either an efficient mathematical programming formulation or a branch-and-bound algorithm. Consequently, we develop a tabu search and a hybrid genetic algorithm to quickly find good approximate solutions for larger instances. | Scheduling identical parallel machines with fixed delivery dates to minimize total tardiness |
S0377221714009850 | The influence of applying queue state dependent order acceptance policies, where the decision lies either with the customer or with the manufacturer, on optimal capacity investment is discussed. To this end, three order acceptance policies are developed where either the customer has a certain service level threshold for each order or the manufacturer has an overall service level threshold. The third policy, modeling queue state independent order acceptance, is used to identify the performance gains of including queue state knowledge in this decision. Equations for state probabilities, order acceptance rate, work-in-process, finished-goods inventory, backorders and service level are developed for a system with stochastic customer-required lead times applying queuing methodology. An optimization problem minimizing capacity, work-in-process, finished-goods inventory, backorder and lost sales cost (for rejected orders) in a single-stage MTO production system is presented. The system is modeled as an M/M/1 queue with input rates depending on queue length and random customer-required lead time. For the optimization problem, which cannot be solved explicitly, a solution heuristic is developed and a broad numerical study is conducted. The numerical study shows that allowing the customer to know the expected production lead time and, based on this knowledge, decide whether or not to place an order can have positive or negative influences on the overall costs, depending on the customer's service level target. Furthermore, the study shows that a high cost reduction potential exists for simultaneously optimizing capacity investment and order acceptance policy if the production system can decide whether or not to accept an order. | Influence of order acceptance policies on optimal capacity investment with stochastic customer required lead times
S0377221714009862 | A capital allocation scheme for a company that has a random total profit Y and uses a coherent risk measure ρ has been suggested. The scheme returns a unique real number Λ ρ * ( X , Y ) , which determines the capital that should be allocated to company’s subsidiary with random profit X. The resulting capital allocation is linear and diversifying as defined by Kalkbrener (2005). The problem is reduced to selecting the “center” of a non-empty convex weakly compact subset of a Banach space, and the solution to the latter problem proposed by Lim (1981) has been used. Our scheme can also be applied to selecting the unique Pareto optimal allocation in a wide class of optimal risk sharing problems. | The center of a convex set and capital allocation |
S0377221714009874 | Difficult combinatorial optimization problems coming from practice are nowadays often approached by hybrid metaheuristics that combine principles of classical metaheuristic techniques with advanced methods from fields like mathematical programming, dynamic programming, and constraint programming. If designed appropriately, such hybrids frequently outperform simpler “pure” approaches as they are able to exploit the underlying methods’ individual advantages and benefit from synergy. This article starts with a general review of design patterns for hybrid approaches that have been successful on many occasions. More complex practical problems frequently have some special structure that might be exploited. In the field of mixed integer linear programming, three decomposition techniques are particularly well known for taking advantage of special structures: Lagrangian decomposition, Dantzig–Wolfe decomposition (column generation), and Benders’ decomposition. It has been recognized that these concepts may also provide a very fruitful basis for effective hybrid metaheuristics. We review the basic principles of these decomposition techniques and discuss for each promising possibilities for combinations with metaheuristics. The approaches are illustrated with successful examples from literature. | Decomposition based hybrid metaheuristics |
S0377221714009886 | This paper presents a study of the problem of resource allocation between increasing the protection of components and constructing redundant components in parallel systems subject to intentional threats. The defender aims at minimizing the entire system destruction probability during a certain time horizon by using the best resource allocation strategy, which is determined by the redundant components construction pace. Different from previous works, which focus on static resource allocation strategies, we propose a dynamic resource distribution strategy with a geometric construction pace model and show its advantage over a constant construction pace. A vulnerability model considering a most probable attack time and uncertainties of attack time estimates is provided, and a destruction probability is evaluated to quantitatively define the ability of the system to survive an intentional attack. The random time of an intentional attack is represented by a truncated normal distribution. Through the modeling of the most probable attack time and quantifying the uncertainty of the defender's knowledge about this time, the influence of these factors on the optimal resource allocation strategy is investigated. Proper decisions regarding resource allocation are crucial in protecting safety-critical systems, e.g., nuclear power plants, communication base stations, and power networks. Case studies are presented to illustrate the influence and strategy. | Optimal resource distribution between protection and redundancy considering the time and uncertainties of attacks
S0377221714009898 | In today's competitive markets, most firms in the United Kingdom and the United States offer their products on trade credit to stimulate sales and reduce inventory. Trade credit is calculated based on the time value of money on the purchase cost (i.e., discounted cash flow analysis). Recently, many researchers have used discounted cash flow analysis only on the purchase cost but not on the revenue (which is significantly larger than the purchase cost) and the other costs. For a sound and rigorous analysis, we should use discounted cash flow analysis on revenue and costs. In addition, the expiration date of a deteriorating item (e.g., bread, milk, and meat) is an important factor in a consumer's purchase decision. However, little attention has been paid to the effect of expiration dates. Hence, in this paper, we establish a supplier–retailer–customer supply chain model in which: (a) the retailer receives an up-stream trade credit from the supplier while granting a down-stream trade credit to customers, (b) the deterioration rate is non-decreasing over time and near 100 percent particularly close to the expiration date, and (c) discounted cash flow analysis is adopted for calculating all relevant factors: revenue and costs. The proposed model is an extension of more than 20 previous papers. We then demonstrate that the retailer's optimal credit period and cycle time not only exist but also are unique. Thus, the search for the optimal solution reduces to a local one. Finally, we run several numerical examples to illustrate the problem and gain managerial insights. | Inventory and credit decisions for time-varying deteriorating items with up-stream and down-stream trade credit financing by discounted cash flow analysis
S0377221714009904 | The time required to board an airplane directly influences an airplane’s turn-around time, i.e., the time that the airplane requires at the gate between two flights. Thus, the turn-around time can be reduced by using efficient boarding methods and such actions may also result in cost savings. The main contribution of this paper is fourfold. First, we provide a general problem description including partly established and partly new definitions of relevant terms. Next, we survey boarding methods known from theory and practice and provide an according classification scheme. Third, we present a broad overview on the current literature in this field and we describe 12 most relevant papers in detail and juxtapose their results. Fourth, we summarize the state-of-the-art of research in this field showing e.g., that the commonly used strategy back-to-front generally requires more time than other easy to implement strategies such as random boarding. Further concepts and approaches that can help speed up the boarding process are also presented and these can be studied in future research. | Airplane boarding |
S0377221714009916 | Multi-objectivization has been used to solve several single objective problems with improved results over traditional genetically inspired optimization methods. Multi-objectivization reformulates the single objective problem into a multiple objective problem. The reformulated problem is then solved with a multiple objective method to obtain a resulting solution to the original problem. Multi-objectivization Via Decomposition (MVD) and the addition of novel objectives are the two major approaches used in multi-objectivization. This paper focuses on the analysis of two major MVD methods: helper-objectives and complete decomposition. Helper-objectives decomposition methods identify one or more decomposed objectives that are used simultaneously with the main objective to focus attention on components of the decomposed objectives. Complete decomposition, unlike helper-objectives, does not explicitly use the main objective and instead uses decomposed objectives that exhaustively cover all portions of the main objective. This work examines the relationship between helper-objective decompositions and complete decomposition using both an analytic and experimental methodology. Pareto dominance relationships are examined analytically to clarify the relationship between dominant solutions in both types of decompositions. These results more clearly characterize how solutions from the two approaches rank in Pareto-frontier based fitness algorithms such as NSGA-II. An empirical study on job shop scheduling problems shows how fitness signal and fitness noise are affected by the balance of decomposition size. Additionally, we provide evidence that, for the settings and instances studied, complete decompositions have a better on-average performance when compared to analogous helper-objective decompositions. Lastly, we examine the underlying forces that determine effective decomposition size. We argue that it is advantageous to use less balanced decompositions as within-decomposition conflict increases and as heuristic strength increases. | Multi-objectivization Via Decomposition: An analysis of helper-objectives and complete decomposition
S0377221714009928 | This paper deals with the Pollution-Routing Problem (PRP), a Vehicle Routing Problem (VRP) with environmental considerations, recently introduced in the literature by Bektaş and Laporte (2011) [Transportation Research Part B: Methodological 45 (8), 1232–1250]. The objective is to minimize operational and environmental costs while respecting capacity constraints and service time windows. Costs are based on driver wages and fuel consumption, which depends on many factors, such as travel distance and vehicle load. The vehicle speeds are considered as decision variables. They complement routing decisions, impacting the total cost, the travel time between locations, and thus the set of feasible routes. We propose a method which combines a local search-based metaheuristic with an integer programming approach over a set covering formulation and a recursive speed-optimization algorithm. This hybridization enables a tighter integration of route and speed decisions. Moreover, two other “green” VRP variants, the Fuel Consumption VRP (FCVRP) and the Energy Minimizing VRP (EMVRP), are addressed, as well as the VRP with time windows (VRPTW) with distance minimization. The proposed method compares very favorably with previous algorithms from the literature, and new improved solutions are reported for all considered problems. | A matheuristic approach for the Pollution-Routing Problem
S0377221714010121 | The optimal-exercise policy of an American option dictates when the option should be exercised. In this paper, we consider the implications of missing the optimal exercise time of an American option. For the put option, this means holding the option until it is deeper in-the-money when the optimal decision would have been to exercise instead. We derive an upper bound on the maximum possible loss incurred by such an option holder. This upper bound requires no knowledge of the optimal-exercise policy or true price function. This upper bound is a function of only the option-holder’s exercise strategy and the intrinsic value of the option. We show that this result holds true for both put and call options under a variety of market models ranging from the simple Black–Scholes model to complex stochastic-volatility jump-diffusion models. Numerical illustrations of this result are provided. We then use this result to study numerically how the cost of delaying exercise varies across market models and call and put options. We also use this result as a tool to numerically investigate the relation between an option-holder’s risk-preference levels and the maximum possible loss he may incur when adopting a target-payoff policy that is a function of his risk-preference level. | The implication of missing the optimal-exercise time of an American option |
S0377221714010133 | This paper studies preemptive bi-criteria scheduling on m parallel machines with machine unavailable intervals. The goal is to minimize the total completion time subject to the constraint that the makespan is at most a constant T. We study the unavailability model such that the number of available machines cannot go down by 2 within any period of p max where p max is the maximum processing time among all jobs. We show that there is an optimal polynomial time algorithm. | Total completion time minimization on multiple machines subject to machine availability and makespan constraints |
S0377221714010145 | In this paper, we address the issue of optimal selection of a portfolio of projects using a reinvestment strategy within a flexible time horizon. We assume that an investor intends to invest his/her initial capital in the implementation of some projects over a flexible time horizon. The investor's motivation for considering a flexible time horizon is to maximize his/her gain by determining the optimal time horizon for investing in the selected portfolio of projects. Projects have different durations and their potential rates of return are also different. The profit yielded by the completed projects can be reinvested in the implementation of other projects. The implementation costs of projects can be allocated at equally spaced intervals with equal amounts during their life cycles or can be assigned to each project according to its estimated s-curve. We assume that the profit yielded by each project is accrued after the investment for the project ends. Therefore, in order to maximize gains, the investor needs to decide on three issues: the combination of projects, the schedule of the selected projects and the time horizon. An integer program is presented and discussed to address the given issues. | Optimal selection of project portfolios using reinvestment strategy within a flexible time horizon
S0377221714010157 | Credit card balance is an important factor in retail finance. In this article we consider multivariate models of credit card balance and use a real dataset of credit card data to test the forecasting performance of the models. Several models are considered in a cross-sectional regression context: ordinary least squares, two-stage and mixture regression. After that, we take advantage of the time series structure of the data and model credit card balance using a random effects panel model. The most important predictor variable is previous lagged balance, but other application and behavioural variables are also found to be important. Finally, we present an investigation of forecast accuracy on credit card balance 12 months ahead using each of the proposed models. The panel model is found to be the best model for forecasting credit card balance in terms of mean absolute error (MAE) and the two-stage regression model performs best in terms of root mean squared error (RMSE). | Models and forecasts of credit card balance |
S0377221714010169 | The primary objective in the one-dimensional cutting stock problem is to minimize material cost. In real applications it is often necessary to consider auxiliary objectives, one of which is to reduce the number of different cutting patterns (setups). This paper first presents an integer linear programming model to minimize the sum of material and setup costs over a given pattern set, and then describes a sequential grouping procedure to generate the patterns in the set. Two sets of benchmark instances are used in the computational test. The results indicate that the approach is efficient in improving the solution quality. | Pattern-set generation algorithm for the one-dimensional cutting stock problem with setup cost |
S0377221714010182 | The Global Financial Crisis (GFC) demonstrated the devastating impact of extreme credit risk on global economic stability. We develop four credit models to better measure credit risk in extreme economic circumstances, by applying innovative Conditional Value at Risk (CVaR) techniques to structural models (called Xtreme-S), transition models (Xtreme-T), quantile regression models (Xtreme-Q), and the author's unique iTransition model (Xtreme-i) which incorporates industry factors into transition matrices. We find the Xtreme-S and Xtreme-Q models to be the most responsive to changing market conditions. The paper also demonstrates how the models can be used to determine capital buffers required to deal with extreme credit risk. | Take it to the limit: Innovative CVaR applications to extreme credit risk measurement |
S0377221714010194 | We analyze split cuts from the perspective of cut generating functions via geometric lifting. We show that α-cuts, a natural higher-dimensional generalization of the k-cuts of Cornuéjols et al., give all the split cuts for the mixed-integer corner relaxation. As an immediate consequence we obtain that the k-cuts are equivalent to split cuts for the 1-row mixed-integer relaxation. Further, we show that split cuts for finite-dimensional corner relaxations are restrictions of split cuts for the infinite-dimensional relaxation. In a final application of this equivalence, we exhibit a family of pure-integer programs whose split closure has arbitrarily bad integrality gap. This complements the mixed-integer example provided by Basu et al. (2011). | Characterization of the split closure via geometric lifting |
S0377221714010200 | In this paper, we investigate a new variant of the vehicle routing problem (VRP), termed the multi-period vehicle routing problem with time windows and limited visiting quota (MVRPTW-LVQ), which requires that any customer can be served by at most a certain number of different vehicles over the planning horizon. We first formulate this problem as a mixed integer programming model and then devise a three-stage heuristic approach to solve the problem. Extensive computational experiments demonstrate the effectiveness of our approach. Moreover, we empirically analyze the impacts of varying the levels of service consistency and demand fluctuation on the operational cost. The analysis results show that when demand fluctuation is relatively small compared to vehicle capacity, enforcing consistent service can increase customer satisfaction with only a slight increase in the operational cost. However, when a vehicle can only serve a small number of customers due to its capacity limit, relaxing the service consistency requirement by increasing the value of the visiting quota could be considered. | On service consistency in multi-period vehicle routing |
S0377221714010212 | Traveling Salesman Problem (TSP) tour length estimations can be used when it is not necessary to know an exact tour, e.g., when using certain heuristics to solve location-routing problems. The best estimation models in the TSP literature focus on random instances where the node dispersion is known; those that do not require knowledge of node dispersion are either less accurate or slower. In this paper, we develop a new regression-based tour length estimation model that is distribution-free, accurate, and fast, with a small standard deviation of the estimation errors. When the distribution of the node coordinates is known, it provides a close estimate of the well-known asymptotic tour length estimation formula of Beardwood et al. (1959); more importantly, when the distribution is unknown or non-integrable so Beardwood et al.’s estimation cannot be used, our model still provides good, fast tour length estimates. | A distribution-free TSP tour length estimation model for random graphs |
S0377221714010224 | There remains little consensus about how to define and formulate audit quality. It does not have a consistent definition and operationalization across studies, and this has troubled theorists for many years. This study contributes to this discussion by introducing a probability tree model of audit quality. This model is built up of characteristics to create an association with the four sets of audit quality indicators: inputs, process, context, and outcomes (Knechel, W. R., Krishnan, G. V., Pevzner, M., Shefchik, L. B., & Velury, U. (2012). Audit quality: Insights from the academic literature. Auditing: A Journal of Practice & Theory 32(1), 385–421). The purpose is to show how these indicators interact in the context of audit quality. The model describes the audit program of an audit engagement as a random tree model based on a stochastic process. Following Simon's (Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review 63(2), 129–138; Simon, H. A. (1957). Models of man. Social and rational. New York: John Wiley & Sons) description of adaptive behavior, the model describes an audit program as an organic procedure where an auditor does not maximize but is seeking material misstatements (inadvertent errors) in a random environment. If the auditor under a budget constraint does not (in spite of positive inherent risk) detect any misstatement, the audit program will erroneously end with an unqualified report (false negative outcome). In this context, we measure subjective audit quality as the probability of the complement of this event (probability of detecting one or more misstatements). We also introduce the concept of a perfect auditor with optimal characteristics. Finally, we measure objective audit quality as the relation of the complement event probabilities between the auditor and the perfect auditor. The analytical results are demonstrated by numerical examples. | A probability tree model of audit quality
S0377221714010236 | In this paper, we present an approach for parallelizing computation and implementation time for problems where the objective is to complete the solution as soon after receiving the problem instance as possible. We demonstrate the approach on the TSP. We define the TSP race problem, present a computation-implementation parallelized (CIP) approach for solving it, and demonstrate CIP’s effectiveness on TSP Race instances. We also demonstrate a method for determining a priori when CIP will be effective. Although in this paper we focus on TSP, our general CIP approach can be effective on other problems and applications with similar time sensitivity. | TSP Race: Minimizing completion time in time-sensitive applications |
S0377221714010248 | This note extends the results on the first four derivatives of the utility function by Menegatti (Eur. J. Oper. Res. 232 (2014) 613–617) to the case of high-order derivatives. We show that, under usual assumptions, if the generic derivative of the utility function of order n is sign invariant then all the derivatives from order n to order 2 alternate in sign. We then focus on the case where the derivative of the utility function of order n is either positive when n is odd or negative when n is even, and we show the implications of this result for high-order risk changes and for saving decisions. | New results on high-order risk changes |
S0377221714010261 | We study stochastic linear programming games: a class of stochastic cooperative games whose payoffs under any realization of uncertainty are determined by a specially structured linear program. These games can model a variety of settings, including inventory centralization and cooperative network fortification. We focus on the core of these games under an allocation scheme that determines how payoffs are distributed before the uncertainty is realized, and allows for arbitrarily different distributions for each realization of the uncertainty. Assuming that each player’s preferences over random payoffs are represented by a concave monetary utility functional, we prove that these games have a nonempty core. Furthermore, by establishing a connection between stochastic linear programming games, linear programming games and linear semi-infinite programming games, we show that an allocation in the core can be computed efficiently under some circumstances. | Stochastic linear programming games with concave preferences |
S0377221714010443 | This paper develops a stylized (or conceptual) system optimization model to analyze the adoption of an emerging infrastructure associated with uncertain technological learning and spatial reconfigurations. The model first assumes that the emerging infrastructure will be implemented for the entire system when it is adopted. With the model, this paper explores (1) how the emerging infrastructure's initial investment cost, technological learning and its uncertainty, market size, and efficiency influence the adoption of the emerging infrastructure and (2) how the efficiency and investment cost of the associated technology (which will be located in a different place with the adoption of the emerging infrastructure) influence the adoption of the emerging infrastructure. Then, this paper extends the model and explores whether it is a better solution to implement the emerging infrastructure for part of the distance from resource site to demand site if its efficiency is a function of the implemented distance. With optimizations under three types of efficiency dynamics, this paper finds that whether the emerging infrastructure should be implemented partly or entirely is not determined by the value of its efficiency but by the dynamics of its efficiency. | Adoption of an emerging infrastructure with uncertain technological learning and spatial reconfiguration |
S0377221714010455 | We propose an allocation rule that takes into account the importance of both players and their links and characterize it for a fixed network. Our characterization is along the lines of the characterization of the Position value for Network games by van den Nouweland and Slikker (2012). The allocation rule so defined admits multilateral interactions among the players through their links which distinguishes it from the other existing rules. Next, we extend our allocation rule to flexible networks à la Jackson (2005). | A solution concept for network games: The role of multilateral interactions |
S0377221714010467 | Given a graph G and a positive integer K, the inverse chromatic number problem consists in modifying the graph as little as possible so that it admits a chromatic number not greater than K. In this paper, we focus on the inverse chromatic number problem for certain classes of graphs. First, we discuss diverse possible versions and then focus on two application frameworks which motivate this problem in interval and permutation graphs: the inverse booking problem and the inverse track assignment problem. The inverse booking problem is closely related to some previously known scheduling problems; we propose new hardness results and polynomial cases. The inverse track assignment problem motivates our study of the inverse chromatic number problem in permutation graphs; we show how to solve in polynomial time a generalization of the problem with a bounded number of colors. | Inverse chromatic number problems in interval and permutation graphs |
S0377221714010479 | This paper addresses a pickup and delivery problem with multiple vehicles in which LIFO conditions are imposed when performing loading and unloading operations and the route durations cannot exceed a given limit. We propose two mixed integer formulations of this problem and a heuristic procedure that uses tabu search in a multi-start framework. The first formulation is a compact one, that is, the number of variables and constraints is polynomial in the number of requests, while the second one contains an exponential number of constraints and is used as the basis of a branch-and-cut algorithm. The performance of the proposed solution methods is evaluated through an extensive computational study using instances of different types that were created by adapting existing benchmark instances. The proposed exact methods are able to optimally solve instances with up to 60 nodes. | The multiple vehicle pickup and delivery problem with LIFO constraints
S0377221714010480 | This paper surveys recent publications on berth allocation, quay crane assignment, and quay crane scheduling problems in seaport container terminals. It continues the survey of Bierwirth and Meisel (2010) that covered the research up to 2009. Since then, a strong increase of activity has been observed in this research field, resulting in more than 120 new publications. In this paper, we classify this new literature according to the features of models considered for berth allocation, quay crane scheduling and integrated approaches by using the classification schemes proposed in the preceding survey. Moreover, we identify trends in the field, we take a look at the methods that have been developed for solving new models, we discuss ways for evaluating models and algorithms, and, finally, we highlight potential directions for future research. | A follow-up survey of berth allocation and quay crane scheduling problems in container terminals
S0377221714010492 | In this paper, we propose a new random volatility model, where the volatility has a deterministic term structure modified by a scalar random variable. Closed-form approximation is derived for European option price using higher order Greeks with respect to volatility. We show that the calibration of our model is often more than two orders of magnitude faster than the calibration of commonly used stochastic volatility models, such as the Heston model or Bates model. On 15 different index option data sets, we show that our model achieves accuracy comparable with the aforementioned models, at a much lower computational cost for calibration. Further, our model yields prices for certain exotic options in the same range as these two models. Lastly, the model yields delta and gamma values for options in the same range as the other commonly used models, over most of the data sets considered. Our model has a significant potential for use in high frequency derivative trading. | A fast calibrating volatility model for option pricing |
S0377221714010509 | This study measures the performance of participating nations at the Olympics, considering the quest for medals as a two-stage Olympic process. The first stage is characterized as athlete preparation (AP) and the second stage as athlete competition (AC). We extend the relational model from the constant returns to scale framework to the variable returns to scale version. The efficiency of each participating nation in the entire two-stage Olympic process is calculated as a product of the efficiencies of both stages, and a heuristic search is applied to the extended relational model. The efficiency of each stage can be obtained and directions for improving the performance of participating nations in the two-stage Olympic process can be identified. An empirical study of the 2012 London Summer Olympic Games reveals that the efficiency of the AP stage is higher than that of the AC stage for the majority of participants. In addition, a plot of the relationship between these three efficiencies shows that the efficiency of the entire two-stage Olympic process is more significantly related to that of the AC stage than that of the AP stage. | Performance evaluation of participating nations at the 2012 London Summer Olympics by a two-stage data envelopment analysis |
S0377221714010510 | Inventory sharing among decentralized retailers has been widely used in practice to improve profitability and reduce risks at the same time. We study the coordination of a decentralized inventory sharing system with n (n > 2) retailers who non-cooperatively determine their order quantities but cooperatively share their inventory. There has been very limited research on coordinating such a system due to the many unique challenges involved, e.g., incomplete residual sharing, formation of subcoalitions for inventory sharing, etc. In this paper, we develop a coordination mechanism (nRCM) that simultaneously possesses a few important properties: it leads to the formation of only the grand coalition, induces complete residual sharing, and ensures each retailer obtains a higher profit as the system size increases. We also consider the impact of asymmetric demand distribution parameter information on the coordination mechanisms when the retailers privately hold such information. We show that although true coordination requires complete information sharing, under any n-retailer inventory sharing coordination mechanism, retailers may not have incentives to share information with all other retailers and may not share true information even if they do. In this regard, nRCM possesses another important property: it can be implemented under asymmetric information and retailers can obtain profits very close to their first-best profits even if they do not share demand information. Such nice properties of nRCM also hold when retailers have correlated demands. This paper is the first to study a coordination mechanism for an n-retailer (n > 2) inventory sharing system considering asymmetric information. | Inventory sharing and coordination among n independent retailers
S0377221714010522 | In this paper, we first propose a portfolio management model where the objective is to balance equity and liability. The asset price dynamics includes both permanent and temporary price impact, where the permanent impact is a linear function of the cumulative trading amount and the temporary impact is a kth (between 0 and 1) order power function of the instantaneous trading rate. We construct efficient frontiers to visualize the tradeoff between equity and liability and obtain analytical properties regarding the optimal trading strategies. In the second part, we further consider an optimal deleveraging problem with leverage constraints. It reduces to a non-convex polynomial optimization program with polynomial and box constraints. A Lagrangian method for solving the problem is presented and the quality of the solution is studied. | Optimal deleveraging with nonlinear temporary price impact |
S0377221714010534 | We propose and empirically test statistical approaches to debiasing judgmental corporate cash flow forecasts. Accuracy of cash flow forecasts plays a pivotal role in corporate planning as liquidity and foreign exchange risk management are based on such forecasts. Surprisingly, to our knowledge there is no previous empirical work on the identification, statistical correction, and interpretation of prediction biases in large enterprise financial forecast data in general, and cash flow forecasting in particular. Employing a unique set of empirical forecasts delivered by 34 legal entities of a multinational corporation over a multi-year period, we compare different forecast correction techniques such as Theil’s method and approaches employing robust regression, both with various discount factors. Our findings indicate that rectifiable mean as well as regression biases exist for all business divisions of the company and that statistical correction increases forecast accuracy significantly. We show that the parameters estimated by the models for different business divisions can also be related to the characteristics of the business environment and provide valuable insights for corporate financial controllers to better understand, quantify, and feedback the biases to the forecasters aiming to systematically improve predictive accuracy over time. | Analytical debiasing of corporate cash flow forecasts |
S0377221714010546 | We consider a single-server queueing system with Poisson arrivals and generally distributed service times. To systematically control the workload of the queue, we define for each busy period an associated timer process, {R(t), t ≥ 0}, where R(t) represents the time remaining before the system is closed to potential arrivals. The process {R(t), t ≥ 0} is similar to the well-known workload process, in that it decreases at unit rate and consists of up-jumps at the arrival instants of admitted customers. However, if X represents the service requirement of an admitted customer, then the magnitude of the up-jump for the timer process occurring at the arrival instant of this customer is (1 − q)X for a fixed q ∈ [0, 1]. Consequently, there will be an instant in time within the busy period when the timer process hits level zero, at which point the system immediately closes and will remain closed until the end of the current busy period. We refer to this particular blocking policy as the q-policy. In this paper, we employ a level crossing analysis to derive the Laplace–Stieltjes transform (LST) of the steady-state waiting time distribution of serviceable customers. We conclude the paper with a numerical example which shows that controlling arrivals in this fashion can be beneficial. | Controlling the workload of M/G/1 queues via the q-policy |
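The q-policy row above gives a concrete admission rule: the timer falls at unit rate, jumps by (1 − q)X when a customer with service requirement X is admitted, and closes the system for the rest of the busy period once it hits zero. The following is a hypothetical Monte Carlo sketch of that rule only, not the paper's level-crossing analysis; it assumes exponential service times and that the timer is initialized at the first customer's full service requirement, neither of which is fixed by the abstract.

```python
import random

def busy_period_admissions(lam, mean_service, q, rng):
    # Simulate one busy period under the q-policy: count how many
    # customers are admitted before the timer reaches zero.
    timer = rng.expovariate(1.0 / mean_service)  # first customer's X
    admitted = 1
    while True:
        gap = rng.expovariate(lam)   # time to the next potential arrival
        if gap >= timer:             # timer hits zero first: system closes
            return admitted
        timer -= gap
        # Admitted arrival: up-jump of (1 - q) times its service requirement.
        timer += (1.0 - q) * rng.expovariate(1.0 / mean_service)
        admitted += 1

rng = random.Random(7)
for q in (0.2, 0.8):
    runs = [busy_period_admissions(0.8, 1.0, q, rng) for _ in range(2000)]
    print(q, sum(runs) / len(runs))  # larger q -> fewer admitted per busy period
```

The simulation reflects the control lever in the abstract: q = 0 recovers the ordinary workload process (no blocking pressure), while larger q shrinks each up-jump and closes the system sooner.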
S0377221714010558 | In this paper we address the Preemptive Resource Constrained Project Scheduling Problem (PRCPSP). PRCPSP requires a partially ordered set of activities to be scheduled using limited renewable resources such that any activity can be interrupted and later resumed without penalty. The objective is to minimize the project duration. This paper proposes an effective branch-and-price algorithm for solving PRCPSP based upon minimal Interval Order Enumeration involving column generation as well as constraint propagation. Experiments conducted on various types of instances have given very satisfactory results. Our algorithm is able to solve to optimality the entire set of J30, BL and Pack instances while satisfying the preemptive requirement. Furthermore, this algorithm provides improved best-known lower bounds for some of the J60, J90 and J120 instances in the non-preemptive case (RCPSP). | An effective branch-and-price algorithm for the Preemptive Resource Constrained Project Scheduling Problem based on minimal Interval Order Enumeration |
S0377221714010571 | In this article we survey mathematical programming approaches to problems in the field of drinking water distribution network optimization. Among the predominant topics treated in the literature, we focus on two different, but related problem classes. One can be described by the notion of network design, while the other is more aptly termed by network operation. The basic underlying model in both cases is a nonlinear network flow model, and we give an overview on the more specific modeling aspects in each case. The overall mathematical model is a Mixed Integer Nonlinear Program having a common structure with respect to how water dynamics in pipes are described. Finally, we survey the algorithmic approaches to solve the proposed problems and we discuss computation on various types of water networks. | Mathematical programming techniques in water network optimization |
S0377221714010583 | The problem of dynamic portfolio choice with transaction costs is often addressed by constructing a Markov Chain approximation of the continuous time price processes. Using this approximation, we present an efficient numerical method to determine optimal portfolio strategies under time- and state-dependent drift and proportional transaction costs. This scenario arises when investors have behavioral biases or the actual drift is unknown and needs to be estimated. Our numerical method solves dynamic optimal portfolio problems with an exponential utility function for time-horizons of up to 40 years. It is applied to measure the value of information and the loss from transaction costs using the indifference principle. | Dynamic portfolio optimization with transaction costs and state-dependent drift |
S0377221714010595 | In recent years, large amounts of financial data have become available for analysis. We propose exploring returns from 21 European stock markets by model-based clustering of regime switching models. These econometric models identify clusters of time series with similar dynamic patterns and moreover allow relaxing assumptions of existing approaches, such as the assumption of conditional Gaussian returns. The proposed model handles simultaneously the heterogeneity across stock markets and over time, i.e., time-constant and time-varying discrete latent variables capture unobserved heterogeneity between and within stock markets, respectively. The results show a clear distinction between two groups of stock markets, each one characterized by different regime switching dynamics that correspond to different expected return-risk patterns. We identify three regimes: the so-called bull and bear regimes, as well as a stable regime with returns close to 0, which turns out to be the most frequently occurring regime. This is consistent with stylized facts in financial econometrics. | Clustering financial time series: New insights from an extended hidden Markov model |