FileName | Abstract | Title |
---|---|---|
S0377221715010747 | Facility location problems reported in the literature generally assume the problem parameter values (like cost, budget, etc.) to be known with complete certainty, even if they change over time (as in multi-period versions). However, in reality, there may be some uncertainty about the exact values of these parameters. Specifically, in the context of locating primary health centers (PHCs) in developing countries, there is generally a high level of uncertainty in the availability of servers (doctors) joining the facilities in different time periods. For transparency and efficient assignment of the doctors to PHCs, it is desirable to decide the facility opening sequence (assigning doctors to unmanned PHCs) at the start of the planning horizon. We present a new formulation for a multi-period maximal covering location problem with server uncertainty. We further demonstrate the superiority of our proposed formulation over the only other formulation reported in the literature. For instances of practical size, we provide a Benders decomposition based solution method, along with several refinements. For instances that the CPLEX MIP solver could solve within a time limit of 20 hours, our proposed solution method turns out to be of the order of 150–250 times faster for the problems with complete coverage, and around 1000 times faster for gradual coverage. | A new formulation and Benders decomposition for the multi-period maximal covering facility location problem with server uncertainty |
S0377221715010759 | Motivated by the growing prevalence of airlines charging for checked baggage, this paper studies the pricing of primary products and ancillary services. We consider a single seller with a fixed capacity or inventory of primary products that simultaneously makes an ancillary service available, e.g. a single-leg flight and checked baggage service. The seller seeks to maximize total expected revenue by dynamically setting prices on both the primary product and the ancillary service. In each period, a random number of customers arrive, each of whom may belong to one of three groups: those that only want the primary product, those that would buy the ancillary service if the price is right, and those that only purchase a primary product together with the ancillary service. A multi-period dynamic pricing model is presented whose computational complexity is only of the order of the number of periods. For certain distributions, close-to-analytical results can be obtained from which structural insights may be gleaned. | Dynamic pricing of primary products and ancillary services |
S0377221715010760 | The Wiener-Hopf factorization of a complex function arises in a variety of fields in applied mathematics such as probability, finance, insurance, queuing theory, radio engineering and fluid mechanics. The factorization fully characterizes the distribution of functionals of a random walk or a Lévy process, such as the maximum, the minimum and hitting times. Here we propose a constructive procedure for the computation of the Wiener-Hopf factors, valid for both single and double barriers, based on the combined use of the Hilbert and the z-transform. The numerical implementation can be simply performed via the fast Fourier transform and the Euler summation. Given that the information in the Wiener-Hopf factors is strictly related to the distributions of the first passage times, as a concrete application in mathematical finance we consider the pricing of discretely monitored exotic options, such as lookback and barrier options, when the underlying price evolves according to an exponential Lévy process. We show that the computational cost of our procedure is independent of the number of monitoring dates and the error decays exponentially with the number of grid points. | Spitzer identity, Wiener-Hopf factorization and pricing of discretely monitored exotic options |
S0377221715010772 | In university buildings with many rooms spread over different floors, large student flows between two consecutive lectures might cause congestion problems. These congestions result in long queues at elevators or at stairwells, which might lead to delays in lecture starts. The course timetable clearly has an important impact on these congestions. This paper presents a two-stage integer programming approach for building a university course timetable that aims at minimizing the resulting student flows. The first stage minimizes the violation of the teacher and educational preferences by assigning lectures to timeslots and rooms. The second stage reassigns classrooms to lectures of the timetable of the first stage and minimizes the student flow. The conceptual model is applied to the dataset of the Faculty of Economics and Business of the KU Leuven Campus Brussels and is tested and validated with 21 adapted instances from the literature. In contrast to a monolithic model, the two-stage model consistently succeeds in finding good quality feasible solutions. Moreover, the generated timetables entail significantly reduced student flows compared to the flows of the manually developed course timetable. | Developing compact course timetables with optimized student flows |
S0377221715010784 | Analytical and iterative optimization techniques are employed to solve a job shop-like capacity planning problem for a maintenance service provider with contractually defined lead time requirements. The problem is motivated by a real-life case example, namely the overhaul of airline aircraft engines through an external service provider. The production network is modeled as a network of GI/G/1 queues, where the service rates are the decision variables and capacity costs and penalty costs for not meeting contractually defined lead times are minimized. In addition, we analytically investigate the effects of collaborative maintenance management as a source of advanced information regarding future maintenance demand. More specifically, we consider the effects of improved service rates and of service and demand variabilities on production capacities and total costs. Numerical examples are provided to verify the proposed optimization procedure and illustrate the effects of collaborative maintenance management. | Capacity planning for a maintenance service provider with advanced information |
S0377221715010796 | Committees with yes-no-decisions are commonly modeled as simple games and the ability of a member to influence the group decision is measured by so-called power indices. For a weighted game we say that a power index satisfies local monotonicity if a player who controls a large share of the total voting weight does not have less power than a player with a smaller voting weight. In (Holler, 1982), Manfred Holler introduced the Public Good index. In its unnormalized version, i.e., the raw measure, it counts the number of times that a player belongs to a minimal winning coalition. Unlike the Banzhaf index, it does not count the remaining winning coalitions in which the player is crucial. Holler noticed that his index does not satisfy local monotonicity, a fact that can be seen either as a major drawback (Felsenthal & Machover, 1998, 221 ff.) or as an advantage (Holler & Napel, 2004). In this paper we consider a convex combination of the two indices and require the validity of local monotonicity. We prove that the cost of obtaining it is high, i.e., the achievable new indices satisfying local monotonicity are closer to the Banzhaf index than to the Public Good index. All these achievable new indices are more solidary than the Banzhaf index, which makes them very suitable candidates to divide a public good. As a generalization we consider convex combinations of either: the Shift index, the Public Good index, and the Banzhaf index, or alternatively: the Shift Deegan–Packel, Deegan–Packel, and Johnston indices. | The cost of getting local monotonicity |
S0377221715010802 | The development of appropriate project management techniques for Research and Development (R&D) projects has received significant academic and practical attention over the past few decades. Project managers typically face the problem of allocating resources and scheduling activities, for which the underlying combinatorial problem is NP-hard. The inherent uncertainty in many R&D environments increases the complexity of the problem. This paper addresses the problem of resource allocation and activity scheduling with a focus on R&D projects. The work is different from the existing literature in at least three aspects: (1) the problem formulation is based on a real-world Chinese aerospace project, (2) each individual resource unit can have a different resource efficiency, and (3) the uncertainty of the duration of an activity is time-dependent (efficiency-dependent) in nature. The problem is formulated as a multi-objective optimization model with simultaneous consideration of makespan and balance of resource efficiency. A cooperative coevolutionary multi-objective algorithm (CCMOA) is designed to produce high-quality solutions. Two chromosome representations and three resource selection policies are tested for the algorithm. The proposed CCMOA is found to be competitive when compared to MOEA/D and NSGA–II, which are two popular algorithms for multi-objective optimization. | Evolutionary multi-objective resource allocation and scheduling in the Chinese navigation satellite system project |
S0377221715010814 | Various shock models have been extensively studied in the literature, mostly under the assumption of the Poisson process of shocks. In the current paper, we study shock models under the generalized Polya process (GPP) of shocks, which has been recently introduced and characterized in the literature (see Konno, 2010; Cha, 2014). Distinct from the widely used nonhomogeneous Poisson process, the important feature of this process is the dependence of its stochastic intensity on the number of previous shocks. We consider the extreme shock model, where each shock is catastrophic for a system with probability p(t) and is harmless with the complementary probability q(t) = 1 − p(t). The corresponding survival and failure rate functions are derived and analyzed. These results can be used in various applications including engineering, survival analysis, finance, biology and so forth. The cumulative shock model, where each shock results in an increment of wear and a system's failure occurs when the accumulated wear reaches some boundary, is also considered. A new general concept describing the dependent increments property of a stochastic process is suggested and discussed with respect to the GPP. | New shock models based on the generalized Polya process |
S0377221715010826 | Simulation models are frequently analyzed through a linear regression model that relates the input/output data behavior. However, in several situations, different data subsets may resemble different models. The purpose of this paper is to present a procedure for constructing switching regression metamodels in stochastic simulation, and to exemplify the practical use of statistical techniques of switching regression in the analysis of simulation results. The metamodel estimation is made using mixture weighted least squares and the maximum likelihood method. The consistency and the asymptotic normality of the maximum likelihood estimator are established. The proposed methods are applied in the construction of a switching regression metamodel. This paper places special emphasis on the usefulness of constructing switching metamodels in simulation analysis. | Switching regression metamodels in stochastic simulation |
S0377221715010838 | This paper focuses on the problem of minimizing CO2 emissions in the routing of vehicles in urban areas. While many authors have realized the importance of speed in minimizing emissions, most of the existing literature assumes that vehicles can travel at the emissions-minimizing speed on each arc in the road network. In urban areas, vehicles must travel at the speed of traffic, which is variable and time-dependent. The best routes also depend on the vehicle load. To solve the problem, we take advantage of previous work that transforms the stochastic shortest path subproblems into deterministic problems. While in general, these paths must be computed for each combination of start time and load, we introduce a result that identifies when the emissions-minimizing path between customers is the same for all loads. When this occurs, we can precompute the paths and store them in a lookup table which saves on runtime. To solve the routing problem, we adapt an existing tabu search algorithm. We test our approach on instances from a real road network dataset and 230 million speed observations. Experiments with different numbers of vehicles, vehicle weights, and pickup quantities demonstrate the value of our approach. We show that large savings in emissions can occur particularly in the suburbs, with heavier vehicles, and with heterogeneous pickup quantities as compared with routes created with more traditional objectives. We show that the savings in emissions are proportionally larger than the associated increases in duration, indicating improved emissions are achievable at a fairly low cost. | Vehicle routing to minimize time-dependent emissions in urban areas |
S037722171501084X | Autonomous ‘word of mouth’, as a channel of social influence that is out of firms’ direct control, has acquired particular importance with the development of the Internet. Depending on whether a given product or service is a good or a bad deal, this can significantly contribute to commercial success or failure. Yet the existing dynamic models of sales in marketing still assume that the influence of word of mouth on sales is at best advertising-dependent. This omission can produce ineffective management and therefore misleading marketing policies. This paper seeks to bridge the gap by introducing a contagion sales model of a monopolist firm's product where sales are affected by advertising-dependent as well as autonomous word of mouth. We assume that the firm's attraction rate of new customers is determined by the degree to which the current sales price is advantageous or not compared with the current customers’ reservation price. A primary goal of the paper is to determine the optimal sales price and advertising effort. We show that, despite costly price adjustments, the interactions between sales price, advertising-dependent and autonomous word of mouth can result in complex dynamic pricing policies involving history-dependence or limit cycling consisting of alternating attraction of new customers and attrition of current customers. | Autonomous and advertising-dependent ‘word of mouth’ under costly dynamic pricing |
S0377221715010851 | We present two new mixed integer programming formulations for the order acceptance and scheduling problem in two machine flow shops. Solving this optimization problem is challenging because two types of decisions must be made simultaneously: which orders to accept for processing and how to schedule them. To speed up the solution procedure, we present several techniques such as preprocessing and valid inequalities. An extensive computational study, using different instances, demonstrates the efficacy of the new formulations in comparison to some previous ones found in the relevant literature. | Order acceptance and scheduling problems in two-machine flow shops: New mixed integer programming formulations |
S0377221715010863 | Modern performance measures differ from the classical ones since they assess the performance against a benchmark and usually account for asymmetry in return distributions. The Omega ratio is one of these measures. Until recently, limited research has addressed the optimization of the Omega ratio since it has been thought to be computationally intractable. The Enhanced Index Tracking Problem (EITP) is the problem of selecting a portfolio of securities able to outperform a market index while bearing a limited additional risk. In this paper, we propose two novel mathematical formulations for the EITP based on the Omega ratio. The first formulation applies a standard definition of the Omega ratio where it is computed with respect to a given value, whereas the second formulation considers the Omega ratio with respect to a random target. We show how each formulation, nonlinear in nature, can be transformed into a Linear Programming model. We further extend the models to include real features, such as a cardinality constraint and buy-in thresholds on the investments, obtaining Mixed Integer Linear Programming problems. Computational results conducted on a large set of benchmark instances show that the portfolios selected by the model assuming a standard definition of the Omega ratio are consistently outperformed, in terms of out-of-sample performance, by those obtained solving the model that considers a random target. Furthermore, in most of the instances the portfolios optimized with the latter model mimic very closely the behavior of the benchmark over the out-of-sample period, while yielding, sometimes, significantly larger returns. | Linear programming models based on Omega ratio for the Enhanced Index Tracking Problem |
S0377221715010875 | We present an integrated methodological approach for selecting portfolios. The proposed methodology is focused on the incorporation of the investor’s preferences in the Mean-Risk framework. We propose a risk measure calculated with the downside part of the portfolio return distribution which, we argue, better captures the practical behavior of the loss-averse investor. We establish its properties, study the link with stochastic dominance criteria, point out the relations with Conditional Value at Risk and the Lower Partial Moment of the first order, and give the explicit formula for the case of scenario-based portfolio optimization. The proposed methodology involves two stages: firstly, the investment opportunity set (efficient frontier) is determined, and secondly, one single preferred efficient portfolio is selected, namely the one having the highest Expected Utility value. Three classes of utility functions with loss aversion corresponding to three types of investors are considered. The empirical study is targeted at assessing the differences between the efficient frontier of the proposed model and the classical Mean-Variance, Mean-CVaR and Mean-LPM1 frontiers. We firstly analyze the loss of welfare incurred by using another model instead of the proposed one and measure the corresponding gain/loss of utility. Secondly, we assess how much the portfolios really differ in terms of their compositions using a dissimilarity index based on the 1-norm. We describe and interpret the optimal solutions obtained and emphasize the role and influence of loss aversion parameter values and of constraints. Three types of constraints are studied: no short selling allowed, a certain degree of diversification imposed, and short selling allowed. | Portfolio optimization under loss aversion |
S0377221715010887 | This paper deals with assigning hierarchically skilled technicians to jobs by considering preferences. We investigate stability definitions in multi-skill workforce assignments stemming from the notion of blocking pairs as stated in the Marriage model of Gale–Shapley. We propose a Branch-and-Price approach to find a stable workforce assignment in which no technician and job pair can be better off by replacing an already assigned technician in the current team of the job. As the basis for our exact algorithm, we give a reformulation of the problem which constructs a stable assignment by selecting teams from a base set. The pricing problem then amounts to finding a team for a job. We provide details of the algorithm and show its efficiency by means of a computational study. We also show that checking stability becomes NP-hard if replacing groups of technicians is considered in defining stability. | A Branch-and-Price algorithm for stable workforce assignments with hierarchical skills |
S0377221715011091 | This paper examines whether the conclusions of standard supply-chain models carry over to repeated supply-chain relationships. The past models assume profit-maximizing agents in one-shot games. In these games, an essential unresolved issue concerns which parties in the supply chain have greater power to extract a larger share of supply chain profit: the manufacturer or the retailer. In particular, we consider a two-manufacturer/one-retailer supply chain over repeated periods of interaction. We find that the experimental results are closest to a symmetric outcomes hypothesis: the supply chain members tend to choose similar margin levels and profits tend to be more fairly divided than non-cooperative, game-theoretic supply-chain models predict. Individual supply chain members' behavior shows evidence of fairness concerns. These results indicate the significant role of fairness in competitive supply chain relationships, even in a scenario that is designed to favor one supply chain member over the others. | The role of fairness in competitive supply chain relationships: An experimental study |
S0377221715011108 | This paper discusses a multi-period service scheduling problem. In this problem, a set of customers is given who periodically require service over a finite time horizon. To satisfy the service demands, a set of operators is given, each with a fixed capacity in terms of the number of customers an operator can serve per period. The task is to determine for each customer the periods in which he or she will be visited by an operator such that the periodic service requests of the customers are adhered to and the total number of operators used over the time horizon is minimal. Two alternative policies for scheduling customer visits are considered. In the first one, a customer is visited just on time, i.e., in the period where he or she has a demand for service. The second policy allows service visits ahead of time. The rationale behind this policy is that allowing irregular visits may reduce the overall number of operators needed throughout the time horizon. To solve the problem, integer linear programming formulations are proposed for both policies and numerical experiments are presented that show the reduction in the number of operators used when visits ahead of time are allowed. As only small instances can be solved optimally, a heuristic algorithm is introduced in order to obtain good quality solutions and shorter computing times. | Scheduling policies for multi-period services |
S037722171501111X | We develop a dynamic continuous-time theory of the competitive firm under multiple correlated uncertainties (for example, output price uncertainty and output uncertainty). In doing so, we completely generalize and extend the previous (one-period) comparative statics results (the marginal impact of each parameter on optimal output). In particular, we relax the assumption of statistical independence between the risks, and the restrictions on the coefficient of absolute/relative risk aversion. Furthermore, we show in general terms the impact of one risk on the aversion to another. Moreover, we show the role of the correlation between risks in the decisions of the firm. | A note on the theory of the firm under multiple uncertainties |
S0377221715011121 | There is a need to identify and categorise different types of nonlinearities that commonly appear in supply chain dynamics models, as well as to establish suitable methods for linearising and analysing each type of nonlinearity. In this paper, simplification methods are suggested to reduce model complexity and to assist in gaining system dynamics insights. Hence, an outcome is the development of more accurate simplified linear representations of complex nonlinear supply chain models. We use the highly cited Forrester production-distribution model as a benchmark supply chain system to study nonlinear control structures and apply appropriate analytical control theory methods. We then compare the performance of the linearised model with numerical solutions of the original nonlinear model and with other previous research on the same model. Findings suggest that more accurate linear approximations can be found. These simplified and linearised models enhance the understanding of the system dynamics and transient responses, especially for inventory and shipment responses. A systematic method is provided for the rigorous analysis and design of nonlinear supply chain dynamics models, especially when overly simplistic linear relationship assumptions are not possible or appropriate. This is a precursor to robust control system optimisation. | A technique to develop simplified and linearised models of complex dynamic supply chain systems |
S0377221715011133 | Compactness and landscape connectivity are essential properties for effective functioning of conservation reserves. In this article we introduce a linear integer programming model to determine optimal configuration of a conservation reserve with such properties. Connectivity can be defined either as structural (physical) connectivity or functional connectivity; the model developed here addresses both properties. We apply the model to identify the optimal conservation management areas for protection of Gopher Tortoise (GT) in a military installation, Ft. Benning, Georgia, which serves as a safe refuge for this ‘at risk’ species. The recent expansion in the military mission of the installation increases the pressure on scarce GT habitat areas, which requires moving some of the existent populations in those areas to suitably chosen new conservation management areas within the boundaries of the installation. Using the model, we find the most suitable and spatially coherent management areas outside the heavily used training areas. | Optimal design of compact and functionally contiguous conservation management areas |
S0377221715011145 | Risk has always been a dominant part of financial decision making in any industry. Recently, models, tools and computational techniques have been developed so that we can effectively incorporate risk in optimal decision policies. The focus of this paper is on electricity markets, where much of the inherent risk falls on the retail sector. We introduce a three-stage model of an electricity market where firms can choose to enter the retail market, then enter into retail contracts, and finally purchase electricity in a wholesale market to satisfy their contracts. We explicitly assume that firms are risk-averse in this model. We demonstrate how the behaviour of firms changes with risk-aversion, and use the example of an asset-swap policy over a transmission network to demonstrate the importance of modeling risk-aversion in determining policy outcomes. | Electricity retail contracting under risk-aversion |
S0377221715011157 | Bus transit network planning is a complex process that is divided into several phases such as: line planning, timetable generation, vehicle scheduling, and crew scheduling. In this work, we address the timetable generation which consists in scheduling the departure times for all trips of each bus line. We focus on the Synchronization Bus Timetabling Problem (SBTP) that favors passenger transfers and avoids congestion of buses at common stops. A Mixed Integer Program (MIP) was proposed in the literature for the SBTP but it fails to solve real bus network instances. We develop in this paper four classes of valid inequalities for this MIP using combinatorial properties of the SBTP on the number of synchronizations. Experimental results show that large instances are solved within few minutes with a relative deviation from the optimal solution that is usually less than 3 percent. | Valid inequalities for the synchronization bus timetabling problem |
S0377221715011169 | Inland waterways form a natural network infrastructure with capacity for more traffic. Transportation by ship is widely promoted as it is a reliable, efficient and environmentally friendly mode of transport. Nevertheless, locks managing the water level on waterways and within harbors sometimes constitute bottlenecks for transportation over water. The lockmaster’s problem concerns the optimal strategy for operating such a lock. In the lockmaster’s problem we are given a lock, a set of upstream-bound ships and another set of ships traveling in the opposite direction. We are given the arrival times of the ships and a constant lockage time; the goal is to minimize the total waiting time of the ships. In this paper, a dynamic programming algorithm is proposed that solves the lockmaster’s problem in polynomial time. This algorithm can also be used to solve a single batching machine scheduling problem more efficiently than the current algorithms from the literature do. We extend the algorithm such that it can be applied in realistic settings, taking into account capacity, ship-dependent handling times, weights and water usage. In addition, we compare the performance of this new exact algorithm with the performance of some (straightforward) heuristics in a computational study. | The lockmaster’s problem |
S0377221715011170 | Real-life planning problems are often complicated by the occurrence of disturbances, which imply that the original plan cannot be followed anymore and some recovery action must be taken to cope with the disturbance. In such a situation it is worthwhile to arm oneself against possible disturbances by including recourse actions in the planning strategy. Well-known approaches to create plans that take possible, common disturbances into account are robust optimization and stochastic programming. More recently, another approach has been developed that combines the best of these two: recoverable robustness. In this paper, we solve recoverable robust optimization problems by the technique of branch-and-price. We consider two types of decomposition approaches: separate recovery and combined recovery. We show that, with respect to the value of the LP-relaxation, combined recovery dominates separate recovery. We investigate our approach for two example problems: the size robust knapsack problem, in which the knapsack size may get reduced, and the demand robust shortest path problem, in which the sink is uncertain and the cost of edges may increase. For each problem, we present elaborate computational experiments. We think that our approach is very promising and can be generalized to many other problems. | Decomposition approaches for recoverable robust optimization problems |
S0377221715011182 | We use developments in full-information optimal stopping to decide kidney-offer admissibility depending on the patient’s age in treatment, on his/her estimated lifetime probabilistic profile and his/her prospects on the waiting list. We allow for a broad family of lifetime distributions – the Gamma – thus enabling flexible modeling of patients’ survival under dialysis. We fully automate an appropriate recursive solution in a spreadsheet application. It yields the optimal critical times for acceptance of offers of different qualities, and the ensuing expected value-to-go as a function of time. The model may serve both the organizer of a donation program for planning purposes, and the particular surgeon in making the critical decision at the proper time. It may further serve the potential individual recipient, practicing present-day patient choice. Numerical results and their discussion are included. | Deciding kidney-offer admissibility dependent on patients’ lifetime failure rate |
S0377221715011194 | This paper proposes an enhanced approach to modeling and forecasting volatility using high frequency data. Using a forecasting model based on Realized GARCH with multiple time-frequency decomposed realized volatility measures, we study the influence of different timescales on volatility forecasts. The decomposition of volatility into several timescales approximates the behaviour of traders at corresponding investment horizons. The proposed methodology is moreover able to account for the impact of jumps thanks to a recently proposed jump wavelet two-scale realized volatility estimator. We propose realized Jump-GARCH models estimated in two versions: using maximum likelihood and using the observation-driven estimation framework of the generalized autoregressive score. We compare forecasts using several popular realized volatility measures on foreign exchange rate futures data covering the recent financial crisis. Our results indicate that disentangling jump variation from the integrated variation is important for forecasting performance. An interesting insight into the volatility process is also provided by its multiscale decomposition. We find that most of the information about future volatility comes from the high frequency part of the spectra representing very short investment horizons. Our newly proposed models statistically outperform the popular as well as conventional models in both one-day and multi-period-ahead forecasting. | Modeling and forecasting exchange rate volatility in time-frequency domain |
S0377221715011200 | We study infinite-horizon optimal switching problems for underlying processes that exhibit “fast” mean-reverting stochastic volatility. We obtain closed-form analytic approximations of the solution for the resulting quasi-variational inequalities, which provide quantitative and qualitative results on the effects of multi-scale variability of the underlying process on the optimal switching rule. The proposed methodology is applicable to a number of operations research problems involving switching flexibility. | Optimal switching decisions under stochastic volatility with fast mean reversion |
S0377221715011212 | In this paper, we develop a fast and accurate numerical method for pricing of the three-asset equity-linked securities options. The option pricing model is based on the Black–Scholes partial differential equation. The model is discretized by using a non-uniform finite difference method and the resulting discrete equations are solved by using an operator splitting method. For fast and accurate calculation, we put more grid points near the singularity of the nonsmooth payoff function. To demonstrate the accuracy and efficiency of the proposed numerical method, we compare the results of the method with those from Monte Carlo simulation in terms of computational cost and accuracy. The numerical results show that the cost of the proposed method is comparable to that of the Monte Carlo simulation and it provides more stable hedging parameters such as the Greeks. | A practical finite difference method for the three-dimensional Black–Scholes equation |
S0377221715011224 | We consider productivity measurement based on radial DEA models with a single constant input. We show that in this case the Malmquist and the Hicks–Moorsteen productivity indices coincide and are multiplicatively complete, the choice of orientation of the Malmquist index for the measurement of productivity change does not matter, and there is a unique decomposition of productivity change containing two independent sources, namely technical efficiency change and technical change. Technical change decomposes in an infinite number of ways into a radial magnitude effect and an output bias effect. We also show that the aggregate productivity index is given by the geometric mean between any two periods of the simple arithmetic averages of the individual contemporaneous and mixed period distance functions. | Productivity measurement in radial DEA models with a single constant input |
S0377221715011236 | This paper proposes a novel interest rate model that presents simple analytical pricing formulas for interest rate-based derivatives, including swaps, futures, swaptions, caps and floors. Exploring the regime-switching feature of Markov chains, the proposed model focuses on discrete changes in the central bank policy rates – the main driver of short-term rate fluctuations. An empirical analysis shows that the proposed model generally outperforms other standard short-term rate models in fitting cross-sections of options prices. Moreover, the explicit nature of policy rates, to some extent, enables the model to infer risk-neutral probabilities of the central-bank rate decisions. | A tractable interest rate model with explicit monetary policy rates |
S037722171501139X | We study the regulation of one-way station-based vehicle sharing systems through parking reservation policies. We measure the performance of these systems in terms of the total excess travel time of all users caused by vehicle or parking space shortages. We devise mathematical programming based bounds on the total excess travel time of vehicle sharing systems under any passive regulation (i.e., policies that do not involve active vehicle relocation) and, in particular, under any parking space reservation policy. These bounds are compared to the performance of several partial parking reservation policies, a parking space overbooking policy and to the complete parking reservation (CPR) and no-reservation (NR) policies introduced in a previous paper. A detailed user behavior model for each policy is presented, and a discrete event simulation is used to evaluate the performance of the system under various settings. The analysis of two case studies of real-world systems shows the following: (1) a significant improvement of what can theoretically be achieved is obtained via the CPR policy; (2) the performances of the proposed partial reservation policies monotonically improve as more reservations are required; and (3) parking space overbooking is not likely to be beneficial. In conclusion, our results reinforce the effectiveness of the CPR policy and suggest that parking space reservations should be used in practice, even if only a small share of users are required to place reservations. | Regulating vehicle sharing systems through parking reservation policies: Analysis and performance bounds |
S0377221715011406 | Communication links connect pairs of wireless nodes in a wireless network. Links can interfere with each other due to their proximity and transmission power if they use the same frequency channel. Given that a frequency channel is the most important and scarce resource in a wireless network, we wish to minimize the total number of different frequency channels used. We can assign the same channel to multiple different links if the assignment is done in a way that avoids co-channel interference. Given a conflict graph which shows conflicts between pairs of links if they are assigned the same frequency channel, assigning channels to links can be cast as a minimum coloring problem. However the coloring problem is complicated by the fact that acceptably small levels of interference between pairs of links using the same channel can accumulate to cause an unacceptable level of total interference at a given link. In this paper we develop fast and effective methods for frequency channel assignment in multi-hop wireless networks via new heuristics for solving this extended coloring problem. The heuristics are orders of magnitude faster than an exact solution method while consistently returning near-optimum results. | Fast heuristics for the frequency channel assignment problem in multi-hop wireless networks |
S0377221715011418 | Search games for a mobile or immobile hider traditionally have the hider permanently confined to a compact ‘search region’, making eventual capture inevitable. Hence the payoff can be taken as the time until capture. However, in many real-life search problems it is possible for the hider to escape an area in which he was known to be located (e.g. Bin Laden from Tora Bora) or for a prey animal to escape a predator’s hunting territory. We model and solve such continuous time problems with escape, where we take the probability of capture to be the searcher’s payoff. We assume the searcher, while cruise searching, can cover the search region at unit rate of area, for a given time horizon T known to the hider. The hider can stay still or choose any time to flee the region. To counter this, the searcher can also adopt an ambush mode which will capture a fleeing hider. The searcher wins the game if he either finds the hider while cruise searching or ambushes him while he is attempting to flee; the hider wins if he flees successfully (while the searcher is cruising) or has not been found by time T. The optimal searcher strategy involves decreasing the ambush probability over time, to a limit of zero. This surprising behaviour is opposite to that found recently by Alpern et al. (2011, 2013) in a predator-prey game with similar dynamics but without the possibility of the hider escaping. Our work also complements that of Zoroa et al. (2015) on searching for multiple prey and Gal and Casas (2014) for a combined model of search and pursuit. | Optimal search and ambush for a hider who can escape the search region |
S037722171501142X | This manuscript reviews recent advances in deterministic global optimization for Mixed-Integer Nonlinear Programming (MINLP), as well as Constrained Derivative-Free Optimization (CDFO). This work provides a comprehensive and detailed literature review in terms of significant theoretical contributions, algorithmic developments, software implementations and applications for both MINLP and CDFO. Both research areas have experienced rapid growth, with a common aim to solve a wide range of real-world problems. We show their individual prerequisites, formulations and applicability, but also point out possible points of interaction in problems which contain hybrid characteristics. Finally, an inclusive and complete test suite is provided for both MINLP and CDFO algorithms, which is useful for future benchmarking. | Global optimization advances in Mixed-Integer Nonlinear Programming, MINLP, and Constrained Derivative-Free Optimization, CDFO |
S0377221715011431 | Burn-in is a method of ‘elimination’ of initial failures (infant mortality). In the conventional burn-in procedures, to burn-in an item means to subject it to a fixed time period of simulated use prior to actual operation. Then, the items which failed during burn-in are just scrapped and only those which survived the burn-in procedure are considered to be of satisfactory quality. Thus, when the items are subject to degradation phenomena, those whose degradation levels at the end of burn-in exceed a given failure threshold level are eliminated. In this paper, we consider a new burn-in procedure for items subject to degradation phenomena and belonging to mixed populations composed of a weak and a strong subpopulation. The new procedure is based on the ‘whole history’ of the degradation process of an item periodically observed during the burn-in and utilizes the information contained in the observed degradation process to assess whether the item belongs to the strong or weak subpopulation. The problem of determining the optimal burn-in parameters is considered and the properties of the optimal parameters are derived. A numerical example is also provided to illustrate the theoretical results obtained in this paper. | Optimal burn-in procedure for mixed populations based on the device degradation process history |
S0377221715011455 | We propose a method for bank efficiency assessment, based on weight restricted DEA, that limits banks’ abilities to use extreme weights, corresponding to extreme judgements of the risk adjusted prices on funding sources and assets. Based on a data set comprising the largest European banks during the financial crisis, we illustrate the impact of the proposed weight restrictions in two different efficiency models; one related to banks’ funding mix and one related to their asset mix. The results show that using a more balanced set of weights tends to reduce the estimated efficiency scores more for those banks which were bailed out during the crisis, which confirms the potential bias within standard DEA that does not control for extreme weights applied by highly risky banks. We discuss the use of the proposed method as a regulatory tool to constrain discretion when complying with regulatory capital benchmarks such as the Basel regulatory capital ratios. | Controlling for the use of extreme weights in bank efficiency assessments during the financial crisis |
S0377221715011467 | Dynamic Ambulance Management (DAM) is generally believed to provide means to enhance the response-time performance of emergency medical service providers. The implementation of DAM algorithms leads to additional movements of ambulance vehicles compared to the reactive paradigm, where ambulances depart from the base station when an incident is reported. In practice, proactive relocations are only acceptable when the number of additional movements is limited. Motivated by this trade-off, we study the effect of the number of relocations on the response-time performance. We formulate the relocations from one configuration to a target configuration via the Linear Bottleneck Assignment Problem, so as to provide the quickest way to transition to the target configuration. Moreover, the performance is measured by a general penalty function, assigning to each possible response time a certain penalty. We extensively validate the effectiveness of relocations for a wide variety of realistic scenarios, including a day and night scenario in a critically and realistically loaded system. The results consistently show that even a small number of relocations leads to near-optimal performance, which is important for the implementation of DAM algorithms in practice. | The effect of ambulance relocations on the performance of ambulance service providers |
S0377221715011479 | Risk assessment and management was established as a scientific field some 30–40 years ago. Principles and methods were developed for how to conceptualise, assess and manage risk. These principles and methods still represent to a large extent the foundation of this field today, but many advances have been made, linked to both the theoretical platform and practical models and procedures. The purpose of the present invited paper is to perform a review of these advances, with a special focus on the fundamental ideas and thinking on which these are based. We have looked for trends in perspectives and approaches, and we also reflect on where further development of the risk field is needed and should be encouraged. The paper is written for readers with different types of background, not only for experts on risk. | Risk assessment and risk management: Review of recent advances on their foundation |
S0377221715011637 | We study a two-person zero-sum game where the payoff matrix entries are random and the constraints are satisfied jointly with a given probability. We prove that for the general random-payoff zero-sum game there exists a “weak duality” between the two formulations, i.e., the optimal value of the minimizing player is an upper bound on that of the maximizing player. Under certain assumptions, we show that there also exists a “strong duality” where their optimal values are equal. Moreover, we develop two approximation methods to solve the game problem when the payoff matrix entries are independent and normally distributed. Finally, numerical examples are given to illustrate the performances of the proposed approaches. | Random-payoff two-person zero-sum game with joint chance constraints |
S0377221715011649 | This article investigates the role played by both production and market risks in cash crop farmers’ decision to adopt long rotations, considered as innovative cropping systems. We build a multi-period recursive farm model with Discrete Stochastic Programming. The model arbitrates each year between conventional and innovative, longer rotations. Yearly farming operations are broken down according to a decision tree, so that production risk is an intra-year risk. Market risk is considered as an inter-year risk influencing crop successions. Simulations are performed on a specialized French cash crop farm. They show that when the long rotation is subsidized by an area premium, farmers are encouraged to remain in longer rotations. They also show that a high level of risk aversion tends to slow down the conversion towards longer rotations. | A Dynamic Stochastic Programming model of crop rotation choice to test the adoption of long rotation under price and production risks |
S0377221715011650 | The maximum capture problem with random utilities seeks to locate new facilities in a competitive market such that the captured demand of users is maximized, assuming that each individual chooses among all available facilities according to a well-known random utility model, namely the multinomial logit. The problem is complex mostly due to its integer nonlinear objective function. Currently, the most efficient approaches deal with this complexity by either using a nonlinear programming solver or reformulating the problem into a Mixed-Integer Linear Programming (MILP) model. In this paper, we show how the best MILP reformulation available in the literature can be strengthened by using tighter coefficients in some inequalities. We also introduce a new branch-and-bound algorithm based on a greedy approach for solving a relaxation of the original problem. Extensive computational experiments are presented, benchmarking the proposed approach against other linear and non-linear relaxations of the problem. The computational experiments show that our proposed algorithm is competitive with all other methods, as there is no method which outperforms the others in all instances. We also present a large-scale real instance of the problem, which comes from an application in park-and-ride facility location, for which our proposed branch-and-bound algorithm was the most effective solution method. | A branch-and-bound algorithm for the maximum capture problem with random utilities |
S0377221715011662 | Given that companies have the flexibility to decide on the size and timing of a renewable electricity investment, the existence of four paradox effects is proven: Only the type but not the amount of governmental support has an influence on the optimal capacity of a renewable electricity generating system. A decrease of governmental support over time may result in higher capacities of renewables installed at an industry level, at least in the short term. Likewise, higher uncertainty may encourage an expansion of these capacities. In contrast, technological progress may hamper the expansion of capacities. Finally, these four paradox effects are exemplified in a Germany-based case study of a photovoltaic project. | The paradox effects of uncertainty and flexibility on investment in renewables under governmental support |
S0377221715011674 | We consider a production/clearing process in a random environment where a single machine produces a certain product into a buffer continuously. The demands arrive according to a Markov Additive Process (MAP) governed by a continuous-time Markov chain, and their sizes are independent and have phase-type distributions depending on the type of arrival. Since negative inventory is not allowed, the demand may be partially satisfied. The production process switches between predetermined rates that depend on the state of the environment. In addition, the system is totally cleared at stationary renewal times and starts anew at level zero immediately. Several clearing policies are considered: clearing at random times, clearing at crossings of a specified level, and a combination of the above policies. We assume the total cost includes a fixed clearing cost, a variable cost for the cleared amount, a holding cost, and a lost demand cost. By applying regenerative theory, we use tools from the exit-time theorem for fluid processes and martingales to obtain cost functionals under both the discounted and average criteria. Finally, illustrative examples and a comparative study are provided. | Clearing control policies for MAP inventory process with lost sales |
S0377221715011686 | We consider the problem of customer equilibrium strategies in an M/M/1 queue under dynamic service control. The service rate switches between a low and a high value depending on system congestion. Arriving customers do not observe the system state at the moment of arrival. We show that due to service rate variation, the customer equilibrium strategy is not generally unique, and derive an upper bound on the number of possible equilibria. For the problem of social welfare optimization, we numerically analyze the relationship between the optimal and equilibrium arrival rates as a function of various parameter values, and assess the level of inefficiency via the price of anarchy measure. We finally derive analytic solutions for the special case where the service rate switch occurs when the queue ceases to be empty. | Customer equilibrium and optimal strategies in an M/M/1 queue with dynamic service control |
S0377221715011698 | The quality of short-term electricity load forecasting is crucial to the operation and trading activities of market participants in an electricity market. In this paper, it is shown that a multiple equation time-series model, which is estimated by repeated application of ordinary least squares, has the potential to match or even outperform more complex nonlinear and nonparametric forecasting models. The key ingredient of the success of this simple model is the effective use of lagged information by allowing for interaction between seasonal patterns and intra-day dependencies. Although the model is built using data for the Queensland region of Australia, the method is completely generic and applicable to any load forecasting problem. The model’s forecasting ability is assessed by means of the mean absolute percentage error (MAPE). For day-ahead forecasts, the MAPE returned by the model over a period of 11 years is an impressive 1.36%. The forecast accuracy of the model is compared with a number of benchmarks, including three popular alternatives and one industrial standard reported by the Australian Energy Market Operator (AEMO). The performance of the model developed in this paper is superior to all benchmarks and outperforms the AEMO forecasts by about a third in terms of the MAPE criterion. | Forecasting day-ahead electricity load using a multiple equation time series approach |
S0377221715011704 | This paper analyzes a model where a manufacturer sells a product in two markets. One market is directly served by the manufacturer and the other is served by a retailer. While the manufacturer can offer consumer rebates, the retailer can potentially sell in a gray market, i.e., selling products outside of the authorized channel. Using a game-theoretic approach, we find that (1) rebates have a gray-market-deterrence effect, (2) rebates are beneficial to the manufacturer and possibly to the retailer, (3) partial redemption of rebates is not always beneficial to the manufacturer, and (4) rebate leakage across markets or rebate under-valuation by consumers is not always detrimental to the retailer. These findings suggest the possible use of rebates even in scenarios where the conventional rationales for their use are absent. | The benefits of consumer rebates: A strategy for gray market deterrence |
S0377221715011716 | This paper addresses the Pickup and Delivery Problem with Time Windows, Profits, and Reserved Requests (PDPTWPR), a new vehicle routing problem arising in carrier collaboration realized through Combinatorial Auctions (CA). In carrier collaboration, several carriers form an alliance and exchange some of their transportation requests. Each carrier has reserved requests, which will be served by itself, whereas its other requests, called selective requests, may be served by the other carriers. Each request is a pickup and delivery request associated with an origin, a destination, a quantity, two time windows, and a price for serving the request paid by its corresponding shipper. Each carrier in the CA has to determine which selective requests to serve, in addition to its reserved requests, and to build feasible routes that maximize its total profit. A Mixed-Integer Linear Programming (MILP) model is formulated for the problem and an adaptive large neighborhood search (ALNS) approach is developed. The ALNS involves ad-hoc destroy/repair operators and a local search procedure. It runs in successive segments which change the behavior of the operators and compute their own statistics to adapt the selection probabilities of the operators. The MILP model and the ALNS approach are evaluated on 54 randomly generated instances with 10–100 requests. The computational results indicate that the ALNS significantly outperforms the solver, not only in terms of solution quality but also in terms of CPU time. | Adaptive large neighborhood search for the pickup and delivery problem with time windows, profits, and reserved requests |
S0377221715011728 | Changing the topology of a railway network can greatly affect its capacity. Railway networks, however, can be altered in a multitude of different ways. As each way has significant immediate and long-term financial ramifications, it is a difficult task to decide how and where to expand the network. In response, railway capacity expansion models (RCEM) have been developed to support capacity planning activities and to remove physical bottlenecks in the current railway system. The exact purpose of these models is to decide, given a fixed budget, where track duplications and track subdivisions should be made in order to increase theoretical capacity the most. These models are high level and strategic, which is why increases to theoretical capacity are concentrated upon. The optimisation models have been applied to a case study to demonstrate their application and their worth. The case study clearly shows how automated approaches of this nature could be a formidable alternative to current manual planning techniques and simulation. If the exact effect of track duplications and subdivisions can be sufficiently approximated, this approach will be very applicable. | Optimisation models for expanding a railway's theoretical capacity
S037722171501173X | A characteristic aspect of risks in a complex, modern society is the nature and degree of the public response – sometimes significantly at variance with objective assessments of risk. A large part of the risk management task involves anticipating, explaining and reacting to this response. One of the main approaches we have for analysing the emergent public response, the social amplification of risk framework, has been the subject of little modelling. The purpose of this paper is to explore how social risk amplification can be represented and simulated. The importance of heterogeneity among risk perceivers, and the role of their social networks in shaping risk perceptions, makes it natural to take an agent-based approach. We look in particular at how to model some central aspects of many risk events: the way actors come to observe other actors more than external events in forming their risk perceptions; the way in which behaviour both follows risk perception and shapes it; and the way risk communications are fashioned in the light of responses to previous communications. We show how such aspects can be represented by availability cascades, but also how this creates further problems of how to represent the contrasting effects of informational and reputational elements, and the differentiation of private and public risk beliefs. Simulation of the resulting model shows how certain qualitative aspects of risk response time series found empirically – such as endogenously-produced peaks in risk concern – can be explained by this model. | Agent-based computational modelling of social risk responses |
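A minimal sketch of the availability-cascade mechanism the abstract discusses: each heterogeneous agent holds a private risk belief but publicly expresses a mixture of that belief (informational weight) and its neighbours' public expressions (reputational pull). The network size, weights, and update rule are illustrative assumptions, not the paper's full model.

```python
# Toy availability cascade: public risk expressions homogenise over time.
import numpy as np

rng = np.random.default_rng(3)
n, steps, w_info = 100, 30, 0.4              # w_info: informational weight
private = rng.uniform(0, 1, n)               # heterogeneous private beliefs
public = private.copy()                      # public expressions start honest
neighbours = [rng.choice(n, size=5, replace=False) for _ in range(n)]

for _ in range(steps):
    observed = np.array([public[nb].mean() for nb in neighbours])
    public = w_info * private + (1 - w_info) * observed   # reputational pull

# Dispersion of public expressions shrinks relative to private beliefs.
print(f"public std {public.std():.3f} vs private std {private.std():.3f}")
```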
S0377221715011741 | In the recent past, the global public has been alarmed by several natural disasters with tremendous consequences. The OR/MS community has reacted by developing quantitative methods to support humanitarian aid, which have become well-established in the areas of disaster operations management and humanitarian logistics. An especially rapidly growing strand of literature in these areas uses multicriteria optimization methods, which is natural in view of the ubiquity of multiple objectives in disaster operations. The article reviews recent literature on the application of multicriteria optimization to the management of natural disasters, epidemics or other forms of humanitarian crises. Different optimization criteria as well as multicriteria decision making approaches applied in this field are discussed and examined. The available literature is classified according to several attributes, and each paper is presented in some detail. Possible future research directions are outlined. | Multicriteria optimization in humanitarian aid |
S0377221715011753 | This paper develops a planning concept for defining repetitive delivery patterns according to which stores of a grocery retailer are supplied from a distribution center. Applying repetitive delivery patterns offers major advantages when scheduling the workforce for shelf replenishment, defining cyclic transportation routes and managing warehouse capacities. In doing so, all logistics subsystems of a retail chain, i.e., warehousing, transportation and instore logistics, are jointly scheduled. We propose a novel model to minimize total costs in all associated subsystems of a retail distribution chain. A solution approach is developed for clustering stores and selecting delivery patterns that reflects practical requirements. A broad numerical analysis demonstrates cost savings of 2.5 percent on average compared to a state-of-the-art approach (see Sternbeck & Kuhn, 2014). This considerable cost reduction potential is confirmed by applying the suggested approach to a real case of a major European grocery retailer. | Delivery pattern and transportation planning in grocery retailing |
S0377221715011765 | Planned infrastructure works reduce the available capacity of a railway system and make it more vulnerable to conflicts and delay propagation. The starting point of this paper is a published timetable that needs to be adapted due to the temporary unavailability of some resources. Since the timetable is in operation, changed arrival or departure times and cancellations have an impact on the passengers, who need to adapt their travel behavior. In light of passenger service, a trade-off is made between these inconveniences and the delays that occur in practice due to the reduced capacity. Taking the robustness of the adapted railway timetable into account is a new approach to rescheduling in case of a planned infrastructure unavailability. In this paper, an algorithm is presented that adjusts the train routing and the train schedule to the planned maintenance interventions and keeps the level of passenger service as high as possible. To avoid large inconveniences, the developed algorithm tries to minimize the number of cancellations. Computational results show that by allowing small modifications to the routing and the timetable, the robustness of the resulting solution can improve by more than 10 percent and only a few trains need to be canceled. | An iterative approach for reducing the impact of infrastructure maintenance on the performance of railway systems
S0377221715011777 | This paper examines the cost and profit efficiency of four types of Chinese commercial banks over the period from 2002 to 2013. We find that cost and profit efficiencies improved across all types of Chinese domestic banks in general, and that the banks are more profit efficient than cost efficient. Foreign banks are the most cost efficient but the least profit efficient. The profit efficiency gap between foreign banks and domestic banks has widened after the World Trade Organization transition period (2007–2013). Ownership structure, market competition, bank size, and listing status are the main determinants of the efficiency of Chinese banks. We also find a causal relationship between efficiency and the shadow return on equity (SROE) by using a panel autoregression method. The evidence from SROE suggests that policy makers should be cautious of the adjustment costs imposed by the recapitalization process, which offset the efficiency gains. | Evaluating the performance of Chinese commercial banks: A comparative analysis of different types of banks
S0377221715011789 | We investigate the controversial role of the informal sector in the economy of 64 countries between 2003 and 2007 by focusing for the first time on the impact it has on sovereign debt markets. In addition to a standard ordered probit regression, we employ two nonparametric neural network modeling techniques in order to capture possible complex interactions between our variables. Results confirm our main hypothesis that the informal sector has significant adverse effects on credit ratings and lending costs. MLP neural networks offer the best fit to the data, followed by the RBF neural networks and probit regression, respectively. The results do not change with respect to the stage of economic development of a country and contradict views about the possibility of significant economic benefits arising from the informal sector. Our study has important implications, especially in the context of the ongoing sovereign debt crisis, since it suggests that a reduction in the informal sector of financially challenged countries is likely to help in relaxing credit risk concerns and cutting down lending costs. Finally, a decision tree analysis is used to exploit the inherent discreteness in the data and derive intuitive rules with respect to the level of the informal sector. | Sovereign debt markets in light of the shadow economy |
S0377221715011790 | In this paper, we examine the impact of a manufacturer's upgrading strategy for durable products on the decision of a third-party entrant in a secondary market. To do so, we develop a two-period model in which a monopolistic manufacturer sells new durable products directly to end consumers in both periods, while a third-party entrant operates a reverse channel selling used products in the secondary market. The manufacturer releases an upgraded product (i.e., one that is technologically superior to the version introduced in the first period). We derive conditions under which it is optimal (1) for the manufacturer to release an upgraded product in the second period and (2) for a third-party entrant to enter a secondary market. We also find, through numerical analysis, that when upgrades are small or moderate, the upgrading of new products can increase a third-party entrant's profitability in the secondary market, but it does not benefit the third-party entrant when upgrades are large. | The impact of product upgrading on the decision of entrance to a secondary market
S0377221715011807 | Optimal solutions to the Level of Repair Analysis (LORA) and the Spare Parts Stocking (SPS) problems are essential in achieving a desired system/equipment operational availability. Although these two problems are interdependent, they are seldom solved simultaneously, due to the complicating nature of the relationships between spare levels and system availability (or expected backorder), thus leading to sub-optimal solutions for both problems. This paper uses a genetic programming-based symbolic regression methodology to evolve simpler mathematical expressions for the expected backorder equation. In addition to making the SPS problem more tractable, the simpler mathematical expressions make it possible for a combined SPS and LORA model to be formulated and solved using standard optimization techniques. Three sets of spare parts stocking problems are presented to study the feasibility of the proposed approach. Further, a case study for the joint problem is solved, which shows that the proposed methodology can tackle the integrated problem. | Spare parts stocking analysis using genetic programming
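For context, the exact expected-backorder curve that the symbolic regression is used to simplify has, for Poisson pipeline demand with mean lam and stock level s, the classical form EBO(s) = Σ_{x>s} (x − s) P(X = x). A direct evaluation is sketched below with illustrative parameters.

```python
# Exact expected backorders under Poisson demand (truncated tail sum).
from math import exp, factorial

def expected_backorder(s, lam, tail=200):
    """EBO(s) for Poisson(lam) pipeline demand and stock level s."""
    return sum((x - s) * exp(-lam) * lam**x / factorial(x)
               for x in range(s + 1, tail))

for s in range(5):
    print(s, round(expected_backorder(s, lam=2.0), 4))
```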
S0377221715011819 | Assembly lines with mixed products present ergonomic risks that can affect the productivity of workers and lines. Consequently, line balancing must consider the risk of injury associated with the set of tasks necessary to process a product unit, in addition to other managerial and technological attributes such as workload or space. Therefore, in this paper we propose a new approach to solve the assembly line balancing problem considering temporal, spatial and ergonomic attributes at once. We formulate several mathematical models and analyze the behavior of one of these models through a case study linked to Nissan. Furthermore, we study the effect of demand plan variations and ergonomic risk on the line balancing result. | Models for assembly line balancing by temporal, spatial and ergonomic risk attributes
S0377221715011820 | In classical scheduling problems it is common to assume that the due dates are predefined parameters for the scheduler. In integrated systems, however, due date assignment and scheduling decisions have to be carefully coordinated to make sure that the company can meet the assigned due dates. Thus, a huge effort has been made recently to provide tools to optimally integrate due date assignment and scheduling decisions. In most cases it is common to assume that the assigned due date(s) are not restricted. However, in many practical cases, assigning due dates too far into the future may violate early agreements between the manufacturer and his customers. Thus, in this paper we extend the current literature to deal with such a constraint. This is done by analyzing a model that integrates due date assignment and scheduling decisions, where each job may be assigned a different due date whose value cannot exceed a predefined threshold. The objective is to minimize the total weighted earliness, tardiness and due date assignment penalties. We show that the problem is equivalent to a two-stepwise weighted tardiness problem, and thus for a large set of special cases it is strongly NP-hard, even when the scheduling is done on a single machine. We then provide several special cases that can be solved in polynomial time, and present approximation results for a slightly modified (and equivalent) problem in various machine settings. | Optimal restricted due date assignment in scheduling
S0377221715011832 | We consider a self-storage warehouse, facing storage orders for homogeneous or heterogeneous storage units over a certain time horizon. The warehouse operations manager needs to decide which storage orders to accept and schedule them across different storage units to maximize revenue. We model warehouse operations as scheduling n independent multiprocessor tasks with given start and end times, with an objective to maximize revenue. With operational constraints like the maximal upscaling level, precedence order constraints, and maximal idle time, the established mixed-integer program cannot be efficiently solved by commercial software. We therefore propose a column generation approach and a branch-and-price method to find an optimal schedule. Computational experiments show that, compared with current methods in self-storage warehouses, our method can significantly increase the revenue. | Increasing the revenue of self-storage warehouses by optimizing order scheduling
S0377221715011844 | The consistency check within each pairwise comparison matrix is an important step in an Analytic Network Process (ANP) decision. In an ANP network there is both the ability and the need to test for additional levels of consistency or coherency among the priority vectors. Examples are used to highlight cases where a Supermatrix with priority vectors that were obtained from either perfect or nearly perfect consistent pairwise comparison matrices generates suboptimal decisions. Simulations are used to further demonstrate the frequency of these occurrences in general ANP networks. A form of cross validation within the Supermatrix called linking validation is developed and demonstrated. The linking validation method allows decision makers to use the priority vectors within the Supermatrix to validate other priority vectors within the Supermatrix. The linking validation method involves generating linking estimates. The linking estimates are compared against each other to identify the most incoherent priority vector by calculating the Linking Coherency Index (LCI) scores. The decision maker can then update the specified priority vector and repeat this process until the LCI-score for every linking estimate is below the given threshold. The use of linking validation to test for coherency further improves the validity of ANP models. | Linking validation: A search for coherency within the Supermatrix |
S0377221715011856 | We consider the min-max regret version of a single-machine scheduling problem to determine which jobs are processed by outsourcing under processing time uncertainty. The performance measure is expressed as the total cost for processing some jobs in-house and outsourcing the rest. Processing time uncertainty is described through two types of scenarios: either an interval scenario or a discrete scenario. The objective is to minimize the maximum deviation from optimality over all scenarios. We show that when the cost for in-house jobs is expressed as the makespan, the problem with an interval scenario is polynomially solvable, while the one with a discrete scenario is NP-hard. Thus, for the discrete scenario case, we develop a 2-approximation algorithm and investigate when the problem is polynomially solvable. Since the problem minimizing the total completion time as a performance measure for in-house jobs is known to be NP-hard for both scenarios, we consider the problem with a special structure for the processing time uncertainty and develop a polynomial-time algorithm for both scenarios. | Min–max regret version of a scheduling problem with outsourcing decisions under processing time uncertainty |
S0377221715011868 | We consider a general model for scheduling jobs on unrelated parallel machines with maintenance interventions. Processing times deteriorate with the job's position in the production sequence, and the goal of the maintenance is to help restore good processing conditions. The maintenance duration depends on the time elapsed since the last maintenance intervention. Several performance criteria and different maintenance systems have been proposed in the literature, leading basically to assignment problems as the underlying model. We invert the approach and start by setting up the matrix for the assignment problems, which captures all the information about the production-maintenance system. This can be done for very general processing times and maintenance durations. The solutions to the assignment problems are determined first. They define the order in which the jobs are to be processed on the various machines, and only then is the vital information about the schedule retrieved, such as completion and maintenance times. It will be shown that these matrices are easily obtained, and this approach does not necessitate any complex calculations. | Parallel-machine scheduling with maintenance: Praising the assignment problem
S0377221716000023 | Challenges associated with resource allocation to mitigate and recover from natural and man-made disasters inspire new theoretical questions for decision making in the intertwined natural and human world. Disaster loss is determined not only by post-disaster relief but also by pre-disaster mitigation and preparedness. To examine the decision-making process at the ex ante and ex post disaster stages, we develop a two-stage dynamic programming model that optimally allocates preparedness and relief expenditures. We analytically and numerically solve the model and provide new insights through sensitivity analysis. | Balancing pre-disaster preparedness and post-disaster relief
S0377221716000035 | District heating systems provide the heat generated in a centralized location to a set of users for their residential and commercial heating requirements. Heat distribution is generally obtained by using hot water or steam flowing through a closed network of insulated pipes and heat exchange stations at the users’ locations. The use of optimization techniques for the strategic design of such networks is strongly motivated by the high cost of the required infrastructures but is particularly challenging because of the technical characteristics and the size of the real world applications. We present a mathematical model developed to support district heating system planning. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues and minimizing infrastructure and operational costs. The model considers steady state conditions of the hydraulic system and takes into account the main technical requirements of the real world application. Results on real and randomly generated benchmark networks are discussed. | An optimization approach for district heating strategic network design |
S0377221716000047 | In this paper, two Benders decomposition algorithms and a novel two-stage integer programming-based heuristic are presented to optimize the beam angle and fluence map in Intensity Modulated Radiation Therapy (IMRT) planning. Benders decomposition is first implemented in the traditional manner by iteratively solving the restricted master problem and then identifying and adding the violated Benders cuts. We also implemented Benders decomposition using the “lazy constraint” feature included in CPLEX. In contrast, the two-stage heuristic first seeks to find a good solution by iteratively eliminating the least used angles in the linear programming relaxation solution until the size of the formulation is manageable. In the second stage of the heuristic, the solution is improved by applying local branching. The various methods were tested on real patient data to evaluate their effectiveness and runtime characteristics. The results indicated that implementing Benders using the lazy constraint usually led to better feasible solutions than the traditional approach. Moreover, the LP rounding heuristic was seen to generate high-quality solutions within a short amount of time, with further improvement obtained with the local branching search. | Benders decomposition and an IP-based heuristic for selecting IMRT treatment beam angles |
S0377221716000059 | The q-gradient vector is a generalization of the gradient vector based on the q-derivative. We present two global optimization methods that do not require ordinary derivatives: a q-analog of the Steepest Descent method called the q-G method and a q-analog of the Conjugate Gradient method called the q-CG method. Both q-G and q-CG are reduced to their classical versions when q equals 1. These methods are implemented in such a way that the search process gradually shifts from global in the beginning to almost local search in the end. Moreover, Gaussian perturbations are used in some iterations to guarantee the convergence of the methods to the global minimum in a probabilistic sense. We compare q-G and q-CG with their classical versions and with other methods, including CMA-ES, a variant of Controlled Random Search, and an interior point method that uses finite-difference derivatives, on 27 well-known test problems. In general, the q-G and q-CG methods are very promising and competitive, especially when applied to multimodal problems. | Global optimization using q-gradients |
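The q-derivative underlying both methods is D_q f(x) = (f(qx) − f(x)) / ((q − 1)x), which recovers the ordinary derivative as q → 1. Below is a minimal q-gradient descent sketch on a toy convex function; the fixed q, step size, and the zero-coordinate guard are illustrative simplifications (the actual q-G method gradually shifts q toward 1 and adds Gaussian perturbations).

```python
# Toy q-gradient descent: coordinate-wise q-derivatives of f at x.
import numpy as np

def q_gradient(f, x, q):
    g = np.empty_like(x)
    for i in range(len(x)):
        xq = x.copy()
        xq[i] = q * x[i] if x[i] != 0 else 1e-8   # guard the x_i = 0 case
        g[i] = (f(xq) - f(x)) / (xq[i] - x[i])    # (f(qx)-f(x))/((q-1)x_i)
    return g

f = lambda x: np.sum(x**2)            # convex toy objective
x = np.array([2.0, -1.5])
for _ in range(50):
    x = x - 0.1 * q_gradient(f, x, q=0.9)
print(x)                              # approaches the minimizer (0, 0)
```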
S0377221716000060 | In this note we first show that the centroid (or centre of gravity) gives in value a (σ + 1)-approximation to any continuous single facility minisum location problem for any gauge with asymmetry measure σ, and thus a 2-approximate solution for any norm. On the other hand, for any gauge the true minimum point (the 1-median) remains within a bounded set whenever a fixed proportion of less than half of the total weight of the destination points is moved to any other positions. It follows that the distance between the centroid and the 1-median may be arbitrarily close to half the diameter of the destination set. | How bad can the centroid be?
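The two points the note compares can be sketched for the Euclidean (norm) case: the weighted centroid has a closed form, while the 1-median can be approximated by Weiszfeld's fixed-point iteration. The data and iteration budget below are illustrative.

```python
# Weighted centroid vs. 1-median (Weiszfeld iteration), Euclidean case.
import numpy as np

pts = np.array([[0., 0.], [10., 0.], [0., 10.], [0.2, 0.1]])
w = np.array([1., 1., 1., 5.])

centroid = (w[:, None] * pts).sum(axis=0) / w.sum()

x = centroid.copy()                       # Weiszfeld iteration for the 1-median
for _ in range(200):
    d = np.linalg.norm(pts - x, axis=1)
    if np.any(d < 1e-12):                 # landed on a destination point
        break
    coef = w / d
    x = (coef[:, None] * pts).sum(axis=0) / coef.sum()

cost = lambda y: (w * np.linalg.norm(pts - y, axis=1)).sum()
print(f"centroid cost {cost(centroid):.3f} vs 1-median cost {cost(x):.3f}")
```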
S0377221716000072 | This paper presents an evolutionary algorithm for the fixed-charge multicommodity network design problem (MCNDP), which concerns routing multiple commodities from origins to destinations by designing a network through selecting arcs, with an objective of minimizing the fixed costs of the selected arcs plus the variable costs of the flows on each arc. The proposed algorithm evolves a pool of solutions using principles of scatter search, interlinked with an iterated local search as an improvement method. New cycle-based neighborhood operators are presented which enable complete or partial re-routing of multiple commodities. An efficient perturbation strategy, inspired by ejection chains, is introduced to perform local compound cycle-based moves to explore different parts of the solution space. The algorithm also allows infeasible solutions violating arc capacities while performing the “ejection cycles”, and subsequently restores feasibility by systematically applying correction moves. Computational experiments on benchmark MCNDP instances show that the proposed solution method consistently produces high-quality solutions in reasonable computational times. | A cycle-based evolutionary algorithm for the fixed-charge capacitated multi-commodity network design problem |
S0377221716000084 | Given a directed graph G = (V, A) with arbitrary arc costs, the Elementary Shortest Path Problem (ESPP) consists of finding a minimum-cost path between two nodes s and t such that each node of G is visited at most once. If negative costs are allowed, the problem is NP-hard. In this paper, several integer programming formulations for the ESPP are compared. We present analytical results based on a polyhedral study of the formulations, and computational experiments where we compare their linear programming relaxation bounds and their behavior within a branch-and-cut framework. The computational results show that a formulation with dynamically generated cutset inequalities is the most effective. | Integer programming formulations for the elementary shortest path problem
S0377221716000096 | Durable products are characterized by their modular structured design as well as their long life cycle. Each class of components involved in the multi-indenture structure of such products requires a different recovery process. Moreover, due to their long life cycle, the return flows are of various quality levels. In this article, we study a closed-loop supply chain in the context of durable products with generic modular structures. To this end, we propose a mixed-integer programming model based on a generic disassembly tree where the number of each sub-assembly depends on the quality status of the return stream. The model determines the location of various types of facilities in the reverse network while coordinating forward and reverse flows. We also consider the legislative target for the recovery of used products as a constraint in the problem formulation. We present a Benders decomposition-based solution algorithm together with several algorithmic enhancements for this problem. Computational results illustrate the superior performance of the solution method. | Accelerating Benders decomposition for closed-loop supply chain network design: Case of used durable products with different quality levels |
S0377221716000102 | We propose a regression approach for estimating the distribution of ambulance travel times between any two locations in a road network. Our method uses ambulance location data that can be sparse in both time and network coverage, such as Global Positioning System data. Estimates depend on the path traveled and on explanatory variables such as the time of day and day of week. By modeling at the trip level, we account for dependence between travel times on individual road segments. Our method is parsimonious and computationally tractable for large road networks. We apply our method to estimate ambulance travel time distributions in Toronto, providing improved estimates compared to a recently published method and a commercial software package. We also demonstrate our method’s impact on ambulance fleet management decisions, showing substantial differences between our method and the recently published method in the predicted probability that an ambulance arrives within a time threshold. | Large-network travel time distribution estimation for ambulances |
S0377221716000114 | Pumped-hydro storage plants are increasingly considered as a complement to intermittent renewable energy sources, hence a profound understanding of their underlying economics gains in importance. To this end, we derive efficient operation programs for storage plants which operate in an environment with time-varying but deterministic power prices. Optimal control theory thereby provides a consistent framework for analysis in continuous time, taking into account the different specifics of pumped-hydro setups with large (not restricting) and small (restricting) reservoirs. An empirical illustration discusses storage operation in the German market, showing that the profit potential for storage plants decreased significantly between 2008 and 2011, affecting both large- and small-reservoir plants. | Optimal operation of pumped-hydro storage plants with continuous time-varying power prices |
S0377221716000126 | We investigate the following question of relevance to truckload dispatchers striving for profitable decisions in the context of dynamic pick-up and delivery problems: “since not all future pick-up/delivery requests are known with certainty (i.e., advance load information (ALI) is incomplete), how effective are alternative methods for guiding those decisions?” We propose a simple intuitive policy and integrate it into a new two-index mixed integer programming formulation, which we implement using the rolling horizon approach. On average, in one of the practical transportation network settings studied, the proposed policy can, with just second-day ALI, yield an optimality ratio equal to almost 90 percent of profits in the static optimal solution (i.e., the solution with asymptotically complete ALI). We also observe from studying the policy that second-day load information is essential when a carrier operates in a large service area. We enhance the proposed policy by adopting the idea of a multiple scenario approach. With only one-day load information, the enhanced policy improves the ratio of optimality by an average of 6 percentage points. That improvement declines with more ALI. In comparison to other dispatching methods, our proposed policy and the enhanced version we developed were found to be very competitive in terms of solution quality and computational efficiency. | Effective truckload dispatch decision methods with incomplete advance load information
S0377221716000138 | The optimal solution, as well as the objective value, of a stochastic programming problem varies with the underlying probability measure. This paper addresses stability with respect to the underlying probability measure and stability of the objective. The techniques presented are employed to make numerically tractable those problems that are formulated with numerous scenarios, or even with a continuous probability measure. The results justify clustering techniques, which significantly reduce computation times while guaranteeing a desired approximation quality. The second part of the paper highlights Newton’s method to solve the reduced stochastic recourse problems. The techniques presented exploit the particular structure of the recourse function of the stochastic optimization problem. The tools are finally demonstrated on a benchmark problem taken from electrical power flows. (Nomenclature for the power-flow case study: the transmission network graph with buses B and transmission lines L; generators (PV buses) and load buses (PQ buses); net real and reactive power injections, with superscripts g for generation and d for demand; complex power in watts; impedance in ohms, with resistance Rik and reactance Xik; admittance in siemens, with conductance Gik and susceptance Bik; voltage magnitudes in volts; voltage angles and angle differences in radians; and the imaginary unit j, with j² = −1.) | Nonlinear stochastic programming–With a case study in continuous switching
S037722171600014X | This study addresses the optimal pipe-sizing problem of a tree-shaped gas distribution network with a single supply source. An algorithm was developed with the aim of minimizing the investment for constructing a gas distribution network with a tree-shaped layout in which demands are fixed. The construction cost is known to depend on the pipe diameters used for each arc in the network. However, under the assumption that pipe diameters are continuous, we prove that it is possible to obtain the minimum construction cost directly and analytically by an iterative procedure that converts the original tree into a single equivalent arc. In addition, we show that expanding the converted single arc inversely to the original tree computes the optimal continuous pipe diameter for each arc. Following this, we present an additional heuristic to convert optimal continuous pipe diameters into approximate discrete pipe diameters. The algorithms were evaluated by applying them to sample networks. The numerical results obtained by comparing the approximate discrete diameters with the optimal discrete diameters confirm the efficiency of our algorithms, thereby demonstrating their suitability for designing real gas distribution networks. | Optimal pipe-sizing problem of tree-shaped gas distribution networks
S0377221716000151 | Sorting methods, in particular ELECTRE Tri methods, are widely used in Multiple Criteria Decision Aiding to deal with ordinal classification problems. Problems of this kind encountered in practice involve the evaluation of different alternatives (actions) on several evaluation criteria that are structured in a hierarchical way. In order to deal with a hierarchical structure of criteria in decision problems, the Multiple Criteria Hierarchy Process (MCHP) has been recently proposed. In this paper, we apply the MCHP to the ELECTRE Tri methods. In particular, we extend the ELECTRE Tri-B, ELECTRE Tri-C and ELECTRE Tri-nC methods. We also adapt the MCHP concept to the case where interaction among evaluation criteria has either a strengthening, a weakening, or an antagonistic effect. Finally, we present an extension of the SRF method to determine the weights of criteria in case they are hierarchically structured. | Multiple Criteria Hierarchy Process for ELECTRE Tri methods
S0377221716000163 | In this paper we study two-agent scheduling in a two-machine flowshop. The cost function is the weighted sum of some common regular functions, including the makespan and the total completion time. Specifically, we consider two problems, namely the problem to minimize the weighted sum of both agents’ makespan, and the problem to minimize the weighted sum of one agent’s total completion time and the other agent’s makespan. For the first problem, we give an ordinary NP-hardness proof and a pseudo-polynomial-time algorithm. We also analyze the performance of treating the problem using Johnson’s rule and propose an approximation algorithm based on Johnson’s rule. For the second problem, we propose an approximation algorithm based on linear programming relaxation of the problem. Finally, we show that some simple algorithms can be used to solve special cases of the two problems. | Two-agent scheduling in a flowshop |
S0377221716000175 | We consider the two-machine no-wait job shop minimum makespan scheduling problem. We show that when each job has exactly two equal length operations (also called a proportionate job shop), the problem is solvable in O(n log n) time. We also show that the proportionate problem becomes strongly NP-hard when some jobs are allowed to visit only one machine. Finally, we show that the proportionate problem with missing operations becomes solvable in O(n log n) time when all missing operations are on the same machine. | The proportionate two-machine no-wait job shop scheduling problem
S0377221716000187 | We propose an axiomatic definition of a dispersion measure that can be applied to any finite sample of k-dimensional real observations. Next we introduce a taxonomy of dispersion measures based on the possible behavior of these measures with respect to new upcoming observations. In this way we obtain two classes: unstable and absorptive dispersion measures. We examine their properties and illustrate them with examples. We also consider the relationship between multidimensional dispersion measures and multidistances. Moreover, we examine new interesting properties of some well-known dispersion measures for one-dimensional data, such as the interquartile range and the sample variance. | Measures of dispersion for multidimensional data
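Two familiar instances of the kind of functionals such axioms are meant to cover, sketched for k-dimensional samples: the total sample variance (trace of the covariance matrix) and a multidistance-type measure, the mean pairwise Euclidean distance. The data are illustrative.

```python
# Two simple dispersion measures for k-dimensional observations.
import numpy as np

def total_variance(X):
    """Trace of the sample covariance matrix (rows = observations)."""
    return np.trace(np.cov(X, rowvar=False))

def mean_pairwise_distance(X):
    """Mean Euclidean distance over all pairs, a multidistance-type measure."""
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

X = np.random.default_rng(1).normal(size=(50, 3))   # 50 observations in R^3
print(total_variance(X), mean_pairwise_distance(X))
```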
S0377221716000412 | Effective bankruptcy prediction is critical for financial institutions to make appropriate lending decisions. In general, the input variables (or features), such as financial ratios, and prediction techniques, such as statistical and machine learning techniques, are the two most important factors affecting the prediction performance. While many related works have proposed novel prediction techniques, very few have analyzed the discriminatory power of the features related to bankruptcy prediction. In the literature, in addition to financial ratios (FRs), corporate governance indicators (CGIs) have been found to be another important type of input variable. However, the prediction performance obtained by combining CGIs and FRs has not been fully examined. Only some selected CGIs and FRs have been used in related studies, and the chosen features may differ from study to study. Therefore, the aim of this paper is to assess the prediction performance obtained by combining seven different categories of FRs and five different categories of CGIs. The experimental results, based on a real-world dataset from Taiwan, show that the FR categories of solvency and profitability and the CGI categories of board structure and ownership structure are the most important features in bankruptcy prediction. Specifically, the best prediction performance, in terms of prediction accuracy, Type I/II errors, ROC curve, and misclassification cost, is obtained by combining FRs and CGIs. However, these findings may not be applicable in some markets where the definition of distressed companies is unclear and the characteristics of corporate governance indicators are not obvious, such as in the Chinese market. | Financial ratios and corporate governance indicators in bankruptcy prediction: A comprehensive study
S0377221716000424 | Optimal inventory allocation policies have a significant impact on profits in the retail industry. A manufacturer ships products to the retailers’ stores where the end customers buy the product during the selling season. It has been put forward that it is beneficial for the manufacturer to reserve a certain fraction of the inventory for a second replenishment. Then the manufacturer can replenish the retailers’ inventories optimally and can take advantage of the risk pooling effect. In practice, retailers require a certain availability of the product throughout the selling season. Supply contracts are used to coordinate the delivery of products. Under such a contract, the manufacturer agrees to achieve a certain service level and to pay a financial penalty if she misses it. We analyze how a manufacturer responds to a service level contract if she wants to minimize her expected costs. We develop an allocation strategy for the multiple retailer case and find that our results allow for a better understanding of the effect of service level contracts on manufacturers and retailers. | Optimal two-period inventory allocation under multiple service level contracts |
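A Monte Carlo sketch of the trade-off this abstract describes: reserve a fraction of total inventory for a mid-season replenishment, pool it across retailers according to realized need, and check the resulting fill rate. All distributions and numbers are illustrative assumptions, not the paper's model.

```python
# Risk pooling via a reserved replenishment fraction (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n_retailers, total_stock, reserve_frac = 4, 400, 0.3
sims = 10_000

filled = demanded = 0.0
for _ in range(sims):
    d1 = rng.poisson(40, n_retailers)            # first-period demand
    d2 = rng.poisson(40, n_retailers)            # second-period demand
    first = (1 - reserve_frac) * total_stock / n_retailers
    sold1 = np.minimum(first, d1)                # first-period sales
    leftover = first - sold1
    # Risk pooling: allocate the reserve proportionally to remaining need.
    need = np.maximum(d2 - leftover, 0.0)
    reserve = reserve_frac * total_stock
    alloc = np.minimum(need, reserve * need / need.sum()) if need.sum() else 0.0
    filled += sold1.sum() + np.minimum(leftover + alloc, d2).sum()
    demanded += d1.sum() + d2.sum()

print(f"achieved fill rate: {filled / demanded:.3f}")
```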
S0377221716000436 | In this paper we study the periodic maintenance problem: given a set of m machines and a horizon of T periods, find an indefinitely repeating maintenance schedule such that at most one machine is serviced in each period. In addition, all machines must be serviced at least once in any cycle. In each period, machine i either generates a servicing cost b_i or an operating cost that depends on the last period in which i was serviced; the operating cost of machine i in a period equals a_i times the number of periods since the last servicing of that machine. The main objective is to find a cyclic maintenance schedule of periodicity T that minimizes the total cost. To solve this problem we propose a new mixed integer programming formulation and a new heuristic method, based on General Variable Neighborhood Search, called Nested General Variable Neighborhood Search. The performance of this heuristic is shown through extensive experimentation on a diverse set of problem instances. | Nested general variable neighborhood search for the periodic maintenance problem
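A worked evaluation of the stated cost structure may help: over a cycle of length T, machine i pays b_i in a period where it is serviced, and otherwise a_i times the number of periods since its last service, with cyclic wrap-around. The schedule encoding and data below are illustrative.

```python
# Cost of a cyclic maintenance schedule; at most one machine per period.
def cycle_cost(schedule, a, b, T):
    """schedule[t] = machine serviced in period t (or None).
    Assumes each machine is serviced at least once per cycle, as required."""
    m = len(a)
    # Last service time of each machine, wrapped to before period 0.
    last = {i: max(t for t in range(T) if schedule[t] == i) - T
            for i in range(m)}
    cost = 0.0
    for t in range(T):
        for i in range(m):
            if schedule[t] == i:
                cost += b[i]
                last[i] = t
            else:
                cost += a[i] * (t - last[i])
    return cost

a, b = [3.0, 1.0], [10.0, 10.0]          # machine 0 degrades faster
print(cycle_cost([0, 1, 0, None], a, b, T=4))   # -> 42.0
```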
S0377221716000448 | It is important, in practice, to find robust solutions to optimisation problems. This issue has been the subject of extensive research focusing on single-objective problems. Recently, researchers also acknowledged the need to find robust solutions to multi-objective problems and presented some first results on this topic. In this paper, we deal with bi-objective optimisation problems in which only one objective function is uncertain. The contribution of our paper is three-fold. Firstly, we introduce and analyse four different robustness concepts for bi-objective optimisation problems with one uncertain objective function, and we propose an approach for defining a meaningful robust Pareto front for these types of problems. Secondly, we develop an algorithm for computing robust solutions with respect to these four concepts for the case of discrete optimisation problems. This algorithm works for finite and for polyhedral uncertainty sets using a transformation to a multi-objective (deterministic) optimisation problem and the recently published concept of Pareto robust optimal solutions (Iancu & Trichakis, 2014). Finally, we apply our algorithm to two real-world examples, namely aircraft route guidance and the shipping of hazardous materials, illustrating the four robustness concepts and their solutions in practical applications. | Bi-objective robust optimisation |
S037722171600045X | In this paper we present an approach to generate cyclic production schemes for multiple products with stochastic demand that are produced on a single production unit. This scheme specifies a fixed and periodic production sequence, called a cycle, in which each product may occur more than once. In order to stabilize the cycle length, alternative control strategies are introduced. These strategies keep the cycle length close to a target length by adding idle time to the cycle, cutting production short, or overproducing. On the basis of the selected control strategy, as well as a base-stock policy, target inventory levels are determined for each production run in the cycle. Demand can be backlogged, but a certain service level is required. Setup times are sequence-dependent and the storage is capacitated. Employing the developed strategies, we investigate the trade-off between stability of the cycle length and the total cost induced by the production schedule in a computational study based on both real-world data and random test instances. | Quasi-fixed cyclic production schemes for multiple products with stochastic demand
S0377221716000461 | This paper presents a biobjective robust optimization formulation for identifying robust solutions from a given Pareto set. The objectives consider both solution and model robustness when the exact values of the selected solution are affected by uncertainty. The problem is formulated equivalently as a model with uncertainty on the constraint parameters and objective function coefficients. Structural properties and a solution algorithm are developed for the case of multiobjective linear programs. The algorithm is based on facial decomposition; each subproblem is a biobjective linear program and is related to an efficient face of the multiobjective program. The resulting Pareto set reduction methodology allows the handling of continuous and discrete Pareto sets, and can be generalized to consider criteria other than robustness. The use of secondary criteria to further break ties among the many efficient solutions provides opportunities for additional trade-off analysis in the space of the secondary criteria. Examples illustrate the algorithm and characteristics of solutions obtained. | Biobjective robust optimization over the efficient set for Pareto set reduction |
S0377221716000473 | An object tracking sensor network (OTSN) is a wireless sensor network designed to track moving objects in its sensing area. It consists of static sensors deployed in a region for tracking moving targets. Usually, these sensors are equipped with a sensing unit and a non-rechargeable battery. The investigated mission involves a moving target with a known trajectory, such as a train on a railway or a plane on an airline route. In order to save energy, the target must be monitored by exactly one sensor at any time. In our context, the sensors may not be accessible during the mission and the target can be subject to earliness or tardiness. Therefore, our aim is to build a static schedule of sensing activities that resists these perturbations. A pseudo-polynomial two-step algorithm is proposed. First, a discretization step processes the input data, and a mathematical formulation of the scheduling problem is proposed. Then, a dichotomy approach that solves a transportation problem at every iteration is introduced; the very last step is addressed by solving a linear program. | Robust scheduling of wireless sensor networks for target tracking under uncertainty
S0377221716000485 | This paper studies the Elevator Dispatching Problem arising in destination control. In many of the methods presented for the problem, the routing aspect is not considered; decision variables specify only request-to-elevator assignments, and the service order of the requests is determined by applying a heuristic rule called the collective control principle. However, the quality of this approach is rarely investigated. In this paper the approach is compared with a formulation defining the elevator routes explicitly. The average waiting time as well as the average journey time are used as objective functions in the comparisons. Computational experiments with the former objective function, on random instances arising during light and normal traffic conditions, indicate that both approaches very often produce the same solution, while with the latter the situation is the opposite. Some well-known traffic patterns are also analyzed to identify cases in which the optimal solutions of these approaches are equal. | Assignment formulation for the Elevator Dispatching Problem with destination control and its performance analysis
S0377221716000497 | In this paper, we deal with bilevel quadratic programming problems with binary decision variables in the leader problem and convex quadratic programs in the follower problem. For this purpose, we transform the bilevel problems into equivalent single-level quadratic formulations by replacing the follower problem with the equivalent Karush-Kuhn-Tucker (KKT) conditions. Then, we use the single-level formulations to obtain mixed integer linear programming (MILP) models and semidefinite programming (SDP) relaxations. Thus, we compute optimal solutions and upper bounds using linear programming (LP) and SDP relaxations. Our numerical results indicate that the SDP relaxations are considerably tighter than the LP ones. Consequently, the SDP relaxations allow finding tight feasible solutions for the problem, especially when the number of variables in the leader problem is larger than in the follower problem. Moreover, they are solved at a significantly lower computational cost for large-scale instances. | A computational study for bilevel quadratic programs using semidefinite relaxations
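A generic instance of the reformulation described, with illustrative notation: for a follower problem min_y (1/2)y'Qy + (c + R'x)'y subject to Ay ≤ b, the follower is replaced by its KKT system, whose complementarity conditions are then linearized to obtain the MILP.

```latex
% Illustrative single-level KKT reformulation (notation is assumed, not the
% paper's): the follower, given leader decision x, solves
%   min_y (1/2) y'Qy + (c + R'x)'y   s.t.  Ay <= b.
\begin{align*}
\min_{x \in \{0,1\}^n,\; y,\; \lambda} \quad & F(x, y)\\
\text{s.t.} \quad & Qy + c + R^{\top}x + A^{\top}\lambda = 0,\\
& Ay \le b, \qquad \lambda \ge 0,\\
& \lambda_i \,(b - Ay)_i = 0 \quad \text{for all } i,
\end{align*}
% where each complementarity product is linearized with big-M binaries to
% obtain the MILP, or relaxed semidefinitely to obtain the SDP bound.
```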
S0377221716000503 | When designing an optimization model for use in mass casualty incident (MCI) response, the dynamic and uncertain nature of the problem environment poses a significant challenge. Many key problem parameters, such as the number of casualties to be processed, will typically change as the response operation progresses. Other parameters, such as the time required to complete key response tasks, must be estimated and are therefore prone to errors. In this work we extend a multi-objective combinatorial optimization model for MCI response to improve performance in dynamic and uncertain environments. The model is developed to allow for use in real time, with continuous communication between the optimization model and the problem environment. A simulation of this problem environment is described, allowing for a series of computational experiments evaluating how model utility is influenced by a range of key dynamic or uncertain problem and model characteristics. It is demonstrated that the move to an online system mitigates the effect of poor communication speed, while errors in the estimation of task duration parameters are shown to significantly reduce model utility. | Online optimization of casualty processing in major incident response: An experimental analysis
S0377221716000527 | Value added models have been proposed to analyze different aspects related to school effectiveness on the basis of student growth. There is consensus in the literature about the need to control for socioeconomic status and other contextual variables at the student and school level in the estimation of value added, for which the methodologies employed have largely relied on hierarchical linear models. However, this approach is problematic because results are based on comparisons to the school’s average—implying no real incentive for performance excellence. Meanwhile, activity analysis models to estimate school value added have been unable to control for contextual variables at both the student and school levels. In this study we propose a robust frontier model to estimate contextual value added which merges relevant branches of the activity analysis literature, namely, metafrontiers and partial frontier methods. We provide an application to a large sample of Chilean schools, a relevant country to study due to the reforms made to its educational system that point to the need for accountability measures. Results indicate not only the general relevance of including contextual variables but also how they contribute to explaining the performance differentials found for the three types of schools—public, privately-owned subsidized, and privately-owned fee-paying. Also, the results indicate that contextual value added models generate school rankings more consistent with the evaluation models currently used in Chile than any other type of evaluation model. | Value added, educational accountability approaches and their effects on schools’ rankings: Evidence from Chile
S0377221716000539 | This work considers the scheduling of railway preventive condition-based tamping, which is the maintenance operation performed to restore the track irregularities to ensure both safety and comfort for passengers and freight. The problem is to determine when to perform the tamping on which section for given railway tracks over a planning horizon. The objective is to minimize the Net Present Costs (NPC) considering the following technical and economic factors: 1) track quality (the standard deviation of the longitudinal level) degradation over time; 2) track quality thresholds based on train speed limits; 3) the impact of previous tamping operations on the track quality recovery; 4) track geometrical alignment; 5) tamping machine operation factors and finally 6) the discount rate. In this work, a Mixed Integer Linear Programming (MILP) model is formulated and tested on data from the railway corridor between Odense and Fredericia, part of the busiest main line in Denmark. Computational experiments are carried out to compare our model to the existing models in the literature. The results show that taking into consideration these previously overlooked technical and economic factors 3, 5 and 6 can prevent under-estimation of required tamping operations, produce a more economic solution, prevent unnecessary early tamping, and improve the track quality by 2 percent. | Optimization of preventive condition-based tamping for railway tracks |
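As a small worked example of the economic factors listed (tamping costs under factor 6, the discount rate), the Net Present Cost of a candidate tamping plan discounts each intervention's cost to the present; the values below are illustrative, not from the Danish corridor.

```python
# NPC of tamping interventions scheduled in years 1, 4 and 7 (illustrative).
cost_per_tamping, rate = 5000.0, 0.05
npc = sum(cost_per_tamping / (1 + rate)**t for t in [1, 4, 7])
print(round(npc, 2))
```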
S0377221716000540 | We consider the container loading problem that occurs at many furniture factories where product boxes are arranged on product pallets and the product pallets are arranged in a container for shipments. The volume of products in the container should be maximized, and the bottom of each pallet must be fully supported by the container floor or by the top of a single pallet to simplify the unloading process. To improve the filling rate of the container, the narrow spaces at the tops and sides of the pallets in the container should be filled with product boxes. However, it must be ensured that all of the infill product boxes can be entirely palletized into complete pallets after being shipped to the destination. To solve this problem, we propose a heuristic algorithm consisting of a tree search sub-algorithm and a greedy sub-algorithm. The tree search sub-algorithm is employed to arrange the pallets in the container. Then, the greedy sub-algorithm is applied to fill the narrow spaces with product boxes. The computational results on BR1–BR15 show that our algorithm is competitive. | A heuristic algorithm for container loading of pallets with infill boxes |
S0377221716000552 | We study a strategic, two-player, sequential game between an attacker and defender. The defender must allocate resources amongst possible countermeasures and across possible targets. The attacker then chooses a type of threat and a target to attack. This paper proposes a model for determining optimal resource allocation by combining game theory with a simple multi-attribute utility model. Given a set of possible attributes representing goals or preferences, we allow each player to choose a weight for each attribute, where the subset of attributes with nonzero weights represents that player’s preferences. Every countermeasure is given a score for its effectiveness at both mitigating the effects of an attack in terms of each attribute and reducing the probability of the success of an attack. Furthermore, the consequences of each possible attack are scored in terms of each attribute. The multi-attribute utility aspect of this model uses these scores, along with the players’ weights, to form the basis of the utility (or disutility) for each player. We find that (i) the zero-sum game where the attacker’s and defender’s weights are identical results in the worst losses for the defender, (ii) in general cases the defender’s equilibrium strategy has the result of making the attacker indifferent between multiple attacks and (iii) the use of target-independent countermeasures (i.e. countermeasures which operate at national levels as opposed to operating at differing levels for each target) can increase the cost-effectiveness of countermeasures. | A game theoretic model for resource allocation among countermeasures with multiple attributes |
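A sketch of how the multi-attribute scores described can combine into a player's expected disutility for one countermeasure/attack pair: attribute weights chosen by the player, attack consequence scores per attribute, the countermeasure's mitigation effectiveness, and its reduction of the attack success probability. All names and numbers are illustrative assumptions.

```python
# Expected disutility of one attack under one countermeasure (illustrative).
import numpy as np

weights = np.array([0.5, 0.3, 0.2])        # defender's attribute weights
consequence = np.array([100., 40., 10.])   # attack consequences per attribute
mitigation = np.array([0.4, 0.1, 0.0])     # countermeasure mitigation per attribute
p_success_base, p_reduction = 0.6, 0.25    # countermeasure also cuts success prob.

p_success = p_success_base * (1 - p_reduction)
loss_per_attr = consequence * (1 - mitigation)
expected_disutility = p_success * float(weights @ loss_per_attr)
print(expected_disutility)                 # 0.45 * (30 + 10.8 + 2) = 19.26
```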
S0377221716000564 | In this paper, we generalize a formula for the discrete Choquet integral by replacing the standard product with a suitable fusion function. For the introduced fusion-function-based discrete Choquet-like integrals we discuss and prove several properties and also show that our generalization leads to several new interesting functionals. We provide a complete characterization of the introduced functionals as aggregation functions. For n = 2, several new aggregation functions are obtained, and if symmetric capacities are considered, our approach yields new generalizations of OWA operators. If n > 2, the introduced functionals are aggregation functions only if they are Choquet integrals with respect to some distorted capacity. | Fusion functions based discrete Choquet-like integrals
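For reference, the standard discrete Choquet integral being generalized (the product in the summand is what the fusion function replaces) can be computed as below; the capacity and scores are illustrative, and the symmetric capacity shows the OWA connection mentioned for n = 2.

```python
# Standard discrete Choquet integral with respect to a capacity mu.
from itertools import combinations  # (only needed if enumerating subsets)

def choquet(x, mu):
    """x: list of n criterion scores; mu: dict frozenset -> capacity value."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])     # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        A = frozenset(order[k:])                     # criteria scoring >= x_(k)
        total += (x[i] - prev) * mu[A]
        prev = x[i]
    return total

# A symmetric capacity on two criteria (depends only on |A|): an OWA case.
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.3,
      frozenset({0, 1}): 1.0}
print(choquet([0.2, 0.8], mu))    # 0.2*1.0 + (0.8-0.2)*0.3 = 0.38
```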
S0377221716000734 | Mass casualty incidents create a surge in demand for emergency medical services, and this can often overwhelm the emergency response capacity. Thus, it is critically important to ration the emergency medical resources based on prioritization to maximize the lifesaving capacity. In a traditional triage scenario, the priority for receiving care is solely determined by a patient’s criticality at the time of assessment. Recent studies show that a resource-constrained triage is more effective in providing the greatest good to the maximum number of patients. We model this problem as an ambulance routing problem, and determine the order and destination hospitals for patient evacuation. This is formulated as a set partitioning problem, and we apply a column generation approach to efficiently handle a large number of feasible ambulance schedules. We show that the proposed algorithm with a column generation approach solves the problem to near optimality within a short computation time, and the solutions derived by the algorithm outperform fixed-priority resource allocations. | Optimal allocation of emergency medical resources in a mass casualty incident: Patient prioritization by column generation |
S0377221716000746 | Speech language programs aim to prevent and correct disorders of language, speech, voice and fluency. Speech problems in children can adversely affect emotional, educational and occupational development. Over the past several years, a particular health region in Saskatchewan, Canada has experienced an increase in the number of preschool children referred for speech language therapy. Indeed, current wait times from referral to first appointment are well in excess of one year, and one-tenth of patients do not receive any service before entering school. In an effort to demonstrate successful operational research (OR) practice through improving patient flow, we developed a discrete-event simulation model to test change ideas proposed by speech language therapists. These change ideas involved increasing the percentage of group treatments (rather than treating the majority of patients individually), using a paraprofessional to complete many of the routine tasks currently handled by the therapists, standardizing appointment durations, hiring additional therapists and incorporating block treatment scheduling. We also tested combinations of the above strategies to determine the impact of adopting different change ideas simultaneously. Some strategies showed considerable promise for improving patient flow and are now being used in actual practice. | Using simulation to test ideas for improving speech language pathology services
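A toy discrete-event model of such a referral queue can be assembled in a few lines with the SimPy library; every parameter below (arrival rate, session lengths, group share, staffing level) is a made-up placeholder, not a figure from the study:

```python
import random
import simpy

RNG = random.Random(42)
SIM_WEEKS = 200
THERAPISTS = 3
GROUP_SHARE = 0.3          # fraction of children treated in groups (assumption)
waits = []

def patient(env, therapists):
    arrived = env.now
    with therapists.request() as req:   # queue for an available therapist
        yield req
        waits.append(env.now - arrived)
        # Group sessions consume less therapist time per child (assumption).
        service = 0.5 if RNG.random() < GROUP_SHARE else 1.5   # weeks
        yield env.timeout(service)

def referrals(env, therapists):
    while True:
        yield env.timeout(RNG.expovariate(2.0))   # ~2 referrals per week
        env.process(patient(env, therapists))

env = simpy.Environment()
therapists = simpy.Resource(env, capacity=THERAPISTS)
env.process(referrals(env, therapists))
env.run(until=SIM_WEEKS)
print("mean wait: %.1f weeks over %d patients" % (sum(waits) / len(waits), len(waits)))
```

Rerunning with a higher GROUP_SHARE or capacity is the simulation analogue of the change ideas the therapists proposed.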
S0377221716000758 | Kumar and Russell (2002), Henderson and Russell (2005), and Badunenko, Henderson and Russell (2013) have proposed a production-frontier methodology to analyze economic growth and the convergence of growth rates across countries. They see two main advantages to this methodology: (1) it reconstructs the world production frontier without relying on any particular (typically unverifiable) assumptions about the growth process, and (2) it allows labor productivity growth to be decomposed into several parts. In this paper, I extend this approach by considering a multi-sector setting, which permits a more realistic and complete country-level analysis while retaining the advantages of the earlier methodology. I also address the criticism that sector-level data are less reliable than country-level data by showing how the multi-sector approach can easily be adapted. I apply the approach to the OECD countries from 1995 to 2008. The results confirm the non-neutrality of technological change. I also find that capital accumulation plays the biggest role in the increase of labor productivity, while technological change and human capital accumulation also play an important role, though one roughly half as large as that of capital accumulation. Interestingly, these results suggest the emergence of two groups: the eastern and central European countries on the one hand, and the EU15 and Korea on the other. The two groups seem to diverge over time. | Growth and convergence of the OECD countries: A multi-sector production-frontier approach
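The decomposition in question follows the Henderson and Russell (2005) line: growth in output per worker is split multiplicatively into efficiency change, technological change, capital deepening and human capital accumulation. A schematic single-sector version (the paper's multi-sector refinement is omitted here) is:

$$
\frac{y_{t+1}}{y_t}
=\underbrace{\frac{e_{t+1}}{e_t}}_{\text{efficiency change}}
\times\underbrace{\mathit{TECH}}_{\text{technological change}}
\times\underbrace{\mathit{KACC}}_{\text{capital deepening}}
\times\underbrace{\mathit{HACC}}_{\text{human capital accumulation}},
$$

where $y$ is output per worker, $e$ is the country's distance to the world frontier, and each factor is computed nonparametrically from the reconstructed frontier rather than from a parametric production function.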
S037722171600076X | We consider robust one-way trading with limited information on price fluctuations. Our analysis finds the best guarantee on the difference from the optimal offline performance. We provide a closed-form solution and reveal, for the first time, all possible worst-case scenarios. Numerical experiments show that our policy is more tolerant of information inaccuracy than Bayesian policies and can earn higher average revenue than other robust policies while maintaining a lower standard deviation. | Competitive difference analysis of the one-way trading problem with limited information
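Competitive difference analysis seeks the policy that minimizes the worst-case shortfall relative to the offline optimum over all admissible price paths. With $\Sigma$ denoting the set of price sequences consistent with the limited fluctuation information (notation assumed here, not taken from the paper), the objective is:

$$
\min_{\pi}\;\max_{\sigma\in\Sigma}\;\bigl(\mathrm{OPT}(\sigma)-R_{\pi}(\sigma)\bigr),
$$

where $R_{\pi}(\sigma)$ is the revenue of online trading policy $\pi$ on price sequence $\sigma$ and $\mathrm{OPT}(\sigma)$ is the optimal offline revenue achievable with full foresight of $\sigma$. This additive criterion contrasts with the multiplicative ratio used in classical competitive analysis.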
S0377221716000771 | Spatial partial equilibrium models incorporating conjectural variations are widely used to analyze the development of oligopolistic multi-agent markets, such as international energy and raw material markets. Although this model type can produce multiple equilibria under commonly used assumptions, to the best of our knowledge the consequences for the interpretation of the model results have not yet been explored in detail. To close this gap, we derive a linear complementarity model for the gas market and discuss under which assumptions on the model structure a component of the solution is unique. In particular, we find that the gas flow between a trader and a consumer is unique whenever the trader is modeled as exerting market power in the consumer’s market. We demonstrate our findings by computing the extreme points of the polyhedral solution space and show that erroneous conclusions could be drawn whenever only one (arbitrary) point in the solution space is picked for interpretation. Furthermore, we discuss whether economically meaningful changes in parameter values exist that would enforce uniqueness in all components of the solution. | Multiplicity of equilibria in conjectural variations models of natural gas markets
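In generic form, a conjectural variations equilibrium stacks one complementarity condition per player. For a trader $i$ choosing quantity $q_i$ in a market with inverse demand $p(Q)$, a schematic condition (a textbook form, not the paper's full network model) reads:

$$
0 \le q_i \;\perp\; c_i'(q_i) - p(Q) - \theta_i\, q_i\, p'(Q) \ge 0,
$$

where $Q=\sum_j q_j$, $c_i$ is trader $i$'s cost function, and $\theta_i\in[0,1]$ is the conjectural-variations parameter ($\theta_i=0$ for price-taking behavior, $\theta_i=1$ for Cournot behavior). With affine demand and costs, the stacked conditions form a linear complementarity problem whose solution set is a polyhedron, which is what allows the authors to enumerate its extreme points.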
S0377221716000783 | This work investigates how fairness concerns influence supply-chain decision making, while examining the effect of private production-cost information and touching on issues related to bounded rationality. We conduct laboratory experiments with a supply-chain dyad in which an upstream supplier feeds a downstream retailer under a simple wholesale-price contract. We perform human–computer (H–C) experiments, where human subjects play the role of the supplier paired with a computerized retailer, as well as human–human (H–H) experiments, where human subjects play the roles of both supplier and retailer. These experiments allow us to isolate effects such as bounded rationality from the effects of fairness concerns on supply-chain decision making. We find that, compared to the standard analytical model, bounded rationality slightly reduces overall supply-chain profit without changing its distribution between the supplier and the retailer, while fairness concerns lead to greater supply-chain profits and a more balanced supply-chain profit distribution. We further show that under private cost information, the retailer's fairness concern is suppressed by the lack of reciprocity that follows from not being able to observe her rival's profit, but the supplier's fairness concern rooted in altruism persists. Based on our experimental results, we modify classical supply-chain models to include utility functions that incorporate both bounded rationality and fairness concerns. The estimated other-regarding coefficients are significantly lower under private information than under public information for the H–H experiments, and we find no evidence of inequity aversion in the H–C experiments. | Supply-chain performance anomalies: Fairness concerns under private cost information
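A standard way to encode such other-regarding preferences, and one plausible building block for the modified models mentioned in the abstract (the paper's exact specification is not reproduced here), is the Fehr and Schmidt (1999) inequity-aversion utility:

$$
U_i(\pi_i,\pi_j)=\pi_i-\alpha_i\max(\pi_j-\pi_i,\,0)-\beta_i\max(\pi_i-\pi_j,\,0),
$$

where $\pi_i,\pi_j$ are the two parties' profits, $\alpha_i$ penalizes disadvantageous inequity and $\beta_i$ advantageous inequity. Bounded rationality is then typically layered on top via a noisy choice rule such as logit (quantal response), so that better-utility actions are only probabilistically more likely to be chosen.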