FileName | Abstract | Title |
---|---|---|
S0377221713009557 | This paper introduces a fast solution procedure to solve 100-node instances of the time-dependent orienteering problem (TD-OP) within a few seconds of computation time. Orienteering problems occur in logistics situations where an optimal combination of locations needs to be selected and the routing between the selected locations needs to be optimized. In the time-dependent variant, the travel time between two locations depends on the departure time at the first location. In addition to a mathematical formulation of the TD-OP, the main contribution of this paper is the design of a fast and effective algorithm to tackle this problem. This algorithm combines the principles of an ant colony system (ACS) with a time-dependent local search procedure equipped with a local evaluation metric. Additionally, realistic benchmark instances with varying size and properties are constructed. The average score gap with the known optimal solution on these test instances is only 1.4% with an average computation time of 0.5 seconds. An extensive sensitivity analysis shows that the performance of the algorithm is insensitive to small changes in its parameter settings. | A fast solution method for the time-dependent orienteering problem |
S0377221713009569 | As input flows of secondary raw materials show high volatility and tend to behave in a chaotic way, the identification of the main drivers of the dynamic behavior of returns plays a crucial role. Based on a stylized production-recycling system consisting of a set of nonlinear difference equations, we explicitly derive parameter constellations where the system will or will not converge to its equilibrium. Using a constant elasticity of substitution production function, the model is then extended to enable coverage of real-world situations. Using waste paper as a reference raw material, we empirically estimate the parameters of the system. By using these regression results, we are able to show that the equilibrium solution is a Lyapunov unstable saddle point. This implies that the system is sensitive to initial conditions, which impedes the predictability of product returns. Small variations in production input proportions could, however, stabilize the whole system. | On stabilizing volatile product returns |
S0377221713009776 | We propose a way of using DEA cross-efficiency evaluation in portfolio selection. While cross-efficiency is an approach developed for peer evaluation, we improve its use in portfolio selection. In addition to (average) cross-efficiency scores, we suggest examining the variations of cross-efficiencies and incorporating two statistics of cross-efficiencies into the mean-variance formulation of portfolio selection. Two benefits are attained by our proposed approach. One is selection of portfolios well-diversified in terms of their performance on multiple evaluation criteria, and the other is alleviation of the so-called “ganging together” phenomenon of DEA cross-efficiency evaluation in portfolio selection. We apply the proposed approach to stock portfolio selection in the Korean stock market, and demonstrate that the proposed approach can be a promising tool for stock portfolio selection by showing that the selected portfolio yields higher risk-adjusted returns than other benchmark portfolios for a 9-year sample period from 2002 to 2011. | Use of DEA cross-efficiency evaluation in portfolio selection: An application to Korean stock market |
S0377221713009788 | In this paper we propose a heuristic for solving the problem of resource-constrained preemptive scheduling in the two-stage flowshop with one machine at the first stage and parallel unrelated machines at the second stage, where renewable resources are shared among the stages, so some quantities of the same resource can be used at different stages at the same time. Availability of every resource at any moment is limited and resource requirements of jobs are arbitrary. The objective is minimization of makespan. The problem is NP-hard. The heuristic first sequences jobs on the machine at stage 1 and then solves the preemptive scheduling problem at stage 2. Priority rules which depend on processing times and resource requirements of jobs are proposed for sequencing jobs at stage 1. A column generation algorithm which involves linear programming, a tabu search algorithm and a greedy procedure is proposed to minimize the makespan at stage 2. A lower bound on the optimal makespan in which sharing of the resources between the stages is taken into account is also derived. The performance of the heuristic, evaluated experimentally by comparing the solutions to the lower bound, is satisfactory. | A heuristic for scheduling in a two-stage hybrid flowshop with renewable resources shared among the stages |
S0377221713009806 | This paper considers a discrete-time priority queueing model with one server and two types (classes) of customers. Class-1 customers have absolute (service) priority over class-2 customers. New customer batches enter the system at the rate of one batch per slot, according to a general independent arrival process, i.e., the batch sizes (total numbers of arrivals) during consecutive time slots are i.i.d. random variables with arbitrary distribution. All customers entering the system during the same time slot (i.e., belonging to the same arrival batch) are of the same type, but customer types may change from slot to slot, i.e., from batch to batch. Specifically, the types of consecutive customer batches are correlated in a Markovian way, i.e., the probability that any batch of customers has type 1 or 2, respectively, depends on the type of the previous customer batch that has entered the system. Such an arrival model allows us to vary not only the relative loads of both customer types in the arrival stream, but also the amount of correlation between the types of consecutive arrival batches. The results reveal that the amount of delay differentiation between the two customer classes that can be achieved by the priority mechanism strongly depends on the amount of such interclass correlation (or, class clustering) in the arrival stream. We believe that this phenomenon has been largely overlooked in the priority-scheduling literature. | Class clustering destroys delay differentiation in priority queues |
S0377221713009818 | Variational inequality theory facilitates the formulation of equilibrium problems in economic networks. Examples of successful applications include models of supply chains, financial networks, transportation networks, and electricity networks. Previous economic network equilibrium models that were formulated as variational inequalities only included linear constraints; in this case the equivalence between equilibrium problems and variational inequality problems is achieved with a standard procedure because of the linearity of the constraints. However, in reality, often nonlinear constraints can be observed in the context of economic networks. In this paper, we first highlight with an application from the context of reverse logistics why the introduction of nonlinear constraints is beneficial. We then show mathematical conditions, including a constraint qualification and convexity of the feasible set, which allow us to characterize the economic problem by using a variational inequality formulation. Then, we provide numerical examples that highlight the applicability of the model to real-world problems. The numerical examples provide specific insights related to the role of collection targets in achieving sustainability goals. | A variational inequality formulation of equilibrium models for end-of-life products with nonlinear constraints |
S0377221713009831 | To safeguard analytical tractability and the concavity of objective functions, the vast majority of models belonging to oligopoly theory relies on the restrictive assumption of linear demand functions. Here we lay out the analytical solution of a differential Cournot game with hyperbolic inverse demand, where firms accumulate capacity over time à la Ramsey. The subgame perfect equilibrium is characterized via the Hamilton–Jacobi–Bellman equations solved in closed form both on infinite and on finite horizon setups. To illustrate the applicability of our model and its implications, we analyze the feasibility of horizontal mergers in both static and dynamic settings, and find appropriate conditions for their profitability under both circumstances. Static profitability of a merger implies dynamic profitability of the same merger. It appears that such a demand structure makes mergers more likely to occur than they would be under the standard linear inverse demand. | On the feedback solutions of differential oligopoly games with hyperbolic demand curve and capacity accumulation |
S0377221713009843 | In this paper we introduce a new fast and accurate numerical method for pricing exotic derivatives when discrete monitoring occurs, and the underlying evolves according to a one-dimensional Markov stochastic process. The approach exploits the structure of the matrix arising from the numerical quadrature of the pricing backward formulas to devise a convenient factorization that helps greatly in the speed-up of the recursion. The algorithm is general and is examined in detail with reference to the CEV (Constant Elasticity of Variance) process for pricing different exotic derivatives, such as Asian, barrier, Bermudan, lookback and step options, for which, to date, no efficient procedures are available. Extensive numerical experiments confirm the theoretical results. The MATLAB code used to perform the computation is available online at http://www1.mate.polimi.it/~marazzina/BP.htm. | Pricing exotic derivatives exploiting structure |
S0377221713009855 | This paper proposes an affine-based approach which jointly captures the nominal interest rate, the real interest rate, and the inflation risk premium to price inflation-indexed derivatives, including zero-coupon inflation-indexed swaps, year-on-year inflation-indexed swaps, inflation-indexed swaptions, and inflation-indexed caps and floors. We provide an example and explain how to use traded zero-coupon inflation-indexed swap rates to estimate inflation risk premiums. | Affine model of inflation-indexed derivatives and inflation risk premium |
S0377221713009867 | We address the single-machine stochastic scheduling problem with an objective of minimizing total expected earliness and tardiness costs, assuming that processing times follow normal distributions and due dates are decisions. We develop a branch and bound algorithm to find optimal solutions to this problem and report the results of computational experiments. We also test some heuristic procedures and find that surprisingly good performance can be achieved by a list schedule followed by an adjacent pairwise interchange procedure. (A sketch of the underlying normal-loss computation appears after this table.) | Minimizing earliness and tardiness costs in stochastic scheduling |
S0377221713009879 | A tandem queueing system with infinite and finite intermediate buffers, heterogeneous customers and generalized phase-type service time distribution at the second stage is investigated. The first stage of the tandem has a finite number of servers without buffer. The second stage consists of an infinite and a finite buffer and a finite number of servers. The arrival flow of customers is described by a Marked Markovian arrival process. Type 1 customers arrive at the first stage while type 2 customers arrive at the second stage directly. The service time at the first stage has an exponential distribution. The service times of type 1 and type 2 customers at the second stage have a phase-type distribution with different parameters. While waiting in the intermediate buffer, type 1 customers may become impatient and leave the system. The ergodicity condition and the steady-state distribution of the system states are analyzed. Some key performance measures are calculated. The Laplace–Stieltjes transform of the sojourn time distribution of type 2 customers is derived. Numerical examples are presented. | Tandem queueing system with infinite and finite intermediate buffers and generalized phase-type service time distribution |
S0377221713009880 | This paper presents a detailed description of a particular class of deterministic single product Maritime Inventory Routing Problems (MIRPs), which we call deep-sea MIRPs with inventory tracking at every port. This class involves vessel travel times between ports that are significantly longer than the time spent in port and require inventory levels at all ports to be monitored throughout the planning horizon. After providing a comprehensive literature survey of this class, we introduce a core model for it cast as a mixed-integer linear program. This formulation is quite general and incorporates assumptions and families of constraints that are most prevalent in practice. We also discuss other modeling features commonly found in the literature and how they can be incorporated into the core model. We then offer a unified discussion of some of the most common advanced techniques used for improving the bounds of these problems. Finally, we present a library, called MIRPLib, of publicly available test problem instances for MIRPs with inventory tracking at every port. Despite a growing interest in combined routing and inventory management problems in a maritime setting, no data sets are publicly available, which represents a significant “barrier to entry” for those interested in related research. Our main goal for MIRPLib is to help maritime inventory routing gain maturity as an important and interesting class of planning problems. As a means to this end, we (1) make available benchmark instances for this particular class of MIRPs; (2) provide the mixed-integer linear programming community with a set of optimization problem instances from the maritime transportation domain in LP and MPS format; and (3) provide a template for other researchers when specifying characteristics of MIRPs arising in other settings. Best known computational results are reported for each instance. | MIRPLib – A library of maritime inventory routing problem instances: Survey, core model, and benchmark results |
S0377221713009892 | In this paper, we realise an early warning system for hedge funds based on specific red flags that help detect the symptoms of impending extreme negative returns and the contagion effect. To do this we use regression tree analysis to identify a series of splitting rules that act as risk signals. The empirical findings presented herein show that contagion, crowded trades, leverage commonality and liquidity concerns are the leading indicators for predicting the worst returns. We not only provide a variable selection among potential predictors, but also assign specific risk thresholds for the selected key indicators at which the vulnerability of hedge funds becomes systemically relevant. | Hedge fund systemic risk signals |
S0377221713009909 | The Multi-Handler Knapsack Problem under Uncertainty is a new stochastic knapsack problem where, given a set of items, characterized by volume and random profit, and a set of potential handlers, we want to find a subset of items which maximizes the expected total profit. The item profit is given by the sum of a deterministic profit plus a stochastic profit due to the random handling costs of the handlers. Unlike other stochastic problems in the literature, the probability distribution of the stochastic profit is unknown. By using the asymptotic theory of extreme values, a deterministic approximation for the stochastic problem is derived. The accuracy of such a deterministic approximation is tested against the two-stage with fixed recourse formulation of the problem. Very promising results are obtained on a large set of instances in negligible computing time. | The Multi-Handler Knapsack Problem under Uncertainty |
S0377221713009910 | We suggest a new one-parameter family of solidarity values for TU-games. The members of this class are distinguished by the type of player whose removal from a game does not affect the remaining players’ payoffs. While the Shapley value and the equal division value are the boundary members of this family, the solidarity value is its center. With the exception of the Shapley value, all members of this family are asymptotically equivalent to the equal division value in the sense of Radzik (2013). | On a class of solidarity values |
S0377221713009922 | Column generation for solving linear programs with a huge number of variables alternates between solving a master problem and a pricing subproblem to add variables to the master problem as needed. The method is known to often suffer from degeneracy in the master problem. Inspired by recent advances in coping with degeneracy in the primal simplex method, we propose a row-reduced column generation method that may take advantage of degenerate solutions. The idea is to reduce the number of constraints to the number of strictly positive basic variables in the current master problem solution. The advantage of this row-reduction is a smaller working basis, and thus a faster re-optimization of the master problem. This comes at the expense of a more involved pricing subproblem, itself eventually solved by column generation, that needs to generate weighted subsets of variables that are said to be compatible with the row-reduction, if possible. Such a subset of variables gives rise to a strict improvement in the objective function value if the weighted combination of the reduced costs is negative. We thus state, as a by-product, a necessary and sufficient optimality condition for linear programming. This methodological paper generalizes the improved primal simplex and dynamic constraints aggregation methods. On highly degenerate linear programs, recent computational experiments with these two algorithms show that the row-reduction of a problem might have a large impact on the solution time. We conclude with a few algorithmic and implementation issues. | Row-reduced column generation for degenerate master problems |
S0377221713009934 | The paper surveys the literature on cooperative advertising in marketing channels (supply chains) using game theoretic methods. During the last decade, in particular, this literature has expanded considerably and has studied static as well as dynamic settings. The survey is divided into two main parts. The first one deals with simple marketing channels having one supplier and one reseller only. The second one covers marketing channels of a more complex structure, having more than one supplier and/or reseller. In the first part we find that a number of results carry over from static to dynamic environments. We also find that the work on static models is quite homogeneous, in the sense that most papers employ the same basic consumer demand specification and address the same situations of vertical integration and noncooperative games with simultaneous or sequential actions. The work on dynamic problems of cooperative advertising also shows some similarities. The second part shows that models incorporating horizontal interaction on either or both layers of the supply chain are much less numerous than those supposing its absence. Participation rates in co-op advertising programs depend on inter- and intra-brand competition, and participation may not always be in the best interest of the firms in the marketing channel. | A survey of game-theoretic models of cooperative advertising |
S0377221713010023 | The philosophical position referred to as critical rationalism (CR) is potentially important to OR because it holds out the possibility of supporting OR’s claim to offer managers a scientifically ‘rational’ approach. However, as developed by Karl Popper, and subsequently extended by David Miller, CR can only support practice (deciding what to do, how to act) in a very limited way; concentrating on the critical application of deductive logic, it ignores, or at least leaves underdeveloped, the crucial role of subjective judgements in making technical and moral choices. By reflecting on the way that managers, engineers, administrators and other professionals take decisions in practice, three strategies are identified for handling the inevitable subjectivity in practical decision-making. It is argued that these three strategies can be understood as attempts to emulate the scientific process of achieving intersubjective consensus, a process inherent in CR. The perspective developed in the paper provides practitioners with a way of understanding their clients’ approach to decision-making and holds out the possibility of making coherent the claim that they are offering advice on how to apply a scientific approach to decision-making; it presents academics with some philosophical challenges and some new avenues for research. | Critical rationalism in practice: Strategies to manage subjectivity in OR investigations |
S0377221713010035 | Given a double round-robin tournament, the traveling umpire problem (TUP) consists of determining which games will be handled by each one of several umpire crews during the tournament. The objective is to minimize the total distance traveled by the umpires, while respecting constraints that include visiting every team at home, and not seeing a team or venue too often. We strengthen a known integer programming formulation for the TUP and use it to implement a relax-and-fix heuristic that improves the quality of 24 out of 25 best-known feasible solutions to instances in the TUP benchmark. We also improve all best-known lower bounds for those instances and, for the first time, provide lower bounds for instances with more than 16 teams. | Improved bounds for the traveling umpire problem: A stronger formulation and a relax-and-fix heuristic |
S0377221713010047 | Unexpected events, such as accidents or track damages, can have a significant impact on the railway system so that trains need to be canceled and delayed. In case of a disruption it is important that dispatchers quickly present a good solution in order to minimize the nuisance for the passengers. In this paper, we focus on adjusting the timetable of a passenger railway operator in case of major disruptions. Both a partial and a complete blockade of a railway line are considered. Given a disrupted infrastructure situation and a forecast of the characteristics of the disruption, our goal is to determine a disposition timetable, specifying which trains will still be operated during the disruption and determining the timetable of these trains. Without explicitly taking the rolling stock rescheduling problem into account, we develop our models such that the probability that feasible solutions to this problem exist is high. The main objective is to maximize the service level offered to the passengers. We present integer programming formulations and test our models using instances from Netherlands Railways. | Adjusting a railway timetable in case of partial or complete blockades |
S0377221713010059 | In this paper, we consider a variant of the many-to-many location-routing problem, where hub facilities have to be located and customers with either pickup or delivery demands have to be combined in vehicle routes. In addition, several commodities and inter-hub transport processes are taken into account. A practical application of the problem can be found in the timber-trade industry, where companies provide their services using hub-and-spoke networks. We present a mixed-integer linear model for the problem and use CPLEX 12.4 to solve small-scale instances. Furthermore, a multi-start procedure based on a fix-and-optimize scheme and a genetic algorithm are introduced that efficiently construct promising solutions for medium- and large-scale instances. A computational performance analysis shows that the presented methods are suitable for practical application. | Many-to-many location-routing with inter-hub transport and multi-commodity pickup-and-delivery |
S0377221713010060 | Conditional Value at Risk (CVaR) is widely used in portfolio optimization as a measure of risk. CVaR is clearly dependent on the underlying probability distribution of the portfolio. We show how copulas can be introduced to any problem that involves distributions and how they can provide solutions for the modeling of the portfolio. We use this to provide the copula formulation of the CVaR of a portfolio. Given the critical dependence of CVaR on the underlying distribution, we use a robust framework to extend our approach to Worst Case CVaR (WCVaR). WCVaR is achieved through the use of rival copulas. These rival copulas have the advantage of exploiting a variety of dependence structures, symmetric and asymmetric. We compare our model against two other models, Gaussian CVaR and Worst Case Markowitz. Our empirical analysis shows that WCVaR can assess the risk more adequately than the two competing models during periods of crisis. (A sketch of an empirical CVaR computation appears after this table.) | Robust portfolio optimization with copulas |
S0377221713010072 | We develop a network-based warehouse model of individual pallet locations and their interactions with appropriate cross aisles in order to evaluate the expected travel distance of a given design. The model is constructive in that it uses Particle Swarm Optimization to determine the best angles of cross aisles and picking aisles for multiple, pre-determined pickup and deposit (P&D) points in a unit-load warehouse. Our results suggest that alternative designs offer reduced expected travel distance, but at the expense of increased storage space. The opportunity for benefit also seems to decline as P&D points increase in number and dispersion. | A constructive aisle design model for unit-load warehouses with multiple pickup and deposit points |
S0377221713010084 | Half-life is a unique characteristic of radioactive substances used in a variety of medical treatments. Radioisotope F-18 used for diagnosing and monitoring many types of cancers has a half-life of 110 minutes. As such, it requires careful coordination of production and delivery by manufacturers and medical end-users. To model this critical production and delivery problem, we develop a mixed integer program and propose a variant of a large neighborhood search algorithm with various improvement algorithms. We conduct several computational experiments to demonstrate the effectiveness of the proposed approach. When applied in a case study, the method shows that improvement in terms of both time and cost is possible in the production and delivery of F-18. | The nuclear medicine production and delivery problem |
S0377221713010102 | We estimate the probability of delinquency and default for a sample of credit card loans using intensity models, via semi-parametric multiplicative hazard models with time-varying covariates. It is the first time these models, previously applied to the estimation of rating transitions, are used on retail loans. Four states are defined in this non-homogeneous Markov chain: up-to-date, one month in arrears, two months in arrears, and default; where transitions between states are affected by individual characteristics of the debtor at application and their repayment behaviour since. These intensity estimations allow for insights into the factors that affect movements towards (and recovery from) delinquency, and into default (or not). Results indicate that different types of debtors behave differently while in different states. The probabilities estimated for each type of transition are then used to make out-of-sample predictions over a specified period of time. | Intensity models and transition probabilities for credit card loan delinquencies |
S0377221713010114 | An n-unit system provisioned with a single warm standby is investigated. The individual units are subject to repairable failures, while the entire system is subject to a nonrepairable failure at some finite but random time in the future. System performance measures for systems observed over a time interval of random duration are introduced. Two models to compute these system performance measures, one employing a policy of block replacement, and the other without a block replacement policy, are developed. Distributional assumptions involving distributions of phase type introduce matrix Laplace transformations into the calculations of the performance measures. It is shown that these measures are easily computed on a laptop computer using Microsoft Excel. A simple economic model is used to illustrate how the performance measures may be used to determine optimal economic design specifications for the warm standby. | Reliability analysis of a single warm-standby system subject to repairable and nonrepairable failures |
S0377221713010126 | We present a methodology for fitting time-varying paired comparisons models in which the parameters are allowed to vary deterministically, as opposed to stochastically, with time. Our dynamic paired comparisons model is based on a new closed form for Stern’s continuum of paired comparisons models, which includes the Bradley–Terry model and the Thurstone–Mosteller model. The dynamic element of our model is facilitated by utilising barycentric rational interpolants (BRIs). An incidental result of our work is to show that BRIs often provide a better fit to data than the obvious alternative of spline interpolation. We use our model to shed light on the debate over who is the greatest tennis player of the Open Era of men’s professional tennis since 1968. Constructing a single rankings list from our model is not trivial as there are many alternative metrics that could be used to identify which player was the best ever. We present three alternative rankings lists derived from our model. In general our rankings lists largely agree with the rankings list based on the number of Grand Slam titles won, which, to some extent, validates our choice of metrics. So who is the greatest tennis player of the Open Era? Roger Federer seems like the most likely candidate, with Bjorn Borg and Jimmy Connors close behind. (A sketch of a static Bradley–Terry fit appears after this table.) | A dynamic paired comparisons model: Who is the greatest tennis player? |
S0377221713010138 | We demonstrate how the problem of determining the ask price for electricity swing options can be considered as a stochastic bilevel program with asymmetric information. Unlike for financial options, there is no way to base the pricing method on no-arbitrage arguments. Two main situations are analyzed: if the seller has strong market power he/she might be able to maximize his/her utility, while in fully competitive situations he/she will just look for a price which makes a profit and has acceptable risk. In both cases the seller has to consider the decision problem of a potential buyer – the valuation problem of determining a fair value for a specific option contract – and anticipate the buyer’s optimal reaction to any proposed strike price. We also discuss some methods for finding numerical solutions of stochastic bilevel problems with a special emphasis on using duality gap penalizations. | Electricity swing option pricing by stochastic bilevel optimization: A survey and new approaches |
S0377221713010151 | We consider a two-echelon, continuous review inventory system under Poisson demand and a one-for-one replenishment policy. Demand is lost if no items are available at the local warehouse, the central depot, or in the pipeline in between. We give a simple, fast and accurate approach to approximate the service levels in this system. In contrast to other methods, we do not need an iterative analysis scheme. Our method works very well for a broad set of cases, with deviations from simulation below 0.1% on average and below 0.36% for 95% of all test instances. | On two-echelon inventory systems with Poisson demand and lost sales |
S0377221713010163 | Recently, there has been growing concern about the environmental and social footprint of business operations. While most papers in the field of supply chain network design focus on economic performance, some recent studies have considered environmental dimensions. However, there still exists a gap in quantitatively modeling social impacts together with environmental and economic impacts. In this study, this gap is addressed by simultaneously considering the three pillars of sustainability in the network design problem. A mixed integer programming model is developed for this multi-objective closed-loop supply chain network problem. In order to solve this NP-hard problem, three novel hybrid metaheuristic methods are developed which are based on adapted imperialist competitive algorithms and variable neighborhood search. To test the efficiency and effectiveness of these algorithms, they are compared not only with each other but also with other strong algorithms. The results indicate that the nested approach achieves better solutions compared with the others. Finally, a case study for a glass industry is used to demonstrate the applicability of the approach. | Designing a sustainable closed-loop supply chain network based on triple bottom line approach: A comparison of metaheuristics hybridization techniques |
S0377221713010175 | Road freight transportation is a major contributor to carbon dioxide equivalent emissions. Reducing these emissions in transportation route planning requires an understanding of vehicle emission models and their inclusion into the existing optimization methods. This paper provides a review of recent research on green road freight transportation. | A review of recent research on green road freight transportation |
S0377221713010187 | Recent research on robust decision aiding has focused on identifying a range of recommendations from preferential information and the selection of representative models compatible with preferential constraints. This study presents an experimental analysis of the relationship between the results of a single decision model (an additive value function) and those from the full set of compatible models in classification problems. Different optimization formulations for selecting a representative model are tested on artificially generated data sets with varying characteristics. | Inferring robust decision models in multicriteria classification problems: An experimental analysis |
S0377221713010199 | We study a class of non-convex optimization problems involving sigmoid functions. We show that sigmoid functions impart a combinatorial element to the optimization variables and make the global optimization computationally hard. We formulate versions of the knapsack problem, the generalized assignment problem and the bin-packing problem with sigmoid utilities. We merge approximation algorithms from discrete optimization with algorithms from continuous optimization to develop approximation algorithms for these NP-hard problems with sigmoid utilities. | Knapsack problems with sigmoid utilities: Approximation algorithms via hybrid optimization |
S0377221713010205 | The Maximum Balanced Subgraph Problem (MBSP) is the problem of finding a subgraph of a signed graph that is balanced and maximizes the cardinality of its vertex set. This paper is the first one to discuss applications of the MBSP arising in three different research areas: the detection of embedded structures, portfolio analysis in risk management and community structure. The efficient solution of the MBSP is also a focus of this paper. We discuss pre-processing routines and heuristic solution approaches to the problem. A GRASP metaheuristic is developed, and improved versions of a greedy heuristic are discussed. Extensive computational experiments are carried out on a set of instances from the applications previously mentioned as well as on a set of random instances. | The maximum balanced subgraph of a signed graph: Applications and solution approaches |
S0377221713010217 | Most developed countries support farming activities through policies that are tailored to meet their specific social, economic and environmental objectives. Economic and environmental efficiency have recently become relevant targets of most of these policies, whose sound implementation can be enhanced by monitoring farm performance from a multidimensional perspective. This paper proposes farm-level technical and environmental efficiency measures that recognize the stochastic conditions in which production takes place. A state-contingent framework is used to model production uncertainty. An implementable representation of the technology is developed using data envelopment analysis. The application focuses on a sample of Catalan arable crop farms. Results suggest that technical efficiency is slightly lower in bad than in good growing conditions. Nitrogen pollution can decrease substantially more under good than bad growing conditions. | Measuring technical and environmental efficiency in a state-contingent technology |
S0377221713010229 | In this research, multistage one-shot decision making under uncertainty is studied. In such a decision problem, a decision maker has one and only one chance to make a decision at each stage with possibilistic information. Based on the one-shot decision theory, approaches to multistage one-shot decision making are proposed. In the proposed approach, a decision maker chooses one state amongst all the states according to his/her attitude about satisfaction and possibility at each stage. The payoff at each stage is associated with the focus points at the succeeding stages. Based on the selected states (focus points), the sequence of optimal decisions is determined by dynamic programming. The proposed method is a fundamental alternative for multistage decision making under uncertainty because it is scenario-based instead of lottery-based as in the other existing methods. The one-shot optimal stopping problem, in which a decision maker has only one chance to decide whether to stop or continue at each stage, is analyzed, and theoretical results are obtained. | Approaches to multistage one-shot decision making |
S0377221713010230 | This paper addresses the problem of collecting inventory of production at various plants having limited storage capacity, violation of which forces plant shutdowns. The production at plants is continuous (with known rates) and a fleet of vehicles needs to be scheduled to transport the commodity from plants to a central storage or depot, possibly making multiple pickups at a given plant to avoid shutdown. One operational objective is to achieve the highest possible rate of product retrieval at the depot, relative to the total travel time of the fleet. This problem is a variant (and generalization) of the inventory routing problem. The motivating application for this paper is barge scheduling for oil pickup from off-shore oil-producing platforms with limited holding capacity, where shutdowns are prohibitively expensive. We develop a new model that is fundamentally different from standard node-arc or path formulations in the literature. The proposed model is based on assigning a unique position to each vehicle visit at a node in a chronological sequence of vehicle-nodal visits. This approach leads to substantial flexibility in modeling multiple visits to a node using multiple vehicles, while controlling the number of binary decision variables. Consequently, our position-based model solves larger model instances significantly more efficiently than the node-arc counterpart. Computational experience of the proposed model with the off-shore barge scheduling application is reported. | Fleet routing position-based model for inventory pickup under production shutdown |
S0377221713010242 | Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods. | A hybrid method for large-scale short-term scheduling of make-and-pack production processes |
S0377221713010254 | This paper considers line search optimization methods using a mathematical framework based on the simple concept of a v-pattern and its properties. This framework provides theoretical guarantees on preserving, in the localizing interval, a local optimum no worse than the starting point. Notably, the framework can be applied to arbitrary unidimensional functions, including multimodal and infinitely valued ones. Enhanced versions of the golden section, bisection and Brent’s methods are proposed and analyzed within this framework: they inherit the improving local optimality guarantee. Under mild assumptions the enhanced algorithms are proved either to converge to a point in the solution set in a finite number of steps or to have all of their accumulation points in the solution set. (A sketch of the baseline golden-section method appears after this table.) | Line search methods with guaranteed asymptotical convergence to an improving local optimum of multimodal functions |
S0377221713010266 | The irregular demand and communication network disruption that are characteristics of situations demanding humanitarian logistics, particularly after large-scale earthquakes, present a unique challenge for relief inventory modelling. However, there are few quantitative inventory models in humanitarian logistics, and assumptions inherent in commercial logistics naturally have little applicability to humanitarian logistics. This paper develops a humanitarian disaster relief inventory model that assumes uniformly distributed lead-time and demand parameters, which is appropriate considering the limited historical data on relief operations. Furthermore, this paper presents different combinations of lead-time and demand scenarios to demonstrate the variability of the model. This is followed by the discussion of a case study wherein the decision variables are evaluated and sensitivity analysis is performed. The results reveal the presence of a unique reorder level in the inventory wherever the order quantity is insensitive to some lead-time demand values, providing valuable direction for humanitarian relief planning efforts and future research. | Relief inventory modelling with stochastic lead-time and demand |
S0377221713010278 | Spare parts are known to be associated with intermittent demand patterns and such patterns cause considerable problems with regard to forecasting and stock control due to their compound nature, which renders the normality assumption invalid. Compound distributions have been used to model intermittent demand patterns; there is however a lack of theoretical analysis and little relevant empirical evidence in support of these distributions. In this paper, we conduct a detailed empirical investigation on the goodness of fit of various compound Poisson distributions and we develop a distribution-based demand classification scheme, the validity of which is also assessed in empirical terms. Our empirical investigation provides evidence in support of certain demand distributions and the work described in this paper should facilitate the task of selecting such distributions in a real world spare parts inventory context. An extensive discussion on parameter estimation related difficulties in this area is also provided. (A sketch simulating such compound demand appears after this table.) | Spare parts management: Linking distributional assumptions to demand classification |
S0377221713010291 | Having sufficient inventories in the forward or piece picking area of a warehouse is an essential condition for warehouse operations. As pickers consume the inventory in the piece racks, there is a risk of stockout. This can be reduced by the timely replenishment of products from the bulk reserve area to the forward area. We develop and compare three policies for prioritizing replenishments for the case where order picking and replenishments occur concurrently because of time restrictions. The first policy, based on the ratio of available inventory to wave demand, reduces the number of stockouts considerably. The other two more sophisticated policies reduce the number of stockouts even more but require much more computation time, and are more costly in terms of implementation, maintenance and software updates. We present the results of implementing one of these policies in the warehouse of a large cosmetics firm. | Prioritizing replenishments of the piece picking area |
S0377221714000022 | A queueing analysis is presented for base-stock controlled multi-stage production-inventory systems with capacity constraints. The exact queueing model is approximated by replacing some state-dependent conditional probabilities (that are used to express the transition rates) by constants. Two recursive algorithms (each with several variants) are developed for analysis of the steady-state performance. It is analytically shown that one of these algorithms is equivalent to the existing approximations given in the literature. The system studied here is more general than the systems studied in the literature. The numerical investigation for three-stage systems shows that the proposed approximations work well to estimate the relevant performance measures. | Approximate queueing models for capacitated multi-stage inventory systems under base-stock control |
S0377221714000034 | In this paper, a memetic algorithm is developed to solve the orienteering problem with hotel selection (OPHS). The algorithm consists of two levels: a genetic component mainly focuses on finding a good sequence of intermediate hotels, whereas six local search moves embedded in a variable neighborhood structure deal with the selection and sequencing of vertices between the hotels. A set of 176 new and larger benchmark instances of OPHS are created based on optimal solutions of regular orienteering problems. Our algorithm is applied on these new instances as well as on 224 benchmark instances from the literature. The results are compared with the known optimal solutions and with the only other existing algorithm for this problem. The results clearly show that our memetic algorithm outperforms the existing algorithm in terms of solution quality and computational time. A sensitivity analysis shows the significant impact of the number of possible sequences of hotels on the difficulty of an OPHS instance. | A memetic algorithm for the orienteering problem with hotel selection |
S0377221714000046 | In this paper we study an inexact steepest descent method for multicriteria optimization whose step size is chosen by Armijo’s rule. We show that this method is well-defined. Moreover, by assuming the quasi-convexity of the multicriteria function, we prove full convergence of any generated sequence to a Pareto critical point. As an application, we offer a model for the self-regulation problem in psychology, using a recent variational rationality approach. (A sketch of single-objective Armijo descent appears after this table.) | The self regulation problem as an inexact steepest descent method for multicriteria optimization |
S0377221714000058 | In this paper we discuss the multicriteria p-facility median location problem on networks with positive and negative weights. We assume that the demand is located at the nodes and can be different for each criterion under consideration. The goal is to obtain the set of Pareto-optimal locations in the graph and the corresponding set of non-dominated objective values. To that end, we first characterize the linearity domains of the distance functions on the graph and compute the image of each linearity domain in the objective space. The lower envelope of a transformation of all these images then gives us the set of all non-dominated points in the objective space and its preimage corresponds to the set of all Pareto-optimal solutions on the graph. For the bicriteria 2-facility case we present a low order polynomial time algorithm. Also for the general case we propose an efficient algorithm, which is polynomial if the number of facilities and criteria is fixed. | The multicriteria p-facility median location problem on networks |
S0377221714000071 | The design of distribution systems raises hard combinatorial optimization problems. For instance, facility location problems must be solved at the strategic decision level to place factories and warehouses, while vehicle routes must be built at the tactical or operational levels to supply customers. In fact, location and routing decisions are interdependent and studies have shown that the overall system cost may be excessive if they are tackled separately. The location-routing problem (LRP) integrates the two kinds of decisions. Given a set of potential depots with opening costs, a fleet of identical vehicles and a set of customers with known demands, the classical LRP consists in opening a subset of depots, assigning customers to them and determining vehicle routes, to minimize a total cost including the cost of open depots, the fixed costs of vehicles used, and the total cost of the routes. Since the last comprehensive survey on the LRP, published by Nagy and Salhi (2007), the number of articles devoted to this problem has grown quickly, calling for a review of the new research. This paper analyzes the recent literature (72 articles) on the standard LRP and new extensions such as several distribution echelons, multiple objectives or uncertain data. Results of state-of-the-art metaheuristics are also compared on standard sets of instances for the classical LRP, the two-echelon LRP and the truck and trailer problem. | A survey of recent research on location-routing problems |
S0377221714000083 | Products are often demanded in tandem because of the cross-selling effect. The demand for an item can increase if sales of its cross-selling-associated items are achieved or decrease when the associated items are out of stock, resulting in lost sales. Therefore, a joint inventory policy should be pursued in a cross-selling system. This paper introduces customer-driven cross-selling into centralized and competitive newsvendor (NV) models by representing an item’s effective demand as a function of other items’ order quantities. We derive first-order optimality conditions for the centralized model in addition to pure-strategy Nash equilibrium conditions and uniqueness conditions of the equilibria for the competitive model. We further develop gradient-based (GB) and iteration-based (IB) algorithms to solve the centralized and competitive models, respectively. A computational study verifies the effectiveness of the proposed algorithms. The computational results show that a larger cross-selling effect leads to a larger order quantity in a centralized NV model but a smaller order quantity in a competitive NV model, and a larger positive correlation between items’ demands leads to higher profits with smaller order quantities in both models. Moreover, NVs will order more items if the demand variance is greater, although this results in lower profits. In a competitive situation, one will prefer smaller order quantities than in a centralized decision situation. (A sketch of the single-item critical-fractile solution appears after this table.) | The multi-item newsvendor model with cross-selling and the solution when demand is jointly normally distributed |
S0377221714000095 | Aviation authorities such as the Federal Aviation Administration (FAA) provide stringent guidelines for aircraft maintenance, with violations leading to significant penalties for airlines. Moreover, poorly maintained aircraft can lead to mass cancellation of flights, causing tremendous inconvenience to passengers and resulting in a significant erosion in brand image for the airline in question. Aircraft maintenance operations of a complex and extended nature can only be performed at designated maintenance bases. Aircraft maintenance planning literature has focused on developing good tail-number routing plans, while assuming that the locations of the maintenance bases themselves are fixed. This paper considers an inverse optimization problem, viz., locating a minimal number of maintenance bases on an Euler tour, while ensuring that all required aircraft maintenance activities can be performed with a stipulated periodicity. The Aircraft Maintenance Base Location Problem (AMBLP) is shown to be NP-complete and a new lower bound is developed for the problem. The performance of four simple “quick and dirty” heuristics for obtaining feasible solutions to AMBLP is analyzed. | The Aircraft Maintenance Base Location Problem |
S0377221714000101 | This paper presents an overview of methods for the analysis of data structured in blocks of variables or in groups of individuals. More specifically, regularized generalized canonical correlation analysis (RGCCA), which is a unifying approach for multiblock data analysis, is extended to be also a unifying tool for multigroup data analysis. The versatility and usefulness of our approach is illustrated on two real datasets. | Regularized generalized canonical correlation analysis for multiblock or multigroup data analysis |
S0377221714000113 | Home Care includes medical, paramedical and social services which are delivered to patients at their domicile rather than in hospital. Managing human and material resources in Home Care services is a difficult task, as the provider has to deal with peculiar constraints (e.g., continuity of care, which requires that a patient always be cared for by the same nurse) and to manage the high variability of patients’ demands. One of the main issues encountered in planning Home Care services under the continuity of care requirement is the nurse-to-patient assignment. Despite the importance of this topic, the problem is only marginally addressed in the literature, where continuity of care is usually treated as a soft constraint rather than a hard one. Uncertainty is another relevant feature of the nurse-to-patient assignment problem, and it is usually managed by adopting stochastic programming or analytical policies. However, both approaches have proved limited, even though they improve the quality of the assignments over those actually made in practice. In this paper, we develop a cardinality-constrained robust assignment model, which exploits the potential of a mathematical programming model without requiring scenario generation. The developed model is tested on real-life instances related to a major Home Care provider operating in Italy, in order to evaluate its capability to reduce the costs related to nurses’ overtimes. | A cardinality-constrained robust model for the assignment problem in Home Care services
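A toy sketch of a cardinality-constrained (Bertsimas and Sim style) robust assignment, using the PuLP modeling library: each patient is assigned to exactly one nurse (continuity of care), nominal demands may deviate upward, and at most Gamma deviations hit their peaks simultaneously; the robust capacity constraint is written in its standard dualized linear form. All data, the capacity and Gamma are hypothetical, and the paper's actual treatment of overtime costs is richer than this illustration.

```python
import pulp

patients, nurses = range(6), range(2)
d    = [3, 4, 2, 5, 3, 4]                                 # nominal weekly care hours
dhat = [1, 2, 1, 2, 1, 2]                                 # maximum upward deviations
cost = [[1, 3], [2, 2], [3, 1], [1, 2], [2, 3], [3, 1]]   # assignment costs
cap, Gamma = 16, 2                                        # capacity; at most 2 peaks at once

m = pulp.LpProblem("robust_nurse_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, j) for i in patients for j in nurses], cat="Binary")
z = pulp.LpVariable.dicts("z", nurses, lowBound=0)
q = pulp.LpVariable.dicts("q", [(i, j) for i in patients for j in nurses], lowBound=0)

m += pulp.lpSum(cost[i][j] * x[i, j] for i in patients for j in nurses)
for i in patients:                     # continuity of care: one nurse per patient
    m += pulp.lpSum(x[i, j] for j in nurses) == 1
for j in nurses:                       # dualized cardinality-constrained capacity
    m += (pulp.lpSum(d[i] * x[i, j] for i in patients)
          + Gamma * z[j] + pulp.lpSum(q[i, j] for i in patients)) <= cap
    for i in patients:
        m += z[j] + q[i, j] >= dhat[i] * x[i, j]

m.solve(pulp.PULP_CBC_CMD(msg=False))
for i in patients:
    for j in nurses:
        if pulp.value(x[i, j]) > 0.5:
            print(f"patient {i} -> nurse {j}")
```

The appeal, as the abstract notes, is that no scenarios are generated: the single budget parameter Gamma controls how conservatively demand variability is hedged.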
S0377221714000125 | Cook and Zhu (2007) introduced an innovative method to deal with flexible measures. Toloo (2009) found a computational problem in their approach and tackled this issue. Amirteimoori and Emrouznejad (2012) claimed that both the Cook and Zhu (2007) and Toloo (2009) models overestimate efficiency. In this response, we prove that their claim is incorrect and that there is no overestimation in these approaches. | Notes on classifying inputs and outputs in data envelopment analysis: A comment
S0377221714000137 | This paper analyzes the level and cyclicality of regulatory bank capital for asset portfolio securitizations in relation to the cyclicality of capital requirements for the underlying loan portfolio as under Basel II/III. We find that the cyclicality of capital requirements is higher for (i) asset portfolio securitizations relative to primary loan portfolios, (ii) Ratings Based Approach (RBA) relative to the Supervisory Formula Approach, (iii) given the RBA for a point-in-time rating methodology relative to a rate-and-forget rating methodology, and (iv) under the passive reinvestment rule relative to alternative rules. Capital requirements of the individual tranches reveal that the volatility of aggregated capital charges for the securitized portfolio is triggered by the most senior tranches. This is due to the fact that senior tranches are more sensitive to the macroeconomy. An empirical analysis provides evidence that current credit ratings are time-constant and that economic losses for securitizations have exceeded the required capital in the recent financial crisis. | Asset portfolio securitizations and cyclicality of regulatory capital |
S0377221714000149 | The paper deals with the two most important mathematical models for sequencing products on a mixed-model assembly line in order to minimize work overload: the mixed-model sequencing (MMS) model and the car sequencing (CS) model. Although both models follow the same underlying objective, only MMS directly addresses the work overload in its objective function. CS instead applies a surrogate objective using so-called sequencing rules which restrict the labor-intensive options associated with the products in the sequence. The CS model minimizes the number of violations of the respective sequencing rules, which is widely assumed to lead to minimum work overload. This paper experimentally compares CS with MMS in order to quantify the gap in solution quality between the two models. The paper studies several variants of CS with different sequencing rule generation approaches and different objective functions from the literature as well as a newly introduced weighting factor. The performance of the different models is evaluated on a variety of random test instances. Although the objectives of CS and MMS are positively linearly correlated, results show that a sequence found by CS leads to at least 15% more work overload than a solution found by MMS. For none of the considered test instances and for none of the three different objective functions is CS able to produce competitive results in terms of solution quality (work overload) compared to MMS. The results suggest that decision makers using CS should investigate whether MMS would lead to better sequencing orders for their specific instances. | Car sequencing versus mixed-model sequencing: A computational study
S0377221714000150 | We establish a flexible capacity strategy model with multiple market periods under demand uncertainty and investment constraints. In the model, a firm makes its capacity decision under a financial budget constraint at the beginning of the planning horizon, which comprises n market periods. In each market period, the firm goes through three decision-making stages: the safety production stage, the additional production stage and the optimal sales stage. We formulate the problem and obtain the optimal capacity, the optimal safety production, the optimal additional production and the optimal sales of each market period under different situations. We find that there are two thresholds for the unit capacity cost. When the capacity cost is very low, the optimal capacity is determined by the financial budget; when the capacity cost is very high, the firm keeps its optimal capacity at its safety production level; and when the cost lies between the two thresholds, the optimal capacity is determined by the capacity cost, the number of market periods and the unit cost of additional production. Further, we explore the endogenous safety production level. We verify the conditions under which the firm has different optimal safety production levels. Finally, we prove that the firm can benefit from the investment only when the designed planning horizon is longer than a threshold. Moreover, we also derive the formulae for the above three thresholds. | Flexible capacity strategy with multiple market periods under demand uncertainty and investment constraint
S0377221714000162 | The general aim of this study is to provide a guide to the future marketing decisions of a firm, using a model to predict customer lifetime values. The proposed framework aims to eliminate the limitations and drawbacks of the majority of models encountered in the literature through a simple and industry-specific model with easily measurable and objective indicators. In addition, this model predicts the potential value of the current customers rather than measuring the current value, which has generally been used in the majority of previous studies. This study contributes to the literature by helping to make future marketing decisions via Markov decision processes for a company that offers several types of products. Another contribution is that the states for Markov decision processes are also generated using the predicted customer lifetime values where the prediction is realized by a regression-based model. Finally, a real world application of the proposed model is provided in the banking sector to show the empirical validity of the model. Therefore, we believe that the proposed framework and the developed model can guide both practitioners and researchers. | Analysis of customer lifetime value and marketing expenditure decisions through a Markovian-based model |
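The decision layer of such a framework can be illustrated with plain value iteration on a small MDP whose states are CLV segments; the transition matrices, profits and marketing costs below are hypothetical stand-ins for the regression-predicted CLV states used in the paper.

```python
import numpy as np

# P[a] is the segment transition matrix under action a (0: no spend, 1: campaign);
# states are ordered low -> high customer value.
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.3, 0.6, 0.1],
     [0.1, 0.3, 0.6]],
    [[0.6, 0.3, 0.1],      # spending shifts mass toward higher-value states
     [0.1, 0.6, 0.3],
     [0.0, 0.2, 0.8]],
])
profit = np.array([10.0, 40.0, 90.0])   # per-period expected profit by segment
spend = np.array([0.0, 15.0])           # cost of each marketing action
gamma = 0.9                             # discount factor

V = np.zeros(3)
for _ in range(1000):                   # value iteration
    Q = profit[None, :] - spend[:, None] + gamma * P @ V   # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("segment values (discounted CLV):", np.round(V_new, 1))
print("optimal action per segment:", Q.argmax(axis=0))
```

The fixed point V is precisely a discounted customer lifetime value per segment, and the argmax row reads off which segments are worth the campaign spend.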
S0377221714000174 | Supplier reliability is a key determinant of a manufacturer’s competitiveness. It reflects a supplier’s capability of order fulfillment, which can be measured by the percentage of order quantity delivered in a given time window. A perfectly reliable supplier delivers an amount equal to the order placed by its customer, while an unreliable supplier may deliver an amount less than the amount ordered. Therefore, when suppliers are unreliable, manufacturers often have incentives to help suppliers improve delivery reliability. Suppliers, however, often work with multiple manufacturers and the benefit of enhanced reliability may spill over to competing manufacturers. In this study, we explore how potential spillover influences manufacturers’ incentives to improve supplier’s reliability. We consider two manufacturers that compete with imperfectly substitutable products on Type I service level (i.e., in-stock probability). The manufacturers share a common supplier who, due to variations in production quality or yield, is unreliable. Manufacturers may exert efforts to improve the supplier’s reliability in the sense that the delivered quantity is stochastically larger after improvement. We develop a two-stage model that encompasses supplier improvement, uncertain supply and random demand in a competitive setting. In this complex model, we characterize the manufacturers’ equilibrium in-stock probability. Moreover, we characterize sufficient conditions for the existence of the equilibrium of the manufacturers’ improvement efforts. Finally, we numerically test the impact of market characteristics on the manufacturers’ equilibrium improvement efforts. We find that a manufacturer’s equilibrium improvement effort usually declines in market competition, market uncertainty or spillover effect, although its expected equilibrium profit typically increases in spillover effect. | Improving reliability of a shared supplier with competition and spillovers |
S0377221714000186 | The variable returns to scale data envelopment analysis (DEA) model is developed with a maintained hypothesis of convexity in input–output space. This hypothesis is not consistent with standard microeconomic production theory that posits an S-shape for the production frontier, i.e. for production technologies that obey the Regular Ultra Passum Law. Consequently, measures of technical efficiency assuming convexity are biased downward. In this paper, we provide a more general DEA model that allows the S-shape. | Maintaining the Regular Ultra Passum Law in data envelopment analysis |
S0377221714000198 | Transmission congestion management is a vital task in electricity markets. Series FACTS devices can be used as effective tools to relieve congestion, mostly employing Optimal Power Flow based methods in which total cost is minimized as the objective function. However, power system stability may deteriorate after congestion is relieved using traditional methods, leaving the power system vulnerable to disturbances. In this paper, a multi-objective framework is proposed for congestion management in which three competing objective functions, including total operating cost, voltage and transient stability margins, are simultaneously optimized. This leads to an economical and robust operating point that includes adequate levels of voltage and transient security. The proposed method optimally locates and sizes series FACTS devices on the most congested branches determined by a priority list based on Locational Marginal Prices. Individual sets of Pareto solutions, resulting from solving the multi-objective congestion management problem for each location of FACTS devices, are merged together to create the comprehensive Pareto set. Results of testing the proposed method on the well-known New England test system are discussed in detail and confirm the efficiency of the proposed method. | Locating series FACTS devices for multi-objective congestion management improving voltage and transient stability
S0377221714000393 | A monopolist typically defers entry into an industry as both price uncertainty and the level of risk aversion increase. By contrast, the presence of a rival typically hastens entry under risk neutrality. Here, we examine these two opposing effects in a duopoly setting. We demonstrate that the value of a firm and its entry decision behave differently with risk aversion and uncertainty depending on the type of competition. Interestingly, if the leader’s role is defined endogenously, then higher uncertainty makes her relatively better off, whereas with the roles exogenously defined, the impact of uncertainty is ambiguous. | Duopolistic competition under risk aversion and uncertainty |
S0377221714000411 | The voting system of the Legislative Council of Hong Kong (Legco) is sometimes unicameral and sometimes bicameral, depending on whether the bill is proposed by the Hong Kong government. Therefore, although it has no representative within Legco, the Hong Kong government has a certain degree of legislative power, as if there were a virtual representative of the Hong Kong government within Legco. By introducing such a virtual representative of the Hong Kong government, we show that Legco is a three-dimensional voting system. We also calculate two power indices of the Hong Kong government through this virtual representative and consider the C-dimension and the W-dimension of Legco. Finally, some implications of this Legco model for the current constitutional reform in Hong Kong are given. | A three-dimensional voting system in Hong Kong
S0377221714000423 | A reliability system subject to shocks producing damage and failure is considered. The source of shocks producing failures is governed by a Markovian arrival process. All shocks produce deterioration, and some of them produce failures, which can be repairable or non-repairable. Repair times are governed by a phase-type distribution. The number of deteriorating shocks that the system can withstand is fixed. After a fatal failure the system is replaced by an identical one. For this model the availability, the reliability, and the rate of occurrence of the different types of failures are calculated. It is shown that this model extends others previously published in the literature. | A reliability system under different types of shock governed by a Markovian arrival process and maintenance policy K
S0377221714000435 | This paper proposes a new algorithm to solve nonsmooth multiobjective programming. The algorithm is a descent direction method to obtain the critical point (a necessary condition for Pareto optimality). We analyze both global and local convergence results under some assumptions. Numerical tests are also given. | Nonsmooth multiobjective programming with quasi-Newton methods |
S0377221714000447 | The paper proposes a rank-dependent bi-criterion (travel time & monetary travel cost) equilibrium model for route choice problems, in which stochasticities in both the criteria measurements and the subjective preferences are considered simultaneously. Travelers rank all the choices according to the generalized travel dis-utility and then choose from the first K best-ranked ones. By searching inversely the supporting preference sets for each alternative in each rank, the overall choice probability of a path is determined. The equilibrium model is formulated and transformed into a fixed-point problem. The existence of the equilibrium is established for a simple two-link network but may not be guaranteed for more complex network topologies. When K = 1, the proposed model reduces to the optimal user equilibrium that allows for stochasticities in the criteria measurements and arbitrarily distributed preferences. Remarks on the selection of some model parameters and on the solution algorithms are also given. Two numerical examples illustrate the implementation of the model as well as its capability and flexibility in handling heterogeneity in traveler preferences and requirements. The paper concludes with discussions of the assumptions and limitations of the new model and possible future research opportunities. | A rank-dependent bi-criterion equilibrium model for stochastic transportation environment
S0377221714000459 | We study the complete set packing problem (CSPP) where the family of feasible subsets may include all possible combinations of objects. This setting arises in applications such as combinatorial auctions (for selecting optimal bids) and cooperative game theory (for finding optimal coalition structures). Although the set packing problem has been well-studied in the literature, where exact and approximation algorithms can solve very large instances with up to hundreds of objects and thousands of feasible subsets, these methods are not extendable to the CSPP since the number of feasible subsets is exponentially large. Formulating the CSPP as an MILP and solving it directly, using CPLEX for example, is impossible for problems with more than 20 objects. We propose a new mathematical formulation for the CSPP that directly leads to an efficient algorithm for finding feasible set packings (upper bounds). We also propose a new formulation for finding tighter lower bounds compared to LP relaxation and develop an efficient method for solving the corresponding large-scale MILP. We test the algorithm with the winner determination problem in spectrum auctions, the coalition structure generation problem in coalitional skill games, and a number of other simulated problems that appear in the literature. | A fast approximation algorithm for solving the complete set packing problem |
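To see why the complete setting explodes, the classic O(3^n) dynamic program over subsets, which is the textbook exact method for complete set partitioning and coalition structure generation (not the authors' algorithm), already enumerates every submask of every subset and is practical only up to roughly a dozen objects:

```python
import random

n = 12
random.seed(0)
# v[S]: value of coalition/subset S, here synthetic and mildly superadditive.
v = [random.random() * bin(S).count("1") for S in range(1 << n)]
v[0] = 0.0

best = [0.0] * (1 << n)          # best[S]: optimal partition value of subset S
for S in range(1, 1 << n):
    low = S & -S                 # fix the lowest member to avoid double counting
    T = S
    while T:
        if T & low:              # only blocks containing the fixed member
            best[S] = max(best[S], v[T] + best[S ^ T])
        T = (T - 1) & S          # enumerate all submasks of S

print("optimal partition value:", round(best[(1 << n) - 1], 4))
```

With n = 20 the same loop would already touch about 3.5 billion (subset, submask) pairs, which is the wall the paper's bounding formulations are designed to avoid.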
S0377221714000460 | Part obsolescence is a common problem across industries, from the avionics and military sectors to most original equipment manufacturers serving industrial markets. When a part supplier announces that a part will become obsolete, the OEM can choose from a number of sourcing options. In practice, the three most commonly adopted mitigation strategies are: (1) a lifetime, or life-of-type (LOT), buy from the original supplier; (2) part substitution, which finds a suitable alternative; and (3) line redesign, which modifies the production line to accommodate a new part. We first develop a framework incorporating fixed cost, variable cost, lead time, demand uncertainty and the discount rate to directly compare and characterize these three sourcing strategies in a static context. We next formulate an integrated sourcing approach that starts with a bridge buy and may continue with part substitution or line redesign when the original parts are depleted. Through numerical studies, we identify the joint impact of the problem parameters on the static and integrated sourcing strategies and the optimal choice among them. While the integrated sourcing approach outperforms the static ones in many cases, it is not a dominant strategy. | Modeling sourcing strategies to mitigate part obsolescence
S0377221714000472 | This paper discusses a new meta-DEA approach to solve the problem of choosing direction vectors when estimating the directional distance function. The proposed model emphasizes finding the “direction” for productivity improvement rather than estimating the “score” of efficiency; focusing on “planning” over “evaluation”. In fact, the direction towards marginal profit maximization implies a step-by-step improvement and “wait-and-see” decision process, which is more consistent with the practical decision-making process. An empirical study of U.S. coal-fired power plants operating in 2011 validates the proposed model. The results show that the efficiency measure using the proposed direction is consistent with all other indices with the exception of the direction towards the profit-maximized benchmark. We conclude that the marginal profit maximization is a useful guide for determining direction in the directional distance function. | Meta-data envelopment analysis: Finding a direction towards marginal profit maximization |
S0377221714000484 | We derive no-arbitrage bounds for expected excess returns to generate scenarios used in financial applications. The bounds make it possible to distinguish three regions: one where arbitrage opportunities will never exist, a second where arbitrage may be present, and a third where arbitrage opportunities will always exist. No-arbitrage bounds are derived in closed form for a given covariance matrix using the least possible number of scenarios. Empirical examples illustrate the practical potential of knowing these bounds. | No-arbitrage bounds for financial scenarios
S0377221714000496 | Large corporations fund their capital and operational expenses by issuing bonds with a variety of indexations, denominations, maturities and amortization schedules. We propose a multistage linear stochastic programming model that optimizes bond issuance by minimizing the mean funding cost while keeping leverage under control and insolvency risk at an acceptable level. The funding requirements are determined by a fixed investment schedule with uncertain cash flows. Candidate bonds are described in a detailed and realistic manner. A specific scenario tree structure guarantees computational tractability even for long horizon problems. Based on a simplified example, we present a sensitivity analysis of the first stage solution and the stochastic efficient frontier of the mean-risk trade-off. A realistic exercise stresses the importance of controlling leverage. Based on the proposed model, a financial planning tool has been implemented and deployed for Brazilian oil company Petrobras. | A multistage linear stochastic programming model for optimal corporate debt management |
S0377221714000502 | The line-cell (or line-seru) conversion is an innovation in assembly systems applied widely in the electronics industry. Its essence is tearing out an assembly line and adopting a mini assembly unit, called a seru (or Japanese-style assembly cell). In this paper, we develop a multi-objective optimization model to investigate two line-cell conversion performance measures: the total throughput time (TTPT) and the total labor hours (TLH). We analyze the bi-objective model to characterize its mathematical properties, such as the solution space, combinatorial complexity, and non-convexity. Owing to the difficulties of the model, a non-dominated sorting genetic algorithm that can solve large-size problems in a reasonable time is developed. To verify the reliability of the algorithm, solutions are compared with those obtained from the enumeration method. We find that the proposed genetic algorithm is useful and obtains reliable solutions in most cases. | Mathematical analysis and solutions for multi-objective line-cell conversion problem
S0377221714000514 | Using five alternative data sets and a range of specifications concerning the underlying linear predictability models, we study whether long-run dynamic optimizing portfolio strategies may actually outperform simpler benchmarks in out-of-sample tests. The dynamic portfolio problems are solved using a combination of dynamic programming and Monte Carlo methods. The benchmarks are represented by two typical fixed-mix strategies: the celebrated equally-weighted portfolio and a myopic, Markowitz-style strategy that fails to account for any predictability in asset returns. Within a framework in which the investor maximizes expected HARA (hyperbolic absolute risk aversion) utility in a frictionless market, our key finding is that there are enormous differences in optimal long-horizon (in-sample) weights between the mean–variance benchmark and the optimal dynamic weights. In out-of-sample comparisons, there is, however, no clear-cut, systematic evidence that long-horizon dynamic strategies outperform naively diversified portfolios. | Can long-run dynamic optimal strategies outperform fixed-mix portfolios? Evidence from multiple data sets
S0377221714000526 | The transportation system examined in this paper is a city tram system in which failed trams are replaced by reliable spare ones. Once a failed tram is repaired and delivered, it returns to service. There is a time window within which a failed tram has to be either replaced (exchanged) by a spare or repaired and delivered. The time window is therefore paramount to user perception of transport system unreliability. The time between two subsequent failures, the exchange time, and the repair-plus-delivery time are described by random variables A, E, and D, respectively; A/E/D is adopted as the notation for these variables. There is a finite number of spare trams. Delivery time does not depend on the number of repair facilities, so the repair-and-delivery process can be treated as one with an infinite number of facilities. The undesirable event, called a hazard, occurs when neither the replacement nor the delivery has been completed within the time window. The goal of the paper is to find the following relationships: the hazard probability of the tram system and the mean hazard time as functions of the number of spare trams. For systems with exponential time between failures, Weibull exchange and exponential delivery (M/W/M in the proposed notation), two accurate solutions have been found. For systems with Weibull time between failures with shape parameter in the range 0.9 to 1.1, Weibull exchange and exponential delivery (i.e., W/W/M), a method yielding small errors has been provided. For the most general and difficult case, in which all the random variables follow Weibull distributions (W/W/W), a method returning moderate errors has been given. | Exact and approximation methods for dependability assessment of tram systems with time window
S0377221714000538 | This paper considers the issue of performing testing inference in fixed effects panel data models under heteroskedasticity of unknown form. We use numerical integration to compute the exact null distributions of different quasi-t test statistics and compare them to their limiting counterpart. The test statistics use different heteroskedasticity-consistent standard errors. Our results reveal that the asymptotic approximation is usually poor in small samples when the test statistic is based on the covariance matrix estimator proposed by Arellano (1987). The quality of the approximation is greatly increased when the standard error is obtained using other heteroskedasticity-consistent estimators, most notably the CHC4 estimator. Our results also reveal that the performance of Arellano’s test improves considerably when standard errors are computed using restricted residuals. | Testing inference in heteroskedastic fixed effects models |
S0377221714000551 | We develop a flexible discrete-time hedging methodology that minimizes the expected value of any desired penalty function of the hedging error within a general regime-switching framework. A numerical algorithm based on backward recursion allows for the sequential construction of an optimal hedging strategy. Numerical experiments comparing this and other methodologies show a relative expected penalty reduction ranging between 0.9% and 12.6% with respect to the best benchmark. | Optimal hedging when the underlying asset follows a regime-switching Markov process
S0377221714000563 | The personnel task scheduling problem is a subject of commercial interest which has been investigated since the 1950s. This paper proposes an effective and efficient three-phase algorithm for solving the shift minimization personnel task scheduling problem (SMPTSP). To illustrate the increased efficacy of the proposed algorithm over an existing algorithm, computational experiments are performed on a test problem set with characteristics motivated by employee scheduling applications. Experimental results show that the proposed algorithm outperforms the existing algorithm in terms of providing optimal solutions, improving upon most of the best-known solutions and revealing high-quality feasible solutions for those unsolved test instances in the literature. | Minimizing shifts for personnel task scheduling problems: A three-phase algorithm |
S0377221714000575 | Oil tankers play a fundamental role in every offshore petroleum supply chain and, given their high price, it is essential to optimize their use. Since this optimization requires handling detailed operational aspects, complete optimization models are typically intractable. Thus, a usual approach is to solve a tactical-level model prior to optimizing the operational details. In this case, it is desirable that tactical models be as precise as possible, to avoid overly severe adjustments at the next optimization level. In this paper, we study tactical models for a crude oil transportation problem by tankers. We build on a model previously proposed in the literature, which considers inventory capacities and discrete lot sizes to be transported, aiming to meet given demands over a finite time horizon. We compare several formulations for this model using 50 instances from the literature and propose 25 new, harder ones. A column generation-based heuristic is also proposed to find good feasible solutions with less computational burden than the heuristics of the commercial solver used. | Formulations for a problem of petroleum transportation
S0377221714000599 | Game-theoretic analysis of queueing systems is an important research direction in queueing theory. In this paper, we study the service rate control problem of closed Jackson networks from a game-theoretic perspective. The payoff function consists of a holding cost and an operating cost. Each server optimizes its service rate control strategy to maximize its own average payoff. We formulate this problem as a non-cooperative stochastic game with multiple players. By utilizing the problem structure of closed Jackson networks, we derive a difference equation which quantifies the performance difference under any two different strategies. We prove that no matter what strategies the other servers adopt, the best response of a server is to choose its service rates on the boundary. Thus, we can limit the search for equilibrium strategy profiles from a multidimensional continuous polyhedron to the set of its vertices. We further develop an iterative algorithm to find the Nash equilibrium. Moreover, we derive the social optimum of this problem, which is compared with the equilibrium using the price of anarchy. Bounds on the price of anarchy of this problem are also obtained. Finally, simulation experiments are conducted to demonstrate the main idea of this paper. | Service rate control of closed Jackson networks from game theoretic perspective
S0377221714000605 | This paper presents the results of developing a branch-and-price algorithm and an ejection chain method for nurse rostering problems. The approach is general enough to be applied to a wide range of benchmark nurse rostering instances. The majority of the instances are real-world applications. They have been collected from a variety of sources including industrial collaborators, other researchers and various publications. The results of entering these algorithms in the 2010 International Nurse Rostering Competition are also presented and discussed. In addition, both algorithms incorporate a dynamic programming method, which we also present. The algorithms contain a number of heuristics and other features which make them very effective on the broad rostering model introduced. | New approaches to nurse rostering benchmark instances
S0377221714000617 | Two methods of reducing the risk of disruptions to distribution systems are (1) strategically locating facilities to mitigate against disruptions and (2) hardening facilities. These two activities have been treated separately in most of the academic literature. This article integrates facility location and facility hardening decisions by studying the minimax facility location and hardening problem (MFLHP), which seeks to minimize the maximum distance from a demand point to its closest located facility after facility disruptions. The formulation assumes that the decision maker is risk averse and thus interested in mitigating against the facility disruption scenario with the largest consequence, an objective that is appropriate for modeling facility interdiction. By taking advantage of the MFLHP’s structure, a natural three-stage formulation is reformulated as a single-stage mixed-integer program (MIP). Rather than solving the MIP directly, the MFLHP can be decomposed into sub-problems and solved using a binary search algorithm. This binary search algorithm is the basis for a multi-objective algorithm, which computes the Pareto-efficient set for the pre- and post-disruption maximum distance. The multi-objective algorithm is illustrated in a numerical example, and experimental results are presented that analyze the tradeoff between objectives. | A multi-objective integrated facility location-hardening model: Analyzing the pre- and post-disruption tradeoff |
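For a tiny instance, the minimax locate-harden-interdict structure can be evaluated by brute force, which makes the three-stage objective concrete; the random geometry and the parameters p (open), h (harden) and r (interdict) below are hypothetical, and this enumeration is only a foil for the paper's binary-search and MIP machinery.

```python
import itertools
import math
import random

random.seed(1)
demand = [(random.random(), random.random()) for _ in range(15)]
sites  = [(random.random(), random.random()) for _ in range(7)]
p, h, r = 4, 1, 1      # open 4 sites, harden 1 of them, adversary destroys 1 soft site

def max_dist(open_sites):
    # largest demand-to-nearest-open-facility distance
    return max(min(math.dist(d, sites[j]) for j in open_sites) for d in demand)

best_val, best_plan = float("inf"), None
for opened in itertools.combinations(range(len(sites)), p):
    for hardened in itertools.combinations(opened, h):
        soft = [j for j in opened if j not in hardened]
        worst = max(                     # adversary removes the most damaging soft sites
            max_dist([j for j in opened if j not in failed])
            for failed in itertools.combinations(soft, min(r, len(soft))))
        if worst < best_val:
            best_val, best_plan = worst, (opened, hardened)

print("open:", best_plan[0], "harden:", best_plan[1],
      "worst-case max distance:", round(best_val, 3))
```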
S0377221714000629 | We consider congestion games on networks with nonatomic users and user-specific costs. We are interested in the uniqueness property defined by Milchtaich (2005) as the uniqueness of equilibrium flows for all assignments of strictly increasing cost functions. He settled the case of two-terminal networks. As a corollary of his result, it is possible to prove that some other networks have the uniqueness property as well by adding a common fictitious origin and destination. In the present work, we find a necessary condition for networks with several origin–destination pairs to have the uniqueness property in terms of excluded minors or subgraphs. As a key result, we completely characterize the bidirectional rings for which the uniqueness property holds: it holds precisely for nine networks and those obtained from them by elementary operations. For other bidirectional rings, we exhibit affine cost functions yielding two distinct equilibrium flows. Related results are also proven. For instance, we characterize networks having the uniqueness property for any choice of origin–destination pairs. | The uniqueness property for networks with several origin–destination pairs
S0377221714000630 | This paper discusses the way that different operational characteristics, including existing capacity, scale economies, and production policy, have an important influence on capacity outcomes when firms compete in the marketplace. We formulate a game-theoretical model where each firm has an existing capacity and faces both fixed and variable costs in purchasing additional capacity. Specifically, the firms simultaneously (or sequentially) make their expansion decisions, and then simultaneously decide their production decisions with these outputs being capacity constrained. We also compare our results with cases where production has to match capacity. By characterizing the firms’ capacity and production choices in equilibrium, our analysis shows that the operational factors play a crucial role in determining what happens. The modeling and analysis in the paper give insight into the way that the ability to use less production capacity than has been built will undermine the commitment value of existing capacity. If a commitment to full production is not possible, sinking operational costs can enable a firm to keep some preemptive advantage. We also show that the existence of fixed costs can introduce cases where there is either no pure-strategy equilibrium or there are multiple equilibria. The managerial implications of our analysis are noted in the discussion. Our central contribution in this paper is the innovative integration of the strategic analysis of capacity expansion and the well-known (s, S) policy in operations and supply chain theory. | Competition through capacity investment under asymmetric existing capacities and costs
S0377221714000642 | The internal estimates of Loss Given Default (LGD) must reflect economic downturn conditions, thus estimating the “downturn LGD”, as the new Basel Capital Accord (Basel II) establishes. We suggest a methodology to estimate the downturn LGD distribution that overcomes the arbitrariness of the methods suggested by Basel II. We assume that LGD is a mixture of an expansion and a recession distribution. In this work, we propose an accurate parametric model for LGD and estimate its parameters by the EM algorithm. Finally, we apply the proposed model to empirical data on Italian bank loans. | Downturn Loss Given Default: Mixture distribution estimation
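A generic stand-in for the estimation step is EM for a two-component mixture fitted to logit-transformed LGDs; the synthetic data and Gaussian components below only illustrate how regime weights and parameters are recovered, since the paper's parametric LGD model is different.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic LGDs in (0, 1): an expansion regime (low losses) mixed with
# a recession regime (high losses).
lgd = np.concatenate([rng.beta(2, 8, 700), rng.beta(8, 3, 300)])
x = np.log(lgd / (1 - lgd))            # logit transform to the real line

w  = np.array([0.5, 0.5])              # mixture weights
mu = np.array([-1.0, 1.0])             # component means (logit scale)
sd = np.array([1.0, 1.0])              # component standard deviations
for _ in range(300):
    # E-step: posterior responsibility of each regime for each observation
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted moment updates
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("estimated regime weights:", np.round(w, 3))
print("estimated regime means (logit scale):", np.round(mu, 3))
```

The recession component's weight is the downturn probability mass, and the fitted component itself is the downturn LGD distribution from which a capital estimate can be read.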
S0377221714000654 | With the fast development of financial products and services, banks’ credit departments have collected large amounts of data, which risk analysts use to build appropriate credit scoring models to evaluate an applicant’s credit risk accurately. One of these models is the Multi-Criteria Optimization Classifier (MCOC). By finding a trade-off between the overlapping of different classes and the total distance from input points to the decision boundary, MCOC can derive a decision function from distinct classes of training data and subsequently use this function to predict the class label of an unseen sample. In many real-world applications, however, owing to noise, outliers, class imbalance, nonlinearly separable problems and other uncertainties in data, classification quality degenerates rapidly when using MCOC. In this paper, we propose a novel multi-criteria optimization classifier based on kernel, fuzzification, and penalty factors (KFP-MCOC): first, a kernel function is used to map input points into a high-dimensional feature space; then an appropriate fuzzy membership function is introduced to MCOC and associated with each data point in the feature space, and unequal penalty factors are added to the input points of imbalanced classes. Thus, the effects of the aforementioned problems are reduced. Our experimental results of credit risk evaluation and their comparison with MCOC, support vector machines (SVM) and fuzzy SVM show that KFP-MCOC can enhance the separation of different applicants, the efficiency of credit risk scoring, and the generalization of predicting the credit rank of a new credit applicant. | Credit risk evaluation using multi-criteria optimization classifier with kernel, fuzzification and penalty factors
S0377221714000666 | Retailers, from fashion stores to grocery stores, have to decide what range of products to offer, i.e., their product assortment. Frequent introduction of new products, a recent business trend, makes predicting demand more difficult, which in turn complicates assortment planning. We propose and study a stochastic dynamic programming model for simultaneously making assortment and pricing decisions which incorporates demand learning using Bayesian updates. We show analytically that it is profitable for the retailer to use price reductions early in the sales season to accelerate demand learning. A computational study demonstrates the benefits of such a policy and provides managerial insights that may help improve a retailer’s profitability. | Pricing to accelerate demand learning in dynamic assortment planning for perishable products |
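The learning mechanism can be sketched with a conjugate Gamma-Poisson model in which a known price-cut multiplier scales the demand rate; all numbers are hypothetical and the paper's Bayesian update and pricing policy are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 4.0               # unknown base demand rate per period
lift = 1.8                    # known demand multiplier of a price cut

a, b = 2.0, 1.0               # Gamma(a, b) prior on the base rate (mean a/b)
for t in range(8):
    promo = t < 3             # cut the price early in the season
    mult = lift if promo else 1.0
    sales = rng.poisson(mult * true_rate)
    # Conjugate update: Poisson(mult * rate) data add `sales` to the shape
    # and `mult` to the rate parameter of the Gamma posterior.
    a += sales
    b += mult
    print(f"t={t} promo={promo} sales={sales} "
          f"posterior rate = {a / b:.2f} +/- {np.sqrt(a) / b:.2f}")
```

A promotion period contributes 1.8 rather than 1.0 to the posterior rate parameter b, so early price cuts shrink the posterior variance faster: this is the learning-acceleration effect the analysis formalizes.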
S0377221714000678 | Pesticides are widely used by crop producers in developed countries to combat risk associated with pests and diseases. However, their indiscriminate use can lead to various environmental spillovers that may alter the agricultural production environment, thus contributing to production risk. This study utilises a data envelopment analysis (DEA) approach to measure the performance of arable farms, incorporating pesticides’ environmental spillovers and output variance as undesirable outputs in the efficiency analysis and taking explicitly into account the effect of pesticides and other inputs on production risk. The application focuses on panel data from Dutch arable farms over the period 2003–2007. A moment approach is used to compute output variance, providing empirical representations of the risk-increasing or -decreasing nature of the inputs used. Finally, shadow values of risk-adjusted inputs are computed. We find that pesticides are overused in Dutch arable farming and there is considerable evidence of the need to decrease pesticides’ environmental spillovers. | Pesticide use, environmental spillovers and efficiency: A DEA risk-adjusted efficiency approach applied to Dutch arable farming
S0377221714000691 | This paper provides a new model of network formation that bridges the gap between the two benchmark game-theoretic models by Bala and Goyal (2000a) – the one-way flow model, and the two-way flow model – and includes both as limiting cases. As in both the said models, a link can be initiated unilaterally by any player with any other in what we call an “asymmetric flow” network, and the flow through a link towards the player who supports it is perfect. Unlike those models, there is friction or decay in the opposite direction. When this decay is complete there is no flow and this corresponds to the one-way flow model. The limit case when the decay in the opposite direction (and asymmetry) disappears corresponds to the two-way flow model. We characterize stable and strictly stable architectures for the whole range of parameters of this “intermediate” and more general model. A study of the efficiency of these architectures shows that in general stability and efficiency do not go together. We also prove the convergence of Bala and Goyal’s dynamic model in this context. | Asymmetric flow networks |
S0377221714000885 | We study a tandem queueing system with K servers and no waiting space in between. A customer needs service from one server but can leave the system only if all downstream servers are unoccupied. Such a system is often observed in toll collection during rush hours in transportation networks, and we call it a tollbooth tandem queue. We apply matrix-analytic methods to study this queueing system, and obtain explicit results for various performance measures. Using these results, we can efficiently compute the mean and variance of the queue lengths, waiting time, sojourn time, and departure delays. Numerical examples are presented to gain insights into the performance and design of the tollbooth tandem queue. In particular, they reveal that the intuitive rule of arranging servers in decreasing order of service speed (i.e., placing faster servers at downstream stations) is not always optimal for minimizing the mean queue length or mean waiting time. | A tollbooth tandem queue with heterogeneous servers
S0377221714000897 | A multiphase approach that incorporates demand points aggregation, Variable Neighbourhood Search (VNS) and an exact method is proposed for the solution of large-scale unconditional and conditional p-median problems. The method consists of four phases. In the first phase several aggregated problems are solved with a “Local Search with Shaking” procedure to generate promising facility sites which are then used to solve a reduced problem in Phase 2 using VNS or an exact method. The new solution is then fed into an iterative learning process which tackles the aggregated problem (Phase 3). Phase 4 is a post optimisation phase applied to the original (disaggregated) problem. For the p-median problem, the method is tested on three types of datasets which consist of up to 89,600 demand points. The first two datasets are the BIRCH and the TSP datasets whereas the third is our newly geometrically constructed dataset that has guaranteed optimal solutions. The computational experiments show that the proposed approach produces very competitive results. The proposed approach is also adapted to cater for the conditional p-median problem with interesting results. | An adaptive multiphase approach for large unconditional and conditional p-median problems |
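The Phase 1 building block can be sketched as a first-improvement swap descent restarted from randomly perturbed solutions, a minimal "Local Search with Shaking" in the paper's terminology; the instance size and perturbation strength below are hypothetical and far below the 89,600-point instances treated.

```python
import math
import random

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(60)]
p = 4

def cost(medians):
    # total distance from each demand point to its closest median
    return sum(min(math.dist(c, pts[m]) for m in medians) for c in pts)

def local_search(sol):
    # first-improvement swap descent: drop one median, add one candidate
    improved = True
    while improved:
        improved = False
        base = cost(sol)
        for out in list(sol):
            for inn in range(len(pts)):
                if inn in sol:
                    continue
                cand = (sol - {out}) | {inn}
                c = cost(cand)
                if c < base:
                    sol, base, improved = cand, c, True
                    break
            if improved:
                break
    return sol

best = local_search(set(random.sample(range(len(pts)), p)))
for _ in range(20):                       # shaking: perturb two medians, re-descend
    shaken = set(best)
    for _ in range(2):
        shaken.remove(random.choice(sorted(shaken)))
        shaken.add(random.choice([i for i in range(len(pts)) if i not in shaken]))
    cand = local_search(shaken)
    if cost(cand) < cost(best):
        best = cand

print("medians:", sorted(best), "cost:", round(cost(best), 3))
```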
S0377221714000903 | There has been much research on network flows over time due to their important role in real world applications. This has led to many results, but the more challenging continuous time model still lacks some of the key concepts and techniques that are the cornerstones of static network flows. The aim of this paper is to advance the state of the art for dynamic network flows by developing the continuous time analogues of the theory for static network flows. Specifically, we make use of ideas from the static case to establish a reduced cost optimality condition, a negative cycle optimality condition, and a strong duality result for a very general class of network flows over time. | Flows over time in time-varying networks: Optimality conditions and strong duality |
S0377221714000915 | Inventory record inaccuracy leads to ineffective replenishment decisions and deteriorates supply chain performance. Conducting cycle counts (i.e., periodic inventory auditing) is a common approach to correcting inventory records. It is not clear, however, how inaccuracy at different locations affects supply chain performance and how an effective cycle-count program for a multi-stage supply chain should be designed. This paper aims to answer these questions by considering a serial supply chain that has inventory record inaccuracy and operates under local base-stock policies. A random error, representing a stock loss, such as shrinkage or spoilage, reduces the physical inventory at each location in each period. The errors are cumulative and are not observed until a location performs a cycle count. We provide a simple recursion to evaluate the system cost and propose a heuristic to obtain effective base-stock levels. For a two-stage system with identical error distributions and counting costs, we prove that it is more effective to conduct more frequent cycle counts at the downstream stage. In a numerical study for more general systems, we find that location (proximity to the customer), error rates, and counting costs are primary factors that determine which stages should get a higher priority when allocating cycle counts. However, it is in general not effective to allocate all cycle counts to the priority stages only. One should balance cycle counts between priority stages and non-priority stages by considering secondary factors such as lead times, holding costs, and the supply chain length. In particular, more cycle counts should be allocated to a stage when the ratio of its lead time to the total system lead time is small and the ratio of its holding cost to the total system holding cost is large. In addition, more cycle counts should be allocated to downstream stages when the number of stages in the supply chain is large. The analysis and insights generated from our study can be used to design guidelines or scorecard systems that help managers design better cycle-count policies. Finally, we discuss implications of our study on RFID investments in a supply chain. | Evaluation of cycle-count policies for supply chains with inventory inaccuracy and implications on RFID investments |
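A stylized single-stage simulation (deliberately simpler than the paper's serial system) shows the basic mechanism: unobserved shrinkage drives the record above the physical stock, base-stock orders placed on the record become too small, and backorder costs grow with the counting interval. All rates and costs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def avg_cost(count_every, S=30, periods=20000, h=1.0, b=9.0):
    physical, record = S, S
    total = 0.0
    for t in range(periods):
        d = rng.poisson(5)            # customer demand
        e = rng.poisson(0.3)          # unobserved stock loss (shrinkage)
        physical -= d + e             # negative physical stock = backorders
        record -= d                   # the record only sees demand
        if t % count_every == 0:      # cycle count: record reset to the truth
            record = physical
        order = S - record            # base-stock on the record, zero lead time
        physical += order
        record += order
        total += h * max(physical, 0) + b * max(-physical, 0)
    return total / periods

for T in (1, 5, 20, 100):
    print(f"cycle count every {T:>3} periods: average cost {avg_cost(T):.2f}")
```

Between counts the record exceeds the physical level by exactly the accumulated shrinkage, so replenishment systematically under-orders; the simulated cost curve in T is the single-location version of the trade-off the paper optimizes across stages.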
S0377221714000927 | This paper introduces a bi-objective turning restriction design problem (BOTRDP), which aims to simultaneously improve network traffic efficiency and reduce environmental pollution by implementing turning restrictions at selected intersections. A bi-level programming model is proposed to formulate the BOTRDP. The upper-level problem aims to minimize both the total system travel time (TSTT) and the cost of total vehicle emissions (CTVE) from the viewpoint of traffic managers, and the lower-level problem depicts travelers’ route choice behavior based on stochastic user equilibrium (SUE) theory. A modified artificial bee colony (ABC) heuristic is developed to find Pareto optimal turning restriction strategies. In contrast to the traditional ABC heuristic, crossover operators are incorporated to enhance the performance of the heuristic. The computational experiments show that incorporating crossover operators into the ABC heuristic can indeed improve its performance and that the proposed heuristic significantly outperforms the non-dominated sorting genetic algorithm (NSGA) even if different operators are randomly chosen and used in the NSGA as in our proposed heuristic. The results also illustrate that a Pareto optimal turning restriction strategy can markedly reduce the TSTT and the CTVE when compared with those obtained without implementing the strategy, and that the number of Pareto optimal turning restriction designs is smaller when the network is more congested, although greater network efficiency and air quality improvements can then be achieved. The results also demonstrate that traffic information provision has an impact on the number of Pareto optimal turning restriction designs. These results have important implications for traffic management. | A bi-objective turning restriction design problem in urban road networks
S0377221714000939 | The aim of synthetic biology is to confer novel functions to cells by rationally interconnecting basic genetic parts into circuits. A key barrier in the design of synthetic genetic circuits is that only a qualitative description of the performance and interactions of the basic genetic parts is available in databases such as the Registry of Standard Biological Parts. Modeling approaches capable of harnessing this qualitative knowledge are thus timely. Here, we introduce an optimization-based framework, which makes use of the available qualitative information about basic biological parts to automatically identify the circuit elements and structures enabling a desired response to the presence/absence of input signals. Promoters and ribosome binding sites are categorized as high, medium or low efficiency and protein expressions in the circuit are described using piecewise linear differential equations. The desired function of the circuit is also mathematically described as the maximization/minimization of a constrained objective function. We employed this framework for the design of a toggle switch, a genetic decoder and a genetic half adder unit. The identified designs are consistent with previously constructed circuit configurations and in some cases point to completely new architectures. The identified non-intuitive circuit structures highlight the importance of accounting for ribosome binding site efficiencies and relative protein abundance levels in circuit design. Our results reaffirm the usefulness of the qualitative information for the coarse-grained genetic circuit design and simulation in the absence of detailed quantitative information. | Coarse-grained optimization-driven design and piecewise linear modeling of synthetic genetic circuits |
S0377221714000940 | In this research, we integrate issues related to the operations and marketing strategy of firms characterized by large product variety, short lead times, and demand variability in an assemble-to-order environment. The operations decisions are the inventory levels of components and semi-finished goods, and the configuration of semi-finished goods. The marketing decisions are the product prices and a lead-time guarantee that is uniform across products. We develop an integrated mathematical model that captures trade-offs related to the inventory of semi-finished goods, the inventory of components, outsourcing costs, and customer demand based on the guaranteed lead time and price. The mathematical model is a two-stage, stochastic, integer, and non-linear programming problem. In the first stage, prior to demand realization, the operations and marketing decisions are determined. In the second stage, inventory is allocated to meet the demand. The objective is to maximize the expected profit per unit time. The computational results on the test problems provide managerial insights for firms faced with the conflicting needs of offering (i) low prices, (ii) guaranteed and short lead times, and (iii) a large product variety, by leveraging operations decisions. We also develop a solution procedure to solve large instances of the problem based on an accelerated version of the Generalized Benders’ Decomposition (GBD) method. The accelerating mechanism involves search intensification and diversification around solutions which improve the upper bound. The suggested GBD method gives a better solution and a tighter lower bound in a given time period than the conventional GBD implementation and the non-linear branch-and-bound method. | Integrating operations and marketing decisions using delayed differentiation of products and guaranteed delivery time under stochastic demand
S0377221714000952 | In this paper, we analyze cost sharing problems arising from a general service by explicitly taking into account the generated revenues. To this cost-revenue sharing problem, we associate a cooperative game with transferable utility, called cost-revenue game. By considering cooperation among the agents using the general service, the value of a coalition is defined as the maximum net revenues that the coalition may obtain by means of cooperation. As a result, a coalition may profit from not allowing all its members to get the service that generates the revenues. We focus on the study of the core of cost-revenue games. Under the assumption that cooperation among the members of the grand coalition grants the use of the service under consideration to all its members, it is shown that a cost-revenue game has a nonempty core for any vector of revenues if, and only if, the dual game of the cost game has a large core. Using this result, we investigate minimum cost spanning tree games with revenues. We show that if every connection cost can take only two values (low or high cost), then, the corresponding minimum cost spanning tree game with revenues has a nonempty core. Furthermore, we provide an example of a minimum cost spanning tree game with revenues with an empty core where every connection cost can take only one of three values (low, medium, or high cost). | On the core of cost-revenue games: Minimum cost spanning tree games with revenues |
S0377221714000964 | The current air traffic system is forecast to face strong challenges due to the continuous increase in air traffic demand. Hence, there is a need for new types of organization permitting more efficient air traffic management, with both a high capacity and a high level of safety, and possibly with a reduced environmental impact. In this article, we study a holistic approach, consisting of designing, across Europe, a highly organized air traffic system, as opposed to free flight, to reduce costs while maintaining safety. Our work is based on the moving point paradigm, initially presented in Prot et al. (2010). We give theoretical background for designing conflict-free routes with high capacity and propose, based on these results, the allocation of aircraft to a new system of air routes that includes both lattices and orthodromic routes. The efficiency of the approach is assessed through simulations based on real data sets representing a full day of traffic over the whole European sky. The numerical results demonstrate a drastic reduction in the conflict rate compared to the actual commercial routes, with very limited fuel overconsumption. | A 4D-sequencing approach for air traffic management
S0377221714000976 | We provide a novel approach to characterize the order process of continuous review (s, S) and (r, nQ) inventory policies, and study the impact of the batching parameter (the value of Q or S - s) on the variability in the order process. First, we characterize the distribution of the time between orders, as well as the distribution of order sizes. We find that the coefficient of variation (cv) of the time between orders is smaller than the cv of the time between demands. The size of the orders can exhibit either variance amplification or dampening, compared to the demand sizes, depending on the demand size distribution and the value of the batching parameter. This may motivate a supplier to adjust his imposed fixed order cost to influence the batching size. Second, we look at the compound order process, defined by the number of units ordered during an arbitrary interval. The compound order process always exhibits variance amplification compared to the compound demand, which increases linearly in the batching parameter for large values of Q or S - s; for small values, the variance amplification is fluctuating. We point out that the time interval, during which the number of units ordered/demanded is observed, also impacts the level of variance amplification, and we show to what extent larger time intervals (resulting in more aggregation of the data) lead to lower values of variance amplification. Both perspectives (looking at time between orders and order quantities, or observing the compound order process) provide useful information for the upstream supplier. | Characterizing order processes of continuous review (s, S) and (r, nQ) policies
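The first claim, that order epochs are more regular than demand epochs, is easy to reproduce by simulating an (r, nQ) policy under compound Poisson demand; the parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
r, Q = 0, 25                       # reorder point and base batch size
pos = r + Q                        # inventory position
t, last_order = 0.0, 0.0
inter_demand, inter_order = [], []
demand_sizes, order_sizes = [], []

for _ in range(200_000):
    dt = rng.exponential(1.0)      # time between customer demands
    size = rng.geometric(0.25)     # demand size, mean 4
    t += dt
    inter_demand.append(dt)
    demand_sizes.append(size)
    pos -= size
    if pos <= r:                   # (r, nQ): order the smallest multiple of Q
        n = -(-(r + 1 - pos) // Q) # ceiling division; restores pos > r
        pos += n * Q
        inter_order.append(t - last_order)
        order_sizes.append(n * Q)
        last_order = t

cv = lambda a: np.std(a) / np.mean(a)
print(f"cv of time between demands: {cv(inter_demand):.3f}")
print(f"cv of time between orders : {cv(inter_order):.3f}")
print(f"cv of demand sizes: {cv(demand_sizes):.3f}   cv of order sizes: {cv(order_sizes):.3f}")
```

Each order epoch aggregates several demand interarrivals, so the cv of the time between orders shrinks roughly with the square root of the number of demands per batch, consistent with the paper's first finding.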
S0377221714000988 | In this paper we consider nonlinear integer optimization problems. Nonlinear integer programming has mainly been studied for special classes, such as convex and concave objective functions and polyhedral constraints. Here we follow another approach that is not based on convexity or concavity. Studying geometric properties of the level sets and the feasible region, we identify cases in which an integer minimizer of a nonlinear program can be found by rounding (up or down) the coordinates of a solution to its continuous relaxation. We call this property the rounding property. If it is satisfied, it enables us (for fixed dimension) to solve an integer programming problem in the same time complexity as its continuous relaxation. We also investigate the strong rounding property, which allows rounding a solution to the continuous relaxation to the next integer solution and in turn implies that the integer version can be solved in the same time complexity as its continuous relaxation for arbitrary dimensions. | When is rounding allowed in integer nonlinear optimization?
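To make the rounding idea concrete: when the rounding property holds, it suffices to examine the up/down roundings of the continuous minimizer. The sketch below is our own hedged illustration on a tiny hypothetical convex instance (where rounding is known to work); it only shows the rounding step, not the paper's geometric conditions.

```python
# Hedged illustration of the rounding step (not the paper's algorithm):
# enumerate the 2^n floor/ceil roundings of the continuous minimizer and keep
# the best feasible one. Tractable here only because n is tiny.
from itertools import product
import math

def f(x):                           # hypothetical separable convex objective
    return (x[0] - 2.3) ** 2 + (x[1] - 4.7) ** 2

def feasible(x):                    # hypothetical box constraints
    return all(0 <= xi <= 10 for xi in x)

x_relax = (2.3, 4.7)                # continuous minimizer (by inspection)

best = None
for cand in product(*[(math.floor(xi), math.ceil(xi)) for xi in x_relax]):
    if feasible(cand) and (best is None or f(cand) < f(best)):
        best = cand
print("rounded integer minimizer:", best, "value:", f(best))
```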
S0377221714001003 | The sales force deployment problem arises in many selling organizations. This complex planning problem involves the concurrent resolution of four interrelated subproblems: sales force sizing, sales representative location, sales territory alignment, and sales resource allocation. The objective is to maximize the total profit. For this, a well-known and accepted concave sales response function is used. Unfortunately, the literature lacks approaches that provide valid upper bounds. Therefore, we propose a model formulation with an infinite number of binary variables. The linear relaxation is solved by column generation, where the variables with maximum reduced costs are obtained analytically. An upper bound is provided for the optimal objective function value of the linear relaxation. To obtain a very tight gap for the objective function value of the optimal integer solution, we introduce a Branch-and-Price approach. Moreover, we propose explicit contiguity constraints based on flow variables. In a series of computational studies we consider instances which may occur in the pharmaceutical industry. The largest instance comprises 50 potential locations and more than 500 sales coverage units. We are able to solve this instance in 1273 seconds with a gap of less than 0.01%. A comparison with Drexl and Haase (1999) shows that we are able to halve the solution gap due to tight upper bounds provided by the column generation procedure. | Upper and lower bounds for the sales force deployment problem with explicit contiguity constraints
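For intuition on the sales resource allocation subproblem with a concave response, a greedy marginal allocation is optimal for separable concave maximization. The sketch below is an assumption-laden toy, not the paper's column generation scheme: the exponential response form and all parameter values are hypothetical.

```python
# Sketch (hypothetical parameters): greedy marginal allocation of selling time
# to territories with concave response r_i(t) = a_i * (1 - exp(-b_i * t)).
# For separable concave objectives, repeatedly funding the territory with the
# largest marginal gain yields an optimal discretized allocation.
import math

a = [100.0, 80.0, 60.0]          # hypothetical territory sales potentials
b = [0.5, 0.8, 1.2]              # hypothetical responsiveness parameters
budget_units, step = 200, 0.05   # total selling time budget, increment size

t = [0.0] * len(a)

def marginal(i):                 # gain from one more increment in territory i
    r = lambda x: a[i] * (1.0 - math.exp(-b[i] * x))
    return r(t[i] + step) - r(t[i])

for _ in range(budget_units):
    i = max(range(len(a)), key=marginal)
    t[i] += step
print("time allocation per territory:", [round(x, 2) for x in t])
```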
S0377221714001015 | This paper presents a stochastic model that evaluates the value of real-time shipment tracking information for supply systems that consist of a retailer, a manufacturer, and multiple stages of transportation. The retailer aggregates demand for a single product from end customers and places orders with the manufacturer. Orders received by the manufacturer may take several time periods before they are fulfilled. Shipments dispatched by the manufacturer move through multiple stages before they reach the retailer, where each stage represents a physical location or a step in the replenishment process. The lead time for a new order depends on the number of unshipped orders at the manufacturer’s site and the number and location of all shipments in transportation. The analytic model uses real-time information on the number of orders unfulfilled at the manufacturer’s site, as well as the location of shipments to the retailer, to determine the ordering policy that minimizes the long-run average cost for the retailer. It is shown that the long-run average cost is lower with real-time tracking information, and that the cost savings are substantial in a number of situations. The model also provides some guidelines for operating this supply system under various scenarios. Numerical examples demonstrate that when information is lacking it is better for the retailer to order every time period, whereas with full information on the status of the supply system the retailer need not order every time period to lower the long-run average cost. | Real-time order tracking for supply systems with multiple transportation stages
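The mechanism behind the savings is that tracking lets the retailer act on the full inventory position (on-hand plus in-transit) rather than on-hand stock alone. The toy simulation below sketches only the tracked case under hypothetical parameters of our own (fixed stage count, order-up-to rule, linear holding and backorder costs); it is not the paper's stochastic model.

```python
# Toy sketch (hypothetical parameters, not the paper's model): with full
# shipment tracking the retailer can run an order-up-to rule on the total
# inventory position, ordering only when the pipeline shows a shortfall.
import random

random.seed(1)
STAGES, BASE = 3, 30                   # hypothetical transit stages, base-stock level
on_hand, pipeline = BASE, [0] * STAGES # pipeline[-1] arrives next period

total_cost = 0.0
for period in range(1000):
    on_hand += pipeline.pop()          # shipments advance one stage; last arrives
    pipeline.insert(0, 0)
    on_hand -= random.randint(0, 5)    # hypothetical per-period demand
    position = on_hand + sum(pipeline) # visible only with tracking information
    if position < BASE:                # order only when tracking shows a shortfall
        pipeline[0] += BASE - position
    # hypothetical holding cost 1/unit and backorder cost 5/unit per period
    total_cost += max(on_hand, 0) * 1.0 + max(-on_hand, 0) * 5.0
print("average cost per period:", total_cost / 1000)
```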
S0377221714001027 | Stroke disease places a heavy burden on society, incurring long periods of time in hospital and community care, and associated costs. Stroke is also a highly complex disease with diverse outcomes and multiple strategies for therapy and care. Previously, a modeling framework was developed which clusters patients into classes with respect to their length of stay (LOS) in hospital. Phase-type models were then used to describe patient flows for each cluster, permitting multiple outcomes such as discharge to normal residence, nursing home, or death. Here we add costs to this model and obtain the moment generating function for the total cost of a system consisting of multiple transient phase-type classes with multiple absorbing states. This system represents different classes of patients in different hospital and community service states. Based on stroke patients’ data from the Belfast City Hospital, various scenarios are explored with a focus on comparing the cost of thrombolysis treatment under different regimes. The overall modeling framework characterizes the behavior of stroke patient populations, with a focus on integrated system-wide costing and planning, encompassing hospital and community services. Within this general framework we have developed models which take account of patient heterogeneity and multiple care options. Such complex strategies depend crucially on developing a deep engagement with health care professionals and underpinning the models with detailed patient-specific data. | Using phase-type models to cost stroke patient care across health, social and community services
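As background to the costing step: for a phase-type model with initial distribution alpha and sub-generator T, the matrix (-T)^{-1} gives expected time spent in each transient phase, so expected LOS is alpha(-T)^{-1}1 and, with a per-phase cost rate vector c, expected total cost is alpha(-T)^{-1}c. The sketch below uses these standard formulas with entirely hypothetical parameters, not the Belfast data or the paper's full MGF result.

```python
# Minimal numerical sketch (hypothetical parameters, not the Belfast data):
# expected length of stay and expected cost for a phase-type model.
# E[LOS] = alpha @ inv(-T) @ 1;  E[cost] = alpha @ inv(-T) @ c.
import numpy as np

alpha = np.array([1.0, 0.0, 0.0])      # all patients start in the acute phase
T = np.array([[-0.50, 0.30, 0.10],     # sub-generator over transient phases;
              [0.00, -0.20, 0.15],     # row deficits are exit rates to the
              [0.00, 0.00, -0.05]])    # absorbing states (discharge, death)
c = np.array([500.0, 200.0, 50.0])     # hypothetical daily cost rate per phase

M = np.linalg.inv(-T)                  # expected time spent in each phase
print("expected LOS (days):", alpha @ M @ np.ones(3))
print("expected total cost:", alpha @ M @ c)
```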