FileName | Abstract | Title |
---|---|---|
S0377221713004153 | Memory allocation has a significant impact on energy consumption in embedded systems. In this paper, we are interested in dynamic memory allocation for embedded systems with a special emphasis on time performance. We propose two mid-term iterative approaches which are compared with existing long-term and short-term approaches, and with an ILP formulation as well. These approaches rely on solving a static version of the allocation problem and they take advantage of previous works for addressing the static problem. A statistical analysis is carried out to show that the mid-term approach is the best in terms of solution quality. | Iterative approaches for a dynamic memory allocation problem in embedded systems |
S0377221713004165 | Cross-efficiency in data envelopment analysis (DEA) models is an effective way to rank decision-making units (DMUs). The common methods to aggregate cross-efficiency do not consider the preference structure of the decision maker (DM). When a DM’s preference structure does not satisfy the “additive independence” condition, a new aggregation method must be proposed. This paper uses the evidential-reasoning (ER) approach to aggregate the cross-efficiencies obtained from cross-evaluation through the transformation of the cross-efficiency matrix to pieces of evidence. This paper provides a new method for cross-efficiency aggregation and a new way for DEA models to reflect a DM’s preference or value judgments. Additionally, this paper presents examples that demonstrate the features of cross-efficiency aggregation using the ER approach, including an empirical example of the evaluation practice of 16 basic research institutes of the Chinese Academy of Sciences (CAS) in 2010 that illustrates how the ER approach can be used to aggregate the cross-efficiency matrix produced from DEA models (a minimal sketch of the baseline average aggregation appears after the table). | Cross-efficiency aggregation in DEA models using the evidential-reasoning approach |
S0377221713004189 | This paper addresses specification and estimation of multiple-output and multiple-input production technology in the presence of technical inefficiency. The primary focus is on the primal formulations. Several competing specifications such as production function, input (output) distance function, input requirement function are considered. We show that all these specifications come from the same transformation function and are algebraically identical. We also show that: (i) unless the transformation function is separable (i.e., outputs are separable from inputs), the input (output) ratios in the input (output) distance function cannot be treated as exogenous (uncorrelated with technical inefficiency), resulting in inconsistent estimates of the input (output) distance function parameters; (ii) even if input (output) ratios are exogenous, estimation of the input (output) distance function will result in inconsistent parameter estimates if outputs (inputs) are endogenous. We address endogeneity and instrumental variable issues in detail in the context of flexible (translog) functional forms. Estimation of several specifications using both single and system approaches is discussed using Norwegian dairy farming data. | Specification and estimation of multiple output technologies: A primal approach |
S0377221713004190 | This technical note extends the results of our recent paper [Korhonen, Soleimani-damaneh, Wallenius, EJOR 215 (2011) 431–438], for determining the RTS status of Decision Making Units in Weight-Restricted DEA models. | On ratio-based RTS determination: An extension |
S0377221713004207 | We investigate a combined routing and scheduling problem for the maintenance of electricity networks. In electricity networks power lines must be regularly maintained to ensure a high quality of service. For safety reasons a power line must be physically disconnected from the network before maintenance work can be performed. After completing maintenance work the power line must be reconnected. Each maintenance job therefore consists of multiple tasks which must be performed at different locations in the network. The goal is to assign each task to a worker and to determine a schedule such that the downtimes of power lines and the travel effort of workers are minimized. For solving this problem, we combine a Large Neighborhood Search meta-heuristic with mathematical programming techniques. The method is evaluated on a large set of test instances which are derived from network data of a German electricity provider. | Workforce routing and scheduling for electricity network maintenance with downtime minimization |
S0377221713004219 | The Time-Dependent Travelling Salesman Problem (TDTSP) is a generalization of the traditional TSP where the travel cost between two cities depends on the moment of the day the arc is travelled. In this paper, we focus on the case where the travel time between two cities depends not only on the distance between them, but also on the position of the arc in the tour. We consider two formulations proposed in the literature, we analyze the relationship between them and derive several families of valid inequalities and facets. In addition to their theoretical properties, they prove to be very effective in the context of a Branch and Cut algorithm. | Facets and valid inequalities for the time-dependent travelling salesman problem |
S0377221713004220 | We analyze a business model for e-supermarkets to enable multi-product sourcing capacity through co-opetition (collaborative competition). The logistics aspect of our approach is to design and execute a network system where “premium” goods are acquired from vendors at multiple locations in the supply network and delivered to customers. Our specific goals are to: (i) investigate the role of premium product offerings in creating critical mass and profit; (ii) develop a model for the multiple-pickup single-delivery vehicle routing problem in the presence of multiple vendors; and (iii) propose a hybrid solution approach. To solve the problem introduced in this paper, we develop a hybrid metaheuristic approach that uses a Genetic Algorithm for vendor selection and allocation, and a modified savings algorithm for the capacitated VRP with multiple pickup, single delivery and time windows (CVRPMPDTW). The proposed Genetic Algorithm guides the search for optimal vendor pickup location decisions, and for each generated solution in the genetic population, a corresponding CVRPMPDTW is solved using the savings algorithm. We validate our solution approach against published VRPTW solutions and also test our algorithm with Solomon instances modified for CVRPMPDTW (the classic savings computation underlying the heuristic is sketched after the table). | A new VRPPD model and a hybrid heuristic solution approach for e-tailing |
S0377221713004232 | We study a vehicle routing problem with soft time windows and stochastic travel times. In this problem, we consider stochastic travel times to obtain routes which are both efficient and reliable. In our problem setting, soft time windows allow early and late servicing at customers by incurring some penalty costs. The objective is to minimize the sum of transportation costs and service costs. Transportation costs result from three elements which are the total distance traveled, the number of vehicles used and the total expected overtime of the drivers. Service costs are incurred for early and late arrivals; these correspond to time-window violations at the customers. We apply a column generation procedure to solve this problem. The master problem can be modeled as a classical set partitioning problem. The pricing subproblem, for each vehicle, corresponds to an elementary shortest path problem with resource constraints. To generate an integer solution, we embed our column generation procedure within a branch-and-price method. Computational results obtained by experimenting with well-known problem instances are reported. | Vehicle routing with soft time windows and stochastic travel times: A column generation and branch-and-price solution approach |
S0377221713004244 | In recent years several countries have set up policies that allow exchange of kidneys between two or more incompatible patient–donor pairs. These policies lead to what is commonly known as kidney exchange programs. The underlying optimization problems can be formulated as integer programming models. Previously proposed models for kidney exchange programs have exponential numbers of constraints or variables, which makes them fairly difficult to solve when the problem size is large. In this work we propose two compact formulations for the problem, explain how these formulations can be adapted to address some problem variants, and provide results on the dominance of some models over others. Finally we present a systematic comparison between our models and two previously proposed ones via thorough computational analysis. Results show that compact formulations have advantages over non-compact ones when the problem size is large. | New insights on integer-programming models for the kidney exchange problem |
S0377221713004256 | We propose a tabu search meta-heuristic for the Time-dependent Multi-zone Multi-trip Vehicle Routing Problem with Time Windows. Two types of neighborhoods, corresponding to the two sets of decisions of the problem, together with a strategy controlling the selection of the neighborhood type for particular phases of the search, provide the means to set up and combine exploration and exploitation capabilities for the search. A diversification strategy, guided by an elite solution set and a frequency-based memory, is also used to drive the search to potentially unexplored good regions and, hopefully, enhance the solution quality. Extensive numerical experiments and comparisons with the literature show that the proposed tabu search yields very high quality solutions, improving those currently published. | A tabu search for Time-dependent Multi-zone Multi-trip Vehicle Routing Problem with Time Windows |
S0377221713004281 | In this paper we consider the problem of allocating servers to maximize throughput for tandem queues with no buffers. We propose an allocation method that assigns servers to stations based on the mean service times and the current number of servers assigned to each station. A number of simulations are run on different configurations to refine and verify the algorithm. The algorithm is proposed for stations with exponentially distributed service times, but where the service rate at each station may be different. We also provide some initial thoughts on the impact on the proposed allocation method of including service time distributions with different coefficients of variation. | Server allocation for zero buffer tandem queues |
S0377221713004293 | This paper presents a methodology to find near-optimal joint inventory control policies for the real case of a one-warehouse, n-retailer distribution system of infusion solutions at a University Medical Center in France. We consider stochastic demand, batching and order-up-to level policies as well as aspects particular to the healthcare setting such as emergency deliveries, required service level rates and a new constraint on the ordering policy that fits best the hospital’s interests instead of abstract ordering costs. The system is modeled as a Markov chain with an objective to minimize the stock-on-hand value for the overall system. We provide the analytical structure of the model to show that the optimal reorder point of the policy at both echelons is easily derived from a simple probability calculation. We also show that the optimal policy at the care units is to set the order-up-to level one unit higher than the reorder point. We further demonstrate that optimizing the care units in isolation is optimal for the joint multi-echelon, n-retailer problem. A heuristic algorithm is presented to find the near-optimal order-up-to level of the policy of each product at the central pharmacy; all other policy parameters are guaranteed optimal via the structure provided by the model. Comparison of our methodology versus that currently in place at the hospital showed a reduction of approximately 45% in the stock-on-hand value while still respecting the service level requirements. | Joint-optimization of inventory policies on a multi-product multi-echelon pharmaceutical system with batching and ordering constraints |
S0377221713004311 | Multi-echelon inventory optimization literature distinguishes stochastic-service (SS) and guaranteed-service (GS) approaches as mutually exclusive frameworks. While the GS approach considers flexibility measures at the stages to deal with stockouts, the SS approach only relies on safety stock. Within a supply chain, flexibility levels might differ between stages rendering them appropriate candidates for one approach or the other. The existing approaches, however, require the selection of a single framework for the entire supply chain instead of a stage-wise choice. We develop an integrated hybrid-service (HS) approach which endogenously determines the overall cost-optimal approach for each stage and computes the required inventory levels. We present a dynamic programming optimization algorithm for serial supply chains that partitions the entire system into subchains of different types. From a numerical study we find that, besides implicitly choosing the better of the two pure frameworks, whose cost differences can be considerable, the HS approach enables additional pipeline and on-hand stock cost savings. We further identify drivers for the preferability of the HS approach. | An integrated guaranteed- and stochastic-service approach to inventory optimization in supply chains |
S0377221713004323 | This paper addresses two versions of a lifetime maximization problem for target coverage with wireless directional sensor networks. The sensors used in these networks have a maximum sensing range and a limited sensing angle. In the first problem version, predefined sensing directions are assumed to be given, whereas sensing directions can be freely devised in the second problem version. In that case, a polynomial-time algorithm is provided for building sensing directions that allow the network lifetime to be maximized. A column generation algorithm is proposed for both problem versions, the subproblem being addressed with a hybrid approach based on a genetic algorithm and an integer linear programming formulation. Numerical results show that addressing the second problem version allows for significant improvements in terms of network lifetime while the computational effort is comparable for both problem versions. | Lifetime maximization in wireless directional sensor network |
S0377221713004335 | One of the most important concerns for managing public health is the prevention of infectious diseases. Although vaccines provide the most effective means for preventing infectious diseases, there are two main reasons why it is often difficult to reach a socially optimal level of vaccine coverage: (i) the emergence of operational issues (such as yield uncertainty) on the supply side, and (ii) the existence of negative network effects on the consumption side. In particular, uncertainties about production yield and vaccine imperfections often make manufacturing some vaccines a risky process and may lead the manufacturer to produce below the socially optimal level. At the same time, negative network effects provide incentives to potential consumers to free ride off the immunity of the vaccinated population. In this research, we consider how a central policy-maker can induce a socially optimal vaccine coverage through the use of incentives to both consumers and the vaccine manufacturer. We consider a monopoly market for an imperfect vaccine; we show that a fixed two-part subsidy is unable to coordinate the market, but derive a two-part menu of subsidies that leads to a socially efficient level of coverage. | Operational issues and network effects in vaccine markets |
S0377221713004347 | This paper studies the problem of pricing high-dimensional American options. We propose a method based on the state-space partitioning algorithm developed by Jin et al. (2007) and a dimension-reduction approach introduced by Li and Wu (2006). By applying the approach in the present paper, the computational efficiency of pricing high-dimensional American options is significantly improved, compared to the extant approaches in the literature, without sacrificing the estimation precision. Various numerical examples are provided to illustrate the accuracy and efficiency of the proposed method. Pseudocode for an implementation of the proposed approach is also included. | A computationally efficient state-space partitioning approach to pricing high-dimensional American options via dimension reduction |
S0377221713004360 | This paper considers the block relocation problem (BRP), in which a set of identically-sized items is to be retrieved from a set of last-in-first-out (LIFO) stacks in a specific order using the minimum number of moves. The problem is encountered in the maritime container shipping industry and other industries where inventory is stored in stacks. After surveying the work done on the BRP, we introduce “BRP-III”—a new mathematical formulation for the BRP—and show that it has considerably fewer decision variables and better runtime performance than the other formulation in the literature. We then introduce a new look-ahead algorithm (LA-N) that is an extension of the algorithms from the literature and show that the new algorithm generally obtains better solutions than the other algorithms and has minimal CPU runtime. | A new mixed integer program and extended look-ahead heuristic algorithm for the block relocation problem |
S0377221713004372 | This paper proposes new methods for computation of greeks using the binomial tree and the discrete Malliavin calculus. In the last decade, the Malliavin calculus has come to be considered as one of the main tools in financial mathematics. It is particularly important in the computation of greeks using Monte Carlo simulations. In previous studies, greeks were usually represented by expectation formulas that are derived from the Malliavin calculus and these expectations are computed using Monte Carlo simulations. On the other hand, the binomial tree approach can also be used to compute these expectations. In this article, we employ the discrete Malliavin calculus to obtain expectation formulas for greeks by the binomial tree method. All the results are obtained in an elementary manner (for contrast, a conventional finite-difference tree delta is sketched after the table). | Discrete Malliavin calculus and computations of greeks in the binomial tree |
S0377221713004578 | In this paper we present a new approach, based on the Nearest Interval Approximation Operator, for dealing with a multiobjective programming problem with fuzzy-valued objective functions. In the process, we establish Karush–Kuhn–Tucker (KKT) type Pareto optimality conditions for the resulting interval multiobjective program. To this end, we make use of gH-differentiability of the interval-valued functions involved. Two algorithms play a pivotal role in the proposed method. The first one returns a nearest interval approximation to a given fuzzy number. The other one makes use of the KKT conditions to deliver a Pareto optimal solution of the resulting interval program. | An approach for solving a fuzzy multiobjective programming problem |
S0377221713004591 | Two-staged patterns are often used in manufacturing industries to divide stock plates into rectangular items. A heuristic algorithm is presented to solve the rectangular two-dimensional single stock size cutting stock problem with two-staged patterns. It uses the column-generation method to solve the residual problems repeatedly, until the demands of all items are satisfied. Each pattern is generated using a procedure for the constrained single large object placement problem to guarantee the convergence of the algorithm. The computational results of benchmark and practical instances indicate the following: (1) the algorithm can solve most instances to optimality, with the gap to optimality being at most one plate for those solutions whose optimality is not proven and (2) for the instances tested, the algorithm is more efficient (on average) in reducing the number of plates used than a published algorithm and a commercial stock cutting software package. | Heuristic for the rectangular two-dimensional single stock size cutting stock problem with two-staged patterns |
S0377221713004608 | Timely imaging examinations are critical for stroke patients due to the potential threat to life. We have proposed a contract-based Magnetic Resonance Imaging (MRI) reservation process [1] in order to reduce their waiting time for MRI examinations. Contracted time slots (CTS) are reserved specifically for the Neural Vascular Department (NVD) treating stroke patients. Patients either wait in a CTS queue for such time slots or are directed to Regular Time Slot (RTS) reservation. This strategy creates “unlucky” patients who have to wait for a lengthy RTS reservation. This paper proposes and analyzes other contract implementation strategies called RTS reservation strategies. These strategies reserve RTS for NVD but do not direct patients to regular reservations. Patients all wait in the same queue and are served by either CTS or RTS on a FIFO (First In First Out) basis. We prove that RTS reservation strategies are able to reduce the unused time slots and patient waiting time. Extensive numerical results are presented to show the benefits of RTS reservation and to compare various RTS reservation strategies. | Implementation strategies of a contract-based MRI examination reservation process for stroke patients |
S0377221713004621 | This paper presents a backward state reduction dynamic programming algorithm for generating the exact Pareto frontier for the bi-objective integer knapsack problem. The algorithm is developed addressing a reduced problem built after applying variable fixing techniques based on the core concept. First, an approximate core is obtained by eliminating dominated items. Second, the items included in the approximate core are subject to the reduction of the upper bounds by applying a set of weighted-sum functions associated with the efficient extreme solutions of the linear relaxation of the multi-objective integer knapsack problem. Third, the items are classified according to the values of their upper bounds; items with zero upper bounds can be eliminated. Finally, the remaining items are used to form a mixed network with different upper bounds. The numerical results obtained from different types of bi-objective instances show the effectiveness of the mixed network and associated dynamic programming algorithm. | A reduction dynamic programming algorithm for the bi-objective integer knapsack problem |
S0377221713004633 | Lifetime estimation based on the measured health monitoring data has long been investigated and applied in reliability and operational management communities and practices, such as planning maintenance schedules, logistic supports, and production planning. It is known that measurement error (ME) is a source of uncertainty in the measured data considerably affecting the performance of data driven lifetime estimation. While the effect of ME on the performance of data driven lifetime estimation models has been studied recently, a reversed problem—“the specification of the ME range to achieve a desirable lifetime estimation performance” has not been addressed. This problem is related to the usability of the measured health monitoring data for estimating the lifetime. In this paper, we deal with this problem and develop guidelines regarding the formulation of specification limits to the distribution-related ME characteristics. By referring to one widely applied Wiener process-based degradation model, permissible values for the ME bias and standard deviation can be given under a specified lifetime estimation requirement. If the performance of ME does not satisfy the permissible values, the desirable performance for lifetime estimation cannot be ensured by the measured health monitoring data. We further analyze the effect of ME on an age based replacement decision, which is one of the most common and popular maintenance policies in maintenance scheduling. Numerical examples and a case study are provided to illustrate the implementation procedure and usefulness of theoretical results. | Specifying measurement errors for required lifetime estimation performance |
S0377221713004645 | In this paper, we show how Data Envelopment Analysis (DEA) may be used to measure and decompose revenue inefficiency, taking into account all sources of technical waste in the context of an application to assess the Spanish quality wine sector, in particular Designation of Origin (DO) wines. We try to go beyond the standard approaches, which use Shephard distance functions or directional distance functions, to provide a decomposition that incorporates slacks as a source of technical inefficiency. To accomplish this, we base our analysis on a recent approach introduced in Cooper et al. (2011a). In particular, we show how an output-oriented version of the Weighted Additive model can be used to properly identify revenue, technical, and allocative inefficiencies in Spanish DOs. In the application, we conclude that the main source of revenue inefficiency in this sector is technical waste, and that Cava can be highlighted as the DO that serves as a benchmark for the largest number of units. | Accounting for slacks to measure and decompose revenue efficiency in the Spanish Designation of Origin wines with DEA |
S0377221713004657 | The large-scale natural gas equilibrium model applied in Egging (2013) combines long-term market equilibria and investments in infrastructure while accounting for market power by certain suppliers. Such models are widely used to simulate market outcomes given different scenarios of demand and supply development, environmental regulations and investment options in natural gas and other resource markets. However, no model has so far combined the logarithmic production cost function commonly used in natural gas models with endogenous investment decisions in production capacity. Given the importance of capacity constraints in the determination of the natural gas supply, this is a serious shortcoming of the current literature. This short note provides a proof that combining endogenous investment decisions and a logarithmic cost function yields a convex minimization problem, paving the way for an important extension of current state-of-the-art equilibrium models. | Endogenous production capacity investment in natural gas market equilibrium models |
S0377221713004669 | We consider the problem of minimizing a smooth function over a feasible set defined as the Cartesian product of convex compact sets. We assume that the dimension of each factor set is huge, so we are interested in studying inexact block coordinate descent methods (possibly combined with column generation strategies). We define a general decomposition framework where different line search based methods can be embedded, and we state global convergence results. Specific decomposition methods based on gradient projection and Frank–Wolfe algorithms are derived from the proposed framework. The numerical results of computational experiments performed on network assignment problems are reported. | On the convergence of inexact block coordinate descent methods for constrained optimization |
S0377221713004670 | Batching customer orders in a warehouse can result in considerable savings in order pickers’ travel distances. Many picker-to-parts warehouses have precedence constraints in picking a customer order. In this paper a joint order-batching and picker routing method is introduced to solve this combined precedence-constrained routing and order-batching problem. It consists of two sub-algorithms: an optimal A*-algorithm for the routing; and a simulated annealing algorithm for the batching which estimates the savings gained from batching more than two customer orders to avoid unnecessary routing. For batches of three customer orders, the introduced algorithm produces results with an error of less than 1.2% compared to the optimal solution. It also compares well to other heuristics from the literature. A data set from a large Finnish order picking warehouse is rerouted and rebatched, resulting in savings of over 5000 kilometres, or 16% in travel distance, in 3 months compared to the current method. | A fast simulated annealing method for batching precedence-constrained customer orders in a warehouse |
S0377221713004682 | The Maximum Diversity Problem (MDP) consists in selecting a subset of m elements from a given set of n elements (n > m) in such a way that the sum of the pairwise distances between the m chosen elements is maximized. We present a hybrid metaheuristic algorithm (denoted by MAMDP) for MDP. The algorithm uses a dedicated crossover operator to generate new solutions and a constrained neighborhood tabu search procedure for local optimization. MAMDP also applies a distance-and-quality-based replacement strategy to maintain population diversity. Extensive evaluations on a large set of 120 benchmark instances show that the proposed approach competes very favorably with the current state-of-the-art methods for MDP. In particular, it consistently and easily attains all the best known lower bounds and yields improved lower bounds for 6 large MDP instances. The key components of MAMDP are analyzed to shed light on their influence on the performance of the algorithm. | A hybrid metaheuristic method for the Maximum Diversity Problem |
S0377221713004694 | We consider the problem of finding the optimal routing of a single vehicle that starts its route from a depot and picks up from and delivers K different products to N customers that are served according to a predefined customer sequence. The vehicle is allowed during its route to return to the depot to unload returned products and restock with new products. The items of all products are of the same size. For each customer the demands for the products that are delivered by the vehicle and the quantity of the products that is returned to the vehicle are discrete random variables with known joint distribution. Under a suitable cost structure, it is shown that the optimal policy that serves all customers has a specific threshold-type structure. We also study a corresponding infinite-time horizon problem in which the service of the customers is not completed when the last customer has been serviced but it continues indefinitely with the same customer order. For each customer, the joint distribution of the quantities that are delivered and the quantity that is picked up is the same at each cycle. The discounted-cost optimal policy and the average-cost optimal policy have the same structure as the optimal policy in the finite-horizon problem. Numerical results are given that illustrate the structural results. | Finite and infinite-horizon single vehicle routing problems with a predefined customer sequence and pickup and delivery |
S0377221713004700 | We study cooperation strategies for companies that continuously review their inventories and face Poisson demand. Our main goal is to analyze stable cost allocations of the joint costs. Under a stable allocation, no group of companies pays more than the cost it would incur on its own. If such allocations exist they provide an incentive for the companies to cooperate. We consider two natural cooperation strategies: (i) the companies jointly place an order for replenishment if their joint inventory position reaches a certain reorder level, and (ii) the companies reorder as soon as one of them reaches its reorder level. Numerical experiments for two companies show that the second strategy has the lowest joint costs. Under this strategy, the game-theoretical Shapley value and the distribution rule—a cost allocation in which the companies share the procurement cost and each pays its own holding cost—are shown to be stable cost allocations (a toy Shapley computation is sketched after the table). These results also hold for situations with three companies. | Cooperation and game-theoretic cost allocation in stochastic inventory models with continuous review |
S0377221713004712 | To generate insights into how production of new items and remanufacturing and disposal of returned products can be effectively coordinated, we develop a model of a hybrid manufacturing–remanufacturing system. Formulating the model as a Markov decision process, we investigate the structure of the optimal policy that jointly controls production, remanufacturing, and disposal decisions. Considering the average profit maximization criterion, we show that the joint optimal policy can be characterized by three monotone switching curves. Moreover, we show that there exist serviceable (i.e., as-new) and remanufacturing (i.e., returned) inventory thresholds beyond which production cannot be optimal but disposal is always optimal. We also identify conditions under which idling and disposal actions are always optimal when the system is empty. Using numerical comparisons between models with and without remanufacturing and disposal options, we generate insights into the benefit of utilizing these options. To effectively coordinate production, remanufacturing, and disposal activities, we propose a simple, implementable, and yet effective heuristic policy. Our extensive numerical results suggest that the proposed heuristic can greatly help firms to effectively coordinate their production, remanufacturing, and disposal activities and thereby reduce their operational costs. | Joint control of production, remanufacturing, and disposal activities in a hybrid manufacturing–remanufacturing system |
S0377221713004724 | A previous approach to robust intensity-modulated radiation therapy (IMRT) treatment planning for moving tumors in the lung involves solving a single planning problem before the start of treatment and using the resulting solution in all of the subsequent treatment sessions. In this paper, we develop an adaptive robust optimization approach to IMRT treatment planning for lung cancer, where information gathered in prior treatment sessions is used to update the uncertainty set and guide the reoptimization of the treatment for the next session. Such an approach allows for the estimate of the uncertain effect to improve as the treatment goes on and represents a generalization of existing robust optimization and adaptive radiation therapy methodologies. Our method is computationally tractable, as it involves solving a sequence of linear optimization problems. We present computational results for a lung cancer patient case and show that using our adaptive robust method, it is possible to attain an improvement over the traditional robust approach in both tumor coverage and organ sparing simultaneously. We also prove that under certain conditions our adaptive robust method is asymptotically optimal, which provides insight into the performance observed in our computational study. The essence of our method – solving a sequence of single-stage robust optimization problems, with the uncertainty set updated each time – can potentially be applied to other problems that involve multi-stage decisions to be made under uncertainty. | Adaptive and robust radiation therapy optimization for lung cancer |
S0377221713004736 | In this paper, linear production games are extended so that instead of assuming a linear production technology with fixed technological coefficients, the more general, non-parametric, DEA production technology is considered. Different organizations are assumed to possess their own technology and the cooperative game arises from the possibility of pooling their available inputs, collectively processing them and sharing the revenues. Two possibilities are considered: using a joint production technology that results from merging their respective technologies or each cooperating organization keeping its own technology. This gives rise to two different DEA production games, both of which are totally balanced and have a non-empty core. A simple way of computing a stable solution, using the optimal dual solution for the grand coalition, is presented. The full cooperation scenario clearly produces more benefits for the organizations involved although the implied technology sharing is not always possible. Examples of applications of the proposed approach are given. | DEA production games |
S0377221713004748 | Isotonic nonparametric least squares (INLS) is a regression method for estimating a monotonic function by fitting a step function to data. In the literature of frontier estimation, the free disposal hull (FDH) method is similarly based on the minimal assumption of monotonicity. In this paper, we link these two separately developed nonparametric methods by showing that FDH is a sign-constrained variant of INLS. We also discuss the connections to related methods such as data envelopment analysis (DEA) and convex nonparametric least squares (CNLS). Further, we examine alternative ways of applying isotonic regression to frontier estimation, analogous to corrected and modified ordinary least squares (COLS/MOLS) methods known in the parametric stream of frontier literature. We find that INLS is a useful extension to the toolbox of frontier estimation both in the deterministic and stochastic settings. In the absence of noise, the corrected INLS (CINLS) has a higher discriminating power than FDH. In the case of noisy data, we propose to apply the method of non-convex stochastic envelopment of data (non-convex StoNED), which disentangles inefficiency from noise based on the skewness of the INLS residuals. The proposed methods are illustrated by means of simulated examples. | Stochastic non-convex envelopment of data: Applying isotonic regression to frontier estimation |
S0377221713004773 | In this paper we use simulations to numerically evaluate the Hybrid DEA – Second Score Auction. In a procurement setting, the winner of the Hybrid auction by design receives a payment at most equal to that of the Second Score auction. It is therefore superior to the traditional Second Score scheme from the point of view of a principal interested in acquiring an item at the minimum price without losing in quality. For a set of parameters we quantify the size of the improvements and show that the improvement depends intimately on the regularity imposed on the underlying cost function. In the least structured case of a variable returns to scale technology, the hybrid auction only improved the outcome for a small percentage of cases. For other technologies with constant returns to scale, the gains are considerably higher and payments are lowered in a large percentage of cases. We also show that the number of participating agents, the concavity of the principal value functions, and the number of quality dimensions impact the expected payment. | Short communication: DEA based auctions simulations |
S0377221713004785 | This article presents a study on the long-term (i.e., steady-state, convergence) characteristics of workers’ skill levels under learning and forgetting in processing units in a manufacturing environment, in which products are produced in batches. Assuming that all workers already have the basic knowledge to execute the jobs, workers learn (accumulate their skill) while producing units within a batch, forget during interruptions in production, and relearn when production resumes. The convergence properties in the paper are examined under assumptions of an infinite time horizon, a constant demand rate, and a fixed lot size. Our work extends the steady-state results of Teyarachakul, Chand, and Ward (2008) to the learning and forgetting functions that belong to a large class of functions possessing some differentiability conditions. We also discuss circumstances of manufacturing environments where our results would provide useful managerial information and other potential applications. | Steady-state skill levels of workers in learning and forgetting environments: A dynamical system analysis |
S0377221713004797 | In this paper we tackle a generalization of the Single Source Capacitated Facility Location Problem in which two sets of facilities, called intermediate level and upper level facilities, have to be located; the dimensioning of the intermediate set, the assignment of clients to intermediate level facilities, and of intermediate level facilities to upper level facilities, must be optimized as well. Such a problem arises, for instance, in telecommunication network design: in fact, in hierarchical networks the traffic arising at client nodes often has to be routed through different kinds of facility nodes, which provide different services. We propose a heuristic approach, based on very large scale neighborhood search, to tackle the problem, in which both ad hoc algorithms and general purpose solvers are applied to explore the search space. We report on experimental results using datasets from the capacitated location literature. Such results show that the approach is promising and that Integer Linear Programming based neighborhoods are significantly effective. | Combining very large scale and ILP based neighborhoods for a two-level location problem |
S0377221713004803 | This paper studies the team orienteering problem with time windows, the aim of which is to maximize the total profit collected by visiting a set of customers with a limited number of vehicles. Each customer has a profit, a service time and a time window. A service provided to any customer must begin in his or her time window. We propose an iterative framework incorporating three components to solve this problem. The first two components are a local search procedure and a simulated annealing procedure. They explore the solution space and discover a set of routes. The third component recombines the routes to identify high quality solutions. Our computational results indicate that this heuristic outperforms the existing approaches in the literature in average performance by at least 0.41%. In addition, 35 new best solutions are found. | An iterative three-component heuristic for the team orienteering problem with time windows |
S0377221713005006 | We consider the Multi Trip Vehicle Routing Problem, in which a set of geographically scattered customers have to be served by a fleet of vehicles. Each vehicle can perform several trips during the working day. The objective is to minimize the total travel time while respecting temporal and capacity constraints. The problem is particularly interesting in the city logistics context, where customers are located in city centers. Road and law restrictions favor the use of small capacity vehicles to perform deliveries. This leads to trips much briefer than the working day. A vehicle can then go back to the depot and be re-loaded before starting another service trip. We propose a hybrid genetic algorithm for the problem. In particular, we introduce a new local search operator based on the combination of standard VRP moves and swaps between trips. Our procedure is compared with those in the literature and it outperforms previous algorithms with respect to average solution quality. Moreover, a new feasible solution and many best known solutions are found. | A memetic algorithm for the Multi Trip Vehicle Routing Problem |
S0377221713005018 | In this paper, we deal with the generation of bundles of loads to be submitted by carriers participating in combinatorial auctions in the context of long-haul full truckload transportation services. We develop a probabilistic optimization model that integrates the bid generation and pricing problems together with the routing of the carrier’s fleet. We propose two heuristic procedures that enable us to solve models with up to 400 auctioned loads. | The stochastic bid generation problem in combinatorial transportation auctions |
S0377221713005031 | This study investigates a two-echelon supply chain model for deteriorating inventory in which the retailer’s warehouse has a limited capacity. The system includes one wholesaler and one retailer and aims to minimise the total cost. The demand rate at the retailer is stock-dependent and, in case of shortages, the demand is partially backlogged. The retailer’s own warehouse (OW) has a limited capacity; the retailer can therefore rent a warehouse (RW) if needed, at a higher cost compared to OW. The optimisation is done from both the wholesaler’s and the retailer’s perspectives simultaneously. A genetic algorithm is devised to solve the problem. After developing the heuristic, a numerical example together with a sensitivity analysis is presented. Finally, some recommendations for future research are presented. | A two-echelon inventory model for a deteriorating item with stock-dependent demand, partial backlogging and capacity constraints |
S0377221713005043 | We present models of trucks and shovels in oil sand surface mines. The models are formulated to minimize the number of trucks for a given set of shovels, subject to throughput and ore grade constraints. We quantify and validate the nonlinear relation between a shovel’s idle probability (which determines the shovel’s productivity) and the number of trucks assigned to the shovel via a simple approximation, based on the theory of finite source queues. We use linearization to incorporate this expression into linear integer programs. We assume in our integer programs that each shovel is assigned a single truck size but we outline how one could account for multiple truck sizes per shovel in an approximate fashion. The linearization of shovel idle probabilities allows us to formulate more accurate truck allocation models that are easily solvable for realistic-sized problems. | A linear model for surface mining haul truck allocation incorporating shovel idle probabilities |
S0377221713005055 | We consider a deterministic n-job, single machine scheduling problem with the objective of minimizing the Mean Squared Deviation (MSD) of job completion times about a common due date (d). The MSD measure is non-regular and its value can decrease when one or more completion times increase. The MSD problem is connected with the Completion Time Variance (CTV) problem and has been proved to be NP-hard. This problem finds application in situations where uniformity of service is important. We present an exact algorithm of pseudo-polynomial complexity, using ideas from branch and bound and dynamic programming. We propose a dominance rule and also develop a lower bound on MSD. The dominance rule and lower bound are effectively combined and used in the development of the proposed algorithm. The search space is explored using the breadth first branching strategy. The asymptotic space complexity of the algorithm is O(nd). Irrespective of the version of the problem – tightly constrained, constrained or unconstrained – the proposed algorithm provides optimal solutions for problem instances of up to 1000 jobs under different due date settings (a small MSD illustration follows the table). | An exact algorithm to minimize mean squared deviation of job completion times about a common due date |
S0377221713005067 | Solutions of portfolio optimization problems are often influenced by a model misspecification or by errors due to approximation, estimation and incomplete information. The obtained results, recommendations for the risk and portfolio manager, should be then carefully analyzed. We shall deal with output analysis and stress testing with respect to uncertainty or perturbations of input data for static risk constrained portfolio optimization problems by means of the contamination technique. Dependence of the set of feasible solutions on the probability distribution rules out the straightforward construction of convexity-based global contamination bounds. Results obtained in our paper [Dupačová, J., & Kopa, M. (2012). Robustness in stochastic programs with risk constraints. Annals of Operations Research, 200, 55–74.] were derived for the risk and second order stochastic dominance constraints under suitable smoothness and/or convexity assumptions that are fulfilled, e.g. for the Markowitz mean–variance model. In this paper we relax these assumptions having in mind the first order stochastic dominance and probabilistic risk constraints. Local bounds for problems of a special structure are obtained. Under suitable conditions on the structure of the problem and for discrete distributions we shall exploit the contamination technique to derive a new robust first order stochastic dominance portfolio efficiency test. | Robustness of optimal portfolios under risk and stochastic dominance constraints |
S0377221713005079 | We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudopolynomial exact and approximation algorithms. | Algorithmic aspects of mean–variance optimization in Markov decision processes |
S0377221713005080 | The nesting problem is commonly encountered in the sheet metal, clothing and shoe-making industries. It is a combinatorial optimization problem in which a given set of irregular polygons is required to be placed on a rectangular sheet. The objective is to minimize the length of the sheet while having all polygons inside the sheet without overlap. In this study, a methodology that hybridizes cuckoo search and guided local search optimization techniques is proposed. To reduce the complexity of the nesting problem, pairwise clustering is introduced to group congruent polygons together in pairs. Pairwise clustering is done automatically to discover matched features among multiple present polygons. Computational experiments show that the implementation is robust and also reasonably fast. The proposed approach provides significantly better results than the previous state of the art on a wide range of benchmark data instances. | A new approach for sheet nesting problem using guided cuckoo search and pairwise clustering |
S0377221713005092 | This paper proposes an integer linear programming formulation for a simultaneous lot sizing and scheduling problem in a job shop environment. One of our realistic assumptions is that machines are flexible, enabling the production manager to change their working speeds. A number of valid inequalities are then developed based on problem structures. As the valid inequalities can help in reducing the non-optimal parts of the solution space, they are treated as cutting planes. The proposed cutting planes are used to solve the problem in (i) cut-and-branch, and (ii) branch-and-cut approaches. The performance of each cutting plane is investigated with CPLEX 12.2 on a set of randomly-generated test data. Then, some performance criteria are identified and the proposed cutting planes are ranked by the TOPSIS method. | Multi-level lot sizing and job shop scheduling with compressible process times: A cutting plane approach |
S0377221713005109 | This paper presents a new local search approach for solving continuous location problems. The main idea is to exploit the relation between the continuous model and its discrete counterpart. A local search is first conducted in the continuous space until a local optimum is reached. It then switches to a discrete space that represents a discretisation of the continuous model to find an improved solution from there. The process continues switching between the two problem formulations until no further improvement can be found in either. Thus, we may view the procedure as a new adaption of formulation space search. The local search is applied to the multi-source Weber problem where encouraging results are obtained. This local search is also embedded within Variable Neighbourhood Search producing excellent results. | A new local search for continuous location problems |
S0377221713005110 | We first study mean–variance efficient portfolios when there are no trading constraints and show that optimal strategies perform poorly in bear markets. We then assume that investors use a stochastic benchmark (linked to the market) as a reference portfolio. We derive mean–variance efficient portfolios when investors aim to achieve a given correlation (or a given dependence structure) with this benchmark. We also provide upper bounds on Sharpe ratios and show how these bounds can be useful for fraud detection. For example, it is shown that under some conditions it is not possible for investment funds to display a negative correlation with the financial market and to have a positive Sharpe ratio. All the results are illustrated in a Black–Scholes market. | Mean–variance optimal portfolios in the presence of a benchmark with applications to fraud detection |
S0377221713005122 | We define a general game which forms a basis for modelling situations of static search and concealment over regions with spatial structure. The game involves two players, the searching player and the concealing player, and is played over a metric space. Each player simultaneously chooses to deploy at a point in the space; the searching player receiving a payoff of 1 if his opponent lies within a predetermined radius r of his position, the concealing player receiving a payoff of 1 otherwise. The concepts of dominance and equivalence of strategies are examined in the context of this game, before focusing on the more specific case of the game played over a graph. Methods are presented to simplify the analysis of such games, both by means of the iterated elimination of dominated strategies and through consideration of automorphisms of the graph. Lower and upper bounds on the value of the game are presented and optimal mixed strategies are calculated for games played over a particular family of graphs. | Static search games played over graphs and general metric spaces |
S0377221713005134 | In sport tournaments in which teams are matched two at a time, it is useful for a variety of reasons to be able to quantify how important a particular game is. The need for such quantitative information has been addressed in the literature by several more or less simple measures of game importance. In this paper, we point out some of the drawbacks of those measures and we propose a different approach, which rather targets how decisive a game is with respect to the final victory. We give a definition of this idea of game decisiveness in terms of the uncertainty about the eventual winner prevailing in the tournament at the time of the game. As this uncertainty is strongly related to the notion of entropy of a probability distribution, our decisiveness measure is based on entropy-related concepts (an entropy-based toy computation is sketched after the table). We study the suggested decisiveness measure on two real tournaments, the 1988 NBA Championship Series and the UEFA 2012 European Football Championship (Euro 2012), and we show how well it agrees with what intuition suggests. Finally, we also use our decisiveness measure to objectively analyse the recent UEFA decision to expand the European Football Championship from 16 to 24 nations in the future, in terms of the overall attractiveness of the competition. | On the decisiveness of a game in a tournament |
S0377221713005146 | Firms often sell products in bundles to extract consumer surplus. While most bundling decisions studied in the literature are geared to integrated firms, we examine a decentralized supply chain where the suppliers retain decision rights. Using a generic distribution of customers’ reservation price we establish equilibrium solutions for three different bundling scenarios in a supply chain, and generate interesting insights for distributions with specific forms. We find that (i) in supply chain bundling the retailer’s margin equals the margin of each independent supplier, and it equals the combined margin when the suppliers are in a coalition, (ii) when the suppliers form a coalition to bundle their products the bundling gain in the supply chain is higher and retail price is lower than when the retailer bundles the products, (iii) the supply chain has more to gain from bundling relative to an integrated firm, (iv) the first-best supply chain bundling remains viable over a larger set of parameter values than those in the case of the integrated firm, (v) supplier led bundling is preferable to separate sales over a wider range of parameter values than if the retailer led the bundling, and (vi) if the reservation prices are uniformly distributed bundling can be profitable when the variable costs are low and valuations of the products are not significantly different from one another. For normally distributed reservation prices, we show that the bundling set is larger and the bundling gain is higher than that for a uniform distribution. | Bundling decisions in supply chains |
S0377221713005353 | There are several identical facilities in which precious or dangerous material is processed or stored. Since parts of this material may be diverted by some manager or employee of these facilities or since failures in the processing of the material may occur, an authorized organization inspects these facilities regularly at the beginning and at the end of some reference time interval. In order to shorten the time required for detecting such an illegal activity or failures, in addition some interim inspections are performed in these facilities during the reference time interval. The optimal distribution of these interim inspections in space and time poses considerable analytical problems since adversary strategies have to be taken into account. So far only special cases have been analysed successfully, but these results lead to a conjecture for the solution of the general case which is surprisingly simple in view of the complexity of this inspection problem. | Distributing inspections in space and time – Proposed solution of a difficult problem
S0377221713005365 | We formulate a noncooperative game to model competition for policyholders among non-life insurance companies, taking into account market premium, solvency level, market share and underwriting results. We study Nash equilibria and Stackelberg equilibria for the premium levels, and give numerical illustrations. | Competition among non-life insurers under solvency constraints: A game-theoretic approach |
S0377221713005377 | This paper seeks to enrich the literature of operations and supply chain management through the development of the concept of Reactivity and the introduction of related performance indicators. Reactivity denotes the capability to perform operationally and economically under unexpected conditions. A qualitative investigation was conducted to identify managerial practices useful for achieving Reactivity, while an empirical analysis tested the relevance of each practice as well as the economic benefits that Reactivity provides. The findings suggest that managers and practitioners should develop a Reactivity orientation because it benefits firms’ economic performance when an unexpected event occurs; in addition, several recommended managerial practices should be undertaken to ensure its correct implementation. | Recent developments on Reactivity: Theoretical conceptualization and empirical verification
S0377221713005389 | An effective sourcing strategy leads to cost savings and value added collaborations. For radical innovative product sourcing (RIPS), the exact nature and demand of products are highly uncertain. As such, knowledge sharing competences and production capacities of potential suppliers are prerequisite capabilities. The main aim is to investigate the impacts of these considerations on sourcing strategies through the development of two optimization models. Under the assumptions of single product sourcing, single period time window, uncertain demand and stochastic supply, KKT conditions are used to solve a simplified nonlinear optimization model analytically. The model is then expanded and particle swarm optimization is used to solve numerically the number of suppliers, order quantities and the level of relationship investments that maximize the value of sourcing. Through extensive scenario and sensitivity analyses, we provide some key insights. | Impacts of supplier knowledge sharing competences and production capacities on radical innovative product sourcing |
S0377221713005390 | We consider a two-period closed-loop supply chain (CLSC) game where a remanufacturer appropriates the returns’ residual value and decides whether to exclusively manage the end-of-use product collection or to outsource it to either a retailer or a third-service provider (3P). We determine that the manufacturer outsources the product collection only when an outsourcee performs environmentally and operationally better. On the outsourcees’ side, there is always an economic advantage in managing the product returns process exclusively, independently of returns rewards and operational performance. When outsourcing is convenient, a manufacturer always chooses a retailer if the outsourcees show equal performance. Overall, the manufacturer is more sensitive to environmental performance than to operational performance. Finally, there exists only a small region inside which outsourcing the collection process contributes to the triple bottom line. | A two-period game of a closed-loop supply chain
S0377221713005407 | In this paper I draw on research on the role of objects in problem solving collaboration to make a case for the conceptualisation of models as potential boundary objects. Such conceptualisation highlights the possibility that the models used in Soft OR interventions perform three roles with specific effects: transfer to develop a shared language, translation to develop shared meanings, and transformation to develop common interests. If these roles are carried out effectively, models enable those involved to traverse the syntactic, semantic and pragmatic boundaries encountered when tackling a problem situation of mutual concern, and help create new knowledge that has consequences for action. I illustrate these roles and associated effects via two empirical case vignettes drawn from an ongoing action research programme studying the impact of Soft OR interventions. Building on the insights generated by the case vignettes, I develop an analytical framework that articulates the dynamics of knowledge creation within Soft OR interventions. The framework can shed new light on a core aspect of Soft OR practice, especially with regards to the impact of models on the possibilities for action they can afford to those involved. I conclude with a discussion of the prescriptive value of the framework for research into the evaluation of Soft OR interventions, and its implications for the conduct of Soft OR practice. | Rethinking Soft OR interventions: Models as boundary objects |
S0377221713005419 | Mathematical programming has been proposed in the literature as an alternative technique to simulating a special class of Discrete Event Systems. There are several benefits to using mathematical programs for simulation, such as the possibility of performing sensitivity analysis and the ease of better integrating the simulation and optimisation. However, applications are limited by the usually long computational times. This paper proposes a time-based decomposition algorithm that splits the mathematical programming model into a number of submodels that can be solved sequentially to make the mathematical programming approach viable for long running simulations. The number of required submodels is the solution of an optimisation problem that minimises the expected time for solving all of the submodels. In this way, the solution time becomes a linear function of the number of simulated entities. | Mathematical programming time-based decomposition algorithm for discrete event simulation |
S0377221713005420 | A new methodology of making a decision on an optimal investment in several projects is proposed. The methodology is based on experts’ evaluations and consists of three stages. In the first stage, Kaufmann’s expertons method is used to reduce a possibly large number of applicants for credit. Using the combined expert data, the credit risk level is determined for each project. Only the projects with low risks are selected. In the second stage, the model of refined decisions is constructed using the new modification of the previously proposed possibilistic discrimination analysis method (Sirbiladze, Khutsisvili, & Dvalishvili, 2010). This stage is based on expert knowledge and experience. The projects selected in the first stage are compared in order to identify high-quality ones among them. The possibility levels of experts’ preferences are calculated and the projects are ranked. Finally, the third stage deals with the bicriteria discrete optimization problem whose solution makes it possible to arrange the most advantageous investment in several projects simultaneously. The decision on funding the selected projects is made and an optimal distribution of the allocated investment amount among them is provided. The efficiency of the proposed methodology is illustrated by an example. | Multistage decision-making fuzzy methodology for optimal investments based on experts’ evaluations |
S0377221713005432 | We consider a supply chain in which orders and lead times are linked endogenously, as opposed to assuming lead times are exogenous. This assumption is relevant when a retailer’s orders are produced by a supplier with finite capacity and replenished when the order is completed. The retailer faces demands that are correlated over time – either positively or negatively – which may, for example, be induced by a pricing or promotion policy. The auto-correlation in demand affects the order stream placed by the retailer onto the supplier, and this in turn influences the resulting lead times seen by the retailer. Since these lead times also determine the retailer’s orders and its safety stocks (which the retailer must set to cover lead time demand), there is a mutual dependency between orders and lead times. The inclusion of endogenous lead times and autocorrelated demand represents a better fit with real-life situations. However, it poses some additional methodological issues, compared to assuming exogenous lead times or stationary demand processes that are independent over time. By means of a Markov chain analysis and matrix analytic methods, we develop a procedure to determine the distribution of lead times and inventories, that takes into account the correlation between orders and lead times. Our analysis shows that negative autocorrelation in demand, although more erratic, improves both lead time and inventory performance relative to IID demand. Positive correlation makes matters worse than IID demand. Due to the endogeneity of lead times, these effects are much more pronounced and substantial error may be incurred if this endogeneity is ignored. | Coordinating lead times and safety stocks under autocorrelated demand |
S0377221713005444 | Recent literature shows that the arrival and discharge processes in hospital intensive care units do not satisfy the Markovian property, that is, interarrival times and length of stay tend to have a long tail. In this paper we develop a generalised loss network framework for capacity planning of a perinatal network in the UK. Decomposing the network by hospitals, each unit is analysed with a GI/G/c/0 overflow loss network model. A two-moment approximation is performed to obtain the steady state solution of the GI/G/c/0 loss systems, and expressions for rejection probability and overflow probability have been derived. Using the model framework, the number of required cots can be estimated based on the rejection probability at each level of care of the neonatal units in a network. The generalisation ensures that the model can be applied to any perinatal network for renewal arrival and discharge processes. | Capacity planning of a perinatal network with generalised loss network model with overflow |
S0377221713005456 | We develop a delay time model (DTM) to determine the optimal maintenance policy under a novel assumption: postponed replacement. Delay time is defined as the time lapse from the occurrence of a defect up until failure. Inspections can be performed to monitor the system state at non-negligible cost. Most works in the literature assume that instantaneous replacement is enforced as soon as a defect is detected at an inspection. In contrast, we relax this assumption and allow replacement to be postponed for an additional time period. The key motivation is to achieve better utilization of the system’s useful life, and reduce replacement costs by providing a sufficient time window to prepare maintenance resources. We model the preventive replacement cost as a non-increasing function of the postponement interval. We then derive the optimal policy under the modified assumption for a system with exponentially distributed defect arrival time, both for a deterministic delay time and for a more general random delay time. For the settings with a deterministic delay time, we also establish an upper bound on the cost savings that can be attained. A numerical case study is presented to benchmark the benefits of our modified assumption against conventional instantaneous replacement discussed in the literature. | Optimal policies for a delay time model with postponed replacement |
S0377221713005468 | In distribution problems, and specifically in bankruptcy issues, the Proportional (P) and the Egalitarian (EA) divisions are two of the most popular ways to resolve the conflict. Nonetheless, when using the egalitarian division, an agent may receive more than her claim. We propose a compromise between the proportional and the egalitarian approaches by considering the restriction that no one receives more than her claim. We show that the most egalitarian compromise fulfilling this restriction ensures a minimum amount to each agent. We also show that this compromise can be interpreted as a process that works in two steps: first, all agents receive an equal share up to the smallest claim if possible (egalitarian distribution), and then the remaining estate (if any) is allocated proportionally to the remaining claims (proportional distribution). Finally, we show that the recursive application of this process ends at the Constrained Equal Awards (CEA) solution. | A proportional approach to claims problems with a guaranteed minimum
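A short sketch of the two-step process described in this abstract, under our own reading of it (the function name and edge-case handling are assumptions, not the authors' formal rule):

```python
def egalitarian_then_proportional(estate, claims):
    # Step 1: equal shares, capped at the smallest claim (egalitarian part).
    n = len(claims)
    equal_share = min(estate / n, min(claims))
    awards = [equal_share] * n
    # Step 2: split any remaining estate in proportion to the remaining claims.
    remainder = estate - equal_share * n
    remaining = [c - equal_share for c in claims]
    if remainder > 0 and sum(remaining) > 0:
        total = sum(remaining)
        awards = [a + remainder * r / total for a, r in zip(awards, remaining)]
    return awards

print(egalitarian_then_proportional(100.0, [30.0, 70.0, 90.0]))  # [30.0, 34.0, 36.0]
```

Note that no agent receives more than her claim: the smallest claimant is fully served in step 1 and excluded from the proportional split in step 2.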
S0377221713005481 | This paper considers the use of scenarios to treat uncertain attribute evaluations in the outranking methods. The scenario-based approach allows the decision maker to think deterministically about the problem by attaching causal links to a small number of potential outcomes, instead of using probability distributions. The scenario approach can be expressed as a simplified version of the comprehensive but practically complex “distributive” outranking method of d’Avignon and Vincke. Using a scenario approach has distinct practical advantages, but also presents the inherent danger that meaningful information is ignored. The extent of this danger is assessed using a simulation experiment, where it is found to be of a magnitude that is non-trivial but still potentially acceptable for certain decision contexts. | Outranking under uncertainty using scenarios |
S0377221713005493 | We consider a short sea fuel oil distribution problem where an oil company is responsible for the routing and scheduling of ships between ports such that the demand for various fuel oil products is satisfied during the planning horizon. The inventory management has to be considered at the demand side only, and the consumption rates are given and assumed to be constant within the planning horizon. The objective is to determine distribution policies that minimize the routing and operating costs, while the inventory levels are maintained within their limits. We propose an arc-load flow formulation for the problem which is tightened with valid inequalities. In order to obtain good feasible solutions for planning horizons of several months, we compare different hybridization strategies. Computational results are reported for real small-size instances. | Hybrid heuristics for a short sea inventory routing problem |
S0377221713005511 | This paper aims to find a faster method for optimal solutions of Feng et al.’s int m –int n decision making scheme. We first give theoretical characterizations of optimal decision sets. Then we develop a pruning method which filters out those objects that cannot be elements of any optimal decision sets in the beginning. Experimental results have shown that our method has higher efficiency in computing the optimal solutions of this scheme, particularly when we are processing soft sets with a great quantity of data. | Pruning method for optimal solutions of int m –int n decision making scheme |
S0377221713005523 | The ship placement problem constitutes a daily challenge for planners in tide river harbours. In essence, it entails positioning a set of ships into as few lock chambers as possible while satisfying a number of general and specific placement constraints. These constraints make the ship placement problem different from traditional 2D bin packing. A mathematical formulation for the problem is presented. In addition, a decomposition model is developed which allows for computing optimal solutions in a reasonable time. A multi-order best fit heuristic for the ship placement problem is introduced, and its performance is compared with that of the left-right-left-back heuristic. Experiments on simulated and real-life instances show that the multi-order best fit heuristic beats the other heuristics by a landslide, while maintaining comparable calculation times. Finally, the new heuristic’s optimality gap is small, while it clearly outperforms the exact approach with respect to calculation time. The formulation’s notation comprises the set of ships N = {1, 2, …, n}, the set of available lockages (bins) M = {1, 2, …, m}, chamber and ship dimensions, minimal safety distances between ships and to the chamber walls, and binary variables encoding the relative positions of ships, their mooring relations, and their assignment to lockages. | Exact and heuristic methods for placing ships in locks
S0377221713005535 | There are some specific features of the non-radial data envelopment analysis (DEA) models which cause problems for the returns to scale measurement. In the scientific literature on DEA, some methods were suggested to deal with the returns to scale measurement in the non-radial DEA models. These methods are based on using Strong Complementary Slackness Conditions from optimization theory. However, our investigation and computational experiments show that such methods increase computational complexity significantly and may generate, as optimal, solutions that contradict optimization theory. In this paper, we propose and substantiate a direct method for the returns to scale measurement in the non-radial DEA models. Our computational experiments documented that the proposed method works reliably and efficiently on real-life data sets. | Measurement of returns to scale using non-radial DEA models
S0377221713005559 | The Steiner tree problem (STP) is one of the most popular combinatorial optimization problems with various practical applications. In this paper, we propose a Breakout Local Search (BLS) algorithm for an important generalization of the STP: the Steiner tree problem with revenue, budget and hop constraints (STPRBH), which consists of determining a subtree of a given undirected graph which maximizes the collected revenues, subject to both budget and hop constraints. Starting from a probabilistically constructed initial solution, BLS uses a Neighborhood Search (NS) procedure based on several specifically designed move operators for local optimization, and employs an adaptive diversification strategy to escape from local optima. The diversification mechanism is implemented by adaptive perturbations, guided by dedicated information of discovered high-quality solutions. Computational results based on 240 benchmarks show that BLS produces competitive results with respect to several previous approaches. For the 56 most challenging instances with unknown optimal results, BLS succeeds in improving 49 best known results and matching one within reasonable time. For the 184 instances which have been solved to optimality, BLS can also match 167 optimal results. | Breakout local search for the Steiner tree problem with revenue, budget and hop constraints
S0377221713005560 | This paper proposes a shape-restricted nonparametric quantile regression to estimate the τ-frontier, which acts as a benchmark for whether a decision making unit achieves top τ efficiency. This method adopts a two-step strategy: first, identifying fitted values that minimize an asymmetric absolute loss under the nondecreasing and concave shape restriction; second, constructing a nondecreasing and concave estimator that links these fitted values. This method makes no assumption on the error distribution and the functional form. Experimental results on some artificial data sets clearly demonstrate its superiority over the classical linear quantile regression. We also discuss how to enforce constraints to avoid quantile crossings between multiple estimated frontiers with different values of τ. Finally this paper shows that this method can be applied to estimate the production function when one has some prior knowledge about the error term. | Nonparametric quantile frontier estimation under shape restriction |
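The "asymmetric absolute loss" minimized in the first stage is commonly known as the pinball (check) loss; a small sketch, with names of our choosing:

```python
def pinball_loss(y, y_hat, tau):
    # Under-predictions are weighted by tau, over-predictions by (1 - tau),
    # so minimizing the average loss targets the tau-quantile.
    u = y - y_hat
    return tau * u if u >= 0 else (tau - 1.0) * u

# For tau = 0.9, under-shooting is penalized 9x more than over-shooting:
print(pinball_loss(10.0, 8.0, 0.9), pinball_loss(10.0, 12.0, 0.9))  # 1.8 0.2
```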
S0377221713005572 | We consider here a NP-hard problem related to the Routing and Wavelength Assignment (RWA) problem in optical networks, dealing with Scheduled Lightpath Demands (SLDs). An SLD is a connection demand between two nodes of the network, during a certain time. Given a set of SLDs, we want to assign a lightpath, i.e. a routing path and a wavelength, to each SLD, so that the total number of required wavelengths is minimized. The constraints are the following: a same wavelength must be assigned all along the edges of the routing path of any SLD; at any time, a given wavelength on a given edge of the network cannot be used to satisfy more than one SLD. To solve this problem, we design a post-optimization method improving the solutions provided by a heuristic. The experimental results show that this post-optimization method is quite efficient to reduce the number of necessary wavelengths. | A post-optimization method for the routing and wavelength assignment problem applied to scheduled lightpath demands |
S0377221713005584 | The quay crane scheduling problem plays an important role in the paradigm of port container terminal management because it closely relates to vessel berthing time. In this paper, we focus on the study of a special strategy for the cluster-based quay crane scheduling problem that forces quay cranes to move unidirectionally during the scheduling. The scheduling problem arising when this strategy is applied is called the unidirectional quay crane scheduling problem in the literature. Unlike other studies that attempt to construct more sophisticated search algorithms, in this paper we seek a more compact mathematical formulation of the unidirectional cluster-based quay crane scheduling problem that can be easily solved by a standard optimization solver. To assess the performance of the proposed model, commonly accepted benchmark suites are used and the results indicate that the proposed model outperforms the state-of-the-art algorithms designed for the unidirectional cluster-based quay crane scheduling problem. | An effective mathematical formulation for the unidirectional cluster-based quay crane scheduling problem
S0377221713005596 | In this paper, we investigate adaptive linear combinations of graph coloring heuristics with a heuristic modifier to address the examination timetabling problem. We invoke a normalisation strategy for each parameter in order to generalise the specific problem data. Two graph coloring heuristics were used in this study (largest degree and saturation degree). A score for the difficulty of assigning each examination was obtained from an adaptive linear combination of these two heuristics and examinations in the list were ordered based on this value. The examinations with the score value representing the higher difficulty were chosen for scheduling based on two strategies. We tested for single and multiple heuristics with and without a heuristic modifier with different combinations of weight values for each parameter on the Toronto and ITC2007 benchmark data sets. We observed that the combination of multiple heuristics with a heuristic modifier offers an effective way to obtain good solution quality. Experimental results demonstrate that our approach delivers promising results. We conclude that this adaptive linear combination of heuristics is a highly effective method and simple to implement. | Adaptive linear combination of heuristic orderings in constructing examination timetables |
S0377221713005614 | We consider a p-norm linear discrimination model that generalizes the model of Bennett and Mangasarian (1992) and reduces to a linear programming problem with p-order cone constraints. The proposed approach for handling linear programming problems with p-order cone constraints is based on reformulation of p-order cone optimization problems as second order cone programming (SOCP) problems when p is rational. Since such reformulations typically lead to SOCP problems with large numbers of second order cones, an “economical” representation that minimizes the number of second order cones is proposed. A case study illustrating the developed model on several popular data sets is conducted. | On p-norm linear discrimination |
S0377221713005626 | The aggregation of individuals’ preferences into a consensus ranking is a group ranking problem which has been widely utilized in various applications, such as decision support systems, recommendation systems, and voting systems. Gathering preference comparisons and aggregating them to reach a consensus is a conventional issue. For example, b > c ⩾ d ⩾ a indicates that b is favorable to c, and c (d) is somewhat favorable but not fully favorable to d (a), where > and ⩾ are comparators, and a, b, c, and d are items. Recently, a new type of ranking model was proposed to provide temporal orders of items. The order b&c→a means that b and c can occur simultaneously and both occur before a. Although this model can derive the order ranking of items, knowledge about quantity-related items is also important for approaching more real-life circumstances. For example, when enterprises or individuals handle their portfolios in financial management, two considerations, the sequences and the amount of money for investment objects, should be raised initially. In this study, we propose a model for discovering consensus sequential patterns with quantitative linguistic terms. Experiments using synthetic and real datasets showed the model’s computational efficiency, scalability, and effectiveness. | A novel group ranking model for revealing sequence and quantity knowledge
S0377221713005638 | Multimodal transportation offers an advanced platform for more efficient, reliable, flexible, and sustainable freight transportation. Planning such a complicated system provides interesting areas in Operations Research. This paper presents a structured overview of the multimodal transportation literature from 2005 onward. We focus on the traditional strategic, tactical, and operational levels of planning, where we present the relevant models and their developed solution techniques. We conclude our review paper with an outlook to future research directions. | Multimodal freight transportation planning: A literature review |
S0377221713005651 | Geometric branch-and-bound techniques are well-known solution algorithms for non-convex continuous global optimization problems with box constraints. Several approaches can be found in the literature differing mainly in the bounds used. The aim of this paper is to extend geometric branch-and-bound methods to mixed integer optimization problems, i.e. to objective functions with some continuous and some integer variables. Mixed-integer non-linear and non-convex optimization problems are extremely hard, containing several classes of NP-hard problems as special cases. We identify for which type of mixed integer non-linear problems our method can be applied efficiently, derive several bounding operations and analyze their rates of convergence theoretically. Moreover, we show that the accuracy of any algorithm for solving the problem with fixed integer variables can be transferred to the mixed integer case. Our results are demonstrated theoretically and experimentally using the truncated Weber problem and the p-median problem. For both problems we succeed in finding exact optimal solutions. | A solution algorithm for non-convex mixed integer optimization problems with only few continuous variables |
S0377221713005663 | In this paper, we study the optimal policies of retailers who operate their inventory with a single period model (i.e., the newsvendor model) under a free shipping offer in which a fixed shipping fee is waived if the order quantity is greater than or equal to a given minimum quantity. Zhou et al. (2009) explored this model, and we further develop their analysis of the optimal ordering policies, which they did not sufficiently elaborate. Based on this investigation, we extend the base model to address a practically important aspect of inventory management: the case where the exact distribution function of demand is not available. We incorporate this aspect into the base model and present the optimal policies for the extended model with a numerical example. Finally, we conduct extensive numerical experiments to evaluate the performance of the extended model and analyze the impacts of the minimum free shipping quantity and the fixed shipping fee on the performance. | A minimax distribution-free procedure for a newsvendor problem with free shipping
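The free-shipping cost structure described in this abstract can be written as a simple threshold function; a sketch with hypothetical parameter names:

```python
def order_cost(q, unit_cost, ship_fee, q_free):
    # The fixed shipping fee applies only when the order quantity is below
    # the free-shipping threshold q_free.
    return unit_cost * q + (ship_fee if q < q_free else 0.0)

# The discontinuity at q_free is what makes the ordering policy non-trivial:
print(order_cost(40, 2.0, 15.0, 50), order_cost(50, 2.0, 15.0, 50))  # 95.0 100.0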
S0377221713005675 | Container terminals pay more and more attention to the service quality of inland transport modes such as trucks, trains and barges. Truck appointment systems are a common approach to reduce truck turnaround times. This paper provides a tool to use the truck appointment system to increase not only the service quality of trucks, but also of trains, barges and vessels. We propose a mixed integer linear programming model to determine the number of appointments to offer with regard to the overall workload and the available handling capacity. The model is based on a network flow representation of the terminal and aims to minimize overall delays at the terminal. It simultaneously determines the number of truck appointments to offer and allocates straddle carriers to different transport modes. Numerical experiments, conducted on actual data, quantify the benefits of this combined solution approach. Discrete-event simulation validates the results obtained by the optimization model in a stochastic environment. | Benefits of a truck appointment system on the service quality of inland transport modes at a multimodal container terminal
S0377221713005687 | We consider a manufacturing system with product recovery. The system manufactures a new product as well as remanufactures the product from old, returned items. The items remanufactured with the returned products are as good as new and satisfy the same demand as the new item. The demand rate for the new item and the return rate for the old item are deterministic and constant. The relevant costs are the holding costs for the new item and the returned item, and the fixed setup costs for both manufacturing and remanufacturing. The objective is to determine the lot sizes and production schedule for manufacturing and remanufacturing so as to minimize the long-run average cost per unit time. We first develop a lower bound among all classes of policies for the problem. We then show that the optimal integer ratio policy for the problem obtains a solution whose cost is at most 1.5% more than the lower bound. | Heuristics with guaranteed performance bounds for a manufacturing system with product recovery |
S0377221713005778 | We present a novel integer programming model for analyzing inter-terminal transportation (ITT) in new and expanding sea ports. ITT is the movement of containers between terminals (sea, rail or otherwise) within a port. ITT represents a significant source of delay for containers being transshipped, which costs ports money and affects a port’s reputation. Our model assists ports in analyzing the impact of new infrastructure, the placement of terminals, and ITT vehicle investments. We provide analysis of ITT at two ports, the port of Hamburg, Germany and the Maasvlakte 1 & 2 area of the port of Rotterdam, The Netherlands, in which we solve a vehicle flow combined with a multi-commodity container flow on a congestion based time–space graph to optimality. We introduce a two-step solution procedure that computes a relaxation of the overall ITT problem in order to find solutions faster. Our graph contains special structures to model the long term loading and unloading of vehicles, and our model is general enough to model a number of important real-world aspects of ITT, such as traffic congestion, penalized late container delivery, multiple ITT transportation modes, and port infrastructure modifications. We show that our model can scale to real-world sizes and provide ports with important information for their long term decision making. | A mathematical model of inter-terminal transportation |
S0377221713005791 | In this paper, we review recent advances in the distributional analysis of mixed integer linear programs with random objective coefficients. Suppose that the probability distribution of the objective coefficients is incompletely specified and characterized through partial moment information. Conic programming methods have been recently used to find distributionally robust bounds for the expected optimal value of mixed integer linear programs over the set of all distributions with the given moment information. These methods also provide additional information on the probability that a binary variable attains a value of 1 in the optimal solution for 0–1 integer linear programs. This probability is defined as the persistency of a binary variable. In this paper, we provide an overview of the complexity results for these models, conic programming formulations that are readily implementable with standard solvers and important applications of persistency models. The main message that we hope to convey through this review is that tools of conic programming provide important insights in the probabilistic analysis of discrete optimization problems. These tools lead to distributionally robust bounds with applications in activity networks, vertex packing, discrete choice models, random walks and sequencing problems, and newsvendor problems. | Distributionally robust mixed integer linear programs: Persistency models with applications |
S0377221713005808 | In the Corridor Allocation Problem, we are given n facilities to be arranged along a corridor. The arrangements on either side of the corridor should start from a common point on the left end of the corridor. In addition, no space is allowed between two adjacent facilities. The problem is motivated by applications such as the arrangement of rooms in office buildings, hospitals, shopping centers or schools. Tabu search and simulated annealing algorithms are presented to minimize the sum of weighted distances between every pair of facilities. The algorithms are evaluated on several instances of different sizes either randomly generated or available in the literature. Both algorithms reached the optimal (when available) or best-known solutions of the instances with n ⩽30. For larger instances with size 42⩽ n ⩽70, the simulated annealing implementation obtained smaller objective values, while requiring a smaller number of function evaluations. | Simulated annealing and tabu search approaches for the Corridor Allocation Problem |
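A sketch of the objective these heuristics minimize, under the usual reading of the problem (facilities packed without gaps on two rows from a common left origin; taking the distance as the horizontal distance between facility centres is our assumption, since some formulations add a corridor-width term):

```python
def cap_objective(order, side, lengths, w):
    # Pack facilities in sequence onto their assigned row and record centres.
    pos, centre = [0.0, 0.0], {}
    for f in order:
        s = side[f]
        centre[f] = pos[s] + lengths[f] / 2.0
        pos[s] += lengths[f]
    # Weighted sum of centre-to-centre distances over all facility pairs.
    n = len(lengths)
    return sum(w[i][j] * abs(centre[i] - centre[j])
               for i in range(n) for j in range(i + 1, n))

lengths = [4.0, 2.0, 6.0]
w = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
print(cap_objective([0, 1, 2], [0, 1, 0], lengths, w))  # 20.0
```

Both metaheuristics search over exactly these two decision sets: the facility sequence and the row assignment.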
S0377221713005924 | Recent advances in Stein’s lemma imply that under elliptically symmetric distributions all rational investors will select a portfolio which lies on Markowitz’ mean–variance efficient frontier. This paper describes extensions to Stein’s lemma for the case when a random vector has the multivariate extended skew-Student distribution. Under this distribution, rational investors will select a portfolio which lies on a single mean–variance–skewness efficient hyper-surface. The same hyper-surface arises under a broad class of models in which returns are defined by the convolution of a multivariate elliptically symmetric distribution and a multivariate distribution of non-negative random variables. Efficient portfolios on the efficient surface may be computed using quadratic programming. | Mean–variance–skewness efficient surfaces, Stein’s lemma and the multivariate extended skew-Student distribution |
S0377221713005936 | Conventional data envelopment analysis (DEA) models only consider the inputs supplied to the system and the outputs produced from the system in measuring efficiency, ignoring the operations of the internal processes. The results thus obtained sometimes are misleading. This paper discusses the efficiency measurement and decomposition of general multi-stage systems, where each stage consumes exogenous inputs and intermediate products (produced from the preceding stage) to produce exogenous outputs and intermediate products (for the succeeding stage to use). A relational model is developed to measure the system and stage efficiencies at the same time. By transforming the system into a series of parallel structures, the system efficiency is decomposed into the product of a modification of the stage efficiencies. Efficiency decomposition enables decision makers to identify the stages that cause the inefficiency of the system, and to effectively improve the performance of the system. An example of an electricity service system is used to explain the idea of efficiency decomposition. | Efficiency decomposition for general multi-stage systems in data envelopment analysis |
S0377221713005948 | We analyze retail space-exchange problems where two or more retailers exchange their excess retail spaces to improve the utilization of their space resource. We first investigate the two-retailer space exchange problem. In order to entice both retailers with different bargaining powers to exchange their spaces, we use the generalized Nash bargaining scheme to allocate the total profit surplus between the two retailers. Next, we consider the space-exchange problem involving three or more retailers, and construct a cooperative game in characteristic function form. We show that the game is essential and superadditive, and also prove that the core is non-empty. Moreover, in order to find a unique allocation scheme that ensures the stability of the grand coalition, we propose a new approach to compute a weighted Shapley value that satisfies the core conditions and also reflects retailers’ bargaining powers. Our analysis indicates that the space exchange by more retailers can result in a higher system-wide profit surplus and thus a higher allocation to each retailer under a fair scheme. | Cooperative game analysis of retail space-exchange problems |
S0377221713005961 | A comparison of regime-switching approaches to modeling the stochastic behavior of temperature with an aim to the valuation of temperature-based weather options is presented. Four models are developed. Three of these are two-state Markov regime-switching models and the other is a single-regime model. The regime-switching models are generated from a combination of different underlying processes for the stochastic component of temperature. In Model 1, one regime is governed by a mean-reverting process and the other by a Brownian motion. In Model 2, each regime is governed by a Brownian motion. In Model 3, each regime is governed by a mean-reverting process in which the mean and speed of the mean-reversion remain the same, but only the volatility switches between the states. Model 4 is a single-regime model, where the temperature dynamics are governed by a single mean-reverting process. All four models are utilized to determine the expected heating degree days (HDD) and cooling degree days (CDD), which play a crucial role in the valuation of weather options. A four-year temperature dataset from Toronto, Canada, is used for the analysis. Results demonstrate that Model 1 captures the temperature dynamics more accurately than the other three models. Model 1 is then used to price the monthly call options based on a range of strike HDD. | A comparison of regime-switching temperature modeling approaches for applications in weather derivatives |
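HDD and CDD are computed from daily average temperatures against a reference level; a minimal sketch using the common 18 °C reference (the paper's exact reference value is an assumption here):

```python
def degree_days(daily_avg_temps, t_ref=18.0):
    # HDD accumulates shortfalls below the reference, CDD excesses above it.
    hdd = sum(max(0.0, t_ref - t) for t in daily_avg_temps)
    cdd = sum(max(0.0, t - t_ref) for t in daily_avg_temps)
    return hdd, cdd

print(degree_days([12.5, 16.0, 21.0, 19.5]))  # (7.5, 4.5)
```

A weather option's payoff is then typically a function of the accumulated HDD or CDD against a strike level, which is why accurate expected degree days drive the valuation.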
S0377221713005973 | Despite advances in retail point-of-sale (POS) data sharing, retailers’ suppliers struggle to effectively use POS data to improve their fulfillment planning processes. The challenge lies in predicting retailer orders. We present evidence that retail echelon inventory processes translate into a long-run balance or equilibrium between orders and POS, which we refer to as the inventory balance effect, allowing for more accurate order forecasting. Based on the inventory balance effect, this research prescribes a forecasting approach which simultaneously uses both sources of information (retailer order history and POS data) to predict retailer orders to suppliers. Using data from a consumable product category, this approach is shown to outperform approaches based singularly on order or POS data, by up to 125%. The strength of this novel approach – significantly improved forecast accuracy with minimal additional analysis – makes it a candidate for widespread adoption in retail supply chain collaborative planning and forecasting initiatives with corresponding impact on fulfillment performance and related operating costs. | Predicting retailer orders with POS and order data: The inventory balance effect
S0377221713005985 | We study the acquisition and production planning problem for a hybrid manufacturing/remanufacturing system with core acquisition at two (high and low) quality conditions. We model the problem as a stochastic dynamic programming, derive the optimal dynamic acquisition pricing and production policy, and analyze the influences of system parameters on the acquisition prices and production quantities. The production cost differences among remanufacturing high- and low-quality cores and manufacturing new products are found to be critical for the optimal production and acquisition pricing policy: the acquisition price of high-quality cores is increasing in manufacturing and remanufacturing cost differences, while the acquisition price of low-quality cores is decreasing in the remanufacturing cost difference between high- and low-quality cores and increasing in manufacturing and remanufacturing cost differences; the optimal remanufacturing/manufacturing policy follows a base-on-stock pattern, which is characterized by some crucial parameters dependent on these cost differences. | Optimal acquisition and production policy in a hybrid manufacturing/remanufacturing system with core acquisition at different quality levels |
S0377221713005997 | In this paper we introduce an extension of the well known Rural Postman Problem, which combines arc routing with profits and facility location. Profitable arcs must be selected, facilities located at both end-points of the selected arcs, and a tour identified so as to maximize the difference between the profit collected along the arcs and the cost of traversing the arcs and installing the facilities. We analyze properties of the problem, present a mathematical programming formulation and a branch-and-cut algorithm. In an extensive computational experience the algorithm could solve instances with up to 140 vertices and 190 arcs and up to 50 vertices and 203 arcs. | The directed profitable location Rural Postman Problem |
S0377221713006000 | We analyze the problem of technology selection and capacity investment for electricity generation in a competitive environment under uncertainty. Adopting a Nash-Cournot competition model, we consider the marginal cost as the uncertain parameter, although the results can be easily generalized to other sources of uncertainty such as a load curve. In the model, firms make three different decisions: (i) the portfolio of technologies, (ii) each technology’s capacity and (iii) the technology’s production level for every scenario. The decisions related to the portfolio and capacity are ex-ante and the production level is ex-post to the realization of uncertainty. We discuss open and closed-loop models, with the aim to understand the relationship between different technologies’ cost structures and the portfolio of generation technologies adopted by firms in equilibrium. For a competitive setting, to the best of our knowledge, this paper is the first not only to explicitly discuss the relation between costs and generation portfolio but also to allow firms to choose a portfolio of technologies. We show that portfolio diversification arises even with risk-neutral firms and technologies with different cost expectations. We also investigate conditions on the probability and cost under which different equilibria of the game arise. | Technology selection and capacity investment under uncertainty |
S0377221713006012 | A novel optimal preventive maintenance policy for a cold standby system consisting of two components and a repairman is described herein. The repairman is responsible for repairing either failed component and maintaining the working components under certain guidelines. To model the operational process of the system, some reasonable assumptions are made and all times involved in the assumptions are considered to be arbitrary and independent. Under these assumptions, all system states and transition probabilities between them are analyzed based on semi-Markov theory and a regenerative point technique. Markov renewal equations are constructed with the convolution of the cumulative distribution function of system time in each state and the corresponding transition probability. By using the Laplace transform to solve these equations, the mean time from the initial state to system failure is derived. The optimal preventive maintenance policy, which provides the optimal preventive maintenance cycle, is identified by maximizing the mean time from the initial state to system failure, and is determined in the form of a theorem. Finally, a numerical example and simulation experiments are shown which validate the effectiveness of the policy. The paper’s notation covers the component lifetime, repair-time and maintenance-time distributions and their rates, the preventive maintenance (PM) cycle and its optimum, the state space E = {S_i | i = 0, 1, 2, 3} with its transition and failure-time distribution functions, the convolution operator, and Laplace transforms. | A novel optimal preventive maintenance policy for a cold standby system based on semi-Markov theory
S0377221713006024 | Product family design is generally characterized by two types of approaches: module-based and scale-based. While the former aims to enable product variety based on module configuration, the latter is to variegate product design by scaling up or down certain design parameters. The prevailing practice is to treat module configuration and scaling design as separate decisions or aggregate two design problems as a single-level, all-in-one optimization problem. In practice, optimization of scaling variables is always enacted within a specific modular platform; and meanwhile an optimal module configuration depends on how design parameters are to be scaled. The key challenge is how to deal with explicitly the coupling of these two design optimization problems. This paper formulates a Stackelberg game theoretic model for joint optimization of product family configuration and scaling design, in which a bilevel decision structure reveals coupled decision making between module configuration and parameter scaling. A bilevel mixed 0–1 non-linear programming model is developed and solved, comprising an upper-level optimization problem and a lower-level optimization problem. The upper level seeks for an optimal configuration of modules and module attributes by maximizing the shared surplus of an entire product family. The lower level entails parametric optimization of attribute values for optimal technical performance of each individual module. A case study of electric motors demonstrates that the bilevel joint optimization model excels in leveraging optimal scaling in conjunction with optimal module configuration, which is advantageous over the existing paradigm of product family scaling design that cannot change the product family configuration. | Joint optimization of product family configuration and scaling design by Stackelberg game |
S0377221713006048 | Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. | Selecting green suppliers based on GSCM practices: Using fuzzy TOPSIS applied to a Brazilian electronics company |
S0377221713006061 | This paper proposes an easily implementable, scalable decomposition heuristic for determining near optimal base stocks in two-level general inventory systems. In this heuristic, the general system is decomposed into assembly systems—one for each end product. For these assembly systems, the base-stock levels are calculated separately, taking into account risk-pooling effects for the common components. Our numerical analyses yield two main insights: First, the base-stock levels determined by the heuristic are close-to-optimal. Second, considerable improvements can be obtained compared to common-sense heuristics. | Determining near optimal base-stock levels in two-stage general inventory systems |
S0377221713006073 | Elevated fuel loads are contributing to an increase in the occurrence of, and area burned by, severe wildfires in many regions across the globe. In an attempt to reverse this trend, fire and land management agencies are investing in extensive fuel management programs. However, the planning of fuel treatment activities poses complicated decision-making problems with spatial and temporal dimensions. Here, we present a mixed integer programming model for spatially explicit multi-period scheduling of fuel treatments. The model provides a flexible framework that allows for landscape heterogeneity and a range of ecological and operational considerations and constraints. The model’s functionality is demonstrated on a series of hypothetical test landscapes and a number of implementation issues are discussed. | A spatial optimisation model for multi-period landscape level fuel management to mitigate wildfire impacts |
S0377221713006085 | To improve ATMs’ cash demand forecasts, this paper advocates the prediction of cash demand for groups of ATMs with similar day-of-the-week cash demand patterns. We first clustered ATM centers into ATM clusters having similar day-of-the-week withdrawal patterns. To retrieve “day-of-the-week” withdrawal seasonality parameters (effect of a Monday, etc.) we built a time series model for each ATM. For clustering, the succession of seven continuous daily withdrawal seasonality parameters of the ATMs is discretized. Next, the similarity between the different ATMs’ discretized daily withdrawal seasonality sequences is measured by the Sequence Alignment Method (SAM). For each cluster of ATMs, four neural networks, viz., general regression neural network (GRNN), multi layer feed forward neural network (MLFF), group method of data handling (GMDH) and wavelet neural network (WNN), are built to predict an ATM center’s cash demand. The proposed methodology is applied on the NN5 competition dataset. We observed that GRNN yielded the best result of 18.44% symmetric mean absolute percentage error (SMAPE), which is better than the result of Andrawis, Atiya, and El-Shishiny (2011). This is due to clustering followed by a forecasting phase. Further, the proposed approach yielded much smaller SMAPE values than the approach of direct prediction on the entire sample without clustering. From a managerial perspective, the clusterwise cash demand forecast helps the bank’s top management to design similar cash replenishment plans for all the ATMs in the same cluster. Such cluster-level replenishment plans could yield substantial operational cost savings for ATMs operating in the same geographical region. | Cash demand forecasting in ATMs by clustering and neural networks
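SMAPE, the accuracy measure reported above, is typically defined as follows (several variants exist; this is one common form, with names of our choosing):

```python
def smape(actual, forecast):
    # Mean of |F - A| / ((|A| + |F|) / 2), reported in percent; pairs where
    # both values are zero are skipped to avoid division by zero.
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2.0)
             for a, f in zip(actual, forecast) if abs(a) + abs(f) > 0]
    return 100.0 * sum(terms) / len(terms)

print(smape([100.0, 80.0], [90.0, 88.0]))  # ~10.03
```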
S0377221713006097 | This paper presents a novel four-stage algorithm for the measurement of the rank correlation coefficients between pairwise financial time series. In first stage returns of financial time series are fitted as skewed-t distributions by the generalized autoregressive conditional heteroscedasticity model. In the second stage, the joint probability density function (PDF) of the fitted skewed-t distributions is computed using the symmetrized Joe–Clayton copula. The joint PDF is then utilized as the scoring scheme for pairwise sequence alignment in the third stage. After solving the optimal sequence alignment problem using the dynamic programming method, we obtain the aligned pairs of the series. Finally, we compute the rank correlation coefficients of the aligned pairs in the fourth stage. To the best of our knowledge, the proposed algorithm is the first to use a sequence alignment technique to pair numerical financial time series directly, without initially transforming numerical values into symbols. Using practical financial data, the experiments illustrate the method and demonstrate the advantages of the proposed algorithm. | Measuring rank correlation coefficients between financial time series: A GARCH-copula based sequence alignment algorithm |
S0377221713006103 | In this paper we study three spanning tree problems with degree-dependent objective functions. The main application of these problems is in the field of optical network design. In particular, we propose the classical Minimum Leaves Spanning Tree problem as a relevant problem in this field and show its relations with the Minimum Branch Vertices and the Minimum Degree Sum problems. We present a unified memetic algorithm for the three problems and show its effectiveness on a wide range of test instances. | Relations, models and a memetic approach for three degree-dependent spanning tree problems