FileName | Abstract | Title |
---|---|---|
S0377221715003768 | The paper proposes a model for stochastic multi-mode resource-constrained project scheduling under risk aversion with the two objectives of makespan and cost. Activity durations and costs are assumed to be uncertain and modeled as random variables. For the scheduling part of the decision problem, the class of early-start policies is considered. In addition to the schedule, the assignment of execution modes to activities has to be selected. To take risk aversion into account, the approach of optimization under multivariate stochastic dominance constraints, recently developed in other fields, is adopted. For the resulting bi-objective stochastic integer programming problem, the Pareto frontier is determined by means of an exact solution method, incorporating a branch-and-bound technique based on the forbidden-set branching scheme from stochastic project scheduling. Randomly generated test instances, partially derived from a test case from the PSPLIB, are used to show the computational feasibility of the approach. | Bi-Objective Multi-Mode Project Scheduling Under Risk Aversion |
S0377221715003781 | We address the problem of scheduling jobs in a permutation flowshop when their processing times follow a given distribution (stochastic flowshop scheduling problem), with the objective of minimizing the expected makespan. For this problem, optimal solutions exist only for very specific cases. Consequently, some heuristics have been proposed in the literature, all of them with similar performance. In our paper, we first focus on the critical issue of estimating the expected makespan of a sequence and find that, for instances with medium/large variability (expressed as the coefficient of variation of the processing times of the jobs), the number of samples or simulation runs usually employed in the literature may not be sufficient to derive robust conclusions with respect to the performance of the different heuristics. We thus propose a procedure with a variable number of iterations that ensures that the percentage error in the estimation of the expected makespan is bounded with a very high probability. Using this procedure, we test the main heuristics proposed in the literature and find significant differences in their performance, in contrast with existing studies. We also find that the deterministic counterpart of the most efficient heuristic for the stochastic problem performs extremely well for most settings, which indicates that, in some cases, solving the deterministic version of the problem may produce competitive solutions for the stochastic counterpart. | On heuristic solutions for the stochastic flowshop scheduling problem |
S0377221715003793 | We introduce the integrative cooperative search method (ICS), a multi-thread cooperative search method for multi-attribute combinatorial optimization problems. ICS musters the combined capabilities of a number of independent exact or meta-heuristic solution methods. A number of these methods work on sub-problems defined by suitably selected subsets of decision-set attributes of the problem, while others combine the resulting partial solutions into complete ones and, eventually, improve them. All these methods cooperate through an adaptive search-guidance mechanism, using the central-memory cooperative search paradigm. Extensive numerical experiments explore the behavior of ICS and demonstrate its merit through an application to the multi-depot, periodic vehicle routing problem, for which ICS improves the results of the current state-of-the-art methods. | An integrative cooperative search framework for multi-decision-attribute combinatorial optimization: Application to the MDPVRP |
S0377221715003811 | This paper presents a novel allocation scheme to improve profits when splitting a scarce product among customer segments. These segments differ by demand and margin and they form a multi-level tree, e.g. according to a geography-based organizational structure. In practice, allocation has to follow an iterative process in which higher level quotas are disaggregated one level at a time, only based on local, aggregate information. We apply well-known econometric concepts such as the Lorenz curve and Theil’s index of inequality to find a non-linear approximation of the profit function in the customer tree. Our resulting Approximate Profit Decentral Allocation (ADA) scheme ensures that a group of truthfully reporting decentral planners makes quasi-coordinated decisions in support of overall profit-maximization in the hierarchy. The new scheme outperforms existing simple rules by a large margin and comes close to the first-best theoretical solution under a central planner and central information. | Decentral allocation planning in multi-stage customer hierarchies |
S0377221715003823 | Simulation optimization has gained popularity over the decades because of its ability to solve many practical problems that involve profound randomness. The methodology development of simulation optimization, however, is largely concerned with problems whose objective function is a mean-based performance metric. In this paper, we propose a direct search method to solve unconstrained simulation optimization problems with quantile-based objective functions. Because the proposed method does not require gradient estimation in the search process, it can be applied to solve many practical problems where the gradient of the objective function does not exist or is difficult to estimate. We prove that the proposed method possesses a desirable convergence guarantee, i.e., the algorithm converges to the true global optima with probability one. An extensive numerical study shows that the performance of the proposed method is promising. Two illustrative examples are provided at the end to demonstrate the viability of the proposed method in real settings. | A direct search method for unconstrained quantile-based simulation optimization |
S0377221715003835 | The purpose of this paper is to highlight a serious omission in the recent work of Li (2012) for solving two person zero-sum matrix games with pay-offs of triangular fuzzy numbers (TFNs) and to propose a new methodology for solving such games. Li (2012) proposed a method which always assures that the max player gain-floor and min player loss-ceiling have a common TFN value. The present paper exhibits a flaw in this claim of Li (2012). The flaw arises on account of Li (2012) not explaining the meaning of a solution of the game under consideration. The present paper attempts to provide certain appropriate modifications to Li’s model to take care of this serious omission. These modifications, in conjunction with the results of Clemente, Fernandez, and Puerto (2011), lead to an algorithm to solve matrix games with pay-offs of general piecewise linear fuzzy numbers. | On solving matrix games with pay-offs of triangular fuzzy numbers: Certain observations and generalizations |
S0377221715003847 | Empirically, cointegration and stochastic covariances, including stochastic volatilities, are statistically significant for commodity prices and energy products. To capture such market phenomena, we develop continuous-time dynamics of cointegrated assets with a stochastic covariance matrix and derive the joint characteristic function of asset returns in closed form. The proposed model offers an endogenous explanation for the stochastic mean-reverting convenience yield. The time series of spot and futures prices of WTI crude oil and gasoline shows a cointegration relationship under both physical and risk-neutral measures. The proposed model also allows us to fit the observed term structure of futures prices and calibrate the market-implied cointegration relationship. We apply it to value options on a single commodity and on multiple commodities. | Commodity derivatives pricing with cointegration and stochastic covariances |
S0377221715003859 | This study seeks to analyse the price determination of low cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes obtained reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily check the behaviour of users to adapt their pricing strategies using internal information. More than 2500 flights of the largest European low cost airlines have been used to carry out the study. The study revealed that the most significant variables for understanding pricing strategies were the number of rivals, the behaviour of the demand and the associated costs. The results indicated that consumers should buy their tickets at least 25 days prior to departure. | The impact of the internet on the pricing strategies of the European low cost airlines |
S0377221715003860 | This paper presents a systemic methodology for identifying and analysing the stakeholders of an organisation at many different levels. The methodology is based on soft systems methodology and is applicable to all types of organisation, both for profit and non-profit. The methodology begins with the top-level objectives of the organisation, developed through debate and discussion, and breaks these down into the key activities needed to achieve them. A range of stakeholders are identified for each key activity. At the end, the functions and relationships of all the stakeholder groups can clearly be seen. The methodology is illustrated with an actual case study at Hunan University. | A systemic method for organisational stakeholder identification and analysis using Soft Systems Methodology (SSM) |
S0377221715003872 | This paper presents a novel nonparametric methodology to evaluate convergence in an industry, considering a multi-input multi-output setting for the assessment of total factor productivity. In particular, we develop two new indexes to evaluate σ-convergence and β-convergence that can be computed using nonparametric techniques such as Data Envelopment Analysis. The methodology developed is particularly useful to enhance productivity assessments based on the Malmquist index. The methodology is applied to a real world context, consisting of a sample of Portuguese construction companies that operated in the sector between 2008 and 2010. The empirical results show that Portuguese companies tended to converge, both in the sense of σ and β, in all construction activity segments in the aftermath of the financial crisis. | A nonparametric methodology for evaluating convergence in a multi-input multi-output setting |
S0377221715003884 | Consider n mobile application (app) developers selling their software through a common platform provider (retailer), who offers a consignment contract with revenue sharing. Each app developer simultaneously determines the selling price of his app and the extent to which he invests in its quality. The demand for the app, which depends on both price and quality investment, is uncertain, so the risk attitudes of the supply chain members have to be considered. The members' equilibrium strategies are analyzed under different attitudes toward risk: risk-aversion, risk-neutrality and risk-seeking. We show that the retailer's utility function has no effect on the equilibrium strategies, and suggest schemes to identify these strategies for any utility function of the developers. Closed-form solutions are obtained under the exponential utility function. | Consignment contract for mobile apps between a single retailer and competitive developers with different risk attitudes |
S0377221715003896 | The three main measures of competition (HHI, Lerner index, and H-statistic) are uncorrelated for U.S. banks. We investigate why this occurs, propose a frontier measure of competition, and apply it to five major bank service lines. Fee-based banking services comprise 35 percent of bank revenues, so assessing competition by service line is preferable to using a single measure for traditional activities extended to the entire bank. As the Lerner index and the H-statistic together explain only 1 percent of HHI variation and the HHI is similarly unrelated to the frontier method developed here, current merger/acquisition guidelines should be adjusted, as banking concentration seems unrelated to competition measures that are likely more accurate. | A frontier measure of U.S. banking competition |
S0377221715003902 | The purpose of this research is to help manufacturing companies identify the key performance evaluation criteria for achieving customer satisfaction through Balanced Scorecard (BSC) and Multiple Criteria Decision Making (MCDM) approaches. To explore the causal relationships among the four dimensions of business performance in the Balanced Scorecard as well as their key performance criteria, an MCDM approach combining DEMATEL and ANP techniques is adopted. Subsequently, the MCDM framework is tested using the Delphi method, and a questionnaire survey is conducted in 24 manufacturing firms from Taiwan, Vietnam and Thailand. The research findings indicate that manufacturing companies should focus more on improving customer perspectives such as customer satisfaction and customer loyalty by integrating product and service innovation, providing diversified value-added product–service offerings, and developing close long-term partnerships with customers. In addition, the classification of importance and improvability into four strategic planning zones provides practicing managers with a decision-making tool for prioritizing continuous improvement projects and effectively allocating their resources to the key criteria identified in the strategic map for business performance improvement. By identifying critical criteria and their interrelationships, our research results can help manufacturing companies enhance their business performance in both financial and non-financial perspectives. They can also serve as valuable guidelines and references for manufacturing companies to achieve better customer satisfaction through sustainable product–service system practices. | Achieving customer satisfaction through product–service systems |
S0377221715004099 | Recently, there has been increasing concern about the carbon efficiency of the manufacturing industry. Since carbon emissions in the manufacturing sector are directly related to energy consumption, an effective way to improve carbon efficiency in an industrial plant is to design scheduling strategies aimed at reducing the energy cost of production processes. In this paper, we consider a permutation flow shop (PFS) scheduling problem with the objectives of minimizing the total carbon emissions and the makespan. To solve this multi-objective optimization problem, we first investigate the structural properties of non-dominated solutions. Inspired by these properties, we develop an extended NEH-Insertion Procedure with an energy-saving capability. The accelerating technique in Taillard’s method, which is commonly used for the ordinary flowshop problem, is incorporated into the procedure to improve the computational efficiency. Based on the extended NEH-Insertion Procedure, a multi-objective NEH algorithm (MONEH) and a modified multi-objective iterated greedy (MMOIG) algorithm are designed for solving the problem. Numerical computations show that the energy-saving module of the extended NEH-Insertion Procedure in MONEH and MMOIG significantly helps to improve the discovered front. In addition, systematic comparisons show that the proposed algorithms perform more effectively than other tested high-performing meta-heuristics in searching for non-dominated solutions. | Carbon-efficient scheduling of flow shops by multi-objective optimization |
S0377221715004105 | In this paper, a capacitated vehicle routing problem is discussed which occurs in the context of glass waste collection. Supplies of several different product types (glass of different colors) are available at customer locations. The supplies have to be picked up at their locations and moved to a central depot at minimum cost. Different product types may be transported on the same vehicle, however, while being transported they must not be mixed. Technically this is enabled by a specific device, which allows for separating the capacity of each vehicle individually into a limited number of compartments where each compartment can accommodate one or several supplies of the same product type. For this problem, a model formulation and a variable neighborhood search algorithm for its solution are presented. The performance of the proposed heuristic is evaluated by means of extensive numerical experiments. Furthermore, the economic benefits of introducing compartments on the vehicles are investigated. | The multi-compartment vehicle routing problem with flexible compartment sizes |
S0377221715004117 | The deviation flow refueling location problem is to locate p refueling stations in order to maximize the flow volume that can be refueled respecting the range limitations of the alternative fuel vehicles and the shortest path deviation tolerances of the drivers. We first provide an enhanced compact model based on a combination of existing models in the literature for this relatively new operations research problem. We then extend this problem and introduce the refueling station location problem which adds the routing aspect of the individual drivers. Our proposed branch and price algorithm relaxes the simple path assumption generally adopted in the existing studies and implicitly takes into account deviation tolerances without the pregeneration of the routes. Therefore, the decrease in solution times with respect to existing models is significant and our algorithm scales very efficiently to more realistic network dimensions. | A branch and price approach for routing and refueling station location model |
S0377221715004129 | We introduce a new stochastic model for inflow time series that is designed with the requirements of hydropower scheduling problems in mind. The model is an “iterated function system”: it models inflow as continuous, but the random innovation at each time step has a discrete distribution. With this inflow model, hydro-scheduling problems can be solved by the stochastic dual dynamic programming (SDDP) algorithm exactly as posed, without the additional sampling error introduced by sample average approximations. The model is fitted to univariate inflow time series by quantile regression. We consider various goodness-of-fit metrics for the new model and some alternatives to it, including performance in an actual hydro-scheduling problem. The numerical data used are for inflows to New Zealand hydropower reservoirs. | Stochastic inflow modeling for hydropower scheduling problems |
S0377221715004130 | The Police Districting Problem (PDP) concerns the efficient and effective design of patrol sectors in terms of performance attributes such as workload, response time, etc. A balanced definition of the patrol sectors is desirable as it results in crime reduction and better service. In this paper, a multi-criteria Police Districting Problem defined in collaboration with the Spanish National Police Corps is presented. This is the first model for the PDP that considers the attributes of area, risk, compactness, and mutual support. The decision-maker can specify his/her preferences on the attributes, workload balance, and efficiency. The model is solved by means of a heuristic algorithm that is empirically tested on a case study of the Central District of Madrid. The solutions identified by the model are compared to patrol sector configurations currently in use, and their quality is evaluated by public safety service coordinators. The model and the algorithm produce designs that significantly improve on the current ones. | A multi-criteria Police Districting Problem for the efficient and effective design of patrol sector |
S0377221715004142 | This work develops a stochastic model of a two-echelon supply chain of virtual products in which the decision makers—a manufacturer and a retailer—may be risk-sensitive. Virtual products allow the retailer to avoid holding costs and ensure timely fulfillment of demand with no risk of shortage. We expand on the work of Chernonog and Avinadav (2014), who investigated the pricing of virtual products under uncertain and price-dependent demand, by including sales-effort as a decision variable that affects demand. Whereas in the previous work equilibrium was obtained exactly as in a deterministic case for any utility function, herein it is not. Consequently, we focus on the strategies of both the manufacturer and the retailer under different profit criteria, including the use of bi-criteria. By formulating the problem as a Stackelberg game, we show that the problem can be analytically solved by assuming certain common structures of the demand function and of the preferences of both the manufacturer and the retailer with regard to risk. We extend the solution to the case of imperfect information regarding the preferences and offer guidelines for the formation of efficient sets of decisions under bi-criteria. Finally, we provide numerical results. | Pricing and sales-effort investment under bi-criteria in a supply chain of virtual products involving risk |
S0377221715004166 | This paper analyzes scheduling in a data gathering network with data compression. The nodes of the network collect some data and pass them to a single base station. Each node can, at some cost, preprocess the data before sending it, in order to decrease its size. Our goal is to transfer all data to the base station in given time, at the minimum possible cost. We prove that the decision version of this scheduling problem is NP-complete. Polynomial-time heuristic algorithms for solving the problem are proposed and tested in a series of computational experiments. | Scheduling for data gathering networks with data compression |
S0377221715004178 | A new group decision-making approach is developed to address a multiple criteria sorting problem with uncertainty. The uncertainty in this paper refers to imprecise evaluations of alternatives with respect to the considered criteria. The belief structure and the evidential reasoning approach are employed to represent and aggregate the uncertain evaluations. In our approach, the preference information elicited from a group of decision makers is composed of the assignment examples of some reference alternatives. The disaggregation–aggregation paradigm is utilized to infer compatible preference models from these assignment examples. To help the group reach an agreement on the assignment of alternatives, we propose a consensus-reaching process. In this process, a consensus degree is defined to measure the agreement among the decision makers’ opinions. When the decision makers are not satisfied with the consensus degree, possible solutions are explored to help them adjust assignment examples in order to improve the consensus level. If the consensus degree arrives at a satisfactory level, a linear program is built to determine the collective assignment of alternatives. The application of the proposed approach to a customer satisfaction analysis is presented at the end of the paper. | A group decision-making approach based on evidential reasoning for multiple criteria sorting problem with uncertainty |
S0377221715004191 | This work considers a continuous inventory replenishment system where demand is stochastic and dependent on the state of the environment. A Markov Modulated Poisson Process (MMPP) is utilized to model the demand process, where the corresponding embedded Markov Chain represents the state of the environment. The equations to calculate the system inventory measures and the number of orders per unit time are obtained for a continuous, infinite-horizon and dynamically changing (s, S) policy. An efficient optimization heuristic is presented and compared to the commonly used approach of approximating the demand-count process over the lead time with a Normal distribution. We further investigate the MMPP demand process, quantifying the impact of variability in the demand-count process that is due to auto-correlation. Our findings indicate that when demand correlation is high, a dynamic control, where the (s, S) policy changes with the state of the environment governing the MMPP, is highly superior to the commonly used “static” heuristics. We propose two dynamic policies of varying computational complexity and cost efficiency, depending on the class of the product (one for class A, and one for classes B and C), to handle such high-correlation situations. | Continuous (s, S) policy with MMPP correlated demand |
S0377221715004208 | Many years have passed since Baesens et al. published their benchmarking study of classification algorithms in credit scoring [Baesens, B., Van Gestel, T., Viaene, S., Stepanova, M., Suykens, J., & Vanthienen, J. (2003). Benchmarking state-of-the-art classification algorithms for credit scoring. Journal of the Operational Research Society, 54(6), 627–635.]. The interest in prediction methods for scorecard development is unbroken. However, there have been several advancements, including novel learning methods, performance measures and techniques to reliably compare different classifiers, which the credit scoring literature does not reflect. To close these research gaps, we update the study of Baesens et al. and compare several novel classification algorithms to the state-of-the-art in credit scoring. In addition, we examine the extent to which the assessment of alternative scorecards differs across established and novel indicators of predictive accuracy. Finally, we explore whether more accurate classifiers are managerially meaningful. Our study provides valuable insight for professionals and academics in credit scoring. It helps practitioners to stay abreast of technical advancements in predictive modeling. From an academic point of view, the study provides an independent assessment of recent scoring methods and offers a new baseline to which future approaches can be compared. | Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research |
S0377221715004221 | Finance is a popular field for applied and methodological research involving multiple criteria decision aiding (MCDA) techniques. In this study we present an up-to-date bibliographic survey of the contributions of MCDA in financial decision making, focusing on the developments during the past decade. The survey covers all main areas of financial modeling as well as the different methodological approaches in MCDA and its connections with other analytical fields. On the basis of the survey results, we discuss the contributions of MCDA in different areas of financial decision making and identify established and emerging research topics, as well as future opportunities and challenges. | Multiple criteria decision aiding for finance: An updated bibliographic survey |
S0377221715004233 | This paper considers single-component repairable systems supporting different levels of workloads and subject to random repair times. The mission is successful if the system can perform a specified amount of work within the maximum allowed mission time. The system can work with different load levels, each corresponding to a different productivity, time-to-failure distribution, and per-time-unit operation cost. A numerical algorithm is first suggested to evaluate the mission success probability and the conditional expected cost of a successful mission for the considered repairable system. The load optimization problem is then formulated and solved for finding the system load level that minimizes the expected mission cost subject to providing a desired level of the mission success probability. Examples with discrete and continuous load variation are provided to illustrate the proposed methodology. Effects of repair efficiency, repair time distribution, and maximum allowed time on the mission reliability and cost are also investigated through the examples. | Optimal loading of system with random repair time |
S0377221715004245 | This paper examines the conditions for a widespread adoption of Carbon Capture, transport and Storage (CCS) by a group of emitters that can be connected to a common CO2 pipeline. It details a modeling framework aimed at assessing the critical value of the charge for CO2 emissions required for each of the emitters to decide to implement capture capabilities. This model can be used to analyze how the tariff structure imposed on the CO2 pipeline operator modifies the overall cost of CO2 abatement via CCS. This framework is applied to the case of a real European CO2 pipeline project. We find that the obligation to use cross-subsidy-free pipeline tariffs has a minor impact on the minimum CO2 price required to adopt CCS. In contrast, the obligation to charge non-discriminatory prices can either impede the adoption of CCS or significantly raise that price. In addition, we compare two alternative regulatory frameworks for CO2 pipelines: a common European organization as opposed to a collection of national regulations. The results indicate that the institutional scope of that regulation has a limited impact on the adoption of CCS compared to the detailed design of the tariff structure imposed on pipeline operators. | Joining the CCS club! The economics of CO2 pipeline projects |
S0377221715004257 | We consider the coordination challenge of a risk-neutral manufacturer that supplies multiple heterogeneous retailers. We find that the manufacturer can maximize its expected profit only if the expected profit of the supply chain is maximized, or equivalently, supply chain coordination (SCC) is achieved. The target sales rebate (TSR) contract is commonly used in practice to achieve SCC. However, as we show in this paper, the presence of heterogeneity in retailers’ minimum expected profit requirements is the major reason why a single TSR contract and the related single hybrid contracts all fail to achieve SCC and maximize the manufacturer’s expected profit simultaneously. Thus, we develop an innovative menu of TSR contracts with fixed order quantity (TSR-FOQ). Although there are multiple contracts in a menu, we find that the manufacturer only needs to decide one basic TSR contract and two newly developed parameters, termed the risk-level indicator and the separation indicator, in applying the menu of TSR-FOQ contracts. By adjusting the two indicators, the manufacturer can control the profit variance of the retailers and the separations of the component contracts of the menu. We further propose another menu of TSR contracts with minimum order quantity and quantity discounts, which gives each retailer a higher degree of freedom in the selection of order quantity. Differences between the two menus are analytically examined, and some meaningful managerial insights are generated. | Innovative menu of contracts for coordinating a supply chain with multiple mean-variance retailers |
S0377221715004269 | We investigate the impact of advance notice of product returns on the performance of a decentralised closed loop supply chain. The market demands and the product returns are stochastic and are correlated with each other. The returned products are converted into “as-good-as-new” products and used, together with new products, to satisfy the market demand. The remanufacturing process takes time and is subject to a random yield. We investigate the benefit of the manufacturer obtaining advance notice of product returns from the remanufacturer. We demonstrate that lead times, random yields and the parameters describing the returns play a significant role in the benefit of the advance notice scheme. Our mathematical results offer insights into the benefits of lead time reduction and the adoption of information sharing schemes. | The impact of information sharing, random yield, correlation, and lead times in closed loop supply chains |
S0377221715004270 | This paper introduces and studies the minimum edge blocker dominating set problem (EBDP), which is formulated as follows. Given a vertex-weighted undirected graph and r > 0, remove a minimum number of edges so that the weight of any dominating set in the remaining graph is at least r. Dominating sets are used in a wide variety of graph-based applications such as the analysis of wireless and social networks. We show that the decision version of EBDP is NP-hard for any fixed r > 0. We present an analytical lower bound for the value of an optimal solution to EBDP and formulate this problem as a linear 0–1 program with a large number of constraints. We also study the convex hull of feasible solutions to EBDP and identify facet-inducing inequalities for this polytope. Furthermore, we develop the first exact algorithm for solving EBDP, which solves the proposed formulation by a branch-and-cut approach where nontrivial constraints are applied in a lazy fashion. Finally, we also provide the computational results obtained by using our approach on a test-bed of randomly generated instances and real-life power-law graphs. | Minimum edge blocker dominating set problem |
S0377221715004282 | The Set Covering Problem (SCP) is NP-hard. We propose a new Row Weighting Local Search (RWLS) algorithm for solving the unicost variant of the SCP (USCP), in which the costs of all sets are identical. RWLS is a heuristic algorithm that has three major components united in its local search framework: (1) a weighting scheme, which updates the weights of uncovered elements to prevent convergence to local optima, (2) tabu strategies to avoid possible cycles during the search, and (3) a timestamp method to break ties when prioritizing sets. RWLS has been evaluated on a large number of problem instances from the OR-Library and compared with other approaches. It is able to find all the best known solutions (BKS) and improve 14 of them, although it requires a higher computational effort on several instances. RWLS is especially effective on the combinatorial OR-Library instances and improves the best known solution to the hardest instance, CYC11, considerably. RWLS is conceptually simple and has no instance-dependent parameters, which makes it a practical and easy-to-use USCP solver. | An efficient local search heuristic with row weighting for the unicost set covering problem |
S0377221715004294 | Decision making on the selection of transportation infrastructure projects is an interesting subject for both transportation authorities and researchers. Due to resource limitations, the selected projects must then be scheduled over the planning horizon. Integrating project selection and scheduling into a single model increases the accuracy of results; however, it also increases complexity. In this paper, first, three different mathematical programming models are presented to integrate the selection and scheduling of urban road construction projects as a time-dependent discrete network design problem. Then, the model that seems most flexible and realistic is selected, and an evolutionary approach is proposed to solve it. The proposed approach is a combination of three well-known techniques: phase I of the two-phase simplex method, the Frank-Wolfe algorithm, and a genetic algorithm. The Taguchi method is used to optimize the genetic algorithm parameters. The main difficulty in solving the model is the large number of network traffic assignment problems that must be solved, which makes the solution process very time-consuming. Therefore, a procedure is proposed to overcome this difficulty by significantly reducing the traffic assignment solution time. To verify the performance of the proposed approach, 27 randomly generated test problems of different scales are applied to the Sioux Falls urban transportation network. The proposed approach and the full enumeration method are used to solve the problems. Numerical results show that the proposed approach has an acceptable performance in terms of both solution quality and solution time. | Integration of selecting and scheduling urban road construction projects as a time-dependent discrete network design problem |
S0377221715004300 | An effective emergency medical service (EMS) is a critical part of any health care system. This paper presents the optimization of EMS vehicle fleet allocation and base station location through the use of a genetic algorithm (GA) with an integrated EMS simulation model. Two tiers of the EMS model reflected the different demands on the two vehicle classes: ambulances and rapid response cars. Multiple patient classes were modelled, and survival functions were used to differentiate the required levels of service. The objective was the maximization of the overall expected survival probability across patient classes. Applications of the model were undertaken using real call data from the London Ambulance Service. The simulation model was shown to effectively emulate real-life performance. Optimization of the existing resource plan resulted in significant improvements in survival probability. Optimizing a selection of 1 hour periods in the plan, without introducing additional resources, resulted in a notable increase in the number of cardiac arrest patients surviving per year. The introduction of an additional base station further improved survival when its location and resourcing were optimized for key periods of service. Also, the removal of a base station from the system was found to have minimal impact on survival probability when the selected station and resourcing were optimized simultaneously. | A simulation model to enable the optimization of ambulance fleet allocation and base station location for increased patient survival |
S0377221715004312 | In this paper we propose an inexact proximal point method to solve constrained minimization problems with locally Lipschitz quasiconvex objective functions. Assuming that the function is also bounded from below, lower semicontinuous and using proximal distances, we show that the sequence generated for the method converges to a stationary point of the problem. | An inexact proximal method for quasiconvex minimization |
S0377221715004324 | The reliability of an expert is an important concept in multiple attribute group decision analysis (MAGDA). However, reliability is rarely considered in MAGDA, or it may simply be assumed that all experts are fully reliable and thus their reliabilities do not need to be considered explicitly. In fact, experts can only be boundedly rational, and their varying degrees of reliability may significantly influence MAGDA results. In this paper, we propose a new method based on the evidential reasoning rule to explicitly measure the reliability of each expert in a group and use expert weights and reliabilities to combine expert assessments. Two sets of assessments, i.e., original assessments and updated assessments provided after group analysis and discussion, are taken into account to measure expert reliabilities. When the assessments of some experts are incomplete and global ignorance is incurred, pairs of optimization problems are constructed to decide interval-valued expert reliabilities. The resulting expert reliabilities are applied to combine the expert assessments of alternatives on each attribute and then to generate the aggregated assessments of alternatives. An industry evaluation problem in Wuhu, a city in Anhui Province of China, is analyzed using the proposed method as a real case study to demonstrate its detailed implementation process, validity, and applicability. | A group evidential reasoning approach based on expert reliability |
S0377221715004336 | Global population ageing is creating immense pressures on hospitals and other healthcare services, compromising their abilities to meet the growing demand from elderly patients. Current demand–supply gaps result in prolonged waiting times in emergency departments (EDs), and several studies have focused on improving ED performance. However, the overcrowding in EDs generally stems from delayed patient flows to inpatient wards – which are congested with inpatients waiting for beds in post-acute facilities. This problem of bed blocking in acute hospitals causes substantial cost burdens on hospitals. This study presents a system dynamics methodology to model the dynamic flow of elderly patients in the Irish healthcare system aimed at gaining a better understanding of the dynamic complexity caused by the system's various parameters. The model evaluates the stock and flow interventions that Irish healthcare executives have proposed to address the problem of delayed discharges, and ultimately reduce costs. The anticipated growth in the nation's demography is also incorporated in the model. Policy makers can also use the model to identify the potential strategic risks that might arise from the unintended consequences of new policies designed to overcome the problem of the delayed discharge of elderly patients. | A system dynamics view of the acute bed blockage problem in the Irish healthcare system |
S0377221715004348 | This paper considers a tollbooth queueing system where customers arrive according to a Poisson process and there are two heterogeneous servers with exponential service times. We show that eigenvalues can be found explicitly for the characteristic matrix polynomial associated with the Markov chain characterizing the system. We derive a closed-form solution for the steady-state probabilities, enabling straightforward computation of performance measures. | A closed-form solution for a tollbooth tandem queue with two heterogeneous servers and exponential service times |
S0377221715004361 | This paper investigates the effect of information updating on the members of a two-stage supply chain in the presence of spot market. The supplier decides the contract price. New information becomes available as time progresses. The manufacturer updates his belief on demand and/or spot price and subsequently decides the contract quantity. The demand and spot price are correlated. Thus, the new demand information also updates the belief on the spot price, and vice versa. We model the problem with an information updating Stackelberg game and derive unique equilibrium strategies. Previous studies have considered only the demand information and concluded that improved demand information always benefits the supplier. By contrast, we demonstrate that improved demand information benefits both the supplier and manufacturer if the correlation coefficient between the two uncertainties has a small positive value and benefits the manufacturer but hurts the supplier otherwise. Moreover, superior spot price information benefits only the manufacturer and always hurts the supplier. Surprisingly, superior information fails to improve the performance of the supply chain and only changes the allocation of the profits between the supplier and manufacturer. Our findings likewise provide insights into when the supplier intends to use the contract channel and which type of information updating facility or expertise to invest in if a choice must be made. | Demand information and spot price information: Supply chains trading in spot markets |
S0377221715004373 | This paper develops a slack-based decomposition of profit efficiency based on a directional distance function. It is an alternative to Cooper, Pastor, Aparicio, and Borras (2011). | Decomposing profit efficiency using a slack-based directional distance function |
S0377221715004385 | We discuss the incorporation of risk measures into multistage stochastic programs. While much attention has been recently devoted in the literature to this type of model, it appears that there is no consensus on the best way to accomplish that goal. In this paper, we discuss pros and cons of some of the existing approaches. A key notion that must be considered in the analysis is that of consistency, which roughly speaking means that decisions made today should agree with the planning made yesterday for the scenario that actually occurred. Several definitions of consistency have been proposed in the literature, with various levels of rigor; we provide our own definition and give conditions for a multi-period risk measure to be consistent. A popular way to ensure consistency is to nest the one-step risk measures calculated in each stage, but such an approach has drawbacks from the algorithmic viewpoint. We discuss a class of risk measures—which we call expected conditional risk measures—that address those shortcomings. We illustrate the ideas set forth in the paper with numerical results for a pension fund problem in which a company acts as the sponsor of the fund and the participants’ plan is defined-benefit. | Risk aversion in multistage stochastic programming: A modeling and algorithmic perspective |
S0377221715004397 | Social media usage in evacuations and emergency management represents a rapidly expanding field of study. Our paper thus provides quantitative insight into a serious practical problem. Within this context a behavioural approach is key. We discuss when facilitators should consider model-based interventions amid further implications for disaster communication and emergency management. We model the behaviour of individual people by deriving optimal contrarian strategies. We formulate a Bayesian algorithm which enables the optimal evacuation to be conducted sequentially under worsening conditions. | Elementary modelling and behavioural analysis for emergency evacuations using social media |
S0377221715004610 | Recent years have witnessed increased attention on peer-to-peer (P2P) lending, which provides an alternative way of financing without the involvement of traditional financial institutions. A key challenge for personal investors in P2P lending marketplaces is the effective allocation of their money across different loans by accurately assessing the credit risk of each loan. Traditional rating-based assessment models cannot meet the needs of individual investors in P2P lending, since they do not provide an explicit mechanism for asset allocation. In this study, we propose a data-driven investment decision-making framework for this emerging market. We designed an instance-based credit risk assessment model, which has the ability of evaluating the return and risk of each individual loan. Moreover, we formulated the investment decision in P2P lending as a portfolio optimization problem with boundary constraints. To validate the proposed model, we performed extensive experiments on real-world datasets from two notable P2P lending marketplaces. Experimental results revealed that the proposed model can effectively improve investment performances compared with existing methods in P2P lending. | Instance-based credit risk assessment for investment decisions in P2P lending |
S0377221715004622 | This paper investigates the anchor points in nonconvex Data Envelopment Analysis (DEA) technologies, known as Free Disposal Hull (FDH) technologies. We develop the concept of anchor points under various returns to scale assumptions in FDH models. A necessary and sufficient condition for characterizing the anchor points is provided. Since the set of anchor points is a subset of the set of extreme units, a definition of extreme units in non-convex technologies as well as a new method for obtaining these units are given. Finally, a polynomial-time algorithm for the identification of the anchor points in FDH models is provided. Both extreme units and anchor points are obtained by calculating only some ratios, without solving any mathematical programming problem. | Identification of the anchor points in FDH models |
S0377221715004634 | This paper proposes an optimization model for high-speed railway scheduling. The model is composed of two sub-models. The first is a discrete event simulation model which represents the supply of railway services, whereas the second is a constrained logit-type choice model which takes into account the behaviour of users. This discrete choice model evaluates the attributes of railway services such as the timetable, fare, travel time and seat availability (capacity constraints) and computes the high-speed railway demand for each planned train service. A hybridisation of the standard Particle Swarm Optimisation and Nelder–Mead methods has been applied to solve the proposed model, and a real case study of the high-speed corridor Madrid–Seville in Spain has been analysed. Furthermore, parallel computation strategies are used to speed up the proposed approach. | High-speed railway scheduling based on user preferences |
S0377221715004646 | Given the growing importance of service supply chain management (SSCM) in operations, we review a selection of papers in the operations research and the management science (OR/MS) literature that focus on innovative measures associated with the SSCM. First, we review and discuss the definitions of service supply chains (SSCs) and categorize SSCs into the Service Only Supply Chains (SOSCs) and the Product Service Supply Chains (PSSCs). Second, by classifying the literature into three major areas, namely service supply management, service demand management, and the coordination of service supply chains, we derive insights into the current state of knowledge in each area, and examine the evolution of the SSCM research over the past decade. Finally, we identify some associated research challenges and explore future directions for research on SSCM from an operational perspective. | Service supply chain management: A review of operational models |
S0377221715004658 | This paper proposes a nonlinear integer program for determining an optimal plan of zero-defect, single-sampling by attributes for incoming inspections in assembly lines. Individual parts coming to an assembly line differ in non-conforming (NC) risk, NC severity, lot size, and inspection cost-effectiveness. The proposed optimization model is able to determine the inspection sample size for each of the parts in a resource-constrained condition where a product’s NC risk is not a linear combination of the NC risks of the individual parts. This paper develops a three-step solution procedure that effectively reduces the solution time for larger problems commonly seen in assembly lines. The proposed optimization model provides insightful implications for quality management. For example, it reveals the principle of sample size decisions for heterogeneous, dependent parts waiting for incoming inspections, and provides a tool for quantifying the expected return from investing additional inspection resources. The optimization model builds a foundation for extensions to advanced inspection sampling plans. | An optimal plan of zero-defect single-sampling by attributes for incoming inspections in assembly lines |
S0377221715004671 | The transformation of the European electricity system requires substantial investment in transmission capacity to facilitate cross-border trade and to efficiently integrate renewable energy sources. However, network planning in the EU is still mainly a national prerogative. In contrast to other studies aiming to identify the pan-European (continental) welfare-optimal transmission expansion, we investigate the impact of zonal planners deciding on network investment strategically, with the aim of maximizing the sum of consumer surplus, generator profits and congestion rents in their jurisdiction. This reflects the inadequacy of current mechanisms to compensate for welfare re-allocations across national boundaries arising from network upgrades. We propose a three-stage equilibrium model to describe the Nash game between zonal planners (i.e., national governments, regulators, or system operators), each taking into account the impact of network expansion on the electricity spot market and the resulting welfare effects on the constituents within its jurisdiction. We propose a novel way to solve the resulting GNE/EPEC-type problem using a variation of a disjunctive constraints reformulation. This allows us to solve the problem without a priori assumptions on the relative valuation of shared constraints by each player. Using a four-node sample network, we identify several Nash equilibria of the game between the zonal planners. This example illustrates the failure to reach the first-best welfare expansion in the absence of an effective framework for cross-border cost-benefit allocation. | National-strategic investment in European power transmission capacity |
S0377221715004683 | We study a distribution warehouse in which trailers need to be assigned to docks for loading or unloading. A parking lot is used as a buffer zone and transportation between the parking lot and the docks is performed by auxiliary resources called terminal tractors. Each incoming trailer has a known arrival time and each outgoing trailer a desired departure time. The primary objective is to produce a docking schedule such that the weighted sum of the number of late outgoing trailers and the tardiness of these trailers is minimized; the secondary objective is to minimize the weighted completion time of all trailers, both incoming and outgoing. The purpose of this paper is to produce high-quality solutions to large instances that are comparable to a real-life case. This will oblige us to abandon the guarantee of always finding an optimal solution, and we will instead look into a number of sub-optimal procedures. We implement four different methods: a mathematical formulation that can be solved using an IP solver, a branch-and-bound algorithm, a beam search procedure and a tabu search method. Lagrangian relaxation is embedded in the algorithms for computing lower bounds. The different solution frameworks are compared via extensive computational experiments. | Practical solutions for a dock assignment problem with trailer transportation |
S0377221715004695 | Container ships in a string may not have the same capacity. Therefore, the sequence of ships affects the number of containers that are delayed at export ports due to demand uncertainty, for instance, “a large ship, followed by a small ship, then another large ship, and finally another small ship” is better than “a large ship, followed by another large ship, then a small ship, and finally another small ship”. We hence aim to determine the sequence of the ships in a string to minimize the delay of containers, without requiring the probability distribution functions for the future demand. We propose three rules to identify an optimal or near-optimal string. The rules have been proved to be effective based on extensive numerical experiments. A rough estimation indicates that over 6 million dollars/year could be saved for all liner services in the world by optimizing the sequences of ships. | Optimal sequence of container ships in a string |
S0377221715004701 | This paper proposes a self-contained reference for both policy makers and scholars who want to address the problem of efficiency and effectiveness of local public transport (LPT), with special emphasis on urban transit, in a sound empirical way. Framing economic efficiency studies into a transport planning perspective, it offers a critical discussion of the existing empirical studies, relating them to the main methodological approaches used. The connection between such perspectives and Operations Research studies dealing with scheduling and tactical design of public transport services is also developed. The comprehensive classification of selected relevant dimensions of the empirical literature, namely inputs, outputs, kind of data analysed, methods adopted and policy relevant questions addressed, and the systematic investigation of their interrelationships allows us to summarise the existing literature and to propose desirable developments and extensions for future studies in the field. | Efficiency and effectiveness in the urban public transport sector: A critical review with directions for future research |
S0377221715004713 | When cheap fossil energy is polluting and the pollutant is no longer absorbed beyond a certain concentration, there comes a moment when the introduction of a cleaner renewable energy, although onerous, is optimal with respect to inter-temporal utility. The cleaner technology is adopted either instantaneously or gradually at a controlled rate. The optimization problem under viability constraints is 6-dimensional, with a continuous-discrete dynamic controlled by energy consumption and investment in the production of renewable energy. Viable optima are obtained with either gradual or instantaneous adoption. A longer time horizon increases the probability of adopting renewable energy and postpones the time at which this adoption starts. It also increases maximal utility and the probability of crossing the threshold of irreversible pollution. Exploitation of renewable energy starts sooner when adoption is gradual rather than instantaneous. The shorter the period remaining between adoption and the time horizon, the higher the investment in renewable energy. | Optimal transition to renewable energy with threshold of irreversible pollution
S0377221715004725 | This paper proposes to investigate the impact of financialization on energy markets (oil, gas, coal, and electricity European forward prices) during both normal times and periods of extreme fluctuation by using an original behavioral and emotional approach. With this aim, we propose a new theoretical and empirical framework based on a heterogeneous agents model in which fundamentalists and chartists co-exist and are subject to regret and uncertainty. We find significant evidence that energy markets are composed of heterogeneous traders who behave differently depending on the intensity of the price fluctuations and the uncertainty context. In particular, energy prices are governed primarily by fundamental and chartist agents that are neutral to uncertainty during normal times, whereas these prices face irrational chartist investors averse to uncertainty during periods of extreme fluctuations. In this context, the recent surge in energy prices can be viewed as the consequence of irrational exuberance. Our new theoretical model is suitable for modeling energy price dynamics and outperforms both the random walk and the ARMA model in out-of-sample predictive ability. | Heterogeneous beliefs, regret, and uncertainty: The role of speculation in energy price dynamics |
S0377221715004737 | In the context of Justus von Liebig's Law of the Minimum, this study assesses the impacts that trade barriers have on trade resistance between United States (U.S.) manufacturing industries and their trade partners. An undesirable trade resistance model is presented, in which trade barriers are (undesirable) inputs into the production of the (undesirable) output, trade resistance. We then show how Johansen's notion of Capacity is utilized to assess the impacts of trade barriers. Estimation is performed using Data Envelopment Analysis (DEA). Results suggest that U.S. trade partners’ port logistics are the most limiting trade barrier for the U.S. manufacturing industries, followed by the distance between the U.S. and its trade partners, the tariff imposed by the U.S., and the tariff imposed by the trading partner. | Ranking trade resistance variables using data envelopment analysis
S0377221715004750 | A new one-factor model with a random volatility parameter is presented in this paper for the pricing of electricity futures contracts. It is shown that the model is more tractable than multi-factor jump diffusion models and yields an approximate closed-form pricing formula for electricity futures prices. On real market data, the performance of the new model is shown to compare favourably with two existing models from the literature, viz. a two-factor jump diffusion model and its jump-free version, i.e., a two-factor linear Gaussian model, in terms of the ability to predict one-day-ahead futures prices. Further, a multi-stage procedure is suggested and implemented for the calibration of the two-factor jump diffusion model, which alleviates the difficulty in calibration due to a large number of parameters and pricing formulae that involve numerical evaluation of integrals. We demonstrate the utility of our new model, as well as the utility of the calibration procedure for the existing two-factor jump diffusion model, through model calibration and price forecasting experiments on three different futures price data sets from Nord Pool electricity data. For the jump diffusion model, we also investigate empirically whether it performs better in terms of futures price prediction than a corresponding jump-free linear Gaussian model. Finally, we investigate whether an explicit calibration of the jump risk premium in the jump diffusion model adds value to the quality of futures price prediction. Our experiments do not yield any evidence that modelling jumps leads to better price prediction in electricity markets. | Electricity futures price models: Calibration and forecasting
S0377221715004762 | This article presents a mixed integer nonlinear programming model to find the optimal selling price and replenishment policy for a particular type of product in a supply chain defined by a single retailer and multiple potential suppliers. Each supplier offers all-unit quantity discounts as an incentive mechanism. Multiple orders are allowed to be submitted to the selected suppliers during a repeating order cycle. The demand rate is not assumed constant but depends on the selling price. The model provides the optimal number of orders and corresponding order quantities for the selected suppliers, and the optimal demand rate and selling price that maximize the total profit per time unit under suppliers’ capacity and quality constraints. In addition, we provide sufficient conditions under which there exists an optimal solution where the retailer only orders from one supplier. We also apply the Karush–Kuhn–Tucker conditions to investigate the impact of a supplier's capacity on the optimal sourcing strategy. The results show that there may exist a range of capacity values for the dominating supplier where the retailer's optimal sourcing strategy is to consider multiple suppliers without fully utilizing the dominating supplier's capacity. A numerical example is presented to illustrate the proposed model. | Determining the retailer's replenishment policy considering multiple capacitated suppliers and price-sensitive demand
S0377221715004774 | Defective capital assets may be quickly restored to their operational condition by replacing the item that has failed. The item that is replaced is called the Line Replaceable Unit (LRU), and the so-called LRU definition problem is the problem of deciding which item to replace upon each type of failure: when a replacement action is required in the field, service engineers can either replace the failed item itself or replace a parent assembly that holds the failed item. One option may be fast but expensive, while the other may take longer but cost less. We consider a maintenance organization that services a fleet of assets, so that unavailability due to maintenance downtime may be compensated by acquiring additional standby assets. The objective of the LRU-definition problem is to minimize the total cost of item replacement and the investment in additional assets, given a constraint on the availability of the fleet of assets. We link this problem to the literature and present two cases showing how the problem is treated in practice. We then formulate the problem as a mixed integer linear program and use a numerical experiment to illustrate the model and the potential cost reductions its use may lead to. | Defining line replaceable units
S0377221715004786 | In classical branch-and-bound algorithms, the branching disjunction is often based on a single variable, which is a special case of the more general approach that involves multiple variables. In this paper, we present a new approach to generate good general branching disjunctions based on the shape of the polyhedron and interior-point concepts. The approach is based on approximating the feasible polyhedron using Dikin’s inscribed ellipsoid, calculated from the analytic center used in interior-point methods. We use the fact that the width of the ellipsoid in a given direction has a closed-form expression to formulate a quadratic problem whose optimal solution is a thin direction of the ellipsoid. Since solving a quadratic problem at each node of the branch-and-bound tree is impractical, we use an efficient neighborhood search heuristic for its solution. We report computational results on hard mixed integer problems from the literature showing that the proposed approach leads to smaller branch-and-bound trees and a reduction in the computational time as compared with classical branching and strong branching. As the computation of the analytic center is a bottleneck, we finally test the approach within a general interior-point based Benders decomposition where the analytic center is readily available, and show clear dominance of the approach over classical branching. | Improved branching disjunctions for branch-and-bound: An analytic center approach
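As a sketch of the geometric idea (on a hypothetical toy polyhedron, not the paper's test problems), the snippet below computes the analytic center of {x : Ax ≤ b} by damped Newton steps on the log barrier and then evaluates the closed-form width of the Dikin ellipsoid along candidate directions; the thin direction is the kind of branching direction the paper's quadratic problem searches for.

```python
import numpy as np

def analytic_center(A, b, x0, iters=30):
    """Damped Newton on the log barrier of {x : Ax <= b}; returns the
    analytic center and the Dikin-ellipsoid matrix H at that center."""
    x = x0.astype(float)
    for _ in range(iters):
        s = b - A @ x                              # slacks (must stay > 0)
        g = A.T @ (1.0 / s)                        # barrier gradient
        H = A.T @ ((1.0 / s**2)[:, None] * A)      # barrier Hessian
        dx = np.linalg.solve(H, -g)
        t = 1.0
        while np.any(b - A @ (x + t * dx) <= 0):   # backtrack, stay feasible
            t *= 0.5
        x = x + t * dx
    s = b - A @ x
    return x, A.T @ ((1.0 / s**2)[:, None] * A)

def dikin_width(H, d):
    """Closed-form width of {u : u'Hu <= 1} along unit direction d."""
    d = d / np.linalg.norm(d)
    return 2.0 * np.sqrt(d @ np.linalg.solve(H, d))

# Toy box 0 <= x1 <= 2, 0 <= x2 <= 1 (a hypothetical instance).
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([2., 0., 1., 0.])
xc, H = analytic_center(A, b, np.array([0.5, 0.5]))
print(xc)                                   # ~ (1.0, 0.5)
print(dikin_width(H, np.array([1., 0.])))   # wide direction, ~1.41
print(dikin_width(H, np.array([0., 1.])))   # thin direction, ~0.71
```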
S0377221715004798 | This paper shows how the maximum covering and patrol routing problem (MCPRP) can be modeled as a minimum cost network flow problem (MCNFP). Based on the MCNFP model, all available benchmark instances of the MCPRP can be solved to optimality in less than 0.4s per instance. It is furthermore shown that several practical additions to the MCPRP, such as different start and end locations of patrol cars and overlapping shift durations can be modeled by a multi-commodity minimum cost network flow model and solved to optimality in acceptable computational times given the sizes of practical instances. | A minimum cost network flow model for the maximum covering and patrol routing problem |
S0377221715004804 | Mammography is known to be the most effective way of detecting breast cancer. The efficacy of mammography screening guidelines is highly associated with women's compliance with these recommendations. Currently, none of the existing policies takes women's behavior into consideration; instead, perfect adherence is assumed. However, an earlier longitudinal data analysis revealed that women's compliance with mammography guidelines has remained low in recent years. In this study, we develop a randomized discrete-time finite-horizon partially observable Markov chain model to evaluate a wide range of screening mammography policies, incorporating heterogeneity in women's adherence behaviors. Considering the potential harms of mammography tests (e.g., risk of developing radiation-induced breast cancer, false negatives, false positives, etc.), policies with varying starting age, ending age and frequency of screening mammograms at different ages are compared in terms of total quality adjusted life years (QALYs) and lifetime breast cancer mortality risk. Our results show that women with perfect adherence do not always experience higher QALYs. In fact, for some policies, including the American Cancer Society (ACS) policy, the general population with various adherence levels has higher QALYs than women with perfect adherence. However, in terms of lifetime breast cancer mortality risk, higher/perfect adherence always results in a lower risk of dying from breast cancer. This implies that the benefits of mammography in decreasing death from breast cancer outweigh the increased risk of developing radiation-induced breast cancer from mammographic screening. This study can help healthcare providers tailor screening mammography recommendations based on their patients' estimated adherence levels. | Evaluation of breast cancer mammography screening policies considering adherence behavior
S0377221715004816 | In this paper we consider a segment of a supply chain comprising an inventory and a transportation system that cooperate in the fulfillment of stochastic customer orders. The inventory is operated under a discrete-time (r, s, q) policy with backorders. The transportation system consists of an in-house transportation capacity which can be extended by costly external transportation capacity (such as a third-party logistics provider). We show that in a system of this kind, stock-outs and the resulting accumulation of inventory backorders introduce volatility into the workload of the transportation process. Geunes and Zeng (2001) have shown, for a base-stock system, that backordering decreases the variability of transportation orders. Our findings show that in inventory systems with order cycles longer than one period the opposite is true. In both cases, however, inventory decisions and transportation decisions must be taken simultaneously. We present a procedure to compute the probability distribution of the number of transportation orders and the resulting excess transportation requirements and costs. We show that the increase in transportation costs resulting from a safety stock reduction may offset the change in inventory costs. This effect may have a significant impact on general optimality statements for multi-echelon inventory systems. | Integrated optimization of safety stock and transportation capacity
S0377221715004828 | We address an important problem in the context of traffic management and control related to the optimum location of vehicle-ID sensors on the links of a network to derive route flow volumes. We consider both the full observability version of the problem, where one seeks the minimum number of sensors (or minimum cost) such that all the route flow volumes can be derived, and the estimation version of the problem, which arises when there is a limited budget for the location of sensors. Four mathematical formulations are presented. These formulations improve on the existing ones in the literature since they better define the feasible region of the problem by taking into account the temporal dimension of the license plate scanning process. The resulting mathematical formulations are solved to optimality and compared with the existing mathematical formulations. The results show that new and better solutions can be achieved with less computational effort. We also present two heuristic approaches, a greedy algorithm and a tabu search algorithm, which efficiently solve the analyzed problems and provide a very good trade-off between solution quality and computational time. | Vehicle-ID sensor location for route flow recognition: Models and algorithms
S0377221715004841 | This paper deals with a generalized version of the capacitated p-center problem. The model takes into account the possibility that a center might suffer a disruption (being unavailable to serve some demand) and assumes that every site will be covered by its closest available center. The problem is of interest when the goal is to locate emergency centers while, at the same time, taking precautions against an unforeseen incident (natural disaster, labor strike, accident…) which can cause failure of a center. We consider different formulations for the problem and extensive computational tests are reported, showing the potentials and limits of each formulation in several types of instances. Finally, a preprocessing phase for fixing variables has been developed and different families of valid inequalities have been proposed to strengthen the most promising formulations, obtaining in some cases much better resolution times. | Capacitated p-center problem with failure foresight |
S0377221715004853 | This paper is concerned with the single-item inventory system with a resource constraint and all-units quantity discount under continuous review, where demand is stochastic and discrete. In most actual inventory systems, the resource available for inventory management is limited, and the system can cope with resource shortages by incurring additional costs. Treating the resource constraint as a soft constraint alongside a quantity discount opportunity makes the model more practical. An optimization problem is formulated for finding an optimal (r, Q) policy, in which the per-unit resource usage depends on the unit purchasing price. The properties of the cost function are investigated, and an algorithm based on a one-dimensional search procedure is then proposed for finding an optimal (r, Q) policy; the algorithm minimizes the expected system costs and converges to a global optimum. Based on the properties of the partially conditioned cost functions, the presented algorithm is modified so that its search path to the optimal policy changes. Experimental results show that the modified version of the algorithm performs much better than the original algorithm across various test problem environments. | An optimal (r, Q) policy in a stochastic inventory system with all-units quantity discount and limited sharable resource
S0377221715004865 | Considering key uncertainties and health policy options in the reorganization of a long-term care (LTC) network is crucial. This study proposes a stochastic mixed integer linear programming model for planning the delivery of LTC services within a network of care where such aspects are modeled in an integrated manner. The model assists health care planners on how to plan the delivery of the entire range of LTC services – institutional, home-based and ambulatory services – when the main policy objective is the minimization of expected costs and while respecting satisficing levels of equity. These equity levels are modeled as constraints, ensuring the attainment of equity of access, equity of utilization, socioeconomic equity and geographical equity. The proposed model provides planners with key information on: when and where to locate services and with which capacity, how to distribute this capacity across services and patient groups, and which changes to the network of care are needed over time. Model outputs take into account the uncertainty surrounding LTC demand, and consider strategic health policy options adopted by governments. The applicability of the model is demonstrated through the resolution of a case study in the Great Lisbon region in Portugal with estimates on the equity-cost trade-off for several equity dimensions being provided. | An integrated approach for planning a long-term care network with uncertainty, strategic policy and equity considerations |
S0377221715004877 | Innovative approaches to the assessment and management of medical technologies use a combination of health technology assessment (HTA) and operations research methods, specifically multiple-criteria decision analysis (MCDA). The purpose of this article is to develop methodological support and provide a theoretical justification for decision support in the selection of medical devices under conditions of uncertainty, using MRI systems as an example. The goal of the method's application is formulated as follows: determine a ranked list of MRI systems for contributory health organisations administered by regional authorities (regional hospitals) in the Czech Republic. An analytic hierarchy process (AHP) and the Delphi method were used to elicit experts’ preferences and to build consensus. The expert group was selected based on eight complex criteria, and each expert was given a weighting factor. A set of 13 MRI systems and the 14 key default specifications that play the most important roles when hospitals select MRIs for purchase were defined. Strong agreement among the experts' judgments (W ≥ 0.6, p < 0.05) was revealed. A prediction regarding alternatives, weights and changes in priority vectors over the following 8 years is provided. The developed approach is useful for decision support when hospitals select medical devices under conditions of uncertainty. | Multi-criteria decision analysis for supporting the selection of medical devices under uncertainty
S0377221715004889 | The most important pan-European football tournaments include ties where two clubs play each other over two matches and the aggregate score determines which is admitted to the next stage of the competition. A number of stakeholders may be interested in assessing the chances of progression for either of the clubs once the score of the first match (leg) is known. The paper asks what would be a “good” result for a team in the first leg. Employing data from 6,975 contests, modelling reveals that what constitutes a good result has changed substantially over time. Generally, clubs which play at home in the first leg have become more likely to convert any given first-leg result to eventual success. Taking this trend into account, and controlling for team and country strength, a probit model is presented for use in generating probability estimates for which team will progress conditional on the first-leg scoreline. Illustrative results relate to ties where two average teams play each other and to ties where a relatively weak club plays home-first against a relatively strong club. Given that away goals serve as a tie-breaker should aggregate scores be equal after the two matches, the results also quantify how great the damage is when a home-first club concedes an away goal. | What is a good result in the first leg of a two-legged football match? |
S0377221715004890 | In this article, we consider the T² control chart for bivariate samples of size n with observations that are not only cross-correlated but also autocorrelated. The cross-covariance matrix of the sample mean vectors is derived under the assumption that the observations are described by a multivariate first-order autoregressive model, VAR(1). The combined effect of the correlation and autocorrelation on the performance of the T² chart is also investigated. Earlier studies proved that changes in only one variable are detected faster when the variables are correlated. This result extends to the case where one or both variables are also autocorrelated. | The effect of the autocorrelation on the performance of the T² chart
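The phenomenon is easy to reproduce by simulation. The sketch below uses hypothetical VAR(1) parameters (not taken from the article) and monitors subgroup means with the classical T² statistic built on the independence assumption; the false-alarm rate drifts well above its nominal level once autocorrelation is present.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p, n = 2, 5                                   # bivariate subgroups of size n
Phi = np.array([[0.5, 0.1], [0.1, 0.5]])      # VAR(1) coefficient matrix
Sigma_e = np.array([[1.0, 0.4], [0.4, 1.0]])  # innovation covariance

# Simulate a cross-correlated, autocorrelated bivariate in-control series.
T = 50_000
x = np.zeros((T, p))
e = rng.multivariate_normal(np.zeros(p), Sigma_e, size=T)
for t in range(1, T):
    x[t] = Phi @ x[t - 1] + e[t]

# Classical T^2 for each subgroup mean, using Sigma/n as if observations
# were independent; autocorrelation inflates the real variance of the means.
Sigma = np.cov(x.T)                           # stationary process covariance
xbar = x[: (T // n) * n].reshape(-1, n, p).mean(axis=1)
Sinv = np.linalg.inv(Sigma / n)
t2 = np.einsum("ij,jk,ik->i", xbar, Sinv, xbar)
print("false-alarm rate:", np.mean(t2 > chi2.ppf(0.995, df=p)))
# well above the nominal 0.005 because of the neglected autocorrelation
```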
S0377221715004907 | In this paper, we address the binary classification problem, in which one is given a set of observations, characterized by a number of (binary and non-binary) attributes, and wants to determine the class to which each observation belongs. The proposed classification algorithm is based on the Logical Analysis of Data (LAD) technique and belongs to the class of supervised learning algorithms. We introduce a novel metaheuristic-based approach for pattern generation within LAD. The key idea relies on the generation of a pool of patterns for each given observation of the training set. Such a pool is built with one or more criteria in mind (e.g., diversity, homogeneity, coverage, etc.) and is paramount to achieving high classification accuracy, as shown by the computational results we obtained. In addition, we address one of the major concerns of many data mining algorithms, i.e., the fine-tuning and calibration of parameters. We employ a novel technique, called the Biased Random-Key Genetic Algorithm, that allows the calibration of all the parameters of the algorithm in an automatic fashion, hence reducing the fine-tuning effort required and enhancing the performance of the algorithm itself. We tested the proposed approach on 10 benchmark instances from the UCI repository and show that the algorithm is competitive in terms of both classification accuracy and running time. | A pool-based pattern generation algorithm for logical analysis of data with automatic fine-tuning
S0377221715004919 | The tactical berth allocation problem (BAP) concerns the allocation of favorite berthing positions to ships that periodically call at the terminals. This paper investigates tactical-level berth allocation scheduling models. First, a deterministic model for the tactical BAP is formulated, taking into account the periodicity of the schedule. In reality, however, the number of containers that need to be handled (discharging and loading) for each ship is uncertain in the ship's future periods. Thus, for the tactical BAP, there is significant uncertainty with respect to the operation time (dwell time) of ships, which further complicates the traditional berth allocation decisions. From a stochastic perspective, this paper proposes both a stochastic programming formulation that can cope with arbitrary probability distributions of ships’ operation time deviations, and a robust formulation that is applicable to situations in which only limited information about probability distributions is available. The relationship between the two models is also investigated analytically. Meta-heuristic algorithms are suggested for solving the models. Numerical experiments are performed to validate the effectiveness of the proposed models and the efficiency of the proposed solution algorithms. The experiments also compare the stochastic programming formulation with the robust formulation and evaluate their potential benefits in practice. This study finds that the robust method can quickly derive a near-optimal solution to the stochastic model and also has the benefit of limiting the worst-case outcome of the tactical BAP decisions. | Tactical berth allocation under uncertainty
S0377221715004920 | In this article, we provide a new methodology for optimizing a portfolio of wind farms within a market environment, for two Market Designs (exogenous prices and endogenous prices). Our model is built on an agent-based representation of suppliers and generators interacting in a certain number of geographic demand markets, organized as two-tiered systems. Assuming rational expectations of the agents with respect to the outcome of the real-time market, suppliers take forward positions, which act as signals in the day-ahead market, to compensate for the uncertainty associated with supply and demand. Then, generators optimize their bilateral trades with the generators in the other markets. The Nash Equilibria resulting from this Signaling Game are characterized using Game Theory. The Markowitz Frontier, containing the set of efficient wind farm portfolios, is derived theoretically as a function of the number of wind farms and of their concentration. Finally, using a case study of France, Germany and Belgium, we simulate the Markowitz Frontier contour in the expected cost-risk plane. | Wind farm portfolio optimization under network capacity constraints
S0377221715004932 | To a large extent, electricity markets worldwide still rely on deterministic procedures for clearing energy and reserve auctions. However, increasing shares of the production mix consist of renewable sources whose nature is stochastic and non-dispatchable, as their output is uncertain and cannot be controlled by the operators of the production units. Stochastic programming models allow the joint determination of the day-ahead energy and reserve dispatch accounting for the uncertainty in the output from these sources. However, the size of these models gets quickly out of hand as a large number of scenarios are needed to properly represent the uncertainty. In this work, we take an alternative approach and cast the problem as an adaptive robust optimization problem. The resulting day-ahead energy and reserve schedules yield the minimum system cost, accounting for the cost of the redispatch decisions at the balancing (real-time) stage, in the worst-case realization of the stochastic production within a specified uncertainty set. We propose a novel reformulation of the problem that allows considering general polyhedral uncertainty sets. In a case-study, we show that, in comparison to a risk-averse stochastic programming model, the robust optimization approach progressively trades off optimality in expectation with improved performance in terms of risk. These differences, however, gradually taper off as the level of risk-aversion increases for the stochastic programming approach. Computational studies show that the robust optimization model scales well with the size of the power system, which is promising in view of real-world applications of this approach. | A robust optimization approach to energy and reserve dispatch in electricity markets |
S0377221715004944 | In this paper we model the multi-level uncapacitated facility location problem as two different combinatorial optimization problems. The first model is the classical representation of the problem which uses a set of vertices as combinatorial objects to represent solutions whereas in the second model we propose the use of a set of paths. An interesting observation is that the real-valued set function associated with the first combinatorial problem does not satisfy the submodular property, whereas the set function associated with the second problem does satisfy this property. This illustrates the fact that submodularity is not a property intrinsic to an optimization problem but rather to its mathematical representation. | Multi-level facility location as the maximization of a submodular set function |
S0377221715004956 | A District Cooling System (DCS) is an interconnected system encompassing a centralized chiller plant, a Thermal Energy Storage (TES) unit, a piping network, and clusters of consumers’ buildings. The main function of a DCS is to produce and deliver chilled water to satisfy the cooling demand of a scattered set of buildings. DCSs are recognized to be highly energy efficient, and therefore constitute an environment-friendly alternative to the traditional power-driven air conditioning systems being operated at individual buildings. In this paper, we investigate the optimal design and operation of a DCS so that the total investment and operational costs are minimized. This involves optimizing decisions related to chiller plant capacity, storage tank capacity, piping network size and layout, and quantities to be produced and stored during every period of time. To this end, mixed-integer programming (MIP) models, that explicitly capture the structural aspects as well as both pressure- and temperature-related requirements, are developed and tested. The results of computational experiments that demonstrate the practical effectiveness of the proposed models are also presented. | Optimization models for a single-plant District Cooling System |
S0377221715004968 | In onshore oilfields, several sucker-rod pumps are deployed over a large geographic area to lift oil from the bottom of production wells. Powered by electric rotary machines, the rod pumps operate according to cyclic control policies that alternate between on and off pumping periods and are designed to drive maximum production. This cyclic behavior gives rise to the problem of scheduling pumpoff operations in order to minimize the system power peak and thereby smoothen the power-consumption profile. To this end, this paper develops MILP formulations for the coordination of control policies and their reconfiguration during operations. The resulting MILP formulations, which either follow a column-based approach or use integer variables to model the power-consumption profile, are put to the test on a host of synthetic but representative oilfields. | Scheduling pumpoff operations in onshore oilfields under electric-power constraints
S0377221715005172 | In many managerial situations it is important to consider both risk and reward simultaneously. This is a challenging task using the standard techniques applied to sequential stochastic optimization problems, since these techniques are designed to consider only one objective at a time, either maximizing reward or minimizing risk. In applications such as operational decisions for start-ups, this can be particularly restricting, since managers need to make trade-offs between profitability-driven growth and the risk of bankruptcy. We extend in several ways prior work that has addressed the inventory issue for start-ups aiming to minimize the risk of bankruptcy. The primary contribution of this paper is a novel approach to track the mean as well as the variance of a set of policies in a dynamic stochastic programming model, using the mean-variance solutions in a simple heuristic for creating efficient risk-reward frontiers. This is challenging from an implementation standpoint, since it requires carrying information on both risk and reward simultaneously for each state, which standard stochastic programming solution methods are not designed to do. We also illustrate the use of our methodology in a richer model of start-up operations where, in addition to inventory issues, advertising decisions are also considered. | Analyzing operational risk-reward trade-offs for start-ups
S0377221715005184 | We study the operational problem of a make-to-order contract manufacturer seeking to integrate production scheduling and transportation planning for improved performance under a commit-to-delivery model. The manufacturer produces customer orders on a set of unrelated parallel lines/processors, accounting for release dates and sequence-dependent setup times. A set of shipping options with different costs and transit times is available for order delivery through third-party logistics service providers. The objective is to manufacture and deliver multiple customer orders, selecting from the available shipping options, before preset due dates so as to minimize the total cost of fulfilling orders, including tardiness penalties. We formulate the problem as a mixed integer programming model and provide a novel decomposition scheme to solve it. An exact dynamic programming model and a heuristic approach are presented to solve the subproblems. The performance of the solution algorithm is tested through a set of experimental studies and the results are presented. The algorithm is shown to efficiently solve the test cases, even the complex instances, to near optimality. | Integrated production and logistics planning: Contract manufacturing and choice of air/surface transportation
S0377221715005196 | In today’s competitive business environment, quick service with minimal waiting time is an important factor for customers when choosing a service. Many service organizations guarantee a uniform lead-time to all customers in order to gain competitive advantages in the market. In selecting a lead-time to quote, the firm has to take into consideration not only how customers will react to the delivery time guarantee, but also whether it has adequate capacity to fulfill the commitment. A short lead-time can bring both benefits and costs. It can increase customer demand, but might require a higher capacity level. We present a mathematical model and a solution method for determining the optimal quoted lead-time and capacity level for a profit-maximizing firm with time-varying and lead-time sensitive demand. The firm incurs convex capacity costs and pays lateness penalties whenever the actual lead-time exceeds the quoted lead-time. A few studies have been conducted on the relationship between uniform lead-time, capacity, demand, and overall profitability. However, none of them takes the time variation of demand into account. Our work differs from previous research in that we explicitly model such a demand pattern. | Capacity and lead-time management when demand for service is seasonal and lead-time sensitive |
S0377221715005202 | The single-product, single-period newsvendor problem with two decision variables, namely price and stock quantity, is considered. The performance measure, in addition to the expected revenue, includes the variance of the income scaled by a risk parameter. We present conditions for the concavity of this risk-sensitive performance measure and the uniqueness of the optimal solution for both the risk-averse and risk-seeking cases under the additive demand model, and compare the results to others previously published. These conditions are stated in terms of the lost sales rate elasticity. Furthermore, we provide numerical examples that illustrate the theoretical results presented herein. | A price-setting newsvendor problem under mean-variance criteria
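As a brute-force numerical counterpart to the analysis (under assumed parameter values, not the paper's closed-form conditions), the mean-variance objective can be estimated by Monte Carlo and searched over a price-quantity grid:

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, c, lam = 100.0, 2.0, 5.0, 0.01     # intercept, slope, cost, risk weight
eps = rng.normal(0.0, 8.0, size=5000)    # additive demand noise

def objective(p, q):
    """E[profit] - lam * Var[profit], estimated by Monte Carlo."""
    demand = np.maximum(a - b * p + eps, 0.0)   # additive demand model
    profit = p * np.minimum(q, demand) - c * q  # unsold units are lost
    return profit.mean() - lam * profit.var()

grid = [(p, q) for p in np.arange(10.0, 40.0, 0.5)
               for q in np.arange(10.0, 90.0, 1.0)]
p_star, q_star = max(grid, key=lambda pq: objective(*pq))
print(p_star, q_star, round(objective(p_star, q_star), 2))
```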
S0377221715005214 | Coordinating supply chains is an important goal for contract designers because it enables the channel members to increase their profits. Recently, many experimental studies have shown that behavioral aspects have to be taken into account when choosing the type of contract and specifying the contract parameters. In this paper, we analyze behavioral aspects of revenue-sharing contracts. We extend the classical normative decision model by incorporating reference-dependent valuation and show how this affects inventory decisions. We conduct different lab experiments to test our model. We find that human inventory decisions deviate from classical normative predictions, and we find evidence for reference-dependent valuation by human decision makers. We also show how contract designers can use the insights we gained to design better contracts. | Reference points in revenue sharing contracts—How to design optimal supply chain contracts
S0377221715005226 | The daily scheduling of an operating theatre is a highly constrained problem. In addition to standard scheduling constraints, many additional constraints on human and material resources encountered in real life should be taken into account. These constraints concern the priority of operations, the affinities between surgical team members, renewable and non-renewable resources, various sizes in the block scheduling strategy, and the surgical team's preferences/availabilities. We developed two models in our research work, using mixed-integer and constraint programming respectively. These were compared using a real-life case in order to determine which one copes better with a highly constrained problem. A cross-comparison of the experimental results shows that the mixed-integer programming model performs better with the weighted sum objective function than with the makespan minimization objective function. Conversely, the constraint programming model is better suited to the makespan minimization objective function than to the weighted sum objective function. The originality of this research lies at three levels: (1) two models are presented in detail and compared using real data; (2) constraint programming is used to schedule the operating theatre; (3) some new constraints are taken into account, such as the affinities between team members in the composition of surgical teams, and the priorities of patients such as diabetics. | Scheduling operating theatres: Mixed integer programming vs. constraint programming
S0377221715005238 | The problem of allocating a pool of cross-trained workers across multiple departments, units, or work centers is important for both manufacturing and service environments. Within the context of services, such as hospital nursing, the problem has commonly been formulated as a nonlinear assignment problem with an operationally-oriented objective function that focuses on the maximization of service utility as a function of deviations from target staffing levels in the departments. However, service managers might also deem it appropriate to consider human resource-oriented goals, such as accommodating worker preferences, avoiding decay of skill productivity, or the provision of training. We present a bicriterion formulation of the nonlinear worker assignment problem that incorporates both operational and human resource objective criteria. An algorithm for generating the entire Pareto efficient set associated with the bicriterion model is subsequently presented. A small numerical example is used to illustrate the bicriterion model and algorithm. A second example based on a test problem from the literature is also contained in the paper, and a third example is provided in an online supplement. In addition, a simulation experiment was conducted to evaluate the sensitivity of the algorithm to a variety of environmental characteristics. | A bicriterion algorithm for the allocation of cross-trained workers based on operational and human resource objectives |
S0377221715005263 | This paper concerns the problem of decomposing a network flow into an integral path flow such that the length of the longest path is minimized. It is shown that this problem is NP-hard in the strong sense. Two approximation algorithms are proposed for the problem: the longest path elimination (LPE) algorithm and the balanced flow propagation (BFP) algorithm. We analyze the properties of both algorithms and present the results of experimental studies examining their performance and efficiency. | Integral flow decomposition with minimum longest path length |
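For context, a generic greedy decomposition of an acyclic integral flow into path flows can be sketched as follows; this illustrates the operation being optimized, not the LPE or BFP algorithms themselves.

```python
def decompose(flow, source, sink):
    """Greedily decompose an acyclic integral s-t flow into path flows.
    `flow` maps arcs (u, v) to positive integer flow values."""
    residual, paths = dict(flow), []
    while any(u == source for (u, v) in residual):
        path, node = [source], source
        while node != sink:                 # follow arcs that carry flow
            node = next(v for (u, v) in residual if u == node)
            path.append(node)
        amount = min(residual[arc] for arc in zip(path, path[1:]))
        for arc in zip(path, path[1:]):     # subtract the path flow
            residual[arc] -= amount
            if residual[arc] == 0:
                del residual[arc]
        paths.append((path, amount))
    return paths

flow = {("s", "a"): 2, ("a", "t"): 1, ("a", "b"): 1, ("b", "t"): 1}
print(decompose(flow, "s", "t"))
# [(['s', 'a', 't'], 1), (['s', 'a', 'b', 't'], 1)]
```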
S0377221715005275 | There is increased interest in deploying charging station infrastructure for electric vehicles, due to the increasing adoption of such vehicles to reduce emissions. However, a number of key technological challenges complicate the provision of high quality of service to such vehicles: one stems from the relatively slow charging times, another from the relatively limited battery range. Hence, efficient strategies for routing vehicles that request charging to stations with available charging resources are an important component of the infrastructure. In this work, we propose a queueing modeling framework for the problem at hand and develop routing strategies that optimise a performance metric related to the vehicles’ sojourn time in the system. By incorporating appropriate weights into the well-known dynamic routing discipline “Join-the-Shortest-Queue”, we show that the proposed routing strategies not only maximise the queueing system’s throughput, but also significantly mitigate the vehicles’ sojourn time. The strategies are also adaptive in nature and responsive to changes in the speed of charging at the stations, the distribution of the vehicles’ point of origin when requesting service, the traffic congestion level and the vehicle speed; all of the above are novel aspects, compatible with the requirements of a modern electric vehicle charging infrastructure. | Optimal routing for electric vehicle service systems
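A dispatch rule of this flavor can be sketched in a few lines; the weight components below (travel time plus an expected wait-and-charge term) are illustrative assumptions, not the paper's calibrated weights.

```python
def route(vehicle_pos, stations):
    """Weighted Join-the-Shortest-Queue dispatch. Each station is a tuple
    (location_km, queue_length, charge_rate); returns the index of the
    station with the smallest weighted cost for this vehicle."""
    def cost(station):
        loc, queue, rate = station
        travel = abs(vehicle_pos - loc) / 50.0   # hours at 50 km/h
        service = (queue + 1) / rate             # expected wait + charge
        return travel + service
    return min(range(len(stations)), key=lambda i: cost(stations[i]))

stations = [(10.0, 3, 1.0), (40.0, 1, 0.5), (25.0, 0, 0.25)]
print(route(20.0, stations))   # -> 2: closest station despite slow charging
```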
S0377221715005287 | A minimization problem with a convex separable objective function subject to linear equality constraints and box constraints (bounds on the variables) is considered. A necessary and sufficient optimality condition is proved for a feasible solution to be an optimal solution to this problem. A primal-dual analysis is also included. Examples of convex separable objective functions for the considered problem are presented. | On the solution of multidimensional convex separable continuous knapsack problem with bounded variables
S0377221715005299 | In a recently published paper by Liu et al. [Liu, F., Zhang, W.G., Wang, Z.X. (2012). A goal programming model for incomplete interval multiplicative preference relations and its application in group decision-making. European Journal of Operational Research 218, 747–754], two equations are introduced to define consistency of incomplete interval multiplicative preference relations (IMPRs) and employed to develop a goal programming model for estimating missing values. This note illustrates that such consistency definition and estimation model are technically incorrect. New transitivity conditions are proposed to define consistent IMPRs, and a two-stage goal programming approach is devised to estimate missing values for incomplete IMPRs. | A note on “A goal programming model for incomplete interval multiplicative preference relations and its application in group decision-making” |
S0377221715005305 | We study a class of vector optimization problems with a C-convex objective function under linear constraints. We extend the proximal point algorithm used in scalar optimization to vector optimization. We analyze both the global and local convergence results for the new algorithm. We then apply the proximal point algorithm to a supply chain network risk management problem under bi-criteria considerations. | A new algorithm for linearly constrained c-convex vector optimization with a supply chain network risk application |
S0377221715005317 | In the Synchronized Pickup and Delivery Problem (SPDP), user-specified transportation requests from origin to destination points have to be serviced by a fleet of homogeneous vehicles. The task is to find a set of minimum-cost routes satisfying pairing and precedence, capacities, and time windows. Additionally, temporal synchronization constraints couple the service times at the pickup and delivery locations of the customer requests in the following way: a request has to be delivered within prespecified minimum and maximum time lags (called ride times) after it has been picked up. The presence of these ride-time constraints severely complicates the subproblem of the natural column-generation formulation of the SPDP, so that it is not clear whether their integration into the subproblem pays off in an integer column-generation approach. Therefore, we develop four branch-and-cut-and-price algorithms for the SPDP based on column-generation formulations that use different subproblems. Two of these subproblems have not been studied before and are considered for the first time in this paper. We derive new dominance rules and labeling algorithms for their effective solution. Extensive computational results indicate that integrating either both types of ride-time constraints or only the maximum ride-time constraints into the subproblem results in the strongest overall approach. | A comparison of column-generation approaches to the Synchronized Pickup and Delivery Problem
S0377221715005329 | The Permutation Flow Shop Scheduling Problem (PFSP) is a complex combinatorial optimization problem. The PFSP has been widely studied as a static problem using heuristics and metaheuristics. In reality, PFSPs are usually not static but dynamic, as customer orders are placed at random time intervals. In the dynamic problem, two tasks must be considered: (i) should a new order be accepted? and (ii) if accepted, how should the schedule be re-ordered, given that some orders may already be in process or waiting in the queue? For the first task, we propose a simple heuristic-based decision process, and for the second task, we develop a Genetic Algorithm (GA) based approach that is applied repeatedly for re-optimization as each new order arrives. The usefulness of the proposed approach is demonstrated by solving a set of test problems. In addition, the proposed approach, along with a simulation model, has been tested for maximizing the revenue of a flow shop production business under different order arrival scenarios. Finally, a case study is presented to show the applicability of the proposed approach in practice. | A real-time order acceptance and scheduling approach for permutation flow shop problems
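The fitness evaluation inside such a GA reduces to the standard permutation flow shop makespan recursion, C(j, m) = max(C(j-1, m), C(j, m-1)) + p(j, m); a minimal sketch:

```python
def makespan(sequence, proc):
    """Makespan of a permutation flow shop schedule;
    proc[j][m] = processing time of job j on machine m."""
    machines = len(proc[0])
    done = [0.0] * machines                       # completion time per machine
    for job in sequence:
        for m in range(machines):
            ready = done[m - 1] if m > 0 else 0.0 # job finished previous stage
            done[m] = max(done[m], ready) + proc[job][m]
    return done[-1]

proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]          # 3 jobs x 3 machines
print(makespan([0, 1, 2], proc))                  # 14
print(makespan([2, 0, 1], proc))                  # 15
```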
S0377221715005330 | A problem of profit-oriented disassembly line design and balancing with possible partial disassembly and the presence of hazardous parts is studied. The objective is to design a production line providing maximal revenue with a balanced workload. Task times are assumed to be random variables with known normal probability distributions. The cycle time constraints are to be jointly satisfied with at least a predetermined probability level. An AND/OR graph is used to model the precedence relationships among tasks. Several lower- and upper-bounding schemes are developed using second-order cone programming and convex piecewise linear approximation. To show the relevance and applicability of the proposed approach, a set of instances from the literature is solved to optimality. | Second order conic approximation for disassembly line design with joint probabilistic constraints
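For a single workstation with independent normally distributed task times, the chance constraint on the cycle time has the textbook second-order cone form shown below (a per-station sketch with hypothetical task data; the jointly satisfied version treated in the paper is harder):

```python
from math import sqrt
from scipy.stats import norm

mu = [4.0, 3.5, 2.0]      # mean task times at one workstation (hypothetical)
var = [0.8, 1.2, 0.5]     # task time variances (independent normals)
alpha = 0.95

# P(sum of task times <= c) >= alpha  <=>  sum(mu) + z_alpha * sqrt(sum(var)) <= c,
# the deterministic second-order cone equivalent of the chance constraint.
c_min = sum(mu) + norm.ppf(alpha) * sqrt(sum(var))
print(round(c_min, 3))    # smallest cycle time meeting the 95 percent level
```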
S0377221715005342 | This research is motivated by the capacity allocation problem at a major provider of customized products to the oil and gas drilling industry. We formulate a finite-horizon, discrete-time, dynamic programming model in which a firm decides how to reserve capacity for emergency demand and how to prioritize two classes of regular demand. While regular demand can be backlogged, emergency demand will be lost if not fulfilled within the period of its arrival. Since backlogging cost accumulates over time, we find it optimal for the firm to adopt a dynamic prioritization policy that evaluates the priorities of different classes of regular demand every period. The optimal prioritization involves metrics that measure backlogging losses from various perspectives. We fully characterize the firm’s optimal prioritization and reservation policy. Those characterizations shed light on managerial insights. | Prioritizing regular demand while reserving capacity for emergency demand |
S0377221715005354 | In production models, all manufactured items are generally assumed to be perfect. This assumption is not always correct, as defective items may occur during the production process for several reasons. This paper describes a deteriorating production process which randomly shifts from an in-control state to an out-of-control state. Under a full inspection policy, the inspection cost drives up the expected total inventory cost; a product inspection policy is therefore better suited to reducing inspection costs. During the product inspection process, inspectors may falsely classify a defective item as non-defective and vice versa. Type I and Type II errors are incorporated to make the model more realistic than existing models. The model also includes a warranty policy over a fixed time period. Numerical examples, a sensitivity analysis, and graphical representations are given to illustrate the model. | Product inspection policy for an imperfect production system with inspection errors and warranty cost
S0377221715005378 | An extended version of the flexible job shop problem is tackled in this work. The considered extension to the classical flexible job shop problem allows the precedences between the operations to be given by an arbitrary directed acyclic graph instead of a linear order. Therefore, the problem consists of allocating the operations to the machines and sequencing them in compliance with the given precedences. The goal in the present work is the minimization of the makespan. A list scheduling algorithm is introduced and its natural extension to a beam search method is proposed. Numerical experiments assess the efficiency of the proposed approaches. | List scheduling and beam search methods for the flexible job shop scheduling problem with sequencing flexibility |
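The flavor of such a list scheduling heuristic is illustrated below with a trivial priority rule and a hypothetical three-operation instance; it is a generic sketch, not the paper's algorithm.

```python
def list_schedule(ops, prec, n_machines):
    """Greedy list scheduling for a flexible job shop with DAG precedences.
    ops[i] = {machine: processing time} for the machines eligible for op i;
    prec = (i, j) arcs meaning op i must finish before op j starts."""
    preds = {i: [] for i in ops}
    for i, j in prec:
        preds[j].append(i)
    finish = {}                          # completion times of scheduled ops
    free = [0.0] * n_machines            # instant each machine becomes idle
    while len(finish) < len(ops):
        ready = [i for i in ops if i not in finish
                 and all(p in finish for p in preds[i])]
        op = min(ready)                  # trivial priority rule: lowest index
        release = max((finish[p] for p in preds[op]), default=0.0)
        # allocate to the machine that completes the operation earliest
        m, t = min(((m, max(free[m], release) + d)
                    for m, d in ops[op].items()), key=lambda mt: mt[1])
        free[m], finish[op] = t, t
    return max(finish.values())          # makespan

ops = {0: {0: 3, 1: 5}, 1: {0: 2, 1: 2}, 2: {1: 4}}
print(list_schedule(ops, [(0, 2), (1, 2)], n_machines=2))   # 7
```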
S0377221715005408 | Every year, hundreds of thousands of people are affected by natural disasters. The number of casualties is usually increased by lack of clean water, food, shelter, and adequate medical care during the aftermath. One of the main problems influencing relief distribution is the state of the post-disaster road network. In this paper, we consider the problem of scheduling the emergency repair of a rural road network that has been damaged by the occurrence of a natural disaster. This problem, which we call the Network Repair Crew Scheduling and Routing Problem, addresses the scheduling and routing of a repair crew, optimizing accessibility to the towns and villages that demand humanitarian relief by repairing roads. We develop both an exact dynamic programming (DP) algorithm and an iterated greedy-randomized constructive procedure to solve the problem and compare the performance of both approaches on small- to medium-scale instances. Our numerical analysis of the solution structure validates the optimization model and provides managerial insights into the problem and its solutions. | Network repair crew scheduling and routing for emergency relief distribution problem
S0377221715005421 | We present a general construction that allows us to extend a given subcopula to a copula in such a way that the extension is affine on some specific segments of the copula domain. This construction is then applied to provide convergence theorems for approximating a copula in strong convergence and in the D₁-metric (related to the Markov kernel representation of a copula). | Convergence results for patchwork copulas
S0377221715005433 | We consider an online matching problem with concave returns. This problem is a generalization of the traditional online matching problem and has vast applications in online advertising. In this work, we propose a dynamic learning algorithm that achieves near-optimal performance for this problem when the inputs arrive in a random order and satisfy certain conditions. The key idea of our algorithm is to learn the input data pattern dynamically: we solve a sequence of carefully chosen partial allocation problems and use their optimal solutions to assist with the future decisions. Our analysis belongs to the primal-dual paradigm; however, the absence of linearity of the objective function and the dynamic feature of the algorithm makes our analysis quite unique. We also show through numerical experiments that our algorithm performs well for test data. | A dynamic learning algorithm for online matching problems with concave returns |
S0377221715005445 | More and more companies in the routing industry are providing consistent service to gain competitive advantage. However, improved service consistency comes at the price of higher routing cost, i.e., routing cost and service consistency are conflicting objectives. In this paper, we extend the generalized consistent vehicle routing problem (GenConVRP) by considering several objective functions: improving driver consistency, improving arrival time consistency, and minimizing routing cost are independent objectives of the problem. We refer to the problem as the multi-objective generalized consistent vehicle routing problem (MOGenConVRP). A multi-objective optimization approach enables a thorough trade-off analysis between the conflicting objective functions, and the results of this paper should help companies find adequate consistency goals to aim for. Results are generated for several test instances by two exact solution approaches and one heuristic. The exact approaches are based on the ϵ-constraint framework and are used to solve small test instances to optimality. Large instances with up to 199 customers and a planning horizon of 5 days are solved by multi-directional large neighborhood search (MDLNS), which combines the multi-directional local search framework and the LNS for the GenConVRP. The solution quality of the heuristic is evaluated by examining five multi-objective quality indicators. We find that MDLNS is a suitable solution approach for performing a meaningful trade-off analysis. Our analysis shows that a 70 percent improvement in arrival time consistency is achieved by increasing travel cost by no more than 3.84 percent, on average; having each customer visited by the same driver each time is significantly more expensive than allowing at least two different drivers per customer; and in many cases, arrival time consistency and driver consistency can be improved simultaneously. | The multi-objective generalized consistent vehicle routing problem
S0377221715005457 | Recently, Georgiev, Luc, and Pardalos (2013) [Robust aspects of solutions in deterministic multiple objective linear programming, European Journal of Operational Research, 229(1), 29–36] introduced the notion of robust efficient solutions for linear multi-objective optimization problems. In this paper, we extend this notion to the nonlinear case. It is shown that, under compactness of the feasible set or convexity, each robust efficient solution is a properly efficient solution. Some necessary and sufficient conditions for robustness, with respect to the tangent cone and the non-ascent directions, are proved. An optimization problem for calculating the robustness radius is presented, followed by a comparison between the newly defined robustness notion and two existing ones. Moreover, some alterations of objective functions preserving weak/proper/robust efficiency are studied. | Robustness in nonsmooth nonlinear multi-objective programming
S0377221715005469 | The solution of several operations research problems requires the creation of a quantitative model. Sensitivity analysis is a crucial step in the model building and result communication process. Through sensitivity analysis we gain essential insights on model behavior, on its structure and on its response to changes in the model inputs. Several interrogations are possible and several sensitivity analysis methods have been developed, giving rise to a vast and growing literature. We present an overview of available methods, structuring them into local and global methods. For local methods, we discuss Tornado diagrams, one way sensitivity functions, differentiation-based methods and scenario decomposition through finite change sensitivity indices, providing a unified view of the associated sensitivity measures. We then analyze global sensitivity methods, first discussing screening methods such as sequential bifurcation and the Morris method. We then address variance-based, moment-independent and value of information-based sensitivity methods. We discuss their formalization in a common rationale and present recent results that permit the estimation of global sensitivity measures by post-processing the sample generated by a traditional Monte Carlo simulation. We then investigate in detail the methodological issues concerning the crucial step of correctly interpreting the results of a sensitivity analysis. A classical example is worked out to illustrate some of the approaches. | Sensitivity analysis: A review of recent advances |
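As one concrete member of the variance-based family, a pick-and-freeze Monte Carlo estimator of first-order Sobol indices can be written in a few lines (a standard textbook estimator on a toy linear model, not tied to the review's worked example):

```python
import numpy as np

def sobol_first_order(model, sampler, n=100_000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices,
    assuming independent inputs (Saltelli-style estimator)."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n), sampler(rng, n)   # two independent input samples
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(A.shape[1]):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                   # freeze coordinate i from A
        indices.append(np.mean(yA * (model(ABi) - yB)) / var)
    return indices

# Toy model Y = X1 + 2*X2 with independent standard normal inputs:
# the analytic first-order indices are 0.2 and 0.8.
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
sampler = lambda rng, n: rng.standard_normal((n, 2))
print(sobol_first_order(model, sampler))
```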
S0377221715005470 | Whether an elderly parent should move, or be moved, into an assisted care facility can be a difficult decision for families, and one that is increasingly common as the ‘baby-boomer’ population ages. This paper explores decisions about care for elderly family members and, in particular, whether a parent should move (or be moved) into an assisted care facility (ACF). The problematic situation, based on the personal experience of the authors, is explored using two different methods as problem structuring aids, providing critical insights into the dilemma facing many families. Boardman's Soft Systems Methodology (BSSM) was used along with the Evaporating Cloud (EC) from the Theory of Constraints (TOC). This use of the two methods in multimethodological fashion, as complementary lenses, allowed the elicitation, clarification and elaboration of the assumptions underlying the issue of whether the parent should move into an ACF. Multiple avenues for resolving the issue are surfaced, along with several opportunities for further research. The paper contributes to community OR by showing how the different frames can work together to address the fraught situation in which families can find themselves, as adult children seek to safeguard their elderly parents by accessing an ACF while also endeavouring to maintain satisfactory family relationships. The paper makes a unique contribution, not just in terms of highlighting the eldercare situation and suggesting ways forward, but also in terms of the multimethodological use of BSSM and TOC. Finally, it is significant that the case study originated in the USA, where ‘Soft OR’ methods are rarely applied. | Insights into the eldercare conundrum through complementary lenses of Boardman's SSM and TOC's Evaporating Cloud
S0377221715005482 | This paper discusses univariate density estimation in situations where the sample (hard information) is supplemented by “soft” information about the random phenomenon. These situations arise broadly in operations research and management science, where practical and computational reasons severely limit the sample size but problem structure and past experience can be brought in. In particular, density estimation is needed for the generation of input densities for simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum likelihood estimator that incorporates any, possibly random, soft information through an arbitrary collection of constraints. We illustrate the breadth of possibilities by discussing soft information about shape, support, continuity, smoothness, slope, location of modes, symmetry, density values, neighborhood of a known density, moments, and distribution functions. The maximization takes place over spaces of extended real-valued semicontinuous functions and therefore allows us to consider essentially any conceivable density as well as convenient exponential transformations. The infinite dimensionality of the optimization problem is overcome by approximating splines tailored to these spaces. To facilitate the treatment of small samples, the construction of these splines is decoupled from the sample. We discuss existence and uniqueness of the estimator, examine consistency under increasing hard and soft information, and give rates of convergence. Numerical examples illustrate the value of soft information, the ability to generate a family of diverse densities, and the effect of misspecification of the soft information. | Fusion of hard and soft information in nonparametric density estimation
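A heavily simplified illustration of the idea of fusing hard and soft information: the paper above works over spaces of semicontinuous functions with tailored spline approximations, whereas the sketch below uses only a crude piecewise-constant density on a known support, maximizing the sample log-likelihood subject to a soft shape constraint (monotonicity). The sample, bin count, and constraint choice are all hypothetical.

```python
# Constrained maximum likelihood density estimation with "soft" shape
# information, in a deliberately simplified piecewise-constant form.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sample = rng.beta(1.0, 3.0, size=30)   # hard information: a small sample on [0, 1]

K = 20                                  # bins of the piecewise-constant density
edges = np.linspace(0.0, 1.0, K + 1)
width = 1.0 / K
idx = np.clip(np.digitize(sample, edges) - 1, 0, K - 1)  # bin of each point
counts = np.bincount(idx, minlength=K)

def neg_loglik(p):
    # Log-likelihood of the sample under density heights p (one per bin).
    return -np.sum(counts * np.log(np.maximum(p, 1e-12)))

cons = [{'type': 'eq', 'fun': lambda p: np.sum(p) * width - 1.0}]  # integrates to 1
# Soft information: the density is known to be non-increasing on its support.
cons += [{'type': 'ineq', 'fun': (lambda p, k=k: p[k] - p[k + 1])}
         for k in range(K - 1)]

res = minimize(neg_loglik, np.ones(K), method='SLSQP',
               bounds=[(0.0, None)] * K, constraints=cons)
print(res.x.round(2))                   # a non-increasing piecewise-constant fit
```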
S0377221715005494 | Condition monitoring (CM) and manual inspection are increasingly used in industry to identify a system's state so that necessary preventive maintenance (PM) decisions can be made. In this paper, we present a model that considers a single-unit system subject to both CM and additional manual inspections. There are two preset control limits: an inspection threshold and a preventive replacement (PR) threshold. When a CM measurement is equal to or greater than the inspection threshold but less than the PR threshold, a manual inspection is initiated. When a CM measurement is greater than the PR threshold, a PR activity should be carried out. The system's degradation evolves according to a two-stage failure process: the normal working stage, which lasts from new until the point at which a defect is initiated, with the CM measurement following a stochastic process; and the delay-time stage, which lasts from defect initiation until failure, with the CM measurement following an increasing stochastic process. We assume that a manual inspection is perfect in that it can always identify which of the two stages the system is in. In our study, the decision variables are the CM interval and the inspection threshold, and we aim to minimize the expected cost per unit time. We provide a numerical example to demonstrate the applicability and solution procedure of the model. | Preventive replacement for systems with condition monitoring and additional manual inspections
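The two-threshold control policy described above reduces to a simple decision rule mapping each CM reading to an action. The sketch below states that rule directly; the threshold values and readings are hypothetical placeholders, and the actual model optimizes the CM interval and inspection threshold rather than fixing them.

```python
# The two-threshold decision rule from the abstract, with placeholder values.
def cm_decision(measurement, inspect_limit, replace_limit):
    """Map a condition-monitoring reading to the action the policy prescribes."""
    if measurement >= replace_limit:
        return "preventive replacement"   # reading at or above the PR threshold
    if measurement >= inspect_limit:
        return "manual inspection"        # between the two thresholds
    return "continue operating"           # below the inspection threshold

for m in (0.3, 0.6, 0.9):  # hypothetical readings and thresholds
    print(m, cm_decision(m, inspect_limit=0.5, replace_limit=0.8))
```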