FileName | Abstract | Title |
---|---|---|
S0377221715005500 | Six Sigma is a widely used method to improve processes from various industry sectors. The target failure rate for Six Sigma projects is 3.4 parts per million or 2 parts per billion. In this paper, we show that when a process is exponential, attaining such performances may require a larger reduction in variation (i.e., greater quality-improvement effort). In addition, identifying whether the process data are of non-normal distribution is important to more accurately estimate the effort required to improve the process. A key finding of this study is that, for a low kσ level, the amount of variation reduction required to improve an exponentially distributed process is less than that of a normally distributed process. On the other hand, for a higher kσ level, the reverse scenario is the case. This study also analyzes processes following Gamma and Weibull distributions, and the results further support our concern that simply reporting the Sigma level as an indication of the quality of a product or process can be misleading. Two optimization models are developed to illustrate the effect of underestimating the quality-improvement effort on the optimal solution to minimize cost. In conclusion, the classical and widely used assumption of a normally distributed process may lead to implementation of quality-improvement strategies or the selection of Six Sigma projects that are based on erroneous solutions. | Six Sigma performance for non-normal processes |
S0377221715005512 | We propose a new speed and departure time optimization algorithm for the pollution-routing problem (PRP), which runs in quadratic time and returns an optimal schedule. This algorithm is embedded into an iterated local search-based metaheuristic to achieve a combined speed, scheduling and routing optimization. The start of the working day is set as a decision variable for individual routes, thus enabling a better assignment of human resources to required demands. Some routes that were evaluated as unprofitable can now appear as viable candidates later in the day, leading to a larger search space and further opportunities for distance optimization via better service consolidation. Extensive computational experiments on available PRP benchmark instances demonstrate the good performance of the algorithm. The flexible departure times from the depot contribute to reducing operational costs by 8.36% on the considered instances. | A speed and departure time optimization algorithm for the pollution-routing problem |
S0377221715005524 | In this paper, the resource allocation problem in multi-class dynamic PERT networks with finite capacity of concurrent projects (COnstant Number of Projects In Process (CONPIP)) is studied. The dynamic PERT network is modeled as a queuing network, where new projects from different classes (types) are generated according to independent Poisson processes with different rates over the time horizon. Each activity of a project is performed at a devoted service station with one server located in a node of the network, whereas activity durations for different classes in each service station are independent and exponentially distributed random variables with different service rates. Indeed, the projects from different classes may be different in their precedence networks and also the durations of the activities. For modeling the multi-class dynamic PERT networks with CONPIP, we first consider every class separately and convert the queueing network of every class into a proper stochastic network. Then, by constructing a proper finite-state continuous-time Markov model, a system of differential equations is created to compute the project completion time distribution for any particular project. The problem is formulated as a multi-objective model with three objectives to optimally control the resources allocated to the service stations. Finally, we develop a simulated annealing (SA) algorithm to solve this multi-objective problem, using the goal attainment formulation. We also compare the SA results against the results of a discrete-time approximation of the original optimal control problem, to show the effectiveness of the proposed solution technique. | Resource allocation in multi-class dynamic PERT networks with finite capacity |
S0377221715005536 | In this paper we study two reverse auction formats in a single period setting, the sealed pay-as-bid and the open format, when suppliers are capacity constrained. In the pay-as-bid format we characterize the asymmetric bidding equilibrium for the case of two suppliers with uniformly distributed cost. We find that the pay-as-bid auction allocates business inefficiently and that a supplier’s bid is nonincreasing in the opponent’s capacity and is typically decreasing in its own capacity. We then characterize a descending price-clock open auction implementation and find that it is optimal and that the buyer’s expected cost decreases as capacity is more evenly spread. Finally, we find that the pay-as-bid auction results in a higher expected cost to the buyer as compared to the open auction. | Procurement auctions with capacity constrained suppliers |
S0377221715005548 | The investigated problem is the analysis of customers’ strategic behavior in a single-server Markovian M^2/M/1 queue with batch arrivals of two customers and a reward-cost structure. At their arrival time, customers can decide to join the queue or to balk. The utility of each one depends on his decision, on his partner’s decision and on the system state. Two cases are considered: when the system provides the information about its state (observable case), and when this information is not provided (unobservable case). Both problems are modeled as games in extensive form with complete and imperfect information. We give the Nash equilibria for each corresponding game and compare the two cases in order to determine the policy that best suits the system’s manager. | Customers’ strategic behavior in batch arrivals M^2/M/1 queue |
S0377221715005561 | The financial health of local governments has attracted considerable interest over the past couple of decades. In this study, we follow a benchmarking perspective and introduce a multi-attribute financial evaluation model that allows peer assessments to be made, including comparisons over time, while differentiating between managerial performance and the effect of the external environment. The model is applied to a large database involving the entire population of French municipalities over the period 2000–2012 to assess how the reforms implemented over the past decade (taxation and decentralization) have affected the financial performance of local governments in France. The findings are of particular interest to both academia and policymakers, including local public authorities and central governments. | A novel multi-attribute benchmarking approach for assessing the financial performance of local governments: Empirical evidence from France |
S0377221715005779 | We introduce a novel metaheuristic methodology to improve the initialization of a given deterministic or stochastic optimization algorithm. Our objective is to improve the performance of the considered algorithm, called the core optimization algorithm, by reducing its number of cost function evaluations, by increasing its success rate and by boosting the precision of its results. In our approach, the core optimization is considered as a sub-optimization problem for a multi-layer line search method. The approach is presented and implemented for various particular core optimization algorithms: Steepest Descent, Heavy-Ball, Genetic Algorithm, Differential Evolution and Controlled Random Search. We validate our methodology by considering a set of low and high dimensional benchmark problems (i.e., problems of dimension between 2 and 1000). The results are compared to those obtained with the core optimization algorithms alone and with two additional global optimization methods (Direct Tabu Search and Continuous Greedy Randomized Adaptive Search). The latter also aim at improving the initial condition for the core algorithms. The numerical results seem to indicate that our approach improves the performance of the core optimization algorithms and yields algorithms that are more efficient than the other optimization methods studied here. A Matlab optimization package called “Global Optimization Platform” (GOP), implementing the algorithms presented here, has been developed and can be downloaded at: http://www.mat.ucm.es/momat/software.htm | A multi-layer line search method to improve the initialization of optimization algorithms |
S0377221715005780 | We present a scheme for coordinating decentralized parties that share central resources but hold private information about their decision problems modeled as linear programs. This setting is of particular importance for supply chains, in which the plans of independent, often legally separated, parties have to be synchronized. The scheme is based on an iterative generation and exchange of proposals regarding the parties’ input to or withdrawal from the central resources (i.e. primal information). We prove that the system-wide optimum can be identified in a finite number of steps. A simple numerical example illustrates the information exchange and the models involved when coordinating a two-stage supply chain. | Coordinating decentralized linear programs by exchange of primal information |
S0377221715005792 | This paper investigates the weak convergence of general non-Gaussian GARCH models together with an application to the pricing of European style options determined using an extended Girsanov principle and a conditional Esscher transform as the pricing kernel candidates. Applying these changes of measure to asymmetric GARCH models sampled at increasing frequencies, we obtain two risk neutral families of processes which converge to different bivariate diffusions, which are no longer standard Hull–White stochastic volatility models. Regardless of the innovations used, the GARCH implied diffusion limit based on the Esscher transform can be obtained by applying the minimal martingale measure under the physical measure. However, we further show that for skewed GARCH driving noise, the risk neutral diffusion limit of the extended Girsanov principle exhibits a non-zero market price of volatility risk which is proportional to the market price of the equity risk, where the constant of proportionality depends on the skewness and kurtosis of the underlying distribution. Our theoretical results are further supported by numerical simulations and a calibration exercise to observed market quotes. | Non-Gaussian GARCH option pricing models and their diffusion limits |
S0377221715005809 | Inter-dependency among the decision criteria and difficulty of establishing a common membership grade are two important issues to be addressed in multi-criteria group decision making (MCGDM) problems in the environment of uncertainty. The main purpose of this paper is to define the Choquet integral operator for interval-valued intuitionistic hesitant fuzzy sets (IVIHFS) and to extend the technique for order preference by similarity to ideal solution (TOPSIS) method using the Choquet integral operator in an interval-valued intuitionistic hesitant fuzzy environment. In the present study we define the interval-valued intuitionistic hesitant fuzzy Choquet integral (IVIHFCI) operator and extend the definition of the Hamming distance to the elements of IVIHFS. Using the IVIHFCI operator and the Hamming distance for IVIHFS, we extend the TOPSIS method for MCGDM to the interval-valued intuitionistic hesitant fuzzy environment, considering the interaction phenomena among the decision criteria. An illustrative example is also presented to demonstrate the proposed method. | Interval-valued intuitionistic hesitant fuzzy Choquet integral based TOPSIS method for multi-criteria group decision making |
S0377221715005810 | In this paper, a combinatorial optimization model is proposed to efficiently select security safeguards in order to protect IT infrastructures and systems. The approach is designed to provide very concrete decision support for an organization as a whole or separately for specific systems. It can be applied in practice without requiring the decision maker himself to collect extensive input data. This is accomplished by using an existing comprehensive and highly accepted knowledge base as a basis for decision making. For our analysis, we use the publicly available IT baseline protection catalogues of the German Federal Office for Information Security (BSI). The catalogues contain more than 500 threats and over 1200 safeguard alternatives to choose from. Applying our model, it is possible to make use of this knowledge and determine optimal selections of safeguards according to given security requirements. The approach supports the decision maker in establishing an effective baseline security strategy. | Optimal selection of IT security safeguards from an existing knowledge base |
S0377221715005822 | We extend the well-known spatial competition model (d’Aspremont, Gabszewicz & Thisse, 1979) to a continuous time model in which two firms compete at each instant. Our focus is on the entry timing decisions of firms and their optimal locations. We demonstrate that the leader has an incentive to locate closer to the center to delay the follower’s entry, leading to a non-maximum differentiation outcome. We also investigate how exogenous parameters affect the leader’s location and firms’ values and, in particular, numerically show that the profit of the leader changes non-monotonically with an increase in the transport cost parameter. | Product differentiation and entry timing in a continuous time spatial competition model |
S0377221715005834 | In Chen, Cook, Kao, and Zhu (2013), it is demonstrated, as a network DEA pitfall, that while the multiplier and envelopment DEA models are dual models and equivalent under the standard DEA, such is not necessarily true for the two types of network DEA models in deriving divisional efficiency scores and frontier projections. As a reaction to this work, we demonstrate that the duality in the standard DEA naturally migrates to the two-stage network DEA. Formulas are developed to obtain frontier projections and divisional efficiency scores using the solutions of a DEA model and its dual. The case of Taiwanese non-life insurance companies is revisited using the newly developed approach. | A note on two-stage network DEA model: Frontier projection and duality |
S0377221715005846 | We consider the problem of scheduling a set of n jobs on m identical and parallel batching machines. The machines have identical capacities equal to K and the jobs have identical processing times equal to p. Job j has a size s_j, a due date d_j and a profit R_j. Several jobs can be batched together and processed by a machine, provided that the total size of the jobs in the batch does not exceed the machine capacity K. The company will earn a profit of R_j dollars if job j is delivered by time d_j; otherwise, it earns nothing. A third-party logistics (3PL) provider will be used to deliver the jobs. The 3PL provider picks up the jobs at times T_1 < T_2 < ⋅⋅⋅ < T_z, and v_k (1 ≤ k ≤ z) vehicles will be provided for delivery at time T_k. The vehicles have identical capacities equal to C. The objective is to find a production and delivery schedule so as to maximize the total profit that the company can earn. We show that the problem is solvable in polynomial time if the jobs have identical sizes, but it becomes unary NP-hard if the jobs have different sizes. We propose heuristics for various NP-hard cases and analyze their performance. | Integrated production and delivery on parallel batching machines |
S0377221715005858 | The motivation of this paper is to introduce a hybrid Rolling Genetic Algorithm-Support Vector Regression (RG-SVR) model for optimal parameter selection and feature subset combination. The algorithm is applied to the task of forecasting and trading the EUR/USD, EUR/GBP and EUR/JPY exchange rates. The proposed methodology genetically searches over a feature space (pool of individual forecasts) and then combines the optimal feature subsets (SVR forecast combinations) for each exchange rate. This is achieved by applying a fitness function specialized for financial purposes and adopting a sliding window approach. The individual forecasts are derived from several linear and non-linear models. RG-SVR is benchmarked against genetically and non-genetically optimized SVR and SVM models that dominate the relevant literature, along with the robust ARBF-PSO neural network. The statistical and trading performance of all models is investigated over the period 1999–2012. As it turns out, RG-SVR presents the best performance in terms of statistical accuracy and trading efficiency for all the exchange rates under study. This superiority confirms the success of the implemented fitness function and training procedure, while it validates the benefits of the proposed algorithm. | Modeling, forecasting and trading the EUR exchange rates with hybrid rolling genetic algorithms—Support vector regression forecast combinations |
S0377221715005871 | Dispersion is a desirable element inherent in many location problems. For example, dispersive strategies are used in the location of franchise stores, bank branches, defensive missile silo placement, halfway homes, and correctional facilities, or where there is need to be dispersed as much as possible in order to minimize impacts. Two classic models that capture the essence of dispersion between facilities involve: (1) locating exactly p-facilities while maximizing the smallest distance of separation between any two of them, and (2) maximizing the number of facilities that are being located subject to the condition that each facility is no closer than r-distance to its closest neighboring facility. The latter of these two problems is called the anti-covering problem, the subject of this paper. Virtually all past research has involved an attempt to solve for the “best or maximal packing” solution to a given anti-covering problem. This paper deals with what one may call the worst case solution of an anti-covering problem. That is, what is the smallest number of needed facilities and their placement such that their placement thwarts or prevents any further facility placement without violating the r-separation requirement? We call this the disruptive anti-covering location problem. It is disruptive in the sense that such a solution would efficiently prevent an optimal packing from occurring. We present an integer linear program model for this new location problem, provide example problems which indicate that very disruptive configurations exist, and discuss the generation of a range of stable levels to this problem. | The disruptive anti-covering location problem |
S0377221715005883 | This paper studies a valuation framework for financial contracts subject to reference and counterparty default risks with collateralization requirement. We propose a fixed point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous PDEs, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask spreads. | Pricing derivatives with counterparty risk and collateralization: A fixed point approach |
S0377221715005895 | Empirical evidence on how cognitive factors impact the effectiveness of model-supported group decision making is lacking. This study reports on an experiment on the effects of need for closure, defined as a desire for definite knowledge on some issue and the eschewal of ambiguity. The study was conducted with over 40 postgraduate student groups. A quantitative analysis shows that compared to groups low in need for closure, groups high in need for closure experienced less conflict when using Value-Focused Thinking to make a budget allocation decision. Furthermore, low need for closure groups used the model to surface conflict and engaged in open discussions to come to an agreement. By contrast, high need for closure groups suppressed conflict and used the model to put boundaries on the discussion. Interestingly, both groups achieve similar levels of consensus, and high need for closure groups are more satisfied than low need for closure groups. A qualitative analysis of a subset of groups reveals that in high need for closure groups only a few participants control the model building process, and final decisions are not based on the model but on simpler tools. The findings highlight the need to account for the effects of cognitive factors when designing and deploying model-based support for practical interventions. | Different paths to consensus? The impact of need for closure on model-supported group conflict management |
S0377221715005901 | In this paper we consider a routing problem where uncapacitated vehicles are loaded with goods, requested by the customers, that arrive at the depot over time. The arrival time of a product at the depot is called its release date. We consider two variants of the problem. In the first one a deadline to complete the distribution is given and the total distance traveled is minimized. In the second variant no deadline is given and the total time needed to complete the distribution is minimized. While both variants are in general NP-hard, we show that they can be solved in polynomial time if the underlying graph has a special structure. | Complexity of routing problems with release dates |
S0377221715005913 | We consider a hard decentralized scheduling problem with heterogeneous machines and competing job sets that belong to different self-interested stakeholders (agents). The determination of a beneficial solution, i.e., a respective contract in terms of a common schedule, is particularly difficult due to information asymmetry and self-interested behavior of the involved agents. The agents intend to minimize their individual costs that consist of tardiness cost and their share of the machine operating cost. The aim of this study is to find socially beneficial outcomes by means of negotiation mechanisms that comply with decentralized information and conflicting interests. For this purpose, we present an automated negotiation protocol, which is inspired by metaheuristics, along with a set of optional building blocks. In the protocol, new solutions are iteratively generated, as mutations of a single provisional contract, and proposed to the agents, while feasible rules with quotas restrict the acceptance decisions of the agents. The computational experiments show that the protocol—without central information and subject to strategic behavior—can achieve high quality solutions which are very close to results from centralized multi-criteria procedures. Particular building block configurations yield improved outcomes. Concluding, the considered scheduling problem enhances standard scheduling models by incorporating multiple stakeholders, nonlinear cost functions, and machine operating cost, whereas the presented negotiation approach contributes to the methodology and practice of collaborative decision making. | Design of automated negotiation mechanisms for decentralized heterogeneous machine scheduling |
S0377221715005925 | Dynamic pricing of commodities without knowing the exact relation between price and demand is a much-studied problem. Most existing studies assume that the parameters describing the market are constant during the selling period. This severely reduces their practical applicability, since, in reality, market characteristics may change all the time, without the firm always being aware of it. In the present paper we study dynamic pricing and learning in a changing market environment. We introduce a methodology that enables the price manager to hedge against changes in the market, and provide explicit upper bounds on the regret - a measure of the performance of the firm’s pricing decisions. In addition, this methodology guides the selection of the optimal way to estimate the market process. We provide numerical examples from practically relevant situations to illustrate the methodology. | Tracking the market: Dynamic pricing and learning in a changing environment |
S0377221715005937 | Manufacturers in the western world need to exploit and perfect all their strengths to reduce the flight of manufacturing to global outsourcing destinations. Use of automated manufacturing systems (AMSs) is one such strength that needs to be improved to perfection. One area for improvement is the management of uncertainties on the production floor. This paper explores strategies for modifying detailed event list schedules following the occurrence of an interruption. Advanced planning and scheduling (APS) software packages provide a detailed advance plan of production events. However, the execution of this advance plan is disrupted by a myriad of unanticipated interruptions, such as machine breakdowns, yield variations, and hot jobs. The alternatives available to respond to such interruptions can be classified in four groups: regenerating the complete schedule using APS, switching to dispatching mode, modifying the existing schedule, and continuing to follow the schedule and letting the production system gradually absorb the impact of the interruption. Regeneration of the complete schedule using APS requires a large computation effort, may result in large changes in the schedule, and hence is not recommended. This paper reports on an experimental study for evaluating 10 strategies for responding to machine failures in AMSs that broadly fall in the latter three groups. The strategies are evaluated using simulation under an experimental design with manufacturing scenario, load level, severity and duration of interruptions as factors. The results are analyzed to understand the strengths and weaknesses of the considered strategies and develop recommendations. | Dispatching strategies for managing uncertainties in automated manufacturing systems |
S0377221715005949 | In this paper we study a continuous time stochastic inventory model for a commodity traded in the spot market and whose supply purchase is affected by price and demand uncertainty. A firm aims at meeting a random demand of the commodity at a random time by maximizing total expected profits. We model the firm’s optimal procurement problem as a singular stochastic control problem in which controls are nondecreasing processes and represent the cumulative investment made by the firm in the spot market (a so-called stochastic ‘monotone follower problem’). We assume a general exponential Lévy process for the commodity’s spot price, rather than the commonly used geometric Brownian motion, and general convex holding costs. We obtain necessary and sufficient first order conditions for optimality and we provide the optimal procurement policy in terms of a base inventory process; that is, a minimal time-dependent desirable inventory level that the firm’s manager must reach at any time. In particular, in the case of linear holding costs and exponentially distributed demand, we are also able to obtain the explicit analytic form of the optimal policy and a probabilistic representation of the optimal revenue. The paper is completed by some computer drawings of the optimal inventory when spot prices are given by a geometric Brownian motion and by an exponential jump-diffusion process. In the first case we also make a numerical comparison between the value function and the revenue associated to the classical static “newsvendor” strategy. | Optimal dynamic procurement policies for a storable commodity with Lévy prices and convex holding costs |
S0377221715005950 | Owing to the technological innovations and the changing consumer perceptions, remanufacturing has gained vast economic potential in the past decade. Nevertheless, major OEMs, in a variety of sectors, remain reluctant about establishing their own remanufacturing capability and use recycling as a means to satisfy the extended producer responsibility. Their main concerns seem to be the potential for the cannibalization of their primary market by remanufactured products and the uncertainty in the return stream in terms of its volume and quality. This paper aims at assisting OEMs in the development of their remanufacturing strategy, with an outlook of pursuing the opportunities presented by the inherent uncertainties. We present a two-stage stochastic closed-loop supply chain design model that incorporates the uncertainties in the market size, the return volume as well as the quality of the returns. The proposed framework also explicitly represents the difference in customer valuations of the new and the remanufactured products. The arising stochastic mixed-integer quadratic program is not amenable to solution via commercial software. Therefore, we develop a solution procedure by integrating sample average approximation with the integer L-shaped method. In order to gather solid managerial insights, we present a case study based on BSH, a leading producer of home appliances headquartered in Germany. Our analysis reveals that, while the reverse network configuration is rather robust, the extent of the firm’s involvement in remanufacturing is quite sensitive to the costs associated with each product recovery option as well as the relative valuation of the remanufactured products by the customers. In the context of the BSH case, we find that among the sources of uncertainty, the market size has the most profound effect on the overall profitability, and it is desirable to build sufficient expansion flexibility in the forward network configuration. | Supply chain design for unlocking the value of remanufacturing under uncertainty |
S0377221715005962 | Linguistic decision making systems represent situations that cannot be assessed with numerical information but can be described with linguistic variables. This paper introduces new linguistic aggregation operators in order to develop more efficient decision making systems. The linguistic probabilistic weighted average (LPWA) is presented. Its main advantage is that it considers subjective and objective information in the same formulation while accounting for the degree of importance that each concept has in the aggregation. A key feature of the LPWA operator is that it includes a wide range of linguistic aggregation operators, including the linguistic weighted average, the linguistic probabilistic aggregation and the linguistic average. Further generalizations are presented by using quasi-arithmetic means and moving averages. An application in linguistic multi-criteria group decision making under subjective and objective risk is also presented in the context of the European Union law. | Subjective and objective information in linguistic multi-criteria group decision making |
S0377221715005974 | We investigate the inverse convex ordered 1-median problem on unweighted trees under the cost functions related to the Chebyshev norm and the Hamming distance. By the special structure of the problem under Chebyshev norm, we deduce the so-called maximum modification to modify the edge lengths of the tree. Additionally, the cost function of the problem receives only finite values under the bottleneck Hamming distance. Therefore, we can find the optimal cost of the problem by applying binary search. It is shown that both of the problems, under Chebyshev norm and under the bottleneck Hamming distance, can be solved in O(n² log n) time in all situations, with or without essential topology changes. Here, n is the number of vertices of the tree. Finally, we prove that the problem under weighted sum Hamming distance is NP-hard. | The inverse convex ordered 1-median problem on trees under Chebyshev norm and Hamming distance |
S0377221715005986 | Through an additive efficiency decomposition (AED) approach in data envelopment analysis (DEA), we evaluate the management and investment efficiencies of Investment Trust Corporations (ITCs) in Taiwan for the period 2007–2011. Furthermore, the AED-DEA and a network-based ranking approach in DEA are jointly used to rank and identify ITCs that can be treated as benchmarks. Frontier projections for ITCs are also highlighted. In general, our results demonstrate that the sample ITCs have higher investment efficiency than the management efficiency. Foreign ITCs appear to be the most efficient ones as compared to local ITCs and financial-holding ITCs. Finally, we construct a competitive map to help mutual fund managers improve their operating performance, resource allocation, and investment strategy formulation. | Exploring the benchmarks of the Taiwanese investment trust corporations: Management and investment efficiency perspectives |
S0377221715005998 | On the basis of an extensive interdisciplinary literature review, proactive decision-making (PDM) is conceptualised as a multidimensional concept. We conduct five studies with over 4000 participants from various countries to develop and validate a theoretically consistent and psychometrically sound scale of PDM. The PDM concept is developed and appropriate items are derived from the literature. Six dimensions are conceptualised: the four proactive cognitive skills ‘systematic identification of objectives’, ‘systematic search for information’, ‘systematic identification of alternatives’, and ‘using a decision radar’, and the two proactive personality traits ‘showing initiative’ and ‘striving for improvement’. Using principal component factor analysis and subsequent item analysis as well as confirmatory factor analysis, six conceptually distinct dimensional factors are identified and found to be acceptably reliable and valid. Our results are remarkably similar for individuals who are decision-makers, decision analysts, both, or neither, across different levels of experience. There is strong evidence that individuals with high scores in a PDM factor, e.g. proactive cognitive skills or personality traits, show significantly higher decision satisfaction. Thus, the PDM scale can be used in future research to analyse other concepts. Furthermore, the scale can be applied, e.g. by staff teams to work on OR problems effectively or to inform a decision analyst about the decision behaviour in an organisation. | Developing and validating the multidimensional proactive decision-making scale |
S0377221715006001 | In this paper a definition of industry inefficiency in cost constrained production environments is introduced. This definition uses the indirect directional distance function and quantifies the inefficiency of the industry in terms of the overall output loss, given the industry cost budget. The industry inefficiency indicator is then decomposed into source components: reallocation inefficiency arising from sub-optimal configuration of the industry; firm inefficiency arising from a failure to select optimal input quantities (given the prevailing input prices); firm inefficiency due to lack of best practices. The method is illustrated using data on Ontario electricity distributors. These data show that lack of best practices is only a minor component of the overall inefficiency of the industry (less than 10 percent), with reallocation inefficiency accounting for more than 75 percent of the overall inefficiency of the system. An analysis based on counter-factual input prices is conducted in order to illustrate how the model can be used to estimate the effects of a change in the regulation regime. | Cost constrained industry inefficiency |
S0377221715006013 | In recent advances in solving the problem of transmission network expansion planning, the use of robust optimization techniques has been put forward, as an alternative to stochastic mathematical programming methods, to make the problem tractable in realistic systems. Different sources of uncertainty have been considered, mainly related to the capacity and availability of generation facilities and demand, and making use of adaptive robust optimization models. The mathematical formulations for these models give rise to three-level mixed-integer optimization problems, which are solved using different strategies. Although it is true that these robust methods are more efficient than their stochastic counterparts, it is also correct that solution times for mixed-integer linear programming problems increase exponentially with respect to the size of the problem. Because of this, practitioners and system operators need to use computationally efficient methods when solving this type of problem. In this paper the issue of improving computational performance by taking different features from existing algorithms is addressed. In particular, we replace the lower-level problem with a dual one, and solve the resulting bi-level problem using a primal cutting plane algorithm within a decomposition scheme. By using this alternative and simple approach, the computing time for solving transmission expansion planning problems has been reduced drastically. Numerical results on an illustrative example and on the IEEE 24-bus and IEEE 118-bus test systems demonstrate that the algorithm is superior in terms of computational performance with respect to the existing methods. | Robust transmission network expansion planning in energy systems: Improving computational performance |
S0377221715006219 | This paper considers a single machine scheduling problem in which each job to be scheduled belongs to a family and setups are required between jobs belonging to different families. Each job requires a certain amount of resource that is supplied through upstream processes. Therefore, schedules must be generated in such a way that the total resource demand does not exceed the resource supply up to any point in time. The goal is to find a schedule minimising total tardiness with respect to the given due dates of the jobs. A mathematical formulation and a heuristic solution approach for two variants of the problem are presented. Computational experiments show that the proposed heuristic outperforms a state-of-the-art commercial mixed integer programming solver both in terms of solution quality and computation time. | Minimising total tardiness for a single machine scheduling problem with family setups and resource constraints |
S0377221715006220 | Pairwise comparison is an important tool in multi-attribute decision making. Pairwise comparison matrices (PCM) have been applied for ranking criteria and for scoring alternatives according to a given criterion. Our paper presents a special application of incomplete PCMs: ranking of professional tennis players based on their results against each other. The selected 25 players have been at the top of the ATP rankings for a shorter or longer period in the last 40 years. Some of them have never met on the court. One of the aims of the paper is to provide a ranking of the selected players; however, the analysis of incomplete pairwise comparison matrices is also in focus. The eigenvector method and the logarithmic least squares method were used to calculate weights from incomplete PCMs. In our results the top three players of four decades were Nadal, Federer and Sampras. Some questions have been raised on the properties of incomplete PCMs and remain open for further investigation. | An application of incomplete pairwise comparison matrices for ranking top tennis players |
S0377221715006232 | An international audio equipment manufacturer would like to help its customers reduce unit shipping costs by adjusting order quantity according to product preference. We introduce the problem faced by the manufacturer as the Multiple Container Loading Problem with Preference (MCLPP) and propose a combinatorial formulation for the MCLPP. We develop a two-phase algorithm to solve the problem. In phase one, we estimate the most promising region of the solution space based on performance statistics of the sub-problem solver. In phase two, we find a feasible solution in the promising region by solving a series of 3D orthogonal packing problems. A unique feature of our approach is that we try to estimate the average capability of the sub-routine algorithm for the single container loading problem in phase one and take it into account in the overall planning. To obtain a useful estimate, we randomly generate a large set of single container loading problem instances that are statistically similar to the manufacturer’s historical order data. We generate a large set of test instances based on the historical data provided by the manufacturer and conduct extensive computational experiments to demonstrate the effectiveness of our approach. | The multiple container loading problem with preference |
S0377221715006244 | Two standard approaches to predicting the expected values of simulation outputs are either execution of the simulation itself or the use of a metamodel. In this work we propose a methodology that enables both approaches to be combined. When a prediction for a new input is required, the procedure is to augment the metamodel forecast with additional simulation outputs for the given input. The key benefit of the method is that it is possible to reach the desired prediction accuracy at a new input faster than in the case when no initial metamodel is present. We show that such a procedure is computationally simple and can be applied to, for instance, web-based simulations, where response time to user actions is often crucial. In this analysis we focus on stochastic kriging metamodels. We show that if this type of metamodel is used and we assume that its metaparameters are fixed, then updating such a metamodel with new observations is equivalent to a Bayesian forecast combination under the known variance assumption. Additionally we observe that using metamodel predictions of variance instead of point estimates for the estimation of stochastic kriging metamodels can lead to improved metamodel performance. | A method for the updating of stochastic kriging metamodels |
S0377221715006256 | Recently, multi-objective particle swarm optimization (MOPSO) has shown effectiveness in solving multi-objective optimization problems (MOPs). However, most MOPSO algorithms only adopt a single search strategy to update the velocity of each particle, which may cause some difficulties when tackling complex MOPs. This paper proposes a novel MOPSO algorithm using multiple search strategies (MMOPSO), where a decomposition approach is exploited for transforming MOPs into a set of aggregation problems and each particle is then assigned to optimize one aggregation problem. Two search strategies are designed to update the velocity of each particle, which are beneficial for accelerating convergence and maintaining population diversity, respectively. After that, all the non-dominated solutions visited by the particles are preserved in an external archive, where an evolutionary search strategy is further performed to exchange useful information among them. These multiple search strategies enable MMOPSO to handle various kinds of MOPs very well. When compared with some MOPSO algorithms and two state-of-the-art evolutionary algorithms, simulation results show that MMOPSO performs better on most of the test problems. | A novel multi-objective particle swarm optimization with multiple search strategies |
S0377221715006268 | In this paper we address the computation of indifference regions in the weight space for multiobjective integer and mixed-integer linear programming problems and the graphical exploration of this type of information for three-objective problems. We present a procedure to compute a subset of the indifference region associated with a supported nondominated solution obtained by the weighted-sum scalarization. Based on the properties of these regions and their graphical representation for problems with up to three objective functions, we propose an algorithm to compute all extreme supported nondominated solutions adjacent to a given solution and another one to compute all extreme supported nondominated solutions to a three-objective problem. The latter is suitable to characterize solutions in delimited nondominated areas or to be used as a final exploration phase. A computer implementation is also presented. | Graphical exploration of the weight space in three-objective mixed integer linear programs |
S0377221715006281 | Warehousing has been traditionally viewed as a non-value-adding activity but in recent years a number of new developments have meant that supply chain logistics have become critical to profitability. This paper focuses specifically on order-picking which is a key factor affecting warehouse performance. Order picking is the operation of retrieving goods from specified storage locations based on customer orders. Today’s warehouses face challenges for greater responsiveness to customer orders that require more flexibility than conventional strategies can offer. Hence, dynamic order-picking strategies that allow for changes of pick-lists during a pick cycle have attracted attention recently. In this paper we introduce an interventionist routing algorithm for optimising the dynamic order-picking routes. The algorithm is tested using a set of simulations based on an industrial case example. The results indicate that under a range of conditions, the proposed interventionist routing algorithm can outperform both static and heuristic dynamic order-picking routing algorithms. | An algorithm for dynamic order-picking in warehouse operations |
S0377221715006293 | We consider the characterization of optimal pricing strategies for a pediatric vaccine manufacturing firm operating in an oligopolistic market. The pediatric vaccine pricing problem (PVPP) is formulated as a bilevel mathematical program wherein the upper level models a firm that selects profit-maximizing vaccine prices while the lower level models a representative customer’s vaccine purchasing decision to satisfy a given, recommended childhood immunization schedule (RCIS) at overall minimum cost. Complicating features of the bilevel program include the bilinear nature of the upper-level objective function and the binary nature of the lower-level decision variables. We develop and test variants of three heuristics to identify the pricing scheme that will maximize a manufacturer’s profit: a Latin Hypercube Sampling (LHS) of the upper-level feasible region, an LHS enhanced by a Nelder–Mead search from each price point, and an LHS enhanced by a custom implementation of the Cyclic Coordinate Method from each price point. The practicality of the PVPP is demonstrated via application to the analysis of the 2014 United States pediatric vaccine private sector market. Testing results indicate that a robust sampling method combined with local search is the superlative solution method among those examined and, in the current market, that a manufacturer acting unilaterally has the potential to increase profit per child completing the RCIS by 35 percent (from 231.84 to 312.55 dollars) for GlaxoSmithKline, 47 percent (from 63.96 to 93.70 dollars) for Merck, and 866 percent (from 25.99 to 251.04 dollars) for Sanofi Pasteur over that obtained via current pricing mechanisms. | A bilevel formulation of the pediatric vaccine pricing problem |
S0377221715006311 | Clinical trials have traditionally followed a fixed design, in which randomization probabilities of patients to various treatments remains fixed throughout the trial and specified in the protocol. The primary goal of this static design is to learn about the efficacy of treatments. Response-adaptive designs, on the other hand, allow clinicians to use the learning about treatment effectiveness to dynamically adjust randomization probabilities of patients to various treatments as the trial progresses. An ideal adaptive design is one where patients are treated as effectively as possible without sacrificing the potential learning or compromising the integrity of the trial. We propose such a design, termed Jointly Adaptive, that uses forward-looking algorithms to fully exploit learning from multiple patients simultaneously. Compared to the best existing implementable adaptive design that employs a multiarmed bandit framework in a setting where multiple patients arrive sequentially, we show that our proposed design improves health outcomes of patients in the trial by up to 8.6 percent, in expectation, under a set of considered scenarios. Further, we demonstrate our design’s effectiveness using data from a recently conducted stent trial. This paper also adds to the general understanding of such models by showing the value and nature of improvements over heuristic solutions for problems with short delays in observing patient outcomes. We do this by showing the relative performance of these schemes for maximum expected patient health and maximum expected learning objectives, and by demonstrating the value of a restricted-optimal-policy approximation in a practical example. | Response-adaptive designs for clinical trials: Simultaneous learning from multiple patients |
S0377221715006323 | Group model building (GMB) is a participatory approach to using system dynamics in group decision-making and problem structuring. This paper considers the published quantitative evidence base for GMB since the earlier literature review by Rouwette et al. (2002), to consider the level of understanding on three basic questions: what does it achieve, when should it be applied, and how should it be applied or improved? There have now been at least 45 such studies since 1987, utilising controlled experiments, field experiments, pretest/posttest, and observational research designs. There is evidence of GMB achieving a range of outcomes, particularly with regard to the behaviour of participants and their learning through the process. There is some evidence that GMB is more effective at supporting communication and consensus than traditional facilitation, however GMB has not been compared to other problem structuring methods. GMB has been successfully applied in a range of contexts, but there is little evidence on which to select between different GMB tools, or to understand when certain tools may be more appropriate. There is improving evidence on how GMB works, but this has not yet been translated into changing practice. Overall the evidence base for GMB has continued to improve, supporting its use for improving communication and agreement between participants in group decision processes. This paper argues that future research in group model building would benefit from three main shifts: from single cases to multiple cases; from controlled settings to applied settings; and by augmenting survey results with more objective measures. | Recent evidence on the effectiveness of group model building |
S0377221715006335 | In this paper, we focus on characterizing and solving multiple objective programming problems that have some imprecision of a vague nature in their formulation. Rough Set Theory is used only in modeling the vague data in such problems, and our contribution to the data mining process is confined to the “post-processing stage”. These new problems are called rough multiple objective programming (RMOP) problems and are classified into three classes according to the place of the roughness in the problem. Also, new concepts and theorems are introduced along the lines of their crisp counterparts; e.g. rough complete solution, rough efficient set, rough weak efficient set, rough Pareto front, weighted sum problem, etc. To keep this paper concise, only the 1st class, where the decision set is a rough set and all the objectives are crisp functions, is investigated and discussed in detail. Furthermore, a flowchart for solving the 1st-class RMOP problems is presented. | Rough multiple objective programming |
S0377221715006347 | We examine vendor-managed inventory (VMI) systems with stockout-cost sharing between a supplier and a customer using an EOQ model with shortages allowed under limited storage capacity, in which a stockout penalty is charged to the supplier when stockouts occur at the customer. In the VMI systems the customer and the supplier minimize their own costs in designing a VMI contract and making replenishment decisions, respectively. We compare the VMI systems with an integrated supplier–customer system where the supply chain total cost is minimized. We show that VMI with stockout-cost sharing and the integrated supplier–customer system result in the same replenishment decisions and system performance if and only if the supplier's reservation cost is equal to the minimum supply chain total cost of the integrated system. On the other hand, we also show how VMI along with fixed transfer payments as well as stockout-cost sharing can lead to the supply chain coordination regardless of the supplier's reservation cost. We also provide several interesting computational results. | Supply chain coordination in vendor-managed inventory systems with stockout-cost sharing under limited storage capacity |
S0377221715006359 | This study concerns a method of selecting the best subset of explanatory variables in a multiple linear regression model. Goodness-of-fit measures, for example, adjusted R², AIC, and BIC, are generally used to evaluate a subset regression model. Although variable selection with regard to these measures is usually performed with a stepwise regression method, it does not always provide the best subset of explanatory variables. In this paper, we propose mixed integer second-order cone programming formulations for selecting the best subset of variables with respect to adjusted R², AIC, and BIC. Computational experiments show that, in terms of these measures, the proposed formulations yield better solutions than those provided by common stepwise regression methods. | Mixed integer second-order cone programming formulations for variable selection in linear regression |
S0377221715006372 | This paper investigates the optimal product distribution strategy for a manufacturer that uses dual-channel supply chains. We assume that two symmetric manufacturers facing price competition distribute products through (1) a retail channel only, (2) a direct channel only, or (3) both retail and direct channels. Our most notable result is that even though the two manufacturers are symmetric, a subgame perfect equilibrium always arises, including an asymmetric distribution policy, where one manufacturer distributes products only through the direct channel, while the other manufacturer distributes through both the direct channel and the retail channel. A practical implication of this result is that a symmetric distribution policy is not necessarily optimal for a manufacturer encountering price competition. In particular, when another competing manufacturer distributes products through its dual channels, a manufacturer should not similarly adopt a dual-channel distribution strategy just to counter the rival's dual-channel strategy. Such a symmetric dual-channel distribution strategy would trigger the most intense inter-brand competition, eroding not only the rival's profit, but also its own profit. | Asymmetric product distribution between symmetric manufacturers using dual-channel supply chains |
S0377221715006384 | The time that a part may spend in a buffer between successive operations is limited in some manufacturing processes. Parts that wait too long must be reworked or discarded due to the risk of quality degradation. In this paper, we present an analytic formulation for the steady-state probability distribution of the time a part spends in a two-machine, one-buffer transfer line (the part sojourn time). To do so, we develop a set of recurrence equations for the conditional probability of a part’s sojourn time, given the number of parts already in the buffer when it arrives and the state of the downstream machine. Then we compute the unconditional probabilities of the part sojourn time using the total probability theorem. Numerical results are provided to demonstrate how the shape of the distribution depends on machine reliability and the buffer size. The analytic formulation is also applied to approximately compute the part sojourn time distribution in a given buffer of a long line. Comparison with simulation shows good agreement. | Part sojourn time distribution in a two-machine line |
S0377221715006396 | The product line pricing problem is generally defined as a seller's task to determine the optimal prices for each product in a product line while accounting for both demand-side and supply-side restrictions. In this paper, we contribute to the existing literature by incorporating consumers’ budget considerations into the seller's optimisation problem. Despite playing an important role in many applications, such as the pricing of tickets for sporting or theatre seasons, budget constraints have not been considered in the academic literature in the context of product line pricing to date. By building on the assumptions and the model formulation recently proposed in a previous study by [Burkart, W. R. et al. (2012). Product line pricing for services with capacity constraints and dynamic substitution. European Journal of Operational Research, 219(2), 347–359] for standard product line pricing, we propose a number of new mixed-integer linear formulations assuming budget-constrained consumers. The formulations differ in terms of how consumers handle their individual budget limitations. To solve the underlying problems, we propose a customised branch-and-bound procedure that relies on various novel problem-specific bound arguments explicitly exploiting the consumers’ budget limitations. Experimental tests show that the branch-and-bound procedure clearly outperforms IBM ILOG CPLEX, which is unable to solve problems for even medium-sized instances. Furthermore, based on a number of scenarios, we derive managerial insights regarding, e.g., the overall impact of considering budget constraints on the seller's revenue as well as the impact of correctly anticipating the type of consumer purchase behaviour. | Optimal product line pricing in the presence of budget-constrained consumers
S0377221715006402 | In this paper, we present a dynamic optimal control model of process–product innovation with learning by doing, and extend the model of Chenavaz (2012) to an even more general model in which the firm's cost functions of product and process innovation depend on both the innovation investments and the knowledge accumulations of product and process innovation. Furthermore, in our paper, the product price and the investments in product and process innovation are decision variables, while the product quality, the production cost and the change rates of knowledge accumulations of product and process innovation are state variables. The main objective of this paper is to analyze the relationships between these variables, and to investigate the model's optimality conditions and characteristics. Further, we solve the model with some numerical examples, and a sensitivity analysis is conducted to study the effect of changing the parameters and coefficients on the objective function value. | Dynamic optimal control of process–product innovation with learning by doing
S0377221715006414 | The paper examines the role of the strategic planning process in excellence management systems (EMSs) and attempts to contribute evidence of how the efficient EMS works, by an analysis of the synergies and relationships between the critical factors of total quality management (TQM) and the organisation's results. In order to reach these objectives, the excellence model of the European Foundation for Quality Management (EFQM) was used as a framework. The methodology used was the Partial Least Squares (PLS) technique. The data were collected from a sample of 225 Spanish firms, candidates for excellence awards, which have been subjected to the complete self- and external-assessment process. The results showed that the actions and the commitment of the leaders and the people to quality (EFQM enablers social factors) must be made effective through the design and implementation of a schematic of the key processes, suitable resource management and the establishment of alliances with the main suppliers and partners. Another critical issue for the success of TQM is the need to achieve integration of the quality values, objectives and practices into the strategic planning process. Moreover, the results also show how the management of the EFQM enablers technical factors differs based on the degree of excellence with which the strategic planning process is employed in the organisations which form the sample. | The role of strategic planning in excellence management systems |
S0377221715006426 | A buffer sizing method based on comprehensive resource tightness is proposed in order to better reflect the relationships between activities and improve the accuracy of project buffer determination. Physical resource tightness is initially determined by setting a critical value of resource availability according to the law of diminishing marginal returns. The design structure matrix (DSM) is then adopted to analyze the information flow between activities and calculate the rework time resulting from the information interaction and the information resource tightness. Finally, the project buffer size is adjusted and determined by means of comprehensive resource tightness which consists of physical resource tightness and information resource tightness. The experimental results indicate that the proposed method considers the effect of comprehensive resource tightness on a project buffer, thus overcoming the deficiencies of traditional methods which consider only physical resource tightness and ignore information resource tightness. The size of the project buffer determined by the proposed method is more reasonable, signifying that it can optimize both project duration and cost. | Project buffer sizing of a critical chain based on comprehensive resource tightness
S0377221715006438 | The task of covert intelligence agents is to detect and interdict terror plots. Kaplan (2010) treats terror plots as customers and intelligence agents as servers in a queuing model. We extend Kaplan’s insight to a dynamic model that analyzes the inter-temporal trade-off between damage caused by terror attacks and prevention costs to address the question of how many agents to optimally assign to such counter-terror measures. We compare scenarios which differ with respect to the extent of the initial terror threat and study the qualitative robustness of the optimal solution. We show that in general, the optimal number of agents is not simply proportional to the number of undetected plots. We also show that while it is sensible to deploy many agents when terrorists are moderately efficient in their ability to mount attacks, relatively few agents should be deployed if terrorists are inefficient (giving agents many opportunities for detection), or if terrorists are highly efficient (in which case agents become relatively ineffective). Furthermore, we analyze the implications of a policy that constrains the number of successful terror attacks to never increase. We find that the inclusion of a constraint preventing one of the state variables from growing leads to a continuum of steady states, some of which are much more costly to society than the outcome of the more forward-looking optimal policy that temporarily allows the number of terror attacks to increase. | Optimal control of a terror queue
S0377221715006451 | Open pit mine design optimization under uncertainty is one of the most critical and challenging tasks in the mine planning process. This paper describes the implementation of a minimum cut network flow algorithm for the optimal production phase and ultimate pit limit design under commodity price or market uncertainty. A new smoothing splines algorithm with sequential Gaussian simulation generates multiple commodity price scenarios, and a computationally efficient stochastic framework accommodates the joint representation and processing of the mining block economic values that result from these commodity price scenarios. A case study at an existing iron mining operation demonstrates the performance of the proposed method, and a comparison with the conventional deterministic approach shows a higher cumulative metal production coupled with a 48% increase in the net present value (NPV) of the operation. | Production phase and ultimate pit limit design under commodity price uncertainty
S0377221715006463 | Credit scoring models are important tools in the credit granting process. These models measure the credit risk of a prospective client based on idiosyncratic variables and macroeconomic factors. However, small and medium sized enterprises (SMEs) are subject to the effects of the local economy. From a data set with the location and default information of 9 million Brazilian SMEs, provided by Serasa Experian (the largest Brazilian credit bureau), we propose a measure of the local risk of default based on the application of ordinary kriging. This variable has been included in logistic credit scoring models as an explanatory variable. These models have shown better performance when compared to models without this variable. A gain of around 7 percentage points in KS and Gini was observed. | Spatial dependence in credit risk and its improvement in credit scoring
S0377221715006475 | The growth of e-commerce in the past decade has opened the door to a new and exciting opportunity for retailers to better target different segments of the customer population. In this paper, we develop an analytical framework to study the impact of an “online-to-store” channel on the demand allocations and profitability of a retailer who sells products to customers through multiple distribution channels. This new channel can help the retailer tap new customer segments and generate additional demand, but may also hurt the retailer by cannibalizing existing channels and increasing operating costs. The analytical model allows us to evaluate these fundamental tradeoffs and provide useful managerial insights regarding the specific product and market characteristics that are most conducive for increasing profitability. Our analysis provides some simple conditions under which adding an online-to-store channel would lead to higher profits for products that are only available online. If the product is also available in-store, the analysis becomes more complex. In this case, we performed numerical experiments to generate insights on when the online-to-store channel should be used. Our results imply that the retailer needs to carefully select the set of products to be offered through the online-to-store channel. | Impact of an “online-to-store” channel on demand allocation, pricing and profitability
S0377221715006487 | One of the primary elements of a sustainable manufacturing initiative is that of energy efficiency. Line balancing can be used to design efficient manufacturing systems for paced assembly lines when the operation times are known, but may provide inefficient assignments with variable task times. Thus, we propose the use of unpaced synchronous lines as an alternative to paced lines when there is considerable variability in task times. While a great deal of research has been conducted on the line-balancing problem for paced synchronous production lines as well as for unpaced asynchronous lines, relatively little has focused on the unpaced synchronous configuration, despite its practical relevance. This research addresses this type of production line, with stochastic task completion times, by formulating an appropriate model and developing and evaluating a variety of solution methodologies utilizing extreme value theory as well as simulation. Computational results are presented to gain insight into the design and operation of unpaced synchronous systems. | Designing energy-efficient serial production lines: The unpaced synchronous line-balancing problem |
S0377221715006499 | Gift cards are replacing cash and holiday products as gifts for many consumers. We analyze the effect of gift card sales on a retailer’s optimal stocking level of holiday products and his expected profit within the newsvendor model framework. We derive the sufficient condition for the optimal stocking level of holiday products. We find that increased gift card sales decrease the retailer’s optimal stocking level. The effect of gift card sales on the expected profits depends on the treatment of unredeemed gift card balances by the state and the type of non-holiday products gift card redeemers buy with gift cards. When these balances remain with the retailer, even small non-redemptions of gift cards will cause gift card sales to increase most retailers’ expected profit. When balances are treated as abandoned property which must be turned over to the state after a specified period of time, gift card sales may increase or decrease the retailer’s expected profit. If gift card redeemers buy products with gift cards they would not have bought from the retailer in cash, then the optimal expected profit is likely to increase. Gift card profitability increases with demand variability and the post-holiday markdown required to sell holiday products to bargain hunters. The performance of gift cards also depends on consumers’ reservation prices for non-holiday products. If unredeemed gift card balances are collected by the state, then profitability of gift cards increases with increased consumer reservation prices for non-holiday products. | Effects of gift cards on optimal order and discount of seasonal products
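A hedged sketch of the newsvendor logic behind the finding that gift-card sales lower the optimal stocking level: if gift-card purchases divert part of the mean holiday demand to after the season, the critical-fractile stocking level falls. The numbers are invented, and the paper's treatment of unredeemed balances and redemption behaviour is not captured here.

```python
from scipy.stats import norm

def newsvendor_stock(mu, sigma, price, cost, salvage):
    """Critical-fractile stocking level for normally distributed demand."""
    cr = (price - cost) / (price - salvage)     # underage / (underage + overage)
    return mu + sigma * norm.ppf(cr)

# Gift-card sales divert part of the mean holiday demand (illustrative numbers).
for diverted in (0, 50, 100):
    q = newsvendor_stock(mu=500 - diverted, sigma=120, price=20, cost=12, salvage=5)
    print(f"mean demand diverted to gift cards: {diverted:3d} -> stocking level {q:6.1f}")
```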
S0377221715006505 | Because sustainable scheduling is attracting increasing amounts of attention from many manufacturing companies and energy is a central concern regarding sustainability, the purpose of this paper is to develop a research framework for “energy-efficient scheduling” (EES). EES approaches are scheduling approaches that have the objective of improving energy efficiency. Based on an iterative methodology, we review, analyze, and synthesize the current state of the literature and propose a completely new research framework to structure the research field. In doing so, the three dimensions “energetic coverage”, “energy supply”, and “energy demand” are introduced and used to classify the literature. Each of these dimensions contains categories and attributes to specify energy-related characteristics that are relevant for EES. We further provide an empirical analysis of the reviewed literature and emphasize the benefits that can be achieved by EES in practice. | Energy-efficient scheduling in manufacturing companies: A review and research framework |
S0377221715006517 | A serial manufacturing system generally consists of multiple and different dedicated processing stages that are aligned sequentially to produce a specific end product. In such a system, the intermediate and end product quality generally varies due to the setting of in-process variables at a specific stage and also due to interdependency between the stages. In addition, the output quality at each individual stage may be judged by multiple correlated end product characteristics (so-called ‘multiple responses’). Thus, achieving the optimal product quality, considering the setting conditions at multiple stages with multiple correlated responses at each individual stage, is a critical and difficult task for practitioners. The solution to such a problem necessitates building data-driven empirical response function(s) at each individual stage. These response function(s) may be nonlinear and multimodal in nature. Although extensive research has been reported for single-stage multiple response optimization (MRO) problems, there is little evidence of work addressing multistage MRO problems with more than two sequential stages. This paper attempts to develop an efficient and simplified solution approach for a typical serial multistage MRO problem. The proposed approach integrates a modified desirability function and an ant colony-based metaheuristic search strategy to determine the best process setting conditions in a serial multistage system. Usefulness of the approach is verified by using a real-life case on a serial multistage rolled aluminum sheet manufacturing process. | A multistage and multiple response optimization approach for serial manufacturing system
S0377221715006529 | Multi-criteria decision analysis (MCDA) is a valuable resource within operations research and management science. Various MCDA methods have been developed over the years and applied to decision problems in many different areas. The outranking approach, and in particular the family of ELECTRE methods, continues to be a popular research field within MCDA, despite its more than 40 years of existence. In this paper, a comprehensive literature review of English scholarly papers on ELECTRE and ELECTRE-based methods is performed. Our aim is to investigate how ELECTRE and ELECTRE-based methods have been considered in various areas. This includes areas of application, modifications to the methods, comparisons with other methods, and general studies of the ELECTRE methods. Although a significant amount of literature on ELECTRE is in a language different from English, we focus only on English articles, because many researchers may not be able to perform a study in some of the other languages. Each paper is categorized according to its main focus with respect to ELECTRE, i.e. if it considers an application, performs a review, considers ELECTRE with respect to the problem of selecting an MCDA method or considers some methodological aspects of ELECTRE. A total of 686 papers are included in the review. The group of papers considering an application of ELECTRE consists of 544 papers, and these are further categorized into 13 application areas and a number of sub-areas. In addition, all papers are classified according to the country of author affiliation, journal of publication, and year of publication. For the group of applied papers, the distributions by ELECTRE version vs. application area and ELECTRE version vs. year of publication are provided. We believe that this paper can be a valuable source of information for researchers and practitioners in the field of MCDA and ELECTRE in particular. | ELECTRE: A comprehensive literature review on methodologies and applications
S0377221715006530 | It has been around 30 years since the heterogeneous vehicle routing problem was introduced, and significant progress has since been made on this problem and its variants. The aim of this survey paper is to classify and review the literature on heterogeneous vehicle routing problems. The paper also presents a comparative analysis of the metaheuristic algorithms that have been proposed for these problems. | Thirty years of heterogeneous vehicle routing |
S0377221715006542 | Physical inventories constitute a significant proportion of companies' investments in today's competitive environment. The trade-off between customer service levels and inventory reserves is addressed in practice by statistical inventory software solutions; given the tremendous number of Stock Keeping Units (SKUs) that contemporary organisations deal with, such solutions are fully automated. However, empirical evidence suggests that managers habitually judgementally adjust the output of such solutions, such as replenishment orders or re-order levels. This research is concerned with the value being added, or not, when statistically derived inventory related decisions (Order-Up-To (OUT) levels in particular) are judgementally adjusted. We aim at developing our current understanding on the effects of incorporating human judgement into inventory decisions; to our knowledge such effects do not appear to have been studied empirically before and this is the first endeavour to do so. A number of research questions are examined and a simulation experiment is performed, using an extended database of approximately 1800 SKUs from the electronics industry, in order to evaluate human judgement effects. The linkage between adjustments and their justification is also evaluated; given the apparent lack of comprehensive empirical evidence in this area, including the field of demand forecasting, this is a contribution in its own right. Insights are offered to academics, to facilitate further research in this area, practitioners, to enable more constructive intervention into statistical inventory solutions, and software developers, to consider the interface with human decision makers. | The effects of integrating management judgement into OUT levels: In or out of context?
S0377221715006554 | The bullwhip effect refers to the phenomenon where order variability increases as the orders move upstream in the supply chain. This paper provides a review of the bullwhip literature which adopts empirical, experimental and analytical methodologies. Early econometric evidence of bullwhip is highlighted. Findings from empirical and experimental research are compared with analytical and simulation results. Assumptions and approximations for modelling the bullwhip effect in terms of demand, forecast, delay, replenishment policy, and coordination strategy are considered. We identify recent research trends and future research directions concerned with supply chain structure, product type, price, competition and sustainability. | The bullwhip effect: Progress, trends and directions |
S0377221715006566 | Psychological heuristics are formal models for making decisions that (i) rely on core psychological capacities (e.g., recognizing patterns or recalling information from memory), (ii) do not necessarily use all available information, and process the information they use by simple computations (e.g., ordinal comparisons or un-weighted sums), and (iii) are easy to understand, apply and explain. The contribution of this article is fourfold: First, the conceptual foundation of the psychological heuristics research program is provided, along with a discussion of its relationship to soft and hard OR. Second, empirical evidence and theoretical analyses are presented on the conditions under which psychological heuristics perform on par with or even better than more complex standard models in decision problems such as multi-attribute choice, classification, and forecasting, and in domains as varied as health, economics and management. Third, we demonstrate the application of the psychological heuristics approach to the problem of reducing civilian casualties in military stability operations. Finally, we discuss the role that psychological heuristics can play in OR theory and practice. | On the role of psychological heuristics in operational research; and a demonstration in military stability operations |
S0377221715006578 | This article is a critical review of methods integrating environmental aspects into productive efficiency. We describe the classic modelling approach relying on the weak disposability assumption, and explain the major recent developments around the inclusion of undesirable outputs in production technology modelling, namely the materials balance principles and the weak G-disposability, the by-production modelling and the cost disposability assumption, and the unified model under natural and managerial disposability concepts. We discuss the limits inherent in each methodology and suggest future research perspectives. | Modelling pollution-generating technologies in performance benchmarking: Recent developments, limits and future prospects in the nonparametric framework |
S0377221715006591 | Undiscounted Markov decision processes (UMDPs) can formulate optimal stochastic control problems that minimize the expected total cost per period for various systems. We propose new approximate dynamic programming (ADP) algorithms for large-scale UMDPs that can mitigate the curses of dimensionality. These algorithms, called simulation-based modified policy iteration (SBMPI) algorithms, are extensions of the simulation-based modified policy iteration method (SBMPIM) (Ohno, 2011) for optimal control problems of multistage JIT-based production and distribution systems with stochastic demand and production capacity. The main new concepts of the SBMPI algorithms are that the simulation-based policy evaluation step of the SBMPIM is replaced by the partial policy evaluation step of the modified policy iteration method (MPIM) and that the algorithms start from the expected total cost per period and relative value estimated by simulating the system under a reasonable initial policy. For numerical comparisons, the optimal control problem of the three-stage JIT-based production and distribution system with stochastic demand and production capacity is formulated as a UMDP. The demand distribution is changed from a shifted binomial distribution in Ohno (2011) to a Poisson distribution, and near-optimal policies of the optimal control problems with 35,973,840 states are computed by the SBMPI algorithms and the SBMPIM. The computational results show that the SBMPI algorithms are at least 100 times faster than the SBMPIM in solving the numerical problems and are robust with respect to initial policies. Numerical examples are solved to show the effectiveness of the near-optimal control utilizing the SBMPI algorithms compared with optimized pull systems whose optimal parameters are computed utilizing the SBOS (simulation-based optimal solutions) from Ohno (2011). | New approximate dynamic programming algorithms for large-scale undiscounted Markov decision processes and their application to optimize a production and distribution system
S0377221715006621 | Eco-innovation is recognized as a determinant of success or failure of environmental protection efforts in the long run. This paper attempts to examine China's eco-innovation gains in response to the energy saving and emissions reduction (ESER) policy enforced during 2006–2010. We first construct an integrated analysis framework to evaluate the changes in energy and environmental performance used as a proxy for eco-innovation, and then investigate the intertemporal change in China's eco-innovation gains as well as the regional differences. The results indicate that China accelerated its process of eco-innovation during 2006–2010, when a series of ESER policies were enforced. The developments and wide adoptions of advanced energy saving and environmentally friendly technologies serve as the primary driving forces, while upgrading management skills and organizational designs contribute relatively little. Furthermore, the paths through which eco-innovation is realized differ markedly across China's regions. | Analysis on China's eco-innovations: Regulation context, intertemporal change and regional differences
S0377221715006633 | We develop an exact solution framework for the Consistent Traveling Salesman Problem. This problem calls for identifying the minimum-cost set of routes that a single vehicle should follow during the multiple time periods of a planning horizon, in order to provide consistent service to a given set of customers. Each customer may require service in one or multiple time periods and the requirement for consistent service applies at each customer location that requires service in more than one time period. This requirement corresponds to restricting the difference between the earliest and latest vehicle arrival-times, across the multiple periods, to not exceed some given allowable limit. We present three mixed-integer linear programming formulations for this problem and introduce a new class of valid inequalities to strengthen these formulations. The new inequalities are used in conjunction with traditional traveling salesman inequalities in a branch-and-cut framework. We test our framework on a comprehensive set of benchmark instances, which we compiled by extending traveling salesman instances from the well-known TSPLIB library into multiple periods, and show that instances with up to 50 customers, requiring service over a 5-period horizon, can be solved to guaranteed optimality. Our computational experience suggests that enforcing arrival-time consistency in a multi-period setting can be achieved with merely a small increase in total routing costs. | A branch-and-cut framework for the consistent traveling salesman problem |
S0377221715006645 | We study a passenger–taxi problem in this paper. The objective is to maximize social welfare and optimize the allocation of taxi market resources. We analyze the strategic behavior of passengers who decide whether to join the system or balk in both the observable and unobservable cases. In the observable case, we obtain the optimal selfish threshold that maximizes their individual revenues and give the conditions for the existence of the optimal selfless threshold that maximizes the social welfare. In the unobservable case, we discuss the equilibrium strategies for the selfish passengers and derive the optimal arrival rate for the socially concerned passengers. Further, we analyze how the government can control the number of taxis by subsidizing taxis or levying a tax on taxis. | Optimization and strategic behavior in a passenger–taxi service system
S0377221715006657 | Stimulated by the growing interest in behavioural issues in the management sciences, research scholars have begun to address the implications of behavioural insights for Operational Research (OR). This current work reviews some foundational debates on the nature of OR to serve as a theoretical backdrop to orient a discussion on a behavioural perspective and OR. The paper addresses a specific research need by outlining that there is a distinct and complementary contribution of a behavioural perspective to OR. However, there is a need to build a theoretical base in which the insights from classical behavioural research is just one of a number of convergent building blocks that together point towards a compelling basis for behavioural OR. In particular, the focus of the paper is a framework that highlights the collective nature of OR practice and provides a distinct and interesting line of enquiry for future research. | Behavioural operational research: Towards a framework for understanding behaviour in OR interventions |
S0377221715006669 | Scenario planning is a method widely used by strategic planners to address uncertainty about the future. However, current methods either fail to address the future behaviour and impact of stakeholders or they treat the role of stakeholders informally. We present a practical decision-analysis-based methodology for analysing stakeholder objectives and likely behaviour within contested unfolding futures. We address issues of power, interest, and commitment to achieve desired outcomes across a broad stakeholder constituency. Drawing on frameworks for corporate social responsibility (CSR), we provide an illustrative example of our approach to analyse a complex contested issue that crosses geographic, organisational and cultural boundaries. Whilst strategies can be developed by individual organisations that consider the interests of others – for example in consideration of an organisation's CSR agenda – we show that our augmentation of scenario method provides a further, nuanced, analysis of the power and objectives of all concerned stakeholders across a variety of unfolding futures. The resulting modelling framework is intended to yield insights and hence more informed decision making by individual stakeholders or regulators. | A decision-analysis-based framework for analysing stakeholder behaviour in scenario planning |
S0377221715006670 | This paper introduces the fleet size and mix location-routing problem with time windows (FSMLRPTW) which extends the location-routing problem by considering a heterogeneous fleet and time windows. The main objective is to minimize the sum of vehicle fixed cost, depot cost and routing cost. We present mixed integer programming formulations, a family of valid inequalities and we develop a powerful hybrid evolutionary search algorithm (HESA) to solve the problem. The HESA successfully combines several metaheuristics and offers a number of new advanced efficient procedures tailored to handle heterogeneous fleet dimensioning and location decisions. We evaluate the strengths of the proposed formulations with respect to their ability to find optimal solutions. We also investigate the performance of the HESA. Extensive computational experiments on new benchmark instances have shown that the HESA is highly effective on the FSMLRPTW. | The fleet size and mix location-routing problem with time windows: Formulations and a heuristic algorithm |
S0377221715006682 | For mid-term demand forecasting, the accuracy, stability, and ease of use of the forecasting method are considered important user requirements. We propose a new forecasting method using linearization of the hazard rate formula of the Bass model. In the proposal, a reduced non-linear least squares method is used to determine the market potential estimate, after the estimates for the coefficient of innovation and the coefficient of imitation are obtained by using the ordinary least squares method with the new linearization of the Bass model. Validations of 29 real data sets and 36 simulation data sets show that the proposed method is accurate and stable. Considering the user requirements, our method could be suitable for mid-term forecasting based on the Bass model. It has high forecasting accuracy and superior stability, is easy to understand, and can be programmed using software such as MS Excel and Matlab. | Easy, reliable method for mid-term demand forecasting based on the Bass model: A hybrid approach of NLS and OLS
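The two-step logic (closed-form estimates of the innovation and imitation coefficients first, then a reduced least-squares step for the market potential) can be sketched as follows. This uses the classic Bass (1969) sales-equation linearization rather than the authors' hazard-rate linearization, so it only illustrates the spirit of the hybrid OLS/NLS approach, on synthetic data.

```python
import numpy as np

# Synthetic Bass adoptions: m = market potential, p = innovation, q = imitation.
m_true, p_true, q_true, T = 10000.0, 0.03, 0.4, 20
t = np.arange(1, T + 1)
F = (1 - np.exp(-(p_true + q_true) * t)) / (1 + (q_true / p_true) * np.exp(-(p_true + q_true) * t))
sales = m_true * np.diff(np.concatenate(([0.0], F)))       # adoptions per period
Y = np.concatenate(([0.0], np.cumsum(sales)[:-1]))         # cumulative adoptions before t

# Step 1 (OLS): classic linearization  S_t = a + b*Y_{t-1} + c*Y_{t-1}^2.
A = np.column_stack([np.ones_like(Y), Y, Y ** 2])
a, b, c = np.linalg.lstsq(A, sales, rcond=None)[0]
m0 = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * c)          # root of c*m^2 + b*m + a = 0
p_hat, q_hat = a / m0, -c * m0

# Step 2 (reduced NLS): with p and q fixed, the model is linear in m,
# so the least-squares market potential has a closed form.
Ft = (1 - np.exp(-(p_hat + q_hat) * t)) / (1 + (q_hat / p_hat) * np.exp(-(p_hat + q_hat) * t))
g = np.diff(np.concatenate(([0.0], Ft)))                   # per-period adoption fractions
m_hat = float(g @ sales / (g @ g))
print(f"estimates: p={p_hat:.4f}  q={q_hat:.4f}  m={m_hat:.1f}")
```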
S0377221715006694 | In the evaluation of network reliability, the objectives are to model reliability of networks appropriately and to compute it in a realistic time. We can consider various models of reliability of networks, and Bienstock (1991) first introduced a geographical failure model where each failure is represented by a 2-dimensional region. In this model, we consider the situation that geographical networks such as road networks may be damaged by externally caused disasters, and such disasters may destroy several links of the networks simultaneously, rather than each link independently. Recently, Neumayer–Efrat–Modiano (2012) investigated the max-flow problem and the min-cut problem under the Circular Disk Failure Model, in which the shape of each failure is restricted to be a disk. Under this model, Kobayashi–Otsuki (2014) gave polynomial time algorithms to find optimal solutions of these two problems. In this paper, we improve the algorithms and evaluate their performance by computational experiments. Although our improvements work only when the max-flow value is equal to the min-cut value, this condition holds in almost all practical cases. Owing to the improvements, we can find optimal solutions of the max-flow problem and the min-cut problem in large networks under the Circular Disk Failure Model in a realistic time. As a realistic instance, we analyze the reliability of a road network in New York consisting of 264,346 nodes. | Improved max-flow min-cut algorithms in a Circular Disk Failure Model with application to a road network
S0377221715006700 | We address the problem of determining the cross-training that a work team needs in order to cope with demand mix variation and absences. We consider the case in which all workers can be trained on all tasks, the workforce is a resource that determines the capacity and a complete forecasting of demand is not available. The demand mix variation that the organization wants to be able to cope with is fixed by establishing a maximum time to devote to each product. We contend that this approach is straightforward, has managerial practicality and can be applied to a broad range of practical scenarios. It is required that the demand mix variation be met, even if there is a certain level of absences. To numerically solve the mathematical problem, a constraint-based selection procedure is developed, which we term CODEMI. We provide illustrated examples demonstrating solution quality for the approximation, and we report on an illustrative set of computational cases. | Calibrating cross-training to meet demand mix variation and employee absence
S0377221715006712 | Research on the planning and control of combined make-to-order/make-to-stock or hybrid production systems often takes a typical MTO or MTS perspective. We examine the benefits of a hybrid planning approach without priority for either MTO or MTS. We develop a Markov Decision Process model for a two-product hybrid system to determine when to manufacture MTS and MTO products. Contrary to earlier studies with this approach, this study includes a positive lead time for MTO products. We characterize optimal policies and show how decisions should be based on both inventory level and backlog state of MTO products. Especially discriminating between states with and states without backlog of MTO orders is shown to be important in determining whether to increase MTS stock. Savings of up to 65 percent are achieved compared to policies that prioritize MTS or MTO. | Hybrid MTO-MTS production planning: An explorative study |
S0377221715006724 | Continuous positive airway pressure therapy (CPAP) is known to be the most efficacious treatment for obstructive sleep apnoea (OSA). Unfortunately, poor adherence behaviour in using CPAP reduces its effectiveness and thereby also limits beneficial outcomes. In this paper, we model the dynamics and patterns of patient adherence behaviour as a basis for designing effective and economical interventions. Specifically, we define patient CPAP usage behaviour as a state and develop Markov models for diverse patient cohorts in order to examine the stochastic dynamics of CPAP usage behaviours. We also examine the impact of behavioural intervention scenarios using a Markov decision process (MDP), and suggest a guideline for designing interventions to improve CPAP adherence behaviour. Behavioural intervention policy that addresses economic aspects of treatment is imperative for translation to clinical practice, particularly in resource-constrained environments that are clinically engaged in the chronic care of OSA. | Modelling adherence behaviour for the treatment of obstructive sleep apnoea |
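To make the state-based view concrete, here is a toy sketch: a three-state usage-behaviour Markov chain whose stationary distribution gives the long-run share of time spent in each adherence state, before and after a hypothetical intervention that shifts the transition probabilities of non-users. The states, probabilities and intervention are invented; the paper's cohort-specific models and MDP-based intervention analysis are far richer.

```python
import numpy as np

# Hypothetical 3-state CPAP usage behaviour: 0 = non-use, 1 = partial use, 2 = adherent.
# The transition probabilities below are made up for illustration only.
P = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.55, 0.25],
              [0.05, 0.15, 0.80]])

def stationary(P):
    """Solve pi = pi P with sum(pi) = 1 via a least-squares system."""
    A = np.vstack([P.T - np.eye(len(P)), np.ones(len(P))])
    b = np.concatenate([np.zeros(len(P)), [1.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

print("long-run share of time in each usage state:", np.round(stationary(P), 3))

# A crude 'intervention' that nudges non-users towards partial use.
P_int = P.copy()
P_int[0] = [0.55, 0.35, 0.10]
print("with intervention:", np.round(stationary(P_int), 3))
```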
S0377221715006736 | This paper addresses the problem of finding multiple near-optimal, spatially-dissimilar paths that can be considered as alternatives in the decision making process, for finding optimal corridors in which to construct a new road. We further consider combinations of techniques for reducing the costs associated with the computation and increasing the accuracy of the cost formulation. Numerical results for five algorithms to solve the dissimilar multipath problem show that a “bidirectional approach” yields the fastest running times and the most robust algorithm. Further modifications of the algorithms to reduce the running time were tested and it is shown that running time can be reduced by an average of 56 percent without compromising the quality of the results. | Multiple-path selection for new highway alignments using discrete algorithms |
S0377221715006748 | There is continuing interest in the trend of costs associated with pollution abatement activities. We specify an environmental production technology to model the joint production of good and bad outputs. The joint production model calculates pollution abatement costs and identifies changes in these costs associated with: (1) technical change, (2) input changes, and (3) changes in bad output production. The relative importance of each factor is estimated using data from 1995 to 2005 for a sample of coal-fired power plants in the United States. Finally, we discuss the potential usefulness of the decomposition model for identifying discrepancies between ex ante and ex post pollution abatement costs that are linked to the underlying joint production model. | Technical change and pollution abatement costs
S0377221715006761 | Cross-efficiency evaluation, as an extension tool of data envelopment analysis (DEA), has been widely applied in evaluating and ranking decision making units (DMUs). Unfortunately, the cross-efficiency scores generated may not be Pareto optimal, which has reduced the effectiveness of this method. To solve this problem, we propose a cross-efficiency evaluation approach based on Pareto improvement, which contains two models (Pareto optimality estimation model and cross-efficiency Pareto improvement model) and an algorithm. The Pareto optimality estimation model is used to estimate whether the given set of cross-efficiency scores are Pareto-optimal solutions. If these cross-efficiency scores are not Pareto optimal, the Pareto improvement model is then used to make cross-efficiency Pareto improvement for all the DMUs. In contrast to other cross-efficiency approaches, our approach always obtains a set of Pareto-optimal cross efficiencies under the predetermined weight selection principles for these DMUs. In addition, if the proposed algorithm terminates at its step 3, the evaluation results generated by our approach unify self-evaluation, peer-evaluation, and common-weight-evaluation in DEA cross-efficiency evaluation. Specifically, the self-evaluated efficiency and the peer-evaluated efficiency converge to the same common-weight-evaluated efficiency when the algorithm stops. This will make the evaluation results more likely to be accepted by all the DMUs. | DEA cross-efficiency evaluation based on Pareto improvement |
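For readers unfamiliar with cross-efficiency, the sketch below computes standard CCR self-efficiencies and a cross-efficiency matrix on invented data using scipy's LP solver. It deliberately stops where the paper starts: the optimal weights returned by each LP are generally not unique, which is precisely the non-uniqueness and Pareto-optimality issue the proposed models address.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs, 2 inputs (rows of X), 1 output (rows of Y).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_weights(k):
    """Input-oriented CCR multiplier model for DMU k (variables: u then v)."""
    c = np.concatenate([-Y[k], np.zeros(m)])                   # maximise u'y_k
    A_ub = np.hstack([Y, -X])                                  # u'y_j - v'x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)  # v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return res.x[:s], res.x[s:]

E = np.zeros((n, n))
for k in range(n):
    u, v = ccr_weights(k)
    E[k] = (Y @ u) / (X @ v)          # row k: all DMUs scored with DMU k's weights

print("self-evaluated efficiencies:", np.round(np.diag(E), 3))
print("average cross-efficiencies :", np.round(E.mean(axis=0), 3))
```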
S0377221715006773 | Globally, automakers are facing pressure from their stakeholders to follow sustainable business practices and produce products that are less harmful to the environment. The introduction of gas-guzzling automobiles in the US market despite the increasingly stringent emission norms highlights the widening gap between the goals of the regulators and the automakers, demanding a fresh look at the regulatory framework. In this paper, we therefore propose a composite regulatory standard that not only allows the regulators to control various environmental standards, but also provides automakers with an opportunity to exploit scale economies and synergies in product development. Our results show that under composite regulations, sufficiently high economies of scale will ensure higher traditional and environmental qualities as well as higher profits for the automaker while operating in two markets as opposed to a single market. We also find under the composite regulations that, when more demanding norms are in place, despite positive synergies between traditional and environmental quality attributes, higher environmental quality is not guaranteed unless the scale economies are sufficiently high. Our work has implications for regulatory authorities in evaluating alternative policy design under heterogeneous market characteristics and technological synergies. | Design for the environment: Impact of regulatory policies on product development
S0377221715006785 | This article argues that OR interventions, particularly problem structuring methods (PSM), are complex events that cannot be understood by conventional methods alone. In this paper an alternative approach is introduced, where the units of analysis are the activity systems constituted by and constitutive of PSM interventions. The paper outlines the main theoretical and methodological concerns that need to be appreciated in studying PSM interventions. The paper then explores activity theory as an approach to study them. A case study describing the use of this approach is provided. | Understanding behaviour in problem structuring methods interventions with activity theory |
S0377221715006797 | There is overwhelming evidence that performance ratings and evaluations are context dependent. A special case of such context effects is the decoy effect, which implies that the inclusion of a dominated alternative can influence the preference for non-dominated alternatives. Adapting the well-known experimental setting from the area of consumer behavior to the performance evaluation context of Data Envelopment Analysis (DEA), an experiment was conducted. The results show that adding a dominated decision making unit (DMU) to the set of DMUs augments the attractiveness of certain dominating DMUs and that DEA efficiency scores discriminating between efficient and inefficient DMUs serve as an appropriate debiasing procedure. The mention of the existence of slacks for distinguishing between strongly and weakly efficient DMUs also contributes to reducing the decoy effect, but it is also associated with other unexpected effects. | The decoy effect in relative performance evaluation and the debiasing role of DEA
S0377221715006803 | This research aims at tackling a real-world long-haul freight transportation problem where tractors are allowed to exchange semi-trailers through several transshipment points until a request reaches its destination. The unique characteristics of the considered logistics network allow for providing long-haul services by means of short-haul jobs, drastically reducing empty truck journeys. A greater flexibility is achieved with faster responses. Furthermore, the planning goals as well as the nature of the considered trips led to the definition of a new problem, the long-haul freight transportation problem with multiple transshipment locations. A novel mathematical formulation is developed to ensure resource synchronization while including realistic features, which are commonly found separately in the literature. Considering the complexity and dimension of this routing and scheduling problem, a mathematical programming heuristic (matheuristic) is developed with the objective of obtaining good quality solutions in a reasonable amount of time, considering the logistics business context. We provide a comparison between the results obtained for 79 real-world instances. The developed solution method is now the basis of a decision support system of a Portuguese logistics operator (LO). | A long-haul freight transportation problem: Synchronizing resources to deliver requests passing through multiple transshipment locations
S0377221715006815 | We study the impact of stochastic lead times with order crossover on inventory costs and safety stocks in the order-up-to (OUT) policy. To motivate our research we present global logistics data which violates the traditional assumption that lead time demand is normally distributed. We also observe that order crossover is a common and important phenomenon in real supply chains. We present a new method for determining the distribution of the number of open orders. Using this method we identify the distribution of inventory levels when orders and the work-in-process are correlated. This correlation is present when demand is auto-correlated, demand forecasts are generated with non-optimal methods, or when certain ordering policies are present. Our method allows us to obtain exact safety stock requirements for the so-called proportional order-up-to (POUT) policy, a popular, implementable, linear generalization of the OUT policy. We highlight that the OUT replenishment policy is not cost optimal in global supply chains, as we are able to demonstrate the POUT policy always outperforms it under order crossover. We show that unlike the constant lead-time case, minimum safety stocks and minimal inventory variance do not always lead to minimum costs under stochastic lead times with order crossover. We also highlight an interesting side effect of minimizing inventory costs under stochastic lead times with order crossover with the POUT policy—an often significant reduction in the order variance. | Inventory management for stochastic lead times with order crossovers
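A small simulation sketch of the setting (a linear replenishment rule with i.i.d. integer lead times, so orders can cross over in transit) shows how the proportional rule trades order variability against inventory variability. It is an illustration with invented parameters, not the paper's exact analysis, and which smoothing level wins on cost depends on the cost rates chosen.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(beta, T=20000, mu=100.0, sigma=10.0, lead_choices=(1, 2, 3, 4),
             h=2.0, b=10.0):
    """Linear OUT (beta = 1) / proportional OUT (0 < beta < 1) policy with
    i.i.d. lead times, so orders may cross over; negative orders are allowed
    to keep the policy linear, as in the usual variance analyses."""
    target = mu * (np.mean(lead_choices) + 1)          # crude base-stock target
    on_hand, pipeline = target, []                     # pipeline: (arrival period, qty)
    orders, inventory = [], []
    for t in range(T):
        on_hand += sum(q for (a, q) in pipeline if a == t)   # receipts, any sequence
        pipeline = [(a, q) for (a, q) in pipeline if a > t]
        on_hand -= rng.normal(mu, sigma)               # demand; backlog if negative
        ip = on_hand + sum(q for (_, q) in pipeline)   # inventory position
        o = mu + beta * (target - ip)                  # beta = 1 reproduces OUT
        pipeline.append((t + int(rng.choice(lead_choices)), o))
        orders.append(o)
        inventory.append(on_hand)
    orders, inv = np.array(orders[1000:]), np.array(inventory[1000:])
    cost = np.mean(h * np.maximum(inv, 0) + b * np.maximum(-inv, 0))
    return np.var(orders), np.var(inv), cost

for beta in (1.0, 0.6, 0.3):
    ov, iv, c = simulate(beta)
    print(f"beta={beta:.1f}  order var={ov:8.1f}  inventory var={iv:8.1f}  cost/period={c:7.1f}")
```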
S0377221715007031 | The conflict resolution problem in Air Traffic Management is tackled in this paper by using a mixed integer linear approximation to a Mixed Integer Nonlinear Optimization (MINO) model that we have presented elsewhere. The problem consists of providing a new aircraft configuration such that every conflict situation is avoided, a conflict being an event in which two or more aircraft violate the minimum safety distance that they must keep in flight. The initial information consists of the aircraft configuration at a certain time instant: position, velocity, heading angle and flight level. The proposed approach allows the aircraft to perform any of the three possible maneuvers: velocity, turn angle and flight level changes. The nonlinear model involves trigonometric functions which make it difficult to solve, in addition to the integer variables related to flight level changes, among other auxiliary variables. A multicriteria scheme based on Goal Programming is also presented. In order to provide a good solution in a short computing time, a Sequential Mixed Integer Linear Optimization (SMILO) approach is proposed. A comparison between the results obtained by using the state-of-the-art MINO solver Minotaur and SMILO is performed to assess the solution’s quality. Based on the computational results obtained on a broad testbed, SMILO provides solutions very close to those provided by Minotaur for practically all the instances. SMILO requires a very small computing time, which makes the approach very suitable for helping to solve real-life operational situations. | Multiobjective optimization for aircraft conflict resolution. A metaheuristic approach
S0377221715007043 | The literature on data envelopment analysis (DEA) often employs multiplier models that incorporate very small (theoretically infinitesimal) lower bounds on the input and output weights. Computational problems arising from the solution of such programs are well known. In this paper we identify an additional theoretical problem that may arise if such bounds are used in a multiplier model with weight restrictions. Namely, we show that the use of small lower bounds may lead to the identification of an efficient target with negative inputs. We suggest a corrected model that overcomes this problem. | On single-stage DEA models with weight restrictions |
S0377221715007055 | The dynamics of brand diffusion emerge partly from heterogeneous consumers’ interaction in social e-commerce, and this social interaction influences adoption decisions. Agent-based simulation is a methodology that is well suited for modeling collective diffusion dynamics. Using an optimal pricing mechanism and industry data, we introduce an agent-based model to replicate the evolution process of market share for multiple brands competing online. The proposed model helps understand the role of knowledge in the diffusion of competitive brands. It shows that when multiple brands face online competition, innovativeness, brand image, self-perceived utility and electronic word of mouth (e-WOM) all have a significant effect on online shoppers’ decisions and have a bearing on brands’ market performance. Consumers often derive the value (utility) of a brand based on price, quality, rating, etc. When consumers rely more on self-perceived utility, e-WOM has more positive effects on market share. Depending on whether a firm's competitive advantage is in innovation, price, web content, or use of social media, different online strategies should be employed for different brands to achieve market success. | Impacts of knowledge on online brand success: an agent-based model for online market share enhancement
S0377221715007067 | We consider in this paper a single-item lot sizing problem with a periodic carbon emission constraint. In each period, the carbon emission constraint defines an upper limit on the average emission per product. Different modes are available, each one is characterized by its own cost and carbon emission parameters. The problem consists in selecting the modes used in each period such that no carbon emission constraint is violated, and the cost of satisfying all the demands on a given time horizon is minimized. This problem has been introduced in Absi et al. (2013), and has been shown polynomially solvable when only unit carbon emissions are considered. In this paper, we extend the analysis for this constraint to the realistic case of a fixed carbon emission associated with each mode, in addition to its unit carbon emission. We establish that this generalization renders the problem NP-hard. Several dominant properties are presented, and two dynamic programming algorithms are proposed. We also establish that the problem can be solved in polynomial time for a fixed number of modes when carbon emission parameters are stationary. | The single-item green lot-sizing problem with fixed carbon emissions |
S0377221715007079 | We study a firm that makes new products in the first period and collects used products through trade-in, along with new product sale, in the second period. To conduct a convincing analysis, we initially evaluate the problem in a duopoly situation in which one firm (firm A) implements trade-in and the other one (firm B) does not. We subsequently introduce the competitive environment in a two-period planning horizon to identify thresholds that determine the trade-in operations, and then derive the equilibrium decisions of the resulting scenarios. We characterize the optimal production quantities that are associated with parameter b (the sum of used product salvage value and government subsidy) in the Nash equilibrium. Results indicate that adopting trade-in could bring competitive advantage for firm A in terms of market share and profit. If the new product sale is comparatively profitable, then the trade-in firm may forgo some of the collection margin by raising the trade-in rebate and selling additional units to increase new product sale in the second period. Moreover, the total collection quantity does not always increase with government subsidy. We consequently expand the model to the case where both firms compete in trade-in and derive the corresponding decision space of the duopoly firms. Finally, we explore the effect of adopting trade-in on consumer surplus and compare it in the two models. | The effect of implementing trade-in strategy on duopoly competition |
S0377221715007080 | When different supply chain parties have private information, some form of information sharing is required to improve supply chain performance. However, it might be difficult to ensure truthful information transfer when firms can benefit from distorting their private information. To investigate the impact of dishonest information transfer, we consider a single-supplier single-retailer supply chain that operates under a contract with a revenue sharing clause, providing the retailer incentive to underreport sales revenues. In practice, suppliers utilize audits based on statistical tools that, for example, compare the retailers’ sales reports and order quantities to limit, but not necessarily eliminate, cheating. We investigate the impact of such limited cheating on the different supply chain constituents. We show that when the retailer can exert sales effort, a supplier might benefit from the retailer's dishonesty. Our findings also suggest that if the retailer's negotiation power is high or if retailer effort is effective, the supplier should reduce the retailer's revenue share and absorb some of the demand risk to increase retailer participation. When facing a less powerful or less capable retailer, the supplier might be better off extracting profitability upfront through a higher wholesale price. | Don't ask, don't tell: Sharing revenues with a dishonest retailer |
S0377221715007092 | Using state-of-the-art frontier efficiency methodologies, we study the efficiency and productivity of Swiss insurance companies in the life, property/casualty, and reinsurance sectors from 1997 to 2013. In this context, we provide the first empirical analysis of internationalization strategies of insurance companies, a topic of high interest in the business and economics literature, but one that has to date not been the focus of efficiency studies in the insurance sector. We find that productivity and efficiency have improved with regard to property/casualty and reinsurance. In the case of life insurance, productivity and efficiency diminished; however, life insurance firms with higher levels of international business exhibit superior efficiency levels. We observe that diversification strategies directed to the European market are more beneficial compared to those targeting markets outside of Europe. | The determinants of efficiency and productivity in the Swiss insurance industry
S0377221715007109 | A preventive maintenance policy that considers information provided by observing the failure history of a repairable system is proposed. For a system that is to be operated for a long time, it is shown that the proposed policy will have a lower expected cost than a periodical one which does not take into account the failure history. Statistical inference using both maximum likelihood point estimates and bootstrap confidence intervals is discussed. The proposed policy is applied to a real situation involving maintenance of off-road engines owned by a Brazilian mining company. A simulation study compares the performance of the proposed maintenance policy with that of the periodical one. | Dynamics of an optimal maintenance policy for imperfect repair models
S0377221715007110 | We propose a novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems that must consider a large number of operating subproblems, each of which is a convex optimization problem. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen’s inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders’ algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen’s inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable; however, the decomposition phase is required to attain tight optimality gaps. Using both phases performs better, in terms of convergence speed, than attempting to solve the problem with just the bounding phase or regular Benders decomposition alone. | New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints
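The Jensen-type lower bound referred to in this abstract can be stated generically for a two-stage stochastic program whose recourse function Q(x, ξ) is convex in the random vector ξ. The statement below is textbook background only; the paper's extension to expected-value policy constraints is not reproduced here.

```latex
% Generic Jensen bound: replacing the random vector by its expectation
% in a convex recourse function yields a valid lower bound.
\min_{x \in X} \; c^{\top}x + \mathbb{E}_{\xi}\!\left[Q(x,\xi)\right]
\;\ge\;
\min_{x \in X} \; c^{\top}x + Q\!\left(x, \mathbb{E}[\xi]\right),
\qquad \text{provided } Q(x,\cdot) \text{ is convex in } \xi .
```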
S0377221715007134 | This paper provides a review of stochastic Data Envelopment Analysis (DEA). We discuss extensions of deterministic DEA in three directions: (i) deviations from the deterministic frontier are modeled as stochastic variables, (ii) random noise in terms of measurement errors, sample noise, and specification errors is made an integral part of the model, and (iii) the frontier is stochastic, as is the underlying Production Possibility Set (PPS). Stochastic DEA utilizes non-parametric convex or conical hull reference technologies based upon axioms from production theory, accompanied by a statistical foundation in terms of axioms from statistics or distributional assumptions. The approaches allow for an estimation of stochastic inefficiency compared to a deterministic or a stochastic PPS and for statistical inference while maintaining an axiomatic foundation. The focus is on bridges and differences between approaches within the field of stochastic DEA, including semi-parametric Stochastic Frontier Analysis (SFA) and Chance Constrained DEA (CCDEA). We argue that statistical inference based upon homogeneous bootstrapping, in contrast to a management science approach, imposes a restrictive structure on inefficiency, which may not facilitate the communication of the results of the analysis to decision makers. Semi-parametric SFA and CCDEA differ with respect to the modeling of noise and stochastic inefficiency. In spite of their inherent differences, the two approaches are shown to be complements in the sense that the stochastic PPSs obtained by the two approaches share basic similarities in the case of one output and multiple inputs. Recent contributions related to (i) disentangling random noise and random inefficiency and (ii) obtaining smooth shape-constrained estimators of the frontier are discussed. | Stochastic Data Envelopment Analysis—A review
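As background for readers unfamiliar with the deterministic starting point that the reviewed stochastic approaches extend, the standard input-oriented CCR envelopment model for a unit DMU_0 is shown below. The notation (x_j, y_j for the input and output vectors of DMU j) is generic and not taken from the paper.

```latex
% Input-oriented CCR envelopment model (constant returns to scale);
% CCDEA-type extensions replace these constraints by probabilistic ones.
\begin{aligned}
\min_{\theta,\,\lambda}\;\; & \theta \\
\text{s.t.}\;\; & \sum_{j=1}^{n} \lambda_j x_j \le \theta\, x_0, \\
& \sum_{j=1}^{n} \lambda_j y_j \ge y_0, \\
& \lambda_j \ge 0, \qquad j = 1,\dots,n .
\end{aligned}
```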
S0377221715007146 | Green supplier development focuses on helping organizations integrate activities to improve the natural environmental performance of their supply chains. These green-supplier-development programs require substantial resources and investments by a buyer company. Investigation into investment management in this context has only begun. This paper introduces a methodology to help manage investment in green-supplier-development and business-supplier-development practices. Managing these practices and their outcomes requires handling large sets of data. We propose a combination of rough set theoretic and fuzzy c-means (FCM) approaches, first to simplify, and then to sharpen the focus on, the complex environment of evaluating the investment decisions. The combined methodology, based on performance measures of supplier practices and agreed-upon investment objectives, identifies a set of guidelines that can help make decisions about sound investments in the supplier practices more effectively and judiciously. The steps involved in the methodology are illustrated with an example developed to highlight its salient steps and issues. We show how the results may be interpreted to obtain insights useful from both practical and research perspectives. Although the impetus for developing this methodology came from sustainability considerations, the methodology is general enough to be applicable in other areas where the management and evaluation of investments is based on large data sets. | Complex investment decisions using rough set and fuzzy c-means: An example of investment in green supply chains
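For orientation, the standard fuzzy c-means objective that the FCM step minimizes is reproduced below in generic form. The coupling with the rough set stage described in the abstract is not shown, and the symbols (N observations x_i, c cluster centers c_j, fuzzifier m > 1) are conventional rather than taken from the paper.

```latex
% Standard fuzzy c-means objective with membership constraints.
J_m(U, C) = \sum_{i=1}^{N} \sum_{j=1}^{c} u_{ij}^{\,m}\, \lVert x_i - c_j \rVert^2,
\qquad \sum_{j=1}^{c} u_{ij} = 1 \;\; \forall i, \qquad u_{ij} \in [0,1].
```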
S0377221715007158 | Whilst Data Envelopment Analysis (DEA) is the most commonly used non-parametric benchmarking approach, the interpretation and application of DEA results can be limited by the fact that radial improvement potentials are identified across variables. In contrast, Multi-directional Efficiency Analysis (MEA) facilitates analysis of the nature and structure of the estimated inefficiencies relative to variable-specific improvement potentials. This paper introduces a novel method for utilizing the additional information available in MEA. The distinguishing feature of our proposed method is that it enables analysis of differences in inefficiency patterns between subgroups. Identifying differences in terms of the variables in which the inefficiency is mainly located can provide management or regulators with important insights. The patterns within the inefficiencies are represented by so-called inefficiency contributions, which are defined as the relative contributions of specific variables to the overall levels of inefficiency. A statistical model for distinguishing the inefficiency contributions between subgroups is proposed, and the method is illustrated on a data set of Chinese banks. | Introducing and modeling inefficiency contributions
S0377221715007171 | This paper presents a fuzzy lung allocation system (FLAS) to determine which potential recipients should receive a lung for transplantation when one becomes available in the USA. The developed system deals with the vagueness and fuzziness of the decision making of medical experts in order to achieve accurate lung allocation in terms of transplant survival time and functional status after transplantation. The proposed approach is based on a real data set from the United Network for Organ Sharing (UNOS) and investigates how well the system mimics the experience of transplant physicians in the field of lung allocation. The results are very promising in terms of both prediction accuracy (with an R2 value of 83.2 percent and an overall accuracy of 82.1 percent) and interpretability, and hence are superior to existing techniques in the literature. Furthermore, the proposed decision process provides a more effective (i.e., accurate), time-efficient, and systematic decision support tool for this problem, with two criteria considered: graft survival time and functional status after transplantation. | FLAS: Fuzzy lung allocation system for US-based transplantations
S0377221715007183 | This paper presents a cross-country comparison of significant predictors of small business failure between Italy and the UK. Financial measures of profitability, leverage, coverage, liquidity, scale and non-financial information are explored, and some commonalities and differences are highlighted. Several models are considered, starting with logistic regression, which is a standard approach in credit risk modelling. Some important improvements are investigated. Generalised Extreme Value (GEV) regression is applied in contrast to logistic regression in order to produce more conservative estimates of the probability of default. The assumption of linearity is relaxed through the application of BGEVA, a non-parametric additive model based on the GEV link function. Two methods of handling missing values are compared: multiple imputation and Weights of Evidence (WoE) transformation. The results suggest that the best predictive performance is obtained by BGEVA, implying the necessity of taking into account the low volume of defaults and non-linear patterns when modelling SME performance. For the majority of models considered, WoE shows better prediction than multiple imputation, suggesting that missing values could be informative. | A comparative analysis of the UK and Italian small businesses using Generalised Extreme Value models
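The GEV link referred to in the abstract has the generic form below, in which the cumulative distribution function of the generalized extreme value distribution replaces the logistic link for the binary default indicator; τ is the shape parameter and η_i the linear predictor. This is the standard form of the link, not the fitted model from the paper.

```latex
% GEV link for a binary default indicator (tau -> 0 recovers the log-log link).
P(y_i = 1 \mid x_i) \;=\; \exp\!\left\{ -\bigl(1 + \tau\, \eta_i\bigr)_{+}^{-1/\tau} \right\},
\qquad \eta_i = x_i^{\top}\beta .
```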
S0377221715007195 | This paper presents a cost allocation scheme for a horizontal cooperation of traveling salesmen that is implemented a priori and provides expected costs for the coalition members. The cost allocation is determined using the core concept. To compute the value of the characteristic function over the whole planning horizon, the TSP with release dates is combined with simulation. The developed core computation algorithm, based on mathematical programming techniques, provides a core element or, in case of an empty core, a least-core element. To decrease the computational effort of the core computation, a row generation procedure is implemented. A computational study assesses the performance of the solution procedure. | Core-based cost allocation in the cooperative traveling salesman problem
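The core concept used for the allocation can be stated generically for a cost game (N, c): an allocation is stable if it exactly covers the grand-coalition cost and no subcoalition pays more than it would on its own. The definition below is standard; the paper's characteristic function itself is computed from the TSP with release dates and simulation.

```latex
% Core of a cost game (N, c): efficiency plus coalitional rationality.
x \in \mathrm{Core}(N,c) \;\Longleftrightarrow\;
\sum_{i \in N} x_i = c(N)
\quad\text{and}\quad
\sum_{i \in S} x_i \le c(S) \;\; \forall\, S \subseteq N .
```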
S0377221715007201 | Practical experience and scientific research show that there is scope for improving the performance of inventory control systems by delaying a replenishment order that is otherwise triggered by generalised and all too often inappropriate assumptions. This paper presents the first analysis of the most commonly used continuous (s, S) policies with delayed ordering for inventory systems with compound demand. We analyse policies with a constant delay for all orders as well as more flexible policies where the delay depends on the order size. For both classes of policies and general demand processes, we derive optimality conditions for the corresponding delays. In a numerical study with Erlang distributed customer inter-arrival times, we compare the cost performance of the optimal policies with no delay, a constant delay and flexible delays. Sensitivity results provide insights into when the benefit of delaying orders is most pronounced, and when applying flexible delays is essential. | On the benefits of delayed ordering |
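To make the idea of delayed ordering concrete, the sketch below simulates a continuous-review (s, S) policy in which the replenishment order triggered when the inventory position drops to s or below is placed only after a constant delay. All modelling choices here (Erlang inter-arrival times, geometric batch sizes, the cost parameters, and the single constant-delay rule) are illustrative assumptions and are not the optimized policies or parameter values of the paper.

```python
import heapq
import numpy as np

def simulate_sS_with_delay(s, S, delay, horizon=50_000.0, seed=0,
                           erlang_k=2, arrival_rate=1.0, batch_p=0.4,
                           lead_time=2.0, h=1.0, b=9.0, K=64.0):
    """Average cost per unit time of an (s, S) policy whose orders are
    placed only `delay` time units after being triggered (illustrative)."""
    rng = np.random.default_rng(seed)
    level = position = S            # net on-hand level and inventory position
    deliveries = []                 # min-heap of (arrival_time, quantity)
    order_due = None                # time at which a triggered order is placed
    t = last_t = 0.0
    cost = 0.0

    def accrue(now, lvl):
        # holding / backorder cost accumulated since the previous event
        nonlocal cost, last_t
        cost += (h * max(lvl, 0.0) + b * max(-lvl, 0.0)) * (now - last_t)
        last_t = now

    while t < horizon:
        t += rng.gamma(erlang_k, 1.0 / (erlang_k * arrival_rate))  # next customer
        # place a previously triggered (and delayed) order if its time has come
        if order_due is not None and order_due <= t:
            qty = S - position
            position += qty
            cost += K
            heapq.heappush(deliveries, (order_due + lead_time, qty))
            order_due = None
        # receive every delivery arriving before this customer
        while deliveries and deliveries[0][0] <= t:
            due, qty = heapq.heappop(deliveries)
            accrue(due, level)
            level += qty
        accrue(t, level)
        demand = int(rng.geometric(batch_p))      # compound (batch) demand
        level -= demand
        position -= demand
        if position <= s and order_due is None:
            order_due = t + delay                 # delayed replenishment trigger
    return cost / t

if __name__ == "__main__":
    # Compare no delay with two constant delays (numbers are illustrative only).
    for d in (0.0, 0.5, 1.0):
        print(f"delay={d}: avg cost ~ {simulate_sS_with_delay(5, 25, d):.2f}")
```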
S0377221715007213 | We propose a non-parametric, three-stage strategy for efficiency estimation in which the Richardson–Lucy blind deconvolution algorithm is used to identify firm-specific inefficiencies from the residuals corrected for the expected inefficiency μ. The performance of the proposed algorithm is evaluated against the method of moments under 16 scenarios assuming μ = 0. The results show that the Richardson–Lucy blind deconvolution method does not generate null or zero values due to wrong skewness or low kurtosis of the inefficiency distribution, that it is insensitive to the distributional assumptions, and that it is robust to data noise levels and heteroscedasticity. We apply the Richardson–Lucy blind deconvolution method to Finnish electricity distribution network data sets, providing estimates for efficiencies that are otherwise inestimable using the method of moments and correcting the ranks of firms with similar efficiency scores. | Non-parametric efficiency estimation using Richardson–Lucy blind deconvolution
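For reference, the classical Richardson–Lucy iteration for recovering a signal u from an observation d blurred by a kernel p is reproduced below; the blind variant alternates this multiplicative update between the signal and the kernel. This is generic background on the algorithm, not the paper's three-stage estimator of the inefficiency distribution.

```latex
% Richardson--Lucy multiplicative update (\ast: convolution, \odot: elementwise
% product, \hat p: mirrored kernel); blind RL alternates updates of u and p.
u^{(t+1)} \;=\; u^{(t)} \odot \left( \left[ \frac{d}{\,u^{(t)} \ast p\,} \right] \ast \hat{p} \right).
```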