FileName | Abstract | Title
S0377221713006115
Conventional two-stage data envelopment analysis (DEA) models measure the overall performance of a production system composed of two stages (processes) in a specified period of time, where variations in different periods are ignored. This paper takes the operations of individual periods into account to develop a multi-period two-stage DEA model, which is able to measure the overall and period efficiencies at the same time, with the former expressed as a weighted average of the latter. Since the efficiency of a two-stage system in a period is the product of the two process efficiencies, the overall efficiency of a decision making unit (DMU) in the specified period of time can be decomposed into the process efficiency of each period. Based on this decomposition, the sources of inefficiency in a DMU can be identified. The efficiencies measured from the model can also be used to calculate a common-weight global Malmquist productivity index (MPI) between two periods, in that the overall MPI is the product of the two process MPIs. The non-life insurance industry in Taiwan is used to verify the proposed model, and to explain why some companies performed unsatisfactorily in the specified period of time.
Multi-period efficiency and Malmquist productivity index in two-stage production systems
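In the notation sketched below (ours, not necessarily the paper's), the decomposition described in the abstract above can be summarized as follows: the overall efficiency of DMU k over q periods is a weighted average of its period efficiencies, each period efficiency is the product of the two process efficiencies, and the overall Malmquist productivity index is the product of the two process indices:
$$E_k = \sum_{t=1}^{q} w^{t} E_k^{t}, \qquad E_k^{t} = E_k^{t,1} \times E_k^{t,2}, \qquad \mathrm{MPI}_k = \mathrm{MPI}_k^{1} \times \mathrm{MPI}_k^{2}.$$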
S0377221713006127
In a recent paper, Soni and Shah [Soni, H., Shah, N. H. (2008). Optimal ordering policy for stock-dependent demand under progressive payment scheme. European Journal of Operational Research 184(1), 91–100] developed a model to find the optimal ordering policy for a retailer with stock-dependent demand and a supplier offering a progressive payment scheme to the retailer. This note corrects some errors in the formulation of the model of Soni and Shah. It also extends their work by assuming that the credit interest rate of the retailer may exceed the interest rate charged by the supplier. Numerical examples illustrate the benefits of these modifications.
A note on: Optimal ordering policy for stock-dependent demand under progressive payment scheme
S0377221713006139
We consider an order acceptance and scheduling model with machine availability constraints. The manufacturer (machine) is assumed to be available to process orders only within a number of discontinuous time intervals. To capture the real-life behavior of a typical manufacturer whose time available for processing orders is restricted, our model allows the manufacturer to reject or outsource some of the orders. When an order is rejected or outsourced, an order-dependent penalty cost is incurred. The objective is to minimize the makespan of all accepted orders plus the total penalty of all rejected/outsourced orders. We study the approximability of the model and some of its important special cases.
Order acceptance and scheduling with machine availability constraints
S0377221713006140
We consider a new problem of constructing certain required structures in digraphs, where all arcs installed in such structures must be cut from pieces of a specific material of length L. Formally, the model is as follows: given a digraph D = (V, A; w) with w: A → R+, a required structure S, and a specific material of length L, we are asked to construct a subdigraph D′ of D having the structure S, such that each arc in D′ is built from part of a piece and/or some whole pieces of the material; the objective is to minimize the number of pieces of the material needed to construct all arcs in D′. For the two required structures, (1) a directed path from a node s to a node t and (2) a strongly connected spanning subdigraph, we design four approximation algorithms with constant performance ratios to solve both problems.
Approximation algorithms for constructing some required structures in digraphs
S0377221713006152
In this paper, we consider an inventory–routing problem (IRP) in a large petroleum and petrochemical enterprise group. Compared to many other IRPs, the problem studied here includes some special aspects due to operational constraints, such as the hours-of-service regulations of the company and the industry. Also, in some cases, it is more important to avoid stock-outs at any station than to focus purely on transportation cost minimization. The objective is to minimize the maximum route travel time, which has not been addressed in the literature so far. We present a tabu search algorithm to tackle the problem, which builds in an efficient and effective procedure to improve the search quality in each iteration. Moreover, lower bounds for reasonably sized problems, which existing optimization software cannot solve through the formulated mathematical model, are obtained via a Lagrangian relaxation technique. Computational results indicate that the lower bounds are tight and that the tabu search is capable of providing near-optimal, close-to-lower-bound solutions within acceptable computational time.
An inventory–routing problem with the objective of travel time minimization
S0377221713006164
Iterated greedy search is a simple and effective metaheuristic for combinatorial problems. Its flexibility enables the incorporation of components from other metaheuristics with the aim of obtaining effective and powerful hybrid approaches. We propose a tabu-enhanced destruction mechanism for iterated greedy search that records the last removed objects and avoids removing them again in subsequent iterations. The aim is to provide a more diversified and successful search process relative to the standard destruction mechanism, which selects the solution components for removal completely at random. We have considered the quadratic multiple knapsack problem as the application domain, for which we also propose a novel local search procedure, and have developed experiments in order to assess the benefits of the proposal. The results show that the tabu-enhanced iterated greedy approach, in conjunction with the new local search operator, effectively exploits the problem knowledge associated with the requirements of the problem considered, attaining results competitive with the corresponding state-of-the-art algorithms.
Tabu-enhanced iterated greedy algorithm: A case study in the quadratic multiple knapsack problem
S0377221713006176
Additive multi-attribute value models and additive utility models with discrete outcome sets are widely applied in both descriptive and normative decision analysis. Their non-parametric application allows preference inference by analyzing sets of general additive value functions compatible with the observed or elicited holistic pair-wise preference statements. In this paper, we provide necessary and sufficient conditions for the preference inference based on a single preference statement, and sufficient conditions for the inference based on multiple preference statements. In our computational experiments all inferences could be made with these conditions. Moreover, our analysis suggests that the non-parametric analyses of general additive value models are unlikely to be useful by themselves for decision support in contexts where the decision maker preferences are elicited in the form of holistic pair-wise statements.
Preference inference with general additive value models and holistic pair-wise statements
S0377221713006188
In this paper, we consider a two-state (up and down) network consisting of n links. We study the D-spectrum based dynamic reliability of the network under the assumption that the links are subject to failure according to a nonhomogeneous Poisson process. Several mixture representations are provided for the reliability function of residual lifetime of used networks, under different conditions on the status of the network or its links. These representations enable us to explore the residual reliability of operating networks in terms of the reliability functions of residual lifetimes of upper record values. The distribution function of inactivity time of a network is examined under the condition that the network has failed by inspection time t. Stochastic ordering properties of the residual lifetimes of networks under conditional D-spectra are investigated. Several examples and graphs are also provided to illustrate the established results.
Dynamic network reliability modeling under nonhomogeneous Poisson processes
S0377221713006413
Two-stage stochastic linear programming is a classical model in operations research. The usual approach to this model requires detailed information on the distributions of the random variables involved. In this paper, we only assume that the first and second moments of the random variables are available. By using duality of semi-infinite programming and adopting a linear decision rule, we show that a deterministic equivalent of the two-stage problem can be reformulated as a second-order cone optimization problem. Preliminary numerical experiments are presented to demonstrate the computational advantage of this approach.
Two-stage stochastic linear programs with incomplete information on uncertainty
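For orientation, a generic two-stage stochastic LP restricted to a linear (affine) decision rule, of the kind the abstract above refers to, can be written as follows; this is a standard textbook form in our own notation (first-stage constraints on x omitted), and the paper's exact model may differ:
$$\min_{x,\,y^{0},\,Y} \; c^{\top}x + \mathbb{E}\big[q^{\top}y(\xi)\big] \quad \text{s.t.} \quad T(\xi)\,x + W\,y(\xi) \ge h(\xi) \;\; \forall \xi, \qquad y(\xi) = y^{0} + Y\xi.$$
Restricting the recourse decision y(·) to affine functions of the random vector ξ, together with knowledge of only the first and second moments of ξ, is what typically allows the semi-infinite constraints to be dualized into a tractable second-order cone reformulation.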
S0377221713006425
Production optimization of gas-lifted oil wells under facility, routing and pressure constraints is a challenging problem, which has attracted the interest of operations engineers, who aim to drive economic gains, and of scientists, drawn by its inherent complexity. The hardness of this problem rests on the non-linear characteristics of the multidimensional well-production and pressure-drop functions, as well as the discrete routing decisions. To address this, this work develops several formulations in Mixed-Integer Linear Programming (MILP) using multidimensional piecewise-linear models to approximate the non-linear functions, with domains partitioned into hypercubes and simplexes. Computational and simulation analyses were performed considering a synthetic but realistic oil field modeled with a multiphase-flow simulator. The purpose of the analyses was to assess the relative performance of the MILP formulations and their impact on the simulated oil production.
A computational analysis of multidimensional piecewise-linear models with applications to oil production optimization
S0377221713006437
Consider a firm, called the buyer, that satisfies its demand over two periods by assigning both demands to a supplier via a second-price procurement auction; call this the Standard auction. In the hope of lowering its purchase cost, the firm is considering an alternative procedure in which it will also allow bids on each period individually, where there can be either one or two winners covering the two demands; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction can in fact result in a higher cost to the buyer. We provide a bound on how much greater the buyer’s cost can be in the Multiple Winner auction and show that this bound is tight. We then sharpen this bound for two scenarios that can arise when the buyer announces his demands close to the beginning of the demand horizon. Under a monotonicity condition, we achieve a further sharpening of the bound in one of the scenarios. Finally, this monotonicity condition allows us to generalize this bound to the T-period case in which bids are allowed on any subset of period demands.
Revenue deficiency under second-price auctions in a supply-chain setting
S0377221713006449
For measuring technical efficiency relative to a log-linear technology, a generalized multiplicative directional distance function (GMDDF) is developed using the framework of the multiplicative directional distance function (MDDF). Furthermore, a computational procedure is suggested for its estimation. The GMDDF serves as a comprehensive measure of efficiency in revealing Pareto-efficient targets as it accounts for all possible input and output slacks. This measure satisfies several desirable properties of an ideal efficiency measure such as strong monotonicity, unit invariance, translation invariance, and positive affine transformation invariance. This measure can be easily implemented in any standard DEA software and provides the decision makers with the option of specifying preferable direction vectors for incorporating their decision-making preferences. Finally, to demonstrate the ready applicability of our proposed measure, an illustrative empirical analysis is conducted based on a real-life data set of 20 computer hardware companies in India.
A generalized multiplicative directional distance function for efficiency measurement in DEA
S0377221713006450
Demand fluctuations that cause variations in output levels will affect a firm’s technical inefficiency. To assess this demand effect, a demand-truncated production function is developed and an “effectiveness” measure is proposed. Often a firm can adjust some input resources influencing the output level in an attempt to match demand. We propose a short-run capacity planning method, termed proactive data envelopment analysis, which quantifies the effectiveness of a firm’s production system under demand uncertainty. Using a stochastic programming DEA approach, we improve upon short-run capacity expansion planning models by accounting for the decreasing marginal benefit of inputs and estimating the expected value of effectiveness, given demand. The law of diminishing marginal returns is an important property of production functions; however, constant marginal productivity is usually assumed in capacity expansion problems, resulting in biased capacity estimates. Applying the proposed model in an empirical study of convenience stores in Japan demonstrates the actionable advice the model provides about the levels of variable inputs in uncertain demand environments. We conclude that the method is most suitable for characterizing production systems with perishable goods or service systems that cannot store inventories.
Proactive data envelopment analysis: Effective production and capacity expansion in stochastic environments
S0377221713006462
This paper considers a manufacturing supply chain with multiple suppliers in the presence of multiple uncertainties such as uncertain material supplies, stochastic production times, and random customer demands. The system is subject to supply and production capacity constraints. We formulate the integrated inventory management policy for raw material procurement and production control using the stochastic dynamic programming approach. We then investigate the supplier base reduction strategies and the supplier differentiation issue under the integrated inventory management policy. The qualitative relationships between the supplier base size, the supplier capabilities and the total expected cost are established. Insights into differentiating the procurement decisions to different suppliers are provided. The model further enables us to quantitatively achieve the trade-off between the supplier base reduction and the supplier capability improvement, and quantify the supplier differentiation in terms of procurement decisions. Numerical examples are given to illustrate the results.
Integrated inventory management and supplier base reduction in a supply chain with multiple uncertainties
S0377221713006474
Most real-life decision-making activities require more than one objective to be considered. Therefore, several studies have been presented in the literature that use multiple objectives in decision models. In a mathematical programming context, the majority of these studies deal with two objective functions, known as bicriteria optimization, while few of them consider more than two objective functions. In this study, a new algorithm is proposed to generate all nondominated solutions for multiobjective discrete optimization problems with any number of objective functions. In this algorithm, the search is managed over (p − 1)-dimensional rectangles, where p represents the number of objectives in the problem, and for each rectangle two-stage optimization problems are solved. The algorithm is motivated by the well-known ε-constraint scalarization, and its contribution lies in the way the rectangles are defined and tracked. The algorithm is compared with previous approaches on multiobjective knapsack and multiobjective assignment problem instances. The method is highly competitive in terms of solution time and the number of optimization models solved.
A new algorithm for generating all nondominated solutions of multiobjective discrete optimization problems
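As background for the ε-constraint idea the algorithm above builds on, the following is a minimal bi-objective sketch (p = 2, so the rectangles collapse to intervals); the toy knapsack instance, the brute-force solver and all names are illustrative only and are not the paper's implementation:

from itertools import product

# Toy bi-objective knapsack: maximize (value1, value2) subject to a weight budget.
ITEMS = [(4, 2, 3), (3, 5, 4), (5, 1, 2), (2, 4, 3)]  # (value1, value2, weight)
CAPACITY = 7

def lex_max(eps):
    """Lexicographically maximize (objective 1, objective 2) subject to objective 2 >= eps."""
    best = None
    for choice in product((0, 1), repeat=len(ITEMS)):
        if sum(c * w for c, (_, _, w) in zip(choice, ITEMS)) > CAPACITY:
            continue
        f1 = sum(c * v1 for c, (v1, _, _) in zip(choice, ITEMS))
        f2 = sum(c * v2 for c, (_, v2, _) in zip(choice, ITEMS))
        if f2 < eps:
            continue
        if best is None or (f1, f2) > best:
            best = (f1, f2)
    return best

def epsilon_constraint():
    """Enumerate all nondominated points by repeatedly tightening the bound on objective 2."""
    nondominated, eps = [], 0
    while True:
        point = lex_max(eps)
        if point is None:
            return nondominated
        nondominated.append(point)
        eps = point[1] + 1  # integer objectives: require a strictly larger second objective next

print(epsilon_constraint())  # [(9, 3), (8, 6), (7, 7), (5, 9)] for the toy instance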
S0377221713006486
The bi-objective Pollution-Routing Problem is an extension of the Pollution-Routing Problem (PRP) which consists of routing a number of vehicles to serve a set of customers, and determining their speed on each route segment. The two objective functions pertaining to minimization of fuel consumption and driving time are conflicting and are thus considered separately. This paper presents an adaptive large neighborhood search algorithm (ALNS), combined with a speed optimization procedure, to solve the bi-objective PRP. Using the ALNS as the search engine, four a posteriori methods, namely the weighting method, the weighting method with normalization, the epsilon-constraint method and a new hybrid method (HM), are tested using a scalarization of the two objective functions. The HM combines adaptive weighting with the epsilon-constraint method. To evaluate the effectiveness of the algorithm, new sets of instances based on real geographic data are generated, and a library of bi-criteria PRP instances is compiled. Results of extensive computational experiments with the four methods are presented and compared with one another by means of the hypervolume and epsilon indicators. The results show that HM is highly effective in finding good-quality non-dominated solutions on PRP instances with 100 nodes.
The bi-objective Pollution-Routing Problem
S0377221713006498
This note studies the relationships between different aspects of an agent’s preferences toward risk. We show that, under the assumptions of non-satiation and bounded marginal utility, prudence implies risk aversion (imprudence implies risk loving) and that temperance implies prudence (intemperance implies imprudence). The implications of these results for comparing risks in the cases of increase in risk, increase in downside risk and increase in outer risk are discussed.
New results on the relationship among risk aversion, prudence and temperance
S0377221713006504
Given an undirected graph G = (V, E), a k-club is a subset of nodes that induces a subgraph with diameter at most k. The k-club problem is to find a maximum cardinality k-club. In this study, we use a linear programming relaxation standpoint to compare integer formulations for the k-club problem. The comparisons involve formulations known from the literature and new formulations, built in different variable spaces. For the case k = 3, we propose two enhanced compact formulations. From the LP relaxation standpoint these formulations dominate all other compact formulations in the literature and are equivalent to a formulation with a non-polynomial number of constraints. Also for k = 3, we compare the relative strength of LP relaxations for all formulations examined in the study (new and known from the literature). Based on insights obtained from the comparative study, we devise a strengthened version of a recursive compact formulation in the literature for the k-club problem (k > 1) and show how to modify one of the new formulations for the case k = 3 in order to accommodate additional constraints recently proposed in the literature.
An analytical comparison of the LP relaxations of integer models for the k-club problem
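To make the object being optimized concrete, the definition in the abstract above (a node subset inducing a subgraph of diameter at most k) translates directly into a feasibility check; the sketch below uses networkx and a toy 6-cycle, and is illustrative rather than taken from the paper:

import networkx as nx

def is_k_club(G, nodes, k):
    """Return True if `nodes` induces a connected subgraph of G with diameter at most k."""
    H = G.subgraph(nodes)
    if H.number_of_nodes() <= 1:
        return True
    if not nx.is_connected(H):
        return False  # a disconnected induced subgraph has infinite diameter
    return nx.diameter(H) <= k

G = nx.cycle_graph(6)
print(is_k_club(G, [0, 1, 2], 2))       # True: the induced path has diameter 2
print(is_k_club(G, list(range(6)), 2))  # False: the full 6-cycle has diameter 3
print(is_k_club(G, [0, 2, 4], 2))       # False: the induced subgraph is disconnected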
S0377221713006516
This article results from our collaborative project with a Finnish bank aiming to evaluate the sales performance of bank branches. The management wishes to evaluate the branches’ ability to generate profit, which rules out the pure technical efficiency considerations. The branches operate in heterogeneous environments. We deal with the heterogeneity by subdividing the branches according to the bank specification into overlapping clusters and analyze each cluster separately. The prices of the branch outputs are hard to assess as the results from the sales efforts can only be observed with long delays. We employ benchmark units similarly as in value efficiency analysis (VEA). However, we extend VEA in two ways. First, in standard VEA the benchmark unit is assumed to yield the maximum profit among the set of feasible technologies; instead, our benchmark technology may or may not be in the feasible set. Second, we consider efficiency tests employing a benchmark with respect to both profit and return. We propose a solution strategy for these extensions. The bank uses the study to support decisions concerning new branches, changes in the operations of inefficient branches, and actions aiming to more flexible deployment of the staff.
Bank branch sales evaluation using extended value efficiency analysis
S0377221713006528
The intensification of livestock operations in the last few decades has increased social concern over their environmental impacts, making appropriate manure management decisions increasingly important. A socially acceptable manure management system that simultaneously achieves the pressing environmental objectives while balancing the socio-economic welfare of farmers and society at large is needed. Manure management decisions involve a number of decision makers with different and conflicting views of what is acceptable in the context of sustainable development. This paper develops a decision-making tool based on a multiple criteria decision making (MCDM) approach to address manure management problems in the Netherlands. It demonstrates the application of compromise programming and goal programming to evaluate key trade-offs between the socio-economic benefits and the environmental sustainability of manure management systems, while taking decision makers’ conflicting views of the different criteria into account. The proposed methodology is a useful tool for assisting decision makers and policy makers in designing policies that enhance the introduction of economically, socially and environmentally sustainable manure management systems.
A multiple criteria decision making approach to manure management systems in the Netherlands
S0377221713006541
This paper defines and analyzes a generalization of the classical minimum vertex cover problem to the case of two-layer interdependent networks with cascading node failures that can be caused by two common types of interdependence. Previous studies on interdependent networks mainly addressed the issues of cascading failures from a numerical simulations perspective, whereas this paper proposes an exact optimization-based approach for identifying a minimum-cardinality set of nodes, whose deletion would effectively disable both network layers through cascading failure mechanisms. We analyze the computational complexity and linear 0–1 formulations of the defined problems, as well as prove an LP approximation ratio result that generalizes the well-known 2-approximation for the classical minimum vertex cover problem. In addition, we introduce the concept of a “depth of cascade” (i.e., the maximum possible length of a sequence of cascading failures for a given interdependent network) and show that for any problem instance this parameter can be explicitly derived via a polynomial-time procedure.
Minimum vertex cover problem for coupled interdependent networks with cascading failures
S0377221713006747
In the last decade, several papers have appeared on facility location problems that incorporate customer demand via the multinomial logit model. Three linear reformulations of the original non-linear model have been proposed so far. In this paper, we discuss these models in terms of solvability. We present empirical findings based on synthetic data.
A comparison of linear reformulations for multinomial logit choice probabilities in facility location models
S0377221713006759
This paper reviews articles on cooperative advertising, a topic that has gained substantial interest in recent years. We first briefly distinguish five different definitions of cooperative advertising found in the operations research literature. After that, we concentrate on vertical cooperative advertising, which is the most common object of investigation and is understood as a financial agreement in which a manufacturer offers to pay a certain share of his retailer’s advertising expenditures. In total, we identified 58 scientific papers considering mathematical modeling of vertical cooperative advertising. These articles are then analyzed with regard to their general model setting (e.g., the underlying supply chain structure and the design of the cooperative advertising program). After that, we explain the different demand and cost functions that are employed, whereupon we distinguish between static and dynamic models. The last dimension of our review is dedicated to the game-theoretic concepts that are most commonly used to reflect different forms of distribution of power within the channel.
Cooperative advertising models in supply chain management: A review
S0377221713006760
In this paper, we address the 2-dimensional vector packing problem, where an optimal layout for a set of items with two independent dimensions has to be found within the boundaries of a rectangle. Many practical applications in areas such as telecommunications, transportation and production planning lead to this combinatorial problem. Here, we focus on the computation of fast lower bounds using original approaches based on the concept of dual-feasible functions. Until now, all the dual-feasible functions proposed in the literature were 1-dimensional functions. In this paper, we extend the principles of dual-feasible functions to the m-dimensional case by introducing the concept of vector packing dual-feasible functions, and we propose and analyze different new families of such functions. All the proposed approaches were tested extensively using benchmark instances described in the literature. Our computational results show that these functions can approximate very efficiently the best known lower bounds for this problem and significantly improve the convergence of branch-and-bound algorithms.
Multidimensional dual-feasible functions and fast lower bounds for the vector packing problem
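For readers unfamiliar with the 1-dimensional concept that the paper generalizes, the standard definition (stated here as background in our own notation, not as the paper's statement) is: a function f : [0, 1] → [0, 1] is dual-feasible if, for every finite set S of values in [0, 1],
$$\sum_{x \in S} x \le 1 \;\Longrightarrow\; \sum_{x \in S} f(x) \le 1.$$
Applied to bin packing with item sizes s_1, …, s_n (scaled to bin capacity 1), any such f yields the valid lower bound ⌈∑_i f(s_i)⌉ on the number of bins, since the contents of each bin map to a total of at most 1; the vector packing generalization studied in the paper replaces the scalar sizes by m-dimensional vectors.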
S0377221713006772
There are new opportunities for the application of problem structuring methods to address science and technology risk conflicts through stakeholder dialogue. Most previous approaches to addressing risk conflicts have been developed from a traditional risk communication perspective, which tends to construct engagement between stakeholders based on the assumption that scientists evaluate technologies using facts, and lay participants do so based on their values. ‘Understanding the facts’ is generally privileged, so the value framings of experts often remain unexposed, and the perspectives of lay participants are marginalized. When this happens, risk communication methodologies fail to achieve authentic dialogue and can exacerbate conflict. This paper introduces ‘Issues Mapping’, a problem structuring method that enables dialogue by using visual modelling techniques to clarify issues and develop mutual understanding between stakeholders. A case study of the first application of Issues Mapping is presented, which engaged science and community protagonists in the genetic engineering debate in New Zealand. Participant and researcher evaluations suggest that Issues Mapping helped to break down stereotypes of both scientists and environmental activists; increased mutual understanding; reduced conflict; identified common ground; started building trust; and supported the emergence of policy options that all stakeholders in the room could live with. The paper ends with some reflections and priorities for further research.
Issues Mapping: A problem structuring method for addressing science and technology conflicts
S0377221713006784
We study a scheduling problem with rejection on a single serial batching machine, where the objectives are to minimize the total completion time and the total rejection cost. We consider four different problem variations. The first is to minimize the sum of the two objectives. The second and the third are to minimize one objective, given an upper bound on the value of the other objective, and the last is to find a Pareto-optimal solution for each Pareto-optimal point. We provide a polynomial time procedure to solve the first variation and show that the three other variations are NP-hard. For solving the three NP-hard problems, we construct a pseudo-polynomial time algorithm. Finally, for one of the NP-hard variants of the problem we propose an FPTAS, provided some conditions hold.
The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost
S0377221713006796
In this study we deal with network routing decisions and approximate performance evaluation approaches for generalized open queuing networks (OQN), in which commodities enter the network, receive service at one or more arcs and then leave the network. Exact performance evaluation has been applied to the analysis of Jackson OQN, where the arrival and service processes of the commodities are assumed to be Poisson. However, the Poisson assumptions are not plausible or acceptable for the analysis of generalized OQN, as their arrival and service processes can be much less variable than Poisson processes, resulting in overestimated system performance measures and inappropriate flow routing solutions. In this paper we merge network routing algorithms and network decomposition methods to solve multicommodity flow problems in generalized OQN. Our focus is on steady-state performance measures such as average delays and waiting times in queue. The main contributions are twofold: (i) to highlight that solving the corresponding multicommodity flow problem by representing the generalized OQN as a Jackson OQN may be a poor approximation and may lead to inaccurate estimates of the system performance measures, and (ii) to present a multicommodity flow algorithm based on a routing step and on an approximate decomposition step, which leads to much more accurate solutions. Computational results are presented in order to show the effectiveness of the proposed approach.
Approximate decomposition methods for the analysis of multicommodity flow routing in generalized queuing networks
S0377221713006802
In this research, two crucial optimization problems of berth allocation and yard assignment in the context of bulk ports are studied. We discuss how these problems are interrelated and can be combined and solved as a single large-scale optimization problem. More importantly, we highlight the differences in operations between bulk ports and container terminals, which call for solutions specifically devised for bulk ports. The objective is to minimize the total service time of vessels berthing at the port. We propose an exact solution algorithm based on a branch-and-price framework to solve the integrated problem. In the proposed model, the master problem is formulated as a set-partitioning problem, and subproblems to identify columns with negative reduced costs are solved using mixed integer programming. To obtain sub-optimal solutions quickly, a metaheuristic approach based on critical-shaking neighborhood search is presented. The proposed algorithms are tested and validated through numerical experiments based on instances inspired by real bulk port data. The results indicate that the algorithms can be successfully used to solve instances containing up to 40 vessels within reasonable computational time.
A branch-and-price algorithm to solve the integrated berth allocation and yard assignment problem in bulk ports
S0377221713006814
We consider the problem of scheduling products with components on a single machine, where changeovers incur fixed costs. The objective is to minimize the weighted sum of total flow time and changeover cost. We provide properties of optimal solutions and develop an explicit characterization of optimal sequences, while showing that this characterization has recurrent properties. Our structural results have interesting implications for practitioners, primarily that the structure of optimal sequences is robust to changes in demand.
Optimal single machine scheduling of products with components and changeover cost
S0377221713006826
In the present article, we propose a new control chart for monitoring high quality processes. More specifically, we suggest declaring the monitored process out of control by exploiting a compound rule based on the number of conforming units observed between the (i − 1)th and the ith nonconforming item and the number of conforming items observed between the (i − 2)th and the ith nonconforming item. Our numerical experimentation demonstrates that, in most cases, the proposed control chart exhibits performance that is better than (or at least equivalent to) that of its competitors.
A compound control chart for monitoring and controlling high quality processes
S0377221713006838
In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. We develop a stylized model of consumer choice that illustrates the role of opaque selling in market segmentation. We model a firm selling a product via three selling channels: a regular full information channel, an opaque posted price channel and an opaque bidding channel where consumers specify the price they are willing to pay. We illustrate the segmentation created by opaque selling as well as compare optimal revenues and prices for sellers using regular full information channels with those using opaque selling mechanisms in conjunction with regular channels. We also study the segmentation and policy changes induced by capacity constraints.
Pricing and market segmentation using opaque selling mechanisms
S0377221713006851
Supplier networks in a global context, with price discounts and uncertain fluctuations of currency exchange rates, have become critical in today’s world economy. We study the problem of supplier selection in the presence of uncertain fluctuations of currency exchange rates and price discounts. We specifically consider a buyer with multiple sites sourcing a product from heterogeneous suppliers and address both the supplier selection and the purchased quantity decision. Suppliers are located worldwide and pricing is offered in suppliers’ local currencies. Exchange rates from the local currencies of the suppliers to the standard currency of the buyer are subject to uncertain fluctuations over time. In addition, suppliers offer discounts as a function of the total quantity bought by the different customer sites over the time horizon, irrespective of the quantity purchased by each site. We first provide a literature review on the overlapping topics of supplier selection and currency risk. Then, we model the problem using a mixed integer scenario-based stochastic programming method. The objective is to minimize the total expected system cost (purchase price + inventory cost + transportation cost + supplier management cost). Finally, we conduct numerical studies to show the value of the proposed model and discuss some relevant managerial insights for the theory and practice of supply chain management research.
A scenario-based stochastic model for supplier selection in global context with multiple buyers, currency fluctuation uncertainties, and price discounts
S0377221713006863
This paper investigates the construction of an automatic algorithm selection tool for the multi-mode resource-constrained project scheduling problem (MRCPSP). The research described relies on the notion of empirical hardness models. These models map problem instance features onto the performance of an algorithm. Using such models, the performance of a set of algorithms can be predicted. Based on these predictions, one can automatically select the algorithm that is expected to perform best given the available computing resources. The idea is to combine different algorithms in a super-algorithm that performs better than any of the components individually. We apply this strategy to the classic problem of project scheduling with multiple execution modes. We show that we can indeed significantly improve on the performance of state-of-the-art algorithms when evaluated on a set of unseen instances. This becomes important when many instances have to be solved consecutively. Many state-of-the-art algorithms perform very well on a majority of benchmark instances, while performing worse on a smaller set of instances. One algorithm’s performance can vary greatly across a set of instances, while another algorithm’s performance may hardly vary at all. Knowing in advance, without using scarce computational resources, which algorithm to run on a certain problem instance can significantly improve the total overall performance.
An automatic algorithm selection approach for the multi-mode resource-constrained project scheduling problem
S0377221713006875
Modeling the evolution of networks is central to our understanding of large communication systems and, more generally, modern economic and social systems. The research on social and economic networks is truly interdisciplinary and the number of proposed models is huge. In this survey we discuss a small selection of modeling approaches, covering classical random graph models and game-theoretic models used to analyze the evolution of social networks. Based on these two basic modeling paradigms, we introduce co-evolutionary models of networks and play as a potential synthesis.
Evolution of social networks
S0377221713006887
The objective of this manuscript is to introduce a decision methodology that allows manufacturing firms to evaluate which supplier is the most suitable partner for the implementation of a collaborative CO2 reduction management approach. The decision problem is developed for the fast-moving consumer goods (FMCG) industry, which currently ranks among the ten largest CO2-emitting industries worldwide. In this paper, the evaluation and selection of the most suitable supplier is performed using the analytic network process (ANP), a decision-making technique that allows practitioners to solve complex decision structures. The key contributions of the present paper reside in the combination of decision criteria derived from the literature and from the case study, aimed at enhancing judgment validity, with particular emphasis on a collaborative setting, which is highly relevant in the present context as focal firms often lack the necessary skills for sustainability and, at the same time, are held responsible for sustainability in the supply chain. The practical application of the ANP model at a major FMCG company yields robust results, corroborated through a sensitivity analysis.
Strategic analysis of manufacturer-supplier partnerships: An ANP model for collaborative CO2 reduction management
S0377221713006899
In this work, we investigate the Resilient Multi-level Hop-constrained Network Design (RMHND) problem, which consists of designing hierarchical telecommunication networks while ensuring resilience against random failures and maximum-delay guarantees in the communication. Three mathematical formulations are proposed, and algorithms based on these formulations are evaluated. A Branch-and-price algorithm, based on a delayed column generation approach within a Branch-and-bound framework, is shown to work well, finding optimal solutions for practical telecommunication scenarios within reasonable time. Computational results show that algorithms based on the compact formulations are able to prove optimality for instances of limited size in the scenarios of interest, while the proposed Branch-and-price algorithm exhibits much better performance.
Branch-and-price algorithm for the Resilient Multi-level Hop-constrained Network Design
S0377221713006905
We study competition between an original equipment manufacturer (OEM) and an independently operating remanufacturer (IO). Different from the existing literature, the OEM and the IO compete not only for selling their products but also for collecting returned products (cores) through their acquisition prices. We consider a two-period model with manufacturing by the OEM in the first period, and manufacturing as well as remanufacturing in the second period. We find the optimal policies for both players by establishing a Nash equilibrium in the second period, and then determine the optimal manufacturing decision for the OEM in the first period. This leads to a number of managerial insights. One interesting result is that the acquisition price of the OEM only depends on its own cost structure, and not on the acquisition price of the IO. Further insights are obtained from a numerical investigation. We find that when the cost benefits of remanufacturing diminish and the IO has a greater chance of collecting the available cores, the OEM manufactures less in the first period as the market in the second period gets larger, in order to protect its market share. Finally, we consider the case where consumers have a lower willingness to pay for remanufactured products and find that in that case remanufacturing becomes less profitable overall.
Competition for cores in remanufacturing
S0377221713006917
In this paper, we address the problem of planning the patient flow in hospitals subject to scarce medical resources with the objective of maximizing the contribution margin. We assume that we can classify a large enough percentage of elective patients according to their diagnosis-related group (DRG) and clinical pathway. The clinical pathway defines the procedures (such as different types of diagnostic activities and surgery) as well as the sequence in which they have to be applied to the patient. The decision is then on which day each procedure of each patient’s clinical pathway should be done, taking into account the sequence of procedures as well as scarce clinical resources, such that the contribution margin of all patients is maximized. We develop two mixed-integer programs (MIP) for this problem which are embedded in a static and a rolling horizon planning approach. Computational results on real-world data show that employing the MIPs leads to a significant improvement of the contribution margin compared to the contribution margin obtained by employing the planning approach currently practiced. Furthermore, we show that the time between admission and surgery is significantly reduced by applying our models.
Scheduling the hospital-wide flow of elective patients
S0377221713006929
In this article, we consider a decision process in which vaccination is performed in two phases to contain the outbreak of an infectious disease in a set of geographic regions. In the first phase, a limited number of vaccine doses are allocated to each region; in the second phase, additional doses may be allocated to regions in which the epidemic has not been contained. We develop a simulation model to capture the epidemic dynamics in each region for different vaccination levels. We formulate the vaccine allocation problem as a two-stage stochastic linear program (2-SLP) and use the special problem structure to reduce it to a linear program of a size similar to that of the first-stage problem. We also present a newsvendor model formulation of the problem, which provides a closed-form solution for the optimal allocation. We construct test cases motivated by vaccine planning for seasonal influenza in the state of North Carolina. Using the 2-SLP formulation, we estimate the value of the stochastic solution and the expected value of perfect information. We also propose and test an easy-to-implement heuristic for vaccine allocation. We show that our proposed two-phase vaccination policy potentially results in a lower attack rate and considerable savings in vaccine production and administration cost.
Optimal two-phase vaccine allocation to geographically different regions under uncertainty
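For context on the closed-form solution mentioned in the abstract above: in the classical newsvendor model (standard notation, not necessarily the paper's), with underage cost c_u, overage cost c_o and demand distribution F, the optimal stocking (here, allocation) quantity is the critical fractile
$$q^{*} = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right),$$
i.e., the demand quantile at which the expected marginal cost of allocating one more dose balances the expected marginal cost of allocating one fewer.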
S0377221713006930
Efficient and reliable home delivery is crucial for the economic success of online retailers. This is especially challenging for attended home deliveries in metropolitan areas where logistics service providers face congested traffic networks and customers expect deliveries in tight delivery time windows. Our goal is to develop and compare strategies that maximize the profits of a logistics service provider by accepting as many delivery requests as possible, while assessing the potential impact of a request on the service quality of a delivery tour. Several acceptance mechanisms are introduced, differing in the amount of travel time information that is considered in the decision of whether a delivery request can be accommodated or not. A real-world inspired simulation framework is used for comparison of acceptance mechanisms with regard to profits and service quality. Computational experiments utilizing this simulation framework investigate the effectiveness of acceptance mechanisms and help identify when more advanced travel time information may be worth the additional data collection and computational efforts.
Customer acceptance mechanisms for home deliveries in metropolitan areas
S0377221713006942
Companies that maintain capital goods (e.g., airplanes or power plants) often face high costs, both for holding spare parts and due to downtime of their technical systems. These costs can be reduced by pooling common spare parts between multiple companies in the same region, but managers may be unsure about how to share the resulting costs or benefits in a fair way that avoids free riders. To tackle this problem, we study several players, each facing a Poisson demand process for an expensive, low-usage item. They share a stock point that is controlled by a continuous-review base stock policy with full backordering under an optimal base stock level. Costs consist of penalty costs for backorders and holding costs for on-hand stock. We propose to allocate the total costs proportional to players’ demand rates. Our key result is that this cost allocation rule satisfies many appealing properties: it makes all separate participants and subgroups of participants better off, it stimulates growth of the pool, it can be easily implemented in practice, and it induces players to reveal their private information truthfully. To obtain these game theoretical results, we exploit novel structural properties of the cost function in our (S − 1, S) inventory model.
Pooling of spare parts between multiple users: How to share the benefits?
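The allocation rule proposed in the abstract above is simple to state explicitly (notation ours): if player i faces Poisson demand rate λ_i and C(N) denotes the optimal expected holding-plus-backorder cost of the pooled base stock system serving the full set of players N, then player i is charged
$$\phi_i \;=\; \frac{\lambda_i}{\sum_{j \in N} \lambda_j}\, C(N).$$
The abstract's claim is that this proportional rule leaves every participant and every subgroup better off than operating alone, encourages growth of the pool, and induces truthful reporting of the private demand rates.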
S0377221713006954
Typical questionnaires administered by financial advisors to assess financial risk tolerance mostly contain stereotypes of people, have seemingly unscientific scoring approaches and often treat risk as a one-dimensional concept. In this work, a mathematical tool was developed to assess relative risk tolerance using Data Envelopment Analysis (DEA). At its core, it is a novel questionnaire that characterizes risk by its four distinct elements: propensity, attitude, capacity, and knowledge. Over 180 individuals were surveyed and their responses were analyzed using the Slacks-based measure type of DEA efficiency model. Results show that the multidimensionality of risk must be considered for complete assessment of risk tolerance. This approach also provides insight into the relationship between risk, its elements and other variables. Specifically, the perception of risk varies by gender as men are generally less risk averse than women. In fact, risk attitude and knowledge scores are consistently lower for women, while there is no statistical difference in their risk capacity and propensity compared to men. The tool can also serve as a “risk calculator” for an appropriate and defensible method to meet legal compliance requirements, known as the “Know Your Client” rule, that exist for Canadian financial institutions and their advisors.
Two-stage financial risk tolerance assessment using data envelopment analysis
S0377221713006966
The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in Wireless Sensor Network (WSN) design. In a WSN, Sensor Nodes are placed across a target geographical region to detect relevant signals. These signals are communicated to a central location, known as the Base Station, for further processing. The DCRNPP aims to place the minimum number of additional Relay Nodes at a subset of Candidate Relay Node locations in such a manner that signals from the various Sensor Nodes can be communicated to the Base Station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in the form of node-cut inequalities. We also derive conditions under which these inequalities are facet defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve the DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution for the problem, which is used as an initial incumbent solution in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.
Optimal relay node placement in delay constrained wireless sensor network design
S0377221713006978
In this paper we consider the Cumulative Capacitated Vehicle Routing Problem (CCVRP), which is a variation of the well-known Capacitated Vehicle Routing Problem (CVRP). In this problem, the traditional objective of minimizing total distance or time traveled by the vehicles is replaced by minimizing the sum of arrival times at the customers. We propose a branch-and-cut-and-price algorithm for obtaining optimal solutions to the problem. To the best of our knowledge, this is the first published exact algorithm for the CCVRP. We present computational results based on a set of standard CVRP benchmarks and investigate the effect of modifying the number of vehicles available.
A branch-and-cut-and-price algorithm for the cumulative capacitated vehicle routing problem
S0377221713007170
We consider the optimal asset allocation problem in a continuous-time regime-switching market. The problem is to maximize the expected utility of the terminal wealth of a portfolio that contains an option, an underlying stock and a risk-free bond. The difficulty that arises in our setting is finding a way to represent the return of the option by the returns of the stock and the risk-free bond in an incomplete regime-switching market. To overcome this difficulty, we introduce a functional operator to generate a sequence of value functions, and then show that the optimal value function is the limit of this sequence. The explicit form of each function in the sequence can be obtained by solving an auxiliary portfolio optimization problem in a single-regime market. And then the original optimal value function can be approximated by taking the limit. Additionally, we can also show that the optimal value function is a solution to a dynamic programming equation, which leads to the explicit forms for the optimal value function and the optimal portfolio process. Furthermore, we demonstrate that, as long as the current state of the Markov chain is given, it is still optimal for an investor in a multiple-regime market to simply allocate his/her wealth in the same way as in a single-regime market.
Portfolio optimization in a regime-switching market with derivatives
S0377221713007182
Because most commercial passenger airlines operate on a hub-and-spoke network, small disturbances can cause major disruptions in their planned schedules and have a significant impact on their operational costs and performance. When a disturbance occurs, the airline often applies a recovery policy in order to quickly resume normal operations. We present in this paper a large neighborhood search heuristic to solve an integrated aircraft and passenger recovery problem. The problem consists of creating new aircraft routes and passenger itineraries to produce a feasible schedule during the recovery period. The method is based on an existing heuristic, developed in the context of the 2009 ROADEF Challenge, which alternates between three phases: construction, repair and improvement. We introduce a number of refinements in each phase so as to perform a more thorough search of the solution space. The resulting heuristic performs very well on the instances introduced for the challenge, obtaining the best known solution for 17 out of 22 instances within five minutes of computing time and 21 out of 22 instances within 10 minutes of computing time.
Improvements to a large neighborhood search heuristic for an integrated aircraft and passenger recovery problem
S0377221713007194
Markowitz formulated the portfolio optimization problem through two criteria: the expected return and the risk, as a measure of the variability of the return. The classical Markowitz model uses the variance as the risk measure and is a quadratic programming problem. Many attempts have been made to linearize the portfolio optimization problem. Several different risk measures have been proposed which are computationally attractive as (for discrete random variables) they give rise to linear programming (LP) problems. About twenty years ago, the mean absolute deviation (MAD) model drew a lot of attention, resulting in much research and speeding up the development of other LP models. Further, the LP models based on the conditional value at risk (CVaR) have had a great impact on new developments in portfolio optimization during the first decade of the 21st century. The LP solvability may become relevant for real-life decisions when portfolios have to meet side constraints and take into account transaction costs, or when large-size instances have to be solved. In this paper we review the variety of LP solvable portfolio optimization models presented in the literature, the real features that have been modeled, and the solution approaches to the resulting models, in most cases mixed integer linear programming (MILP) models. We also discuss the impact of the inclusion of the real features.
Twenty years of linear programming based portfolio optimization
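For reference, the two families of risk measures named in the abstract above lead to LPs under a discrete scenario model with scenario returns r_t and probabilities p_t; the formulations below are the standard Konno–Yamazaki (MAD) and Rockafellar–Uryasev (CVaR) forms in our notation, not reproductions from the survey:
$$\text{MAD: } \min_{x,\,d_t \ge 0} \; \sum_{t} p_t\, d_t \quad \text{s.t.} \quad d_t \ge \pm\big(r_t^{\top}x - \mu(x)\big), \qquad \mu(x) = \sum_{t} p_t\, r_t^{\top}x,$$
$$\text{CVaR: } \min_{x,\,\eta,\,z_t \ge 0} \; \eta + \frac{1}{1-\alpha}\sum_{t} p_t\, z_t \quad \text{s.t.} \quad z_t \ge -r_t^{\top}x - \eta,$$
in both cases together with the usual portfolio constraints on x (budget, bounds, and any of the side constraints the survey discusses).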
S0377221713007200
Column generation, combined with an appropriate integer programming technique, has been shown to be a powerful tool for solving huge integer programmes arising in various applications. In these column generation approaches, the master problem is often of a set partitioning type. The set partitioning polytope has the quasi-integrality property, which enables the use of simplex pivots for finding improved integer solutions, each of which is associated with a linear programming basis. By combining such pivots with column generation, one obtains a method where each found solution to a restricted master problem is feasible, integer, and associated with a dual solution that can be used in a column generation step. This paper presents a framework for such an all-integer column generation approach to set partitioning problems. We give the basic principles of all-integer pivots and all-integer column generation. We also state optimality conditions and introduce means for preserving a basis in the event that a heuristic is applied to the master problem. These extensions introduce flexibility in the design of a specific solution scheme of this kind, and with proper settings optimal or approximate solutions can be sought.
All-integer column generation for set partitioning: Basic principles and extensions
S0377221713007212
This paper investigates the twin effects of supply chain visibility (SCV) and supply chain risk (SCR) on supply chain performance. Operationally, SCV has been linked to the capability of sharing timely and accurate information on exogenous demand, the quantity and location of inventory, transport-related cost, and other logistics activities throughout an entire supply chain. Similarly, SCR can be viewed as the likelihood that an adverse event has occurred during a certain epoch within a supply chain, together with the associated consequences of that event, which affect supply chain performance. Given the multi-faceted attributes of the decision-making process, which involves many stages, objectives, and stakeholders, this aspect of the supply chain calls for a fuzzy multi-objective decision-making approach to model SCV and SCR from an operational perspective. Hence, our model incorporates the objectives of SCV maximization, SCR minimization, and cost minimization under the constraints of budget, customer demand, production capacity, and supply availability. A numerical example is used to demonstrate the applicability of the model. Our results suggest that decision makers tend to mitigate SCR first and then enhance SCV.
A multi-objective approach to supply chain visibility and risk
S0377221713007224
Motivated by the observations that the direct sales channel is increasingly used for customized products and that retailers wield leadership, we develop in this paper a retailer-Stackelberg pricing model to investigate the product variety and channel structure strategies of a manufacturer in a circular spatial market. To avoid channel conflict, we consider the commonly observed case where the indirect channel sells standard products whereas the direct channel offers custom products. Our analytical results indicate that if the reservation price in the indirect channel is sufficiently low, adding the direct channel raises the unit wholesale price and retail price in the indirect channel due to customization in the direct channel. Despite the fact that dual channels for the retailer may dominate the single indirect channel, we find that the motivation for the manufacturer to use dual channels decreases with the unit production cost, while it increases with (i) the marginal cost of variety, (ii) the retailer’s marginal selling cost, and (iii) the customer’s fit cost. Interestingly, our equilibrium analysis demonstrates that it is more likely for the manufacturer to use dual channels under the retailer Stackelberg channel leadership scenario than under the manufacturer Stackelberg scenario if offering a greater variety is very expensive. When offering a greater variety is inexpensive, the decentralization of the indirect channel may invert the manufacturer’s channel structure decision. Furthermore, endogenization of product variety will also invert the channel structure decision if the standard product’s reservation price is sufficiently low.
Product variety and channel structure strategy for a retailer-Stackelberg supply chain
S0377221713007236
Adjacency constraints along with even flow harvest constraints are important in long term forest planning. Simulated annealing (SA) has previously been applied successfully when addressing such constraints. The objective of this paper was to assess the performance of SA under three new methods of introducing biased probabilities in the management unit (MU) selection and compare them to the conventional method that assumes uniform probabilities. The new methods were implemented as a search vector approach based on the number of treatment schedules describing sequences of silvicultural treatments over time and on the standard deviation of net present value within MUs (Methods 2 and 3, respectively), and by combining the two approaches (Method 4). We constructed three hundred hypothetical forests (datasets) for three different landscapes characterized by different initial age class distributions (young, normal and old). Each dataset encompassed 1600 management units. The evaluation of the methods was done by means of objective function values, first feasible iteration and time consumption. Introducing a bias in the MU selection improves solutions compared to the conventional method (Method 1). However, an increase in computational time is in general needed for the new methods. Method 4 is the best alternative because, for a large share of the datasets, it produced the best average and maximum objective function values and had lower time consumption than Methods 2 and 3. Although Method 4 performed very well, Methods 2 and 3 should not be neglected because for a considerable number of datasets the maximum objective function values were obtained by these methods.
Applying simulated annealing using different methods for the neighborhood search in forest planning problems
S0377221713007248
Increased rates of mortgage foreclosures in the U.S. have had devastating social and economic impacts during and after the 2008 financial crisis. As part of the response to this problem, nonprofit organizations such as community development corporations (CDCs) have been trying to mitigate the negative impacts of mortgage foreclosures by acquiring and redeveloping foreclosed properties. We consider the strategic resource allocation decisions for these organizations which involve budget allocations to different neighborhoods under cost and return uncertainty. Based on interactions with a CDC, we develop stochastic integer programming based frameworks for this decision problem, and assess the practical value of the models by using real-world data. Both policy-related and computational analyses are performed, and several insights such as the trade-offs between different objectives, and the efficiency of different solution approaches are presented.
Stochastic models for strategic resource allocation in nonprofit foreclosed housing acquisitions
S0377221713007261
The Pickup and Delivery Problem with Shuttle routes (PDPS) is a special case of the Pickup and Delivery Problem with Time Windows (PDPTW) where the trips between the pickup points and the delivery points can be decomposed into two legs. The first leg visits only pickup points and ends at some delivery point. The second leg is a direct trip – called a shuttle – between two delivery points. This optimization problem has practical applications in the transportation of people between a large set of pickup points and a restricted set of delivery points. This paper proposes three mathematical models for the PDPS and a branch-and-cut-and-price algorithm to solve it. The pricing sub-problem, an Elementary Shortest Path Problem with Resource Constraints (ESPPRC), is solved with a labeling algorithm enhanced with efficient dominance rules. Three families of valid inequalities are used to strengthen the quality of linear relaxations. The method is evaluated on generated and real-world instances containing up to 193 transportation requests. Instances with up to 87 customers are solved to optimality within a computation time of one hour.
A branch-and-cut-and-price approach for the pickup and delivery problem with shuttle routes
S0377221713007273
Promethee II is a prominent method for multi-criteria decision aid (MCDA) that builds a complete ranking on a set of potential actions by assigning each of them a so-called net flow score. However, to calculate these scores, each pair of actions has to be compared, causing the computational load to increase quadratically with the number of actions, eventually leading to prohibitive execution times for large decision problems. For some problems, however, a trade-off between the ranking’s accuracy and the required evaluation time may be acceptable. Therefore, we propose a piecewise linear model that approximates Promethee II’s net flow scores and reduces the computational complexity (with respect to the number of actions) from quadratic to linear at the cost of some wrongly ranked actions. Simulations on artificial problem instances allow us to quantify this time/quality trade-off and to provide probabilistic bounds on the problem size above which our model satisfactorily approximates Promethee II’s rankings. They show, for instance, that for decision problems of 10,000 actions evaluated on 7 criteria, the Pearson correlation coefficient between the original scores and our approximation is at least 0.97. When weighed against computation times that are more than 7000 times faster than for the Promethee II model, the proposed approximation model represents an interesting alternative for large problem instances.
Approximating Promethee II’s net flow scores by piecewise linear value functions
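For context on where the quadratic cost mentioned in the abstract above comes from, the Promethee II net flow score can be written in generic notation (assumed here, not quoted from the paper): every action must be compared with every other action, hence O(n^2) work for n actions.

```latex
\phi(a) \;=\; \frac{1}{n-1}\sum_{b\neq a}\bigl[\pi(a,b)-\pi(b,a)\bigr],
\qquad
\pi(a,b)\;=\;\sum_{k=1}^{q} w_k\,P_k\bigl(g_k(a)-g_k(b)\bigr),
```

where g_k are the criterion evaluations, P_k the preference functions and w_k the criterion weights. The approximation discussed above replaces this pairwise computation with piecewise linear value functions evaluated per action, which is what brings the effort down to linear in n.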
S0377221713007285
Consider a random vector, and assume that information on a set of its moments is known. Among all possible distributions obeying the given moment constraints, the envelope of the probability distribution functions is introduced in this paper as a distributionally robust probability function. We show that such a function is computable in the bi-variate case under some conditions. Connections to existing results in the literature and applications in risk management are discussed as well.
On distributional robust probability functions and their computations
S0377221713007297
It is widely accepted in forecasting that a combination model can improve forecasting accuracy. One important challenge is how to select the optimal subset of individual models from all available models without having to try all possible combinations of these models. This paper proposes an algorithm, based on information theory, for selecting the optimal subset from all individual models. The experimental results in tourism demand forecasting demonstrate that the combination of the individual models from the selected optimal subset significantly outperforms the combination of all available individual models. The proposed optimal subset selection algorithm provides a theoretical approach rather than the experimental assessments which dominate the literature.
A combination selection algorithm on forecasting
S0377221713007303
This paper presents a preference-based method to handle optimization problems with multiple objectives. With an increase in the number of objectives the computational cost in solving a multi-objective optimization problem rises exponentially, and it becomes increasingly difficult for evolutionary multi-objective techniques to produce the entire Pareto-optimal front. In this paper, an evolutionary multi-objective procedure is combined with preference information from the decision maker during the intermediate stages of the algorithm leading to the most preferred point. The proposed approach is different from the existing approaches, as it tries to find the most preferred point with a limited budget of decision maker calls. In this paper, we incorporate the idea into a progressively interactive technique based on polyhedral cones. The idea is also tested on another progressively interactive approach based on value functions. Results are provided on two to five-objective unconstrained as well as constrained test problems.
An interactive evolutionary multi-objective optimization algorithm with a limited number of decision maker calls
S0377221713007315
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
The Subset Sum game
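To make the sequential-selection process described above concrete, here is a small, self-contained simulation of the alternating game in which both agents follow a greedy strategy (largest own item that still fits). The function name, the stopping rule details and the toy instance are illustrative assumptions, not the authors' code or strategies.

```python
# Illustrative simulation of the alternating Subset Sum game: each agent, in turn,
# greedily packs its largest remaining item that still fits; the game ends when the
# remaining capacity admits no item of either agent.
def subset_sum_game(items_a, items_b, capacity):
    items = {0: sorted(items_a, reverse=True), 1: sorted(items_b, reverse=True)}
    packed = {0: 0, 1: 0}
    turn = 0
    while True:
        # Stop when neither agent can place any remaining item.
        if all(w > capacity for w in items[0] + items[1]):
            break
        choice = next((w for w in items[turn] if w <= capacity), None)
        if choice is not None:            # greedy: largest feasible own item
            items[turn].remove(choice)
            packed[turn] += choice
            capacity -= choice
        turn = 1 - turn                   # the other agent moves next
    return packed, capacity

# Toy instance: agent A owns {6, 5, 2}, agent B owns {7, 3, 1}, knapsack capacity 12.
print(subset_sum_game([6, 5, 2], [7, 3, 1], 12))   # -> ({0: 8, 1: 4}, 0)
```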
S0377221713007327
Deterministic mine planning models along a time horizon have proved to be very effective in supporting decisions on sequencing the extraction of material in copper mines. Some of these models have been developed for, and used successfully by, CODELCO, the Chilean state copper company. In this paper, we wish to consider the uncertainty in a very volatile parameter of the problem, namely, the copper price along a given time horizon. We represent the uncertainty by a multistage scenario tree. The resulting stochastic model is then converted into a mixed 0–1 Deterministic Equivalent Model using a compact representation. We first introduce the stochastic model that maximizes the expected profit along the time horizon over all scenarios (i.e., as in a risk neutral environment). We then present several approaches for risk management in a risk averse environment. Specifically, we consider the maximization of the Value-at-Risk and several variants of the Conditional Value-at-Risk (one of which is new), the maximization of the expected profit minus the weighted probability of having an undesirable scenario in the solution provided by the model, and the maximization of the expected profit subject to recourse-integer stochastic dominance constraints for a set of profiles given by the pairs of target profits and bounds on either the probability of failure or the expected profit shortfall. We report an extensive computational experience on the actual problem, comparing the risk neutral approach, the tested risk averse strategies and the performance of the traditional deterministic approach that uses the expected value of the uncertain parameters. The results clearly show the advantage of using the risk neutral strategy over the traditional deterministic approach, as well as the advantage of using any risk averse strategy over the risk neutral one.
Medium range optimization of copper extraction planning under uncertainty in future copper prices
S0377221713007339
Growing competition and economic recession are driving the need for more rapid redesign of operations enabled by innovative technologies. The acquisition, development and implementation of systems to manage customer complaints and control the quality assurance process is a critical area for engineering and manufacturing companies. Multimethodologies, and especially those that can bridge ‘soft’ and ‘hard’ OR practices, have been seen as a possible means to facilitate rapid problem structuring, the analysis of alternative process design and then the specification through to implementation of systems solutions. Despite the many ‘hard’ and ‘soft’ OR problem structuring and management methods available, there are relatively few detailed empirical research studies of how they can be combined and conducted in practice. This study examines how a multimethodology was developed, and used successfully, in an engineering company to address customer complaints/concerns, both strategically and operationally. The action research study examined and utilised emerging ‘soft’ OR theory to iteratively develop a new framework that encompasses problem structuring through to technology selection and adoption. This was based on combining Soft Systems Methodology (SSM) for problem exploration and structuring, learning theories and methods for problem diagnosis, and technology management for selecting between alternatives and implementing the solution. The results show that, through the use of action research and the development of a contextualised multimethodology, stakeholders within organisations can participate in the design of new systems and more rapidly adopt technology to address the operational problems of customer complaints in more systemic, innovative and informed ways.
SSM and technology management: Developing multimethodology through practice
S0377221713007340
This paper addresses the joint quay crane and truck scheduling problem at a container terminal, considering the coordination of the two types of equipment to reduce their idle time between performing two successive tasks. For the unidirectional flow problem with only inbound containers, in which trucks go back to quayside without carrying outbound containers, a mixed-integer linear programming model is formulated to minimize the makespan. Several valid inequalities and a property of the optimal solutions for the problem are derived, and two lower bounds are obtained. An improved Particle Swarm Optimization (PSO) algorithm is then developed to solve this problem, in which a new velocity updating strategy is incorporated to improve the solution quality. For small-sized problems, we have compared the solutions of the proposed PSO with the optimal solutions obtained by solving the model using the CPLEX software. The solutions of the proposed PSO for large-sized problems are compared to the two lower bounds because CPLEX could not solve the problem optimally in a reasonable time. For the more general situation considering both inbound and outbound containers, trucks may go back to quayside with outbound containers. The model is extended to handle this problem with bidirectional flow. Experiments show that the improved PSO proposed in this paper is efficient in solving the joint quay crane and truck scheduling problem.
Modeling and solution of the joint quay crane and truck scheduling problem
S0377221713007352
Despite RFID implementations in a wide variety of applications, there are very few instances where every item sold at a retail store is RFID-tagged. While the business case for expensive items to be RFID tagged may be somewhat clear, we claim that even ‘cheap’ items (i.e., those that cost less than an RFID tag) should be RFID tagged for retailers to benefit from efficiencies associated with item-level visibility. We study the relative price premiums a retailer with RFID tagged items can command as well as the retailer’s profit to illustrate the significance of item-level RFID-tagging of both cheap and expensive items at a retail store. Our results indicate that, under certain conditions, item-level RFID tagging of items that cost less than an RFID tag has the potential to generate significant benefits to the retailer. The retailer is also better off tagging all items regardless of their relative price with respect to that of an RFID tag compared to the case where only the expensive item is RFID-tagged.
Should retail stores also RFID-tag ‘cheap’ items?
S0377221713007364
The mixed integer quadratic programming (MIQP) reformulation by Zheng, Sun, Li, and Cui (2012) for probabilistically constrained quadratic programs (PCQP) recently published in EJOR significantly dominates the standard MIQP formulation (Ruszczynski, 2002; Benati & Rizzi, 2007) which has been widely adopted in the literature. Stimulated by the dimensionality problem which Zheng et al. (2012) acknowledge themselves for their reformulations, we study further the characteristics of PCQP and develop new MIQP reformulations for PCQP with fewer variables and constraints. The results from numerical tests demonstrate that our reformulations clearly outperform the state-of-the-art MIQP in Zheng et al. (2012).
New reformulations for probabilistically constrained quadratic programs
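For orientation, the "standard" scenario-based MIQP formulation of a probabilistically constrained quadratic program that the note above refers to can be sketched in generic notation, with one binary indicator per scenario and a big-M constant; the improved reformulations of the note and of Zheng et al. (2012) are more involved and are not reproduced here.

```latex
% Generic big-M formulation of a chance constraint P(a(\xi)^{\top}x \le b(\xi)) \ge 1-\varepsilon
% over scenarios s = 1,\dots,S with probabilities p_s (illustrative notation only).
\begin{align*}
\min_{x,\,z}\quad & x^{\top}Qx + c^{\top}x \\
\text{s.t.}\quad  & a_s^{\top}x \ \le\ b_s + M z_s, && s=1,\dots,S,\\
                  & \sum_{s=1}^{S} p_s z_s \ \le\ \varepsilon, \qquad z_s \in \{0,1\}, && s=1,\dots,S .
\end{align*}
```

Here z_s = 1 allows the constraint to be violated in scenario s, and the knapsack-type constraint caps the probability mass of violated scenarios at epsilon; the number of binaries and big-M constraints is what the dimensionality-reducing reformulations target.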
S0377221713007376
A simple augmented ∊-constraint (SAUGMECON) method is put forward to generate all non-dominated solutions of multi-objective integer programming (MOIP) problems. The SAUGMECON method is a variant of the augmented ∊-constraint (AUGMECON) method proposed in 2009 and improved in 2013 by Mavrotas et al. However, with the SAUGMECON method, all non-dominated solutions can be found much more efficiently thanks to our innovations in algorithm acceleration. These innovative acceleration mechanisms include: (1) an extension to the acceleration algorithm with early exit and (2) the addition of an acceleration algorithm with bouncing steps. The same numerical example in Lokman and Köksalan (2012) is used to illustrate the workings of the method. Then comparisons of computational performance among the method proposed by Özlen and Azizoğlu (2009), Özlen et al. (2012), the method developed by Lokman and Köksalan (2012) and the SAUGMECON method are made by solving randomly generated general MOIP problem instances as well as special MOIP problem instances such as the MOKP and MOSP problem instances presented in Table 4 in Lokman and Köksalan (2012). The experimental results show that the SAUGMECON method performs the best among these methods. More importantly, the advantage of the SAUGMECON method over the method proposed by Lokman and Köksalan (2012) turns out to be increasingly prominent as the number of objectives increases.
A simple augmented ∊-constraint method for multi-objective mathematical integer programming problems
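As background, the augmented ∊-constraint idea that SAUGMECON builds on can be sketched as follows for p objectives f_1, ..., f_p to be maximized (generic notation with slack variables s_k, objective ranges r_k and a small delta > 0, all assumed for illustration); the acceleration mechanisms described in the abstract above operate on top of a scheme of this kind.

```latex
\max_{x,\,s}\quad f_1(x) \;+\; \delta\sum_{k=2}^{p}\frac{s_k}{r_k}
\qquad\text{s.t.}\quad f_k(x)-s_k=\varepsilon_k,\;\; s_k\ge 0,\;\; k=2,\dots,p,\;\; x\in X,
```

and the grid of right-hand sides epsilon_k is swept to enumerate the non-dominated set; the augmentation term prevents weakly non-dominated solutions from being returned.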
S0377221713007388
This paper presents a novel method to quantify the effects of human-related factors on the risk of failure in manufacturing industries. When failures can be caused by operators, the decision maker must intervene to mitigate operator-related risk. There are numerous possible intervention methods; we develop a revenue model that provides the decision maker with a systematic tool to perform a cost-benefit analysis, balancing the advantage of risk reduction against the direct cost of the intervention method. A method is developed to incorporate human-related factors, in addition to machine-related factors, in machine failure analysis. This enables the revenue model to use the expected uptime and the probability of failure, given the operator skill level and working conditions, to calculate the expected revenue associated with each intervention method. A case study of a manufacturing company is considered, incorporating two possible intervention methods: reducing the production rate to provide more cognition time and adding a shift expert to guide the operators. Different courses of action are chosen for the various skill-shift scenarios presented.
Choosing the optimal intervention method to reduce human-related machine failures
S0377221713007406
For a given set of nodes in the plane, the min-power centre is a point such that the cost of the star centred at this point and spanning all nodes is minimised. The cost of the star is defined as the sum of the costs of its nodes, where the cost of a node is an increasing function of the length of its longest incident edge. The min-power centre problem provides a model for optimally locating a cluster-head amongst a set of radio transmitters; however, the problem can also be formulated within a bicriteria location model involving the 1-centre and a generalised Fermat-Weber point, making it suitable for a variety of facility location problems. We use farthest point Voronoi diagrams and Delaunay triangulations to provide a complete geometric description of the min-power centre of a finite set of nodes in the Euclidean plane when cost is a quadratic function. This leads to a new linear-time algorithm for its construction when the convex hull of the nodes is given. We also provide an upper bound for the performance of the centroid as an approximation to the quadratic min-power centre. Finally, we briefly describe the relationship between solutions under quadratic cost and solutions under more general cost functions.
A geometric characterisation of the quadratic min-power centre
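A worked statement of the quadratic-cost objective may help fix ideas. For a star centred at a point x spanning nodes v_1, ..., v_n, the centre's longest incident edge is its distance to the farthest node, while each leaf's only incident edge is its own connection; under the quadratic node cost f(d) = d^2 (notation assumed here for illustration) the star cost therefore reads:

```latex
\operatorname{cost}(x)\;=\;\max_{1\le i\le n}\lVert x-v_i\rVert^{2}\;+\;\sum_{i=1}^{n}\lVert x-v_i\rVert^{2}.
```

Minimizing the max term alone gives the Euclidean 1-centre, and minimizing the sum term alone gives the centroid (the quadratic Fermat-Weber point), which is the bicriteria reading mentioned in the abstract above.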
S0377221713007418
We propose an allocation process for economic risk capital using an internal sequential auction in which investment allowances are based on marginal risk contributions. Division managers have an incentive to give truthful bids because of bonus payments, which are linear in the division’s profit and linked to the auction bids. With our model, the auction process reaches an equilibrium identical to the optimal allocation if division managers have no diverging interests. When division managers do have diverging preferences in terms of empire building, headquarters faces a trade-off between incurring opportunity costs for achieving a suboptimal allocation and bonus costs paid to division managers to overcome their diverging interests. However, bonus costs are partially offset by proceeds from the auction. Depending on the model parameters, total agency costs can become negative. We show that for large values of new risk capital to be allocated, headquarters can always choose a level of bonus payments so that total costs are negative.
Allocation of risk capital on an internal market
S0377221713007431
Convergence speed and diversity of nondominated solutions are two important performance indicators for Multi-Objective Evolutionary Algorithms (MOEAs). In this paper, we propose a Resource Allocation (RA) model based on Game Theory to accelerate the convergence speed of MOEAs, and a novel Double-Sphere Crowding Distance (DSCD) measure to improve the diversity of nondominated solutions. The mechanism of the RA model is that the individuals in each group cooperate with each other to obtain maximum benefits for their group, and then individuals in the same group compete for private interests. The DSCD measure uses hyper-spheres consisting of nearest neighbors to estimate the crowding degree. Experimental results on convergence speed and diversity of nondominated solutions for benchmark problems and a real-world problem show the efficiency of these two proposed techniques.
Resource allocation model and double-sphere crowding distance for evolutionary multi-objective optimization
S0377221713007443
In this work we consider a Transportation Location Routing Problem (TLRP) that can be seen as an extension of the two-stage Location Routing Problem, in which the first stage corresponds to a transportation problem with truck capacity. Two objectives are considered in this research: reduction of distribution cost and balance of workloads for drivers in the routing stage. Here, we present a mathematical formulation for the bi-objective TLRP and propose a new representation for the TLRP based on priorities. This representation lets us manage the problem easily and reduces the computational effort; moreover, it is suitable for use with both local search based and evolutionary approaches. In order to demonstrate its efficiency, it was implemented in two metaheuristic solution algorithms based on the Scatter Tabu Search Procedure for Non-Linear Multiobjective Optimization (SSPMO) and on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) strategies. Computational experiments showed efficient results in terms of solution quality and computing time.
Solving a bi-objective Transportation Location Routing Problem by metaheuristic algorithms
S0377221713007455
We explore the effect of balancing unbalanced panel data when estimating primal productivity indices using non-parametric frontier estimators. First, we list a series of pseudo-solutions aimed at making an unbalanced panel balanced. Then, we discuss some intermediate solutions (e.g., balancing 2 years by 2 years). Furthermore, we link this problem with a variety of literatures on infeasibilities, statistical inference of non-parametric frontier estimators, and the index theory literature focusing on the dynamics of entry and exit in industries. We then empirically illustrate these issues by comparing both Malmquist and Hicks–Moorsteen productivity indices on two data sets. In particular, we test for the differences in distribution when comparing balanced and unbalanced results for a given index and when comparing Malmquist and Hicks–Moorsteen productivity indices for a given type of data set. The latter tests are crucial in answering the question of to what extent the Malmquist index can approximate the Hicks–Moorsteen index, which has a Total Factor Productivity (TFP) interpretation. Finally, we draw up a list of remaining issues that could benefit from further exploration.
Comparing Malmquist and Hicks–Moorsteen productivity indices: Exploring the impact of unbalanced vs. balanced panel data
S0377221713007467
We propose a new power index based on the minimum sum representation (MSR) of a weighted voting game. The MSR offers a redesign of a voting game, such that voting power as measured by the MSR index becomes proportional to voting weight. The MSR index is a coherent measure of power that is ordinally equivalent to the Banzhaf, Shapley–Shubik and Johnston indices. We provide a characterization for a bicameral meet as a weighted game or a complete game, and show that the MSR index is immune to the bicameral meet paradox. We discuss the computation of the MSR index using a linear integer program and the inverse MSR problem of designing a weighted voting game with a given distribution of power.
The minimum sum representation as an index of voting power
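A small illustrative example of the minimum sum representation idea (ours, not taken from the paper above): in the weighted game [51; 49, 49, 2] any two of the three players form a winning coalition, so the game is equivalent to a simple 2-out-of-3 majority game.

```latex
[51;\,49,\,49,\,2]\;\equiv\;[2;\,1,\,1,\,1]
\quad\Longrightarrow\quad
\text{MSR index}=\Bigl(\tfrac13,\tfrac13,\tfrac13\Bigr),
```

so power measured via the MSR weights is proportional to those weights, even though the nominal weights 49, 49 and 2 are far from proportional to actual voting power.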
S0377221713007479
This paper examines the relationship between seasonality, idiosyncratic risk and mutual fund returns using multifactor models. We use a large sample containing the return histories of 728 UK mutual funds over a 23-year period to measure fund performance. We present evidence that idiosyncratic risk cannot be eliminated; we also find evidence of seasonality in all fund categories. Specifically, we find a close relation between the seasonality and the end of the tax-year. We document that the idiosyncratic risk puzzle cannot explain seasonality in fund performance in the UK. We do find, however, that idiosyncratic risk can account for the seasonality in the month of April. Thus, the results show a link between the tax-loss selling hypothesis in April and idiosyncratic risk in that month. Finally, we report evidence that idiosyncratic risk is negatively related to expected returns for most fund classes.
Seasonality and idiosyncratic risk in mutual fund performance
S0377221713007686
A problem of decision making under uncertainty in which the choice must be made between two sets of alternatives instead of two single ones is considered. A number of choice rules are proposed and their main properties are investigated, focusing particularly on the generalizations of stochastic dominance and statistical preference. The particular cases where imprecision is present in the utilities or in the beliefs associated with the two alternatives are considered.
Decision making with imprecise probabilities and utilities by means of statistical preference and stochastic dominance
S0377221713007698
In this paper, we extend the multiple traveling repairman problem by considering a limitation on the total distance that a vehicle can travel; the resulting problem is called the multiple traveling repairmen problem with distance constraints (MTRPD). In the MTRPD, a fleet of identical vehicles is dispatched to serve a set of customers. Each vehicle, which starts from and ends at the depot, is not allowed to travel a distance longer than a predetermined limit, and each customer must be visited exactly once. The objective is to minimize the total waiting time of all customers after the vehicles leave the depot. To optimally solve the MTRPD, we propose a new exact branch-and-price-and-cut algorithm, where the column generation pricing subproblem is a resource-constrained elementary shortest-path problem with cumulative costs. An ad hoc label-setting algorithm armed with a bidirectional search strategy is developed to solve the pricing subproblem. Computational results show the effectiveness of the proposed method. The optimal solutions to 179 out of 180 test instances are reported in this paper. Our computational results serve as benchmarks for future researchers on the problem.
Branch-and-price-and-cut for the multiple traveling repairman problem with distance constraints
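To spell out the waiting-time objective that drives the cumulative-cost pricing subproblem above, consider a single route that leaves the depot and visits customers in the order pi(1), ..., pi(m) with leg travel times t_1, ..., t_m (generic notation, assumed here for illustration):

```latex
\text{waiting time of } \pi(i) \;=\; \sum_{j=1}^{i} t_j,
\qquad
\text{route contribution} \;=\; \sum_{i=1}^{m}\sum_{j=1}^{i} t_j \;=\; \sum_{j=1}^{m} (m-j+1)\,t_j,
```

so early legs are weighted more heavily than later ones, which is why the pricing problem must track cumulative rather than plain arc costs.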
S0377221713007704
In today’s global free market, third-party logistics providers (3PLs) are becoming increasingly important. This paper studies a problem faced by a 3PL operating a warehouse in Shanghai, China, under contract with a major manufacturer of children’s clothing based in the United States. At the warehouse, the 3PL receives textile parcel shipments from the suppliers located in China; each shipment is destined for different retail stores located across the United States. These shipments must be consolidated and loaded into containers of varying sizes and costs, and then sent along shipping routes to different destination ports. An express company, such as UPS or FedEx, unloads the shipments from the containers at the destination ports and distributes them to their corresponding stores or retailers by parcel delivery. The objective is to find an allocation that minimizes the total container transportation and parcel delivery costs. We formulate the problem as an integer programming model, and also propose a memetic algorithm approach to solve the problem in practice. A demonstration of a good solution to this problem was a decisive factor in the awarding of the contract to the 3PL in question.
The freight consolidation and containerization problem
S0377221713007716
Many methods to elicit preference models in multi-attribute decision making rely on evaluations of a set of sample alternatives by decision makers. Using orthogonal design methods to create this set of alternatives might require respondents to evaluate unrealistic alternatives. In this paper, we perform an empirical study to analyze whether the presence of such implausible alternatives has an effect on the quality of utility elicitation. Using a new approach to measure consistency, we find that implausible alternatives, in fact, have a positive effect on the consistency of intra-attribute preference information and on consistency with dominance, but do not affect inter-attribute preference information.
Implausible alternatives in eliciting multi-attribute value functions
S0377221713007728
Most classical scheduling research assumes that the objectives sought are common to all jobs to be scheduled. However, many real-life applications can be modeled by considering different sets of jobs, each one with its own objective(s), and an increasing number of papers addressing these problems has appeared over the last few years. Since so far the area lacks a unified view, the studied problems have received different names (such as interfering jobs, multi-agent scheduling, and mixed-criteria), some authors do not seem to be aware of important contributions in related problems, and solution procedures are often developed without taking into account existing ones. Therefore, the topic is in need of a common framework that allows for a systematic recollection of existing contributions, as well as a clear definition of the main research avenues. In this paper we review multicriteria scheduling problems involving two or more sets of jobs and propose a unified framework providing a common definition, name and notation for these problems. Moreover, we systematically review and classify the existing contributions in terms of the complexity of the problems and the proposed solution procedures, discuss the main advances, and point out future research lines in the topic.
A common framework and taxonomy for multicriteria scheduling problems with interfering and competing jobs: Multi-agent scheduling problems
S0377221713007741
An equilibrium network design model is formulated to determine the optimal configuration of a vehicle sharing program (VSP). A VSP involves a fleet of vehicles (bicycles, cars, or electric vehicles) positioned strategically across a network. In a flexible VSP, users are permitted to check out vehicles to perform trips and return the vehicles to stations close to their destinations. VSP operators need to determine an optimal configuration in terms of station locations, vehicle inventories, and station capacities, that maximizes revenue. Since users are likely to use the VSP resources only if their travel utilities improve, a generalized equilibrium based approach is adopted to design the system. The model takes the form of a bi-level, mixed-integer program. Model properties of uniqueness, inefficiency of equilibrium, and transformations that lead to an exact solution approach are presented. Computational tests on several synthetic instances demonstrate the nature of the equilibrium configuration, the trade-offs between operator and user objectives, and insights for deploying such systems.
Equilibrium network design of shared-vehicle systems
S0377221713007753
In this paper, we study how an informal, long-term relationship between a manufacturer and a retailer performs in turbulent market environments characterized by uncertain demand. We show that the long-term partnership based on repeated interaction is sustainable under price-only contracts when the supply chain partners are sufficiently patient. That is, the channel can be coordinated over a long time horizon when the factor whereby the members discount the future value of this trusting relationship is sufficiently high. Second, above the minimum discount factor, a range of wholesale prices exists that can sustain the long-term partnership, and there are different possible profit divisions between the two players. Third, when the market is turbulent, i.e., either the expected demand or the demand variance changes from period to period according to a probabilistic law, it is typically less possible to sustain the long-term partnership in a booming market or in a market with low demand variability. Finally, obtaining more information about future market fluctuation may not help the supply chain to sustain the long-term partnership, due to partners’ strategic considerations. With the availability of the market signal, total supply chain profits increase, but the retailer may even be worse-off.
Sustaining long-term supply chain partnerships using price-only contracts
S0377221713007765
This work investigates how bargaining power affects negotiations between manufacturers and reverse logistics providers in reverse supply chains under government intervention using a novel three-stage reverse supply chain model for two scenarios: a reverse logistics provider alliance and no reverse logistics provider alliance. Utilizing the asymmetric Nash bargaining game, this work seeks equilibrium negotiation solutions. Analytical results indicate that the reverse logistics provider alliance increases the bargaining power of reverse logistics providers when negotiating with a manufacturer for a profitable recycled-component supply contract; however, manufacturer profits are often reduced. Particularly in the case of a recycled-component vendor-dominated market, a reverse logistics alliance with extreme bargaining power may cause a counter-profit effect that results in decreased profits for all players involved, including buyers (i.e., manufacturers) and allied recycled-component vendors (i.e., reverse logistics providers). Additional managerial insights are provided for discussion.
Alliance or no alliance—Bargaining power in competing reverse supply chains
S0377221713007777
So far, only full frontier nonparametric methods, particularly the data envelopment analysis (DEA) method, have been applied in the nonparametric literature to search for economies of scope and scale. However, these methods present some drawbacks that might lead to biased results. This paper proposes a methodology based on more robust partial frontier nonparametric methods to look for scope and scale economies. Through this methodology it is possible to assess the robustness of these economies, and in particular to assess the influence that extreme data or outliers might have on them. The influence of the imposition of convexity on the production set of firms was also investigated. This methodology was applied to the water utilities that operated in Portugal between 2002 and 2008. There is evidence of economies of vertical integration and economies of scale in drinking water supply utilities and in water and wastewater utilities operating mainly in the retail segment. Economies of scale were found in water and wastewater utilities operating exclusively in the wholesale segment, and in some of these utilities diseconomies of scope were also found. The proposed methodology also allowed us to conclude that the existence of some smaller utilities makes the minimum optimal scales go down.
Computing economies of vertical integration, economies of scope and economies of scale using partial frontier nonparametric methods
S0377221713007789
Practically all organizations seek to create value by selecting and executing portfolios of actions that consume resources. Typically, the resulting value is uncertain, and thus organizations must take decisions based on ex ante estimates about what this future value will be. In this paper, we show that the Bayesian modeling of uncertainties in this selection problem serves to (i) increase the expected future value of the selected portfolio, (ii) raise the expected number of selected actions that belong to the optimal portfolio ex post, and (iii) eliminate the expected gap between the realized ex post portfolio value and the estimated ex ante portfolio value. We also propose a new project performance measure, defined as the probability that a given action belongs to the optimal portfolio. Finally, we provide analytic results to determine which actions should be re-evaluated to obtain more accurate value estimates before portfolio selection. In particular, we show that the optimal targeting of such re-evaluations can yield a much higher portfolio value in return for the total resources that are spent on the execution of actions and the acquisition of value estimates.
Optimal strategies for selecting project portfolios using uncertain value estimates
S0377221713007790
The assumption of a homothetic production function is often maintained in production economics. In this paper we explore the possibility of maintaining homotheticity within a nonparametric DEA framework. The main contribution of this paper is to use the approach suggested by Hanoch and Rothschild (1972) to define a homothetic reference technology. We focus on the largest subset of data points that is consistent with such a homothetic production function. We use the HR-approach to define a piecewise linear homothetic convex reference technology. We propose this reference technology with the purpose of adding structure to the flexible non-parametric BCC DEA estimator. We provide motivation for why such additional structure is sometimes warranted. An estimation procedure derived from the BCC-model and from a maintained assumption of homotheticity is proposed. The performance of the estimator is analyzed using simulation.
A homothetic reference technology in data envelopment analysis
S0377221713007807
Chaotic phenomena, chaos amplification and other interesting nonlinear behaviors have been observed in supply chain systems. Chaos can be defined theoretically if the dynamics under study are produced only by deterministic factors. However, deterministic settings rarely present themselves in reality. In fact, real data are typically unknown. How can chaos theory and its related methodology be applied in the real world? When the demand is stochastic, the interpretation and distribution of the Lyapunov exponents derived from the effective inventory at different supply chain levels are not similar to those under deterministic demand settings. Are the observed dynamics of the effective inventory random, chaotic, or simply quasi-chaotic? In this study, we investigate a situation whereby the chaos analysis is applied to a time series as if its underlying structure, deterministic or stochastic, is unknown. The results show a clear distinction in chaos characterization between the two categories of demand process, deterministic vs. stochastic. They also highlight the complexity of the interplay between stochastic demand processes and nonlinear dynamics. Therefore, caution should be exercised in interpreting system dynamics when applying chaos analysis to a system of unknown underlying structure. By understanding this delicate interplay, decision makers have a better chance of tackling the problem correctly or more effectively at the demand end or the supply end.
Interpreting supply chain dynamics: A quasi-chaos perspective
S0377221713007819
In this paper, we use stochastic dynamic programming to model the choice of a municipality which has to design an optimal waste management program under uncertainty about the price of recyclables in the secondary market. The municipality can, by undertaking an irreversible investment, adopt a flexible program which integrates the existing landfill strategy with recycling, keeping the option to switch back to landfilling, if profitable. We determine the optimal share of waste to be recycled and the optimal timing for the investment in such a flexible program. We find that adopting a flexible program rather than a non-flexible one, the municipality: (i) invests in recycling capacity under circumstances where it would not do so otherwise; (ii) invests earlier; and (iii) benefits from a higher expected net present value.
Flexible waste management under uncertainty
S0377221713007820
Based on an application in forestry, we study the dense k-subgraph problem: Given a parameter k ∈ N and an undirected weighted graph G, the task is to find a subgraph of G with k vertices such that the sum of the weights of the induced edges is maximized. The problem is well-known to be NP-hard and difficult to approximate if the underlying graph does not satisfy the triangle inequality. In the present paper, we develop a fast preprocessing routine which results in a graph still containing a (1 + 1/k)-approximation for the problem. The key idea is to identify vertices which are of low interest to an optimal solution for the problem due to falling below a special ‘threshold’. Using this information, we initiate a chain-reaction of vertex eliminations to reduce the number of vertices in the input graph without losing a lot of information. This graph reduction step runs in polynomial time. The success of this preprocessing step mainly depends on finding a large threshold, which ensures that many vertices can be removed. For this purpose, we devise an efficient algorithm which yields a provably optimal threshold. Finally, we present empirical studies for our application. Even though the graphs tied to our application in forestry have several hundred vertices and do not satisfy the triangle inequality, they exhibit special properties which yield a very favorable performance of our approach. The pruning step typically removes more than 90% of the vertices, and thus enables an optimal solution of the problem on the reduced graph.
Threshold-based preprocessing for approximating the weighted dense k-subgraph problem
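To illustrate the chain-reaction idea described above, here is a small sketch of a threshold-driven pruning cascade: vertices whose weighted degree drops below a given threshold are removed, which may push further vertices below it. The elimination rule, the threshold choice and the toy graph are illustrative assumptions; the paper's own criterion and optimal-threshold algorithm are more refined.

```python
# Sketch of a threshold-based vertex-elimination cascade on a weighted graph.
from collections import defaultdict

def prune(edges, threshold):
    """edges: dict mapping frozenset({u, v}) -> nonnegative edge weight."""
    degree = defaultdict(float)
    for e, w in edges.items():
        for v in e:
            degree[v] += w
    removed = set()
    queue = [v for v, d in degree.items() if d < threshold]
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for e, w in edges.items():          # cascade: neighbours lose weight
            if v in e:
                u = next(iter(e - {v}))
                if u not in removed:
                    degree[u] -= w
                    if degree[u] < threshold:
                        queue.append(u)
    return {v for v in degree if v not in removed}

# Toy example: only vertex 'd' falls below the threshold and is pruned.
toy = {frozenset({'a', 'b'}): 5.0, frozenset({'b', 'c'}): 4.0, frozenset({'c', 'd'}): 0.5}
print(prune(toy, threshold=2.0))   # -> {'a', 'b', 'c'}
```

A larger threshold removes more vertices in the cascade, which is why the paper invests in computing a provably optimal one.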
S0377221713007832
For a current deregulated power system, a large amount of operating reserve is often required to maintain the reliability of the power system using traditional approaches. In this paper, we propose a two-stage robust optimization model to address the network constrained unit commitment problem under uncertainty. In our approach, uncertain problem parameters are assumed to be within a given uncertainty set. We study cases with and without transmission capacity and ramp-rate limits (the latter case was described in Zhang and Guan (2009), and its analysis is included in Section 3 of this paper). We also analyze solution schemes for each problem, which include an exact solution approach and an efficient heuristic approach that provides tight lower and upper bounds for the general network constrained robust unit commitment problem. The final computational experiments on an IEEE 118-bus system verify the effectiveness of our approaches, as compared to the nominal model that does not consider the uncertainty.
Two-stage network constrained robust unit commitment problem
S0377221713007844
This paper presents a modified Variable Neighborhood Search (VNS) heuristic algorithm for solving the Discrete Ordered Median Problem (DOMP). This heuristic is based on new neighborhood structures that allow an efficient encoding of the solutions of the DOMP, avoiding sorting in the evaluation of the objective function at each considered solution. The algorithm is based on a data structure, computed in preprocessing, that organizes the minimal necessary information to update and evaluate solutions in linear time without sorting. In order to investigate the performance, the new algorithm is compared with other heuristic algorithms previously available in the literature for solving the DOMP. We report on some computational experiments based on the well-known N-median instances of the ORLIB with up to 900 nodes. The obtained results are comparable or superior to those of existing algorithms in the literature, both in running times and in the number of best solutions found.
A modified variable neighborhood search for the discrete ordered median problem
S0377221713007856
The efficiency in production is often analysed as technical efficiency using the production frontier function. Efficiency scores are usually based on distance computations to the frontier in an m + s-dimensional space, where m inputs produce s outputs. In addition, efficiency improvements consider the total consumption of each input. However, in many cases, the “consumption” of each input can be divided into input-consumption sections (ICSs), and trade-offs among the ICSs are possible. This share framework can be used for computing efficiency. This analysis provides information about both the total optimal consumption of each input, as does data envelopment analysis, and the most efficient allocation of the “consumption” among the ICSs. This paper studies technical efficiency using this approach and applies it to the olive oil sector in Andalusia (Spain). A non-parametric methodology is presented, and an input-oriented Multi-Criteria Linear Programming model (MLP) is proposed. The analysis is developed at the global, input and ICS levels, defining the extent of satisfaction achieved at all these levels for each company, in accordance with its own preferences. The companies’ preferences are modelled with their utility function and their set of weights. MLP offers more detailed information to assist decision makers than other models previously proposed in the literature. In addition to this application, it is concluded that there is room for improvement in the olive oil sector, particularly in the management of skilled labour. Additionally, the solutions obtained under two opposite scenarios indicate that the model is suitable for the intended decision making process.
A new multicriteria approach for the analysis of efficiency in the Spanish olive oil sector by modelling decision maker preferences
S0377221713007868
A budget-constrained buyer wants to purchase items from a shortlisted set. Items are differentiated by observable quality and sellers have private reserve prices for their items. The buyer’s problem is to select a subset of maximal quality. Money does not enter the buyer’s objective function, but only his constraints. Sellers quote prices strategically, inducing a knapsack game. We report the Bayesian optimal mechanism for the buyer’s problem. We find that simultaneous take-it-or-leave-it offers are interim optimal.
Bayesian optimal knapsack procurement
S0377221713007881
We study a two-machine flowshop scheduling problem with time-dependent deteriorating jobs, i.e. the processing times of jobs are an increasing function of their starting time. The objective is to minimize the total completion time subject to minimum makespan. We propose a mixed integer programming model, and develop two pairwise interchange algorithms and a branch-and-bound procedure to solve the problem while using several dominance conditions to limit the size of the search tree. Several polynomial-time solvable special cases are discussed. Finally, numerical studies are performed to examine the effectiveness and the efficiency of the proposed algorithms.
Bicriteria hierarchical optimization of two-machine flow shop scheduling problem with time-dependent deteriorating jobs
S0377221713007893
In modern production systems, customized mass production of complex products, such as automobiles or white goods, is often realized at assembly lines with a high degree of manual labor. For firms that apply assembly systems, the assembly line balancing problem (ALBP) arises, which is to optimally assign tasks to stations or workers with respect to some constraints and objectives. Although the literature provides a number of relevant models and efficient solution methods for the ALBP, firms, in most cases, do not use this knowledge to balance their lines. Instead, the planning is mostly performed manually by numerous planners responsible for small sub-problems. This is because of the lack of data, such as the precedence relations between the tasks to be performed. Such data are hard to collect and to keep updated. Klindworth, Otto, and Scholl (2012) proposed an approach to collect and to maintain the data on precedence relations between tasks at a low cost, as well as to produce new high-quality feasible assembly balances based on these data. They utilize the knowledge on former production plans available in the firm. However, due to reliance on a single source of information, their concept needs long warming-up periods. Therefore, we enhance the concept by incorporating multiple sources of information available at firms, such as the modular structure, and present guidelines on how to conduct valuable interviews. The proposed interview enhancements improve the achieved results significantly. As a result, our approach generates more efficient new feasible assembly line balances without requiring such long warming-up periods.
Multiple-source learning precedence graph concept for the automotive industry
S0377221713007911
This paper provides an overview of developments in robust optimization since 2007. It seeks to give a representative picture of the research topics most explored in recent years, highlight common themes in the investigations of independent research teams and highlight the contributions of rising as well as established researchers both to the theory of robust optimization and its practice. With respect to the theory of robust optimization, this paper reviews recent results on the cases without and with recourse, i.e., the static and dynamic settings, as well as the connection with stochastic optimization and risk theory, the concept of distributionally robust optimization, and findings in robust nonlinear optimization. With respect to the practice of robust optimization, we consider a broad spectrum of applications, in particular inventory and logistics, finance, revenue management, but also queueing networks, machine learning, energy systems and the public good. Key developments in the period from 2007 to present include: (i) an extensive body of work on robust decision-making under uncertainty with uncertain distributions, i.e., “robustifying” stochastic optimization, (ii) a greater connection with decision sciences by linking uncertainty sets to risk theory, (iii) further results on nonlinear optimization and sequential decision-making and (iv) besides more work on established families of examples such as robust inventory and revenue management, the addition to the robust optimization literature of new application areas, especially energy systems and the public good.
Recent advances in robust optimization: An overview
S0377221713007923
This work deals with the continuous time lot-sizing inventory problem when demand and costs are time-dependent. We adapt a cost balancing technique developed for the periodic-review version of our problem to the continuous-review framework. We prove that the solution obtained costs at most twice the cost of an optimal solution. We study the numerical complexity of the algorithm and generalize the policy to several important extensions while preserving its performance guarantee of two. Finally, we propose a modified version of our algorithm for the lot-sizing model with some restricted settings that improves the worst-case bound.
Approximation algorithms for deterministic continuous-review inventory lot-sizing problems with time-varying demand
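The cost-balancing idea referred to above can be sketched in generic continuous-review notation (fixed ordering cost K, demand rate d(s), per-unit holding cost rate h(u); all assumed here for illustration rather than taken from the paper): an order placed at time t is sized to cover the demand up to the next order epoch t', which is chosen so that the holding cost this order incurs balances the fixed cost,

```latex
\int_{t}^{t'} h(u)\Bigl(\int_{u}^{t'} d(s)\,\mathrm{d}s\Bigr)\mathrm{d}u \;=\; K ,
```

where the inner integral is the inventory from this order still on hand at time u. Balancing the ordering and holding components in this way is the mechanism behind the factor-two performance guarantee stated in the abstract.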
S0377221713007935
This paper studies an operational problem arising at a container terminal, consisting of scheduling a yard crane to carry out a set of container storage and retrieval requests in a single container block. The objective is to minimize the total travel time of the crane to carry out all requests. The block has multiple input and output (I/O) points located at both the seaside and the landside. The crane must move retrieval containers from the block to the I/O points, and must move storage containers from the I/O points to the block. The problem is formulated as a continuous time integer programming model and its computational complexity is established. We use intrinsic properties of the problem to propose a two-phase solution method to optimally solve the problem. In the first phase, we develop a merging algorithm which tries to patch subtours of an optimal solution of an assignment problem relaxation of the problem and obtain a complete crane tour without adding extra travel time to the optimal objective value of the relaxed problem. The algorithm requires common I/O points to patch subtours. This is efficient and often results in obtaining an optimal solution of the problem. If an optimal solution has not been obtained, the solution of the first phase is embedded in the second phase where a branch-and-bound algorithm is used to find an optimal solution. The numerical results show that the proposed method can quickly obtain an optimal solution of the problem. Compared to the random and Nearest Neighbor heuristics, the total travel time is on average reduced by more than 30% and 14%, respectively. We also validate the solution method at a terminal.
An exact method for scheduling a yard crane
S0377221713007947
In this paper, we use a biform-game approach for analyzing the impact of surplus division in supply chains on investment incentives. In the first stage of the game, firms decide non-cooperatively on investments. In the second stage, the surplus is shared according to the Shapley value. We find that all firms have inefficiently low investment incentives which, however, depend on their position in the supply chain. Cross-subsidies for investment costs can mitigate, but not eliminate, the underinvestment problem. Vertical integration between at least some firms yields efficient investments, but may nevertheless reduce the aggregated payoff of the firms. We show how the size of our effects depends on the structure of the supply chain and the efficiency of the investment technology. Various extensions demonstrate that our results are qualitatively robust.
Surplus division and investment incentives in supply chains: A biform-game analysis
S0377221713007959
Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models that simultaneously aggregate several conflicting attributes, such as the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to the present day.
Financial portfolio management through the goal programming model: Current state-of-the-art
S0377221713007960
In this paper the combined fleet-design, ship-scheduling and cargo-routing problem with limited availability of ships in liner shipping is considered. A composite solution approach is proposed in which the ports are first aggregated into port clusters to reduce the problem size. When the cargo flows are disaggregated, a feeder service network is introduced to ship the cargo within a port cluster. The solution method is tested on a problem instance containing 58 ports on the Asia–Europe trade lane of Maersk. The best obtained profit gives an improvement of more than 10% compared to the reference network based on the Maersk network.
Methods for strategic liner shipping network design
S0377221713007972
In this paper we study a firm’s disposition decision for returned end-of-use products, which can either be remanufactured and sold, or dismantled into parts that can be reused. We formulate this problem as a multi-period stochastic dynamic program, and find the structure of the optimal policy, which consists of monotonic switching curves. Specifically, if it is optimal to remanufacture in a given period and for given inventory levels, then it is also optimal to remanufacture when the inventory of part(s) is higher or the inventory of remanufactured product is lower.
Dismantle or remanufacture?
S0377221713007984
In this note we derive alternative weighting schemes that complement those of Färe and Zelenyuk (2003) for consistent aggregation of Farrell efficiencies when the technology exhibits (global) constant returns to scale.
A postscript on aggregate Farrell efficiencies