FileName | Abstract | Title
S0377221713002105
In this paper, we consider three quadratic optimization problems which are frequently applied in portfolio theory, i.e., the Markowitz mean–variance problem as well as the problems based on the mean–variance utility function and the quadratic utility. Conditions are derived under which the solutions of these three optimization procedures coincide and lie on the efficient frontier, the set of mean–variance optimal portfolios. It is shown that the solutions of the Markowitz optimization problem and the quadratic utility problem are not always mean–variance efficient. The conditions for the mean–variance efficiency of the solutions depend on the unknown parameters of the asset returns. We deal with the problem of parameter uncertainty in detail and derive the probabilities that the estimated solutions of the Markowitz problem and the quadratic utility problem are mean–variance efficient. Because these probabilities deviate from one, the above-mentioned quadratic optimization problems are not stochastically equivalent. The obtained results are illustrated by an empirical study.
On the equivalence of quadratic optimization problems commonly used in portfolio theory
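For reference, the three formulations being compared can be written compactly. A sketch in standard portfolio notation, assumed here rather than taken from the paper (w: portfolio weights, μ: mean vector, Σ: covariance matrix, X: asset return vector, μ₀: target return, γ, α > 0: risk-aversion parameters):

```latex
% Markowitz mean--variance problem (target expected return \mu_0):
\min_{w}\; w^{\top}\Sigma w \quad \text{s.t.}\quad w^{\top}\mu \ge \mu_0,\;\; w^{\top}\mathbf{1} = 1
% Mean--variance utility problem:
\max_{w}\; w^{\top}\mu - \tfrac{\gamma}{2}\, w^{\top}\Sigma w \quad \text{s.t.}\quad w^{\top}\mathbf{1} = 1
% Quadratic utility problem (utility of portfolio return w^{\top}X):
\max_{w}\; \mathbb{E}\!\left[\, w^{\top}X - \tfrac{\alpha}{2}\,\bigl(w^{\top}X\bigr)^{2} \right] \quad \text{s.t.}\quad w^{\top}\mathbf{1} = 1
```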
S0377221713002117
In this article, we present a new exact algorithm for solving the simple assembly line balancing problem given a determined cycle time (SALBP-1). The algorithm is a station-oriented bidirectional branch-and-bound procedure based on a new enumeration strategy that explores the feasible solutions tree in a non-decreasing idle time order. The procedure uses several well-known lower bounds, dominance rules and a new logical test based on the assimilation of the feasibility problem for a given cycle time and number of stations (SALBP-F) to a maximum-flow problem. The algorithm has been tested with a series of computational experiments on a well-known set of problem instances. The tests show that the proposed algorithm outperforms the best existing exact algorithm for solving SALBP-1, verifying optimality for 264 of the 269 benchmark instances.
An enumeration procedure for the assembly line balancing problem based on branching by non-decreasing idle time
S0377221713002129
Under circumstances of increasing environmental pressures from markets and regulators, focal companies in supply chains have recognized the importance of greening their supply chain through green supplier development programs. Various studies have started to explore the inter-relationships between green supply chain management and supplier performance. Much of this performance can be achieved only with suppliers’ involvement in green supplier development programs. But, the literature focusing on green supplier development programs and supplier involvement propensity is very limited. In addition, formal tools and models for focal companies to evaluate these inter-relationships, especially considering propensity of suppliers’ involvement, are even rarer. To help address this gap in the literature, we introduce a grey analytical network process-based (grey ANP-based) model to identify green supplier development programs that will effectively improve suppliers’ performance. We further comprehensively evaluate green supplier development programs with explicit consideration of suppliers’ involvement propensity levels. A real world example is introduced to demonstrate the effectiveness of the model. We end with a discussion of managerial implications and present some directions for further research.
Evaluating green supplier development programs with a grey-analytical network process-based methodology
S0377221713002130
This paper presents a genetic algorithm for solving the resource-constrained project scheduling problem. The innovative component of the algorithm is the use of a magnet-based crossover operator that can preserve up to two contiguous parts from the receiver and one contiguous part from the donator genotype. For this purpose, a number of genes in the receiver genotype absorb one another to have the same order and contiguity they have in the donator genotype. The ability of maintaining up to three contiguous parts from two parents distinguishes this crossover operator from the powerful and famous two-point crossover operator, which can maintain only two contiguous parts, both from the same parent. Comparing the performance of the new procedure with that of other procedures indicates its effectiveness and competence.
A competitive magnet-based genetic algorithm for solving the resource-constrained project scheduling problem
S0377221713002142
Interactive approaches employing cone contraction for multi-criteria mixed integer optimization are introduced. In each iteration, the decision maker (DM) is asked to give a reference point (new aspiration levels). The subsequent Pareto optimal point is the reference point projected on the set of admissible objective vectors using a suitable scalarizing function. Thereby, the procedures solve a sequence of optimization problems with integer variables. In such a process, the DM provides additional preference information via pair-wise comparisons of Pareto optimal points identified. Using such preference information and assuming a quasiconcave and non-decreasing value function of the DM we restrict the set of admissible objective vectors by excluding subsets, which cannot improve over the solutions already found. The procedures terminate if all Pareto optimal solutions have been either generated or excluded. In this case, the best Pareto point found is an optimal solution. Such convergence is expected in the special case of pure integer optimization; indeed, numerical simulation tests with multi-criteria facility location models and knapsack problems indicate reasonably fast convergence, in particular, under a linear value function. We also propose a procedure to test whether or not a solution is a supported Pareto point (optimal under some linear value function).
Cone contraction and reference point methods for multi-criteria mixed integer optimization
S0377221713002154
We study a problem of tactical planning in a divergent supply chain. It involves decisions regarding production, inventory, internal transportation, sales and distribution to customers. The problem is motivated by the context of a company in the speciality oils industry. The overall objective at tactical level is to maximize contribution and, in order to achieve this, the planning has been divided into two separate problems. The first problem concerns sales where the final sales and distribution planning is decentralized to individual sellers. The second problem concerns production, transportation and inventory planning through refineries, hubs and depots and is managed centrally with the aim of minimizing costs. Due to this decoupling, the solution of the two problems needs to be coordinated in order to achieve the overall objective. In the company, this is pursued through an internal price system aiming at giving the sellers the incentives needed to align their decisions with the overall objective. We propose and discuss linear programming models for the decoupled and integrated planning problems. We present numerical examples to illustrate potential effects of integration and coordination and discuss the advantages and disadvantages of the integrated over the decoupled approach. While the total contribution is higher in the integrated approach, it has also been found that the sellers’ contribution can be considerably lower. Therefore, we also suggest contribution sharing rules to achieve a solution where both the company and the sellers attain a better outcome under the integrated planning.
Speciality oils supply chain optimization: From a decoupled to an integrated planning approach
S0377221713002166
Natural earthquake disasters are unprecedented incidents which take many lives and cause major damage to lifeline infrastructures. Various agencies in a country are responsible for reducing such adverse impacts within specific budgets. These responsibilities range from before to after the incident, targeting one of the main phases of disaster management (mitigation, preparedness, and response). The use of OR in disaster management and in coordinating its phases has been mostly ignored and is highly recommended in former reviews. This paper presents a formulation to coordinate three main agencies and proposes a heuristic approach to solve the different introduced sub-problems. The results show an improvement of 7.5–24% when the agencies are coordinated. Model notation: entire population of sub-region r in region l; vulnerable population ratio of sub-region r in region l; vulnerable population of sub-region r in region l (= P_rl · q_rl); the CiTy Disaster Level (death toll); improvement ratio of sub-region r in region l (between 0 and 1); budget for investment in the building renovation sector; existing emergency supplies in region k; inventory cost of one unit of humanitarian goods; additional level of humanitarian goods; level of relief supplies sent from sub-region k to sub-region rl; budget for investment in the emergency response sector; travel time from sub-region k to sub-region rl; survival function describing the efficiency of goods mobility; full cost of retrofitting link (i, j) ∈ L; failure probability of link (i, j) ∈ L; retrofitting ratio of link (i, j) ∈ L; budget for investment in the transportation sector.
A multi-agent optimization formulation of earthquake disaster prevention and management
S0377221713002178
This note proposes an alternative procedure for identifying violated subtour elimination constraints (SECs) in branch-and-cut algorithms for elementary shortest path problems. The procedure is also applicable to other routing problems, such as variants of travelling salesman or shortest Hamiltonian path problems, on directed graphs. The proposed procedure is based on computing the strong components of the support graph. The procedure possesses a better worst-case time complexity than the standard way of separating SECs, which uses maximum flow algorithms, and is easier to implement.
A note on the separation of subtour elimination constraints in elementary shortest path problems
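The separation idea lends itself to a compact sketch: build the support graph from the fractional solution and emit one subtour elimination constraint per non-trivial strongly connected component. A minimal illustration assuming networkx is available; the edge set, variable values, and tolerance below are hypothetical placeholders, not the paper's implementation:

```python
import networkx as nx

def separate_secs(arcs, x_values, eps=1e-6):
    """Find violated subtour elimination constraints (SECs).

    arcs     : list of directed arcs (i, j)
    x_values : dict mapping arc -> value of x_ij in the LP relaxation
    Returns one violated SEC (a node set S) per non-trivial strong
    component of the support graph.
    """
    # The support graph keeps only arcs carried by the fractional solution.
    support = nx.DiGraph()
    support.add_edges_from(a for a in arcs if x_values.get(a, 0.0) > eps)

    cuts = []
    for component in nx.strongly_connected_components(support):
        if len(component) >= 2:          # a single node cannot form a subtour
            S = frozenset(component)
            # SEC: sum of x_ij over arcs with both ends in S <= |S| - 1
            lhs = sum(x_values[a] for a in arcs
                      if a[0] in S and a[1] in S and a in x_values)
            if lhs > len(S) - 1 + eps:   # violated -> add as a cut
                cuts.append(S)
    return cuts

# Tiny usage example with a fractional 2-cycle between nodes 1 and 2:
arcs = [(0, 1), (1, 2), (2, 1), (2, 3)]
x = {(0, 1): 1.0, (1, 2): 1.0, (2, 1): 1.0, (2, 3): 1.0}
print(separate_secs(arcs, x))   # -> [frozenset({1, 2})]
```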
S0377221713002208
Considering the inherent connection between supplier selection and inventory management in supply chain networks, this article presents a multi-period inventory lot-sizing model for a single product in a serial supply chain, where raw materials are purchased from multiple suppliers at the first stage and external demand occurs at the last stage. The demand is known and may change from period to period. The stages of this production–distribution serial structure correspond to inventory locations. The first two stages stand for storage areas for raw materials and finished products in a manufacturing facility, and the remaining stages symbolize distribution centers or warehouses that take the product closer to customers. The problem is modeled as a time-expanded transshipment network, which is defined by the nodes and arcs that can be reached by feasible material flows. A mixed integer nonlinear programming model is developed to determine an optimal inventory policy that coordinates the transfer of materials between consecutive stages of the supply chain from period to period while properly placing purchasing orders to selected suppliers and satisfying customer demand on time. The proposed model minimizes the total variable cost, including purchasing, production, inventory, and transportation costs. The model can be linearized for certain types of cost structures. In addition, two continuous and concave approximations of the transportation cost function are provided to simplify the model and reduce its computational time.
A dynamic inventory model with supplier selection in a serial supply chain structure
S0377221713002221
In this paper, we consider a latent Markov process governing the intensity rate of a Poisson process model for software failures. The latent process enables us to infer performance of the debugging operations over time and allows us to deal with the imperfect debugging scenario. We develop the Bayesian inference for the model and also introduce a method to infer the unknown dimension of the Markov process. We illustrate the implementation of our model and the Bayesian approach by using actual software failure data.
A Markov modulated Poisson model for software reliability
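A Markov modulated Poisson process is easy to simulate, which helps build intuition for this model class: a latent continuous-time Markov chain switches the failure intensity between levels. A small sketch with a two-state chain; all rates are illustrative values, not estimates from the paper:

```python
import random

def simulate_mmpp(lambdas, q, t_end, seed=1):
    """Simulate failure times of a Markov modulated Poisson process.

    lambdas : Poisson failure intensity of each latent state
    q       : rate of leaving each state (2-state chain: jump to the other)
    t_end   : simulation horizon
    """
    rng = random.Random(seed)
    t, state, failures = 0.0, 0, []
    while t < t_end:
        # Competing exponentials: next state switch vs. next failure.
        t_switch = rng.expovariate(q[state])
        t_fail = rng.expovariate(lambdas[state])
        if t_fail < t_switch:
            t += t_fail
            if t < t_end:
                failures.append(t)
        else:
            t += t_switch          # regime change, e.g. debugging improved
            state = 1 - state
    return failures

# State 0: buggy phase (high intensity); state 1: stabilized phase.
times = simulate_mmpp(lambdas=[2.0, 0.2], q=[0.1, 0.05], t_end=100.0)
print(len(times), "failures; first five:", [round(x, 2) for x in times[:5]])
```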
S0377221713002233
We consider a consignment contract with consumer non-defective returns behavior. In our model, an upstream vendor contracts with a downstream retailer. The vendor decides his consignment price charged to the retailer for each unit sold and his refund price for each returned item, and then the retailer sets her retail price for selling the product. The vendor gets paid based on net sold units and salvages unsold units as well as returned items in a secondary market. Under the framework, we study and compare two different consignment arrangements: the retailer/vendor manages consignment inventory (RMCI/VMCI) programs. To study the impact of return policy, we discuss a consignment contract without return policy as a benchmark. We show that whether or not the vendor offers a return policy, it is always beneficial for the channel to delegate the inventory decision to the vendor. We find that the vendor’s return policy depends crucially on the salvage value of returns. If the product has no salvage value, the vendor’s optimal decision is not to offer a return policy; otherwise, the vendor can gain more profit by offering a return policy when the salvage value turns out to be positive.
The impact of consumer returns policies on consignment contracts with inventory control
S0377221713002245
Trade credit arises when a buyer delays payment for purchased goods or services. Its nature has predominantly been an area of inquiry for researchers from the disciplines of finance, marketing, and economics but it has received relatively little attention in other domains. In our article, we provide an integrative review of the existing literature and discuss conflicting study outcomes. We organize the relevant literature into seven areas of inquiry and analyze four in detail: trade credit motives, order quantity decisions, credit term decisions, and settlement period decisions. Additionally, we derive a detailed agenda for future research in these areas.
A review of trade credit literature: Opportunities for research in operations
S0377221713002452
Emission trading schemes such as the European Union Emissions Trading System (EU ETS) attempt to reconcile economic efficiency with ecological efficiency by creating financial incentives for companies to invest in climate-friendly innovations. Using real options methodology, we demonstrate that under uncertainty, economic and ecological efficiency continue to be mutually exclusive. This problem is even worse if a climate-friendly project depends on investment along a whole supply chain. We model a sequential bargaining game in a supply chain where the parties negotiate over the implementation of a carbon dioxide (CO2) saving investment project. We show that the outcome of their bargaining is not economically efficient and even less ecologically efficient. Furthermore, we show that a supply chain becomes less economically efficient and less ecologically efficient with every additional chain link. Finally, we make recommendations for how managers or politicians can improve the situation and thereby increase economic as well as ecological efficiency and thus also the eco-efficiency of supply chains.
Timing and eco(nomic) efficiency of climate-friendly investments in supply chains
S0377221713002464
We present a novel generic programming implementation of a column-generation algorithm for the generalized staff rostering problem. The problem is represented as a generalized set partitioning model, which is able to capture commonly occurring problem characteristics given in the literature. Columns of the set partitioning problem are generated dynamically by solving a pricing subproblem, and constraint branching in a branch-and-bound framework is used to enforce integrality. The pricing problem is formulated as a novel three-stage nested shortest path problem with resource constraints that exploits the inherent problem structure. A very efficient implementation of this pricing problem is achieved by using generic programming principles, in which careful use of the C++ pre-processor allows the generator to be customized for the target problem at compile-time. As well as decreasing run times, this new approach creates a more flexible modeling framework that is well suited to handling the variety of problems found in staff rostering. Comparison with a more standard run-time customization approach shows that speedups of around a factor of 20 are achieved using our new approach. The adaptation to a new problem is simple and the implementation is automatically adjusted internally according to the new definition. We present results for three practical rostering problems. The approach captures all features of each problem and is able to provide high-quality solutions in less than 15 minutes. In two of the three instances, the optimal solution is found within this time frame.
Branch-and-price for staff rostering: An efficient implementation using generic programming and nested column generation
S0377221713002476
Because individual interpretations of the analytic hierarchy process (AHP) linguistic scale vary for each user, this study proposes a novel framework that AHP decision makers can use to generate numerical scales individually, based on the 2-tuple linguistic modeling of AHP scale problems. By using the concept of transitive calibration, individual characteristics in understanding the AHP linguistic scale are first defined. An algorithm is then proposed for detecting the individual characteristics from the linguistic pairwise comparison data that is associated with each of the AHP individual decision makers. Finally, a nonlinear programming model is proposed to generate individual numerical scales that optimally match the obtained individual characteristics. Two well-known numerical examples are re-examined using the proposed framework to demonstrate its validity.
Numerical scales generated individually for analytic hierarchy process
S0377221713002488
In this paper we develop a supply contract for a two-echelon manufacturer–retailer supply chain with a bidirectional option, which may be exercised as either a call option or a put option. Under the bidirectional option contract, we derive closed-form expressions for the retailer’s optimal order strategies, including the initial order strategy and the option purchasing strategy, with a general demand distribution. We also analytically examine the feedback effects of the bidirectional option on the retailer’s initial order strategy. In addition, taking a chain-wide perspective, we explore how the bidirectional option contract should be set to attain supply chain coordination.
Coordination of supply chains with bidirectional option contracts
S0377221713002506
We consider the optimal ship navigation problem wherein the goal is to find the shortest path between two given coordinates in the presence of obstacles subject to safety distance and turn-radius constraints. These obstacles can be debris, rock formations, small islands, ice blocks, other ships, or even an entire coastline. We present a graph-theoretic solution on an appropriately weighted directed graph representation of the navigation area obtained via 8-adjacency integer lattice discretization and utilization of the A* algorithm. We explicitly account for the following three conditions as part of the turn-radius constraints: (1) the ship’s left and right turn radii are different, (2) the ship’s speed reduces while turning, and (3) the ship needs to navigate a certain minimum number of lattice edges along a straight line before making any turns. The last constraint ensures that the navigation area can be discretized at any desired resolution. Once the optimal (discrete) path is determined, we smoothen it to emulate the actual navigation of the ship. We illustrate our methodology on an ice navigation example involving a 100,000 DWT merchant ship and present a proof-of-concept by simulating the ship’s path in a full-mission ship handling simulator.
Optimal ship navigation with safety distance and realistic turn constraints
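The graph-theoretic core, A* on an 8-adjacency lattice with blocked cells, can be sketched briefly. Turn-radius, asymmetric-turn, and speed-reduction constraints are omitted; this is a minimal illustration only, and the grid, start, and goal are invented:

```python
import heapq
import math

MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def astar(blocked, start, goal, n):
    """Shortest 8-adjacency path on an n x n lattice avoiding blocked cells."""
    def h(c):  # admissible heuristic: Euclidean distance to the goal
        return math.hypot(c[0] - goal[0], c[1] - goal[1])

    open_heap = [(h(start), 0.0, start, None)]
    parents, g_best = {}, {start: 0.0}
    while open_heap:
        f, g, cell, parent = heapq.heappop(open_heap)
        if cell in parents:               # already expanded with a better cost
            continue
        parents[cell] = parent
        if cell == goal:                  # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        x, y = cell
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < n and 0 <= nxt[1] < n) or nxt in blocked:
                continue
            ng = g + math.hypot(dx, dy)   # 1 for straight, sqrt(2) for diagonal
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cell))
    return None

# A small obstacle "wall" with a gap, a stand-in for e.g. an ice ridge:
blocked = {(5, y) for y in range(1, 10)}
print(astar(blocked, start=(0, 0), goal=(9, 9), n=10))
```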
S0377221713002518
Products that are not recycled at the end of their life increasingly damage the environment. In a collection–remanufacturing scheme, these end-of-life products can generate new profits. Based on the personal computer industry, this study defines an analytical model used to explore the implications of recycling on the reverse supply chain from an efficiency perspective for all participants in the process. The cases considered for analysis are the two- and three-echelon supply chains, where we first look at the decentralized reverse setting followed by the coordinated setting through implementation of a revenue sharing contract. We define customer willingness to return obsolete units as a function of the discount offered by the retailer in exchange for recycling devices with a remanufacturing value. The results show that performance measures and total supply chain profits improve through coordination with revenue sharing contracts in both two- and three-echelon reverse supply chains.
Reverse supply chain coordination by revenue sharing contract: A case for the personal computers industry
S0377221713002531
Several major environmental issues, like biodiversity loss and climate change, currently concern the international community. These topics, which are related to the development of human societies, have become increasingly important since the United Nations Conference on Environment and Development (UNCED), or Earth Summit, in Rio de Janeiro in 1992. In this article, we are interested in the first issue. We present here many examples of the help that mathematical programming can provide to decision-makers in the protection of biodiversity. The examples we have chosen concern the selection of nature reserves, the control of adverse effects caused by landscape fragmentation, including the creation or restoration of biological corridors, the ecological exploitation of forests, the control of invasive species, and the maintenance of genetic diversity. Most of the presented models are – or can be approximated with – linear-, quadratic- or fractional-integer formulations and emphasize spatial aspects of conservation planning. Many of them represent decisions taken in a static context, but the temporal dimension is also considered. The problems presented are generally difficult combinatorial optimization problems; some are well solved and others less well. Research is still needed to make progress in solving them in order to deal with real instances satisfactorily. Moreover, relations between researchers and practitioners have to be strengthened. Furthermore, many recent achievements in the field of robust optimization could probably be successfully used for biodiversity protection, a domain in which many data are uncertain.
Mathematical optimization ideas for biodiversity conservation
S0377221713002543
The Sequential Probability Ratio Test (SPRT) control chart is a powerful tool for monitoring manufacturing processes. It is highly suitable for the applications where testing is destructive or very expensive, such as the automobile airbags test. This article studies the effect of the Average Sample Number (ASN) (i.e., the average sample size) on the chart’s performance. A design algorithm is proposed to develop the optimal SPRT chart for monitoring the fraction nonconforming p of Bernoulli processes. By optimizing the ASN and other charting parameters, the average detection speed of the SPRT chart is almost doubled. It is also found that the optimal SPRT chart significantly outperforms the optimal np and binomial CUSUM charts, in terms of Average Number of Defectives (AND), under different combinations of the design specifications. It is observed that the SPRT chart using a relatively smaller ASN and a shorter sampling interval (h) has a higher overall detection effectiveness.
Optimal average sample number of the SPRT chart for monitoring fraction nonconforming
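The statistic behind the SPRT chart is the classical Wald likelihood ratio for Bernoulli observations. A bare-bones sketch of the sequential test itself; the in-control fraction p0, out-of-control fraction p1, and error rates are illustrative values, not the optimized chart parameters derived in the paper:

```python
import math

def wald_sprt(samples, p0=0.01, p1=0.03, alpha=0.005, beta=0.005):
    """Sequential probability ratio test for fraction nonconforming.

    samples: iterable of 0/1 item inspections (1 = nonconforming).
    Returns ('accept p0' | 'accept p1' | 'continue', items inspected).
    """
    lower = math.log(beta / (1 - alpha))      # accept-p0 boundary
    upper = math.log((1 - beta) / alpha)      # accept-p1 (signal) boundary
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment of one Bernoulli observation.
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept p1", n
        if llr <= lower:
            return "accept p0", n
    return "continue", n

# A stream with a burst of defectives after 20 good items:
stream = [0] * 20 + [1, 1, 0, 1, 1, 0, 1, 1] + [0] * 200
print(wald_sprt(stream))   # signals a shift to p1 during the burst
```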
S0377221713002555
Every item produced, transported, used and discarded within a Supply Chain (SC) generates costs and creates an impact on the environment. The increase of forward flows as an effect of market globalization, and of reverse flows due to legislation, warranty, recycling and disposal activities, affects the ability of a modern SC to be economically and environmentally sustainable. In this context, the study considers an innovative sustainable closed loop SC problem. It first introduces a linear programming model that aims to minimize the total SC costs. Environmental sustainability is guaranteed by the complete reprocessing of an end-of-life product, the re-use of components, the disposal of unusable parts sent directly from the manufacturers, with a closed loop transportation system that maximizes transportation efficiency. Secondly, the authors consider the problem by means of a parametric study, analyzing the economic sustainability of the proposed CLSC model versus the classical Forward Supply Chain model (FWSC) from two perspectives: Case 1, the ‘traditional company perspective’, where the SC ends at the customers, and the disposal costs are not included in the SC, and Case 2, the ‘social responsibility company perspective’, where the disposal costs are considered within the SC. The relative impact of the different variables in the SC structure and the applicability of the proposed model, in terms of total costs, SC structure and social responsibility, are investigated thoroughly and the results are reported at the conclusion of the paper.
Sustainable SC through the complete reprocessing of end-of-life products by manufacturers: A traditional versus social responsibility company perspective
S0377221713002567
This paper presents a generalized weighted vertex p-center (WVPC) model that represents uncertain nodal weights and edge lengths using prescribed intervals or ranges. The objective of the robust WVPC (RWVPC) model is to locate p facilities on a given set of candidate sites so as to minimize worst-case deviation in maximum weighted distance from the optimal solution. The RWVPC model is well-suited for locating urgent relief distribution centers (URDCs) in an emergency logistics system responding to quick-onset natural disasters in which precise estimates of relief demands from affected areas and travel times between URDCs and affected areas are not available. To reduce the computational complexity of solving the model, this work proposes a theorem that facilitates identification of the worst-case scenario for a given set of facility locations. Since the problem is NP-hard, a heuristic framework is developed to efficiently obtain robust solutions. Then, a specific implementation of the framework, based on simulated annealing, is developed to conduct numerical experiments. Experimental results show that the proposed heuristic is effective and efficient in obtaining robust solutions. We also examine the impact of the degree of data uncertainty on the selected performance measures and the tradeoff between solution quality and robustness. Additionally, this work applies the proposed RWVPC model to a real-world instance based on a massive earthquake that hit central Taiwan on September 21, 1999.
Robust weighted vertex p-center model considering uncertain data: An application to emergency management
S0377221713002579
Robust portfolios reduce the uncertainty in portfolio performance. In particular, the worst-case optimization approach is based on the Markowitz model and forms portfolios that are more robust than mean–variance portfolios. However, since the robust formulation finds a different portfolio from the optimal mean–variance portfolio, the two portfolios may have dissimilar levels of factor exposure. In most cases, investors need a portfolio that is not only robust but also has a desired level of dependency on factor movement for managing the total portfolio risk. Therefore, we introduce new robust formulations that allow investors to control the factor exposure of portfolios. Empirical analysis shows that the robust portfolios from the proposed formulations are more robust than the classical mean–variance approach with comparable levels of exposure to fundamental factors.
Robust portfolios that do not tilt factor exposure
S0377221713002580
The equilibrium and socially optimal balking strategies are investigated for unobservable and observable single-server classical retrial queues. There is no waiting space in front of the server. If an arriving customer finds the server idle, he occupies the server immediately and leaves the system after service. Otherwise, if the server is found busy, the customer decides whether or not to enter a retrial pool with infinite capacity and becomes a repeated customer, based on observation of the system and the reward–cost structure imposed on the system. Accordingly, two cases with respect to different levels of information are studied and the corresponding Nash equilibrium and social optimization balking strategies for all customers are derived. Finally, we compare the equilibrium and optimal behavior regarding these two information levels through numerical examples.
Strategic joining in M/M/1 retrial queues
S0377221713002592
Apart from the well-known weaknesses of the standard Malmquist productivity index related to infeasibility and not accounting for slacks, already addressed in the literature, we identify a new and significant drawback of the Malmquist–Luenberger index decomposition that questions its validity as an empirical tool for environmental productivity measurement associated with the production of bad outputs. In particular, we show that the usual interpretation of the technical change component in terms of production frontier shifts can be inconsistent with its numerical value, thereby resulting in an erroneous interpretation of this component that passes on to the index itself. We illustrate this issue with a simple numerical example. Finally, we propose a solution for this inconsistency issue based on incorporating a new postulate for the technology related to the production of bad outputs.
On the inconsistency of the Malmquist–Luenberger index
S0377221713002658
Asset allocation among diverse financial markets is essential for investors, especially in situations such as the financial crisis of 2008. Portfolio optimization is the most developed method to examine the optimal decision for asset allocation. We employ the hidden Markov model to identify regimes in varied financial markets; a regime switching model gives multiple distributions, and this information can convert the static mean–variance model into an optimization problem under uncertainty, which is the case for unobservable market regimes. We construct a stochastic program to optimize portfolios under the regime switching framework and use scenario generation to mathematically formulate the optimization problem. In addition, we build a simple example for a pension fund and examine the behavior of the optimal solution over time by using a rolling-horizon simulation. We conclude that the regime information helps portfolios avoid risk during left-tail events.
Dynamic asset allocation for varied financial markets under regime switching framework
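The regime-identification step maps naturally onto an off-the-shelf Gaussian hidden Markov model. A sketch assuming the hmmlearn package; synthetic returns stand in for real market data, and the two-regime choice is illustrative:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic daily returns: a calm regime followed by a turbulent one.
calm = rng.normal(0.0005, 0.005, size=500)
turbulent = rng.normal(-0.001, 0.02, size=250)
returns = np.concatenate([calm, turbulent]).reshape(-1, 1)

# Fit a two-regime hidden Markov model to the return series.
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200,
                    random_state=0).fit(returns)
states = model.predict(returns)

for s in range(2):
    r = returns[states == s].ravel()
    print(f"regime {s}: mean={r.mean():+.5f}, vol={r.std():.5f}, days={len(r)}")
# Each regime's estimated (mean, covariance) can feed scenario generation
# for the stochastic program in place of a single static estimate.
```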
S0377221713002671
A popular assumption in the current literature on remanufacturing is that the whole new product is produced by an integrated manufacturer, which is inconsistent with most industries. In this paper, we model a decentralised closed-loop supply chain consisting of a key component supplier and a non-integrated manufacturer, and demonstrate that the interaction between these players significantly impacts the economic and environmental implications of remanufacturing. In our model, the non-integrated manufacturer can purchase new components from the supplier to produce new products, and remanufacture used components to produce remanufactured products. Thus, the non-integrated manufacturer is not only a buyer but also a rival to the supplier. In a steady state period, we analyse the performances of an integrated manufacturer and the decentralised supply chain. We find that, although the integrated manufacturer always benefits from remanufacturing, the remanufacturing opportunity may constitute a lose–lose situation for the supplier and the non-integrated manufacturer, making their profits lower than in an identical supply chain without remanufacturing. In addition, the non-integrated manufacturer may be worse off with a lower remanufacturing cost or a larger return rate of used products due to the interaction with the supplier. We further demonstrate that government-subsidised remanufacturing by the non-integrated (integrated) manufacturer is detrimental (beneficial) to the environment.
Don’t forget your supplier when remanufacturing
S0377221713002683
Participatory budgets are becoming increasingly popular in many municipalities all around the world. The underlying idea is to allow citizens to participate in the allocation of a municipal budget. Many advantages have been suggested for such experiences, including legitimization and more informed and transparent decisions. There are many conceivable variants of such processes. However, in most cases both its design and implementation are carried out in an informal way. In this paper we propose a methodology to design a participatory budget process based on a multicriteria decision making model.
On deciding how to decide: Designing participatory budget processes
S0377221713002695
This paper presents a novel solution heuristic for the General Lotsizing and Scheduling Problem for Parallel production Lines (GLSPPL). The GLSPPL addresses the problem of simultaneously deciding on the sizes and schedules of production lots on parallel, heterogeneous production lines, subject to scarce capacity, sequence-dependent setup times and deterministic, dynamic demand of multiple products. Its objective is to minimize inventory holding, sequence-dependent setup and production costs. The new heuristic iteratively decomposes the multi-line problem into a series of single-line problems, which are easier to solve. Different approaches for decomposition and for the iteration between a modified multi-line master problem and the single-line subproblems are proposed. They are compared with an existing solution method for the GLSPPL by means of medium-sized and large practical problem instances from different types of industries. The new methods prove to be superior with respect to both solution quality and computation time.
A decomposition approach for the General Lotsizing and Scheduling Problem for Parallel production Lines
S0377221713002701
In the Prize-Collecting Steiner Tree Problem (PCStT) we are given a set of customers with potential revenues and a set of possible links connecting these customers with fixed installation costs. The goal is to decide which customers to connect into a tree structure so that the sum of the link costs plus the revenues of the customers that are left out is minimized. The problem, as well as some of its variants, is used to model a wide range of applications in telecommunications, gas distribution networks, protein–protein interaction networks, or image segmentation. In many applications it is unrealistic to assume that the revenues or the installation costs are known in advance. In this paper we consider the well-known Bertsimas and Sim (B&S) robust optimization approach, in which the input parameters are subject to interval uncertainty, and the level of robustness is controlled by introducing a control parameter, which represents the perception of the decision maker regarding the number of uncertain elements that will present an adverse behavior. We propose branch-and-cut approaches to solve the robust counterparts of the PCStT and the Budget Constraint variant and provide an extensive computational study on a set of benchmark instances that are adapted from the deterministic PCStT inputs. We show how the Price of Robustness influences the cost of the solutions and the algorithmic performance. Finally, we adapt our recent theoretical results regarding algorithms for a general class of B&S robust optimization problems for the robust PCStT and its budget and quota constrained variants.
Exact approaches for solving robust prize-collecting Steiner tree problems
S0377221713002713
This paper proposes a test for whether data are over-represented in a given production zone, i.e. a subset of a production possibility set which has been estimated using the non-parametric Data Envelopment Analysis (DEA) approach. A binomial test is used that relates the number of observations inside such a zone to a discrete probability weighted relative volume of that zone. A Monte Carlo simulation illustrates the performance of the proposed test statistic and provides good estimation of both facet probabilities and the assumed common inefficiency distribution in a three dimensional input space. Potential applications include tests for whether benchmark units dominate more (or less) observations than expected.
Testing over-representation of observations in subsets of a DEA technology
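Once the probability-weighted relative volume of the zone is known, the test itself reduces to a one-line binomial computation. A toy illustration with scipy; the zone probability and counts are invented:

```python
from scipy.stats import binomtest

# Suppose the probability-weighted relative volume of a production zone
# is pi0 = 0.15, and 31 of 100 observed units fall inside that zone.
result = binomtest(k=31, n=100, p=0.15, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")   # small value -> over-represented
```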
S0377221713002725
We propose and apply a novel approach for modeling special-day effects to predict electricity demand in Korea. Notably, we model special-day effects on an hourly rather than a daily basis. Hourly specified predictor variables are implemented in the regression model with a seasonal autoregressive moving average (SARMA) type error structure in order to efficiently reflect the special-day effects. The interaction terms between the hour-of-day effects and the hourly based special-day effects are also included to capture the unique intraday patterns of special days more accurately. The multiplicative SARMA mechanism is employed in order to identify the double seasonal cycles, namely, the intraday effect and the intraweek effect. The forecast results of the suggested model are evaluated by comparing them with those of various benchmark models for the following year. The empirical results indicate that the suggested model outperforms the benchmark models for both special- and non-special day predictions.
Modeling special-day effects for forecasting intraday electricity demand
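The modeling recipe (regression with hourly special-day dummies on top of a seasonal ARMA error) can be expressed directly with statsmodels. A schematic sketch only: the series, the dummy construction, and the model orders below are placeholders, not the paper's fitted specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hourly demand with a daily cycle plus noise (synthetic stand-in).
idx = pd.date_range("2023-01-01", periods=24 * 7 * 8, freq="h")
rng = np.random.default_rng(0)
demand = pd.Series(100 + 10 * np.sin(2 * np.pi * idx.hour / 24)
                   + rng.normal(0, 2, len(idx)), index=idx)

# Hourly (not daily) special-day dummies: one column per hour of the
# special day, so every hour gets its own effect -- the key idea above.
special = idx.normalize().isin(pd.to_datetime(["2023-01-01"]))
exog = pd.DataFrame(
    {f"sd_h{h}": (special & (idx.hour == h)).astype(float) for h in range(24)},
    index=idx)

# Regression with a seasonal ARMA error structure (24-hour cycle).
fit = SARIMAX(demand, exog=exog, order=(1, 0, 1),
              seasonal_order=(1, 0, 1, 24), trend="c").fit(disp=False)
print(fit.params.filter(like="sd_h").head())
```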
S0377221713002737
Competitive location problems can be characterized by the fact that the decisions made by others will affect our own payoffs. In this paper, we address a discrete competitive location game in which two decision-makers have to decide simultaneously where to locate their services without knowing the decisions of one another. This problem arises in a franchising environment in which the decision-makers are the franchisees and the franchiser defines the potential sites for locating services and the rules of the game. At most one service can be located at each site, and one of the franchisees has preferential rights over the other. This means that if both franchisees are interested in opening the service in the same site, only the one that has preferential rights will open it. We consider that both franchisees have budget constraints, but the franchisee without preferential rights is allowed to show interest in more sites than the ones she can afford. We are interested in studying the influence of the existence of preferential rights and overbidding on the outcomes for both franchisees and franchiser. A model is presented and an algorithmic approach is developed for the calculation of Nash equilibria. Several computational experiments are defined and their results are analysed, showing that preferential rights give its holder a relative advantage over the other competitor. The possibility of overbidding seems to be advantageous for the franchiser, as well as the inclusion of some level of asymmetry between the two decision-makers.
Two-player simultaneous location game: Preferential rights and overbidding
S0377221713002749
Lateral transshipments are an effective strategy to pool inventories. We present a Semi-Markov decision problem formulation for proactive and reactive transshipments in a multi-location continuous review distribution inventory system with Poisson demand and one-for-one replenishment policy. For a two-location model we state the monotonicity of an optimal policy. In a numerical study, we compare the benefits of proactive and different reactive transshipment rules. The benefits of proactive transshipments are the largest for networks with intermediate opportunities of demand pooling and the difference between alternative reactive transshipment rules is negligible.
A Semi-Markov decision problem for proactive and reactive transshipments between multiple warehouses
S0377221713002750
This paper presents a new relaxation technique to globally optimize mixed-integer polynomial programming problems that arise in many engineering and management contexts. Using a bilinear term as the basic building block, the underlying idea involves the discretization of one of the variables up to a chosen accuracy level (Teles, J.P., Castro, P.M., Matos, H.A. (2013). Multiparametric disaggregation technique for global optimization of polynomial programming problems. J. Glob. Optim. 55, 227–251), by means of a radix-based numeric representation system, coupled with a residual variable to effectively make its domain continuous. Binary variables are added to the formulation to choose the appropriate digit for each position together with new sets of continuous variables and constraints leading to the transformation of the original mixed-integer non-linear problem into a larger one of the mixed-integer linear programming type. The new underestimation approach can be made as tight as desired and is shown capable of providing considerably better lower bounds than a widely used global optimization solver for a specific class of design problems involving bilinear terms.
Univariate parameterization for global optimization of mixed-integer polynomial problems
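The core device is easiest to see on a single bilinear term w = xy: one variable is written in radix-10 positional form plus a continuous residual, which turns the product into mixed-integer linear constraints. A simplified sketch of the construction cited above (indices and bounds reduced to the essentials):

```latex
% Radix-10 discretization of x with a continuous residual \Delta x:
x = \sum_{\ell=\ell_{\min}}^{\ell_{\max}} \sum_{k=0}^{9} 10^{\ell}\, k\, z_{k\ell} + \Delta x,
\qquad \sum_{k=0}^{9} z_{k\ell} = 1 \;\; \forall \ell, \qquad
z_{k\ell}\in\{0,1\}, \qquad 0 \le \Delta x \le 10^{\ell_{\min}}
% The bilinear term w = x\,y then becomes linear in new variables:
w = \sum_{\ell} \sum_{k} 10^{\ell}\, k\, \hat{y}_{k\ell} + \Delta w,
\qquad \hat{y}_{k\ell} = y\, z_{k\ell}, \qquad \Delta w = y\, \Delta x,
% where \hat{y}_{k\ell} = y\, z_{k\ell} is linearized exactly (z binary) and
% \Delta w is bounded via y^{L}\Delta x \le \Delta w \le y^{U}\Delta x,
% giving an MILP relaxation that tightens as \ell_{\min} decreases.
```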
S0377221713002762
This paper surveys the academic OR/analytics literature describing research into the laws and rules of sports and sporting competitions. The literature is divided into post hoc analyses and proposals for future changes, and is also divided into laws/rules of sports themselves and rules/organisation of tournaments or competitions. The survey outlines a large number of studies covering 21 sports in many parts of the world. The analytical approaches most commonly used are found to be various forms of regression analysis and simulation. Issues highlighted by this survey include the different views of what constitutes fairness and the frequency with which changes produce unintended consequences.
OR analysis of sporting rules – A survey
S0377221713002993
This paper focuses on detecting nuclear weapons on cargo containers using port security screening methods, where the nuclear weapons would presumably be used to attack a target within the United States. This paper provides a linear programming model that simultaneously identifies optimal primary and secondary screening policies in a prescreening-based paradigm, where incoming cargo containers are classified according to their perceived risk. The proposed linear programming model determines how to utilize primary and secondary screening resources in a cargo container screening system given a screening budget, prescreening classifications, and different device costs. Structural properties of the model are examined to shed light on the optimal screening policies. The model is illustrated with a computational example. Sensitivity analysis is performed on the accuracy of prescreening in assigning prescreening classifications and on secondary screening costs. Results reveal that there are fewer practical differences between the screening policies of the prescreening groups when prescreening is inaccurate. Moreover, devices that can better detect shielded nuclear material have the potential to substantially improve the system’s detection capabilities.
An integrated model for screening cargo containers
S0377221713003007
We analyze a duopoly where capacity-constrained firms offer an established product and have the option to offer an additional new and differentiated product. We show that the firm with the smaller capacity on the established market has a higher incentive to innovate and reaches a larger market share on the market for the new product. An increase in capacity of the larger firm can prevent its competitor from innovating, whereas an increase in capacity of the smaller firm cannot prevent innovation of its larger competitor. In equilibrium the firm with smaller capacity on the established market might outperform the larger firm with respect to total payoffs.
New product introduction and capacity investment by incumbents: Effects of size on strategy
S0377221713003019
It is well observed that individual behaviour can have an effect on the efficiency of queueing systems. The impact of this behaviour on the economic efficiency of public services is considered in this paper where we present results concerning the congestion related implications of decisions made by individuals when choosing between facilities. The work presented has important managerial implications at a public policy level when considering the effect of allowing individuals to choose between providers. We show that in general the introduction of choice in an already inefficient system will not have a negative effect. Introducing choice in a system that copes with demand will have a negative effect.
Selfish routing in public services
S0377221713003020
We consider a time-based inventory control policy for a two-level supply chain with one warehouse and multiple retailers in this paper. The warehouse orders at a fixed base replenishment interval. The retailers are required to order in intervals that are integer-ratio multiples of the base replenishment interval at the warehouse. The warehouse and the retailers each adopt an order-up-to policy, i.e. order the needed stock at a review point to raise the inventory position to a fixed order-up-to level. It is assumed that the retailers face independent Poisson demand processes and no transshipments between them are allowed. The contribution of the study is threefold. First, we assume that when facing a shortage the warehouse allocates the remaining stock to the retailers optimally to minimize system cost in the last minute before delivery, and provide an approach to evaluate the exact system cost. Second, we characterize the structural properties and develop an exact optimal solution for the inventory control system. Finally, we demonstrate that the last-minute optimal warehouse stock allocation rule we adopt dominates the virtual allocation rule, in which warehouse stock is allocated to meet retailer demand on a first-come first-served basis, with significant cost benefits. Moreover, the proposed time-based inventory control policy can perform equally well or better than the commonly used stock-based batch-ordering policy for distribution systems with multiple retailers.
A periodic-review inventory control policy for a two-level supply chain with multiple retailers and stochastic demand
S0377221713003032
This paper considers the problem of determining operation and maintenance schedules for a containership equipped with various subsystems during its sailing according to a pre-determined navigation schedule. The operation schedule, which specifies the working time of each subsystem, determines the due-date of each maintenance activity, and the maintenance schedule specifies the actual start time of each maintenance activity. The main constraints are subsystem requirements, workforce availability, working time limitation, and inter-maintenance time. To represent the problem mathematically, a mixed integer programming model is developed. Then, due to the complexity of the problem, we suggest a heuristic algorithm that minimizes the sum of earliness and tardiness between the due-date and the actual start time for each maintenance activity. Computational experiments were done on various test instances and the results are reported. In particular, a case study was done on a real instance, and a significant improvement is reported over the experience-based conventional method.
Operation and preventive maintenance scheduling for containerships: Mathematical model and solution algorithm
S0377221713003044
We consider a two-stage decision problem, in which an online retailer first makes optimal decisions on his profit margin and free-shipping threshold, and then determines his inventory level. We start by developing the retailer’s expected profit function. Then, we use publicly-available statistics to find the best-fitting distribution for consumers’ purchase amounts and the best-fitting function for conversion rate (i.e., probability that an arriving visitor places an online order with the retailer). We show that: (i) a reduction of the profit margin does not significantly affect the standard deviation of consumers’ order sizes (purchase amounts) but increases the average order size; whereas, (ii) variations in a positive finite free-shipping threshold affect both the average value and the standard deviation of the order sizes. We then use Arena to simulate the online retailing system and OptQuest to find the retailer’s optimal decisions and maximum profit. Next, we perform a sensitivity analysis to examine the impact of the ratio of the unit holding and salvage cost to the unit shipping cost on the retailer’s optimal decisions. We also draw some important managerial insights.
Online retailers’ promotional pricing, free-shipping threshold, and inventory decisions: A simulation-based analysis
S0377221713003056
The computational time required by interior-point methods is often dominated by the solution of linear systems of equations. An efficient specialized interior-point algorithm for primal block-angular problems has been used to solve these systems by combining Cholesky factorizations for the block constraints and a conjugate gradient based on a power series preconditioner for the linking constraints. In some problems this power series preconditioner turned out to be inefficient in the last interior-point iterations, when the systems became ill-conditioned. In this work this approach is combined with a splitting preconditioner based on LU factorization, which works well for the last interior-point iterations. Computational results are provided for three classes of problems: multicommodity flows (oriented and nonoriented), minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that, in most cases, the hybrid preconditioner improves the performance and robustness of the interior-point solver. In particular, for some block-angular problems the solution time is reduced by a factor of 10.
Improving an interior-point approach for large block-angular problems by hybrid preconditioners
S0377221713003068
Most time series forecasting methods assume the series has no missing values. When missing values exist, interpolation methods, while filling in the blanks, may substantially modify the statistical pattern of the data, since critical features such as moments and autocorrelations are not necessarily preserved. In this paper we propose to interpolate missing data in time series by solving a smooth nonconvex optimization problem which aims to preserve moments and autocorrelations. Since the problem may be multimodal, Variable Neighborhood Search is used to trade off quality of the interpolation (in terms of preservation of the statistical pattern) and computing times. Our approach is compared with standard interpolation methods and illustrated on both simulated and real data.
Time series interpolation via global optimization of moments fitting
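The objective such a search explores can be prototyped in a few lines: penalize deviations of the filled-in series' moments and autocorrelations from those of the observed values. A simplified sketch using single-point shaking with accept-if-better moves rather than full Variable Neighborhood Search; the loss weights and the local search are illustrative, not the paper's method:

```python
import numpy as np

def acf(x, lags=3):
    """First `lags` sample autocorrelations of a series."""
    x = (x - x.mean()) / x.std()
    n = len(x)
    return np.array([np.dot(x[:-k], x[k:]) / n for k in range(1, lags + 1)])

def pattern_loss(filled, observed):
    """Deviation of moments and autocorrelations from the observed part."""
    m = (filled.mean() - observed.mean()) ** 2 \
        + (filled.std() - observed.std()) ** 2
    return m + ((acf(filled) - acf(observed)) ** 2).sum()

def interpolate(series, missing, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = series[~missing]
    best = series.copy()
    best[missing] = observed.mean()          # start from mean imputation
    best_loss = pattern_loss(best, observed)
    for _ in range(iters):                   # shake one gap value at a time
        cand = best.copy()
        j = rng.choice(np.flatnonzero(missing))
        cand[j] += rng.normal(0, observed.std())
        loss = pattern_loss(cand, observed)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best

t = np.arange(200)
series = np.sin(t / 6) + np.random.default_rng(1).normal(0, 0.2, 200)
missing = np.zeros(200, dtype=bool)
missing[60:70] = True
series[missing] = np.nan
print(interpolate(series, missing)[58:72].round(2))
```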
S0377221713003081
New variants of greedy algorithms, called advanced greedy algorithms, are identified for knapsack and covering problems with linear and quadratic objective functions. Beginning with single-constraint problems, we provide extensions for multiple knapsack and covering problems, in which objects must be allocated to different knapsacks and covers, and also for multi-constraint (multi-dimensional) knapsack and covering problems, in which the constraints are exploited by means of surrogate constraint strategies. In addition, we provide a new graduated-probe strategy for improving the selection of variables to be assigned values. Going beyond the greedy and advanced greedy frameworks, we describe ways to utilize these algorithms with multi-start and strategic oscillation metaheuristics. Finally, we identify how surrogate constraints can be utilized to produce inequalities that dominate those previously proposed and tested utilizing linear programming methods for solving multi-constraint knapsack problems, which are responsible for the current best methods for these problems. While we focus on 0–1 problems, our approaches can readily be adapted to handle variables with general upper bounds.
Advanced greedy algorithms and surrogate constraint methods for linear and quadratic knapsack and covering problems
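A flavor of the surrogate-constraint idea for the multi-constraint 0–1 knapsack: collapse the m constraints into one weighted surrogate, then run a ratio greedy against it. A minimal sketch; the multiplier choice here (constraint tightness) is one simple heuristic, not the advanced strategies of the paper:

```python
import numpy as np

def surrogate_greedy(values, weights, capacities):
    """Greedy for max v.x s.t. W x <= c, x binary, via a surrogate constraint.

    weights: (m, n) matrix of constraint coefficients, capacities: (m,).
    """
    W = np.asarray(weights, dtype=float)
    c = np.asarray(capacities, dtype=float)
    v = np.asarray(values, dtype=float)
    # Surrogate multipliers: tighter constraints get larger weight.
    u = W.sum(axis=1) / c
    sw = u @ W                               # surrogate weights, shape (n,)
    order = np.argsort(-v / sw)              # best value per surrogate unit
    x = np.zeros(len(v), dtype=int)
    load = np.zeros_like(c)
    for j in order:
        if np.all(load + W[:, j] <= c):      # feasibility in ALL originals
            x[j] = 1
            load += W[:, j]
    return x, v @ x

values = [10, 13, 7, 8, 4]
weights = [[3, 4, 2, 3, 1],     # constraint 1
           [2, 5, 3, 1, 2]]     # constraint 2
print(surrogate_greedy(values, weights, capacities=[7, 6]))
```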
S0377221713003093
This work addresses harvest planning problems that arise in the production of sugar and alcohol from sugar cane in Brazil. The planning is performed for two planning horizons, tactical and operational planning, such that the total sugar content in the harvested cane is maximized. The tactical planning comprises the entire harvest season, which averages seven months. The operational planning considers a horizon from seven to thirty days. Both problems are solved by mixed integer programming. The tactical planning problem is handled well by this approach. The model for the operational planning extends the one for the tactical planning and is presented in detail. Valid inequalities are introduced and three techniques are proposed to speed up finding quality solutions. These include pre-processing by grouping and filtering the distance matrix between fields, hot starting with construction heuristic solutions, and dividing and sequentially solving the resulting MIP program. Experiments are run on a set of real-world and artificial instances. A case study illustrates the benefits of the proposed planning.
Harvest planning in the Brazilian sugar cane industry via mixed integer programming
S0377221713003111
One of the most important issues for aggregating preference rankings is the determination of the weights associated with the different ranking places. To avoid subjectivity in determining the weights, Cook and Kress (1990) [5] suggested evaluating each candidate with the most favorable scoring vector for him/her. With this purpose, various models based on Data Envelopment Analysis have appeared in the literature. Although these methods do not require the weights to be predetermined subjectively, some of them have a serious drawback: the relative order between two candidates may be altered when the number of first, second, …, kth ranks obtained by other candidates changes, even though there is no variation in the number of first, second, …, kth ranks obtained by these two candidates. In this paper we propose a model that allows each candidate to be evaluated with the most favorable weighting vector for him/her and avoids the previous drawback. Moreover, in some cases, we give a closed expression for the score assigned by our model to each candidate.
Aggregating preferences rankings with variable weights
S0377221713003123
While significant progress has been made, analytic research on principal-agent problems that seeks closed-form solutions faces limitations due to tractability issues arising from the mathematical complexity of the problem. The principal must maximize expected utility subject to the agent’s participation and incentive compatibility constraints. Linearity of performance measures is often assumed, and the Linear, Exponential, Normal (LEN) model is often used to deal with this complexity. These assumptions may be too restrictive for researchers to explore the variety of relationships between compensation contracts offered by the principal and the effort of the agent. In this paper we show how to numerically solve principal-agent problems with nonlinear contracts. In our procedure, we deal directly with the agent’s incentive compatibility constraint. We illustrate our solution procedure with numerical examples and use optimization methods to make the problem tractable without using the simplifying assumptions of a LEN model. We also show that using linear contracts to approximate nonlinear contracts leads to solutions that are far from the optimal solutions obtained using nonlinear contracts. A principal-agent problem is a special instance of a bilevel nonlinear programming problem. We show how to solve principal-agent problems by solving bilevel programming problems using the ellipsoid algorithm. The approach we present can give researchers new insights into the relationships between nonlinear compensation schemes and employee effort.
Solving nonlinear principal-agent problems using bilevel programming
S0377221713003135
We develop and implement linear formulations of general Nth order stochastic dominance criteria for discrete probability distributions. Our approach is based on a piece-wise polynomial representation of utility and its derivatives and can be implemented by solving a relatively small system of linear inequalities. This approach allows for comparing a given prospect with a discrete set of alternative prospects as well as for comparison with a polyhedral set of linear combinations of prospects. We also derive a linear dual formulation in terms of lower partial moments and co-lower partial moments. An empirical application to historical stock market data suggests that the passive stock market portfolio is highly inefficient relative to actively managed portfolios for all investment horizons and for nearly all investors. The results also illustrate that the mean–variance rule and second-order stochastic dominance rule may not detect market portfolio inefficiency because of non-trivial violations of non-satiation and prudence.
General linear formulations of stochastic dominance criteria
S0377221713003147
Because of cost and time limit factors, the number of samples is usually small in the early stages of manufacturing systems, and the scarcity of actual data will cause problems in decision-making. In order to solve this problem, this paper constructs a counter-intuitive hypothesis testing method by choosing the maximal p-value based on a two-parameter Weibull distribution to enhance the estimate of a nonlinear and asymmetrical shape of product lifetime distribution. Further, we systematically generate virtual data to extend the small data set to improve learning robustness of product lifetime performance. This study provides simulated data sets and two practical examples to demonstrate that the proposed method is a more appropriate technique to increase estimation accuracy of product lifetime for normal or non-normal data with small sample sizes.
A new approach to assess product lifetime performance for small data sets
S0377221713003159
The nested L-shaped method is used to solve two- and multi-stage linear stochastic programs with recourse, which can have integer variables on the first stage. In this paper we present and evaluate a cut consolidation technique and a dynamic sequencing protocol to accelerate the solution process. Furthermore, we present a parallelized implementation of the algorithm, which is developed within the COIN-OR framework. We show on a test set of 51 two-stage and 42 multi-stage problems, that both of the developed techniques lead to significant speed ups in computation time.
Dynamic sequencing and cut consolidation for the parallel hybrid-cut nested L-shaped method
S0377221713003160
This paper studies properties of an estimator of mean–variance portfolio weights in a market model with multiple risky assets and a riskless asset. Theoretical formulas for the mean square error are derived in the case when asset excess returns are multivariate normally distributed and serially independent. The sensitivity of the portfolio estimator to errors arising from the estimation of the covariance matrix and the mean vector is quantified. It turns out that the relative contribution of the covariance matrix error depends mainly on the Sharpe ratio of the market portfolio and the sampling frequency of historical data. Theoretical studies are complemented by an investigation of the distribution of portfolio estimator for empirical datasets. An appropriately crafted bootstrapping method is employed to compute the empirical mean square error. Empirical and theoretical estimates are in good agreement, with the empirical values being, in general, higher.
Theoretical and empirical estimates of mean–variance portfolio sensitivity
S0377221713003172
The main problem of portfolio optimization is parameter estimation error. Various methods have been suggested to mitigate this problem, among which are shrinkage, resampling, Bayesian updating, naïve diversification, and imposing constraints on the portfolio weights. This study suggests two substantial extensions of the constrained optimization approach: the Variance-Based Constraints (VBC), and the Global Variance-Based Constraints (GVBC) methods. By the VBC method the constraint imposed on the weight of a given stock is inversely proportional to its standard deviation: the higher a stock’s sample standard deviation, the higher the potential estimation error of its parameters, and therefore the tighter the constraint imposed on its weight. GVBC employs a similar idea, but instead of imposing a sharp boundary constraint on each stock, a quadratic “cost” is assigned to deviations from the naive 1/N weight, and a single global constraint is imposed on the total cost of all deviations. Comparing ten optimization methods we find that the two new suggested methods typically yield the best performance, as measured by the Sharpe ratio. GVBC ranks first. These results are obtained for two different datasets, and are also robust to the number of assets under consideration.
The benefits of differential variance-based constraints in portfolio optimization
S0377221713003184
The term “hypernetwork” (more precisely, s-hypernetwork and (s, d)-hypernetwork) has been recently adopted to denote some logical structures contained in a directed hypergraph. A hypernetwork identifies the core of a hypergraph model, obtained by filtering off redundant components. Therefore, finding hypernetworks has a notable relevance both from a theoretical and from a computational point of view. In this paper we provide a simple and fast algorithm for finding s-hypernetworks, which substantially improves on a method previously proposed in the literature. We also point out two linearly solvable particular cases. Finding an (s, d)-hypernetwork is known to be a hard problem, and only one polynomially solvable class has been found so far. Here we point out that this particular case is solvable in linear time.
Finding hypernetworks in directed hypergraphs
S0377221713003196
This contribution compares existing and newly developed techniques for geometrically representing mean–variance–skewness portfolio frontiers based on the rather widely adapted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other hand. Moreover, we explain the working of these different methodologies in detail and provide graphical illustrations in relation to the goal programming literature in operations research. Inspired by these illustrations, we prove two new results: a formal relation between both approaches and a generalization of the well-known one fund separation theorem from traditional mean–variance portfolio theory.
Portfolio selection with skewness: A comparison of methods and a generalized one fund result
S0377221713003202
This paper addresses two important issues that may affect the operations efficiency in the recycling industry. First, the industry contains many small-scale and inefficient recycling firms, especially in developing countries. Second, the output from recycling a waste product often yields multiple recycled products that cannot all be sold efficiently by a single firm. To address these two issues, this paper examines how different firms can cooperate in their recycling and pricing decisions using cooperative game theory. Recycling operations under both joint and individual productions with different cost structures are considered. Decisions include the quantity of waste product to recycle and the price at which to sell each recycled product on each firm’s market. These decisions can be made jointly by multiple cooperating firms to maximize total profit. We design allocation schemes for maximized total profit to encourage cooperation among all firms. Managerial insights are provided from both environmental and economic perspectives.
On the cooperation of recycling operations
S0377221713003214
Surrogate constraint relaxation was proposed in the 1960s as an alternative to the Lagrangian relaxation for solving difficult optimization problems. The duality gap in the surrogate relaxation is always as good as the duality gap in the Lagrangian relaxation. Over the years researchers have proposed procedures to reduce the gap in the surrogate constraint. Our aim is to review models that close the surrogate duality gap. Five research streams that provide procedures with zero duality gap are identified and discussed. In each research stream, we will review major results, discuss limitations, and suggest possible future research opportunities. In addition, relationships between models if they exist, are also discussed.
Zero duality gap in surrogate constraint optimization: A concise review of models
S0377221713003226
This paper deals with a mean–variance optimal portfolio selection problem in presence of risky assets characterized by low-frequency trading and, therefore, low liquidity. To model the dynamics of illiquid assets, we introduce pure-jump processes. This leads to the development of a portfolio selection model in a mixed discrete/continuous time setting. We pursue the twofold scope of analyzing and comparing either long-term investment strategies as well as short-term trading rules. The theoretical model is analyzed by applying extensive Monte Carlo experiments, in order to provide useful insights from a financial perspective.
Mean–Variance portfolio selection in presence of infrequently traded stocks
S0377221713003238
The Omega ratio is a recent performance measure proposed to overcome the known shortcomings of the Sharpe ratio. Until recently, the Omega ratio was thought to be computationally intractable, and research was focused on heuristic optimization procedures. We have shown elsewhere that the Omega ratio optimization is equivalent to a linear program and hence can be solved exactly in polynomial time. This permits the investigation of more complex and realistic variants of the problem. The standard formulation of the Omega ratio requires perfect information for the probability distribution of the asset returns. In this paper, we investigate the problem arising from the probability distribution of the asset returns being only partially known. We introduce the robust variant of the conventional Omega ratio that hedges against uncertainty in the probability distribution. We examine the worst-case Omega ratio optimization problem under three types of uncertainty – mixture distribution, box and ellipsoidal uncertainty – and show that the problem remains tractable.
Worst-case robust Omega ratio
S0377221713003421
This study investigates multiperiod service level (MSL) policies in supply chains facing a stochastic customer demand. The objective of the supply chains is to construct integrated replenishment plans that satisfy strict stockout-oriented performance measures which apply across a multiperiod planning horizon. We formulate the stochastic service level constraints for the fill rate, ready rate, and conditional expected stockout MSL policies. The modeling approach is based on the concept of service level trajectory and provides reformulations of the stochastic planning problems associated with each MSL policy that can be efficiently solved with off-the-shelf optimization solvers. The approach enables the handling of correlated and non-stationary random variables, and is flexible enough to accommodate the implementation of fair service level policies, the assignment of differentiated priority levels per products, or the introduction of response time requirements. We use an earthquake disaster management case study to show the applicability of the approach and derive practical implications about service level policies.
Probabilistic modeling of multiperiod service levels
S0377221713003433
In this paper, we study the shortest path tour problem in which a shortest path from a given origin node to a given destination node must be found in a directed graph with non-negative arc lengths. Such path needs to cross a sequence of node subsets that are given in a fixed order. The subsets are disjoint and may be different-sized. A polynomial-time reduction of the problem to a classical shortest path problem over a modified digraph is described and two solution methods based on the above reduction and dynamic programming, respectively, are proposed and compared with the state-of-the-art solving procedure. The proposed methods are tested on existing datasets for this problem and on a large class of new benchmark instances. The computational experience shows that both the proposed methods exhibit a consistent improved performance in terms of computational time with respect to the existing solution method.
Solving the shortest path tour problem
S0377221713003445
We present a novel mathematical model and a mathematical programming based approach to deliver superior quality solutions for the single machine capacitated lot sizing and scheduling problem with sequence-dependent setup times and costs. The formulation explores the idea of scheduling products based on the selection of known production sequences. The model is the basis of a matheuristic, which embeds pricing principles within construction and improvement MIP-based heuristics. A partial exploration of distinct neighborhood structures avoids local entrapment and is conducted on a rule-based neighbor selection principle. We compare the performance of this approach to other heuristics proposed in the literature. The computational study carried out on different sets of benchmark instances shows the ability of the matheuristic to cope with several model extensions while maintaining a very effective search. Although the techniques described were developed in the context of the problem studied, the method is applicable to other lot sizing problems or even to problems outside this domain.
Pricing, relaxing and fixing under lot sizing and scheduling
S0377221713003457
In the last few years, according to the evolution of financial markets and the enforcement of international supervisory requirements, an increasing interest has been devoted to risk integration. The original focus on individual risk estimation has been replaced by the growing prominence of top-down and bottom-up risk integration perspectives. Following this latter way, we bring together different approaches developed in the recent literature elaborating a general model to assess banking solvency in both the long-run (economic capital) as well as in the short period (liquidity mismatching). We consider banking capability to face credit, interest rate and liquidity risks associated to macro-economic shocks affecting both assets and liabilities. Following the perspective of commercial banks, we concentrate on information available in the risk management practice to propose an easy to implement statistical framework. We put in place this framework estimating its scenario generation parameters on Italian macro-economic time series from 1990 to 2009. Once applied to a stylized commercial bank, we compare the results of our approach to regulatory capital requirements. We emphasize the need for policy makers as well as risk managers, to take into account the entire balance sheet structure to assess banking solvency. assets value at time t assets value change due to shocks asset liquidity inflows for period q ⩽ h asset loss corresponding to −ΔA loss due to defaults for the period h equity value at time t economic capital, expected shortfall economic capital, value at risk liquidity haircut for obligor i income for the period h liabilities value at time t liabilities value change due to shocks liability liquidity outflows for period q ⩽ h liquidity mismatching net interest for period h probability of default for obligor i performing net interest for period h present value interest rate for node d at time t health index for sector s liquidity shrinkage at time t indicator function for obligor i macro-economic variables at time t shocked macro-economic variables
Integrated bank risk modeling: A bottom-up statistical framework
S0377221713003469
This paper develops an innovative objectives-oriented approach with one evaluation model and three optimization models for managing the implementation of a set of critical success strategies (CSSs) for an enterprise resource planning (ERP) project in an organization. To evaluate the CSSs based on their contribution to the organizational objectives, the evaluation model addresses an important issue of measuring the relationship between objectives in a three-level hierarchy involving the organization, its functional departments, and the ERP project. To determine the optimal management priority for implementing the CSSs from the organization’s perspective, the three optimization models maximize their total implementation value by integrating individual departments’ management preferences. An empirical study is conducted to demonstrate how these models work and how their outcomes can provide practical insights and implications in planning and managing the implementation of the CSSs for an ERP project.
Managing critical success strategies for an enterprise resource planning project
S0377221713003470
Risk aversion is a prevalent phenomenon when sufficiently large amounts are at risk. In this paper, we introduce a new prescriptive approach for coping with risk in sequential decision problems with discrete scenario space. We use Conditional Value-at-Risk (CVaR) risk measure as optimization criterion and prove that there is an explicit linear representation of the proposed model for the problem.
Decision tree analysis for a risk averse decision maker: CVaR Criterion
S0377221713003482
An efficiency indicator of industry configuration (allowing for entry/exit of firms) is presented which accounts for four sources components: (1) size inefficiencies arising from firms which can be conveniently split into smaller units; (2) efficiency gains realized through merger of firms; (3) re-allocation of inputs and outputs among firms; (4) technical inefficiencies. The indicator and its components are computed using linear and mixed-integer programming (data envelopment analysis models). A method to monitor the evolution of these components in time is introduced. Data on hospitals in Australia show that technical inefficiency of hospitals accounts for less than 15% of total industry inefficiency, with 40% attributable to size inefficiencies and the rest to potential mergers and re-allocation effects.
Industry structural inefficiency and potential gains from mergers and break-ups: A comprehensive approach
S0377221713003494
As China’s reform steps into the ‘deep water zone’ where value complexity becomes paramount, general-purpose decision-making aids such as Operational Research (OR) are increasingly confronted with the challenge of dealing with interest conflicts. However, due to historical events and institutional circumstances, OR in China to date is largely constrained by a technocratic approach which is not fit for purpose. Encouragingly, recent OR innovations inside China signify a conscious move to embrace value plurality and tackle social conflicts. OR is not merely a neutral tool for solving technical problems, but a world-building discourse that shapes society. The future of OR, particularly Soft OR, in China will be determined by whether OR workers are willing and capable to act as institutional entrepreneurs promoting scientific and democratic decision-making that deepens the reform toward an open, just and prosperous society. The implications go beyond the OR community and China’s borders.
Soft OR in China: A critical report
S0377221713003500
Managing knowledge based resource capabilities has become very important in recent years and during a finite horizon it seems to be reasonable to develop the capabilities intensively at the beginning as one can utilize those over a longer period of time. With the help of multi-period models we check the validity of this idea and characterize the dynamics of development activities. The paper identifies the factors that shape these dynamics and from the behavior of these factors we conclude when the dynamics can be increasing or decreasing. We point out that in stable environment there is tendency for decreasing dynamics but future expectations can significantly modify this outcome. Relationships between the successful or less successful implementation of a business strategy and the dynamics of improvement activities are highlighted as well. For specific model structures explicit solutions are derived.
Multi-period models for analyzing the dynamics of process improvement activities
S0377221713003512
In a recent paper by Li and Cheng [Li,S.K., Cheng, Y.S., 2007. Solving the puzzles of structural efficiency. European Journal of Operational Research 180(2), 713–722], they developed the shadow price model to solve the existing puzzles of structural efficiency theoretically. However, we observe that the optimal shadow price vector in the shadow price model by Li and Cheng (2007) is not always unique. As a result, the decomposition of the structural efficiency is arbitrarily generated, depending on the shadow price vector we choose. Finally, an example with multiple inputs and outputs is used to illustrate the phenomenon.
A comment on “solving the puzzles of structural efficiency”
S0377221713003524
This paper introduces a general continuous-time mathematical framework for solution of dynamic mean–variance control problems. We obtain theoretical results for two classes of functionals: the first one depends on the whole trajectory of the controlled process and the second one is based on its terminal-time value. These results enable the development of numerical methods for mean–variance problems for a pre-determined risk-aversion coefficient. We apply them to study optimal trading strategies pursued by fund managers in response to various types of compensation schemes. In particular, we examine the effects of continuous monitoring and scheme’s symmetry on trading behavior and fund performance.
Investment strategies and compensation of a mean–variance optimizing fund manager
S0377221713003536
This paper addresses the problem of finding an effective distribution plan to deliver free newspapers from a production plant to subway, bus, or tram stations. The overall goal is to combine two factors: first, the free newspaper producing company wants to minimize the number of vehicle trips needed to distribute all newspapers produced at the production plant. Second, the company is interested in minimizing the time needed to consume all newspapers, i.e., the time needed to get all the newspapers taken by the final readers. The resulting routing problem combines aspects of the vehicle routing problem with time windows, the inventory routing problem, and additional constraints related to the production schedule. We propose a formulation and different heuristic approaches, as well as a hybrid method. Computational tests with real world data show that the hybrid method is the best in various problem settings.
A heuristic algorithm for the free newspaper delivery problem
S0377221713003548
During the emergency response to mass casualty incidents decisions relating to the extrication, treatment and transporting of casualties are made in a real-time, sequential manner. In this paper we describe a novel combinatorial optimization model of this problem which acknowledges its temporal nature by employing a scheduling approach. The model is of a multi-objective nature, utilizing a lexicographic view to combine objectives in a manner which capitalizes on their natural ordering of priority. The model includes pertinent details regarding the stochastic nature of casualty health, the spatial nature of multi-site emergencies and the dynamic capacity of hospitals. A Variable Neighborhood Descent metaheuristic is employed in order to solve the model. The model is evaluated over a range of potential problems, with results confirming its effective and robust nature.
A multi-objective combinatorial model of casualty processing in major incident response
S0377221713003561
We address a truck scheduling problem that arises in intermodal container transportation, where containers need to be transported between customers (shippers or receivers) and container terminals (rail or maritime) and vice versa. The transportation requests are handled by a trucking company which operates several depots and a fleet of homogeneous trucks that must be routed and scheduled to minimize the total truck operating time under hard time window constraints imposed by the customers and terminals. Empty containers are considered as transportation resources and are provided by the trucking company for freight transportation. The truck scheduling problem at hand is formulated as Full-Truckload Pickup and Delivery Problem with Time Windows (FTPDPTW) and is solved by a 2-stage heuristic solution approach. This solution method was specially designed for the truck scheduling problem but can be applied to other problems as well. We assess the quality of our solution approach on several computational experiments.
A truck scheduling problem arising in intermodal container transportation
S0377221713003573
In this paper, we demonstrate how to model a discrete-time dynamic process on a non-periodic time domain with applications to operations research. We introduce a discrete-time model of inventory with deterioration on domains where time points may be unevenly spaced over a time interval. We formalize the average cost function composed of storage, depreciation and back-ordering costs. The optimal condition is given to locate the optimal point that minimizes the average cost function. Finally, we present simulations to demonstrate how a manager can use this model to make inventory decisions.
Inventory model of deteriorating items on non-periodic discrete-time domains
S0377221713003585
This paper addresses a vehicle scheduling problem encountered in home health care logistics. It concerns the delivery of drugs and medical devices from the home care company’s pharmacy to patients’ homes, delivery of special drugs from a hospital to patients, pickup of bio samples and unused drugs and medical devices from patients. The problem can be considered as a special vehicle routing problem with simultaneous delivery and pickup and time windows, with four types of demands: delivery from depot to patient, delivery from a hospital to patient, pickup from a patient to depot and pickup from a patient to a medical lab. Each patient is visited by one vehicle and each vehicle visits each node at most once. Patients are associated with time windows and vehicles with capacity. Two mixed-integer programming models are proposed. We then propose a Genetic Algorithm (GA) and a Tabu Search (TS) method. The GA is based on a permutation chromosome, a split procedure and local search. The TS is based on route assignment attributes of patients, an augmented cost function, route re-optimization, and attribute-based aspiration levels. These approaches are tested on test instances derived from existing VRPTW benchmarks.
Heuristic algorithms for a vehicle routing problem with simultaneous delivery and pickup and time windows in home health care
S0377221713003597
Scientific Research Assessment (SRA) is receiving increasing attention in both academic and industry. More and more organizations are recognizing the importance of SRA for the optimal use of scarce resources. In this paper, a vague set theory based decision support approach is proposed for SRA. Specifically, a family of parameterized S-OWA operator is developed for the aggregation of vague assessments. The proposed approach is introduced to evaluate the research funding programs of the National Natural Science Foundation of China (NSFC). It provides a soft and expansive way to help the decision maker in NSFC to make his decisions. The proposed approach can also be used for some other agencies to make similar assessment.
A vague set based decision support approach for evaluating research funding programs
S0377221713003603
Group decision making is a type of decision problem in which multiple experts acting collectively, analyze problems, evaluate alternatives, and select a solution from a collection of alternatives. As the natural language is the standard representation of those concepts that humans use for communication, it seems natural that they use words (linguistic terms) instead of numerical values to provide their opinions. However, while linguistic information is readily available, it is not operational and thus it has to be made usable though expressing it in terms of information granules. To do so, Granular Computing, which has emerged as a unified and coherent framework of designing, processing, and interpretation of information granules, can be used. The aim of this paper is to present an information granulation of the linguistic information used in group decision making problems defined in heterogeneous contexts, i.e., where the experts have associated importance degrees reflecting their ability to handle the problem. The granulation of the linguistic terms is formulated as an optimization problem, solved by using the particle swarm optimization, in which a performance index is maximized by a suitable mapping of the linguistic terms on information granules formalized as sets. This performance index is expressed as a weighted aggregation of the individual consistency achieved by each expert.
A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts
S0377221713003615
Mergers and acquisitions (M&A), private equity and leveraged buyouts, securitization and project finance are characterized by the presence of contractual clauses (covenants). These covenants trigger the technical default of the borrower even in the absence of insolvency. Therefore, borrowers may default on loans even when they have sufficient available cash to repay outstanding debt. This condition is not captured by the net present value (NPV) distribution obtained through a standard Monte Carlo simulation. In this paper, we present a methodology for including the consequences of covenant breach in a Monte Carlo simulation, extending traditional risk analysis in investment planning. We introduce a conceptual framework for modeling technical and material breaches from the standpoint of both lenders and shareholders. We apply this framework to a real case study concerning the project financing of a 64-million euro biomass power plant. The simulation is carried out on the actual model developed by the financial advisor of the project and made available to the authors. Results show that both technical and material breaches have a statistically significant impact on the net present value distribution, and this impact is more relevant when leverage and cost of debt increase.
Risk analysis with contractual default. Does covenant breach matter?
S0377221713003627
The paper examines a new problem in the irregular packing literature that has many applications in industry: two-dimensional irregular (convex) bin packing with guillotine constraints. Due to the cutting process of certain materials, cuts are restricted to extend from one edge of the stock-sheet to another, called guillotine cutting. This constraint is common place in glass cutting and is an important constraint in two-dimensional cutting and packing problems. In the literature, various exact and approximate algorithms exist for finding the two dimensional cutting patterns that satisfy the guillotine cutting constraint. However, to the best of our knowledge, all of the algorithms are designed for solving rectangular cutting where cuts are orthogonal with the edges of the stock-sheet. In order to satisfy the guillotine cutting constraint using these approaches, when the pieces are non-rectangular, practitioners implement a two stage approach. First, pieces are enclosed within rectangle shapes and then the rectangles are packed. Clearly, imposing this condition is likely to lead to additional waste. This paper aims to generate guillotine-cutting layouts of irregular shapes using a number of strategies. The investigation compares three two-stage approaches: one approximates pieces by rectangles, the other two approximate pairs of pieces by rectangles using a cluster heuristic or phi-functions for optimal clustering. All three approaches use a competitive algorithm for rectangle bin packing with guillotine constraints. Further, we design and implement a one-stage approach using an adaptive forest search algorithm. Experimental results show the one-stage strategy produces good solutions in less time over the two-stage approach.
Construction heuristics for two-dimensional irregular shape bin packing with guillotine constraints
S0377221713003639
We study a class of mixed-integer programs for solving linear programs with joint probabilistic constraints from random right-hand side vectors with finite distributions. We present greedy and dual heuristic algorithms that construct and solve a sequence of linear programs. We provide optimality gaps for our heuristic solutions via the linear programming relaxation of the extended mixed-integer formulation of Luedtke et al. (2010) [13] as well as via lower bounds produced by their cutting plane method. While we demonstrate through an extensive computational study the effectiveness and scalability of our heuristics, we also prove that the theoretical worst-case solution quality for these algorithms is arbitrarily far from optimal. Our computational study compares our heuristics against both the extended mixed-integer programming formulation and the cutting plane method of Luedtke et al. (2010) [13]. Our heuristics efficiently and consistently produce solutions with small optimality gaps, while for larger instances the extended formulation becomes intractable and the optimality gaps from the cutting plane method increase to over 5%.
A linear programming approach for linear programs with probabilistic constraints
S0377221713003640
The linear models for the approximate solution of the problem of packing the maximum number of equal circles of the given radius into a given closed bounded domain G are proposed. We construct a grid in G; the nodes of this grid form a finite set of points T, and it is assumed that the centers of circles to be packed can be placed only at the points of T. The packing problems of equal circles with the centers at the points of T are reduced to 0–1 linear programming problems. A heuristic algorithm for solving the packing problems based on linear models is proposed. This algorithm makes it possible to solve packing problems for arbitrary connected closed bounded domains independently of their shape in a unified manner. Numerical results demonstrating the effectiveness of this approach are presented.
Linear models for the approximate solution of the problem of packing equal circles into a given domain
S0377221713003652
Data envelopment analysis (DEA) is a technique for evaluating relative efficiencies of peer decision making units (DMUs) which have multiple performance measures. These performance measures have to be classified as either inputs or outputs in DEA. DEA assumes that higher output levels and/or lower input levels indicate better performance. This study is motivated by the fact that there are performance measures (or factors) that cannot be classified as an input or output, because they have target levels with which all DMUs strive to achieve in order to attain the best practice, and any deviations from the target levels are not desirable and may indicate inefficiency. We show how such performance measures with target levels can be incorporated in DEA. We formulate a new production possibility set by extending the standard DEA production possibility set under variable returns-to-scale assumption based on a set of axiomatic properties postulated to suit the case of targeted factors. We develop three efficiency measures by extending the standard radial, slacks-based, and Nerlove–Luenberger measures. We illustrate the proposed model and efficiency measures by applying them to the efficiency evaluation of 36 US universities.
Incorporating performance measures with target levels in data envelopment analysis
S0377221713003676
This paper presents an approximation model for optimizing reorder points in one-warehouse N-retailer inventory systems subject to highly variable lumpy demand. The motivation for this work stems from close cooperation with a supply chain management software company, Syncron International, and one of their customers, a global spare parts provider. The model heuristically coordinates the inventory system using a near optimal induced backorder cost at the central warehouse. This induced backorder cost captures the impact that a reorder point decision at the warehouse has on the retailers’ costs, and decomposes the multi-echelon problem into solving N +1 single-echelon problems. The decomposition framework renders a flexible model that is computationally and conceptually simple enough to be implemented in practice. A numerical study, including real data from the case company, shows that the new model performs very well in comparison to existing methods in the literature, and offers significant improvements to the case company. With regards to the latter, the new model in general obtains realized service levels much closer to target while reducing total inventory.
A model for heuristic coordination of real life distribution inventory systems with lumpy demand
S0377221713003688
The introduction of individual transferable quotas (ITQs) into a fishery is going to change not only the amount of catch a fleet can take, but often also changes the fleet structure, particularly if total allowable catches are decreased. This can have an impact on the economic, social and environmental outcomes of fisheries management. Management Strategy Evaluation (MSE) modelling approaches are recognised as the most appropriate method for assessing impacts of management, but these require information as to how fleets may change under different management systems. In this study, we test the applicability of data envelopment analysis (DEA) based performance measures as predictors of how a fishing fleet might change under the introduction of ITQs and also at different levels of quota. In particular, we test the assumption that technical efficiency and capacity utilisation are suitable predictors of which boats are likely to exit the fishery. We also consider scale efficiency as an alternative predictor. We apply the analysis to the Torres Strait tropical rock lobster fishery that is transitioning to an ITQ-based management system for one sector of the fishery. The results indicate that capacity utilisation, technical efficiency and scale efficiency are reasonable indicators of who may remain in the fishery post ITQs. We find that the use of these measures to estimate the impacts of lower quota levels provides consistent fleet size estimates at the aggregate level, but which individual vessels are predicted to exit is dependent on the measure used.
DEA-based predictors for estimating fleet size changes when modelling the introduction of rights-based management
S0377221713003822
A Markowitz-type portfolio selection problem is to minimize a deviation measure of portfolio rate of return subject to constraints on portfolio budget and on desired expected return. In this context, the inverse portfolio problem is finding a deviation measure by observing the optimal mean-deviation portfolio that an investor holds. Necessary and sufficient conditions for the existence of such a deviation measure are established. It is shown that if the deviation measure exists, it can be chosen in the form of a mixed CVaR-deviation, and in the case of n risky assets available for investment (to form a portfolio), it is determined by a combination of (n +1) CVaR-deviations. In the later case, an algorithm for constructing the deviation measure is presented, and if the number of CVaR-deviations is constrained, an approximate mixed CVaR-deviation is offered as well. The solution of the inverse portfolio problem may not be unique, and the investor can opt for the most conservative one, which has a simple closed-form representation.
Inverse portfolio problem with mean-deviation model
S0377221713003834
We present a framework for sequential decision making in problems described by graphical models. The setting is given by dependent discrete random variables with associated costs or revenues. In our examples, the dependent variables are the potential outcomes (oil, gas or dry) when drilling a petroleum well. The goal is to develop an optimal selection strategy of wells that incorporates a chosen utility function within an approximated dynamic programming scheme. We propose and compare different approximations, from naive and myopic heuristics to more complex look-ahead schemes, and we discuss their computational properties. We apply these strategies to oil exploration over multiple prospects modeled by a directed acyclic graph, and to a reservoir drilling decision problem modeled by a Markov random field. The results show that the suggested strategies clearly improve the naive or myopic constructions used in petroleum industry today. This is useful for decision makers planning petroleum exploration policies.
Dynamic decision making for graphical models applied to oil exploration
S0377221713003846
This paper presents a literature survey on the fleet size and mix problem in maritime transportation. Fluctuations in the shipping market and frequent mismatches between fleet capacities and demands highlight the relevance of the problem and call for more accurate decision support. After analyzing the available scientific literature on the problem and its variants and extensions, we summarize the state of the art and highlight the main contributions of past research. Furthermore, by identifying important real life aspects of the problem which past research has failed to capture, we uncover the main areas where more research will be needed.
A survey on maritime fleet size and mix problems
S0377221713003858
In this paper, we introduce a new variant of the Vehicle Routing Problem (VRP), namely the Two-Stage Vehicle Routing Problem with Arc Time Windows (TS_VRP_ATWs) which generally emerges from both military and civilian transportation. The TS_VRP_ATW is defined as finding the vehicle routes in such a way that each arc of the routes is available only during a predefined time interval with the objective of overall cost minimization. We propose a Mixed Integer Programming (MIP) formulation and a heuristic approach based on Memetic Algorithm (MA) to solve the TS_VRP_ATW. The qualities of both solution approaches are measured by using the test problems in the literature. Experimental results show that the proposed MIP formulation provides the optimal solutions for the test problems with 25 and 50 nodes, and some test problems with 100 nodes. Results also show that the proposed MA is promising quality solutions in a short computation time.
Two-stage vehicle routing problem with arc time windows: A mixed integer programming formulation and a heuristic approach
S0377221713003871
This paper extends possibilities for analyzing incomplete ordinal information about the parameters of an additive value function. Such information is modeled through preference statements which associate sets of alternatives or attributes with corresponding sets of rankings. These preference statements can be particularly helpful in developing a joint preference representation for a group of decision-makers who may find difficulties in agreeing on numerical parameter values. Because these statements can lead to a non-convex set of feasible parameters, a mixed integer linear formulation is developed to establish a linear model for the computation of decision recommendations. This makes it possible to complete incomplete ordinal information with other forms of incomplete information.
Preference Programming with incomplete ordinal information
S0377221713003883
In this paper we consider the problem of optimal allocation of a redundant component for series, parallel and k-out-of-n systems of more than two components, when all the components are dependent. We show that for this problem is naturally to consider multivariate extensions of the joint bivariates stochastic orders. However, these extensions have not been defined or explicitly studied in the literature, except the joint likelihood ratio order, which was introduced by Shanthikumar and Yao (1991). Therefore we provide first multivariate extensions of the joint stochastic, hazard rate, reversed hazard rate order and next we provide sufficient conditions based on these multivariate extensions to select which component performs the redundancy.
On allocation of redundant components for systems with dependent components
S0377221713003895
A linguistic decision aiding technique for multi-criteria decision is presented. We define a relation between alternatives as multi-criteria semantic dominance (MCSD). It adopts the similar ideal of the stochastic dominance by utilizing the partial information of the decision maker’s preference, which is only ordinal or partially cardinal. The MCSD rules based on three typical types of semanteme functions are introduced and proven. By using these rules, all the alternatives under consideration are divided into two mutually exclusive sets called efficient set and inefficient set. The decision maker who has such a semanteme function will never choose the alternative from the corresponding inefficient set as the optimal one. In such a way, when we analyze the linguistic decision information, the inherent fuzziness of preference can be handled and several controversial operations of the linguistic terms can be avoided. An example is also provided to illustrate the procedure of the proposed method.
Multi-criteria semantic dominance: A linguistic decision aiding technique based on incomplete preference information
S0377221713003901
Information granulation and entropy are main approaches for investigating the uncertainty of information systems, which have been widely employed in many practical domains. In this paper, information granulation and uncertainty measures for interval-valued intuitionistic fuzzy binary granular structures are addressed. First, we propose the representation of interval-valued intuitionistic fuzzy information granules and examine some operations of interval-valued intuitionistic fuzzy granular structures. Second, the interval-valued intuitionistic fuzzy information granularity is introduced to depict the distinguishment ability of an interval-valued intuitionistic fuzzy granular structure (IIFGS), which is a natural extension of fuzzy information granularity. Third, we discuss how to scale the uncertainty of an IIFGS using the extended information entropy and the uncertainty among interval-valued intuitionistic fuzzy granular structures using the expanded mutual information derived from the presented intuitionistic fuzzy information entropy. Fourth, we discovery the relationship between the developed interval-valued intuitionistic fuzzy information entropy and the intuitionistic fuzzy information granularity presented in this paper.
Information granulation and uncertainty measures in interval-valued intuitionistic fuzzy information systems
S0377221713003913
Cellular manufacturing is the cornerstone of many modern flexible manufacturing techniques, taking advantage of the similarities between parts in order to decrease the complexity of the design and manufacturing life cycle. Part-Machine Grouping (PMG) problem is the key step in cellular manufacturing aiming at grouping parts with similar processing requirements or similar design features into part families and by grouping machines into cells associated to these families. The PMG problem is NP-complete and the different proposed techniques for solving it are based on heuristics. In this paper, a new approach for solving the PMG problem is proposed which is based on biclustering. Biclustering is a methodology where rows and columns of an input data matrix are clustered simultaneously. A bicluster is defined as a submatrix spanned by both a subset of rows and a subset of columns. Although biclustering has been almost exclusively applied to DNA microarray analysis, we present that biclustering can be successfully applied to the PMG problem. We also present empirical results to demonstrate the efficiency and accuracy of the proposed technique with respect to related ones for various formations of the problem.
Machine-part cell formation using biclustering
S0377221713003925
Firms that experience uncertainty in demand as well as challenging service levels face, among other things, the problem of managing employee shift numbers. Decisions regarding shift numbers often involve significant expansions or reductions in capacity, in response to changes in demand. In this paper, we quantify the impact of treating shifts in workforce expansion as investments, while considering required service level improvements. The decision to increase shifts, whether by employing temporary workers or hiring permanent employees, is one that involves significant risks. Traditional theories typically consider reversible investments, and thus do not capture the idiosyncrasies involved in shift management, in which costs are not fully reversible. In our study, by using real options theory, we quantify managers’ ability to consider this irreversibility, aiming to enable them to make shift decisions under conditions of uncertainty with the maximum level of flexibility. Our model aims to help managers make more accurate decisions with regard to shift expansion under service level targets, and to defer commitment until future uncertainties can be at least partially resolved. Overall, our investigation contributes to studies on the time required to introduce labour shift changes, while keeping the value of service level improvements in mind.
A real options approach to labour shifts planning under different service level targets
S0377221713003937
Quantitative decision support on personnel planning is often restricted to either rostering or staffing. There exist some approaches in which aspects at the staffing level and the rostering level are treated in a sequential way. Obviously, such practice risks producing suboptimal solutions at both decision levels. These arguments justify an integrated approach towards improving the overall quality of personnel planning. This contribution constitutes (1) the introduction of the roster quality staffing problem and (2) a three-step methodology that enables assessing the appropriateness of a personnel structure for achieving high quality rosters, while relying on an existing rostering algorithm. Based on the rostering assessment result, specific modifications to the personnel structure can be suggested at the staffing level. The approach is demonstrated by means of two different hospital cases, which have it that they are subject to complex rostering constraints. Experimental results show that the three-step methodology indeed points out alternative personnel structures that better comply with the rostering requirements. The roster analysis approach and the corresponding staffing recommendations integrate personnel planning needs at operational and tactical levels.
The roster quality staffing problem – A methodology for improving the roster quality by modifying the personnel structure
S0377221713004098
Transductive learning involves the construction and application of prediction models to classify a fixed set of decision objects into discrete groups. It is a special case of classification analysis with important applications in web-mining, corporate planning and other areas. This paper proposes a novel transductive classifier that is based on the philosophy of discrete support vector machines. We formalize the task to estimate the class labels of decision objects as a mixed integer program. A memetic algorithm is developed to solve the mathematical program and to construct a transductive support vector machine classifier, respectively. Empirical experiments on synthetic and real-world data evidence the effectiveness of the new approach and demonstrate that it identifies high quality solutions in short time. Furthermore, the results suggest that the class predictions following from the memetic algorithm are significantly more accurate than the predictions of a CPLEX-based reference classifier. Comparisons to other transductive and inductive classifiers provide further support for our approach and suggest that it performs competitive with respect to several benchmarks.
A memetic approach to construct transductive discrete support vector machines
S0377221713004104
A real life order-picking configuration that requires multiple pickers to cyclically move around fixed locations in a single direction is considered. This configuration is not the same, but shows similarities to, unidirectional carousel systems described in literature. The problem of minimising the pickers’ travel distance to pick all orders on this system is a variant of the clustered travelling salesman problem. An integer programming (IP) formulation of this problem cannot be solved in a realistic time frame for real life instances of the problem. A relaxation of this IP formulation is proposed that can be used to determine a lower bound on an optimal solution. It is shown that the solution obtained from this relaxation can always be transformed to a feasible solution for the IP formulation that is, at most, within one pick cycle of the lower bound. The computational results and performance of the proposed methods as well as adapted order sequencing approaches for bidirectional carousel systems from literature are compared to one another by means of real life historical data instances obtained from a retail distribution centre.
Order sequencing on a unidirectional cyclical picking line
S0377221713004116
An important issue in the management of urban traffic networks is the estimation of origin–destination (O–D) matrices whose entries represent the travel demands of network users. We discuss the challenges of O–D matrix estimation with incomplete, imprecise data. We propose a fuzzy set-based approach that utilises successive linear approximation. The fuzzy sets used have triangular membership functions that are easy to interpret and enable straightforward calibration of the parameters that weight the discrepancy between observed data and those predicted by the proposed approach. The method is potentially useful when prior O–D matrix entry estimates are unavailable or scarce, requiring trip generation information on origin departures and/or destination arrivals, leading to multiple modelling alternatives. The method may also be useful when there is no O–D matrix that can be user-optimally assigned to the network to reproduce observed link counts exactly. The method has been tested on some numerical examples from the literature and the results compare favourably with the results of earlier methods. It has also been successfully used to estimate O–D matrices for a practical urban traffic network in Brazil.
A fuzzy set-based approach to origin–destination matrix estimation in urban traffic networks with imprecise data
S0377221713004128
This paper addresses the ring star problem (RSP). The goal is to locate a cycle through a subset of nodes of a network aiming to minimize the sum of the cost of installing facilities on the nodes on the cycle, the cost of connecting them and the cost of assigning the nodes not on the cycle to their closest node on the cycle. A fast and efficient evolutionary algorithm is developed which is based on a new formulation of the RSP as a bilevel programming problem with one leader and two independent followers. The leader decides which nodes to include in the ring, one follower decides about the connections of the cycle and the other follower decides about the assignment of the nodes not on the cycle. The bilevel approach leads to a new form of chromosome encoding in which genes are associated to values of the upper level variables. The quality of each chromosome is evaluated by its fitness, by means of the objective function of the RSP. Hence, in order to compute the value of the lower level variables, two optimization problems are solved for each chromosome. The computational results show the efficiency of the algorithm in terms of the quality of the solutions yielded and the computing time. A study to select the best configuration of the algorithm is presented. The algorithm is tested on a set of benchmark problems providing very accurate solutions within short computing times. Moreover, for one of the problems a new best solution is found.
An efficient evolutionary algorithm for the ring star problem
S0377221713004141
In selecting sites for conservation purposes connectivity of habitat is important for allowing species to move freely within a protected area. The aim of the Reserve Network Design Problem is to choose a network of contiguous sites which maximises some conservation objective subject to various constraints. The problem has been solved using both heuristic and exact methods. Heuristic methods can handle much larger problems than exact methods but cannot guarantee an optimal solution. Improvements in both computer power and optimisation algorithms have increased the attractiveness of exact methods. The aim of this work is to formulate an improved algorithm for solving the Reserve Network Design Problem. Based on the concept of the transshipment problem a mixed integer programming model is formulated that achieves contiguity of the selected sites. The model is simpler in concept and to implement than previous exact models and does not require any assumptions about the regular shape of candidate sites. The method easily handles the case where more than one reserve system is required. We illustrate this with an example obtaining the trade-off between the number of contiguous areas and utility. We also illustrate that the important property of compactness can be achieved while maintaining contiguity of selected sites.
A new method to solve the fully connected Reserve Network Design Problem