Columns: FileName (string, length 17), Abstract (string, length 163 to 6.01k), Title (string, length 12 to 421)
S0377221715007225
Mass customization is the new frontier in business competition for both manufacturing and service industries. To improve customer satisfaction, reduce lead-times and cut costs, families of similar products are built jointly by combining reusable parts that implement the features demanded by the customers. To guarantee the validity of the products derived from mass customization processes, feature dependencies and incompatibilities are usually specified with a variability model. As market demand grows and evolves, variability models become increasingly complex. In such entangled models it is hard to identify which features are essential, dispensable, highly required by other features, or highly incompatible with the remaining features. This paper exposes the limitations of existing approaches to gather such knowledge and provides efficient algorithms to retrieve that information from variability models.
Augmenting measure sensitivity to detect essential, dispensable and highly incompatible features in mass customization
S0377221715007237
An operational research (OR) practitioner designing an intervention needs to engage in a practical process for choosing methods and implementing them. When a team of OR practitioners does this, and/or clients and stakeholders are involved, the social dynamics of designing the approach can be complex. So far, hardly any theory has been provided to support our understanding of these social dynamics. To this end, our paper offers a theory of ‘boundary games’. It is proposed that decision making on the configuration of the OR approach is shaped by communications concerning boundary judgements. These communications involve the OR practitioners in the team (and other participants, when relevant) ‘setting’, ‘following’, ‘enhancing’, ‘wandering outside’, ‘challenging’ and ‘probing’ boundaries concerning the nature of the context and the methods to be used. Empirical vignettes are provided of a project where three OR practitioners with different forms of methodological expertise collaborated on an intervention to support a Regional Council in New Zealand. In deciding how to approach a problem structuring workshop where the Regional Council employees would be participants, the OR team had to negotiate their methodological boundaries in some detail. The paper demonstrates that the theory of boundary games helps to analyse and describe the shifts in thinking that take place in this kind of team decision making. A number of implications for OR practitioners are discussed, including how this theory can contribute to reflective practice and improve awareness of what is happening during communications with OR colleagues, clients and participants.
Boundary games: How teams of OR practitioners explore the boundaries of intervention
S0377221715007249
Despite the complexity of implementing environmentally sustainable practices, an increasing number of firms have invested in eco-activities. This study investigates the association between eco-activities and operating performance over time. Moreover, this study explores the impact of eco-approaches (eco-collaboration and eco-certification) on operating performance. A difference-in-differences research design is established using operating performance data (COMPUSTAT) and eco-activity data (eco-announcements, eco-certification providers, and ASSET4). Empirical results reveal that eco-activities in the computer and electronics industry are associated with increased margin and revenue performance; however, the realization of benefits takes time (improvement typically occurs within three years of the corresponding eco-announcement). Furthermore, although eco-collaboration tends to be expensive to establish, operating performance is improved over the long term. This study also finds that eco-certification is associated with increased improvement in operating performance delivered by eco-activities.
Eco-activities and operating performance in the computer and electronics industry
S0377221715007250
We study models and algorithms for a Network Reduction (NR) problem that entails constructing a reduced network in place of an existing large network so that the shortest path lengths between specified node pairs in this network are equal to or only slightly longer than the corresponding shortest path lengths in the original network. Solving this problem can be very useful both to accelerate shortest path calculations in many practical contexts and to reduce the size of optimization models that contain embedded shortest path problems. This work was motivated by a real problem of scheduling and routing resources to perform spatially dispersed jobs, but also has other applications. We consider two variants of the NR problem—a Min-Size NR problem that minimizes the number of arcs in the reduced network while ensuring that the shortest path lengths in this network are close to the original lengths, and a Min-Length NR problem that minimizes a weighted sum of shortest path lengths over all the specified node pairs while limiting the number of arcs in the reduced network. We model both problems as integer programs with multi-commodity flows, and propose optimization-based heuristic algorithms to solve them. These methods include preprocessing, a shortest path-based procedure with local improvement, and a dual ascent algorithm. We report on the successful applications to reduce the network for practical infrastructure project planning and to condense a transportation network for distribution planning. We also compare these solutions with those obtained using algorithms for minimum length trees.
Models and algorithms for network reduction
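The abstract above describes optimization-based heuristics built on multi-commodity-flow integer programs. As a much simpler point of reference, the sketch below is a naive greedy arc-removal heuristic for the Min-Size idea, not the paper's method: it drops arcs one by one as long as the shortest path length of every specified node pair stays within a chosen tolerance factor of its original length (networkx, invented data).

```python
# Naive greedy arc-removal sketch for the Min-Size idea (not the paper's
# multi-commodity-flow models, preprocessing or dual ascent). Data invented.
import networkx as nx

def greedy_reduce(G, pairs, alpha=1.05):
    """Drop arcs while every pair's shortest path length stays within a
    factor alpha of its length in the original network G."""
    target = {(s, t): alpha * nx.shortest_path_length(G, s, t, weight="w")
              for s, t in pairs}
    H = G.copy()
    for u, v in sorted(G.edges, key=lambda e: -G.edges[e]["w"]):
        H.remove_edge(u, v)
        ok = all(nx.has_path(H, s, t) and
                 nx.shortest_path_length(H, s, t, weight="w") <= limit
                 for (s, t), limit in target.items())
        if not ok:                       # removal hurts some pair: keep the arc
            H.add_edge(u, v, w=G.edges[u, v]["w"])
    return H

G = nx.DiGraph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 2), ("a", "c", 5),
                           ("c", "d", 1), ("b", "d", 4)], weight="w")
H = greedy_reduce(G, pairs=[("a", "c"), ("a", "d")])
print(sorted(H.edges()))                 # fewer arcs, path lengths preserved
```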
S0377221715007262
This paper seeks to determine how the double marginalization phenomenon affects the tradeoff between polluting emissions and abatement activities related to pollution accumulation in a supply chain composed of one manufacturer and one retailer. The environmental consequence of this inefficiency, which emerges in a non-cooperative vertical setting governed by a single-parameter contract, is overlooked in the literature on pollution control. In the setup of a two-stage game, we investigate the impact of double marginalization on the non-cooperative equilibrium. To check whether there are differences between dynamic and strategic effects of double marginalization on pollution accumulation, both the open-loop and feedback Nash equilibria are derived over a finite time horizon, with the cooperative solution as a benchmark.
Pollution accumulation and abatement policy in a supply chain
S0377221715007274
The dynamic thermal rating is a new technology that utilizes the capacity of power transmission lines based on the ambient factors and the line condition. It usually offers higher thermal capacity than the traditional static rating. We propose a multi-stage mixed integer programming model and find the optimal investment plan for the dynamic ratings using Benders decomposition. The investment plan includes when and which line should be upgraded to dynamic rating and which line should be switched out of service. The problem is decomposed into a master problem and three sub-problems. The master problem explores the candidate lines for both the investment plan and the switching plan, throughout the planning horizon. The sub-problems evaluate the proposed plans in terms of unmet demand and generation cost. Generation and transmission contingencies are also included in the model. We use our model on Garver’s system and IEEE 118-bus power systems to demonstrate our solution approach. We conduct sensitivity analyses and study the uncertainty in real-time thermal ratings, loads, and the discounting rate. Our studies show that the utilization of the dynamic ratings and the practice of transmission switching are complementary and can reduce the cost on the 118-bus system by up to 30 percent.
Optimal investment plan for dynamic thermal rating using Benders decomposition
S0377221715007286
This paper analyzes the impact of production forecast errors on the expansion planning of a power system and investigates the influence of market design to facilitate the integration of renewable generation. For this purpose, we propose a programming modeling framework to determine the generation and transmission expansion plan that minimizes system-wide investment and operating costs, while ensuring a given share of renewable generation in the electricity supply. Unlike existing ones, this framework includes both a day-ahead and a balancing market so as to capture the impact of both production forecasts and the associated prediction errors. Within this framework, we consider two paradigmatic market designs that essentially differ in whether the day-ahead generation schedule and the subsequent balancing re-dispatch are co-optimized or not. The main features and results of the model set-ups are discussed using an illustrative four-node example and a more realistic 24-node case study.
Impact of forecast errors on expansion planning of power systems with a renewables target
S0377221715007298
The game theoretic perspective in auction bidding has provided a powerful normative framework for the analysis of auctions, and it has generated an impressive volume of research contributions. Tracing this research, we review key recent advances, which have greatly expanded our understanding of the operation of auctions and produced a more accurate analysis of mechanism design and optimal bidding. We follow various experimental studies which have shown that the predictive power of auction theory remains in many cases limited. For this reason, we concentrate on research themes which enhance the applicability of auction theory, leading to more realistic descriptions of bidding behaviors. We identify important innovations which expand the environment of auction bidding and carry us beyond rational decision making. We conclude by providing directions for future research and discussing their implications.
Optimal bidding in auctions from a game theory perspective
S0377221715007304
We say that a polygon inscribed in a circle is asymmetric if it contains no two antipodal points, that is, no two endpoints of the same diameter. Given n diameters of a circle and a positive integer k < n, this paper addresses the problem of computing a maximum-area asymmetric k-gon whose vertices are k of the endpoints of the given diameters. The study of this type of polygon is motivated by ethnomusicological applications.
Asymmetric polygons with maximum area
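As a concrete reading of the problem statement, the brute-force sketch below (not the paper's algorithm, and viable only for tiny instances) enumerates k-subsets of the 2n diameter endpoints, discards subsets containing both endpoints of the same diameter, and evaluates the inscribed polygon's area with the shoelace formula.

```python
# Brute-force sketch for tiny instances only (not the paper's algorithm):
# choose k endpoints of n given unit-circle diameters, no two antipodal,
# maximising the inscribed polygon's area via the shoelace formula.
from itertools import combinations
from math import cos, sin, pi, atan2

def max_area_asymmetric_kgon(diameter_angles, k):
    # endpoint j of diameter i lies at angle theta_i + j*pi on the unit circle
    points = [(i, (cos(t + j * pi), sin(t + j * pi)))
              for i, t in enumerate(diameter_angles) for j in (0, 1)]
    best_area, best_poly = 0.0, None
    for subset in combinations(points, k):
        if len({i for i, _ in subset}) < k:   # both endpoints of one diameter
            continue
        poly = sorted((p for _, p in subset), key=lambda p: atan2(p[1], p[0]))
        area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                             for (x1, y1), (x2, y2)
                             in zip(poly, poly[1:] + poly[:1])))
        if area > best_area:
            best_area, best_poly = area, poly
    return best_area, best_poly

angles = [0.0, 0.4, 1.1, 2.0, 2.7]            # five diameters
print(round(max_area_asymmetric_kgon(angles, 3)[0], 4))
```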
S0377221715007316
In this paper, a hybrid algorithm based on variable neighborhood search and ant colony optimization is proposed to solve the single row facility layout problem. In the proposed algorithm, three neighborhood structures are utilized to enhance the exploitation ability. Meanwhile, new gain techniques are developed to reduce the computational effort of evaluating the objective function values. Furthermore, ant colony optimization is used as the shaking step to avoid becoming trapped in local optima. In addition, a novel pheromone updating rule is proposed based on both the best and worst solutions of the ants. A reverse criterion based on the edit distance measure is applied to help ants converge to the best solution and to reduce the solution space. Finally, numerical simulations are carried out on the benchmark instances, and comparisons with some existing algorithms demonstrate the effectiveness of the proposed algorithm.
Hybridizing variable neighborhood search with ant colony optimization for solving the single row facility layout problem
S0377221715007328
In this paper we consider a single-server queueing model in which the customers arrive according to a versatile point process that includes correlated arrivals. An arriving customer can request either an individual service or a cooperative service (to be offered along with other customers with similar requests) with some pre-specified probabilities. There is a limit placed on the number of customers requiring cooperative services at any given time. Assuming the service times to be exponentially distributed with possibly different parameters depending on individual or cooperative services, we analyze this model using the matrix-analytic method. We also simulate this model to obtain a couple of key performance measures that are difficult to compute analytically or numerically, in order to show the benefit of cooperative services in queueing. Interesting numerical examples from both the analytical and simulated models are discussed. We believe this type of queueing model, which is very much applicable in service areas, has not been studied in the literature.
Queueing models with optional cooperative services
S0377221715007547
The nucleolus is one of the most important solution concepts in cooperative game theory as a result of its attractive properties: it always exists (if the imputation set is non-empty), is unique, and is always in the core (if the core is non-empty). However, computing the nucleolus is very challenging because it involves the lexicographical minimization of an exponentially large number of excess values. We present a method for computing the nucleoli of large games, including some structured games with more than 50 players, using nested linear programs (LPs). Although different variations of the nested LP formulation have been documented in the literature, they have not been used for large games because of the large size and number of LPs involved. In addition, subtle issues such as how to deal with multiple optimal solutions and with tight constraint sets need to be resolved in each LP in order to formulate and solve the subsequent ones. Unfortunately, these technical issues have been largely overlooked in the literature. We treat these issues rigorously and provide a new nested LP formulation that is smaller in terms of the number of large LPs and their sizes. We provide numerical tests for several games, including the general flow games, the coalitional skill games and the weighted voting games, with up to 100 players.
Finding the nucleoli of large cooperative games
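To make the nested-LP idea concrete, the sketch below solves only the first LP of such a sequence (the least-core LP that maximizes the minimum excess) for a made-up three-player game; the full nucleolus computation would iterate, fixing the coalitions whose constraints are tight at each stage, which is exactly the part the paper treats rigorously at scale.

```python
# Only the first LP of a nested-LP nucleolus computation: the least-core LP
# max eps  s.t.  x(S) >= v(S) + eps for all proper coalitions, x(N) = v(N),
# for a made-up 3-player game. The nested procedure would iterate from here.
import itertools
from scipy.optimize import linprog

v = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}
players = (1, 2, 3)
coalitions = [S for r in (1, 2) for S in itertools.combinations(players, r)]

c = [0, 0, 0, -1]                                # maximise eps = minimise -eps
A_ub = [[-(p in S) for p in players] + [1] for S in coalitions]  # -x(S)+eps <= -v(S)
b_ub = [-v[S] for S in coalitions]
A_eq, b_eq = [[1, 1, 1, 0]], [v[(1, 2, 3)]]      # efficiency: x(N) = v(N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)
print("allocation:", res.x[:3].round(3), "max-min excess:", round(res.x[3], 3))
```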
S0377221715007559
Complete tree search is a highly effective method for tackling Mixed-Integer Programming (MIP) problems, and over the years, a plethora of branching heuristics have been introduced to further refine the technique for varying problems. Yet while each new approach continued to push the state-of-the-art, parallel research began to repeatedly demonstrate that there is no single method that would perform the best on all problem instances. Tackling this issue, portfolio algorithms took the process a step further, by trying to predict the best heuristic for each instance at hand. However, the motivation behind algorithm selection can be taken further still, and used to dynamically choose the most appropriate algorithm for each encountered sub-problem. In this paper we identify a feature space that captures both the evolution of the problem in the branching tree and the similarity among sub-problems of instances from the same MIP models. We show how to exploit these features on-the-fly in order to decide the best time to switch the branching variable selection heuristic and then show how such a system can be trained efficiently. Experiments on a highly heterogeneous collection of hard MIP instances show significant gains over the standard pure approach which commits to a single heuristic throughout the search.
DASH: Dynamic Approach for Switching Heuristics
S0377221715007560
In this paper we generalise existing models of loss-averse preferences. This extension clarifies the impact of stochastic changes in risk on the optimal degree of risk taking. Our more general model highlights an intuitive link between the literature on loss-averse behaviours and the notions of prudence and temperance recently introduced in the literature. We also stress the link between our approach and the use of VaR and CVaR as risk measures.
Loss-averse preferences and portfolio choices: An extension
S0377221715007572
Emissions of greenhouse gases are not as free as they used to be. Under stringent regulations, manufacturers increasingly find that their emissions have a steep monetary, environmental and social price. In the manufacturing industry, remanufacturing has an important role to play with its inherent economic, environmental and social opportunities which warrant regulatory action. In this paper, we characterize the optimal emissions taxation policy in order for remanufacturing to deliver those benefits. In particular, using a leader-follower Stackelberg game model, we investigate the impact of emissions taxes on the optimal production and pricing decisions of a manufacturer who could remanufacture its own product. We characterize whether/under what conditions the manufacturer’s decision to remanufacture under emissions regulation reduces its environmental impact (as measured by total greenhouse emissions), whilst increasing its profits (a win-win situation). On the policy side, we delineate how emissions taxes can be instituted to realize the inherent economic, environmental and social benefits of remanufacturing (the triple win of remanufacturing). Two critical components of this analysis are the issue of demand cannibalization from the remanufactured product and the low-emission advantage of remanufacturing. We further investigate the impact of remanufacturing- and society-related factors on the balance among firm-level profits, environmental impact and social welfare, where the collection rate of end-of-use products and the cost to the environment turn out to be decisive in deriving the triple win benefits from remanufacturing. Last, we extend our analyses to an emissions trading setting where emissions are regulated using tradeable permits, and investigate the economic implications of remanufacturing under emissions trading vis-à-vis emissions taxation.
Managing new and remanufactured products to mitigate environmental damage under emissions regulation
S0377221715007584
Component commonality is an efficient mechanism to mitigate the negative impact of a highly diversified product line. In this paper, we address the optimal commonality problem in a real multidimensional space, developing a novel algorithmic approach aimed at transforming a continuous multidimensional decision problem into a discrete decision problem. Moreover, we show that our formulation is equivalent to the k-median facility location problem. It is well known that when several dimensions are included and components’ features are defined on the real line, the number of potential locations grows exponentially, hindering the application of standard integer programming techniques for solving the problem. However, as formulated, the multidimensional component commonality problem is a supermodular minimization problem, a family of problems for which greedy-type heuristics show very good performance. Based on this observation, we provide a collection of descent-greedy algorithms which benefit from certain structural properties of the problem and can handle substantially large instances. Additionally, a matheuristic is developed to improve the performance of the algorithms. Finally, results of a number of computational experiments, which testify to the good performance of our heuristics, are presented.
The component commonality problem in a real multidimensional space: An algorithmic approach
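Since the abstract relates the problem to k-median location and to greedy heuristics for supermodular minimization, the sketch below shows a plain greedy k-median heuristic on invented data; it is only a rough analogue of the descent-greedy algorithms described, not their implementation.

```python
# Plain greedy k-median heuristic as a rough analogue of the descent-greedy
# idea (not the paper's algorithms): repeatedly open the candidate component
# that most reduces the total assignment cost of the product requirements.
import numpy as np

def greedy_k_median(requirements, candidates, k):
    # D[i, j] = distance from requirement point i to candidate component j
    D = np.linalg.norm(requirements[:, None, :] - candidates[None, :, :], axis=2)
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(len(candidates)) if j not in chosen]
        cost = {j: D[:, chosen + [j]].min(axis=1).sum() for j in remaining}
        chosen.append(min(cost, key=cost.get))      # cheapest marginal addition
    return chosen, D[:, chosen].min(axis=1).sum()

rng = np.random.default_rng(0)
reqs = rng.uniform(size=(200, 3))        # product requirements in R^3
cands = rng.uniform(size=(40, 3))        # candidate common components
chosen, total_cost = greedy_k_median(reqs, cands, k=5)
print(chosen, round(total_cost, 2))
```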
S0377221715007596
The problem of the aggregation of multi-agent preference orderings has received considerable attention in the scientific literature, because of its importance for different fields of research. Yager (2001) proposed an algorithm for addressing this problem when the agents’ importance is expressed through a rank-ordering, instead of a set of weights. The algorithm by Yager is simple and automatable but is subject to some constraints, which may limit its range of application: (i) preference orderings should not include incomparable and/or omitted alternatives, and (ii) the fused ordering may sometimes not reflect the majority of the multi-agent preference orderings. The aim of this article is to present a generalized version of the algorithm by Yager, which overcomes the above limitations and, in general, is adaptable to less stringent input data. A detailed description of the new algorithm is supported by practical examples.
A new proposal for fusing individual preference orderings by rank-ordered agents: A generalization of Yager's algorithm
S0377221715007602
In this paper, a risk-based factorial probabilistic inference method is proposed to address the stochastic objective function and constraints as well as their interactions in a systematic manner. To tackle random uncertainties, decision makers’ risk preferences are taken into account in the decision process. Statistical significance for each of the linear, nonlinear, and interaction effects of risk parameters is uncovered through conducting a multi-factorial analysis. The proposed methodology is applied to a case study of flood control to demonstrate its validity and applicability. A number of decision alternatives are obtained under various combinations of risk levels associated with the objective function and chance constraints, facilitating an in-depth analysis of trade-offs between economic outcomes and associated risks. Dynamic complexities are addressed through a two-stage decision process as well as through capacity expansion planning for flood diversion within a multi-region, multi-flood-level, and multi-option context. Findings from the factorial experiment reveal the multi-level interactions between risk parameters and quantify their contributions to the variability of the total system cost. The proposed method is compared against the fractile criterion optimization model and the chance-constrained programming technique, respectively.
Risk-based factorial probabilistic inference for optimization of flood control systems with correlated uncertainties
S0377221715007614
The aim of this paper is to provide a new straightforward measure-free methodology based on convex hulls to determine the no-arbitrage pricing bounds of an option (European or American). The pedagogical interest of our methodology is also briefly discussed. The central result, which is elementary, is presented for a one period model and is subsequently used for multiperiod models. It shows that a certain point, called the forward point, must lie inside a convex polygon. Multiperiod models are then considered and the pricing bounds of a put option (European and American) are explicitly computed. We then show that the barycentric coordinates of the forward point can be interpreted as a martingale pricing measure. An application is provided for the trinomial model where the pricing measure has a simple geometric interpretation in terms of areas of triangles. Finally, we consider the case of entropic barycentric coordinates in a multi asset framework.
A new elementary geometric approach to option pricing bounds in discrete time models
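A numerical companion to the one-period geometric picture described above: the no-arbitrage price bounds of a European put can be computed as the minimum and maximum discounted expected payoff over all probability weights that reproduce the forward price of the underlying, an LP whose feasibility corresponds to the "forward point inside the convex polygon" condition. Toy numbers only.

```python
# No-arbitrage bounds of a European put in a one-period, four-state model:
# min/max expected payoff over all weights q >= 0 with sum(q) = 1 and
# sum(q * S_T) = forward price (the LP counterpart of the convex-hull picture).
from scipy.optimize import linprog

S0, r, K = 100.0, 0.02, 100.0
S_T = [70.0, 90.0, 105.0, 125.0]                 # terminal underlying prices
payoff = [max(K - s, 0.0) for s in S_T]          # European put payoff
A_eq = [[1.0] * len(S_T), S_T]                   # sum q = 1, sum q*S_T = forward
b_eq = [1.0, S0 * (1 + r)]
bounds = [(0.0, None)] * len(S_T)

lo = linprog(payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog([-p for p in payoff], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("put price bounds:", round(lo.fun / (1 + r), 3), round(-hi.fun / (1 + r), 3))
```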
S0377221715007626
Personnel rostering is a personnel scheduling problem in which shifts are assigned to employees, subject to complex organisational and contractual time-related constraints. Academic advances in this domain mainly focus on solving specific variants of this problem using intricate exact or (meta)heuristic algorithms, while little attention has been devoted to studying the underlying structure of the problems. The general assumption is that these problems, even in their most simplified form, are NP-hard. However, such claims are rarely supported with a proof for the problem under study. The present paper refutes this assumption by presenting minimum cost network flow formulations for several personnel rostering problems. Additionally, these problems are situated among the existing academic literature to obtain insights into what makes personnel rostering hard.
Polynomially solvable personnel rostering problems
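The claim above that some rostering variants reduce to minimum cost network flow can be illustrated with a toy instance: employees supply work capacity, shifts demand coverage, and arc costs encode preferences. The sketch below (networkx, invented data) is only illustrative; real rostering constraints are far richer.

```python
# Toy min-cost-flow formulation of a rostering-like assignment.
# Illustrative only; data and costs are invented.
import networkx as nx

G = nx.DiGraph()
shifts = {"Mon-E": 1, "Mon-L": 1, "Tue-E": 1}      # each shift needs one person
employees = {"Ann": 2, "Bob": 2}                   # max shifts per employee

G.add_node("src", demand=-sum(shifts.values()))    # total work to be covered
for e, cap in employees.items():
    G.add_node(e, demand=0)
    G.add_edge("src", e, capacity=cap, weight=0)
for s, need in shifts.items():
    G.add_node(s, demand=need)

cost = {("Ann", "Mon-E"): 0, ("Ann", "Mon-L"): 2, ("Ann", "Tue-E"): 1,
        ("Bob", "Mon-E"): 3, ("Bob", "Mon-L"): 0, ("Bob", "Tue-E"): 0}
for (e, s), c in cost.items():
    G.add_edge(e, s, capacity=1, weight=c)          # cost = dispreference

flow = nx.min_cost_flow(G)
assignment = [(e, s) for e in employees for s in shifts if flow[e].get(s, 0) == 1]
print(assignment)
```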
S0377221715007638
In this paper we present a novel approach for firm default probability estimation. The methodology is based on multivariate contingent claim analysis and pair copula constructions. For each considered firm, balance sheet data are used to assess the asset value, and to compute its default probability. The asset pricing function is expressed via a pair copula construction, and it is approximated via Monte Carlo simulations. The methodology is illustrated through an application to the analysis of both operative and defaulted firms.
Default probability estimation via pair copula constructions
S0377221715007833
Environmental, social and economic concerns motivate the operation of closed-loop supply chain networks (CLSCN) in many industries. We propose a novel profit maximization model for CLSCN design as a mixed-integer linear program in which there is flexibility in covering the proportions of demand satisfied and returns collected based on the firm's policies. Our major contribution is to develop a novel hybrid robust-stochastic programming (HRSP) approach to simultaneously model two different types of uncertainties by including stochastic scenarios for transportation costs and polyhedral uncertainty sets for demands and returns. Transportation cost scenarios are generated using a Latin Hypercube Sampling method and scenario reduction is applied to consolidate them. An accelerated stochastic Benders decomposition algorithm is proposed for solving this model. To speed up the convergence of this algorithm, valid inequalities are introduced to improve the lower bound quality, and also a Pareto-optimal cut generation scheme is used to strengthen the Benders optimality cuts. Numerical studies are performed to verify our mathematical formulation and also demonstrate the benefits of the HRSP approach. The performance improvements achieved by the valid inequalities and Pareto-optimal cuts are demonstrated in randomly generated instances.
Hybrid robust and stochastic optimization for closed-loop supply chain network design using accelerated Benders decomposition
S0377221715007845
In marketing analytics applications in OR, the modeler often faces the problem of selecting key variables from a large number of possibilities. For example, SKU level retail store sales are affected by inter- and intra-category effects which potentially need to be considered when deciding on promotional strategy and producing operational forecasts. But no research has yet put this well-accepted concept into forecasting practice: an obvious obstacle is the ultra-high dimensionality of the variable space. This paper develops a four-step methodological framework to overcome the problem. It is illustrated by investigating the value of both intra- and inter-category SKU level promotional information in improving forecast accuracy. The method consists of the identification of potentially influential categories, the building of the explanatory variable space, variable selection and model estimation by a multistage LASSO regression, and the use of a rolling scheme to generate forecasts. The success of this new method for dealing with high dimensionality is demonstrated by improvements in forecasting accuracy compared to alternative methods of simplifying the variable space. The empirical results show that models integrating more information perform significantly better than the baseline model when using the proposed methodology framework. In general, we can improve the forecasting accuracy by 12.6 percent over the model using only the SKU's own predictors. But of the improvements achieved, 95 percent comes from the intra-category information, and only 5 percent from the inter-category information. The substantive marketing results also have implications for promotional category management.
Demand forecasting with high dimensional data: The case of SKU retail sales forecasting with intra- and inter-category promotional information
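A stripped-down illustration of the core mechanic described above, LASSO-based selection over a wide promotional feature matrix combined with rolling one-step-ahead forecasts, on simulated data; this is a generic sketch, not the paper's four-step multistage framework or its empirical results.

```python
# Generic sketch: LASSO variable selection over a wide promo-feature matrix
# with a rolling forecast origin. Data are simulated, not the paper's.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
T, p = 150, 200                       # 150 weeks, 200 candidate promo features
X = rng.binomial(1, 0.2, size=(T, p)).astype(float)
beta = np.zeros(p); beta[:5] = [8, -4, 3, 2, -2]   # only 5 features matter
y = 50 + X @ beta + rng.normal(0, 1, T)            # SKU sales

forecasts, actuals = [], []
for t in range(100, T):               # rolling origin, one-step-ahead
    model = LassoCV(cv=5).fit(X[:t], y[:t])
    forecasts.append(model.predict(X[t:t + 1])[0])
    actuals.append(y[t])

mae = np.mean(np.abs(np.array(forecasts) - np.array(actuals)))
print("selected features:", int(np.sum(model.coef_ != 0)), " MAE:", round(mae, 2))
```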
S0377221715007857
This paper studies the implications of upstream and/or downstream horizontal mergers on suppliers, retailers and consumers, in a bilateral oligopolistic system. We especially focus on market power and operational synergy benefits that such mergers engender. Starting with a benchmark pre-merger scenario in which firms compete on prices at each level, we find that the above two consequences individually almost have opposite effects on the merging and non-merging firms’ optimal decisions/profits after a merger. Furthermore, even though the effects of upstream and downstream mergers are different, the vertical supply chain partners will always try to reduce their losses if the market power effect dominates, but will take actions that improve their profits if the synergy effect is stronger. The above results are robust enough to hold even when taking into account intra-brand competition among retailers.
Effects of upstream and downstream mergers on supply chain profitability
S0377221715007869
A great deal of recent literature discusses the major anomalies that have appeared in the interest rate market following the credit crunch in August 2007. A major consequence was the development of spreads between quantities that had previously coincided. In particular, we consider the spread that opened up between the Libor rate and the OIS rate, and the consequent empirical evidence that FRA rates can no longer be replicated using Libor spot rates due to the presence of a Basis spread between floating legs of different tenors. We develop a credit risk model for pricing Basis Swaps in a multi-curve setup. The Libor rate is considered here as a risky rate, subject to the credit risk of a generic counterparty whose credit quality is refreshed at each fixing date. A defaultable HJM methodology is used to model the term structure of the credit spread, defined through the implied default intensity of the contributing banks of the Libor corresponding to a chosen tenor. A forward credit spread volatility function depending on the entire credit spread term structure is assumed. In this context, we implement the model and obtain the price of Basis Swaps using a numerical scheme based on the Euler–Maruyama stochastic integral approximation and the Monte Carlo method.
A defaultable HJM modelling of the Libor rate for pricing Basis Swaps after the credit crunch
S0377221715007870
The selection of an optimal process mean is an important problem in production planning and quality control research. Most of the previous studies in the field have analyzed the problem for a fixed exogenous price. However, in many realistic situations, besides product quality, product pricing is a paramount factor that determines purchase behavior in the market. In experts’ opinion, an integrated framework that incorporates pricing as a decision tool could significantly improve a firm’s profitability. Most manufacturing firms produce products with distinguishable characteristics, and it is therefore desirable to sell these products on the market at differentiated prices. Because the market segmentation achieved using differentiated prices is often imperfect, a firm may experience demand leakage. Thus, an optimal price decision must incorporate demand leakage effects for the firm to benefit from its differentiated pricing strategy. In this paper, these issues are addressed by proposing an optimal framework for the joint determination of process mean, pricing, production quantity and market segmentation using differentiated pricing. This research discusses a production process that manufactures multi-class (grade) products based on their quality attribute. The products are sold in primary and secondary markets at differentiated prices while experiencing demand leakage. The nonconforming items are reworked at an additional cost. Mathematical models are developed to address the problem under both price-dependent deterministic and stochastic demand situations. We propose a harmony search meta-heuristic for solving the models. A numerical experiment is presented to study the significance of the proposed integrated decision framework.
Joint optimal determination of process mean, production quantity, pricing, and market segmentation with demand leakage
S0377221715007882
Multi-response surface (MRS) optimization in quality design often involves problems such as correlation among multiple responses, robustness measurement of the multivariate process, conflicts among multiple goals, the prediction performance of the process model and the reliability assessment of optimization results. In this paper, a new Bayesian approach is proposed to address the aforementioned multi-response optimization problems. The proposed approach not only measures the reliability of an acceptable optimization result, but also incorporates expected loss (i.e., bias and robustness) into a uniform framework of Bayesian modeling and optimization. The advantages of this approach are illustrated by an example. The results show that the proposed approach can give more reasonable solutions than the existing approaches when both quality loss and the reliability of optimization results are important issues.
A new Bayesian approach to multi-response surface optimization integrating loss function with posterior probability
S0377221715007894
Rising feed-in from renewable energy sources decreases margins, load factors, and thereby profitability of conventional generation in several electricity markets around the world. At the same time, conventional generation is still needed to ensure security of electricity supply. Therefore, capacity markets are currently being widely discussed as a measure to ensure generation adequacy in markets such as France, Germany, and the United States (e.g., Texas), or have even been implemented, for example in Great Britain. We develop a dynamic capacity investment model to assess the effect of different capacity market design options in three scenarios: (1) no capacity market, (2) a capacity market for new capacity only, and (3) a capacity market for new and existing capacity. We compare the results along the three key dimensions of electricity policy—affordability, reliability, and sustainability. In a Great Britain case study we find that a capacity market increases generation adequacy. Furthermore, our results show that a capacity market can lower the total bill of generation because it can reduce lost load and the potential to exercise market power. Additionally, we find that a capacity market for new capacity only is cheaper than a capacity market for new and existing capacity because it remunerates fewer generators in the first years after its introduction.
Capacity market design options: A dynamic capacity investment model and a GB case study
S0377221715007900
We consider an approach for scheduling the multi-period collection of recyclable materials. Citizens can deposit glass and paper for recycling in small cubes located at several collection points. The cubes are emptied by a vehicle that carries two containers and the material is transported to two treatment facilities. We investigate how the scheduling of emptying and transportation should be done in order to minimize the operation cost, while providing a high service level and ensuring that capacity constraints are not violated. We develop a heuristic solution method for solving the daily planning problem with uncertain accretion rate for materials by considering a rolling time horizon of a few days. We apply a construction heuristic in the first period and re-optimize the solution every subsequent period with a variable neighborhood search. Computational experiments are conducted on real life data.
A variable neighborhood search for the multi-period collection of recyclable materials
S0377221715007912
To improve service delivery, healthcare facilities look toward operations research techniques, discrete event simulation and continuous improvement approaches such as Lean manufacturing. Lean management often includes a Kaizen event to facilitate the acceptance of the project by the employees. Business games are also used as a tool to increase understanding of Lean management concepts. In this paper, we study how a business game can be used jointly with discrete event simulation to test scenarios defined by team members during a Kaizen event. The aim is to allow a rapid and successful implementation of the solutions developed during the Kaizen. Our approach has been used to improve patients’ trajectory in an outpatient hematology–oncology clinic. Patient delays before receiving their treatment were reduced by 74 percent after 19 weeks.
Use of a discrete-event simulation in a Kaizen event: A case study in healthcare
S0377221715007924
It is often stated that involving the client in operational research studies increases conceptual learning about a system which can then be applied repeatedly to other, similar, systems. Our study provides a novel measurement approach for behavioural OR studies that aim to analyse the impact of modelling in long term problem solving and decision making. In particular, our approach is the first to operationalise the measurement of transfer of learning from modelling using the concepts of close and far transfer, and overconfidence. We investigate learning in discrete-event simulation (DES) projects through an experimental study. Participants were trained to manage queuing problems by varying the degree to which they were involved in building and using a DES model of a hospital emergency department. They were then asked to transfer learning to a set of analogous problems. Findings demonstrate that transfer of learning from a simulation study is difficult, but possible. However, this learning is only accessible when sufficient time is provided for clients to process the structural behaviour of the model. Overconfidence is also an issue when the clients who were involved in model building attempt to transfer their learning without the aid of a new model. Behavioural OR studies that aim to understand learning from modelling can ultimately improve our modelling interactions with clients; helping to ensure the benefits for a longer term; and enabling modelling efforts to become more sustainable.
Can involving clients in simulation studies help them solve their future problems? A transfer of learning experiment
S0377221715007936
We propose a new Stochastic Dominance (SD) criterion based on standard risk aversion, which assumes decreasing absolute risk aversion and decreasing absolute prudence. To implement the proposed criterion, we develop linear systems of optimality conditions for a given prospect relative to a discrete or polyhedral choice opportunity set in a general state-space model. An empirical application to historical stock market data shows that small-loser stocks are more appealing to standard risk averters than the existing mean-variance (MV) and higher-order SD criteria suggest, due to their upside potential. Depending on the assumed trading strategy and evaluation horizon, accounting for standardness increases the estimated abnormal returns of these stocks by about 50 to 200 basis points per annum relative to MV and higher-order SD criteria. An analysis of the MV tangency portfolio shows that the opportunity cost of the MV approximation to direct utility maximization can be substantial.
Standard Stochastic Dominance
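For context, the sketch below implements only the textbook second-order stochastic dominance (SSD) check between two empirical return samples, using the lower-partial-moment characterization; the paper's criterion restricts the utility class further (to standard risk averters), so it can rank prospects that plain SSD leaves unranked.

```python
# Classical SSD check (not the paper's "standard SD" criterion):
# X dominates Y iff E[(t - X)+] <= E[(t - Y)+] at every threshold t.
import numpy as np

def ssd_dominates(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    thresholds = np.union1d(x, y)            # kinks of the piecewise-linear LPMs
    lpm_x = np.mean(np.maximum(thresholds[:, None] - x[None, :], 0.0), axis=1)
    lpm_y = np.mean(np.maximum(thresholds[:, None] - y[None, :], 0.0), axis=1)
    return bool(np.all(lpm_x <= lpm_y + 1e-12))

a = [0.05, 0.02, -0.01, 0.08]     # returns of portfolio A
b = [0.05, 0.02, -0.04, 0.08]     # B: identical except a deeper loss
print(ssd_dominates(a, b), ssd_dominates(b, a))   # True False
```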
S0377221715007948
Operational research assumes that organizational decision-making processes can be improved by making them more rigorous and analytical through the application of quantitative and qualitative modeling. However, we have only a limited understanding of how modeling actually affects organizational decision-making behavior, positively or negatively. Drawing from the Carnegie School's tradition of organizational research, this paper identifies two types of organizational decision-making activities where modeling can be applied: routine decision making and problem solving. These two types of decision-making activities have very different implications for model-based decision support, both in terms of the positive and negative behavioral impacts associated with modeling as well as the criteria used to evaluate models and modeling practices. Overall, the paper offers novel insights that help understand why modeling activities are successful (or not), explains why practitioners adopt some approaches more readily than others and points to new opportunities for empirical research and method development.
Model-based organizational decision making: A behavioral lens
S0377221715007961
The ground routing problem consists in scheduling the movements of aircraft on the ground between runways and parking positions while respecting operational and safety requirements in the most efficient way. We present a Mixed Integer Programming (MIP) formulation for routing aircraft along a predetermined path. This formulation is generalized to allow several possible paths. Our model takes into account the classical performance indicators of the literature (the average taxi and completion times) but also the main punctuality indicators of the air traffic industry (the average delay and the on-time performance). Then we investigate their relationship through experiments based on real data from Copenhagen Airport (CPH). We show that the industry punctuality indicators are in contradiction with the objective of reducing taxi times and therefore pollution emissions. We propose new indicators that are more sustainable, but also more relevant for stakeholders. We also show that alternative paths cannot improve the performance indicators.
The aircraft ground routing problem: Analysis of industry punctuality indicators in a sustainable perspective
S0377221715007985
This paper shows the potential of the Tweedie distribution in the analysis of international trade data. The availability of a flexible model for describing traded quantities is important for several reasons. First, it can provide direct support to policy makers. Second, it allows the assessment of the statistical performance of anti-fraud tools on a large number of data sets artificially generated with known statistical properties, which must comply with real world scenarios. We see the advantages of adopting the Tweedie model in several data sets which are particularly relevant in the anti-fraud context and which show non-trivial features. We also provide a systematic outline of the genesis of the Tweedie distribution and we address a number of relevant statistical and computational issues, such as the development of efficient algorithms both for parameter estimation and for random variate generation.
Modeling international trade data with the Tweedie distribution for anti-fraud and policy support
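The appeal of the Tweedie family for trade data, as described above, is that it accommodates an exact mass at zero together with a skewed positive part. The sketch below simulates a compound Poisson-gamma response with that shape and fits a Tweedie GLM with scikit-learn (power between 1 and 2); it is purely illustrative and unrelated to the paper's data sets or algorithms.

```python
# Simulate a compound Poisson-gamma response (exact zeros plus a skewed
# positive part, the 1 < power < 2 Tweedie shape) and fit a Tweedie GLM.
# Purely illustrative; unrelated to the paper's data or methods.
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(7)
n = 5000
x = rng.uniform(size=(n, 1))
mu = np.exp(0.5 + 1.5 * x[:, 0])                     # target mean of the response
counts = rng.poisson(mu / 2.0)                       # number of "shipments"
y = np.array([rng.gamma(2.0, 1.0, size=c).sum() if c else 0.0
              for c in counts])                      # each shipment has mean 2

model = TweedieRegressor(power=1.5, alpha=0.0, link="log").fit(x, y)
print("share of exact zeros:", round(float(np.mean(y == 0)), 3),
      "fitted slope:", round(float(model.coef_[0]), 2))
```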
S0377221715007997
In this study, we discuss linear orders of intuitionistic fuzzy values (IFVs). Then we introduce an intuitionistic fuzzy weighted arithmetic average operator. Some fundamental properties of this operator are investigated. Based on the introduced operator, we propose a new model for intuitionistic fuzzy multi-attributes decision making. The proposed model deals with the degree of membership and degree of nonmembership separately. It is resistant to extreme data.
A new model for intuitionistic fuzzy multi-attributes decision making
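The abstract does not give the operator's exact form; as a point of reference, the sketch below implements the widely used intuitionistic fuzzy weighted-average (IFWA) operator in the style of Xu, which may well differ from the operator introduced in the paper.

```python
# Reference implementation of the common IFWA aggregation operator
# (may differ from the operator proposed in the paper).
import numpy as np

def ifwa(ifvs, weights):
    """ifvs: list of (membership mu, non-membership nu); weights sum to 1."""
    mu = np.array([m for m, _ in ifvs])
    nu = np.array([n for _, n in ifvs])
    w = np.asarray(weights)
    agg_mu = float(1.0 - np.prod((1.0 - mu) ** w))
    agg_nu = float(np.prod(nu ** w))
    return agg_mu, agg_nu

# three attribute evaluations of one alternative, with attribute weights
print(ifwa([(0.6, 0.3), (0.5, 0.4), (0.8, 0.1)], [0.5, 0.3, 0.2]))
```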
S0377221715008000
This paper presents a new solution approach for the resource-constrained project scheduling problem (RCPSP) in the presence of three types of logical constraints. Apart from the traditional AND constraints with minimal time-lags, these precedences are extended to OR constraints and bidirectional (BI) relations. These logical constraints extend the set of relations between pairs of activities and make the RCPSP definition somewhat different from the traditional RCPSP research topics in the literature. It is known that the RCPSP with AND constraints, and hence its extension to OR and BI constraints, is NP-hard. The new algorithm consists of a set of network transformation rules that removes the OR and BI logical constraints by transforming them into AND constraints and thereby extends the set of activities to maintain the original logic. A satisfiability (SAT) solver is used to guarantee the original precedence logic and is embedded in a metaheuristic search for resource-feasible schedules that respect both the limited renewable resource availability and the precedence logic. Computational results on two well-known datasets from the literature show that the algorithm can compete with the multi-mode algorithms from the literature when no logical constraints are taken into account. When the logical constraints are taken into account, the algorithm reports major reductions in the project makespan for most of the instances within a reasonable time.
An approach using SAT solvers for the RCPSP with logical constraints
S0377221715008012
The reliability of the power plants and transmission lines in the electricity industry is crucial for meeting demand. Consequently, timely maintenance plays a major role in reducing breakdowns and avoiding expensive production shutdowns. By now, the literature contains a sound body of work focused on improving decision making in generating unit and transmission line maintenance scheduling. The purpose of this paper is to review that literature. We update previous surveys and provide a more global view of the problem: we study both regulated and deregulated power systems and explore some important features such as network considerations, fuel management, and data uncertainty.
Maintenance scheduling in the electricity industry: A literature review
S0377221715008024
The fresh fruit supply chain is characterized by long supply lead times combined with significant supply and demand uncertainties, and relatively thin margins. These challenges generate a need for management efficiency and the use of modern decision technology tools. We review some of the literature on operational research models applied to the fresh fruit supply chain, in an attempt to gain a better understanding of the OR methods used in the reviewed papers, which appear to be largely independent of one another and oriented towards problem solving rather than theory development. We conclude by outlining what we see as some of the significant new problems facing the industry, such as the lack of holistic approaches for the design and management of fresh fruit supply chains. Finally, some future research directions are indicated.
Operational research models applied to the fresh fruit supply chain
S0377221715008036
This article describes a methodology developed to find robust solutions to a novel timetabling problem encountered during a course. The problem requires grouping student teams according to diversity/homogeneity criteria and assigning the groups to time-slots for presenting their project results. In this article, we develop a mixed integer programming (MIP) formulation of the problem and then solve it with CPLEX. Rather than simply using the optimal solution reported, we obtain a set of solutions provided by the solution pool feature of the solution engine. We then map these solutions to a network, in which each solution is a node and an edge represents the distance between a pair of solutions (as measured by the number of teams assigned to a different time slot in those solutions). Using a scenario-based exact robustness measure, we test a set of metrics to determine which ones can be used to heuristically rank the solutions in terms of their robustness measure. Using seven semesters’ worth of actual data, we analyze performances of the solution approach and the metrics. The results show that by using the solution pool feature, analysts can quickly obtain a set of Pareto-optimal solutions (with objective function value and the robustness measure as the two criteria). Furthermore, two of the heuristic metrics have strong rank correlation with the robustness measure (mostly above 0.80) making them quite suitable for use in the development of new heuristic search algorithms that can improve the solution pool.
Finding robust timetables for project presentations of student teams
S0377221715008048
Preference rankings appear in virtually all fields of science (political science, behavioral sciences, machine learning, decision making and so on). The well-known social choice problem consists in finding a reasonable procedure for using the aggregate preferences or rankings expressed by subjects to reach a collective decision. This turns out to be equivalent to estimating the consensus (central) ranking from data, which is known to be an NP-hard problem. A useful solution was proposed by Emond and Mason in 2002 through the Branch-and-Bound algorithm (BB) within the Kemeny and Snell axiomatic framework. However, BB is a time-demanding procedure when the problem becomes intractable, i.e. a large number of objects, weak and partial rankings, and a low degree of consensus. As an alternative, we propose an accurate heuristic algorithm called FAST that finds at least one of the consensus ranking solutions found by BB while saving a great deal of computational time. In addition, we show that the building block of FAST is an algorithm called QUICK that already finds one of the BB solutions, so it can be fruitfully used to speed up the overall search procedure even further when the number of objects is low. Simulation studies and applications on real data show the accuracy and computational efficiency of our proposal.
Accurate algorithms for identifying the median ranking when dealing with weak and partial rankings under the Kemeny axiomatic approach
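For orientation, the sketch below is a brute-force Kemeny consensus for a handful of items with complete rankings, minimizing the total Kendall distance by enumerating permutations; the paper's BB, FAST and QUICK algorithms target exactly the settings where this is hopeless (many objects, weak and partial rankings, low consensus).

```python
# Brute-force Kemeny consensus for tiny, complete-ranking instances only.
from itertools import permutations, combinations

def kendall_distance(r1, r2):
    """r1, r2: dicts item -> rank position (smaller = preferred)."""
    items = list(r1)
    return sum((r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
               for a, b in combinations(items, 2))

def kemeny_consensus(rankings):
    items = list(rankings[0])
    best, best_order = float("inf"), None
    for order in permutations(items):
        cand = {item: pos for pos, item in enumerate(order)}
        cost = sum(kendall_distance(cand, r) for r in rankings)
        if cost < best:
            best, best_order = cost, order
    return best_order, best

voters = [{"a": 0, "b": 1, "c": 2, "d": 3},
          {"b": 0, "a": 1, "c": 2, "d": 3},
          {"a": 0, "c": 1, "b": 2, "d": 3}]
print(kemeny_consensus(voters))
```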
S0377221715008061
Increasing attention is given to on-time delivery of goods in the distribution and logistics industry. Due to uncertainties in customer demands, on-time deliveries frequently cannot be ensured. The vehicle capacity may be exceeded along the planned delivery route, and then the vehicle has to return to the depot for reloading of the goods. In this paper, such on-time delivery issues are formulated as a vehicle routing problem with stochastic demands and time windows. Three probabilistic models are proposed to address on-time delivery from different perspectives. The first one aims to search for delivery routes with minimum expected total cost. The second one is to maximize the sum of the on-time delivery probabilities to customers. The third one seeks to minimize the expected total cost, while ensuring a given on-time delivery probability to each customer. Having noted that solutions of the proposed models are affected by the recourse policy deployed in cases of route failures, a preventive restocking policy is examined and compared with a detour-to-depot recourse policy. A numerical example indicates that the preventive restocking policy can help obtain better solutions to the proposed models and its effectiveness depends on the solution structure. It is also shown that the third model can be used to determine the minimum number of vehicles required to satisfy customers’ on-time delivery requirements.
On-time delivery probabilistic models for the vehicle routing problem with stochastic demands and time windows
S0377221715008073
The open-pit mine production scheduling problem (MPSP) deals with the optimization of the net present value of a mining asset and has received significant attention in recent years. Several solution methods have been proposed for its deterministic version. However, little is reported in the literature about its stochastic version, where metal uncertainty is accounted for. Moreover, most methods focus on the mining sequence and do not consider the flow of the material once mined. In this paper, a new MPSP formulation accounting for metal uncertainty and considering multiple destinations for the mined material, including stockpiles, is introduced. In addition, four different heuristics for the problem are compared; namely, a tabu search heuristic incorporating a diversification strategy (TS), a variable neighborhood descent heuristic (VND), a very large neighborhood search heuristic based on network flow techniques (NF), and a diversified local search (DLS) that combines VND and NF. The first two heuristics are extensions of existing methods recently proposed in the literature, while the last two are novel approaches. Numerical tests indicate that the proposed solution methods are effective, able to solve in a few minutes up to a few hours instances that standard commercial solvers fail to solve. They also indicate that NF and DLS are in general more efficient and more robust than TS and VND.
Network-flow based algorithms for scheduling production in multi-processor open-pit mines accounting for metal uncertainty
S0377221715008085
Voting problems are central in the area of social choice. In this article, we investigate various voting systems and types of control of elections. We present integer linear programming (ILP) formulations for a wide range of NP-hard control problems. Our ILP formulations are flexible in the sense that they can work with an arbitrary number of candidates and voters. Using the off-the-shelf solver Cplex, we show that our approaches can manipulate elections with a large number of voters and candidates efficiently.
Solving hard control problems in voting systems via integer programming
S0377221715008097
Bayesian inference and probabilistic rough sets (PRSs) provide two methods for data analysis. Both of them use probabilities to express uncertainties and knowledge in data and to make inference about data. Many proposals have been made to combine Bayesian inference and rough sets. The main objective of this paper is to present a unified framework that enables us (a) to review and classify Bayesian approaches to rough sets, (b) to give proper perspectives of existing studies, and (c) to examine basic ingredients and fundamental issues of Bayesian approaches to rough sets. By reviewing existing studies, we identify two classes of Bayesian approaches to PRSs and three fundamental issues. One class is interpreted as Bayesian classification rough sets, which is built from decision-theoretic rough set (DTRS) models proposed by Yao, Wong and Lingras. The other class is interpreted as Bayesian confirmation rough sets, which is built from parameterized rough set models proposed by Greco, Matarazzo and Słowiński. Although the two classes share many similarities in terms of making use of Bayes’ theorem and a pair of thresholds to produce three regions, their semantic interpretations and, hence, intended applications are different. The three fundamental issues are the computation and interpretation of thresholds, the estimation of required conditional probabilities, and the application of derived three regions. DTRS models provide an interpretation and a method for computing a pair of thresholds according to Bayesian decision theory. Naive Bayesian rough set models give a practical technique for estimating probability based on Bayes’ theorem and inference. Finally, a theory of three-way decisions offers a tool for building ternary classifiers. The main contribution of the paper lies in weaving together existing results into a coherent study of Bayesian approaches to rough sets, rather than introducing new specific results.
Two Bayesian approaches to rough sets
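One ingredient mentioned above, the computation of the (alpha, beta) thresholds in decision-theoretic rough sets from a loss matrix and their use to produce three regions, can be sketched as follows; the loss values are invented and the formulas are the standard DTRS ones, not anything specific to this paper.

```python
# DTRS thresholds from a 3x2 loss matrix, then a three-way split of objects
# into positive / boundary / negative regions by Pr(C | x). Losses invented.
lam = {("P", "C"): 0, ("B", "C"): 2, ("N", "C"): 10,    # object is in C
       ("P", "~C"): 8, ("B", "~C"): 3, ("N", "~C"): 0}  # object is not in C

alpha = (lam[("P", "~C")] - lam[("B", "~C")]) / (
        (lam[("P", "~C")] - lam[("B", "~C")]) + (lam[("B", "C")] - lam[("P", "C")]))
beta = (lam[("B", "~C")] - lam[("N", "~C")]) / (
       (lam[("B", "~C")] - lam[("N", "~C")]) + (lam[("N", "C")] - lam[("B", "C")]))

def region(prob):
    if prob >= alpha:
        return "POS"     # accept: classify into C
    if prob <= beta:
        return "NEG"     # reject
    return "BND"         # defer (three-way decision)

print(round(alpha, 3), round(beta, 3))
print({p: region(p) for p in (0.9, 0.5, 0.1)})
```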
S0377221715008103
In this system dynamics simulation study we analyze a series of feedback causal relationships wherein R&D investments create new knowledge stocks, increasing technological knowledge “triggers” and interactions among entities of technological innovation, leading to firm profits through the commercialization process. Major aspects of this study are: First, we provide a holistic modeling approach to technological innovation in order to explicate the hidden causal relationships underlying innovation and the mechanisms which lead to innovative performance. Second, hypotheses pertaining to process and product innovation are tested utilizing the system dynamics model to open the black box of technological innovation incorporating long-term and dynamic perspectives. This study addresses strategic innovation policies vis-a-vis product complexity and suggests that a new paradigm wherein product and process innovations are pursued concurrently instead of sequentially can ensure a firm's sustainable growth.
Opening the technological innovation black box: The case of the electronics industry in Korea
S0377221715008115
In this study, we developed a DEA–based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance to construct appropriate production frontiers. The production frontiers we obtained are deemed appropriate as they consist solely of firms with desirable levels for all dimensions of performance. These levels should be at least equal to the critical values set by decision makers. The properties and advantages of our methodology against competing methodologies are presented through an application to a real-world case study from retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches to define appropriate production frontiers when directly or inversely related dimensions of performance are present and to express the interrelationships between the dimensions of performance.
Performance measurement with multiple interrelated variables and threshold target levels: Evidence from retail firms in the US
S0377221715008127
Two important problems arising in traditional asset allocation methods are the sensitivity to estimation error of portfolio weights and the high dimensionality of the set of candidate assets. In this paper, we address both issues by proposing a new criterion for portfolio selection. The new criterion is a two-stage description of the available information, where the q-entropy, a generalized measure of information, is used to code the uncertainty of the data given the parametric model and the uncertainty related to the model choice. The information about the model is coded in terms of a prior distribution that promotes asset weights sparsity. Our approach carries out model selection and estimation in a single step, by selecting a few assets and estimating their portfolio weights simultaneously. The resulting portfolios are doubly robust, in the sense that they can tolerate deviations from both assumed data model and prior distribution for model parameters. Empirical results on simulated and real-world data support the validity of our approach.
Sparse and robust normal and t-portfolios by penalized Lq-likelihood minimization
S0377221715008139
The Teacher Assignment Problem is part of the University Timetabling Problem and involves assigning teachers to courses, taking their preferences into consideration. This is a complex problem, usually solved by means of heuristic algorithms. In this paper a Mixed Integer Linear Programming model is developed to balance teachers’ teaching load (first optimization criterion), while maximizing teachers’ preferences for courses according to their category (second optimization criterion). The model is used to solve the teacher–course assignment in the Department of Management at the School of Industrial Engineering of Barcelona, in the Universitat Politècnica de Catalunya. Results are discussed regarding the importance given to the optimization criteria. Moreover, to test the model’s performance a computational experiment is carried out using randomly generated instances based on real patterns. Results show that the model is suitable for many situations (number of teachers and courses and weight of the criteria), making it useful for departments with similar requests.
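To make the kind of assignment formulation described above more concrete, here is a minimal PuLP sketch that balances teaching load while rewarding preferences on a toy instance; the data, the single weighted objective, and all names are illustrative assumptions and do not reproduce the paper's MILP.

    import pulp

    teachers = ["T1", "T2"]
    courses = ["C1", "C2", "C3"]
    hours = {"C1": 6, "C2": 4, "C3": 5}                      # teaching load per course (toy data)
    pref = {("T1", "C1"): 3, ("T1", "C2"): 1, ("T1", "C3"): 2,
            ("T2", "C1"): 1, ("T2", "C2"): 3, ("T2", "C3"): 2}   # higher = more preferred

    prob = pulp.LpProblem("toy_teacher_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(t, c) for t in teachers for c in courses], cat="Binary")
    imbalance = pulp.LpVariable("imbalance", lowBound=0)     # bound on load differences

    # every course gets exactly one teacher
    for c in courses:
        prob += pulp.lpSum(x[(t, c)] for t in teachers) == 1

    # imbalance bounds the pairwise differences in assigned hours
    for t1 in teachers:
        for t2 in teachers:
            prob += (pulp.lpSum(hours[c] * x[(t1, c)] for c in courses)
                     - pulp.lpSum(hours[c] * x[(t2, c)] for c in courses)) <= imbalance

    # weighted single objective: balance load first, then reward preferences (weights are arbitrary)
    prob += 100 * imbalance - pulp.lpSum(pref[(t, c)] * x[(t, c)] for t in teachers for c in courses)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([(t, c) for t in teachers for c in courses if x[(t, c)].value() > 0.5])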
A MILP model for the teacher assignment problem considering teachers’ preferences
S0377221715008140
This paper proposes the Flexible and Interactive Tradeoff (FITradeoff) method for eliciting scaling constants or weights of criteria. FITradeoff uses partial information about decision maker (DM) preferences to determine the most preferred alternative in a specified set, according to an additive model within the MAVT (Multi-Attribute Value Theory) scope. This method uses the concept of flexible elicitation to improve the applicability of the traditional tradeoff elicitation procedure. FITradeoff offers two main benefits: the information required from the DM is reduced, and the DM does not have to make adjustments to reach indifference between two consequences (trade-off), which is a critical issue in the traditional tradeoff procedure. It is easier for the DM to make comparisons of consequences (or outcomes) based on strict preference rather than on indifference. The method is built into a decision support system and applied to two cases on supplier selection already published in the literature.
A new method for elicitation of criteria weights in additive models: Flexible and interactive tradeoff
S0377221715008152
In this paper, we address a two-echelon humanitarian logistics network design problem involving multiple central warehouses (CWs) and local distribution centers (LDCs) and develop a novel two-stage scenario-based possibilistic-stochastic programming (SBPSP) approach. The research is motivated by the urgent need for designing a relief network in Tehran in preparation for potential earthquakes to cope with the main logistical problems in pre- and post-disaster phases. During the first stage, the locations for CWs and LDCs are determined along with the prepositioned inventory levels for the relief supplies. In this stage, inherent uncertainties in both supply and demand data as well as the availability level of the transportation network's routes after an earthquake are taken into account. In the second stage, a relief distribution plan is developed based on various disaster scenarios aiming to minimize: total distribution time, the maximum weighted distribution time for the critical items, total cost of unused inventories and weighted shortage cost of unmet demands. A tailored differential evolution (DE) algorithm is developed to find good enough feasible solutions within a reasonable CPU time. Computational results using real data reveal promising performance of the proposed SBPSP model in comparison with the existing relief network in Tehran. The paper contributes to the literature on optimization based design of relief networks under mixed possibilistic-stochastic uncertainty and supports informed decision making by local authorities in increasing resilience of urban areas to natural disasters.
Humanitarian logistics network design under mixed uncertainty
S0377221715008164
This paper provides a novel and general framework for the problem of searching parameter space in Monte Carlo simulations. We propose a deterministic online algorithm and a randomized online algorithm to search for suitable parameter values for derivative pricing which are needed to achieve desired precisions. We also give the competitive ratios of the two algorithms and prove the optimality of the algorithms. Experimental results on the performance of the algorithms are presented and analyzed as well.
Optimal search for parameters in Monte Carlo simulation for derivative pricing
S0377221715008176
The objective of this paper is to empirically investigate whether there is a contagion phenomenon between stock markets during the July–August 2011 stock market crash. Where there is market contagion, we identify the propagation channel through which the crash is transmitted. Hence, after checking whether there is financial contagion between the stock markets, we examine whether the “constraints of wealth” transmission mechanism outweighs that of “portfolio rebalancing”. An additional test covering the interdependence between the stock and bond markets during the crash helps us verify whether the transmission is due to the “cross-market rebalancing” channel or to the “flight to quality” phenomenon. Combining copula theory and directed acyclic graphs to study the structure of causal dependence between the stock markets over the period from 01/02/2010 to 28/11/2012, we show that the links between countries differ between the pre-crash period and the crash period. More specifically, links that do not exist during normal times seem to play a major role during the crash period. We interpret this result as evidence of the existence of pure contagion. On the one hand, the tests show that the “portfolio rebalancing” channel was the major mechanism for the spread of the crash. On the other hand, the cross-market rebalancing phenomenon existed only in Germany, whereas flight to quality was observed in all the other studied stock markets.
The contagion channels of July–August-2011 stock market crash: A DAG-copula based approach
S0377221715008188
Convex vector (or multi-objective) semi-infinite optimization deals with the simultaneous minimization of finitely many convex scalar functions subject to infinitely many convex constraints. This paper provides characterizations of the weakly efficient, efficient and properly efficient points in terms of cones involving the data and Karush–Kuhn–Tucker conditions. The latter characterizations rely on different local and global constraint qualifications. The results in this paper generalize those obtained by the same authors on linear vector semi-infinite optimization problems.
Constraint qualifications in convex vector semi-infinite optimization
S0377221715008206
Sustainability considerations in manufacturing scheduling, which is traditionally influenced by service-oriented performance metrics, have rarely been adopted in the literature. This paper aims to address this gap by incorporating energy consumption as an explicit criterion in shop floor scheduling. Leveraging the variable speed of machining operations, which leads to different energy consumption levels, we explore the potential for energy saving in manufacturing. We analyze the trade-off between minimizing makespan, a measure of service level, and total energy consumption, an indicator of environmental sustainability, in a two-machine sequence-dependent permutation flowshop. We develop a mixed integer linear multi-objective optimization model to find the Pareto frontier comprised of makespan and total energy consumption. To cope with combinatorial complexity, we also develop a constructive heuristic for fast trade-off analysis between makespan and energy consumption. We define lower bounds for the two objectives under some non-restrictive conditions and compare the performance of the constructive heuristic with CPLEX through design of experiments. The lower bounds that we develop are valid under realistic assumptions since they are conditional on speed factors. The Pareto frontier includes solutions ranging from expedited, energy-intensive schedules to prolonged, energy-efficient schedules. It can serve as a visual aid for production and sales planners to consider energy consumption explicitly when making quick decisions while negotiating due dates with customers. We provide managerial insights by analyzing the areas along the Pareto frontier where energy saving can be justified at the expense of reduced service level and vice versa.
Green scheduling of a two-machine flowshop: Trade-off between makespan and energy consumption
S0377221715008218
The number of refueling stations for AFVs (alternative fuel vehicles) is limited during the early stages of the diffusion of AFVs. Different layouts of these initial stations will result in different degrees of driver concern regarding refueling and will therefore influence individuals’ decisions to adopt AFVs. The questions become: what is an optimal layout for these initial stations? Should it target all drivers or just a portion of them, and if so, which portion? Further, how does the number of initial AFV refueling stations influence the adoption of AFVs? This paper explores these questions with agent-based simulations. Using Shanghai as the basis of computational experiments, this paper first generates different optimal layouts using a genetic algorithm to minimize the total concern of different targeted drivers and then conducts agent-based simulations on the diffusion of AFVs with these layouts. The main findings of this study are that (1) targeting drivers in the city center can induce the fastest diffusion of AFVs if AFV technologies are mature and (2) it is possible that a larger number of initial AFV refueling stations may result in slower diffusion of AFVs because these initial stations may not have sufficient customers to survive. The simulations can provide some insights for cities that are trying to promote the diffusion of AFVs.
Optimizing layouts of initial AFV refueling stations targeting different drivers, and experiments with agent-based simulations
S0377221715008231
We address a critical question that many firms are facing today: Can customer data be stored and analyzed in an easy-to-manage and scalable manner without significantly compromising the inferences that can be made about the customers’ transaction activity? We address this question in the context of customer-base analysis. A number of researchers have developed customer-base analysis models that perform very well given detailed individual-level data. We explore the possibility of estimating these models using aggregated data summaries alone, namely repeated cross-sectional summaries (RCSS) of the transaction data. Such summaries are easy to create, visualize, and distribute, irrespective of the size of the customer base. An added advantage of the RCSS data structure is that individual customers cannot be identified, which makes it desirable from a data privacy and security viewpoint as well. We focus on the widely used Pareto/NBD model and carry out a comprehensive simulation study covering a vast spectrum of market scenarios. We find that the RCSS format of four quarterly histograms serves as a suitable substitute for individual-level data. We confirm the results of the simulations on a real dataset of purchasing from an online fashion retailer.
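As a rough illustration of how repeated cross-sectional summaries of the kind described above could be assembled from raw transactions, the pandas sketch below builds, for each quarter, a histogram of per-customer transaction counts; the column names, the toy data, and the omission of the zero-transaction bin are assumptions made for brevity, not the paper's exact RCSS construction.

    import pandas as pd

    # hypothetical transaction log: one row per purchase
    tx = pd.DataFrame({
        "customer_id": [1, 1, 2, 3, 3, 3, 2, 1],
        "date": pd.to_datetime(["2015-01-05", "2015-02-17", "2015-03-02", "2015-04-11",
                                "2015-05-30", "2015-07-21", "2015-08-09", "2015-09-14"]),
    })
    tx["quarter"] = tx["date"].dt.to_period("Q")

    # per quarter: number of transactions per customer, then a histogram of those counts
    per_customer = tx.groupby(["quarter", "customer_id"]).size().rename("n_tx").reset_index()
    rcss = per_customer.groupby("quarter")["n_tx"].value_counts().unstack(fill_value=0)
    print(rcss)   # rows = quarters, columns = transaction counts, cells = number of customers
    # a full RCSS would also record the customers with zero transactions in each quarter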
Customer-base analysis using repeated cross-sectional summary (RCSS) data
S0377221715008243
Critical Chain Scheduling and Buffer Management (CC/BM) has been shown to provide an effective approach for building robust project schedules and to offer a valuable control tool for coping with schedule variability. Yet, the current buffer monitoring mechanism neglects the dynamic features of project execution and the related activity information when taking corrective actions. The schedule risk analysis (SRA) method in a traditional PERT framework, on the other hand, provides important information about the relative activity criticality in relation to the project duration, which can highlight management focus; it implies, however, that control actions are independent of the current project schedule performance. This paper addresses these shortcomings of both tracking methods and proposes a new project schedule monitoring framework by introducing the activity cruciality index as a trigger for effective expediting, integrated into the buffer monitoring process. Furthermore, dynamic action threshold settings that depend on the project progress as well as the buffer penetration are presented and examined in order to obtain a more accurate control system. Our computational experiment demonstrates the relative dominance of the integrated schedule monitoring methods compared to the predominant buffer management approach in generating better control actions with less effort and an increased tracking efficiency, especially when the increasing buffer trigger point is combined with decreasing sensitivity action threshold values.
Incorporation of activity sensitivity measures into buffer management to manage project schedule risk
S0377221715008255
There is a growing interest towards the design of reusable general purpose search methods that are applicable to different problems instead of tailored solutions to a single particular problem. Hyper-heuristics have emerged as such high level methods that explore the space formed by a set of heuristics (move operators) or heuristic components for solving computationally hard problems. A selection hyper-heuristic mixes and controls a predefined set of low level heuristics with the goal of improving an initially generated solution by choosing and applying an appropriate heuristic to a solution in hand and deciding whether to accept or reject the new solution at each step under an iterative framework. Designing an adaptive control mechanism for the heuristic selection and combining it with a suitable acceptance method is a major challenge, because both components can influence the overall performance of a selection hyper-heuristic. In this study, we describe a novel iterated multi-stage hyper-heuristic approach which cycles through two interacting hyper-heuristics and operates based on the principle that not all low level heuristics for a problem domain would be useful at any point of the search process. The empirical results on a hyper-heuristic benchmark indicate the success of the proposed selection hyper-heuristic across six problem domains beating the state-of-the-art approach.
An iterated multi-stage selection hyper-heuristic
S0377221715008267
This paper addresses the multitrip Cumulative Capacitated Single-Vehicle Routing Problem (mt-CCSVRP). In this problem inspired by disaster logistics, a single vehicle can perform successive trips to serve a set of affected sites and minimize an emergency criterion, the sum of arrival times. Two mixed integer linear programs, a flow-based model and a set partitioning model, are proposed for small instances with 20 sites. An exact algorithm for larger cases transforms the mt-CCSVRP into a resource-constrained shortest path problem where each node corresponds to one trip and the sites to visit become resources. The resulting problem can be solved via an adaptation of Bellman–Ford algorithm to a directed acyclic graph with resource constraints and a cumulative objective function. Seven dominance rules, two upper bounds and five lower bounds speed up the procedure. Computational results on instances derived from classical benchmark problems for the capacitated VRP indicate that the exact algorithm outperforms a commercial MIP solver on small instances and can solve cases with 40 sites to optimality.
Mathematical formulations and exact algorithm for the multitrip cumulative capacitated single-vehicle routing problem
S0377221715008279
In this study, a Disassembly Line Balancing Problem with a fixed number of workstations is considered. The product to be disassembled comprises various components, which are referred to as its parts. There is a specified finite supply of the product to be disassembled and specified minimum release quantities (possibly zero) for each part of the product. All units of the product are identical; however, different parts can be released from different units of the product. There is a finite number of identical workstations that perform the necessary disassembly operations, referred to as tasks. We present several upper and lower bounding procedures that assign the tasks to the workstations so as to maximize the total net revenue. The computational study has revealed that the procedures produce satisfactory results.
A disassembly line balancing problem with fixed number of workstations
S0377221715008280
Probabilistic forecasts from discrete choice models, which are widely used in marketing science and competitive event forecasting, are often best evaluated out-of-sample using pseudo-coefficients of determination, or pseudo-R²s. However, there is a danger of misjudging the accuracy of forecast probabilities of event outcomes, based on observed frequencies, because of issues related to pseudo-R²s. First, we show that McFadden’s pseudo-R² varies predictably with the number of alternatives in the choice set. Then we evaluate the relative merits of two methods (bootstrap and asymptotic) for estimating the variance of pseudo-R²s so that their values can be appropriately compared across non-nested models. Finally, in the context of competitive event forecasting, where the accuracy of forecasts has direct economic consequence, we derive new R² measures that can be used to assess the economic value of forecasts. Throughout, we illustrate using data drawn from UK and Ireland horse race betting markets.
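For reference, McFadden's pseudo-R² compares the log-likelihood of a fitted choice model with that of a null model; the Python sketch below computes it under an equal-shares null, which also makes visible the dependence on the number of alternatives noted above. The toy data and the equal-shares null are illustrative assumptions, not the authors' proposed measures.

    import numpy as np

    def mcfadden_pseudo_r2(chosen_probs, n_alternatives):
        # chosen_probs: model-predicted probability of the alternative actually chosen, one per observation
        # null model assumed here: every alternative equally likely, i.e. probability 1 / n_alternatives
        ll_model = np.sum(np.log(chosen_probs))
        ll_null = len(chosen_probs) * np.log(1.0 / n_alternatives)
        return 1.0 - ll_model / ll_null

    # toy example: 4 races with 8 runners each, model probabilities assigned to the eventual winners
    print(mcfadden_pseudo_r2(np.array([0.40, 0.15, 0.25, 0.10]), n_alternatives=8))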
Probabilistic forecasting with discrete choice models: Evaluating predictions with pseudo-coefficients of determination
S0377221715008292
While society has begun to recognize the importance of balancing risk performance under different risk measures, the existing literature has confined its research to a static mean-risk framework. This paper represents the first attempt to incorporate multiple risk measures into dynamic portfolio selection. More specifically, we investigate the dynamic mean-variance-CVaR (Conditional Value at Risk) formulation and the dynamic mean-variance-SFP (Safety-First Principle) formulation in a continuous-time setting, and derive the analytical solutions for both problems. Combining a downside risk measure with the variance (the second order central moment) in a dynamic mean-risk portfolio selection model helps investors control both a symmetric central risk measure and an asymmetric catastrophic downside risk. We find that the optimal portfolio policy derived from our mean-multiple-risk portfolio optimization models exhibits a curved V-shape. Our numerical experiments using real market data clearly demonstrate the dominance of our dynamic mean-multiple-risk portfolio policies over the static buy-and-hold portfolio policy.
Dynamic mean-risk portfolio selection with multiple risk measures in continuous-time
S0377221715008309
The planar maximal covering location problem (PMCLP) concerns the placement of a given number of facilities anywhere on a plane to maximize coverage. Solving PMCLP requires identifying a candidate locations set (CLS) on the plane before reducing it to the relatively simple maximal covering location problem (MCLP). The techniques for identifying the CLS have been mostly dominated by the well-known circle intersect points set (CIPS) method. In this paper we first review PMCLP, and then discuss the advantages and weaknesses of the CIPS approach. We then present a mean-shift based algorithm for treating large-scale PMCLPs, called MSMC. We test the performance of MSMC against the CIPS approach on randomly generated data sets that vary in size and distribution pattern. The experimental results illustrate MSMC’s outstanding performance in tackling large-scale PMCLPs.
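The mode-seeking step underlying a mean-shift based method can be illustrated as below: each iteration moves a candidate facility location toward the kernel-weighted mean of nearby demand points. The Gaussian kernel, bandwidth, stopping rule, and toy data are assumptions and are not meant to reproduce the MSMC algorithm itself.

    import numpy as np

    def mean_shift_step(x, points, bandwidth=1.0):
        # one mean-shift update: move x toward the kernel-weighted mean of the demand points
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
        return (w[:, None] * points).sum(axis=0) / w.sum()

    def mean_shift(x0, points, bandwidth=1.0, tol=1e-6, max_iter=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_new = mean_shift_step(x, points, bandwidth)
            if np.linalg.norm(x_new - x) < tol:       # converged to a local density mode
                break
            x = x_new
        return x

    rng = np.random.default_rng(0)
    demand = rng.normal(size=(500, 2))                # toy demand points on the plane
    print(mean_shift(np.array([2.0, 2.0]), demand, bandwidth=0.8))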
A mean-shift algorithm for large-scale planar maximal covering location problems
S0377221715008310
This paper explores whether factor based credit portfolio risk models are able to predict losses in severe economic downturns such as the recent Global Financial Crisis (GFC) within standard confidence levels. The paper analyzes (i) the accuracy of default rate forecasts, and (ii) whether forecast downturn percentiles (Value-at-Risk, VaR) are sufficient to cover default rate outcomes over a quarterly and an annual forecast horizon. Uninformative maximum likelihood and informative Bayesian techniques are compared as they imply different degrees of uncertainty. We find that quarterly VaR estimates are generally sufficient but annual VaR estimates may be insufficient during economic downturns. In addition, the paper develops and analyzes models based on auto-regressive adjustments of scores, which provide a higher forecast accuracy. The consideration of parameter uncertainty and auto-regressive error terms mitigates the shortfall.
Accuracy of mortgage portfolio risk forecasts during financial crises
S0377221715008322
Hubs are consolidation and dissemination points in many-to-many flow networks. The hub location problem is to locate hubs among available nodes and allocate non-hub nodes to these hubs. The mainstream hub location studies focus on optimal decisions of one decision-maker with respect to some objective(s), even though the markets that benefit from hubbing are oligopolies. Therefore, in this paper, we propose a competitive hub location problem where the market is assumed to be a duopoly. Two decision-makers (or firms) sequentially decide the locations of their hubs and then customers choose one firm with respect to the provided service levels. Each decision-maker aims to maximize his/her own market share. We propose two problems for the leader (former decision-maker) and follower (latter decision-maker): the (r|Xp) hub-medianoid and (r|p) hub-centroid problems, respectively. Both problems are proven to be NP-complete. Linear programming models are presented for these problems as well as exact solution algorithms for the (r|p) hub-centroid problem. The performance of the models and algorithms is tested by computational analysis conducted on the CAB and TR data sets.
Hub location under competition
S0377221715008334
The problem considered in this paper is to produce routes and schedules for a fleet of delivery vehicles that minimize the fuel emissions in a road network where speeds depend on time. In the model, the route for each vehicle must be determined, and also the speeds of the vehicles along each road in their paths are treated as decision variables. The vehicle routes are limited by the capacities of the vehicles and time constraints on the total length of each route. The objective is to minimize the total emissions in terms of the amount of Greenhouse Gas (GHG) produced, measured by the equivalent weight of CO2 (CO2e). A column generation based tabu search algorithm is adapted and presented to solve the problem. The method is tested with real traffic data from a London road network. The results are analysed to show the potential saving from the speed adjustment process. The analysis shows that most of the fuel emissions reduction is able to be attained in practice by ordering the customers to be visited on the route using a distance-based criterion, determining a suitable path between customers for each vehicle and travelling as fast as is allowed by the traffic conditions up to a preferred speed.
Fuel emissions optimization in vehicle routing problems with time-varying speeds
S0377221715008346
Existing research on acceptability of pairwise interval comparison matrices focuses on acceptable consistency by controlling their inconsistency levels to within a certain threshold. However, a perfectly consistent but highly indeterminate interval comparison matrix can be unacceptable as it contains little (sometimes no) useful decision information. This paper first analyzes the current definition of acceptable consistency for interval multiplicative comparison matrices (IMCMs) and shows its technical deficiencies. We then introduce a new notion of acceptable IMCMs, considering both inconsistency and indeterminacy levels in IMCMs. A geometric-mean-based index is proposed to measure the indeterminacy ratio of an IMCM, and useful properties are derived for consistent IMCMs and acceptable IMCMs. An indeterminacy-ratio and geometric-mean-based transformation equation is subsequently put forward to convert normalized acceptable interval multiplicative weights into an acceptable IMCM with consistency. By introducing an auxiliary constraint, a logarithmic least square model is established to generate interval multiplicative weights from acceptable IMCMs. A geometric-mean-based possibility degree formula is designed to compare and rank normalized interval multiplicative weights. Two numerical examples are presented to illustrate how to utilize the proposed framework.
Acceptability analysis and priority weight elicitation for interval multiplicative comparison matrices
S0377221715008358
We present an integer linear programming formulation and solution procedure for determining the tightest bounds on cell counts in a multi-way contingency table, given knowledge of a corresponding derived two-way table of rounded conditional probabilities and the sample size. The problem has application in statistical disclosure limitation, which is concerned with releasing useful data to the public and researchers while also preserving privacy and confidentiality. Previous work on this problem invoked the simplifying assumption that the conditionals were released as fractions in lowest terms, rather than the more realistic and complicated setting of rounded decimal values that is treated here. The proposed procedure finds all possible counts for each cell and runs fast enough to handle moderately sized tables.
Obtaining cell counts for contingency tables from rounded conditional frequencies
S0377221715008371
One approach to modelling Loss Given Default (LGD), the percentage of the defaulted amount of a loan that a lender will eventually lose, is to model the collections process. This is particularly relevant for unsecured consumer loans, where LGD depends both on a defaulter’s ability and willingness to repay and on the lender’s collection strategy. When repaying such defaulted loans, defaulters tend to oscillate between repayment sequences, where the borrower is repaying every period, and non-repayment sequences, where the borrower is not repaying in any period. This paper develops two models – one a Markov chain approach and the other a hazard rate approach – to model such payment patterns of debtors. It also looks at simplifications of the models in which one assumes that, after a few repayment and non-repayment sequences, the parameters of the model are fixed for the remaining payment and non-payment sequences. One advantage of these approaches is that they show the impact of different write-off strategies. The models are applied to a real case study and the LGD for that portfolio is calculated under different write-off strategies and compared with the actual LGD results.
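A minimal sketch of the Markov-chain view of repayment behaviour described above: a two-state chain alternates between repayment and non-repayment sequences. The transition probabilities, the monthly time step, and the starting state are illustrative assumptions, not estimates from the case study.

    import numpy as np

    def simulate_payment_pattern(n_periods, p_stay_pay=0.85, p_stay_miss=0.70, seed=0):
        # Two-state Markov chain over periods: 1 = repaying, 0 = not repaying.
        # p_stay_pay  = Pr(repay next period | repaying now)
        # p_stay_miss = Pr(miss next period  | missing now)
        rng = np.random.default_rng(seed)
        state = 1                              # assume the debtor starts in a repayment sequence
        pattern = [state]
        for _ in range(n_periods - 1):
            stay = p_stay_pay if state == 1 else p_stay_miss
            if rng.random() >= stay:
                state = 1 - state              # switch between repayment and non-repayment sequences
            pattern.append(state)
        return pattern

    print(simulate_payment_pattern(24))        # e.g. [1, 1, 1, 0, 0, 1, ...]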
Modelling repayment patterns in the collections process for unsecured consumer debt: A case study
S0377221715008383
This paper evaluates the performance of a number of modelling approaches for future mortgage default status. Boosted regression trees, random forests, penalised linear and semi-parametric logistic regression models are applied to four portfolios of over 300,000 Irish owner-occupier mortgages. The main findings are that the selected approaches have varying degrees of predictive power and that boosted regression trees significantly outperform logistic regression. This suggests that boosted regression trees can be a useful addition to the current toolkit for mortgage credit risk assessment by banks and regulators.
An empirical comparison of classification algorithms for mortgage default prediction: evidence from a distressed mortgage market
S0377221715008395
This paper proposes a tabu search algorithm for the vehicle-routing problem with time windows and driver-specific times (VRPTWDST), a variant of the classical VRPTW that uses driver-specific travel and service times to model the familiarity of the different drivers with the customers to visit. We carry out a systematic investigation of the problem on a comprehensive set of newly generated benchmark instances. We find that consideration of driver knowledge in the route planning clearly improves the efficiency of vehicle routes, an effect that intensifies for higher familiarity levels of the drivers. Increased benefits are produced if the familiar customers of drivers are geographically contiguous. Moreover, a higher number of drivers that are familiar with the same (larger) region provides higher benefits compared to a scenario where each driver is only familiar with a dedicated (smaller) region. Finally, our tabu search is able to prove its performance on the Solomon test instances of the closely related VRPTW, yielding high-quality solutions in short time.
The vehicle-routing problem with time windows and driver-specific times
S0377221715008401
Consensus reaching models are widely applied in group decision making problems to improve the group's consensus level before making a common decision. Within the context of the group Analytic Hierarchy Process (AHP), a novel consensus reaching model in a dynamic decision environment is proposed. A Markov chain method can be used to determine the decision makers’ weights of importance for the aggregation process with respect to the group members’ opinion transition probabilities. The proposed group consensus reaching model facilitates a peer to peer opinion exchange process which relieves the group of the need for a moderator by using an automatic feedback mechanism. Moreover, as the elements in the group decision framework change in a dynamic decision making problem, this model provides feedback suggestions that adaptively adjust for each of the decision makers depending on his credibility in each round. The full process of the dynamic adaptive consensus reaching model is presented and its properties are discussed. Finally, a numerical example is given to demonstrate the effectiveness of our model.
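The idea of deriving decision makers' importance weights from opinion transition probabilities can be illustrated by taking the stationary distribution of a row-stochastic transition matrix, as sketched below; the matrix entries are invented and this weighting scheme is only one plausible reading, not necessarily the Markov chain method used in the paper.

    import numpy as np

    # illustrative opinion transition matrix among three decision makers (rows sum to 1):
    # entry (i, j) = probability that DM i shifts his/her opinion toward DM j in the next round
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    # the stationary distribution pi solves pi P = pi with the entries of pi summing to 1
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()
    print(pi)   # candidate importance weights for aggregating the DMs' judgements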
A peer-to-peer dynamic adaptive consensus reaching model for the group AHP decision making
S0377221715008413
As suppliers are crucial for successful supply chain management, buying companies have to deal with the risks of supply disruptions due to e.g. labor strikes, natural disasters, supplier bankruptcy, and business failures. Dual sourcing is one potential countermeasure, however, when applying it one loses the full potential of economies of scale. To provide decision support, we analyze the trade-off between risk reduction via dual sourcing under disruption risk and learning benefits on sourcing costs induced by long-term relationships with a single supplier from a buyer’s perspective. The buyer’s optimal volume allocation strategy over a finite dynamic planning horizon is identified and we find that a symmetric demand allocation is not optimal, even if suppliers are symmetric. We obtain insights on how reliability, cost and learning ability of potential suppliers impact the buyer’s sourcing decision and find that the allocation balance increases with learning rate and decreases with reliability and demand level. Further, we quantify the benefit of dual sourcing compared to single sourcing, which increases with learning rate and decreases with reliability. When comparing the optimal policy to heuristic dual sourcing policies, a simple 75:25 allocation rule turns out to be a very robust policy. Finally, we perform sensitivity analysis and find that increasing certainty about supplier reliability and increasing risk aversion of a buyer yield more balanced supply volume allocations among the available suppliers and that the advantage of dual sourcing decreases with uncertainty about supplier reliability. Further, we discuss the impact of demand uncertainty.
Dual sourcing under disruption risk and cost improvement through learning
S0377221715008425
Waste generation forecasting is a complex process that is influenced by latent parameters and their uncertainties, such as economic growth, demography, individual behaviors, activities and events, and management policies. These hidden features play an important role in forecasting the fluctuations of waste generation. We therefore focus on revealing the trend of waste generation in megacities, which face significant social and economic changes on their way to sustainable urban development. To dynamically trace the fluctuations caused by these uncertainties, we propose a probability model-driven statistical learning approach which hybridizes wavelet de-noising, a Gaussian mixture model, and a hidden Markov model. First, to obtain the actual underlying trend, wavelet de-noising is used to eliminate the noise in the data. Next, the Expectation–Maximization and Viterbi algorithms are employed for learning the parameters and discerning the most probable sequence of hidden states, respectively. Subsequently, the state transition matrix is updated by fractional predictable changes of the influencing parameters to forecast non-periodic fluctuations, and the forward algorithm is utilized to search historical data for the pattern most similar to the current one in order to forecast the future trend of periodic fluctuations. Finally, we apply the approach to two kinds of case studies covering both a small dataset and a large dataset. How uncertainty factors influence the forecasted results is analyzed in the results and discussion. The computational results demonstrate that the proposed approaches are effective in solving the municipal waste generation forecasting problem.
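For the HMM components mentioned above, the sketch below shows the standard forward recursion for the likelihood of an observation sequence; a discrete emission table stands in for the paper's Gaussian mixture emissions, and the toy matrices and state labels are assumptions.

    import numpy as np

    def forward_likelihood(obs, pi, A, B):
        # Standard HMM forward recursion.
        # obs: sequence of observation indices; pi: initial state distribution;
        # A[i, j] = Pr(next state j | state i); B[i, k] = Pr(observation k | state i)
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return alpha.sum()          # Pr(observation sequence | model)

    pi = np.array([0.5, 0.5])
    A = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
    B = np.array([[0.6, 0.3, 0.1],       # "low-waste regime" (illustrative)
                  [0.1, 0.3, 0.6]])      # "high-waste regime" (illustrative)
    print(forward_likelihood([0, 1, 2, 2], pi, A, B))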
Hidden Markov model for municipal waste generation forecasting under uncertainties
S0377221715008449
This paper presents a computational tool for the multi-level capacitated lot-sizing and scheduling problem in hen egg production planning, with the aim of minimizing the total cost. A mixed-integer programming model was developed to solve small-size problems. For large-size problems, particle swarm optimization (PSO) was first applied. However, the social learning component of traditional PSO includes only the personal and global best positions. Therefore, a variant of PSO, the particle swarm optimization with combined gbest, lbest and nbest social structures (GLNPSO), which considers multiple social learning terms, was proposed. A local search procedure was applied to decide the new sequence of chick and pullet allocation so as to converge rapidly to a better solution. Moreover, re-initialization and re-ordering strategies were used to improve the possibility of finding an optimal solution in the search space. To test the algorithm, two criteria were used to measure and evaluate its effectiveness: the performance of the heuristic algorithm (P), obtained by comparing its solutions to optimal solutions, and the relative improvement of the solution (RI) obtained by the firm's current practice with respect to those of the traditional PSO and GLNPSO algorithms. The results demonstrate that the GLNPSO is not only useful for reducing cost compared to the traditional PSO, but also for efficient management of the poultry production system. Furthermore, the method used in this research should prove beneficial to other similar agro-food industries in Thailand and around the world.
A GLNPSO for multi-level capacitated lot-sizing and scheduling problem in the poultry industry
S0377221715008450
We consider a dynamic closed-loop supply chain made up of one manufacturer and one retailer, with both players investing in a product recovery program to increase the rate of return of previously purchased products. End-of-use product returns have two impacts. First, they lead to a decrease in the production cost, as manufacturing with used parts is cheaper than using virgin materials. Second, returns boost sales through replacement items. We show that the coordinated solution can be implemented by using so-called incentive strategies, which have the property of being best-reply strategies if each player assumes that the other is also implementing her incentive strategy. A numerical example illustrates the theoretical results.
Incentive strategies for an optimal recovery program in a closed-loop supply chain
S0377221715008462
In this study, we consider the minimax regret 1-sink location problem with accessibility, where all of the weights should always evacuate to the nearest sink point along the shortest path in a dynamic general network with positive edge lengths and uniform edge capacity. Let G = (V, E) be an undirected connected simple graph with n vertices and m edges. Each vertex has a weight that is not known exactly but the interval to which it belongs is given. A particular assignment of a weight to each vertex is called a scenario. The problem requires that a point x should be found as the sink on the graph and all the weights evacuate to x such that the maximum regret for all possible scenarios is minimized. We present an O(m²n³) time algorithm to solve this problem.
Minimax regret 1-sink location problem with accessibility in dynamic general networks
S0377221715008474
This paper deals with portfolio selection problems under risk and ambiguity. The investor may be ambiguous with respect to the set of states of nature and their probabilities. Both static and discrete or continuous time dynamic pricing models are included in the analysis. Risk and ambiguity are measured in general settings. The considered risk measures contain, as particular cases, the usual deviations and the coherent and expectation bounded measures of risk. Four contributions seem to be reached. Firstly, necessary and sufficient optimality conditions are given. Secondly, the portfolio selection problem may be frequently solved by linear programming linked methods, despite the fact that risk and ambiguity cannot be given by linear expressions. Thirdly, if there is a market price of risk then there exists a benchmark that creates a robust capital market line when combined with the riskless asset. The global risk of every portfolio may be divided into systemic and specific. Moreover, if there is no ambiguity with respect to the states of nature (only their probabilities are uncertain), then classical CAPM-formulae may be found. Fourthly, some recent pathological findings for ambiguity-free analyses also apply in ambiguous frameworks. In particular, there may exist arbitrage free markets such that the ambiguous agent can guarantee every expected return with a maximum risk bounded from above by zero, i.e. , the capital market line (risk, return) becomes vertical. For instance, in the (non-ambiguous) Black and Scholes model this property holds for important risk measures such as the absolute deviation or the CVaR. Nevertheless, in ambiguous settings, adequate increments of the ambiguity level will allow us to recover capital market lines consistent with the empirical evidence. The introduction of ambiguity may overcome several caveats of many important pricing models.
Good deals and benchmarks in robust portfolio selection
S0377221715008486
Participatory budgets are becoming increasingly popular in many municipalities all over the world. The underlying idea is to allow citizens to participate in the allocation of a fraction of the municipal budget. There are many variants of such processes. However, in most cases they assume a fixed budget based upon a maximum amount of money to be spent. This approach seems lacking, especially in times of crisis when public funding suffers high volatility and widespread cuts. In this paper, we propose a model for participatory budgeting under uncertainty based on stochastic programming.
A participatory budget model under uncertainty
S0377221715008498
We extend a well-known differential oligopoly game to encompass the possibility for production to generate a negative environmental externality, regulated through Pigouvian taxation and price caps. We show that, if the price cap is set so as to fix the tolerable maximum amount of emissions, the resulting equilibrium investment in green R&D is indeed concave in the structure of the industry. Our analysis appears to indicate that inverted-U-shaped investment curves are generated by regulatory measures instead of being a ‘natural’ feature of firms’ decisions.
R&D for green technologies in a dynamic oligopoly: Schumpeter, Arrow and inverted-U’s
S0377221715008681
This paper studies strategic decentralization in binary choice composite network congestion games. A player decentralizes if she lets some autonomous agents decide, respectively, how to send different parts of her stock from the origin to the destination. This paper shows that, with convex, strictly increasing and differentiable arc cost functions, an atomic splittable player always has an optimal unilateral decentralization strategy. Besides, unilateral decentralization gives her the same advantage as being the leader in a Stackelberg congestion game. Finally, unilateral decentralization by an atomic player has a negative impact on the social cost and on the costs of the other players at the equilibrium of the congestion game.
Strategic decentralization in binary choice composite congestion games
S0377221715008693
There is a long standing, but thin, stream of experimental behavioural research into understanding how modellers within operational research (OR) behave when constructing models, and how individuals use such models to make decisions. Such research aims to better understand the modelling process, using empirical studies to construct a body of knowledge. Drawing upon this research, and experimental behavioural research in associated research areas, this paper aims to summarise the current body of knowledge. It suggests that we have some experimentally generated findings concerning the construction of models, model usage, the impact of model visualisation, and the effect (or lack thereof) of cognitive style on decision quality. The paper also considers how experiments have been operationalised, and particularly the problem of the dependent variable in such research (that is, what beneficial outputs can be measured in an experiment). It concludes with a consideration of what we might come to know through future experimental behavioural research, suggesting a more inclusive approach to experimental design.
Experimental behavioural research in operational research: What we know and what we might come to know
S0377221715008711
A cooperative game consists of a set of players and a characteristic function determining the maximal gain or minimal cost that every subset of players can achieve when they decide to cooperate, regardless of the actions that the other players take. The relationships of closeness among the players should modify the bargaining among them and therefore their payoffs. The first models that studied this closeness used a priori unions or undirected graphs. In the a priori union model a partition of the grand coalition is assumed. Each element of the partition represents a group of players with the same interests. The groups negotiate among themselves to form the grand coalition and later, inside each one, players bargain among themselves. Here we propose to use proximity relations to represent leveled closeness of the interests among the players, thereby extending the a priori unions model.
Cooperation among agents with a proximity relation
S0377221715008723
A Projected-Gradient Underdetermined Newton-like algorithm will be introduced for finding a solution of a Horizontal Nonlinear Complementarity Problem (HNCP) corresponding to a feasible solution of a Mathematical Programming Problem with Complementarity Constraints (MPCC). The algorithm employs a combination of Interior-Point Newton-like and Projected-Gradient directions with a line-search procedure that guarantees global convergence to a solution of HNCP or, at least, a stationary point of the natural merit function associated to this problem. Fast local convergence will be established under reasonable assumptions. The new algorithm can be applied to the computation of a feasible solution of MPCC with a target objective function value. Computational experience on test problems from well-known sources will illustrate the efficiency of the algorithm to find feasible solutions of MPCC in practice.
Feasibility problems with complementarity constraints
S0377221715008735
In the prior literature on measuring the efficiency of two-stage processes, there are both radial and non-radial methods of efficiency measurement. In some cases, non-radial methods which allow all inputs, intermediate measures and outputs to change non-proportionally are more appropriate than radial methods, but they do not ensure stage efficiency or allow for the efficiency decomposition of two-stage processes. Based on slacks-based measure (SBM), this paper develops both envelopment-based and multiplier-based models to obtain simultaneously both the frontier projection and the efficiency decomposition. Specifically, we propose the variable intermediate measures SBM (VSBM) model to evaluate the system efficiency of two-stage processes and consider the following three properties of the VSBM model: 1) we derive the efficient DEA frontier projection based on the VSBM model; 2) we address potential conflicts in this model with respect to the intermediate measures; 3) we prove that the system inefficiency is equivalent to the sum of inefficiencies of the two stages. Furthermore, we derive the efficiency decomposition of two-stage processes based on the dual of the VSBM model. Finally, we apply our proposed approach to real data of US commercial banks, and extend our approach to settings in which the assumption of variable returns to scale (VRS) holds or there are more general network structures.
Frontier projection and efficiency decomposition in two-stage processes with slacks-based measures
S0377221715008747
We consider a small- and medium-sized enterprise (SME) with a funding gap intending to invest in a project, of which the cash flow follows a double exponential jump-diffusion process. In contrast to traditional corporate finance theory, we assume the SME is unable to get a loan directly from a bank and hence it enters into a partial guarantee agreement with an insurer and a lender. Utilizing a real options approach, we develop an investment and financing model with a partial guarantee. We explicitly derive the pricing and timing of the option to invest. We find that if the funding gap rises, the option value decreases but its investment threshold first declines and then increases. The larger the guarantee level, the lower the option value and the later the investment. The optimal coupon rate decreases with project risk and a growth of the guarantee level can effectively reduce agency conflicts.
Investment and financing for SMEs with a partial guarantee and jump risk
S0377221715008759
This paper treats an elementary optimization problem, which arises whenever an inbound stream of items is to be intermediately stored in a given number of parallel stacks, so that blockages during their later retrieval are avoided. A blockage occurs whenever an item to be retrieved earlier is blocked by another item with lower priority stored on top of it in the same stack. Our stack loading problem arises, for instance, if containers arriving by vessel are intermediately stored in a container yard of a port or if, during nighttime, successively arriving wagons are to be parked in multiple parallel dead-end rail tracks of a tram depot. We formalize the resulting basic stack loading problem, investigate its computational complexity, and present suited exact and heuristic solution procedures.
The parallel stack loading problem to minimize blockages
S0377221715008760
Primarily this paper addresses those members of the OR community who remain unconvinced that widening the Behavioural Operational Research agenda beyond decision behaviour modelling to encompass research on the modelling process itself, is a desirable and/or necessary step for the parent discipline to take. Using a process perspective that emphasises human activity and the temporal evolution of OR projects, the paper shows that the appeal of this particular strand of BOR lies in its ability to strengthen the bridge between academic OR and its professional practice in which the human and social challenges can be just as important as the intellectual and technical ones. In so doing, this wider remit for BOR better positions practitioners to reduce the reliance that they currently have on apprenticeship and the gradual accumulation of craft skills in meeting the various challenges that they face. An immediate priority outlined in this paper is for academic and practitioner authors to turn further in the direction of relevant theory in an attempt to communicate process understandings of OR interventions through the literature that better resonate with experience.
The what, the why and the how of behavioural operational research—An invitation to potential sceptics
S0377221715008772
This paper deals with a variation of the Heston hybrid model with stochastic interest rate illustrated in Grzelak and Oosterlee (2011). This variation leads to a multi-factor Heston model where one factor is the stochastic interest rate. Specifically, the dynamics of the asset price is described through two stochastic factors: one related to the stochastic volatility and the other to the stochastic interest rate. The proposed model has the advantage of being analytically tractable while preserving the good features of the Heston hybrid model in Grzelak and Oosterlee (2011) and of the multi-factor Heston model in Christoffersen et al. (2009). The analytical treatment is based on an appropriate parametrization of the probability density function which allows us to compute explicitly relevant integrals which define option pricing and moment formulas. The moments and mixed moments of the asset price and log-price variables are given by elementary formulas which do not involve integrals. A procedure to estimate the model parameters is proposed and validated using three different data-sets: the prices of call and put options on the U.S. S&P 500 index, the values of the Credit Agricole index linked policy, Azione Più Capitale Garantito Em.64., and the U.S. three-month, two and ten year government bond yields. The empirical analysis shows that the stochastic interest rate plays a crucial role as a volatility factor and provides a multi-factor model that outperforms the Heston model in pricing options. This model can also provide insights into the relationship between short and long term bond yields.
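Although the model above is analytically tractable, its two stochastic factors can be made concrete with a plain Euler Monte Carlo sketch of an asset with Heston-type stochastic variance and a CIR short rate; the parameter values, the correlation structure, and the CIR choice for the rate are assumptions made for illustration and do not reproduce the paper's parametrization or its closed-form results.

    import numpy as np

    def simulate_heston_stoch_rate(S0=100.0, v0=0.04, r0=0.02, T=1.0, n_steps=252,
                                   kappa=2.0, theta=0.04, sigma_v=0.3,
                                   a=1.5, b=0.02, sigma_r=0.1, rho=-0.5, seed=0):
        # Euler discretization of asset price S, variance v and short rate r:
        # dS = r S dt + sqrt(v) S dW1,  dv = kappa (theta - v) dt + sigma_v sqrt(v) dW2,
        # dr = a (b - r) dt + sigma_r sqrt(r) dW3,  with corr(dW1, dW2) = rho.
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        S, v, r = S0, v0, r0
        for _ in range(n_steps):
            z1, z2, z3 = rng.standard_normal(3)
            dW1 = np.sqrt(dt) * z1
            dW2 = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho ** 2) * z2)
            dW3 = np.sqrt(dt) * z3
            S += r * S * dt + np.sqrt(max(v, 0.0)) * S * dW1
            v += kappa * (theta - v) * dt + sigma_v * np.sqrt(max(v, 0.0)) * dW2
            r += a * (b - r) * dt + sigma_r * np.sqrt(max(r, 0.0)) * dW3
        return S, v, r

    print(simulate_heston_stoch_rate())   # one simulated terminal (price, variance, rate)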
An explicitly solvable Heston model with stochastic interest rate
S0377221715008784
In this paper, we present complexity results for storage loading problems where the storage area is organized in fixed stacks with a limited common height. Such problems appear in several practical applications, e.g., in the context of container terminals, container ships or warehouses. Incoming items arriving at a storage area have to be assigned to stacks so that certain constraints are respected (e.g., not every item may be stacked on top of every other item). We study structural properties of the general model and special cases where at most two or three items can be stored in each stack. Besides providing polynomial time algorithms for some of these problems, we establish the boundary to NP-hardness.
Complexity results for storage loading problems with stacking constraints
S0377221715008796
This communication complements the DEA model proposed by Lovell and Pastor (1999), by incorporating both positive and negative criteria in the model. As such, we propose a DEA model, known as pure DEA, using a directional distance function approach.
A translation invariant pure DEA model
S0377221715008802
Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on-line repeated measurements of pig weights and feeding and is updated using a Bayesian approach. Numerical examples are given to illustrate the features of the proposed optimization model.
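The Bayesian updating of pen-level weight information mentioned above can be illustrated with a conjugate normal-normal update of the mean live weight from a new batch of weighings; the prior values, the known observation variance, and the numbers are illustrative and not the paper's state-space specification.

    import numpy as np

    def update_weight_belief(prior_mean, prior_var, observations, obs_var):
        # Conjugate normal-normal update of the mean live weight (observation variance assumed known).
        n = len(observations)
        post_var = 1.0 / (1.0 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + np.sum(observations) / obs_var)
        return post_mean, post_var

    # prior belief: mean weight 70 kg with variance 9; new weighings from the pen
    print(update_weight_belief(70.0, 9.0, np.array([72.5, 71.0, 73.2, 74.1]), obs_var=4.0))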
A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs
S0377221715008814
In this paper, we investigate the asymptotic behaviors of the loss reservings computed by individual data method and its aggregate data versions by Chain-Ladder (CL) and Bornhuetter–Ferguson (BF) algorithms. It is shown that all deviations of the three reservings from the individual loss reserve (the projection of the outstanding liability on the individual data) converge weakly to a zero-mean normal distribution at the √n rate. The analytical forms of the asymptotic variances are derived and compared by both analytical and numerical examples. The results show that the individual method has the smallest asymptotic variance, followed by the BF algorithm, and the CL algorithm has the largest asymptotic variance.
Asymptotic behaviors of stochastic reserving: Aggregate versus individual models
S0377221715008826
This paper proposes a time-computing model using the Graphical Evaluation and Review Technique (GERT) to analyse concurrent New Product Development (NPD) processes. The research presented here differs from previous work carried out on concurrent engineering. First, we conceptualise a concurrent NPD process using the GERT scheduling technique and derive a method of modelling the information and communication complexities within the process. Second, we extend previous research carried out on concurrent engineering and incorporate it within our model. Finally, we present an alternative method of analysing concurrent NPD process for both researchers and project managers alike. The GERT model developed in this paper was successfully employed at two NPD firms located in Ireland and Iran.
The use of a GERT based method to model concurrent product development processes
S0377221715008838
Shipment consolidation has been advocated by researchers and politicians as a means to reduce cost and improve environmental performance of logistics activities. This paper investigates consolidated transport solutions with a common shipment frequency. When a service provider designs such a solution for its customers, she faces a trade-off: to have the most time-sensitive customers join the consolidated solution, the frequency must be high, which makes it difficult to gather enough demand to reach the scale economies of the solution; but by not having the most time-sensitive customers join, there will be less demand per time unit, which also makes it difficult to reach the scale economies. In this paper we investigate the service provider’s pricing and timing problem and the environmental implications of the optimal policy. The service provider is responsible for multiple customers’ transports, and offers all customers two long-term contracts at two different prices: direct express delivery with immediate dispatch at full cost, or consolidated delivery at a given frequency at a reduced cost. It is shown that the optimal policy is largely driven by customer heterogeneity: limited heterogeneity in customers’ costs leads to very different optimal policies compared to large heterogeneity. We argue that the reason so many consolidation projects fail may be due to a strategic mismatch between heterogeneity and consolidation policy. We also show that even if the consolidated solution is implemented, it may lead to a larger environmental impact than direct deliveries due to inventory build-up or a higher-than-optimal frequency of the consolidated transport.
Pricing and timing of consolidated deliveries in the presence of an express alternative: Financial and environmental analysis
S0377221715008851
In a paper published in Management Science in 1984, Korhonen, Wallenius, and Zionts presented the idea and method based on convex-cone dominance in the discrete Multiple Criteria Decision Making framework. In our current paper, we revisit the old idea from a new standpoint and provide the mathematical theory leading to a dual-cone based approach to solving such problems. Our paper makes the old results computationally more tractable. The results provided in the present paper also help extend the theory.
Dual cone approach to convex-cone dominance in multiple criteria decision making
S0377221715008863
In this paper we develop a hybrid metaheuristic for approaching the Pareto front of the bi-objective ring star problem. This problem consists of finding a simple cycle (ring) through a subset of nodes of a network. The aim is to minimize both the cost of connecting the nodes in the ring and the cost of allocating the nodes not in the ring to nodes in the ring. The algorithm preserves the general characteristics of a multiobjective evolutionary algorithm and embeds a local search procedure which deals with multiple objectives. The encoding scheme utilized leads to solving a Traveling Salesman Problem in order to compute the ring associated with the chromosome. This allows the algorithm to implicitly discard feasible solutions which are not efficient. The algorithm also includes an ad-hoc initial population construction which contributes to diversification. Extensive computational experiments using benchmark problems show the performance of the algorithm and reveal the noteworthy contribution of the local search procedure.
MEALS: A multiobjective evolutionary algorithm with local search for solving the bi-objective ring star problem
S0377221715008875
This work gives an improved competitive analysis for an online inventory problem with bounded storage and order costs proposed by Larsen and Wøhlk (2010). We improve the upper bound of the competitive ratio from (2 + 1/k)·(M/m) to less than (4/5)·(2 + 1/k)·(M/m), where k, M and m are parameters of the given problem. The key idea is to use linear-fractional programming and primal-dual analysis methods to find the upper bound of a central inequality.
A note: An improved upper bound for the online inventory problem with bounded storage and order costs
S0377221715008887
Determining the optimal number of mailings to send to customers is crucial. However, this decision depends on various aspects. First, it is important to specify the relevant mailing variables. By distinguishing different types of mailings and considering their sizes, we set our study apart from the majority of existing studies. To deal with unobserved heterogeneity we estimate a Mixture of Dirichlet Processes (MDP) whose components are Tobit-2 models. A policy function approach is used to take endogeneity into account. We investigate whether and how consideration of endogeneity leads to different managerial implications. To this end, we determine mailings by dynamic optimization for three individual customers which are prototypical for the segments discovered by the MDP model. We find that, according to the model which takes endogeneity into account, mailings should be avoided altogether more frequently for all three customer types.
Investigating the effects of mailing variables and endogeneity on mailing decisions