FileName | Abstract | Title |
---|---|---|
S0377221714002859 | Course timetabling is an important and recurring administrative activity in most educational institutions. This article combines a general modeling methodology with effective learning hyper-heuristics to solve this problem. The proposed hyper-heuristics are based on an iterated local search procedure that autonomously combines a set of move operators. Two types of learning for operator selection are contrasted: a static (offline) approach, with a clear distinction between training and execution phases; and a dynamic approach that learns on the fly. The resulting algorithms are tested over the set of real-world instances collected by the first and second International Timetabling Competitions. The dynamic scheme statistically outperforms the static counterpart, and produces competitive results when compared to the state-of-the-art, even producing a new best-known solution. Importantly, our study illustrates that algorithms with increased autonomy and generality can outperform human-designed problem-specific algorithms. | Effective learning hyper-heuristics for the course timetabling problem |
S0377221714002860 | The assessment of additive value functions in Multicriteria Decision Aid (MCDA) has to face issues of legitimacy and technical difficulties when real decision makers are involved. This paper presents a synergy of three complementary techniques to assess additive models on the whole criteria space. The synergy includes a revised MACBETH technique, the standard MAUT trade-off analysis and UTA-based methods for the assessment of both the marginal value functions and the weighting factors. The paper uses a set of original robustness measures and rules associated with revised MACBETH and UTA in order to manage multiple linear programming solutions and to extract robust conclusions from them. Finally, to illustrate the methods’ synergy, an application example is presented, dealing with the planning of metro extension lines. | A synergy of multicriteria techniques to assess additive value models |
S0377221714002872 | In this paper, both the duration and the cost of an activity are modeled as random variables, and accordingly, the cumulative cost at each time point also becomes a random variable along a project’s progress. We first present the new concept of an “alphorn of uncertainty” (AoU) to describe the domain of cumulative cost variation throughout the life of a project and subsequently apply it to assess the project’s financial status over time. The shape of the alphorn was obtained by mixing Monte Carlo sampling with Gantt chart analysis, which enabled us to determine a project’s financial status related to specific payment modes. To validate the AoU, we designed and conducted an extensive numerical experiment using a randomly generated data set of activity networks. The results indicate that the AoU may be a promising method for the financial management of projects under uncertainty. Furthermore, financial status under uncertain conditions is not sensitive to an activity’s choice of duration distributions or to the form of cost functions. However, payment rules can greatly affect financial status over the duration of a project. Keywords: alphorn of uncertainty; activity network; cumulative net value; earned value method; lower bound; upper bound; Monte Carlo sampling; random variable. | The relevance of the “alphorn of uncertainty” to the financial management of projects under uncertainty |
S0377221714002884 | In this paper, we consider the two-dimensional variable-sized bin packing problem (2DVSBPP) with guillotine constraint. 2DVSBPP is a well-known NP-hard optimization problem which has several real applications. A mixed bin packing algorithm (MixPacking) which combines a heuristic packing algorithm with the Best Fit algorithm is proposed to solve the single bin problem, and then a backtracking algorithm which embeds MixPacking is developed to solve the 2DVSBPP. A hybrid heuristic algorithm based on iterative simulated annealing and binary search (named HHA) is then developed to further improve the results of our backtracking algorithm. Computational experiments on the benchmark instances for 2DVSBPP show that HHA achieves good results and outperforms existing algorithms. | A hybrid heuristic algorithm for the 2D variable-sized bin packing problem |
S0377221714003105 | This paper presents a new approach for consumer credit scoring, by tailoring a profit-based classification performance measure to credit risk modeling. This performance measure takes into account the expected profits and losses of credit granting and thereby better aligns the model developers’ objectives with those of the lending company. It is based on the Expected Maximum Profit (EMP) measure and is used to find a trade-off between the expected losses – driven by the exposure of the loan and the loss given default – and the operational income given by the loan. Additionally, one of the major advantages of the proposed measure is that it permits the calculation of the optimal cutoff value, which is necessary for model implementation. To test the proposed approach, we use a dataset of loans granted by a government institution and benchmark the accuracy and monetary gain of using EMP, accuracy, and the area under the ROC curve as measures for selecting model parameters, and for determining the respective cutoff values. The results show that our proposed profit-based classification measure outperforms the alternative approaches in terms of both accuracy and monetary value in the test set, and that it facilitates model deployment. | Development and application of consumer credit scoring models using profit-based classification measures |
S0377221714003117 | The comparison of mental models of dynamic systems improves our understanding of how people comprehend, interpret, and subsequently influence dynamic management tasks. Approaches to comparing mental models currently used in managerial and organizational cognition research, especially the distance-ratio and the closeness-approaches, have been criticized for not considering essential characteristics of dynamic managerial situations. This paper builds on a recent analysis method developed to compare mental models of dynamic systems, and introduces this mathematical approach to management and organizational researchers by means of the SEXTANT software. It presents the process of mental model elicitation, analysis, comparison, and interpretation. An example with four elicited mental models illustrates the software’s features to analyze and present the results. Then, the software is compared with existing software to map and compare mental models. Our conclusion is that SEXTANT marks a significant step in enabling large-scale studies about mental models of dynamic systems. | The SEXTANT software: A tool for automating the comparative analysis of mental models of dynamic systems |
S0377221714003129 | In this paper we consider aggregate Malmquist productivity index measures which allow inputs to be reallocated within the group (when in output orientation). This merges the single period aggregation results allowing input reallocation of Nesterenko and Zelenyuk (2007) with the aggregate Malmquist productivity index results of Zelenyuk (2006) to determine aggregate Malmquist productivity indexes that are justified by economic theory, consistent with previous aggregation results, and which maintain analogous decompositions to the original measures. Such measures are of direct relevance to firms or countries that have merged (making input reallocation possible), allowing them to measure potential productivity gains and how these have been realised (or not) over time. | Aggregation of Malmquist productivity indexes allowing for reallocation of resources |
S0377221714003130 | In many branch-and-price algorithms, the column generation subproblem consists of computing feasible constrained paths. In the capacitated arc-routing problem (CARP), elementarity constraints concerning the edges to be serviced and additional constraints resulting from the branch-and-bound process together impose two types of loop-elimination constraints. To fulfill the former constraints, it is common practice to rely on a relaxation where loops are allowed. In a k-loop elimination approach, all loops of length k and smaller are forbidden. Following Bode and Irnich (2012) for solving the CARP, branching on followers and non-followers is the only known approach to guarantee integer solutions within branch-and-price. However, it comes at the cost of additional task-2-loop elimination constraints. In this paper, we show that a combined (k, 2)-loop elimination in the shortest-path subproblem can be accomplished in a computationally efficient way. Overall, the improved branch-and-price often allows the computation of tighter lower bounds and integer optimal solutions for several instances from standard benchmark sets. | The shortest-path problem with resource constraints with (k, 2)-loop elimination and its application to the capacitated arc-routing problem |
S0377221714003142 | Due to an increasing demand for public transportation and intra-urban mobility, the efficient organization of public transportation has gained significant importance in the last decades. In this paper we present a model formulation for the bus rapid transit route design problem, given a fixed number of routes to be offered. The problem can be tackled using a decomposition strategy, where route design and the determination of frequencies and passenger flows are dealt with separately. We propose a hybrid metaheuristic based on a combination of Large Neighborhood Search (LNS) and Linear Programming (LP). The algorithm is iterative: decisions on the design of routes are handled using LNS, and the resulting passenger flows and frequencies are determined by solving an LP. The solution obtained may then be used to guide the exploration of new route designs in the following iterations within LNS. Several problem-specific operators are suggested and have been tested. The proposed algorithm compares extremely favorably and is able to obtain high-quality solutions within short computational times. | Hybrid large neighborhood search for the bus rapid transit route design problem |
S0377221714003154 | In this paper we study a generalization of the Orienteering Problem (OP) which we call the Clustered Orienteering Problem (COP). The OP, also known as the Selective Traveling Salesman Problem, is a problem where a set of potential customers is given and a profit is associated with the service of each customer. A single vehicle is available to serve the customers. The objective is to find the vehicle route that maximizes the total collected profit in such a way that the duration of the route does not exceed a given threshold. In the COP, customers are grouped in clusters. A profit is associated with each cluster and is gained only if all customers belonging to the cluster are served. We propose two solution approaches for the COP: an exact and a heuristic one. The exact approach is a branch-and-cut while the heuristic approach is a tabu search. Computational results on a set of randomly generated instances are provided to show the efficiency and effectiveness of both approaches. | The Clustered Orienteering Problem |
S0377221714003166 | In the Single Source Capacitated Facility Location Problem (SSCFLP) each customer has to be assigned to one facility that supplies its whole demand. The total demand of customers assigned to each facility cannot exceed its capacity. An opening cost is associated with each facility, and is paid if at least one customer is assigned to it. The objective is to minimize the total cost of opening the facilities and supplying all the customers. In this paper we extend the Kernel Search heuristic framework to general Binary Integer Linear Programming (BILP) problems, and apply it to the SSCFLP. The heuristic is based on solving to optimality a sequence of subproblems, where each subproblem is restricted to a subset of the decision variables. The subsets of decision variables are constructed starting from the optimal values of the linear relaxation. Variants based on variable fixing are proposed to improve the efficiency of the Kernel Search framework. The algorithms are tested on benchmark instances and new very large-scale test problems. Computational results demonstrate the effectiveness of the approach. The Kernel Search algorithm outperforms the best heuristics for the SSCFLP available in the literature. It found the optimal solution for 165 out of the 170 instances with a proven optimum. The error achieved in the remaining instances is negligible. Moreover, on 100 new very large-scale instances, it achieved an average gap equal to 0.64%, computed with respect to a lower bound or the optimum, when available. The variants based on variable fixing improved the efficiency of the algorithm with minor deteriorations of the solution quality. | A heuristic for BILP problems: The Single Source Capacitated Facility Location Problem |
S0377221714003178 | The aim of this paper is to solve a real-world problem proposed by an international company operating in Spain and modeled as a variant of the Open Vehicle Routing Problem in which the makespan, i.e., the maximum time spent on the vehicle by one person, must be minimized. A competitive multi-start algorithm, able to obtain high-quality solutions within reasonable computing time, is proposed. The effectiveness of the algorithm is analyzed through computational testing on a set of 19 school-bus routing benchmark problems from the literature, and on 9 hard real-world problem instances. | A multi-start algorithm for a balanced real-world Open Vehicle Routing Problem |
S0377221714003191 | We consider multi-stage stochastic fluid models (SFMs), driven by applications in telecommunications and manufacturing in which control of the behavior of the system during congestion may be required. In a two-stage SFM, the process starts from Stage 1 in level 0, and moves to Stage 2 when reaching threshold b₂ from below. Stage 1 starts again when reaching threshold b₁ < b₂ from above. While in a particular stage, the process evolves according to a traditional SFM with a unique set of phases, generator and fluid rates. We first consider a two-stage SFM with general, real fluid change rates. Next, we analyze a two-stage SFM with an upper boundary B > b₂. Finally, we discuss a generalization to multi-stage SFMs. We use matrix-analytic methods and derive efficient methodology for the analysis of this class of models. | Multi-stage stochastic fluid models for congestion control |
S0377221714003208 | This paper proposes a novel surrogate-model-based multiobjective evolutionary algorithm called Differential Evolution for Multiobjective Optimization based on Gaussian Process models (GP-DEMO). The algorithm is based on the newly defined relations for comparing solutions under uncertainty. These relations minimize the possibility of wrongly performed comparisons of solutions due to inaccurate surrogate model approximations. The GP-DEMO algorithm was tested on several benchmark problems and two computationally expensive real-world problems. To assess the results, we compared them with another surrogate-model-based algorithm called Generational Evolution Control (GEC) and with the Differential Evolution for Multiobjective Optimization (DEMO). The quality of the results obtained with GP-DEMO was similar to the results obtained with DEMO, but with significantly fewer exactly evaluated solutions during the optimization process. The quality of the results obtained with GEC was lower compared to the quality gained with GP-DEMO and DEMO, mainly due to wrongly performed comparisons of the inaccurately approximated solutions. | GP-DEMO: Differential Evolution for Multiobjective Optimization based on Gaussian Process models |
S0377221714003221 | This study introduces the Static Bicycle Relocation Problem with Demand Intervals (SBRP-DI), a variant of the One Commodity Pickup and Delivery Traveling Salesman Problem (1-PDTSP). In the SBRP-DI, the stations are required to have an inventory of bicycles lying between given lower and upper bounds and initially have an inventory which does not necessarily lie between these bounds. The problem consists of redistributing the bicycles among the stations, using a single capacitated vehicle, so that the bounding constraints are satisfied and the repositioning cost is minimized. The real-world application of this problem arises in rebalancing operations for shared bicycle systems. The repositioning subproblem associated with a fixed route is shown to be a minimum cost network problem, even in the presence of handling costs. An integer programming formulation for the SBRP-DI is presented, together with valid inequalities adapted from constraints derived in the context of other routing problems and a Benders decomposition scheme. Computational results for instances adapted from the 1-PDTSP are provided for two branch-and-cut algorithms, the first one for the full formulation, and the second one with the Benders decomposition. | The static bicycle relocation problem with demand intervals |
S0377221714003233 | Hierarchical organizations, especially in government agencies, are known for their pyramidal structures and continuous training needs resulting from promotions and/or assignments. Using scientific and rational methods in the job analysis/description, recruitment/selection, assignment, performance appraisal and career planning functions of the human resource management (HRM) process decreases training costs. In this study, we develop a new chain of methodologies (the cost-effective course planning model (CECPM)) to decrease training costs and increase the level of specialization. This methodology is implemented in the following steps of the HRM process: (1) the job analysis/description step, where our Mission Description Matrix defines in measurable units the amount of training needed for an employee assigned to a position, (2) the career matrix step, where the minimum training costs for an employee’s career path are determined using our network-flow model and (3) the assignment step, where we propose a decision support system composed of an analytical hierarchy process, linear programming and Pareto optimality analysis. The results indicate that our proposed system ensures minimum training needs while satisfying person-to-position compatibility and personnel’s preferences. | A mathematical model proposal for cost-effective course planning in large hierarchical organizations |
S0377221714003245 | Horizontal collaboration among shippers is gaining traction as a way to increase logistic efficiency. The total distribution cost of a logistic coalition is generally between 9% and 30% lower than the sum of costs of each partner distributing separately. However, the coalition gain is highly dependent on the flexibility that each partner allows in its delivery terms. Flexible delivery dates, flexible order sizes, order splitting rules, etc., allow the coalition to exploit more opportunities for optimization and create better and cheaper distribution plans. An important challenge in a logistic coalition is the division (or sharing) of the coalition gain. Several methods have been proposed for this purpose, often stemming from the field of game theory. This paper argues that an adequate gain sharing method should not only be fair, but should also reward flexibility in order to persuade companies to relax their delivery terms. Methods that limit the criteria for cost allocation to the marginal costs and the values of the subcoalitions are found to be able to generate adequate incentives for companies to adopt a flexible position. In a coalition of two partners, however, we show that these methods are not able to correctly evaluate an asymmetric effort to be more flexible. For this situation, we suggest an alternative approach to better measure and reward the value of flexibility. | Measuring and rewarding flexibility in collaborative distribution, including two-partner coalitions |
S0377221714003257 | Generalized characteristic functions extend characteristic functions of ‘classical’ TU-games by assigning a real number to every ordered coalition being a permutation of any subset of the player set. Such generalized characteristic functions can be applied when the earnings or costs of cooperation among a set of players depend on the order in which the players enter a coalition. In the literature, the two main solutions for generalized characteristic functions are the one of Nowak and Radzik (1994), called the NR-value for short, and the one introduced by Sánchez and Bergantiños (1997), called the SB-value for short. In this paper, we introduce the axiom of order monotonicity with respect to the order of the players in a unanimity coalition, requiring that players who enter earlier should get no more in the corresponding (ordered) unanimity game than players who enter later. We propose several classes of order monotonic solutions for generalized characteristic functions that contain the NR-value and SB-value as special (extreme) cases. We also provide axiomatizations of these classes. | Order monotonic solutions for generalized characteristic functions |
S0377221714003269 | Two process capabilities have been identified in the operations management literature to leverage supplier relationships for competitive performance: the ability to continuously improve processes with suppliers (process alignment) and the ability to make changes to these relationships (partnering flexibility). While firms may need both capabilities to be successful, it is unclear what strategy should be used to combine these two seemingly contradictory process capabilities. Using data collected from 318 manufacturing firms on a focal firm’s process capabilities to manage supplier relationships, we examine the performance impacts of two dimensions of a particular strategy: balancing (focusing on achieving a close match between the two process capabilities) and complementing (focusing on creating synergy between the two process capabilities). Our results indicate that the balancing dimension has a much stronger effect on a firm’s competitive performance than the complementing dimension. Also, when a firm pursues a high balance and strong complements strategy (combining high levels of both process capabilities), it is able to reduce its competitive performance risks more than when it pursues a high balance and weak complements strategy (combining low levels of both capabilities) or when it implements unbalanced strategies that emphasize either process alignment or partnering flexibility (combining low levels of one capability with high levels of the other). We conclude by discussing the theoretical contributions and practical guidelines. | How should process capabilities be combined to leverage supplier relationships competitively? |
S0377221714003440 | We introduce a class of incremental network design problems focused on investigating the optimal choice and timing of network expansions. We concentrate on an incremental network design problem with shortest paths. We investigate structural properties of optimal solutions, show that the simplest variant is NP-hard, analyze the worst-case performance of natural greedy heuristics, derive a 4-approximation algorithm, and conduct a small computational study. | Incremental network design with shortest paths |
S0377221714003452 | One important problem faced by the liner shipping industry is the fleet deployment problem. In this problem, the number and type of vessels to be assigned to the various shipping routes need to be determined, in such a way that profit is maximized, while at the same time ensuring that (most of the time) sufficient vessel capacity exists to meet shipping demand. Thus far, the standard assumption has been that complete probability distributions can be readily specified to model the uncertainty in shipping demand. In this paper, it is argued that such distributions are hard, if not impossible, to obtain in practice. To relax this oftentimes restrictive assumption, a new distribution-free optimization model is proposed that only requires the specification of the mean, standard deviation and an upper bound on the shipping demand. The proposed model possesses a number of attractive properties: (1) it can be seen as a generalization of an existing variation of the liner fleet deployment model; (2) it remains a mixed integer linear program; and (3) it has a very intuitive interpretation. A numerical case study is provided to illustrate the model. | Distribution-free vessel deployment for liner shipping |
S0377221714003464 | We examine the 2D strip packing problem with guillotine-cut constraint, where the objective is to pack all rectangles into a strip with fixed width and minimize the total height of the strip. We combine three of the most successful ideas for the orthogonal rectangular packing problems into a single coherent algorithm: (1) packing a block of rectangles instead of a single rectangle in each step; (2) dividing the strip into layers and packing layer by layer; and (3) unrolling and repacking the top portion of the solution, where wasted space usually occurs. Computational experiments on benchmark test sets suggest that our approach rivals existing approaches. | A block-based layer building approach for the 2D guillotine strip packing problem |
S0377221714003476 | In this paper, we present a Hierarchical Differential Evolution (HDE) algorithm for minimal cut set (mcs) identification of coherent and non-coherent Fault Trees (FTs). In realistic applications to large-size systems, problems may be encountered in handling a large number of gates and events. In this work, to avoid any approximation, mcs identification is originally transformed into a hierarchical optimization problem, stated as the search for the minimum combination of cut sets that can guarantee the best coverage of all the minterms that make the system fail: during the first step of the iterative search, a multiple-population, parallel search policy is used to expedite the convergence of the second step of the exploration algorithm. The proposed hierarchical method is applied to the Reactor Protection System (RPS) of a Pressurized Water Reactor (PWR) and to the Airlock System (AS) of a CANadian Deuterium Uranium (CANDU) reactor. Results are evaluated with respect to the accuracy and computational demand of the solution found. | Hierarchical differential evolution for minimal cut sets identification: Application to nuclear safety systems |
S0377221714003488 | Allocating the right person to a task or job is a key issue for improving quality and performance of achievements, usually addressed using the concept of “competences”. Nevertheless, providing an accurate assessment of the competences of an individual may in practice be a difficult task. We suggest in this paper to model the uncertainty on the competences possessed by a person using a possibility distribution, and the imprecision on the competences required for a task using a fuzzy constraint, taking into account the possible interactions between competences using a Choquet integral. As a difference with comparable approaches, we then suggest to perform the allocation of persons to jobs using a robust optimisation approach, which makes it possible to minimise the risk taken by the decision maker. We first apply this framework to the problem of selecting a candidate among n for a job, then extend the method to the problem of selecting c candidates for j jobs (c ⩾ j) using the leximin criterion. | Robust competence assessment for job assignment |
S0377221714003506 | This paper considers the mobile facility routing and scheduling problem with stochastic demand (MFRSPSD). The MFRSPSD simultaneously determines the route and schedule of a fleet of mobile facilities which serve customers with uncertain demand to minimize the total cost generated during the planning horizon. The problem is formulated as a two-stage stochastic programming model, in which the first stage decision deals with the temporal and spatial movement of MFs and the second stage handles how MFs serve customer demands. An algorithm based on the multicut version of the L-shaped method is proposed in which several lower bound inequalities are developed and incorporated into the master program. The computational results show that the algorithm yields a tighter lower bound and converges faster to the optimal solution. The result of a sensitivity analysis further indicates that in dealing with stochastic demand the two-stage stochastic programming approach has a distinctive advantage over the model considering only the average demand in terms of cost reduction. | A multicut L-shaped based algorithm to solve a stochastic programming model for the mobile facility routing and scheduling problem |
S0377221714003518 | In a passenger railroad system, the service planning problem determines the train stopping strategy, taking into consideration multiple train classes and customer origin–destination (OD) demand, to maximize the short-term operational profit of a rail company or the satisfaction levels of the passengers. The service plan is traditionally decided by rule of thumb, an approach that leaves much room for improvement. To systematically analyze this problem, we propose an integer program approach to determine the optimal service plan for a rail company. The formulated problem has a complex solution space, and commonly used commercial optimization packages are currently incapable of solving this problem efficiently, especially when problems of realistic sizes are considered. Therefore, we develop an implicit enumeration algorithm that incorporates intelligent branching and effective bounding strategies so that the solution space of this integer program can be explored efficiently. The numerical results show that the proposed implicit enumeration algorithm can solve real-world problems and can obtain service plans that are at least as good as those developed by the rail company. | An implicit enumeration algorithm for the passenger service planning problem: Application to the Taiwan Railways Administration line |
S0377221714003695 | For electricity market participants trading in sequential markets with differences in price levels and risk exposure, it is relevant to analyze the potential of coordinated bidding. We consider a Nordic power producer who engages in the day-ahead spot market and the hour-ahead balancing market. In both markets, clearing prices and dispatched volumes are unknown at the time of bidding. However, in the balancing market, the market participant faces an additional risk of not being dispatched. Taking into account the sequential clearing of these markets and the gradual realization of market prices, we formulate the bidding problem as a multi-stage stochastic program. We investigate whether higher risk exposure may cause hesitation to bid into the balancing market. Furthermore, we quantify the gain from coordinated bidding, and by deriving bounds on this gain, assess the performance of alternative bidding strategies used in practice. | Bidding in sequential electricity markets: The Nordic case |
S0377221714003701 | We address the quickest path problem, proposing a new algorithm based on the fact that its optimal solution corresponds to a supported non-dominated point in the objective space of the minsum–maxmin bicriteria path problem. This result allows us to design a label setting algorithm which improves on all existing state-of-the-art algorithms, as shown in the extensive experiments carried out on synthetic and real networks. | Fast and fine quickest path algorithm |
S0377221714003713 | The defection or churn of customers represents an important concern for any company and a central matter of interest in customer base analysis. An additional complication arises in non-contractual settings, where the characteristics that should be observed to say that a customer has totally or partially defected are not clearly defined. As a matter of fact, different definitions of the churn situation could be used in this context. Focusing on non-contractual settings, in this paper we propose a methodology for evaluating the short-term economic effects that using a certain definition of churn would have on a company. With this aim, we have defined two efficiency measures for the economic results of a marketing campaign implemented against churn, and these measures have been computed using a set of definitions of partial defection. Our methodology finds the definition that maximizes both efficiency measures and, moreover, the monetary amount that the company should invest per customer in the campaign to achieve the optimal solution. This has been modelled as a multiobjective optimization problem that we solved using compromise programming. Numerical results using real data from a Spanish retailing company are presented and discussed in order to show the performance and validity of our proposal. | A methodology based on profitability criteria for defining the partial defection of customers in non-contractual settings |
S0377221714003725 | We consider the two-machine no-wait open shop minimum makespan problem in which the determination of an optimal solution requires an optimal pairing of the jobs followed by the optimal sequencing of the job pairs. We show that the required enumeration can be curtailed by reducing the pair sequencing problem for a given pair set to a traveling salesman problem which is equivalent to a two-machine no-wait flow shop problem solvable in O(n log n) time. We then propose an optimal O(n log n) algorithm for the proportionate problem with equal machine speeds in which each job has the same processing time on both machines. We show that our O(n log n) algorithm also applies to the more general proportionate problem with equal machine speeds and machine-specific setup times. We also analyze the proportionate problem with unequal machine speeds and conclude that the required enumeration can be further curtailed (compared to the problem with arbitrary job processing times) by eliminating certain job pairs from consideration. | The two-machine no-wait general and proportionate open shop makespan problem |
S0377221714003737 | The m-machine no-wait flowshop scheduling problem with the objective of minimizing total completion time, subject to the constraint that the makespan value is not greater than a certain value, is addressed in this paper. Setup times are considered non-zero and are treated as separate from processing times. Several recent algorithms are adapted and proposed for the problem: an insertion algorithm, two genetic algorithms, three simulated annealing algorithms, two cloud theory-based simulated annealing algorithms, and a differential evolution algorithm. An extensive computational analysis has been conducted for the evaluation of the proposed algorithms. The computational analysis indicates that one of the nine proposed algorithms, one of the simulated annealing algorithms (ISA-2), performs much better than the others under the same computational time. Moreover, the analysis indicates that the algorithm ISA-2 performs significantly better than the best existing algorithm. Specifically, the best performing algorithm, ISA-2, proposed in this paper reduces the error of the existing best algorithm in the literature by at least 90% under the same computational time. All the results have been statistically tested. | Total completion time with makespan constraint in no-wait flowshops with setup times |
S0377221714003749 | This research focuses on the stochastic assignment system motivated by outpatient clinics, especially physical therapy in rehabilitation services. The aim of this research is to develop a stochastic overbooking model to enhance the service quality as well as to increase the utilization of multiple resources, like therapy equipment in a physical therapy room, with consideration of patients’ call-in sequence. The schedule for a single-service period includes a fixed number of blocks of equal length. When patients call, they are assigned to an appointment time for that block, and an existing appointment is not allowed to be changed. In each visit, a patient might require more than one resource and has a probability of no-show. Two estimation methods were proposed for the expected waiting and overtime cost with multiple resources: the Convolution Estimation Method and the Joint Cumulative Estimation Method, for the upper and lower bound values, respectively. A numerical example based on a physical therapy room was used to show that this stochastic model was able to schedule patients for better profitability compared with traditional appointment systems based on four prioritization rules. The workload in each appointment slot was more balanced, although more patients were assigned to the first slot to fill up the empty room. | A stochastic appointment scheduling system on multiple resources with dynamic call-in sequence and patient no-shows for an outpatient clinic |
S0377221714003750 | Infrastructure security against possible attacks involves making decisions under uncertainty. This paper presents game theoretic models of the interaction between an adversary and a first responder in order to study the problem of security within a transportation infrastructure. The risk measure used is based on the consequence of an attack in terms of the number of people affected or the occupancy level of a critical infrastructure, e.g., stations, trains, subway cars, escalators, bridges, etc. The objective of the adversary is to inflict the maximum damage to a transportation network by selecting a set of nodes to attack, while the first responder (emergency management center) allocates resources (emergency personnel or personnel-hours) to the sites of interest in an attempt to find the hidden adversary. This paper considers both static games and dynamic games in which the first responder is mobile. The unique equilibrium strategy pair is given in closed form for the simple static game. For the dynamic game, the equilibrium for the first responder becomes the best patrol policy within the infrastructure. This model uses partially observable Markov decision processes (POMDPs) in which the payoff functions depend on an exogenous people flow, and thus are time varying. A numerical example illustrating the algorithm is presented to evaluate an equilibrium strategy pair. | Infrastructure security games |
S0377221714003762 | In this paper, we revisit the consumption–investment problem with a general discount function and a logarithmic utility function in a non-Markovian framework. The coefficients in our model, including the interest rate, appreciation rate and volatility of the stock, are assumed to be adapted stochastic processes. Following Yong (2012a,b)’s method, we study an N-person differential game. We adopt a martingale method to solve an optimization problem of each player and characterize their optimal strategies and value functions in terms of the unique solutions of BSDEs. Then by taking limit, we show that a time-consistent equilibrium consumption–investment strategy of the original problem consists of a deterministic function and the ratio of the market price of risk to the volatility, and the corresponding equilibrium value function can be characterized by the unique solution of a family of BSDEs parameterized by a time variable. | Consumption–investment strategies with non-exponential discounting and logarithmic utility |
S0377221714003774 | In this paper, an algorithm for the fast computation of network reliability bounds is proposed. The evaluation of the network reliability is an intractable problem for very large networks, and hence approximate solutions based on reliability bounds have assumed importance. The proposed bounds computation algorithm is based on an efficient BDD representation of the reliability graph model and a novel search technique to find important minpaths/mincuts to quickly reduce the gap between the reliability upper and lower bounds. Furthermore, our algorithm allows the control of the gap between the two bounds by controlling the overall execution time. Therefore, a trade-off between prediction accuracy and computational resources can be easily made in our approach. The numerical results are presented for large real example reliability graphs to show the efficacy of our approach. | Fast computation of bounds for two-terminal network reliability |
S0377221714003786 | Planning techniques for large-scale earthworks have been considered in this article. To improve these activities, a “block theoretic” approach was developed that provides an integrated solution consisting of an allocation of cuts to fills and a sequence of cuts and fills over time. It considers the constantly changing terrain by computing haulage routes dynamically. Consequently, more realistic haulage costs are used in the decision making process. A digraph is utilised to describe the terrain surface, which has been partitioned into uniform grids. It reflects the true state of the terrain, and is altered after each cut and fill. A shortest path algorithm is successively applied to calculate the cost of each haul, and these costs are summed over the entire sequence to provide a total cost of haulage. To solve this integrated optimisation problem, a variety of solution techniques were applied, including constructive algorithms, meta-heuristics and parallel programming. The extensive numerical investigations have successfully shown the applicability of our approach to real sized earthwork problems. | An integrated approach for earthwork allocation, sequencing and routing |
S0377221714003798 | This paper is concerned with the Online Quota Traveling Salesman Problem. Depending on the symmetry of the metric and the requirement for the salesman to return to the origin, four variants are analyzed. We present optimal deterministic algorithms for each variant defined on a general space, a real line, or a half-line. As a byproduct, an improved lower bound for a variant of Online TSP on a half-line is also obtained. | Optimal deterministic algorithms for some variants of Online Quota Traveling Salesman Problem |
S0377221714003804 | We present a framework to optimize the conditional value-at-risk (CVaR) of a loss distribution under uncertainty. Our model assumes that the loss distribution is dependent on the state of some system and the fraction of time spent in each state is uncertain. We develop and compare two robust-CVaR formulations that take into account this type of uncertainty. We motivate and demonstrate our approach using radiation therapy treatment planning of breast cancer, where the uncertainty is in the patient’s breathing motion and the states of the system are the phases of the patient’s breathing cycle. We use a CVaR representation of the tails of the dose distribution to the points in the body and account for uncertainty in the patient’s breathing pattern that affects the overall dose distribution. | A robust-CVaR optimization approach with application to breast cancer therapy |
S0377221714003816 | Given a mixed graph G with vertex set V, let E and A denote the sets of edges and arcs, respectively. We use Q+ and Z+ to denote the sets of positive rational numbers and positive integers, respectively. For any connected mixed graph G = (V, E ∪ A; w; l, u) with a length function w: E ∪ A → Q+ and two integer functions l, u: E ∪ A → Z+ satisfying l(e) ⩽ u(e) for each e ∈ E ∪ A, we are asked to determine a minimum length tour T traversing each e ∈ E ∪ A at least l(e) and at most u(e) times. This new constrained arc routing problem generalizes the mixed Chinese postman problem. Let n = |V| and m = |E ∪ A| denote the number of vertices and edges (including arcs), respectively. Using network flow techniques, we design a (1 + 1/l₀)-approximation algorithm running in time O(n²m³ log n) to solve this constrained arc routing problem in the case where l(e) < u(e) holds for each edge e ∈ E and l(e) ⩽ u(e) holds for each arc e ∈ A, where l₀ = min{l(e) | e ∈ E}. In addition, we present two optimal combinatorial algorithms running in times O(n³) and O(nm² log n) to solve this problem for the cases A = ∅ and E = ∅, respectively. | Approximation algorithms for solving the constrained arc routing problem in mixed graphs |
S0377221714003828 | In the pharmaceutical industry, sales representatives visit doctors to inform them of their products and encourage them to become an active prescriber. On a daily basis, pharmaceutical sales representatives must decide which doctors to visit and the order to visit them. This situation motivates a problem we more generally refer to as a stochastic orienteering problem with time windows (SOPTW), in which a time window is associated with each customer and an uncertain wait time at a customer results from a queue of competing sales representatives. We develop a priori routes with the objective of maximizing expected sales. We operationalize the sales representative’s execution of the a priori route with relevant recourse actions and derive an analytical formula to compute the expected sales from an a priori tour. We tailor a variable neighborhood search heuristic to solve the problem. We demonstrate the value of modeling uncertainty by comparing the solutions to our model to solutions of a deterministic version using expected values of the associated random variables. We also compute an empirical upper bound on our solutions by solving deterministic instances corresponding to perfect information. | A priori orienteering with time windows and stochastic wait times at customers |
S0377221714003841 | A nonstandard probabilistic setting for modeling of the risk of catastrophic events is presented. It allows random variables to take on infinitely large negative values with non-zero probability, which correspond to catastrophic consequences unmeasurable in monetary terms, e.g. loss of human lives. Thanks to this extension, the safety-first principle is proved to be consistent with traditional axioms on a preference relation, such as monotonicity, continuity, and risk aversion. Also, a robust preference relation is introduced, and an example of a monotone robust preference relation, sensitive to catastrophic events in the sense of Chichilnisky (2002), is provided. The suggested setting is demonstrated in evaluating nuclear power plant projects when the probability of a catastrophe is itself a random variable. | Risk averse decision making under catastrophic risk |
S0377221714003853 | The Traveling Umpire Problem (TUP) is a challenging combinatorial optimization problem based on scheduling umpires for Major League Baseball. The TUP aims at assigning umpire crews to the games of a fixed tournament, minimizing the travel distance of the umpires. The present paper introduces two complementary heuristic solution approaches for the TUP. A new method called enhanced iterative deepening search with leaf node improvements (IDLI) generates schedules in several stages by subsequently considering parts of the problem. The second approach is a custom iterated local search algorithm (ILS) with a step counting hill climbing acceptance criterion. IDLI generates new best solutions for many small and medium sized benchmark instances. ILS produces significant improvements for the largest benchmark instances. In addition, the article introduces a new decomposition methodology for generating lower bounds, which improves all known lower bounds for the benchmark instances. | Decomposition and local search based methods for the traveling umpire problem |
S0377221714003865 | Supply chain partnerships exhibit varying degrees of power distribution among the agents. This has implications for pricing and operational decisions in the channel and eventually influences the end customers. To understand how different power schemes affect the supply chain partners’ performance and consumer surplus, we study channel structures with a dominant manufacturer, a dominant retailer, and no single-agent dominance. Under random and price sensitive demand, channel dominance is interpreted in our setting as exerting power to determine the retail and wholesale prices as well as to transfer the inventory risk to the weaker party. We analyze all problems in a game-theory based framework and characterize the equilibrium retail price, wholesale price, and order/production quantity. We show that the manufacturer-dominated channel structure leads to the highest production quantity, the lowest retail price, and the largest expected surplus for an individual buyer; on the other hand, the entire channel profit and the total consumer surplus are highest when the retailer holds the channel dominance. While both the manufacturer and the retailer are better off when they become a power agent individually, channel dominance does not always guarantee higher share of channel profits, as we show under the manufacturer-dominated structure. Further insights are derived analytically and numerically from comparisons of the manufacturer/retailer dominance schemes with the no single-agent dominance structure and integrated channel. We also study extensions to investigate the effect of demand model and risk sharing, and we address industry settings with alternative schemes of holding cost, shortage penalty and salvage value. | Supply chain performance and consumer surplus under alternative structures of channel dominance |
S0377221714003877 | A warranty is a service contract between a manufacturer and a customer which plays a vital role in many businesses and legal transactions. In this paper, various three-level service contracts will be presented among the following three participants: a manufacturer, an agent, and a customer. In order to obtain a better result, the interaction between the aforementioned participants will be modeled using the game theory approach. Under non-cooperative and semi-cooperative games, the optimal sale price, warranty period and warranty price for the manufacturer and the optimal maintenance cost or repair cost for the agent are obtained by maximizing their profits. The satisfaction of the customer is also maximized by being able to choose one of the suggested options from the manufacturer and the agent, based on the risk parameter. Several numerical examples and managerial insights are presented and used to illustrate the models presented in this paper. | Three-level warranty service contract among manufacturer, agent and customer: A game-theoretical approach |
S0377221714003889 | This paper evaluates the resurrection event regarding defaulted firms and incorporates observable cure events in the default prediction of SMEs. Due to the additional cure-related observable data, a completely new information set is applied to predict individual default and cure events. This is a new approach in credit risk that, to our knowledge, has not been followed yet. Different firm-specific and macroeconomic default and cure-event-influencing risk drivers are identified. The significant variables allow a firm-specific default risk evaluation combined with an individual risk reducing cure probability. The identification and incorporation of cure-relevant factors in the default risk framework enable lenders to support the complete resurrection of a firm in the case of its default and hence reduce the default risk itself. The estimations are developed with a database that contains 5,930 mostly small and medium-sized German firms and a total of more than 23,000 financial statements over a time horizon from January 2002 to December 2007. Due to the significant influence on the default risk probability as well as the bank’s possible profit prospects concerning a cured firm, it seems essential for risk management to incorporate the additional cure information into credit risk evaluation. | Cure events in default prediction |
S0377221714003907 | In this paper we study the problem of designing a survivable telecommunication network with shared-protection routing. We develop a heuristic algorithm to solve this problem. Recent results in the area of global re-routing have been used to obtain very tight lower bounds for the problem. Our results indicate that in a majority of problem instances, the average gap between the heuristic solutions and the lower bounds is within 5%. Computational experience is reported on randomly generated problem instances with up to 35 nodes, 80 edges and 595 demand pairs, and also on the instances available in the SNDlib database. | Survivable network design with shared-protection routing |
S0377221714003919 | We consider cost sharing for a class of facility location games, where the strategy space of each player consists of the bases of a player-specific matroid defined on the set of resources. We assume that resources have nondecreasing load-dependent costs and player-specific delays. Our model includes the important special case of capacitated facility location problems, where players have to jointly pay for opened facilities. The goal is to design cost sharing protocols so as to minimize the resulting price of anarchy and price of stability. We investigate two classes of protocols: basic protocols guarantee the existence of at least one pure Nash equilibrium and separable protocols additionally require that the resulting cost shares only depend on the set of players on a resource. We find optimal basic and separable protocols that guarantee the price of stability/price of anarchy to grow logarithmically/linearly in the number of players. These results extend our previous results (cf. von Falkenhausen & Harks, 2013), where optimal basic and separable protocols were given for the case of symmetric matroid games without delays. We finally study the complexity of computing optimal cost shares. We derive several hardness results showing that optimal cost shares cannot be approximated in polynomial time within a logarithmic factor in the number of players, unless P = NP. For a restricted class of problems that include the above hard instances, we devise an approximation algorithm matching the logarithmic bound. | Optimal cost sharing for capacitated facility location games |
S0377221714003920 | The concept of efficiency in groups postulates that a coalition of firms has to record a smaller distance toward the aggregate technology frontier compared with the sum of individual distances. Efficiency analysis (either allocative or technical) is defined with respect to a cooperative firm game in order to provide operational distance functions, the so-called pseudo-distance functions. These pseudo-distances belong to the core interior of the allocative firm game; in other words, any given firm coalition may always improve its allocative efficiency. We prove that such a result is impossible for technical efficiency, i.e., the technical efficiency cannot increase for all possible coalitions. | Efficient firm groups: Allocative efficiency in cooperative games |
S0377221714003932 | In this paper, we explore using the trust region method to solve the logit-based stochastic user equilibrium (SUE) problem. We propose a modified trust region Newton (MTRN) algorithm for this problem. When solving the trust region SUE subproblem, we show that applying the well-known Steihaug-Toint method is inappropriate, since it may make the convergence rate of the major iteration very slow in the early stage of the computation. To overcome this drawback, a modified Steihaug-Toint method is proposed. We prove the convergence of our MTRN algorithm and show that its convergence rate is superlinear. For the implementation of our algorithm, we propose an important principle on how to select the basic route for each OD pair, and indicate that this principle is crucial for accelerating the convergence rate of the minor iteration (i.e., the trust region subproblem-solving iteration). Other implementation issues for the SUE problem are also considered, including the computation of the trial step and the strategy to ensure strictly feasible iteration points. We compare the MTRN algorithm with the Gradient Projection (GP) algorithm on the Sioux Falls network, and report results of the numerical analysis. | Exploring trust region method for the solution of logit-based stochastic user equilibrium problem |
S0377221714004111 | Transportation of a product from multiple sources to multiple destinations with minimal total transportation cost plays an important role in logistics and supply chain management. Researchers have paid considerable attention to minimizing this cost with fixed supply and demand quantities. However, these quantities may vary within certain ranges in a period due to variations in the global economy. So the concerned parties might be more interested in finding the lower and upper bounds of the minimal total costs with supplies and demands varying within their respective ranges, for proper decision making. This type of transportation problem has received the attention of only one researcher, who formulated the problem and solved it with LINGO. We demonstrate that this method does not always obtain the correct upper bound solution. We then extend this model to include the inventory costs during transportation and at destinations, as they are interrelated factors. The number of choices of supplies and demands within their respective ranges increases enormously as the number of suppliers and buyers increases. In such a situation, although the lower bound solution can be obtained methodically, determination of the upper bound solution becomes an NP-hard problem. Here we carry out theoretical analyses to develop lower and upper bound heuristic solution techniques for the extended model. A comparative study on solutions of small-size numerical problems shows promising performance of the proposed upper bound technique. Another comparative study on results of numerical problems demonstrates the effect of including the inventory costs. | A heuristic solution technique to attain the minimal total cost bounds of transporting a homogeneous product with varying demands and supplies
S0377221714004123 | Column generation is a key component of the most efficient current approaches to routing problems. Set partitioning formulations model routing problems by considering all possible routes and selecting a subset that visits all customers. These formulations often produce tight lower bounds and require column generation for their pricing step. The bounds in the resulting branch-and-price are tighter when elementary routes are considered, but this approach leads to a more difficult pricing problem. Balancing the pricing with route relaxations has become crucial for the efficiency of branch-and-price for routing problems. Recently, the ng-route relaxation was proposed as a compromise between elementary and non-elementary routes. The ng-routes are non-elementary routes with the restriction that, when leaving a customer, the route is not allowed to revisit a previously visited customer if it belongs to a dynamically computed set. The larger the size of these sets, the closer the ng-route is to an elementary route. This work presents an efficient pricing algorithm for ng-routes and extends this algorithm to elementary routes. To this end, we address the Shortest Path Problem with Resource Constraints (SPPRC) and the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). The proposed algorithm combines the Decremental State-Space Relaxation technique (DSSR) with completion bounds. We apply this algorithm to the Generalized Vehicle Routing Problem (GVRP) and the Capacitated Vehicle Routing Problem (CVRP), demonstrating that it is able to price elementary routes for instances with up to 200 customers, a result that doubles the size of the ESPPRC instances solved to date. | Efficient elementary and restricted non-elementary route pricing
S0377221714004135 | We study a generalization of the Directed Rural Postman Problem where not all arcs requiring a service have to be visited, provided that a penalty cost is paid whenever a service arc is not crossed. The problem, known as the Directed Profitable Rural Postman Problem, looks for a tour visiting the selected set of service arcs while minimizing both traveling and penalty costs. We add different valid inequalities to a known mathematical formulation of the problem and develop a branch-and-cut algorithm that introduces connectivity constraints both in a “lazy” and in a standard way. We also propose a matheuristic followed by an improvement heuristic (final refinement). The matheuristic exploits information provided by a problem relaxation to select promising service arcs, which are then used to optimally solve Directed Rural Postman Problems. The ex-post refinement tries to improve the solution provided by the matheuristic using a branch-and-cut algorithm. The method achieves quick convergence through the introduction of connectivity cuts that are not guaranteed to be valid inequalities and thus may exclude integer feasible solutions. All proposed methods have been tested on benchmark instances from the literature and compared to state-of-the-art algorithms. Results show that the heuristic methods are extremely effective, outperforming existing algorithms. Moreover, our exact method is able to close, in less than one hour, all 22 benchmark instances that had not previously been solved to optimality. | New results for the Directed Profitable Rural Postman Problem
S0377221714004147 | Economic activity produces not only desirable outputs but also undesirable outputs. Undesirable outputs are usually omitted from efficiency assessments (i.e., applications of Data Envelopment Analysis), which fails to express the true production process. The directional distance function model has been used for handling desirable and undesirable outputs asymmetrically in the assessment process. In the present paper, we apply a generalized directional distance function to measure the efficiency of the health systems of 171 countries. We incorporate both desirable and undesirable outputs into the efficiency assessment without transforming the latter type of outputs into inputs or into their inverse form, as is done in most of the extant studies that deal with the measurement of health efficiency. The methodology that we apply introduces a modified definition of the efficiency score which yields results consistent with those obtained from radial DEA models. In addition, our results are independent of the length of the direction vector. | Estimating the technical efficiency of health care systems: A cross-country comparison using the directional distance function
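For readers new to directional distance functions, the sketch below sets up one common constant-returns-to-scale variant as a linear program: inputs and undesirable outputs are contracted and desirable outputs expanded along a direction vector g = (gx, gy, gb). This is the textbook formulation, not the paper's generalized version; the data layout and function name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, B, o, gx, gy, gb):
    """Directional distance score for DMU o (a sketch).
    X: (m, n) inputs, Y: (s, n) desirable outputs, B: (t, n) undesirable
    outputs, with one column per DMU; (gx, gy, gb) is the direction vector."""
    n = X.shape[1]
    c = np.zeros(n + 1); c[-1] = -1.0            # maximize beta
    # inputs:  X lam + beta*gx <= x_o
    A_in = np.hstack([X, gx.reshape(-1, 1)])
    # desirable outputs: -Y lam + beta*gy <= -y_o  (i.e. Y lam - beta*gy >= y_o)
    A_out = np.hstack([-Y, gy.reshape(-1, 1)])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[:, o], -Y[:, o]])
    # undesirable outputs handled with equality: B lam + beta*gb = b_o
    A_eq = np.hstack([B, gb.reshape(-1, 1)])
    b_eq = B[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]   # beta = 0: on the frontier; beta > 0: inefficiency
```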
S0377221714004159 | Multiobjective shortest path problems are computationally harder than single objective ones. In particular, execution time is an important limiting factor in exact multiobjective search algorithms. This paper explores the possibility of improving search performance in those cases where the interesting portion of the Pareto front can be initially bounded. We introduce a new exact label-setting algorithm that returns the subset of Pareto optimal paths that satisfy a set of lexicographic goals, or the subset that minimizes deviation from goals if these cannot be fully satisfied. Formal proofs on the correctness of the algorithm are provided. We also show that the algorithm always explores a subset of the labels explored by a full Pareto search. The algorithm is evaluated over a set of problems with three objectives, showing a performance improvement of up to several orders of magnitude as goals become more restrictive. | Multiobjective shortest path problems with lexicographic goal-based preferences |
S0377221714004160 | We consider a master surgery scheduling (MSS) problem in which block operating room (OR) time is assigned to different surgical specialties. While many MSS approaches in the literature consider only the impact of the MSS on the operating theater and operating staff, we enlarge the scope to downstream resources, such as the intensive care unit (ICU) and the general wards required by patients once they leave the OR. We first propose a stochastic analytical approach, which calculates for a given MSS the exact demand distribution for the downstream resources. We then discuss measures to define downstream costs resulting from the MSS and propose exact and heuristic algorithms to minimize these costs. | Master surgery scheduling with consideration of multiple downstream units
S0377221714004172 | Traditionally, two variants of the L-shaped method based on Benders’ decomposition principle are used to solve two-stage stochastic programming problems: the aggregate and the disaggregate version. In this study we report our experiments with a special convex programming method applied to the aggregate master problem. The convex programming method is of the type that uses an oracle with on-demand accuracy. We use a special form which, when applied to two-stage stochastic programming problems, is shown to integrate the advantages of the traditional variants while avoiding their disadvantages. On a set of 105 test problems, we compare and analyze parallel implementations of regularized and unregularized versions of the algorithms. The results indicate that solution times are significantly shortened by applying the concept of on-demand accuracy. | Applying oracles of on-demand accuracy in two-stage stochastic programming – A computational study |
S0377221714004184 | In this paper we present the Selective Graph Coloring Problem, a generalization of the standard graph coloring problem as well as several of its possible applications. Given a graph with a partition of its vertex set into several clusters, we want to select one vertex per cluster such that the chromatic number of the subgraph induced by the selected vertices is minimum. This problem appeared in the literature under different names for specific models and its complexity has recently been studied for different classes of graphs. Here, we describe different models – some already discussed in previous papers and some new ones – in very different contexts under a unified framework based on this graph problem. We point out similarities between these models, offering a new approach to solve them, and show some generic situations where the selective graph coloring problem may be used. We focus on specific graph classes motivated by each model, and we briefly discuss the complexity of the selective graph coloring problem in each one of these graph classes and point out interesting future research directions. | On some applications of the selective graph coloring problem |
S0377221714004196 | We investigate an automobile supply chain where a manufacturer and a retailer serve a market with a fuel-efficient automobile under a scrappage program run by the government. The program awards a subsidy to each consumer who trades in his or her used automobile for a new fuel-efficient one, if the manufacturer’s suggested retail price (MSRP) of the new automobile does not exceed a cutoff level. We derive the conditions assuring that the manufacturer has an incentive to qualify for the program, and find that when the cutoff level is low, the manufacturer may be unwilling to qualify for the program even if the subsidy is high. We also show that when the manufacturer qualifies for the program, increasing the MSRP cutoff level would raise the manufacturer’s expected profit but may decrease expected sales. A moderate cutoff level can maximize the effectiveness of the program in stimulating the sales of fuel-efficient automobiles, whereas a sufficiently high cutoff level can result in the largest profit for the manufacturer. The retailer’s profit always increases when the manufacturer chooses to qualify for the program. Furthermore, we compute the government’s optimal MSRP cutoff level and subsidy for a given sales target, and find that as the program budget increases, the government should raise the subsidy but reduce the MSRP cutoff level to maximize sales. | Qualifying for a government’s scrappage program to stimulate consumers’ trade-in transactions? Analysis of an automobile supply chain involving a manufacturer and a retailer
S0377221714004202 | Based on the minimal reduction strategy, Yang et al. (2011) developed a fixed-sum output data envelopment analysis (FSODEA) approach to evaluate the performance of decision-making units (DMUs) with fixed-sum outputs. However, under such a strategy, all DMUs compete over fixed-sum outputs with “no memory”, which results in evaluations based on differing efficient frontiers. To address this problem, in this study we propose an equilibrium efficiency frontier data envelopment analysis (EEFDEA) approach, by which all DMUs with fixed-sum outputs can be evaluated based on a common platform (or equilibrium efficient frontier). The proposed approach can be divided into two stages. Stage 1 constructs a common evaluation platform via two strategies: an extended minimal adjustment strategy and an equilibrium competition strategy. The former ensures that originally efficient DMUs remain efficient, guaranteeing the existence of a common evaluation platform. The latter makes all DMUs reach a common equilibrium efficient frontier. Then, based on the common equilibrium efficient frontier, Stage 2 evaluates all DMUs with their original inputs and outputs. Finally, we illustrate the proposed approach with two numerical examples. | An equilibrium efficiency frontier data envelopment analysis approach for evaluating decision-making units with fixed-sum outputs
S0377221714004214 | In this paper, we address the problem of parallel batching of jobs on identical machines to minimize makespan. The problem is motivated from the washing step of hospital sterilization services where jobs have different sizes, different release dates and equal processing times. Machines can process more than one job at the same time as long as the total size of jobs in a batch does not exceed the machine capacity. We present a branch and bound based heuristic method and compare it to a linear model and two other heuristics from the literature. Computational experiments show that our method can find high quality solutions within short computation time. | A branch and bound based heuristic for makespan minimization of washing operations in hospital sterilization services |
S0377221714004226 | Decision makers benefit from the utilization of decision-support models in several applications. Obtaining managerial insights is essential to better inform the decision-process. This work offers an in-depth investigation into the structural properties of decision-support models. We show that the input–output mapping in influence diagrams, decision trees and decision networks is piecewise multilinear. The conditions under which sensitivity information cannot be extracted through differentiation are examined in detail. By complementing high-order derivatives with finite change sensitivity indices, we obtain a systematic approach that allows analysts to gain a wide range of managerial insights. A well-known case study in the medical sector illustrates the findings. | Decision-network polynomials and the sensitivity of decision-support models |
S0377221714004238 | Our paper reports on the use of data envelopment analysis (DEA) for the assessment of performance of secondary schools in Malaysia during the implementation of the policy of teaching and learning mathematics and science subjects in the English language (PPSMI). The novelty of our application is that it makes use of the hybrid returns-to-scale (HRS) DEA model. This combines the assumption of constant returns to scale with respect to quantity inputs and outputs (teaching provision and students) and variable returns to scale (VRS) with respect to quality factors (attainment levels on entry and exit) and socio-economic status of student families. We argue that the HRS model is a better-informed model than the conventional VRS model in the described application. Because the HRS technology is larger than the VRS technology, the new model provides a tangibly better discrimination on efficiency than could be obtained by the VRS model. To assess the productivity change of secondary schools over the years surrounding the introduction of the PPSMI policy, we adapt the Malmquist productivity index and its decomposition to the case of HRS model. | Combining the assumptions of variable and constant returns to scale in the efficiency evaluation of secondary schools |
S0377221714004251 | Several production environments require the simultaneous planning of sizing and scheduling of sequences of production lots. The integration of sequencing decisions in lotsizing and scheduling problems has received increased attention from the research community due to its inherent applicability to real-world problems. A two-dimensional classification framework is proposed to survey and classify the main modeling approaches for integrating sequencing decisions in discrete time lotsizing and scheduling models. The Asymmetric Traveling Salesman Problem can be an important source of ideas for developing more efficient models and methods for this problem. Following this research line, we also present a new formulation for the problem using commodity flow-based subtour elimination constraints. Computational experiments are conducted to assess the performance of the various models, in terms of running times and upper bounds, when solving real-world size instances. | Modeling lotsizing and scheduling problems with sequence dependent setups
S0377221714004263 | Decomposition-based algorithms perform well when a suitable set of weights is provided; however, determining a good set of weights a priori for real-world problems is usually not straightforward due to a lack of knowledge about the geometry of the problem. This study proposes a novel algorithm called the preference-inspired co-evolutionary algorithm using weights (PICEA-w), in which weights are co-evolved with candidate solutions during the search process. The co-evolution enables suitable weights to be constructed adaptively during the optimisation process, thus guiding candidate solutions towards the Pareto optimal front effectively. The benefits of co-evolution are demonstrated by comparing PICEA-w against other leading decomposition-based algorithms that use random, evenly distributed and adaptive weights on a set of problems encompassing the range of problem geometries likely to be seen in practice, including simultaneous optimisation of up to seven conflicting objectives. Experimental results show that PICEA-w outperforms the comparison algorithms for most of the problems and is less sensitive to the problem geometry. | Preference-inspired co-evolutionary algorithms using weight vectors
S0377221714004275 | This article studies the influence of risk on farms’ technical efficiency levels. The analysis extends the order-m efficiency scores approach proposed by Daraio and Simar (2005) to the state-contingent framework. The empirical application focuses on cross section data of Catalan specialized crop farms from the year 2011. Results suggest that accounting for production risks increases the technical performance. A 10% increase in output risk will result in a 2.5% increase in average firm technical performance. | Measuring the impacts of production risk on technical efficiency: A state-contingent conditional order-m approach |
S0377221714004445 | Importance measures have been widely studied and applied in reliability and safety engineering. This paper presents a general formulation of moment-independent importance measures, unifying several commonly discussed importance measures on the basis of the Minkowski distance (MD). Moment-independent importance measures can be categorized into three classes of MD importance measures: the probability density function-based MD importance measure, the cumulative distribution function-based MD importance measure, and the quantile-based MD importance measure. Some properties of the proposed MD importance measures are investigated. Several new importance measures are also derived as special cases of the generalized MD importance measures and illustrated with some case studies. | Generalized moment-independent importance measures based on Minkowski distance
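As a concrete instance of the family described above, the following Monte Carlo sketch estimates a cumulative-distribution-function-based Minkowski-distance importance measure of order k: the MD between the unconditional output CDF and the CDF conditional on fixing input i, averaged over the fixed values. The model/sampler interface, the grid-based integral (which truncates the tails), and all parameter values are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def cdf_md_importance(model, sampler, i, k=2, n_outer=200, n_inner=2000,
                      grid_size=200):
    """CDF-based Minkowski-distance importance of input i (Monte Carlo sketch).
    model(X) -> Y for an (n, d) input array; sampler(n) -> (n, d) sample."""
    X = sampler(n_inner)
    y_all = model(X)
    grid = np.linspace(y_all.min(), y_all.max(), grid_size)
    dy = grid[1] - grid[0]
    F = np.mean(y_all[:, None] <= grid[None, :], axis=0)   # unconditional CDF
    dists = []
    for _ in range(n_outer):
        xi = sampler(1)[0, i]              # fix X_i at a random value
        Xc = sampler(n_inner)
        Xc[:, i] = xi
        Fc = np.mean(model(Xc)[:, None] <= grid[None, :], axis=0)
        # Minkowski distance of order k between the two CDFs on the grid
        dists.append((np.sum(np.abs(F - Fc) ** k) * dy) ** (1.0 / k))
    return np.mean(dists)

# Illustrative use: X1 should dominate X2 for Y = X1 + 0.1*X2
rng = np.random.default_rng(7)
model = lambda X: X[:, 0] + 0.1 * X[:, 1]
sampler = lambda n: rng.normal(size=(n, 2))
print(cdf_md_importance(model, sampler, 0), cdf_md_importance(model, sampler, 1))
```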
S0377221714004457 | We present a new method called UTA^GMS-INT for ranking a finite set of alternatives evaluated on multiple criteria. It belongs to the family of Robust Ordinal Regression (ROR) methods which build a set of preference models compatible with preference information elicited by the Decision Maker (DM). The preference model used by UTA^GMS-INT is a general additive value function augmented by two types of components corresponding to “bonus” or “penalty” values for positively or negatively interacting pairs of criteria, respectively. When calculating the value of a particular alternative, a bonus is added to the additive component of the value function if a given pair of criteria is in a positive synergy for the performances of this alternative on the two criteria. Similarly, a penalty is subtracted from the additive component of the value function if a given pair of criteria is in a negative synergy for the performances of the considered alternative on the two criteria. The preference information elicited by the DM is composed of pairwise comparisons of some reference alternatives, as well as of comparisons of some pairs of reference alternatives with respect to intensity of preference, either comprehensively or on a particular criterion. In UTA^GMS-INT, ROR starts with the identification of pairs of interacting criteria for given preference information by solving a mixed-integer linear program. Once the interacting pairs are validated by the DM, ROR continues calculations with the whole set of compatible value functions handling the interacting criteria, to get necessary and possible preference relations in the considered set of alternatives. A single representative value function can be calculated to attribute specific scores to alternatives; it also gives values to bonuses and penalties. UTA^GMS-INT handles quite general interactions among criteria and provides an interesting alternative to the Choquet integral. | Robust ordinal regression for value functions handling interacting criteria
S0377221714004469 | We consider the ranking of decision alternatives in decision analysis problems under uncertainty, under very weak assumptions about the type of utility function and information about the probabilities of the states of nature. Namely, the following two assumptions are required for the suggested method: the utility function is in the class of increasing continuous functions, and the probabilities of the states of nature are rank-ordered. We develop a simple analytical method for the partial ranking of decision alternatives under the stated assumptions. This method does not require solving optimization programs and is free of the rounding errors. | Decision making under uncertainty with unknown utility function and rank-ordered probabilities |
S0377221714004470 | The distributed permutation flowshop problem has recently been proposed as a generalization of the regular flowshop setting in which more than one factory is available to process jobs. Distributed manufacturing is a common situation for large enterprises that compete in a globalized market. The problem has two dimensions: assigning jobs to factories and scheduling the jobs assigned to each factory. Despite being recently introduced, this interesting scheduling problem has attracted attention, and several heuristic and metaheuristic methods have been proposed in the literature. In this paper, we present a scatter search (SS) method for this problem to optimize makespan. SS has seldom been explored for flowshop settings. In the proposed algorithm, we employ advanced techniques like a reference set made up of complete and partial solutions, along with other features like restarts and local search. A comprehensive computational campaign including 10 existing algorithms, together with statistical analyses, shows that the proposed scatter search algorithm produces better results than existing algorithms by a significant margin. Moreover, all 720 known best solutions for this problem are improved. | A scatter search algorithm for the distributed permutation flowshop scheduling problem
S0377221714004482 | The multiple-choice multidimensional knapsack problem (MMKP) is a well-known NP-hard combinatorial optimization problem with a number of important applications. In this paper, we present a “reduce and solve” heuristic approach which combines problem reduction techniques with an Integer Linear Programming (ILP) solver (CPLEX). The key ingredient of the proposed approach is a set of group fixing and variable fixing rules. These fixing rules rely mainly on information from the linear relaxation of the given problem and aim to generate a reduced critical subproblem to be solved by the ILP solver. Additional strategies are used to explore the space of the reduced problems. Extensive experimental studies over two sets of 37 MMKP benchmark instances from the literature show that our approach competes favorably with the most recent state-of-the-art algorithms. In particular, for the set of 27 conventional benchmarks, the proposed approach finds an improved best lower bound for 11 instances and, as a by-product, improves all the previous best upper bounds. For the 10 additional instances with irregular structures, the method improves 7 best known results. | A “reduce and solve” approach for the multiple-choice multidimensional knapsack problem
S0377221714004494 | PROMETHEE methods are widely used in Multiple Criteria Decision Aiding (MCDA) to deal with real-world decision-making problems. In this paper, we propose to apply Stochastic Multicriteria Acceptability Analysis (SMAA) to the family of PROMETHEE methods in order to explore the whole set of parameters compatible with some preference information provided by the Decision Maker (DM). The application of the presented methodology is illustrated with a didactic example. | The SMAA-PROMETHEE method
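To make the combination concrete, the sketch below first computes standard PROMETHEE II net flows (with linear, type V preference functions, assuming thresholds q < p on each criterion) and then runs a basic SMAA loop that samples weight vectors uniformly from the simplex and accumulates rank-acceptability indices. This is a generic illustration of the SMAA-PROMETHEE idea, not a reproduction of the paper's treatment of the full parameter space.

```python
import numpy as np

def promethee_ii(A, w, q, p):
    """Net outranking flows of PROMETHEE II with linear (type V) preference
    functions; assumes every criterion is to be maximized and q[j] < p[j]."""
    n = A.shape[0]
    pi = np.zeros((n, n))                        # aggregated preference matrix
    for j in range(A.shape[1]):
        d = A[:, None, j] - A[None, :, j]        # pairwise performance gaps
        pi += w[j] * np.clip((d - q[j]) / (p[j] - q[j]), 0.0, 1.0)
    phi_plus = pi.sum(axis=1) / (n - 1)          # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)         # entering flow
    return phi_plus - phi_minus                  # net flow (higher is better)

def smaa_promethee(A, q, p, n_samples=10000, seed=0):
    """Rank-acceptability indices: the share of weight vectors, sampled
    uniformly from the simplex, under which each alternative gets each rank."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    acc = np.zeros((n, n))
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(A.shape[1]))   # uniform over the simplex
        order = np.argsort(-promethee_ii(A, w, q, p))
        for rank, alt in enumerate(order):
            acc[alt, rank] += 1
    return acc / n_samples
```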
S0377221714004500 | In service systems, in order to balance the server’s idle times and the customers’ waiting times, one may fix the arrival times of the customers beforehand in an appointment schedule. We propose a procedure for determining appointment schedules in such a D/G/1-type of system by sequentially minimizing the per-customer expected loss. Our approach provides schedules for any convex loss function; for the practically relevant cases of the quadratic and absolute value loss functions appealing closed-form results are derived. Importantly, our approach does not impose any conditions on the service time distribution; it is even allowed that the customers’ service times have different distributions. A next question that we address concerns the order of the customers. We develop a criterion that yields the optimal order in case the service time distributions belong to a scale family, such as the exponential family. The customers should be scheduled then in non-decreasing order of their scale parameter. While the optimal schedule can be computed numerically under quite general circumstances, in steady-state it can be computed in closed form for exponentially distributed service times under the quadratic and absolute value loss function. Our findings are illustrated by a number of numerical examples; these also address how fast the transient schedule converges to the corresponding steady-state schedule. | Optimized appointment scheduling |
S0377221714004512 | In this paper, newsvendor problems for innovative products are analyzed. Because the product is new, no relevant historical data is available for statistical demand analysis. Instead of using the probability distribution, the possibility distribution is utilized to characterize the uncertainty of the demand. We consider products whose life cycles are expected to be smaller than the procurement lead times. Determining optimal order quantities of such products is a typical one-shot decision problem for a retailer. Therefore, newsvendor models for innovative products are proposed based on the one-shot decision theory (OSDT). The main contributions of this research are as follows: the general solutions of active, passive, apprehensive and daring focus points and optimal alternatives are proposed and the existence theorem is established in the one-shot decision theory; a simple and effective approach for identifying the possibility distribution is developed; newsvendor models with four types of focus points are built; managerial insights into the behaviors of different types of retailers are gained by the theoretical analysis; the proposed models are scenario-based decision models which provide a fundamental alternative to analyze newsvendor problems for innovative products. | Newsvendor models for innovative products with one-shot decision theory |
S0377221714004524 | Line-integrated supermarkets constitute a novel in-house parts logistics concept for feeding mixed-model assembly lines. In this context, supermarkets are decentralized logistics areas located directly in each station. Here, parts are withdrawn from their containers by a dedicated logistics worker and sorted just-in-sequence (JIS) into a JIS-bin. From this bin, assembly workers fetch the parts required by the current workpiece and mount them during the respective production cycle. This paper treats the scheduling of the part supply processes within line-integrated supermarkets. The scheduling problem for refilling the JIS-bins is formalized and a complexity analysis is provided. Furthermore, a heuristic decomposition approach is presented and important managerial aspects are investigated. | Scheduling the part supply of mixed-model assembly lines in line-integrated supermarkets |
S0377221714004536 | The Ship Stowage Planning Problem is the problem of determining the optimal position of containers to be stowed in a containership. In this paper, we address the problem by considering the objectives of terminal management, which are mainly related to yard and transport operations. We propose a Binary Integer Program and a two-step heuristic algorithm. Extensive computational experience shows the efficiency and effectiveness of our approach. A classification scheme for stowage planning problems is also provided. | The Terminal-Oriented Ship Stowage Planning Problem
S0377221714004548 | We present a differential game to study how companies can simultaneously license their innovations to other firms when launching a new product. The licensee may cannibalize the licensor’s sales, although this can be compensated by gains from royalties. Nonetheless, patent royalties are generally so low that licensing is not an attractive strategy. In this paper, we consider the role of licensing in speeding up product diffusion. Word of mouth from the licensee’s customers and the licensee’s advertising indirectly push forward the sales of the licensing company, accelerating new product diffusion. We find evidence that licensing can be a potentially profitable strategy. However, we also find that weak Intellectual Property Rights (IPR) protection can easily diminish the financial attractiveness of licensing. | Licensing radical product innovations to speed up the diffusion
S0377221714004561 | In spite of its tremendous economic significance, the problem of sales staff schedule optimization for retail stores has received relatively scant attention. Current approaches typically attempt to minimize payroll costs by closely fitting a staffing curve derived from exogenous sales forecasts, oblivious to the ability of additional staff to (sometimes) positively impact sales. In contrast, this paper frames the retail scheduling problem in terms of operating profit maximization, explicitly recognizing the dual role of sales employees as sources of revenues as well as generators of operating costs. We introduce a flexible stochastic model of retail store sales, estimated from store-specific historical data, that can account for the impact of all known sales drivers, including the number of scheduled staff, and provide an accurate sales forecast at a high intra-day resolution. We also present solution techniques based on mixed-integer (MIP) and constraint programming (CP) to efficiently solve the complex mixed integer non-linear scheduling (MINLP) problem with a profit-maximization objective. The proposed approach allows solving full weekly schedules to optimality, or near-optimality with a very small gap. On a case-study with a medium-sized retail chain, this integrated forecasting–scheduling methodology yields significant projected net profit increases on the order of 2–3% compared to baseline schedules. | Retail store scheduling for profit |
S0377221714004573 | In this paper, we introduce and study a generalization of the degree constrained minimum spanning tree problem in which we may install one of several available transmission systems (each with a different cost value) on each edge. The degree of the endnodes of each edge depends on the system installed on the edge. We also discuss a particular case that arises in the design of wireless mesh networks (in this variant the degree of the endnodes of each edge depends on the transmission system installed on it as well as on the length of the edge). We propose three classes of models using different sets of variables and compare, from a theoretical perspective as well as from a computational point of view, the models and the corresponding linear programming relaxations. The computational results show that some of the proposed models are able to solve to optimality instances with 100 nodes and different scenarios. | Spanning trees with variable degree bounds
S0377221714004585 | We consider a supply chain comprising a manufacturer and a retailer. The manufacturer supplies a product to the retailer, while the retailer sells the product bundled with after-sales service to consumers in a fully competitive market. The sales volume is affected by the retailer’s service-level commitment. The retailer can build service capacity in-house at a deterministic price before service demand is realized, or buy the service from an outsourcing market at an uncertain price after service demand realization. We find that the outsourcing market encourages the retailer to make a higher level of service commitment, while prompting the manufacturer to reduce the wholesale price, resulting in more demand realization. We analyze how the expected cost of the service in the outsourcing market and the retailer’s risk attitude affect the decisions of both parties. We derive the conditions under which the retailer is willing to build service capacity in-house and under which it will buy the service from the outsourcing market. Moreover, we find that the manufacturer’s sharing with the retailer the cost to build service capacity improves the profits of both parties. | Make-or-buy service capacity decision in a supply chain providing after-sales service |
S0377221714004597 | This paper addresses the resource-constrained project scheduling problem with flexible resource profiles (FRCPSP). Such a problem arises in many real-world applications in which the resource usage of an activity is not necessarily constant but can be adjusted from period to period. The FRCPSP is, therefore, to simultaneously determine the start time, the resource profile, and the duration of each activity in order to minimize the makespan, subject to precedence relationships, limited availability of multiple resources, and restrictions on resource profiles. We propose four discrete-time model formulations and compare their model efficiency in terms of solution quality and computational times. Both preprocessing and priority-based heuristic methods are also applied to compute both upper and lower bounds of the makespan. Our comparative results show significant dominance of one of the models, the so-called “variable-intensity-based” model, in both solution quality and runtimes. | MIP models for resource-constrained project scheduling with flexible resource profiles
S0377221714004603 | Engineering and operations management decisions have become increasingly complex as a result of recent advances in information technology. The increased ability to access and communicate information has resulted in expanded system domains consisting of multiple agents, each exhibiting autonomous decision-making capabilities, with potentially complex logistics. Challenges regarding the management of these systems include heterogeneous utility drivers and risk preferences among the agents, and various sources of system uncertainty. This paper presents a distributed options-based model that manages the impact of multiple forms of uncertainty from a multi-agent perspective, while adapting as both the stream of information and the capabilities of the agents become better known. Because the actions of decision makers may have an impact on the evolution of underlying sources of uncertainty, this endogenous relationship is modeled and a solution approach is developed that converges to an equilibrium system state and improves the performance of the agents and the system. The final result is a distributed options-based decision-making policy that both responds to and controls the evolution of uncertainty in large-scale engineering and operations management domains. | An options-based approach to coordinating distributed decision systems
S0377221714004615 | Recent press has highlighted the environmental benefits associated with online shopping, such as emissions savings from individual drivers, economies of scale in package delivery, and decreased inventories. We formulate a dual channel model for a retailer who has access to both online and traditional market outlets to analyze the impact of customer environmental sensitivity on its supply. In particular, we analyze stocking decisions for each channel incorporating price dependent demand, customer preference/utility for online channels, and channel related costs. We compare and contrast results from both deterministic and stochastic models, and utilize numerical examples to illustrate the implications of industry specific factors on these decisions. Finally, we compare and contrast the findings for disparate industries, such as electronics, books and groceries. | Environmental implications for online retailing |
S0377221714004627 | Quality of decisions in inventory management models depends on the accuracy of parameter estimates used for decision making. In many situations, error in decision making is unavoidable. In such cases, sensitivity analysis is necessary for better implementation of the model. Though the newsvendor model is one of the most researched inventory models, little is known about its robustness. In this paper, we perform sensitivity analysis of the classical newsvendor model. Conditions for symmetry/skewness of cost deviation (i.e., deviation of expected demand–supply mismatch cost from its minimum) have been identified. These conditions are closely linked with symmetry/skewness of the demand density function. A lower bound of cost deviation is established for symmetric unimodal demand distributions. Based on demonstrations of the lower bound, we found the newsvendor model to be sensitive to sub-optimal ordering decisions, more sensitive than the economic order quantity model. Order quantity deviation (i.e., deviation of order quantity from its optimum) is explored briefly. We found the magnitude of order quantity deviation to be comparable with that of parameter estimation error. Mean demand is identified as the most influential parameter in deciding order quantity deviation. | Sensitivity analysis of the newsvendor model |
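The kind of sensitivity experiment described above is easy to reproduce numerically. The sketch below estimates the expected mismatch cost by simulation, computes the critical-fractile optimum for a normal demand, and reports the cost deviation for a few sub-optimal order quantities; the cost parameters and demand distribution are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def newsvendor_cost(Q, dist, cu, co, n=200000, rng=np.random.default_rng(1)):
    """Expected demand-supply mismatch cost for order quantity Q:
    cu = underage cost per unit short, co = overage cost per unit left over."""
    D = dist.rvs(size=n, random_state=rng)
    return np.mean(cu * np.maximum(D - Q, 0) + co * np.maximum(Q - D, 0))

cu, co = 4.0, 1.0
dist = stats.norm(loc=100, scale=20)
Q_star = dist.ppf(cu / (cu + co))          # critical-fractile optimum
C_star = newsvendor_cost(Q_star, dist, cu, co)
for delta in (-10, -5, 0, 5, 10):          # sub-optimal ordering decisions
    Q = Q_star + delta
    C = newsvendor_cost(Q, dist, cu, co)
    print(f"Q = {Q:7.2f}  cost deviation = {C - C_star:6.3f}")
```

With a symmetric (normal) demand density, the printed cost deviations are roughly symmetric around the optimum, matching the symmetry conditions the abstract refers to.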
S0377221714004639 | We propose a new distributed heuristic for approximating the Pareto set of bi-objective optimization problems. Our approach is at the crossroads of parallel cooperative computation, objective space decomposition, and adaptive search. Given a number of computing nodes, we self-coordinate them locally, in order to cooperatively search different regions of the Pareto front. This offers a trade-off between a fully independent approach, where each node would operate independently of the others, and a fully centralized approach, where a global knowledge of the entire population is required at every step. More specifically, the population of solutions is structured and mapped into computing nodes. As local information, every node uses only the positions of its neighbors in the objective space and evolves its local solution based on what we term a ‘localized fitness function’. This has the effect of making the distributed search evolve, over all nodes, to a high quality approximation set, with minimum communications. We deploy our distributed algorithm using a computer cluster of hundreds of cores and study its properties and performance on ρMNK-landscapes. Through extensive large-scale experiments, our approach is shown to be very effective in terms of approximation quality, computational time and scalability. | Distributed localized bi-objective search
S0377221714004640 | We consider a clique relaxation model based on the concept of relative vertex connectivity. It extends the classical definition of a k-vertex-connected subgraph by requiring that the minimum number of vertices whose removal results in a disconnected (or a trivial) graph is proportional to the size of this subgraph, rather than fixed at k. Consequently, we further generalize the proposed approach to require vertex-connectivity of a subgraph to be some function f of its size. We discuss connections of the proposed models with other clique relaxation ideas from the literature and demonstrate that our generalized framework, referred to as f-vertex-connectivity, encompasses other known vertex-connectivity-based models, such as s-bundle and k-block. We study related computational complexity issues and show that finding maximum subgraphs with relatively large vertex connectivity is NP-hard. An interesting special case that extends the R-robust 2-club model recently introduced in the literature, is also considered. In terms of solution techniques, we first develop general linear mixed integer programming (MIP) formulations. Then we describe an effective exact algorithm that iteratively solves a series of simpler MIPs, along with some enhancements, in order to obtain an optimal solution for the original problem. Finally, we perform computational experiments on several classes of random and real-life networks to demonstrate performance of the developed solution approaches and illustrate some properties of the proposed clique relaxation models. | Finding maximum subgraphs with relatively large vertex connectivity |
S0377221714004810 | Four NP-hard optimization problems on graphs are studied: The vertex separator problem, the edge separator problem, the maximum clique problem, and the maximum independent set problem. We show that the vertex separator problem is equivalent to a continuous bilinear quadratic program. This continuous formulation is compared to known continuous quadratic programming formulations for the edge separator problem, the maximum clique problem, and the maximum independent set problem. All of these formulations, when expressed as maximization problems, are shown to follow from the convexity properties of the objective function along the edges of the feasible set. An algorithm is given which exploits the continuous formulation of the vertex separator problem to quickly compute approximate separators. Computational results are given. | Continuous quadratic programming formulations of optimization problems on graphs |
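The best-known continuous quadratic programming formulation for the maximum clique problem, one of the formulations the abstract compares against, is the Motzkin-Straus program: maximizing x^T A x over the standard simplex yields 1 - 1/omega(G), where omega(G) is the clique number. The sketch below approximates a maximizer with replicator dynamics; since the dynamics reach only a local maximizer, the returned value is a lower estimate of omega(G). The example graph is illustrative.

```python
import numpy as np

def motzkin_straus_clique_bound(A, iters=2000, seed=0):
    """Estimate the clique number via the Motzkin-Straus program
    max x^T A x over the simplex, whose global optimum is 1 - 1/omega(G).
    Replicator dynamics converges to a (local) maximizer."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.random(n); x /= x.sum()             # interior starting point
    for _ in range(iters):
        Ax = A @ x
        val = x @ Ax
        if val <= 0:
            break
        x = x * Ax / val                        # update stays on the simplex
    val = x @ (A @ x)
    return 1.0 / (1.0 - val)                    # local (lower) clique estimate

# 5-cycle with chord (0,2), so {0,1,2} is a triangle and omega = 3
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1.0
print(motzkin_straus_clique_bound(A))           # typically ~3.0 here
```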
S0377221714004822 | We study a selective and periodic inventory routing problem (SPIRP) and develop an Adaptive Large Neighborhood Search (ALNS) algorithm for its solution. The problem concerns a biodiesel production facility collecting used vegetable oil from sources, such as restaurants, catering companies and hotels that produce waste vegetable oil in considerable amounts. The facility reuses the collected waste oil as raw material to produce biodiesel. It has to meet certain raw material requirements either from daily collection, or from its inventory, or by purchasing virgin oil. SPIRP involves decisions about which of the present source nodes to include in the collection program, and which periodic (weekly) routing schedule to repeat over an infinite planning horizon. The objective is to minimize the total collection, inventory and purchasing costs while meeting the raw material requirements and operational constraints. A single-commodity flow-based mixed integer linear programming (MILP) model was proposed for this problem in an earlier study. The model was solved with 25 source nodes on a 7-day cyclic planning horizon. In order to tackle larger instances, we develop an ALNS algorithm that is based on a rich neighborhood structure with 11 distinct moves tailored to this problem. We demonstrate the performance of the ALNS, and compare it with the MILP model on test instances containing up to 100 source nodes. | An adaptive large neighborhood search algorithm for a selective and periodic inventory routing problem |
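The ALNS described above is built from eleven problem-specific moves, but the surrounding machinery is generic. Below is a minimal skeleton of that machinery: roulette-wheel selection of destroy/repair operators, exponentially smoothed adaptive weights, and a simulated-annealing acceptance criterion. Operator signatures, reward scores and cooling parameters are illustrative assumptions, not the paper's settings.

```python
import math, random

def alns(initial, destroy_ops, repair_ops, cost, n_iter=5000, seed=0,
         reaction=0.2, rewards=(5.0, 2.0, 0.5), T0=100.0, cooling=0.999):
    """Generic ALNS skeleton for minimization. destroy_ops[i](sol, rng) returns
    a partially destroyed solution; repair_ops[j](partial, rng) rebuilds it."""
    rng = random.Random(seed)
    w_d = [1.0] * len(destroy_ops)     # adaptive weights, destroy operators
    w_r = [1.0] * len(repair_ops)      # adaptive weights, repair operators
    current = best = initial
    c_cur = c_best = cost(initial)
    T = T0
    for _ in range(n_iter):
        i = rng.choices(range(len(destroy_ops)), weights=w_d)[0]
        j = rng.choices(range(len(repair_ops)), weights=w_r)[0]
        candidate = repair_ops[j](destroy_ops[i](current, rng), rng)
        c_new = cost(candidate)
        if c_new < c_best:                               # new overall best
            best, c_best = candidate, c_new
            current, c_cur, score = candidate, c_new, rewards[0]
        elif c_new < c_cur:                              # improves current
            current, c_cur, score = candidate, c_new, rewards[1]
        elif rng.random() < math.exp((c_cur - c_new) / T):
            current, c_cur, score = candidate, c_new, rewards[2]
        else:
            score = 0.0                                  # rejected
        # smoothed weight update; the floor keeps every operator selectable
        w_d[i] = max(0.01, (1 - reaction) * w_d[i] + reaction * score)
        w_r[j] = max(0.01, (1 - reaction) * w_r[j] + reaction * score)
        T *= cooling
    return best, c_best
```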
S0377221714004834 | In marketing research, the measurement of individual preferences and the assessment of utility functions have long traditions. Conjoint analysis, and particularly choice-based conjoint analysis (CBC), is frequently employed for such measurement. The world today appears increasingly customer and user oriented, and research intensity in conjoint analysis is accordingly increasing rapidly in various fields, OR/MS being no exception. Although several optimization-based approaches have been suggested since the introduction of the Hierarchical Bayes (HB) method for estimating CBC utility functions, recent comparisons indicate that challenging HB is hard. Based on likelihood maximization, we propose a method called LM and compare its performance with HB using twelve field data sets. Performance comparisons are based on holdout validation, i.e. predictive performance. The average performance of LM indicates an improvement over HB, and the difference is statistically significant. We also use simulation-based data sets to compare performance in terms of parameter recovery. In terms of both predictive performance and RMSE, a smaller number of questions in CBC appears to favor LM over HB. | Likelihood estimation of consumer preferences in choice-based conjoint analysis
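As background for the likelihood-maximization idea, the sketch below estimates part-worths of a plain aggregate-level multinomial logit model from simulated choice tasks by minimizing the negative log-likelihood. The paper's LM method estimates individual-level CBC utilities and is more involved; this shows only the basic building block, and the data shapes and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, y):
    """Multinomial-logit negative log-likelihood over choice tasks.
    X: (tasks, alternatives, attributes); y: index of the chosen alternative."""
    u = X @ beta                                  # utilities, (tasks, alts)
    u -= u.max(axis=1, keepdims=True)             # numerical stabilization
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].sum()

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])            # ground-truth part-worths
X = rng.normal(size=(300, 4, 3))                  # 300 tasks, 4 alternatives
p = np.exp(X @ beta_true)
p /= p.sum(axis=1, keepdims=True)
y = np.array([rng.choice(4, p=pi) for pi in p])   # simulated choices
res = minimize(neg_log_likelihood, np.zeros(3), args=(X, y), method="BFGS")
print(res.x)                                      # should be close to beta_true
```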
S0377221714004846 | We study a mean-risk model derived from a behavioral theory of Disappointment with multiple reference points. One distinguishing feature of the risk measure is that it is based on mutual deviations of outcomes, not deviations from a specific target. We prove necessary and sufficient conditions for strict first and second order stochastic dominance, and show that the model is, in addition, a Convex Risk Measure. The model allows for richer, and behaviorally more plausible, risk preference patterns than competing models with equal degrees of freedom, including Expected Utility (EU), Mean–Variance (M-V), Mean-Gini (M-G), and models based on non-additive probability weighting, such as Dual Theory (DT). In asset allocation, the model allows a decision-maker to abstain from diversifying in a positive expected value risky asset if its performance does not meet a certain threshold, and gradually invest beyond this threshold, which appears more acceptable than the extreme solutions provided by either EU and M-V (always diversify) or DT and M-G (always plunge). In asset trading, the model provides no-trade intervals, like DT and M-G, in some, but not all, situations. An illustrative application to portfolio selection is presented. The model can provide an improved criterion for mean-risk analysis by injecting a new level of behavioral realism and flexibility, while maintaining key normative properties. | Mean-risk analysis with enhanced behavioral content |
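The "mutual deviations of outcomes" the abstract refers to is the same ingredient used by the Mean-Gini comparator it cites: the Gini mean difference E|X - X'|/2 over independent copies of the outcome. The sketch below computes it from a sample via the classical sorted-sample identity and plugs it into a generic mean-risk score; this illustrates the M-G benchmark, not the paper's disappointment-based measure, and the penalty weight is illustrative.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini mean difference E|X - X'|/2 of an empirical sample:
    a risk measure built from mutual deviations of outcomes."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # sorted-sample identity: sum_{i<j}(x_j - x_i) = sum_i (2i - n + 1) x_i
    coef = 2 * np.arange(n) - n + 1
    return (coef @ x) / (n * (n - 1))

def mean_risk_score(x, lam=1.0):
    """Mean-risk objective: expected value penalized by mutual-deviation risk."""
    return np.mean(x) - lam * gini_mean_difference(x)

returns = np.random.default_rng(3).normal(0.05, 0.2, size=10000)
print(mean_risk_score(returns))
```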
S0377221714004858 | In this paper, we combine robust optimization and the idea of ε-arbitrage to propose a tractable approach to price a wide variety of options. Rather than assuming a probabilistic model for the stock price dynamics, we assume that the conclusions of probability theory, such as the central limit theorem, hold deterministically on the underlying returns. This gives rise to an uncertainty set that the underlying asset returns satisfy. We then formulate the option pricing problem as a robust optimization problem that identifies the portfolio which minimizes the worst case replication error for a given uncertainty set defined on the underlying asset returns. The most significant benefits of our approach are (a) computational tractability, illustrated by our ability to price multi-asset, American and Asian options using linear optimization, so that the computational complexity of our approach scales polynomially with the number of assets and with time to expiry; and (b) modeling flexibility, illustrated by our ability to model different kinds of options, various levels of risk aversion among investors, transaction costs, shorting constraints and replication via option portfolios. | Robust option pricing
S0377221714004871 | This paper provides a two-stage decision framework in which two or more parties exercise a jointly held real option. We show that a single party’s timing decision is always socially efficient if it precedes bargaining on the terms of sharing. However, if the sharing rule is agreed before the exercise timing decision is made, then socially optimal timing is attained only if there is a cash payment element in the division of surplus. If the party that chooses the exercise timing can divert value from the project, then the first-best outcome may not be possible at all and the second-best outcome may be implemented using a contract that is generally not optimal in the former cases. Our framework contributes to the understanding of a range of empirical regularities in corporate and entrepreneurial finance. | Optimal exercise of jointly held real options: A Nash bargaining approach with value diversion |
S0377221714004883 | Suppose customers need to choose when to arrive at a congested queue with some desired service at the end, provided by a single server that operates only during a certain time interval. We study a model where the customers incur not only congestion (waiting) costs but also penalties for their index of arrival. Arriving before other customers is desirable when the value of service decreases with every admitted customer. This may be the case, for example, when arriving at a concert or a bus with unmarked seats, or when going to lunch in a busy cafeteria. We provide a game-theoretic analysis of such queueing systems with a given number of customers; specifically, we characterize the arrival process which constitutes a symmetric Nash equilibrium. | Equilibrium arrival times to a queue with order penalties
S0377221714004895 | In for-profit organizations, profit efficiency decomposition is considered important since estimates of profit drivers are of practical use to managers in their decision making. Profit efficiency is traditionally decomposed into two sources – technical efficiency and allocative efficiency. The contribution of this paper is a novel decomposition of technical efficiency that could be more practical to use when the firm under evaluation wants to achieve technical efficiency as quickly as possible. For this purpose, we show how a new version of the Measure of Inefficiency Proportions (MIP), which seeks to minimize the total technical effort of the assessed firm, is a lower bound on the value of technical inefficiency associated with the directional distance function. The targets provided by the new MIP could be beneficial for firms since they specify how a firm may become technically efficient simply by decreasing one input or increasing one output, suggesting that each firm should focus its effort on a specific dimension (input or output). This approach is operationalized in a data envelopment analysis framework and applied to a dataset of airlines. | Decomposing technical inefficiency using the principle of least action
S0377221714004901 | We introduce a novel strategy to address the issue of demand estimation in single-item single-period stochastic inventory optimisation problems. Our strategy analytically combines confidence interval analysis and inventory optimisation. We assume that the decision maker is given a set of past demand samples and we employ confidence interval analysis in order to identify a range of candidate order quantities that, with prescribed confidence probability, includes the real optimal order quantity for the underlying stochastic demand process with unknown stationary parameter(s). In addition, for each candidate order quantity that is identified, our approach produces an upper and a lower bound for the associated cost. We apply this approach to three demand distributions in the exponential family: binomial, Poisson, and exponential. For two of these distributions we also discuss the extension to the case of unobserved lost sales. Numerical examples are presented in which we show how our approach complements existing frequentist—e.g. based on maximum likelihood estimators—or Bayesian strategies. | Confidence-based optimisation for the newsvendor problem under binomial, Poisson and exponential demand |
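For the Poisson case, the basic mechanics can be sketched as follows: build an exact (Garwood, chi-square based) confidence interval for the Poisson rate from the past demand samples, then map its endpoints through the critical fractile, which is monotone in the rate, to obtain a range of candidate order quantities. The paper's approach additionally derives cost bounds for each candidate; those are omitted here, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def candidate_order_quantities(demand_samples, cu, co, alpha=0.05):
    """Range of newsvendor order quantities covering, with confidence
    1 - alpha, the true optimum under Poisson demand (a sketch)."""
    n = len(demand_samples)
    T = int(np.sum(demand_samples))
    # exact (Garwood) confidence interval for the Poisson rate
    lam_lo = stats.chi2.ppf(alpha / 2, 2 * T) / (2 * n) if T > 0 else 0.0
    lam_hi = stats.chi2.ppf(1 - alpha / 2, 2 * T + 2) / (2 * n)
    cf = cu / (cu + co)                          # critical fractile
    Q_lo = int(stats.poisson.ppf(cf, lam_lo))    # quantile is monotone in rate
    Q_hi = int(stats.poisson.ppf(cf, lam_hi))
    return Q_lo, Q_hi

rng = np.random.default_rng(42)
samples = rng.poisson(lam=20, size=10)           # ten past demand observations
print(candidate_order_quantities(samples, cu=4.0, co=1.0))
```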
S0377221714004913 | We explore buyback contracts in a supplier–retailer supply chain where the retailer faces a price-dependent downward-sloping demand curve subject to uncertainty. In contrast to the existing literature, this work focuses on analytically examining how the uncertainty level embedded in market demand affects the applicability of buyback contracts in supply chain management. To this end, we seek to characterize the buyback model in terms of only the demand uncertainty level (DUL). With this new research perspective, we obtain some interesting new findings for buyback. For example, we find that (1) even though the supply chain’s efficiency will change over the DUL with a wholesale price-only contract, it will be maintained constantly at that of the corresponding deterministic demand setting with buyback, regardless of the DUL; (2) in the practice of buyback, the buyback issuer should adjust only the buyback price in reaction to different DULs, while leaving the wholesale price unchanged at its level in the corresponding deterministic demand setting; and (3) only in demand settings with an intermediate level of uncertainty (identified quantitatively in Theorem 5) is buyback provision simultaneously beneficial for the supplier, the retailer, and the supply chain system, while this is not the case in the other demand settings. This work reveals that the DUL can be a critical factor affecting the applicability of supply chain contracts. | Buyback contracts with price-dependent demands: Effects of demand uncertainty
S0377221714004925 | Multi-objectivization represents a current and promising research direction which has led to the development of more competitive search mechanisms. This concept involves the restatement of a single-objective problem in an alternative multi-objective form, which can facilitate the process of finding a solution to the original problem. Recently, this transformation was applied with success to the HP model, a simplified yet challenging representation of the protein structure prediction problem. The use of alternative multi-objective formulations, based on the decomposition of the original objective function of the problem, has significantly increased the performance of search algorithms. The present study goes further on this topic. With the primary aim of understanding and quantifying the potential effects of multi-objectivization, a detailed analysis is first conducted to evaluate the extent to which this problem transformation impacts an important characteristic of the fitness landscape: neutrality. To the authors’ knowledge, the effects of multi-objectivization have not previously been investigated by explicitly sampling and evaluating the neutrality of the fitness landscape. Although focused on the HP model, most of the findings of such an analysis can be extrapolated to other problem domains, thus contributing to the general understanding of multi-objectivization. Finally, this study presents a comparative analysis where the advantages of multi-objectivization are evaluated in terms of the performance of a basic evolutionary algorithm. Both the two- and three-dimensional variants of the HP model (based on the square and cubic lattices, respectively) are considered. | Multi-objectivization, fitness landscape transformation and search performance: A case study on the HP model for protein structure prediction
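For readers unfamiliar with the underlying model, the sketch below evaluates a 2D HP-model conformation: a conformation is a self-avoiding walk on the square lattice, and its energy counts -1 for every pair of H residues that are lattice neighbors but not consecutive in the chain. The move encoding and example sequence are illustrative.

```python
# Moves on the square lattice: U(p), D(own), L(eft), R(ight)
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def hp_energy(sequence, directions):
    """Energy of a 2D HP-model conformation: -1 per H-H contact between
    residues adjacent on the lattice but not consecutive in the chain.
    Returns None for an infeasible (self-intersecting) walk."""
    coords = [(0, 0)]
    for d in directions:
        dx, dy = MOVES[d]
        x, y = coords[-1]
        coords.append((x + dx, y + dy))
    if len(set(coords)) != len(coords):        # self-avoidance violated
        return None
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and sequence[j] == "H":
                energy -= 1                    # count each contact once
    return energy

print(hp_energy("HPPH", "RUL"))                # -1: the two H's fold into contact
```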
S0377221714004937 | Metro station corridor width design is a pressing concern and a complicated stochastic planning problem: it must account for demand fluctuation as well as the randomness and state-dependence of service times. This paper confirms the accuracy of phase-type (PH) distribution fitting for passenger arrival intervals and service times with randomness and state-dependence in metro station corridors. A PH/PH(n)/C/C state-dependent queuing model is then established via a finite level-dependent quasi-birth–death (QBD) process. The existing M/G(n)/C/C, M/G/1/C, and D/D/1/C models are shown through theoretical derivation to be special cases of the PH/PH(n)/C/C model, and the precision of the proposed model is analyzed through simulation tests. The quantitative relationship between the level of service (LOS) and the corridor width is established based on the proposed model. A total of 81 experiments are designed to compare the calculations of the proposed model against the M/G(n)/C/C, M/G/1/C, and D/D/1/C models. Comparison results demonstrate that (1) the effective width value from the PH/PH(n)/C/C queuing model is higher than those of the M/G(n)/C/C, M/G/1/C, and D/D/1/C models; (2) the real area occupied per person in the corridor under the PH/PH(n)/C/C queuing model is closest to the designed LOS, whereas the M/G(n)/C/C, M/G/1/C, and D/D/1/C models fail to meet the designed LOS; and (3) the performance measures of the PH/PH(n)/C/C queuing model exhibit high performance-width elasticity and are significantly improved compared with those of the M/G(n)/C/C, M/G/1/C, and D/D/1/C models. | A PH/PH(n)/C/C state-dependent queuing model for metro station corridor width design
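The full PH/PH(n)/C/C model requires a level-dependent QBD solver, but its M/M(n)/C/C special case (one of the special cases mentioned above) reduces to a birth-death chain that balance equations solve directly. The linear speed-density rule and all numbers below are assumptions; the loop mimics the width-selection logic by growing capacity until a blocking target, a stand-in for the paper's LOS criterion, is met.

```python
# A simplified sketch: the paper's PH/PH(n)/C/C model needs a QBD solver, but
# its M/M(n)/C/C special case is a birth-death chain solvable by balance
# equations. The linear speed-density rule and all numbers are assumptions.

def mmnc_blocking(lam, mu1, C):
    """Blocking probability of an M/M(n)/C/C queue whose per-pedestrian
    service rate decays linearly with occupancy: rate(n) = mu1 * (1 - (n-1)/C)."""
    p = [1.0]                                   # unnormalised pi_0
    for n in range(1, C + 1):
        rate_n = n * mu1 * max(1.0 - (n - 1) / C, 1.0 / C)   # total service rate
        p.append(p[-1] * lam / rate_n)          # balance: pi_n = pi_{n-1} * lam / rate(n)
    return p[C] / sum(p)                        # probability an arrival is blocked

# Corridor capacity C grows with width; find the smallest capacity meeting a
# blocking target (an illustrative stand-in for the paper's LOS criterion).
lam, mu1, target = 8.0, 1.0, 0.01
for C in range(40, 121, 20):
    pb = mmnc_blocking(lam, mu1, C)
    print(f"C={C:3d}  blocking={pb:.4f}" + ("  <- meets target" if pb < target else ""))
```

The state-dependence enters through rate(n): as the corridor crowds, every pedestrian slows, which is exactly the effect that memoryless fixed-rate models miss and that PH fitting generalises further.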
S0377221714004949 | Motivated by the emergence of online penny or pay-to-bid auctions, in this study, we analyze the operational consequences of all-pay auctions competing with fixed list price stores. In all-pay auctions, bidders place bids, and the highest bidder wins. Depending on the auction format, the winner pays either the amount of their own bid or that of the second-highest bid. All losing bidders forfeit their bids, regardless of the auction format. Bidders may visit the store, both before and after bidding, and buy the item at the fixed list price. In a modified version, we consider a setting where bidders can use their sunk bid as a credit towards buying the item from the auctioneer at a fixed price (different from the list price). We characterize a symmetric equilibrium in the bidding/buying strategy and derive optimal list prices for both the seller and auctioneer to maximize expected revenue. We consider two situations: (1) one firm operating both channels (i.e. the fixed list price store and the all-pay auction), and (2) two competing firms, each operating one of the two channels. | All-pay auctions with pre- and post-bidding options
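For intuition about the bidding side, the sketch below simulates the textbook symmetric first-price all-pay auction with i.i.d. U[0,1] values, where the equilibrium bid is b(v) = ((n-1)/n)·v^n. The fixed-price store options the paper studies are omitted, so this is a benchmark, not the paper's equilibrium.

```python
# A benchmark sketch, not the paper's model (which adds fixed-price outside
# options): in a symmetric first-price all-pay auction with n bidders and
# i.i.d. U[0,1] values, the equilibrium bid is b(v) = ((n-1)/n) * v**n.
import random

def simulate_revenue(n_bidders, rounds=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        vals = [rng.random() for _ in range(n_bidders)]
        # Every bidder pays their bid (losers forfeit, the winner pays too),
        # so the seller's revenue is the sum of all equilibrium bids.
        total += sum((n_bidders - 1) / n_bidders * v ** n_bidders for v in vals)
    return total / rounds

for n in (2, 3, 5):
    theory = (n - 1) / (n + 1)           # revenue-equivalence benchmark
    print(f"n={n}: simulated={simulate_revenue(n):.4f}  theory={theory:.4f}")
```

The simulated revenue matches (n-1)/(n+1), the same expected revenue as a standard first- or second-price auction, illustrating revenue equivalence; the store option in the paper's model truncates high-value bidders' willingness to bid and breaks this benchmark.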
S0377221714004950 | In this paper, we explore how firms can manage their raw material sourcing better by developing appropriate sourcing relationships with their raw material suppliers. We detail three empirical case studies of firms explaining their different raw material sourcing strategies: (a) firms can adopt a hands-off approach to raw material management, (b) firms can supply raw material directly to their suppliers, which may be beneficial for some agents in the supply chain, and (c) firms can bring their component suppliers together, and the resulting cooperation between suppliers can be beneficial for the supply chain. We then analytically model the three raw material scenarios encountered in our empirical work, examine the resulting profits along the supply chain, and extend the results to a competitive buyer scenario. Overall, our results show that active management of raw material sourcing can add value to supply chains. | Managing raw material in supply chains
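A toy way to see why direct raw material sourcing "may be beneficial for some agents" is the standard wholesale-price (double-marginalization) game with linear demand, where direct sourcing is modelled simply as a lower chain unit cost. This is an assumed stand-in, not the paper's model; the demand intercept, processing cost, and raw material prices are all invented.

```python
# A toy double-marginalization sketch (assumed, not from the paper): linear
# demand q = a - p, a Stackelberg supplier setting wholesale price w, and a
# buyer setting retail price p. Direct raw-material sourcing by the buyer is
# modelled as lowering the chain's unit cost from the hands-off level.

def chain_profits(a, unit_cost):
    """Supplier, buyer, and total profit in the wholesale-price game."""
    w = (a + unit_cost) / 2             # supplier's optimal wholesale price
    p = (a + w) / 2                     # buyer's optimal retail price
    q = a - p
    return (w - unit_cost) * q, (p - w) * q, (p - unit_cost) * q

a, processing = 100.0, 10.0
for label, raw in (("hands-off sourcing", 20.0), ("direct sourcing", 14.0)):
    s, b, total = chain_profits(a, processing + raw)
    print(f"{label:18s}: supplier={s:7.1f}  buyer={b:7.1f}  chain={total:7.1f}")
```

In this toy both parties gain from the cheaper input, but the supplier captures twice the buyer's share, hinting at why, as the abstract notes, direct supply may benefit only some agents once the sourcing effort itself carries a cost.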
S0377221714004962 | Process improvement plays a significant role in reducing production costs over the life cycle of a product. We consider the role of process improvement in a decentralized assembly system in which a buyer purchases components from several first-tier suppliers. These components are assembled into a finished product, which is sold to the downstream market. The assembler faces a deterministic demand/production rate and the suppliers incur variable inventory costs and fixed setup production costs. In the first stage of the game, which is modeled as a non-cooperative game among suppliers, suppliers make investments in process improvement activities to reduce the fixed production costs. Upon establishing a relationship with the suppliers, the assembler establishes a knowledge sharing network – this network is implemented as a series of meetings among suppliers and also mutual visits to their factories. These meetings facilitate the exchange of best practices among suppliers with the expectation that suppliers will achieve reductions in their production costs from the experiences learned through knowledge sharing. We model this knowledge exchange as a cooperative game among suppliers in which, as a result of cooperation, all suppliers achieve reductions in their fixed costs. In the non-cooperative game, the suppliers anticipate the cost allocation that results from the cooperative game in the second stage by incorporating the effect of knowledge sharing in their cost functions. Based on this model, we investigate the benefits and challenges associated with establishing a knowledge sharing network. We identify and compare various cost allocation mechanisms that are feasible in the cooperative game and show that the system optimal investment levels can be achieved only when the most efficient supplier receives the incremental benefits of the cost reduction achieved by other suppliers due to the knowledge transfer. | Cooperation in assembly systems: The role of knowledge sharing networks |
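One concrete way to compare cost allocation mechanisms in the second-stage cooperative game is the Shapley value. The characteristic function below, in which each coalition member's setup cost falls to the coalition minimum, is an illustrative assumption rather than the paper's model, and the Shapley split is just one feasible allocation, not the incremental-benefits rule the paper shows to be system optimal.

```python
# A hedged sketch of one feasible allocation in the second-stage cooperative
# game: savings from knowledge sharing are split by the Shapley value. The
# characteristic function (every coalition member's setup cost drops to the
# coalition minimum) is an illustrative assumption, not the paper's model.
from itertools import combinations
from math import factorial

costs = {"s1": 120.0, "s2": 90.0, "s3": 60.0}    # hypothetical setup costs

def savings(coalition):
    """Total cost reduction if every member learns the best practice."""
    if not coalition:
        return 0.0
    best = min(costs[s] for s in coalition)
    return sum(costs[s] - best for s in coalition)

def shapley(players):
    """Shapley value: average marginal contribution over all join orders."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (savings(S + (p,)) - savings(S))
    return phi

alloc = shapley(tuple(costs))
print({p: round(v, 2) for p, v in alloc.items()})
print("total savings:", savings(tuple(costs)))
```

Under this toy characteristic function the most efficient supplier (s3) captures only part of the savings it enables for the others, which is exactly the tension the paper's optimality result resolves by assigning it the full incremental benefit of the knowledge transfer.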