FileName | Abstract | Title |
---|---|---|
S0377221713007996 | Three levels of competitiveness affect the success of business enterprises in a globally competitive environment: the competitiveness of the company, the competitiveness of the industry in which the company operates and the competitiveness of the country where the business is located. This study analyses the competitiveness of the automotive industry in association with the national competitiveness perspective using a methodology based on Bayesian Causal Networks. First, we structure the competitiveness problem of the automotive industry through a synthesis of expert knowledge in the light of the World Economic Forum’s competitiveness indicators. Second, we model the relationships among the variables identified in the problem structuring stage and analyse these relationships using a Bayesian Causal Network. Third, we develop policy suggestions under various scenarios to enhance the national competitive advantages of the automotive industry. We present an analysis of the Turkish automotive industry as a case study. It is possible to generalise the policy suggestions developed for the Turkish automotive industry to the automotive industries in other developing countries where country and industry competitiveness levels are similar to those of Turkey. | A decision support methodology to enhance the competitiveness of the Turkish automotive industry |
S0377221713008011 | The constant returns to scale assumption maintained by neoclassical theorists for justifying the black-box structure of production technology in the long run does not necessarily allow one to infer that there are no scale benefits available in its sub-technologies. Most real-life production technologies are multi-stage in nature, and the sources of increasing returns lie in the sub-technologies. It is, therefore, imperative to estimate the scale economies of a firm not only for the network technology but also for the sub-technologies. To accomplish this, two approaches are suggested in this contribution, based on the premise concerning whether a network technology construct considers allocative inefficiency. The first approach, which is ours, makes use of a single network technology for two interdependent sub-technologies. The second approach, which is due to Kao and Hwang (2011), however, assumes complete allocative efficiency by considering two independent sub-technology frontiers, one for each sub-technology. The distinction between these two approaches is important from a policy point of view since the network efficiencies revealed from these two approaches have distinctive causative factors that do not permit them to be used interchangeably. | Decomposing technical efficiency and scale elasticity in two-stage network DEA |
S0377221713008023 | In a recent article, Darwish and Odah (2010) develop a scheme that allows for identical replenishment cycles for all the retailers, in the context of a single vendor supplying a group of retailers under a VMI partnership. This paper proposes an alternative replenishment scheme allowing for different replenishment cycles for each retailer. An example is shown to illustrate the cost savings under the proposed model. | Joint replenishment of multi retailer with variable replenishment cycle under VMI |
S0377221713008035 | In many managerial applications, situations frequently occur when a fixed cost is used in constructing the common platform of an organization, and needs to be shared by all related entities, or decision making units (DMUs). It is of vital importance to allocate such a cost across DMUs where there is competition for resources. Data envelopment analysis (DEA) has been successfully used in cost and resource allocation problems. Whether it is a cost or resource allocation issue, one needs to consider both the competitive and cooperative situation existing among DMUs in addition to maintaining or improving efficiency. The current paper uses the cross-efficiency concept in DEA to approach cost and resource allocation problems. Because DEA cross-efficiency uses the concept of peer appraisal, it is a very reasonable and appropriate mechanism for allocating a shared resource/cost. It is shown that our proposed iterative approach is always feasible, and ensures that all DMUs become efficient after the fixed cost is allocated as an additional input measure. The cross-efficiency DEA-based iterative method is further extended into a resource-allocation setting to achieve maximization in the aggregated output change by distributing available resources. Such allocations for fixed costs and resources are more acceptable to the players involved, because the allocation results are jointly determined by all DMUs rather than a specific one. The proposed approaches are demonstrated using an existing data set that has been applied in similar studies. | Fixed cost and resource allocation based on DEA cross-efficiency |
S0377221713008047 | Discussion of learning from discrete-event simulation often takes the form of a hypothesis stating that involving clients in model building provides much of the learning necessary to aid their decisions. Whilst practitioners of simulation may intuitively agree with this hypothesis they are simultaneously motivated to reduce the model building effort through model reuse. As simulation projects are typically limited by time, model reuse offers an alternative learning route for clients as the time saved can be used to conduct more experimentation. We detail a laboratory experiment to test the high involvement hypothesis empirically, identify mechanisms that explain how involvement in model building or model reuse affect learning and explore the factors that inhibit learning from models. Measurement of learning focuses on the management of resource utilisation in a case study of a hospital emergency department and through the choice of scenarios during experimentation. Participants who reused a model benefitted from the increased experimentation time available when learning about resource utilisation. However, participants who were involved in model building simulated a greater variety of scenarios including more validation type scenarios early on. These results suggest that there may be a learning trade-off between model reuse and model building when simulation projects have a fixed budget of time. Further work evaluating client learning in practice should track the origin and choice of variables used in experimentation; studies should also record the methods modellers find most effective in communicating the impact of resource utilisation on queuing. | Learning from discrete-event simulation: Exploring the high involvement hypothesis |
S0377221713008059 | Microfinance institutions face a double bottom-line. They perform financial tasks by giving microcredits to their customers and support projects aiming at reducing poverty. In doing so, they have to be financially self-sufficient and to target poor people excluded from the traditional financial systems. However, a trade-off may exist between financial sustainability and poverty outreach for these institutions. By using a multi-DEA approach, this paper shows that even if a trade-off exists for 15% of the MC2 (Mutuelles Communautaires de Croissance) in Cameroon, there is no trade-off for 46% of them. In order to increase, without trade-off, financial and social performance of inefficient MC2, a benchmarking approach combining DEA and performance indicators has been developed. DEA is used for identifying best-practices and setting benchmarking goals. Performance indicators are used for characterizing areas needing improvements and following the evolution of MC2 toward their goals, i.e., for implementing benchmarking. Complementarity of both approaches provides a tool box for improving financial and social efficiency and reducing the trade-off between financial sustainability and poverty outreach of microfinance institutions. | Financial sustainability and poverty outreach within a network of village banks in Cameroon: A multi-DEA approach |
S0377221713008060 | In this paper we study situations where a group of agents require a service that can only be provided from a source, the so-called source connection problems. These problems contain the standard fixed tree, the classical minimum spanning tree and some other related problems such as the k-hop, the degree constrained and the generalized minimum spanning tree problems among others. Our goal is to divide the cost of a network among the agents. To this end, we introduce a rule which will be referred to as a painting rule because it can be interpreted by means of a story about painting. Some meaningful properties in this context and a characterization of the rule are provided. | A new rule for source connection problems |
S0377221713008072 | This paper models the locations of landfills and transfer stations and simultaneously determines the sizes of the landfills that are to be established. The model is formulated as a bi-objective mixed integer optimization problem, in which one objective is the usual cost-minimization, while the other minimizes pollution. In fact, pollution is dealt with via a two-pronged approach: on the one hand, the model includes constraints that enforce legislated limits on pollution, while one of the objective functions attempts to minimize pollution effects, even though solutions may formally satisfy the letter of the law. The model is formulated and solved for the data of a region in Chile. Computational results for a variety of parameter choices are provided. These results are expected to aid decision makers in excluding or choosing sites for solid waste facilities. | A bi-objective model for the location of landfills for municipal solid waste |
S0377221713008084 | This paper examines a resource constrained production planning and scheduling problem motivated by the coal supply chain. In this problem, multiple independent producers are connected with a resource availability (or, linking) constraint. A general description of such problems is provided, before decomposing the problem into two levels. In the first level, we deal with production planning and in the second level, we deal with tactical resource scheduling. A real-world coal supply chain example is presented to anchor the approach. The overall problem can be formulated as an integrated mixed integer programming model which, in several cases, struggles to find even a feasible solution in a reasonable amount of time. This paper discusses a distributed decision making approach based on column generation (CG). Computational experiments show that the CG scheme has significant advantages over the integrated model and a Lagrangian relaxation scheme proposed by Thomas et al. (2013). This paper concludes with detailed discussions on the results and future research directions. | A resource constrained scheduling problem with multiple independent producers and a single linking constraint: A coal supply chain example |
S0377221713008096 | Multistage dynamic networks with random arc capacities (MDNRAC) have been successfully used for modeling various resource allocation problems in the transportation area. However, solving these problems is generally computationally intensive, and there is still a need to develop more efficient solution approaches. In this paper, we propose a new heuristic approach that solves the MDNRAC problem by decomposing the network at each stage into a series of subproblems with tree structures. Each subproblem can be solved efficiently. The main advantage is that this approach provides an efficient computational device to handle the large-scale problem instances with fairly good solution quality. We show that the objective value obtained from this decomposition approach is an upper bound for that of the MDNRAC problem. Numerical results demonstrate that our proposed approach works very well. | An arc-exchange decomposition method for multistage dynamic networks with random arc capacities |
S0377221713008308 | The reformulation–linearization technique (RLT), introduced in [Sherali, H. D., Adams. W. P. (1990). A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM Journal on Discrete Mathematics 3(3), 411–430], provides a way to compute a hierarchy of linear programming bounds on the optimal values of NP-hard combinatorial optimization problems. In this paper we show that, in the presence of suitable algebraic symmetry in the original problem data, it is sometimes possible to compute level two RLT bounds with additional linear matrix inequality constraints. As an illustration of our methodology, we compute the best-known bounds for certain graph partitioning problems on strongly regular graphs. | Symmetry in RLT-type relaxations for the quadratic assignment and standard quadratic optimization problems |
S0377221713008321 | Credit options and side payments are two methods suggested for achieving coordination in a two-echelon supply chain. We examine the credit option coordination mechanism introduced by Chaharsooghi and Heydari [Chaharsooghi, S., & Heydari, J. (2010). Supply chain coordination for the joint determination of order quantity and reorder point using credit option. European Journal of Operational Research, 204(1), 86–95]. This method assumes that the supplier’s opportunity costs are equal to the reduction in the buyer’s financial holding costs during the credit period. In this note, we show that Chaharsooghi and Heydari’s method is not applicable when buyer and supplier opportunity costs are not equal. We introduce an alternate per order rebate method that reduces supply chain costs to centralized management levels. | A note on supply chain coordination for joint determination of order quantity and reorder point using a credit option |
S0377221713008333 | We consider a supply chain in which one manufacturer sells a seasonal product to the end market through a retailer. Faced with uncertain market demand and limited capacity, the manufacturer can maximize its profits by adopting one of two strategies, namely, wholesale price rebate or capacity expansion. In the former, the manufacturer provides the retailer with a discount for accepting early delivery in an earlier period. In the latter, the production capacity of the manufacturer in the second period can be raised so that production is delayed until the period close to the selling season to avoid holding costs. Our research shows that the best strategy for the manufacturer is determined by three driving forces: the unit cost of holding inventory for the manufacturer, the unit cost of holding inventory for the retailer, and the unit cost of capacity expansion. When the single period capacity is low, adopting the capacity expansion strategy dominates as both parties can improve their profits compared to the wholesale price rebate strategy. When the single period capacity is high, on the other hand, the equilibrium outcome is the wholesale price rebate strategy. | Wholesale price rebate vs. capacity expansion: The optimal strategy for seasonal products in a supply chain |
S0377221713008345 | Expertons and uncertain aggregation operators are tools for dealing with imprecise information that can be assessed with interval numbers. This paper introduces the uncertain generalized probabilistic weighted averaging (UGPWA) operator. It is an aggregation operator that unifies the probability and the weighted average in the same formulation considering the degree of importance that each concept has in the aggregation. Moreover, it is able to assess uncertain environments that cannot be assessed with exact numbers but can be represented by interval numbers. Thus, we can analyze imprecise information considering the minimum and the maximum result that may occur. Further extensions to this approach are presented including the quasi-arithmetic uncertain probabilistic weighted averaging operator and the uncertain generalized probabilistic weighted moving average. We analyze the applicability of this new approach in a group decision making problem by using the theory of expertons in strategic management. | Group decision making with expertons and uncertain generalized probabilistic weighted aggregation operators |
S0377221713008357 | In this paper, an overview is presented of the existing metaheuristic solution procedures to solve the multi-mode resource-constrained-project scheduling problem, in which multiple execution modes are available for each of the activities of the project. A fair comparison is made between the different metaheuristic algorithms on the existing benchmark datasets and on a newly generated dataset. Computational results are provided and recommendations for future research are formulated. | An experimental investigation of metaheuristics for the multi-mode resource-constrained project scheduling problem on new dataset instances |
S0377221713008369 | This paper introduces a blocking model and closed-form expression for two workers traveling with walk speed m (m an integer) in a no-passing circular-passage system of n stations, assuming n = m+2, 2m+2, …. We develop a Discrete-Timed Markov Chain (DTMC) model to capture the workers’ changes of walk, pick, and blocked states, and quantify the throughput loss from blocking congestion by deriving a steady state probability in a closed-form expression. We validate the model with a simulation study. Additional simulation comparisons show that the proposed throughput model gives a good approximation of a general-sized system of n stations (i.e., n > 2), a practical walk speed system of real number m (i.e., m ⩾ 1), and a bucket brigade order picking application. | Two-worker blocking congestion model with walk speed m in a no-passing circular passage system |
S0377221713008370 | On the class of cycle-free directed graph games with transferable utility, solution concepts called web values are introduced axiomatically, each one with respect to a chosen coalition of players that is assumed to be an anti-chain in the directed graph and is considered as a management team. We provide their explicit formula representation and simple recursive algorithms to calculate them. Additionally, the efficiency and stability of web values are studied. Web values may be considered as natural extensions of the tree and sink values defined correspondingly for rooted and sink forest graph games. In case the management team consists of all sources (sinks) in the graph, a kind of tree (sink) value is obtained. In general, at a web value each player receives the worth of this player together with his subordinates minus the total worths of these subordinates. It implies that every coalition of players consisting of a player with all his subordinates receives precisely its worth. We also define the average web value as the average of web values over all management teams in the graph. As an application, the water distribution problem of a river with multiple sources, a delta and possibly islands is considered. | Tree, web and average web values for cycle-free directed graph games |
S0377221713008382 | Due to the dramatic increase in the world’s container traffic, the efficient management of operations in seaport container terminals has become a crucial issue. In this work, we focus on the integrated planning of the following problems faced at container terminals: berth allocation, quay crane assignment (number), and quay crane assignment (specific). First, we formulate a new binary integer linear program for the integrated solution of the berth allocation and quay crane assignment (number) problems called BACAP. Then we extend it by incorporating the quay crane assignment (specific) problem as well, which is named BACASP. Computational experiments performed on problem instances of various sizes indicate that the model for BACAP is very efficient and even large instances up to 60 vessels can be solved to optimality. Unfortunately, this is not the case for BACASP. Therefore, to be able to solve large instances, we present a necessary and sufficient condition for generating an optimal solution of BACASP from an optimal solution of BACAP using a post-processing algorithm. In case this condition is not satisfied, we make use of a cutting plane algorithm which solves BACAP repeatedly by adding cuts generated from the optimal solutions until the aforementioned condition holds. This method proves to be viable and enables us to solve large BACASP instances as well. To the best of our knowledge, these are the largest instances that can be solved to optimality for this difficult problem, which makes our work applicable to realistic problems. | Optimal berth allocation and time-invariant quay crane assignment in container terminals |
S0377221713008394 | This paper presents a global optimization approach for solving signomial geometric programming problems. In most cases nonconvex optimization problems with signomial parts are difficult, NP-hard problems to solve for global optimality. But some transformation and convexification strategies can be used to convert the original signomial geometric programming problem into a series of standard geometric programming problems that can be solved to reach a global solution. The tractability and effectiveness of the proposed successive convexification framework is demonstrated by seven numerical experiments. Some considerations are also presented to investigate the convergence properties of the algorithm and to give a performance comparison of our proposed approach and the current methods in terms of both computational efficiency and solution quality. | Global optimization of signomial geometric programming problems |
S0377221713008400 | In this paper, we investigate a two-stage lot-sizing and scheduling problem in a spinning industry. A new hybrid method called HOPS (Hamming-Oriented Partition Search), a branch-and-bound based procedure that incorporates a fix-and-optimize improvement method, is proposed to solve the problem. An innovative partition choice for the fix-and-optimize method is developed. The computational tests with generated instances based on real data show that HOPS is a good alternative for solving mixed integer problems with recognized partitions such as the lot-sizing and scheduling problem. | HOPS – Hamming-Oriented Partition Search for production planning in the spinning industry |
S0377221713008412 | In the Distance Constrained Multiple Vehicle Traveling Purchaser Problem (DC-MVTPP) a fleet of vehicles is available to visit suppliers offering products at different prices and with different quantity availabilities. The DC-MVTPP consists in selecting a subset of suppliers so to satisfy products demand at the minimum traveling and purchasing costs, while ensuring that the distance traveled by each vehicle does not exceed a predefined upper bound. The problem generalizes the classical Traveling Purchaser Problem (TPP) and adds new realistic features to the decision problem. In this paper we present different mathematical programming formulations for the problem. A branch-and-price algorithm is also proposed to solve a set partitioning formulation where columns represent feasible routes for the vehicles. At each node of the branch-and-bound tree, the linear relaxation of the set partitioning formulation, augmented by the branching constraints, is solved through column generation. The pricing problem is solved using dynamic programming. A set of instances has been derived from benchmark instances for the asymmetric TPP. Instances with up to 100 suppliers and 200 products have been solved to optimality. | The distance constrained multiple vehicle traveling purchaser problem |
S0377221713008424 | The derivation of a priority vector from a pair-wise comparison matrix (PCM) is an important issue in the Analytic Hierarchy Process (AHP). The existing methods for priority vector derivation from a PCM include the eigenvector method (EV), the weighted least squares method (WLS), the additive normalization method (AN) and the logarithmic least squares method (LLS). The derived priority vector should be as similar to each column vector of the PCM as possible if the PCM is not perfectly consistent. Therefore, a cosine maximization method (CM) based on similarity measure is proposed, which maximizes the sum of the cosine of the angle between the priority vector and each column vector of a PCM. An optimization model for the CM is proposed to derive the reliable priority vector. Using three numerical examples, the CM is compared with the other prioritization methods based on two performance evaluation criteria: Euclidean distance and minimum violation. The results show that the CM is flexible and efficient. | A cosine maximization method for the priority vector derivation in AHP |
S0377221713008436 | Pisinger et al. introduced the concept of ‘aggressive reduction’ for large-scale combinatorial optimization problems. The idea is to spend much time and effort in reducing the size of the instance, in the hope that the reduced instance will then be small enough to be solved by an exact algorithm. We present an aggressive reduction scheme for the ‘Simple Plant Location Problem’, which is a classical problem arising in logistics. The scheme involves four different reduction rules, along with lower- and upper-bounding procedures. The scheme turns out to be particularly effective for instances in which the facilities and clients correspond to points on the Euclidean plane. | An aggressive reduction scheme for the simple plant location problem |
S0377221713008448 | The Point-Feature Cartographic Label Placement (PFCLP) problem consists of placing text labels to point features on a map avoiding overlaps to improve map visualization. This paper presents a Clustering Search (CS) metaheuristic as a new alternative to solve the PFCLP problem. Computational experiments were performed over sets of instances with up to 13,206 points. These instances are the same used in several recent and important researches about the PFCLP problem. The results enhance the potential of CS by finding optimal solutions (proven in previous works) and improving the best-known solutions for instances whose optimal solutions are unknown so far. | A Clustering Search metaheuristic for the Point-Feature Cartographic Label Placement Problem |
S0377221713008461 | To deal with their highly variable workload, logistics companies make their task force flexible using multi-skilled employees, flexible working hours or short-term contracts. Together with the legal constraints and the handling equipment’s capacities, these possibilities make personnel scheduling a complex task. This paper describes a model to support their chain of decisions from the weekly timetabling to the daily rostering (detailed task allocation). We divide the problem into three sub-problems depending on the type of decision to be made: (1) workforce dimensioning, (2) task allocation for a week, and (3) detailed rostering for a day. The three decisions are made sequentially, the output of a step being the input of the next one. Each step is modeled as a mixed integer linear program, which is described and commented on. The proposed models are tested with industrial data as well as generated instances. From the observations made in an industrial context, we show that our model is an actual management tool supporting the managers in their operational decisions. This tool is currently used by the company which provided us with the industrial data. Based on the results with the generated instances, we present the conditions under which the models can be solved within a reasonable amount of time, and we assess the robustness of the daily rostering when the input data changes. | Joint employee weekly timetabling and daily rostering: A decision-support tool for a logistics platform |
S0377221713008473 | In this article, we propose UACOR, a unified ant colony optimization (ACO) algorithm for continuous optimization. UACOR includes algorithmic components from ACOR, DACOR and IACOR-LS, three ACO algorithms for continuous optimization that have been proposed previously. Thus, it can be used to instantiate each of these three earlier algorithms; in addition, from UACOR we can also generate new continuous ACO algorithms that have not been considered before in the literature. In fact, UACOR allows the usage of automatic algorithm configuration techniques to automatically derive new ACO algorithms. To show the benefits of UACOR’s flexibility, we automatically configure two new ACO algorithms, UACOR-s and UACOR-c, and evaluate them on two sets of benchmark functions from a recent special issue of the Soft Computing (SOCO) journal and the IEEE 2005 Congress on Evolutionary Computation (CEC’05), respectively. We show that UACOR-s is competitive with the best of the 19 algorithms benchmarked on the SOCO benchmark set and that UACOR-c performs better than IPOP-CMA-ES and statistically significantly better than five other algorithms benchmarked on the CEC’05 set. These results show the high potential ACO algorithms have for continuous optimization and suggest that automatic algorithm configuration is a viable approach for designing state-of-the-art continuous optimizers. | A unified ant colony optimization algorithm for continuous optimization |
S0377221713008485 | In this study, we improved the variable neighborhood search (VNS) algorithm for solving uncapacitated multilevel lot-sizing (MLLS) problems. The improvement is twofold. First, we developed an effective local search method known as the Ancestors Depth-first Traversal Search (ADTS), which can be embedded in the VNS to significantly improve the solution quality. Second, we proposed a common and efficient approach for the rapid calculation of the cost change for the VNS and other generate-and-test algorithms. The new VNS algorithm was tested against 176 benchmark problems of different scales (small, medium, and large). The experimental results show that the new VNS algorithm outperforms all of the existing algorithms in the literature for solving uncapacitated MLLS problems because it was able to find all optimal solutions (100%) for 96 small-sized problems and new best-known solutions for 5 of 40 medium-sized problems and for 30 of 40 large-sized problems. | A variable neighborhood search with an effective local search for uncapacitated multilevel lot-sizing problems |
S0377221713008497 | This paper deals with a constrained investment problem for a defined contribution (DC) pension fund where retirees are allowed to defer the purchase of the annuity at some future time after retirement. This problem has already been treated in the unconstrained case in a number of papers. The aim of this work is to deal with the more realistic case when constraints on the investment strategies and on the state variable are present. Due to the difficulty of the task, we consider, as a first step, the basic model of Gerrard, Haberman and Vigna (2004), where interim consumption and annuitization time are fixed. We extend their model by adding a no short-selling constraint on the control variable and a final capital requirement constraint on the state variable. This implies, in particular, no ruin. The mathematical problem is naturally formulated as a stochastic control problem with constraints on the control and the state variable, and is approached by the dynamic programming method. We write the non-linear Hamilton–Jacobi–Bellman equation for the problem and transform it into a dual one that is semi-linear, following a well-established duality procedure. In the special relevant case without running cost, we explicitly compute the value function for the problem and give the optimal strategy in feedback form. A numerical application ends the paper and shows the extent of applicability of the model to a DC pension fund in the decumulation phase. | Income drawdown option with minimum guarantee |
S0377221713008503 | A game with precedence constraints is a TU game with restricted cooperation, where the set of feasible coalitions is a distributive lattice, hence generated by a partial order on the set of players. Its core may be unbounded, and the bounded core, which is the union of all bounded faces of the core, proves to be a useful solution concept in the framework of games with precedence constraints. Replacing the inequalities that define the core by equations for a collection of coalitions results in a face of the core. A collection of coalitions is called normal if its resulting face is bounded. The bounded core is the union of all faces corresponding to minimal normal collections. We show that two faces corresponding to distinct normal collections may be distinct. Moreover, we prove that for superadditive games and convex games only intersecting and nested minimal collections, respectively, are necessary. Finally, it is shown that the faces corresponding to pairwise distinct nested normal collections may be pairwise distinct, and we provide a means to generate all such collections. | On the restricted cores and the bounded core of games on distributive lattices
S0377221713008515 | Motivated by Markowitz portfolio optimization problems under uncertainty in the problem data, we consider general convex parametric multiobjective optimization problems under data uncertainty. For the first time, this uncertainty is treated by a robust multiobjective formulation in the spirit of Ben-Tal and Nemirovski. For this novel formulation, we investigate its relationship to the original multiobjective formulation as well as to its scalarizations. Further, we provide a characterization of the location of the robust Pareto frontier with respect to the corresponding original Pareto frontier and show that standard techniques from multiobjective optimization can be employed to characterize this robust efficient frontier. We illustrate our results based on a standard mean–variance problem. | Robust multiobjective optimization & applications in portfolio optimization
S0377221713008527 | Natural disasters, such as earthquakes, tsunamis and hurricanes, cause tremendous harm each year. In order to reduce casualties and economic losses during the response phase, rescue units must be allocated and scheduled efficiently. As this problem is one of the key issues in emergency response and has been addressed only rarely in the literature, this paper develops a corresponding decision support model that minimizes the sum of completion times of incidents weighted by their severity. The presented problem is a generalization of the parallel-machine scheduling problem with unrelated machines, non-batch sequence-dependent setup times and a weighted sum of completion times; thus, it is NP-hard. Drawing on the scheduling and routing literature, we propose and computationally compare several heuristics, including a Monte Carlo-based heuristic, the joint application of 8 construction heuristics and 5 improvement heuristics, and GRASP metaheuristics. Our results show that problem instances (with up to 40 incidents and 40 rescue units) can be solved in less than a second, with results at most 10.9% to 33.9% above optimal values. Compared to current best-practice solutions, the overall harm can be reduced by up to 81.8%. | Emergency response in natural disaster management: Allocation and scheduling of rescue units
S0377221713008539 | In this paper we develop a Malmquist productivity index for public sector production characterized by the influence of environmental variables. We extend Johnson and Ruggiero (2011) to the more general case of variable returns to scale to further decompose the Malmquist productivity index into technical, efficiency, scale and environmental change. We apply our model to analyze productivity of Dutch schools using 2002–2007 data. The results indicate that the environment influences the productivity index as well as the technical, efficiency, scale and environmental change components. We see that schools with a moderate classification of environment have the highest productivity numbers. In line with expectations, schools with the worst environment also perform worse and would perform better with an improved environment. | Nonparametric estimation of education productivity incorporating nondiscretionary inputs with an application to Dutch schools |
S0377221713008540 | This paper studies the group-buying mechanism from a dynamic perspective. We consider a seller that offers a product in the form of group buying (priced low but uncertain) and spot purchasing (priced high but guaranteed). In the case of group buying, the information associated with the number of participating customers is updated in the middle of the sale. Customers are assumed to be strategic with a time-dependent utility. In addition to choosing between spot purchasing and group buying, customers could choose to delay their decisions until the information update. We characterize the customer behavior within a rational expectations framework. We then consider the effect of information and demand dynamics. Our results show that whereas an improvement in information quality has a positive effect on customer surplus and the group-buying success rate, the effect of inter-temporal demand correlation is mixed. We also discuss the seller’s profit maximization problem and derive the condition to be satisfied at the optimal group size. | The informational aspect of the group-buying mechanism |
S0377221713008552 | From a practical perspective, the paper demonstrates that the appropriate use of dispersion, population, and equity criteria can lead to fairly good solutions with respect to the p-median objective. The only stipulation is that the decision maker verifies (through simple constraint checks) that the chosen locations meet the dispersion, population, and equity criteria. An empirical investigation is conducted to obtain appropriate values for these parameters. From a location science perspective, a new location model that accounts for equity and efficiency simultaneously is studied and analyzed. Specifically, the p-maxian problem with side constraints on dispersion, population, and equity is developed, its NP-completeness established, and valid inequalities and bounds derived. Computational tests show encouraging results. | Public facility location using dispersion, population, and equity criteria |
S0377221713008564 | This paper studies markets, such as Internet marketplaces for used cars or mortgages, in which consumers engage in sequential search. In particular, we consider the impact of information-brokers (experts) who can, for a fee, provide better information on true values of opportunities. We characterize the optimal search strategy given a price and the terms of service set by the expert, and show how to use this characterization to solve the monopolist expert’s service pricing problem. Our analysis enables the investigation of three common pricing schemes (pay-per-use, unlimited subscription, and package pricing) that can be used by the expert. We demonstrate that in settings characteristic of electronic marketplaces, namely those with lower search costs for consumers and lower costs of production of expert services, unlimited subscription schemes are favored. Finally, we show that the platform that connects consumers and experts can improve social welfare by subsidizing the purchase of expert services. The optimal level of subsidy forces the buyer to fully internalize the marginal cost of provision of expert services. In electronic markets, this cost is minimal, so it may be worthwhile for the platform to make the expert freely available to consumers. | Expert-mediated sequential search
S0377221713008576 | The fuzzy Analytic Hierarchy Process (fuzzy AHP) is a very popular decision making method and literally thousands of papers have been published about it. However, we find that the basic logic of this approach has problems. Methodologically, the definition and operational rules of fuzzy numbers oppose not only the main logic of fuzzy set theory, but also the basic principles of the AHP. In dealing with the outcomes, fuzzy AHP gives neither a generally accepted method to rank fuzzy numbers nor a way to check the validity of the results. We also discuss the validity of the Analytic Hierarchy/Network Process (AHP/ANP) in complex and uncertain environments and find that fuzzy ANP is a false proposition because there is no fuzzy priority in the supermatrix which provides the basis for the ANP. Although fuzzy AHP has been applied in many cases and cited hundreds of times, we hope that those who use fuzzy AHP will understand the problems associated with this method. | Fuzzy analytic hierarchy process: Fallacy of the popular methods
S0377221713008588 | The recycling of urban solid wastes is a critical point for the “closing supply chains” of many products, mainly when their value cannot be completely recovered after use. In addition to environmental aspects, the process of recycling involves technical, economic, social and political challenges for public management. For most urban solid waste, end-of-life management depends on selective collection to start the recycling process. For this reason, efficient selective collection has become a mainstream tool in the Brazilian National Solid Waste Policy. In this paper, we study effective models that might support the location planning of sorting centers in a medium-sized Brazilian city that has been discussing waste management policies over the past few years. The main goal of this work is to provide an optimal location planning design for recycling urban solid wastes that falls within the financial budget agreed between the municipal government and the National Bank for Economic and Social Development. Moreover, facility planning involves deciding on the best sites for locating sorting centers along the four-year period as well as finding ways to meet the demand for collecting recyclable materials, given that economic factors, consumer behavior and environmental awareness are inherently uncertain future outcomes. To deal with these issues, we propose a deterministic version of the classical capacitated facility location problem, and both a two-stage recourse formulation and risk-averse models to reduce the variability of the second-stage costs. Numerical results suggest that it is possible to improve the current selective collection, as well as hedge against data uncertainty, by using stochastic and risk-averse optimization models. | Effective location models for sorting recyclables in public management
S0377221713008606 | This paper addresses how asymmetric information, fads and Lévy jumps in the price of an asset affect the optimal portfolio strategies and maximum expected utilities of two distinct classes of rational investors in a financial market. We obtain the investors’ optimal portfolios and maximum expected logarithmic utilities and show that the optimal portfolio of each investor is more or less than its Merton optimal. Our approximation results suggest that jumps reduce the excess asymptotic utility of the informed investor relative to that of the uninformed investor, and hence jump risk could be helpful for market efficiency as an indirect reducer of information asymmetry. Our study also suggests that investors should pay more attention to the overall variance of the asset pricing process when jumps exist in fads models. Moreover, if fads are either very weak or very strong, then the informed investor has no utility advantage in the long run. | A jump model for fads in asset prices under asymmetric information
S0377221713008618 | In recent years, many important real-world applications have been studied as “rich” vehicle routing problems that are variants and generalizations of the well-known vehicle routing problem. In this paper we address the pickup-and-delivery version of this problem and consider a further generalization by allowing transshipment in the network. Moreover, we allow heterogeneous vehicles and a flexible fleet size. We describe mixed integer-programming formulations for the problem with and without time windows for services. The numbers of constraints and variables in the models are polynomial in the size of the problem. We discuss several problem variants that are either captured by our models or can be easily captured through simple modifications. Computational work gave promising results and confirms that transshipment in the network can indeed enhance optimization. | New mixed integer-programming model for the pickup-and-delivery problem with transshipment
S0377221713008631 | We provide an approach to optimize a block surgical schedule (BSS) that adheres to the block scheduling policy, using a new type of newsvendor-based model. We assume that strategic decisions assign a specialty to each Operating Room (OR) day and deal with BSS decisions that assign sub-specialties to time blocks, determining block duration as well as sequence in each OR each day with the objective of minimizing the sum of expected lateness and earliness costs. Our newsvendor approach prescribes the optimal duration of each block, and the best permutation, obtained by solving the sequential newsvendor problem, determines the optimal block sequence. We obtain closed-form solutions for the case in which surgery durations follow the normal distribution. Furthermore, we give a closed-form solution for optimal block duration with no-shows. | An approach to optimize block surgical schedules
S0377221713008643 | In this paper we consider the consistent partition problem in reverse convex and convex mixed-integer programming. In particular, we show that for the considered classes of convex functions, both integer and relaxed systems can be partitioned into two disjoint subsystems, each of which is consistent and defines an unbounded region. A polynomial-time algorithm to generate the partition is proposed, and an algorithm for a maximal partition is also provided. | Feasible partition problem in reverse convex and convex mixed-integer programming
S0377221713008655 | This paper deals with the Traveling Salesman Problem (TSP) with Draft Limits (TSPDL), which is a variant of the well-known TSP in the context of maritime transportation. In this recently proposed problem, draft limits are imposed due to restrictions on the port infrastructures. Exact algorithms based on three mathematical formulations are proposed and their performance compared through extensive computational experiments. Optimal solutions are reported for open instances of benchmark problems available in the literature. | Exact algorithms for the traveling salesman problem with draft limits |
S0377221713008667 | Optimisation algorithms with good anytime behaviour try to return solutions of as high quality as possible regardless of the computation time allowed. Designing algorithms with good anytime behaviour is a difficult task, because performance is often evaluated subjectively, by plotting the trade-off curve between computation time and solution quality. Yet, the trade-off curve may also be modelled as a set of mutually nondominated, bi-objective points. Using this model, we propose to combine an automatic configuration tool and the hypervolume measure, which assigns a single quality measure to a nondominated set. This allows us to improve the anytime behaviour of optimisation algorithms by automatically finding algorithmic configurations that produce the best nondominated sets. Moreover, the recently proposed weighted hypervolume measure is used here to incorporate the decision-maker’s preferences into the automatic tuning procedure. We report on the improvements achieved when applying the proposed method to two relevant scenarios: (i) the design of parameter variation strategies for MAX-MIN Ant System and (ii) the tuning of the anytime behaviour of SCIP, an open-source mixed integer programming solver with more than 200 parameters. | Automatically improving the anytime behaviour of optimisation algorithms
S0377221713008679 | The minimum cost path problem in a time-varying road network is a complicated problem. The paper proposes two heuristic methods to solve the minimum cost path problem between a pair of nodes with a time-varying road network and a congestion charge. The heuristic methods are compared with an alternative exact method using real traffic information. Also, the heuristic methods are tested in a benchmark dataset and a London road network dataset. The heuristic methods can achieve good solutions in a reasonable running time. | Finding a minimum cost path between a pair of nodes in a time-varying road network with a congestion charge |
S0377221713008680 | Maritime cabotage legislation, enacted by a coastal country, governs cargo transportation between two of its domestic ports. This paper proposes a two-phase mathematical programming model to formulate the liner hub-and-spoke shipping network design problem subject to maritime cabotage legislations: the hub location and feeder allocation problem in phase I, and the ship route design with ship fleet deployment problem in phase II. The problem in phase I is formulated as a mixed-integer linear programming model. By developing a hub port expanding technique, the problem in phase II is formulated as a vehicle routing problem with pickup and delivery. A Lagrangian relaxation based solution method is proposed to solve it. Numerical implementations based on the Asia–Europe–Oceania shipping services are carried out to analyze the impact of maritime cabotage legislations on the liner hub-and-spoke shipping network design problem. | Impact analysis of maritime cabotage legislations on liner hub-and-spoke shipping network design
S0377221713008692 | The paper presents a generalized regression technique centered on a superquantile (also called conditional value-at-risk) that is consistent with that coherent measure of risk and yields more conservatively fitted curves than classical least-squares and quantile regression. In contrast to other generalized regression techniques that approximate conditional superquantiles by various combinations of conditional quantiles, we directly and in perfect analog to classical regression obtain superquantile regression functions as optimal solutions of certain error minimization problems. We show the existence and possible uniqueness of regression functions, discuss the stability of regression functions under perturbations and approximation of the underlying data, and propose an extension of the coefficient of determination R-squared for assessing the goodness of fit. The paper presents two numerical methods for solving the error minimization problems and illustrates the methodology in several numerical examples in the areas of uncertainty quantification, reliability engineering, and financial risk management. | Superquantile regression with applications to buffered reliability, uncertainty quantification, and conditional value-at-risk |
S0377221713008709 | In this paper we study a multi-product facility location problem in a two-stage supply chain in which plants have limited production capacity, potential depots have limited storage capacity, and customer demands must be satisfied by plants via depots. Handling costs for batch processing in depots are modeled realistically by a set of capacitated handling modules. Each module can be regarded as an alliance of equipment and manpower. The problem is to locate depots, choose appropriate handling modules and determine the product flows from the plants, via opened depots, to customers, with the objective of minimizing total location, handling and transportation costs. For this problem, we develop a hybrid method. The initial lower and upper bounds are provided by applying a Lagrangean-based local search heuristic. Then a combined method of weighted Dantzig–Wolfe decomposition and path-relinking is proposed to improve the obtained bounds. Numerical experiments on 350 randomly generated instances demonstrate that our method provides high-quality solutions with gaps below 2%. | Lower and upper bounds for a two-stage capacitated facility location problem with handling costs
S0377221713008710 | We consider a variant of the multidimensional assignment problem (MAP) with decomposable costs in which the resulting optimal assignment is described as a set of disjoint stars. This problem arises in the context of multi-sensor multi-target tracking problems, where a set of measurements, obtained from a collection of sensors, must be associated to a set of different targets. To solve this problem we study two different formulations. First, we introduce a continuous nonlinear program and its linearization, along with additional valid inequalities that improve the lower bounds. Second, we state the standard MAP formulation as a set partitioning problem, and solve it via branch and price. These approaches were put to test by solving instances ranging from tripartite to 20-partite graphs of 4 to 30 nodes per partition. Computational results show that our approaches are a viable option to solve this problem. A comparative study is presented. | Integer programming models for the multidimensional assignment problem with star costs |
S0377221713008722 | We consider a single-product make-to-stock manufacturing–remanufacturing system. Returned products require remanufacturing before they can be sold. The manufacturing and remanufacturing operations are executed by the same single server, where switching from one activity to another does not involve time or cost and can be done at an arbitrary moment in time. Customer demand can be fulfilled by either newly manufactured or remanufactured products. The times for manufacturing and remanufacturing a product are exponentially distributed. Demand and used products arrive via mutually independent Poisson processes. Disposal of products is not allowed and all used products that are returned have to be accepted. Using Markov decision processes, we investigate the optimal manufacture–remanufacture policy that minimizes holding, backorder, manufacturing and remanufacturing costs per unit of time over an infinite horizon. For a subset of system parameter values we are able to completely characterize the optimal continuous-review dynamic preemptive policy. We provide an efficient algorithm based on quasi-birth–death processes to compute the optimal policy parameter values. For other sets of system parameter values, we present some structural properties and insights related to the optimal policy and the performance of some simple threshold policies. | On the optimal control of manufacturing and remanufacturing activities with a single shared server |
S0377221713008734 | This paper addresses the Patient Admission Scheduling (PAS) problem. The PAS problem entails assigning elective patients to beds while satisfying a number of hard constraints and as many soft constraints as possible, and arises at all planning levels for hospital management. A few different variants of this problem exist. In this paper we consider one such variant and propose an optimization-based heuristic building on branch-and-bound, column generation, and dynamic constraint aggregation to solve it. We achieve tighter lower bounds than previously reported in the literature and, in addition, we are able to produce new best known solutions for five out of twelve instances from a publicly available repository. | A column generation approach for solving the patient admission scheduling problem
S0377221713008746 | It is well recognized that using hot standby redundancy provides fast restoration in the case of failures. However, the redundant elements are exposed to working stresses before they are used, which reduces overall system reliability. Moreover, the cost of maintaining hot redundant elements in the operational state is usually much greater than the cost of keeping them in the cold standby mode. Therefore, there exists a trade-off between the cost of losses associated with restoration delays and the operation cost of standby elements. Such a trade-off can be obtained by designing both hot and cold redundancy types into the same system. Thus a new optimization problem arises for standby system design. The problem, referred to in this work as the optimal standby element distributing and sequencing problem (SE-DSP), is to distribute a fixed set of elements between cold and hot standby groups and to select the element initiation sequence so as to minimize the expected mission operation cost of the system while providing a desired level of system reliability. This paper first formulates and solves the SE-DSP problem for 1-out-of-N: G heterogeneous non-repairable standby systems. A numerical method is proposed for evaluating the system reliability and expected mission cost simultaneously. This method is based on discrete approximation of the time-to-failure distributions of the system elements. A genetic algorithm is used as an optimization tool for solving the formulated optimization problem. Examples are given to illustrate the considered problem and the proposed solution methodology. | Cold vs. hot standby mission operation cost minimization for 1-out-of-N systems
S0377221713008758 | This study investigates scheduling problems in which the weighted number of late jobs has to be minimized subject to deterministic machine availability constraints. These problems can be modeled as a more general job selection problem. Cases with resumable, non-resumable, and semi-resumable jobs, as well as cases without availability constraints, are investigated. The proposed efficient mixed integer linear programming approach includes possible improvements to the model, notably specialized lifted knapsack cover cuts. The method proves to be competitive compared with existing dedicated methods: numerical experiments on randomly generated instances show that all 350-job instances of the test bed are closed for the well-known problem 1|r_i|∑w_iU_i. For all investigated problem types, 98.4% of 500-job instances can be solved to optimality within 1 hour. | A mixed integer linear programming approach to minimize the number of late jobs with and without machine availability constraints
S0377221713008771 | Inbound and outbound containers are temporarily stored in the storage yard at container terminals. A combination of container demand increase and storage yard capacity scarcity create complex operational challenges for storage yard managers. This paper presents an in-depth overview of storage yard operations, including the material handling equipment used, and highlights current industry trends and developments. A classification scheme for storage yard operations is proposed and used to classify scientific journal papers published between 2004 and 2012. The paper also discusses and challenges the current operational paradigms on storage yard operations. Lastly, the paper identifies new avenues for academic research based on current trends and developments in the container terminal industry. | Storage yard operations in container terminals: Literature overview, trends, and research directions |
S0377221713008783 | Forecasting critical fractiles of the lead time demand distribution is an important problem for operations managers making newsvendor-type inventory decisions. In this paper, we propose a semi-parametric approach to forecasting the critical fractile when demand is serially correlated. Starting from a user-defined but potentially misspecified forecasting model, we use historical demand data to generate empirical forecast errors of this model. These errors are then used to (1) parametrically correct for any bias in the point forecast conditional on the recent demand history and (2) non-parametrically estimate the critical fractile of the demand distribution without imposing distributional assumptions. We present conditions under which this semi-parametric approach provides a consistent estimate of the critical fractile and evaluate its finite sample properties using simulation and real data for retail inventory planning. | A semi-parametric approach for estimating critical fractiles under autocorrelated demand |
S0377221713008795 | Congestion is a major cause of inefficiency in air transportation. A question is whether delays during the arrival phase of a flight can be absorbed more fuel-efficiently in the future. In this context, we analyze Japan’s flow strategy empirically and use queueing techniques in order to gain insight into the generation of the observed delays. Based on this, we derive a rule to balance congestion delays more efficiently between ground and en-route. Whether fuel efficiency can be further improved or not will depend on the willingness to review the concept of runway pressure. | Data and queueing analysis of a Japanese air-traffic flow |
S0377221713008801 | Global liner shipping is a competitive industry, requiring liner carriers to deploy their vessels efficiently to construct a cost-competitive network. This paper presents a novel compact formulation of the liner shipping network design problem (LSNDP) based on service flows. The formulation alleviates issues faced by arc-flow formulations with regard to handling multiple calls to the same port, a problem which has not been fully addressed by earlier LSNDP formulations. Multiple calls are handled by introducing service nodes together with port nodes in a graph representation of the problem, and by introducing numbered arcs between a port and a novel service node. An arc from a port node to a service node indicates whether a service calls the port or not. This representation allows recurrent calls of a service to a port, which previously could not be handled by LSNDP models. The model ensures strictly weekly frequencies of services, ensures that port-vessel draft capabilities are not violated, and respects vessel capacities and the number of vessels available. The profit of the generated network is maximized, i.e. the revenue of flowed cargo minus the operational costs of the network and a penalty for cargo that is not flowed. The model can be used to design liner shipping networks that utilize a container carrier’s assets efficiently and to investigate possible scenarios of changed market conditions. The model is solved as a Mixed Integer Program. Results are presented for the two smallest instances of the benchmark suite LINER-LIB-2012 presented in Brouer, Alvarez, Plum, Pisinger, and Sigurd (2013). | A service flow model for the liner shipping network design problem
S0377221713008886 | The present paper is devoted to the computation of optimal tolls on a traffic network described as a fuzzy bilevel optimization problem. As a fuzzy bilevel optimization problem we consider a bilinear optimization problem with a crisp upper level and a fuzzy lower level. An effective algorithm for computing optimal tolls for the upper-level decision-maker is developed under the assumption that the lower-level decision-maker also chooses an optimal solution. The algorithm is based on the membership function approach and provides a global optimal solution of the fuzzy bilevel optimization problem. | Computation of the optimal tolls on the traffic network
S0377221713008898 | The concepts of portfolio optimization and diversification have been instrumental in the development and understanding of financial markets and financial decision making. In light of the 60-year anniversary of Harry Markowitz’s paper “Portfolio Selection,” we review some of the approaches developed to address the challenges encountered when using portfolio optimization in practice, including the inclusion of transaction costs, portfolio management constraints, and the sensitivity to the estimates of expected returns and covariances. In addition, we selectively highlight some of the new trends and developments in the area such as diversification methods, risk-parity portfolios, the mixing of different sources of alpha, and practical multi-period portfolio optimization. | 60 Years of portfolio optimization: Practical challenges and current trends
S0377221713008904 | Hospital emergency services are closely connected to demographic issues and population changes. The methodology presented here helps to assess the effects of the forecasted demand changes on the next-year emergency unit workloads. The objective of the study is to estimate the expected volume of emergency hospital services, as measured by the number and costs of medical procedures provided to patients, to be contracted by the Polish National Health Fund (NFZ) branch at the regional level to cover the forecasted demand. A discrete-event simulation model was developed to elaborate the credible forecasts of the function components, the fundamental elements of the contract values granted by the NFZ for emergency departments for the following year. Emergency department-level data were drawn from the NFZ regional branch registry to perform a statistical analysis of emergency services provided to patients in 17 admission units and emergency wards in 2010. The model results indicate that the predicted increase in two age groups, i.e., the youngest children and the older population, will have different effects on the number and value of hospital emergency services to be considered in the contracting policy. There is potential for a discrete-event simulation to support strategic health policy decision making at the regional level. The value of this approach lies in providing estimates for the what-if scenarios related to the prognosis of changing acute demand. | Simulation modelling for contracting hospital emergency services at the regional level |
S0377221713008916 | In order to improve the robustness of a railway system in station areas, this paper introduces an iterative approach to successively optimize the train routing through station areas and to enhance this solution by applying some changes to the timetable in a tabu search environment. We present our vision on robustness and describe how this vision can be used in practice. By introducing the spread of the trains in the objective function for the route choice and timetabling module, we improve the robustness of a railway system. Using a discrete event simulation model, the performance of our algorithms is evaluated based on a case study for the Brussels’ area. The computational results indicate an average improvement in robustness of 6.2% together with a decrease in delay propagation of about 25%. Furthermore, the effect of some measures like changing the train offer to further increase the robustness is evaluated and compared. | Improving the robustness in railway station areas |
S0377221713008928 | This paper presents a binary optimization framework for modeling dynamic resource allocation problems. The framework (a) allows modeling flexibility by incorporating different objective functions, alternative sets of resources and fairness controls; (b) is widely applicable in a variety of problems in transportation, services and engineering; and (c) is tractable, i.e., provides near optimal solutions fast for large-scale instances. To justify these assertions, we model and report encouraging computational results on three widely studied problems – the Air Traffic Flow Management, the Aircraft Maintenance Problems and Job Shop Scheduling. Finally, we provide several polyhedral results that offer insights on its effectiveness. | Dynamic resource allocation: A flexible and tractable modeling framework |
S0377221713008941 | In this paper the joint maintenance and spare parts ordering problem for multiple identical operating items is studied. The operating items may suffer two types of silent failures: a minor failure, which results in item malfunctioning, and a major failure, which renders the item completely out-of-function. Inspections are periodically held to detect any failures and the inspected items are preventively maintained, repaired or replaced according to their condition. Two ordering policies are investigated to supply the necessary spare parts: a periodic review and a continuous review policy. The expected total maintenance and inventory cost per time unit is derived and the proposed models are optimized for real case data. In addition, the sensitivity of the proposed models is studied through numerical examples and the effect of some key problem characteristics on the optimal decisions is discussed. | Joint optimization of spare parts ordering and maintenance policies for multiple identical items subject to silent failures
S0377221713008953 | The aircraft maintenance routing problem is one of the most studied problems in the airline industry. Most of the studies focus on finding a unique rotation that will be repeated by each aircraft in the fleet with a certain lag. In practice, using a single rotation for the entire fleet is not applicable due to stochasticity and operational considerations in the airline industry. In this study, our aim is to develop a fast responsive methodology which provides maintenance-feasible routes for each aircraft in the fleet over a weekly planning horizon with the objective of maximizing utilization of the total remaining flying time of the fleet. For this purpose, we formulate an integer linear programming (ILP) model by modifying the connection network representation. The proposed model is solved by using branch-and-bound under different priority settings for variables to branch on. A heuristic method based on compressed annealing is applied to the same problem and a comparison of the exact and heuristic methods is provided. The model and the heuristic method are extended to incorporate maintenance capacity constraints. Additionally, a rolling horizon based procedure is proposed to update the existing routes when some of the maintenance decisions are already fixed. | Operational aircraft maintenance routing problem with remaining time consideration
S0377221713008965 | The recent contribution by Cheng et al. (2013) presents a variant of the traditional radial input- and output-oriented efficiency measures whereby original values are replaced with absolute values. This comment points out that the article contains some imprecisions and therefore presents some further results. | A note on a variant of radial measure capable of dealing with negative inputs and outputs in DEA
S0377221713008977 | We investigate the make-buy decision of a manufacturer who does not know its potential suppliers’ capabilities. In order to mitigate the consequences of this limited knowledge, the manufacturer can either perform in-house or audit suppliers. An audit reveals the audited supplier’s capability such that the manufacturer can base the make-buy decision on the audit outcome; the manufacturer might also learn from the audit and update its beliefs about the capabilities of the unaudited suppliers. Interestingly, using a very general model we find that the manufacturer’s decision can be independent of both the number of available suppliers and of the mechanism it uses to update its beliefs after an audit. We illustrate our general model by considering a possible application in which a manufacturer makes outsource-audit decisions when the suppliers are more cost-effective. However, when outsourcing to a supplier, the manufacturer faces uncertainty about whether or not the delivered task will integrate well with the other parts of the project. | Outsourcing to suppliers with unknown capabilities
S0377221713008989 | Advertising plays an important role in affecting consumer demand. Socially responsible firms are expected to use advertising judiciously, limiting advertising of “bad” products. An example is the advertising initiative adopted by several major food manufacturers to limit the advertising of unhealthy food categories to children. Such initiatives are based on the belief that less advertising will lead to less consumption of these unhealthy food categories. However, food manufacturers usually distribute products to consumers through retailers whose advertising is not restricted by those initiative programs. In this paper, we examine the effectiveness of such an advertising initiative in a leader–follower supply chain with one manufacturer and one retailer. We assume that both the manufacturer and the retailer can choose to participate in the advertising initiative by reducing their advertising levels. The problem is formulated as a Stackelberg game. We show that the effectiveness of the advertising initiative critically depends on the leader’s participation in the initiative. If the leader is willing to reduce the advertising level below a threshold, the market coverage of the product can drop significantly. On the other hand, if only the follower participates in the initiative, the market coverage is likely to expand in the majority of cases. Managerial implications of this research are also discussed. | On the impact of advertising initiatives in supply chains
S0377221713008990 | This article studies a two-firm dynamic pricing model with random production costs. The firms produce the same perishable products over an infinite time horizon when production (or operation) costs are random. In each period, each firm determines its price and production levels based on its current production cost and its opponent’s previous price level. We use an alternating-move game to model this problem and show that there exists a unique subgame perfect Nash equilibrium in production and pricing decisions. We provide a closed-form solution for the firm’s pricing policy. Finally, we study the game in the case of incomplete information, when both or one of the firms do not have access to the current prices charged by their opponents. | Dynamic pricing with uncertain production cost: An alternating-move approach |
S0377221713009004 | The subject of this paper is the problem of finding the optimal replenishment schedule for an inventory, subject to time-dependent demand and deterioration, within a finite time planning horizon. It is shown that taking inflation into account has a profound effect on the solution of the problem. For instance, there is a critical number of replenishment periods, in excess of which the optimal schedule is characterized by the inclusion of token orders at the end of the planning horizon. This and other conclusions, obtained via a careful mathematical analysis of the problem, rectify those of earlier studies. | Inflation and the optimal inventory replenishment schedule within a finite planning horizon |
S0377221713009016 | The identification of different dynamics in sequential data has become an everyday need in scientific fields such as marketing, bioinformatics, finance, or social sciences. Contrary to cross-sectional or static data, this type of observation (also known as stream data, temporal data, longitudinal data or repeated measures) is more challenging as one has to incorporate data dependency in the clustering process. In this research we focus on clustering categorical sequences. The method proposed here combines model-based and heuristic clustering. In the first step, the categorical sequences are transformed by an extension of the hidden Markov model into a probabilistic space, where a symmetric Kullback–Leibler distance can operate. Then, in the second step, using hierarchical clustering on the matrix of distances, the sequences can be clustered. This paper illustrates the enormous potential of this type of hybrid approach using a synthetic data set as well as the well-known Microsoft dataset with website users’ search patterns and a survey on job career dynamics. | Mining categorical sequences from data using a hybrid clustering method
S0377221713009028 | This paper presents a new combinatorial optimization problem that can be used to model the deployment of broadband telecommunications systems in which optical fiber cables are installed between a central office and a number of end-customers. In this capacitated network design problem the installation of optical fiber cables with sufficient capacity is required to carry the traffic from the central office to the end-customers at minimum cost. In the situation motivating this research the network does not necessarily need to connect all customers (or at least not with the best available technology). Instead, some nodes are potential customers. The aim is to select the customers to be connected to the central server and to choose the cable capacities to establish these connections. The telecom company takes the strategic decision of fixing a percentage of customers that should be served, and aims for minimizing the total cost of the network providing this minimum service. For that reason the underlying problem is called the Prize-Collecting Local Access Network Design problem (PC-LAN). We propose a branch-and-cut approach for solving small instances. For large instances of practical importance, our approach turns into a mixed integer programming (MIP) based heuristic procedure which combines the cutting-plane algorithm with a multi-start heuristic algorithm. The multi-start heuristic algorithm starts with fractional values of the LP-solutions and creates feasible solutions that are later improved using a local improvement strategy. Computational experiments are conducted on small instances from the literature. In addition, we introduce a new benchmark set of real-world instances with up to 86,000 nodes, 116,000 edges and 1500 potential customers. Using our MIP-based approach we are able to solve most of the small instances to proven optimality. For more difficult instances, we are not only able to provide high-quality feasible solutions, but also to provide a certificate of their quality by calculating lower bounds on the optimal solution values. | A MIP-based approach to solve the prize-collecting local access network design problem
S0377221713009041 | In production systems of automobile manufacturers, multi-variant products are assembled on paced final assembly lines. The assignment of operations to workplaces and workers determines the productivity of the manufacturing process. In research, various exact and heuristic solution procedures have been developed for different versions of the so-called assembly line balancing problem. This paper shows that there is almost no solution procedure so far which includes all line balancing restrictions occurring in real-world settings. We present a new general and flexible as well as fast heuristic procedure which meets all relevant requirements of line balancing in the automotive industry. It is based on the successful multi-Hoffmann heuristic of Fleszar and Hindi [European Journal of Operational Research 145/3 (2003), 606–620] which is enhanced and extended to consider the required restrictions. Computational experiments show that the new procedure is competitive with state-of-the-art procedures even for specialized problems and able to compute high-quality feasible solutions in different test settings typical of real-world decision situations. | Enhanced multi-Hoffmann heuristic for efficiently solving real-world assembly line balancing problems in automotive industry
S0377221713009053 | In order to enable domestic commercial banks to be more competitive globally, the Taiwanese government has twice attempted to financially restructure them, in 2001 and 2004. Different from other studies which use deterministic analyses to measure changes in performance between two periods, this paper adopts probabilistic analysis to take the uncertainty related to certain factors into account. Data from six years, from 2005 to 2010, are divided into two periods, 2005–2007 and 2008–2010, to calculate the global Malmquist productivity index (MPI) as a measure of the change in performance. By assuming beta distributions for the data, a Monte Carlo simulation is conducted to find the distribution of the MPI. The results show that, in general, the performance of the commercial banks has indeed improved. While conventional deterministic analyses may mislead top managers and make them overconfident about results that are actually uncertain, probabilistic analysis can produce more reliable information that can thus lead to better decisions. | Measuring performance improvement of Taiwanese commercial banks under uncertainty |
S0377221713009065 | We study the transit frequency optimization problem, which aims to determine the time interval between subsequent buses for a set of public transportation lines given by their itineraries, i.e., sequences of stops and street sections. The solution should satisfy a given origin–destination demand and a constraint on the available fleet of buses. We propose a new mixed integer linear programming (MILP) formulation for an already existing model, originally formulated as a nonlinear bilevel one. The proposed formulation is able to solve to optimality real small-sized instances of the problem using MILP techniques. For solving larger instances we propose a metaheuristic whose accuracy is estimated by comparing against exact results (when possible). Both the exact and approximate approaches are tested on existing cases, including a real one related to a small city whose public transportation system comprises 13 lines. The magnitude of the improvement of that system obtained by applying the proposed methodologies is comparable with the improvements reported in the literature for other real systems. Also, we investigate the applicability of the metaheuristic to a larger-sized real case, comprising more than 130 lines. | Frequency optimization in public transportation systems: Formulation and metaheuristic approach
S0377221713009077 | Meca et al. (2004) studied a class of inventory games which arise when a group of retailers who observe demand for a common item decide to cooperate and make joint orders with the EOQ policy. In this paper, we extend their model to the situation where a retailer’s delay in payments is permitted by the supplier. We introduce the corresponding inventory game with permissible delay in payments, and prove that its core is nonempty. Then, a core allocation rule is proposed which can be reached through a population monotonic allocation scheme. Under this allocation rule, the grand coalition is shown to be stable from a farsighted point of view. | Inventory games with permissible delay in payments
S0377221713009089 | We consider two linear project time–cost tradeoff problems with multiple milestones. Unless a milestone is completed on time, penalty costs for tardiness may be imposed. However, these penalty costs can be avoided by compressing the processing times of certain jobs that require additional resources or costs. Our model describes these penalty costs as the total weighted number of tardy milestones. The first problem tries to minimize the total weighted number of tardy milestones within the budget for total compression costs, while the second problem tries to minimize the total weighted number of tardy milestones plus total compression costs. We develop a linear programming formulation for the case with a fixed number of milestones. For the case with an arbitrary number of milestones, we show that under completely ordered jobs, the first problem is NP-hard in the ordinary sense while the second problem is polynomially solvable. | Complexity results for the linear time–cost tradeoff problem with multiple milestones and completely ordered jobs
S0377221713009090 | One of the largest bottlenecks in iron and steel production is the steelmaking-continuous casting (SCC) process, which consists of steel-making, refining and continuous casting. The SCC scheduling is a complex hybrid flowshop (HFS) scheduling problem with the following features: job grouping and precedence constraints, no idle time within the same group of jobs and setup time constraints on the casters. This paper first models the scheduling problem as a mixed-integer programming (MIP) problem with the objective of minimizing the total weighted earliness/tardiness penalties and job waiting. Next, a Lagrangian relaxation (LR) approach relaxing the machine capacity constraints is presented to solve the MIP problem, which decomposes the relaxed problem into two tractable subproblems by separating the continuous variables from the integer ones. Additionally, two methods, i.e., the boundedness detection method and time horizon method, are explored to handle the unboundedness of the decomposed subproblems in iterations. Furthermore, an improved subgradient level algorithm with global convergence is developed to solve the Lagrangian dual (LD) problem. The computational results and comparisons demonstrate that the proposed LR approach outperforms the conventional LR approaches in terms of solution quality, with a significantly shorter running time being observed. | A novel Lagrangian relaxation approach for a hybrid flowshop scheduling problem in the steelmaking-continuous casting process |
S0377221713009107 | In this work we consider scheduling problems where a sequence of assignments from products to machines – or from tasks to operators, or from workers to resources – has to be determined, with the goal of minimizing the costs (=money, manpower, and/or time) that are incurred by the interplay between those assignments. To account for the different practical requirements (e.g. few changes between different products/tasks on the same machine/operator, few production disruptions, or few changes of the same worker between different resources), we employ different objective functions that are all based on elementary combinatorial properties of the schedule matrix. We propose simple and efficient algorithms to solve the corresponding optimization problems, and provide hardness results where such algorithms most likely do not exist. | Scheduling with few changes |
S0377221713009119 | Minimizing two different upper bounds of the matrix which generates search directions of the nonlinear conjugate gradient method proposed by Dai and Liao, two modified conjugate gradient methods are proposed. Under proper conditions, it is briefly shown that the methods are globally convergent when the line search fulfills the strong Wolfe conditions. Numerical comparisons between the implementations of the proposed methods and the conjugate gradient methods proposed by Hager and Zhang, and Dai and Kou, are made on a set of unconstrained optimization test problems of the CUTEr collection. The results show the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré. | The Dai–Liao nonlinear conjugate gradient method with optimal parameter choices |
S0377221713009120 | In this paper, we consider an electricity market that consists of a day-ahead and a balancing settlement, and includes a number of stochastic producers. We first introduce two reference procedures for scheduling and pricing energy in the day-ahead market: on the one hand, a conventional network-constrained auction purely based on the least-cost merit order, where stochastic generation enters with its expected production and a low marginal cost; on the other, a counterfactual auction that also accounts for the projected balancing costs using stochastic programming. Although the stochastic clearing procedure attains higher market efficiency in expectation than the conventional day-ahead auction, it suffers from fundamental drawbacks with a view to its practical implementation. In particular, it requires flexible producers (those that make up for the lack or surplus of stochastic generation) to accept losses in some scenarios. Using a bilevel programming framework, we then show that the conventional auction, if combined with a suitable day-ahead dispatch of stochastic producers (generally different from their expected production), can substantially increase market efficiency and emulate the advantageous features of the stochastic optimization ideal, while avoiding its major pitfalls. A two-node power system serves as both an illustrative example and a proof of concept. Finally, a more realistic case study highlights the main advantages of a smart day-ahead dispatch of stochastic producers. | Electricity market clearing with improved scheduling of stochastic production |
S0377221713009132 | Jiang et al. proposed an algorithm to solve the inverse minimum cost flow problems under the bottleneck-type weighted Hamming distance [Y. Jiang, L. Liu, B. Wu, E. Yao, Inverse minimum cost flow problems under the weighted Hamming distance, European Journal of Operational Research 207 (2010) 50–54]. In this note, it is shown that their proposed algorithm does not correctly solve the inverse problem in the general case due to some incorrect results in that article. Then, a new algorithm is proposed to solve the inverse problem in strongly polynomial time. The algorithm uses the linear search technique and solves a shortest path problem in each iteration. | Note on “Inverse minimum cost flow problems under the weighted Hamming distance”
S0377221713009144 | This research proposes a solution framework based on discrete-event simulation, sequential bifurcation (SB) and response surface methodology (RSM) to address a multi-response optimization problem inherent in an auto parts supply chain. The objective is to identify the most efficient operating setting that would maximize the logistics performance after the expansion of the assembly plant’s capacity due to market growth. In the proposed framework, we first construct a comprehensive simulation as a platform to model the physical flow of the auto parts operations. We then apply the SB to identify the most important factors that influence system performance. To determine the optimal levels of these key factors, we employ RSM to develop metamodels that best describe the relationship between key decision variables and the multiple system responses. We adapt the Derringer–Suich’s desirability function to find the optimal solution of the metamodels. Computational study shows that our method enables the greatest improvement on system performance. The proposed method helps the case firm develop insights into system dynamics and to optimize the operating condition. It realizes the performance objective of the auto parts supply chain without the need for additional fiscal investment. | Optimal design of the auto parts supply chain for JIT operations: Sequential bifurcation factor screening and multi-response surface methodology |
S0377221713009338 | Managers, typically, are unaware of the significant impact their decisions could have on the random mechanism driving a data generating process. Here, a new parametric Bayesian technique is introduced that would allow managers to obtain an estimate of the impact of their decisions on the stochastic process driving the data; this, in turn, should enhance a company’s overall decision-making capabilities. This general approach to modeling decision-dependency is carried out via an efficient Markov chain Monte Carlo method. A simulated example, and a real-life example, using historical maintenance and failure time data from a system at the South Texas Project Nuclear Operating Company, exemplifies the paper’s theoretical contributions. Conclusive evidence of decision dependence in the failure time distribution is reported, which in turn points to an optimal maintenance policy that results in potentially large financial savings to the Texas-based company. | Decision dependent stochastic processes |
S0377221713009351 | Ambulance diversion (AD) is used by emergency departments (EDs) to relieve congestion by requesting ambulances to bypass the ED and transport patients to another facility. We study optimal AD control policies using a Markov Decision Process (MDP) formulation that minimizes the average time that patients wait beyond their recommended safety time threshold. The model assumes that patients can be treated in one of two treatment areas and that the distribution of the time to start treatment at the neighboring facility is known. Assuming Poisson arrivals and exponential times for the length of stay in the ED, we show that the optimal AD policy follows a threshold structure, and explore the behavior of optimal policies under different scenarios. We analyze the value of information on the time to start treatment in the neighboring hospital, and show that optimal policies depend strongly on the congestion experienced by the other facility. Simulation is used to compare the performance of the proposed MDP model to that of simple heuristics under more realistic assumptions. Results indicate that the MDP model performs significantly better than the tested heuristics under most cases. Finally, we discuss practical issues related to the implementation of the policies prescribed by the MDP. | Optimal control policies for ambulance diversion |
S0377221713009363 | In this paper we study a facility location problem in the plane in which a single point (facility) and a rapid transit line (highway) are simultaneously located in order to minimize the total travel time from the clients to the facility, using the L1 or Manhattan metric. The rapid transit line is given by a segment with any length and orientation, and is an alternative transportation line that can be used by the clients to reduce their travel time to the facility. We study the variant of the problem in which clients can enter and exit the highway at any point. We provide an O(n³)-time algorithm that solves this variant, where n is the number of clients. We also present a detailed characterization of the solutions, which depends on the speed given along the highway. | Locating a single facility and a high-speed line
S0377221713009375 | In this paper, a new general scalarization technique for solving multiobjective optimization problems is presented. After studying the properties of this formulation, two problems as special cases of this general formula are considered. It is shown that some well-known methods such as the weighted sum method, the ε-constraint method, the Benson method, the hybrid method and the elastic ε-constraint method can be subsumed under these two problems. Then, considering approximate solutions, some relationships between ε-(weakly, properly) efficient points of a general (without any convexity assumption) multiobjective optimization problem and ε-optimal solutions of the introduced scalarized problem are achieved. | A combined scalarizing method for multiobjective programming problems
S0377221713009387 | We investigate an automobile supply chain where a manufacturer and a retailer serve heterogeneous consumers with electric vehicles (EVs) under a government’s price-discount incentive scheme that involves a price discount rate and a subsidy ceiling. We show that the subsidy ceiling is more effective in influencing the optimal wholesale pricing decision of the manufacturer with a higher unit production cost. However, the discount rate is more effective for the manufacturer with a lower unit production cost. Moreover, the expected sales are increasing in the discount rate but may be decreasing in the subsidy ceiling. Analytic results indicate that an effective incentive scheme should include both a discount rate and a subsidy ceiling. We also derive the necessary condition for the most effective discount rate and subsidy ceiling that maximize the expected sales of EVs, and obtain a unique discount rate and subsidy ceiling that most effectively improve the manufacturer’s incentive for EV production. | Supply chain analysis under a price-discount incentive scheme for electric vehicles |
S0377221713009399 | We study the operational implications from competition in the provision of healthcare services, in the context of national public healthcare systems in Europe. Specifically, we study the potential impact of two alternative ways through which policy makers have introduced such competition: (i) via the introduction of private hospitals to operate alongside public hospitals and (ii) via the introduction of increased patient choice to grant European patients the freedom to choose the country they receive treatment at. We use a game-theoretic framework with a queueing component to capture the interactions among the patients, the hospitals and the healthcare funders. Specifically, we analyze two different sequential games and obtain closed form expressions for the patients’ waiting time and the funders’ reimbursement cost in equilibrium. We show that the presence of a private provider can be beneficial to the public system: the patients’ waiting time will decrease and the funders’ cost can decrease under certain conditions. Also, we show that the cross-border healthcare policy, which increases patient mobility, can also be beneficial to the public systems: when welfare requirements across countries are sufficiently close, all funders can reduce their costs without increasing the patients’ waiting time. Our analysis implies that in border regions, where the cost of crossing the border is low, “outsourcing” the high-cost country’s elective care services to the low-cost country is a viable strategy from which both countries’ systems can benefit. | Introducing competition in healthcare services: The role of private care and increased patient mobility |
S0377221713009405 | Internal transport operations connect the seaside, yard side, and landside processes at container terminals. This paper presents an in-depth overview of transport operations and the material handling equipment used, highlights current industry trends and developments, and proposes a new classification scheme for transport operations and scientific journal papers published up to 2012. The paper also discusses and challenges current operational paradigms of transport operations. Lastly, the paper identifies new avenues for academic research based on current trends and developments in the container terminal industry. | Transport operations in container terminals: Literature overview, trends, research directions and classification scheme |
S0377221713009417 | In order to evaluate the performance of socially responsible investment (SRI) funds, we propose some models which use data envelopment analysis (DEA) and can be computed in all phases of the business cycle. These models focus on the most crucial elements of an investment in mutual funds. In the literature both constant and variable returns to scale DEA models have been used to evaluate the performance of mutual funds. We carry out an empirical investigation on European SRI equity funds to test the presence of returns to scale (RTS). Another aspect taken into account by the empirical investigation is the measurement of the degree of social responsibility of SRI equity funds in various European countries. In addition, we analyse the performance of the funds considered with the different DEA models proposed, which differ in the way the ethical objective is taken into account. Moreover, the paper focuses on another crucial issue regarding socially responsible investing: the comparison of the performances between SRI and non-SRI funds. The empirical study suggests that the ethical objective can be pursued without having to renounce financial rewards. | Constant and variable returns to scale DEA models for socially responsible investment funds |
S0377221713009429 | Fitness landscape theory is a mathematical framework for the numerical analysis of search algorithms on combinatorial optimization problems. We study a representation of a fitness landscape as a weighted directed graph. We consider out-forest and in-forest structures in this graph and establish important relationships among the forest structures of a directed graph, the spectral properties of the Laplacian matrices, and the numbers of local optima of the landscape. These relationships provide a new approach for computing the numbers of local optima for various problem instances and neighborhood structures. | In and out forests on combinatorial landscapes
S0377221713009430 | In this article, we review published studies that consider the solution of the one-dimensional cutting stock problem (1DCSP) with the possibility of using leftovers to meet future demands, provided they are long enough. The one-dimensional cutting stock problem with usable leftovers (1DCSPUL) is frequently encountered in practical settings, but it is often not dealt with explicitly. For each work reviewed, we present the application, the mathematical model if one is proposed, and comments on the computational results obtained. The approaches are organized into three classes: heuristics, item-oriented, or cutting pattern-oriented. | The one-dimensional cutting stock problem with usable leftovers – A survey
S0377221713009442 | The research on financial portfolio optimization was originally developed by Markowitz (1952). It has since been extended in many directions, among them the portfolio insurance theory introduced by Leland and Rubinstein (1976) with the “Option Based Portfolio Insurance” (OBPI) and by Perold (1986) with the “Constant Proportion Portfolio Insurance” (CPPI) method. The recent financial crisis has dramatically emphasized the interest of such portfolio strategies. This paper examines the CPPI method when the multiple is allowed to vary over time. To control the risk of such portfolio management, a quantile approach is introduced together with expected shortfall criteria. In this framework, we provide explicit upper bounds on the multiple as a function of past asset returns and volatilities. These values can be statistically estimated from financial data, using for example ARCH-type models. We show how the multiple can be chosen in order to satisfy the guarantee condition, at a given level of probability and for various financial market conditions. | Portfolio insurance: Gap risk under conditional multiples
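The idea of a conditional (capped) multiple in the CPPI abstract above can be sketched numerically. The cap used here — one over an empirical quantile of past drops — is a crude stand-in for the paper's quantile/expected-shortfall bounds; all function names and parameter values are illustrative assumptions, not the authors' model:

```python
import numpy as np

def cppi_path(returns, v0=100.0, floor_frac=0.9, base_mult=4.0,
              safe_rate=0.0, window=20, quantile=0.95):
    """Simulate a CPPI strategy with a time-varying (capped) multiple.

    At each step the multiple may not exceed 1 / (estimated worst-case
    one-period drop of the risky asset), estimated from recent returns.
    """
    v = v0
    floor = floor_frac * v0
    path = [v]
    for t, r in enumerate(returns):
        hist = returns[max(0, t - window):t]
        if len(hist) >= 5:
            worst_drop = np.quantile(-np.asarray(hist), quantile)
            cap = 1.0 / max(worst_drop, 1e-6)
            m = min(base_mult, cap)
        else:
            m = base_mult
        cushion = max(v - floor, 0.0)
        exposure = m * cushion              # invested in the risky asset
        v = exposure * (1 + r) + (v - exposure) * (1 + safe_rate)
        floor *= (1 + safe_rate)            # floor accrues at the safe rate
        path.append(v)
    return np.array(path)
```

As long as every realized one-period drop stays below 1/m, the portfolio value never falls below the floor — the guarantee condition the paper formalizes probabilistically.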
S0377221713009454 | This paper is concerned with an algorithmic solution to the split common fixed point problem in Hilbert spaces. Our method can be regarded as a variant of the “viscosity approximation method”. Under very classical assumptions, we establish a strong convergence theorem for operators belonging to the wide class of quasi-nonexpansive operators. In contrast with other related processes, our algorithm does not require any estimate of a spectral radius. The technique of analysis developed in this work is new and can be applied to many other fixed point iterations. Numerical experiments are also performed on an inverse heat problem. | A viscosity method with no spectral radius requirements for the split common fixed point problem
S0377221713009466 | We consider a network design problem that generalizes the hop and diameter constrained Steiner tree problem as follows: Given an edge-weighted undirected graph with two disjoint subsets representing roots and terminals, find a minimum-weight subtree that spans all the roots and terminals so that the number of hops between each relevant node and an arbitrary root does not exceed a given hop limit H. The set of relevant nodes may be equal to the set of terminals, or to the union of terminals and root nodes. This article proposes integer linear programming models utilizing one layered graph for each root node. Different possibilities to relate solutions on each of the layered graphs as well as additional strengthening inequalities are then discussed. Furthermore, theoretical comparisons between these models and to previously proposed flow- and path-based formulations are given. To solve the problem to optimality, we implement branch-and-cut algorithms for the layered graph formulations. Our computational study shows their clear advantages over previously existing approaches. | Hop constrained Steiner trees with multiple root nodes |
S0377221713009478 | It is well known that the influence relation orders the voters in the same way as the classical Banzhaf and Shapley–Shubik indices do when they are extended to voting games with abstention (VGA) in the class of complete games. Moreover, all hierarchies for the influence relation are achievable in the class of complete VGA. The aim of this paper is twofold. Firstly, we show that all hierarchies are achievable in a subclass of weighted VGA, the class of weighted games for which a single weight is assigned to voters. Secondly, we conduct a partial study of achievable hierarchies within the subclass of H-complete games, that is, complete games under stronger versions of the influence relation. | Achievable hierarchies in voting games with abstention
S0377221713009491 | This paper presents two new dynamic programming (DP) algorithms to find the exact Pareto frontier for the bi-objective integer knapsack problem. First, a property of the traditional DP algorithm for the multi-objective integer knapsack problem is identified. The first algorithm is developed by directly using this property. The second algorithm is a hybrid DP approach using the concept of bound sets, in which the property is used in conjunction with the bound sets. Numerical experiments showed that a promising partial solution can sometimes be discarded if the solutions of the linear relaxation of the subproblem associated with the partial solution are used directly to estimate an upper bound set; that is, the upper bound set is underestimated. An extended upper bound set is therefore proposed on the basis of the set of linear relaxation solutions, and the efficiency of the hybrid algorithm is improved by tightening the proposed upper bound set. Numerical results obtained from different types of bi-objective instances show the effectiveness of the proposed approach. | Dynamic programming algorithms for the bi-objective integer knapsack problem
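The state-space idea behind such DP algorithms (without the bound sets) can be sketched as follows. This is a plain textbook Pareto DP — states indexed by used capacity, each carrying its set of nondominated profit vectors — not the authors' algorithms:

```python
def prune(points):
    """Keep only the nondominated (maximization) profit vectors."""
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    frontier, best2 = [], -1
    for p1, p2 in pts:
        if p2 > best2:
            frontier.append((p1, p2))
            best2 = p2
    return set(frontier)

def pareto_knapsack(items, capacity):
    """Exact Pareto frontier for a bi-objective 0-1 knapsack.

    items: list of (weight, profit1, profit2).
    """
    # states[w] = nondominated (p1, p2) vectors achievable with weight w
    states = {0: {(0, 0)}}
    for w_i, p1, p2 in items:
        new_states = {w: set(s) for w, s in states.items()}
        for w, profs in states.items():
            nw = w + w_i
            if nw > capacity:
                continue
            bucket = new_states.setdefault(nw, set())
            for a, b in profs:
                bucket.add((a + p1, b + p2))
        states = {w: prune(s) for w, s in new_states.items()}
    # merge all capacity levels and keep the global frontier
    return sorted(prune(set().union(*states.values())))
```

For items `[(2, 3, 1), (3, 1, 4), (4, 5, 2)]` and capacity 5, the frontier is `[(4, 5), (5, 2)]`: the two undominated profit vectors among all feasible subsets.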
S0377221713009508 | The simple assembly line balancing problem (SALBP) is a well-studied NP-complete problem for which a new problem database of generated instances was published in 2013. This paper describes the application of a branch, bound, and remember (BB&R) algorithm using the cyclic best-first search strategy to this new database to produce provably exact solutions for 86% of the unsolved problems in this database. A new backtracking rule to save memory is employed to allow the BB&R algorithm to solve many of the largest problems in the database. | An application of the branch, bound, and remember algorithm to a new simple assembly line balancing dataset |
S0377221713009521 | Cardinal and ordinal inconsistencies are important and popular research topics in the study of decision making with pair-wise comparison matrices (PCMs). Few of the currently employed tactics can deal with both cardinal and ordinal inconsistency issues simultaneously in one model, and most depend heavily on the method chosen for weight (priority) derivation, or on the closest matrix obtained by an optimization method, which may change many of the original values. In this paper, we propose a Hadamard product induced bias matrix model, which only requires the data in the original matrix to identify and adjust the cardinally inconsistent element(s) in a PCM. Through graph theory and numerical examples, we show that the adapted Hadamard model is effective in identifying and eliminating ordinal inconsistencies. Also, for the most inconsistent element identified in the matrix, we develop innovative methods to improve the consistency of a PCM. The proposed model depends only on the original matrix, is independent of the methods chosen to derive the priority vectors, and preserves most of the original information in matrix A, since only the most inconsistent element(s) need(s) to be modified. Our method is much easier to implement than any of the existing models, and the values it recommends for replacement outperform those derived from the literature. It significantly enhances matrix consistency and improves the reliability of PCM decision making. | Enhancing data consistency in decision matrix: Adapting Hadamard model to mitigate judgment contradiction
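A closely related induced-bias construction (a simplified stand-in, not the paper's exact Hadamard-product model) illustrates the idea of locating inconsistency from the original matrix alone: for an n×n PCM A, the matrix C = AA − nA is zero whenever A is fully consistent, and its largest-magnitude entry points at the most cardinally inconsistent comparison.

```python
import numpy as np

def most_inconsistent_entry(A):
    """Locate the entry of a pairwise comparison matrix with the
    largest induced bias, using C = A @ A - n * A.

    For a fully consistent PCM (a_ij = a_ik * a_kj for all k),
    (A @ A)_ij = sum_k a_ik * a_kj = n * a_ij, so C vanishes.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = A @ A - n * A
    i, j = divmod(int(np.abs(C).argmax()), n)
    return (i, j), C
```

For example, in the reciprocal matrix with rows `[1, 2, 8]`, `[0.5, 1, 2]`, `[0.125, 0.5, 1]`, entry (0, 2) is flagged because a13 = 8 disagrees with a12 · a23 = 4.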
S0377221713009533 | A connected dominating set (CDS) is commonly used to model a virtual backbone of a wireless network. To bound the distance that information must travel through the network, we explicitly restrict the diameter of a CDS to be no more than s, leading to the concept of a dominating s-club. We prove that for any fixed positive integer s it is NP-complete to determine if a graph has a dominating s-club, even when the graph has diameter s + 1. As a special case, it is NP-complete to determine if a graph of diameter two has a dominating clique. We then propose a compact integer programming formulation for the related minimization problem, enhance the approach with variable fixing rules and valid inequalities, and present computational results. | On connected dominating sets of restricted diameter
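The two defining conditions of a dominating s-club — domination of all vertices, plus diameter at most s inside the chosen set — can be checked by brute force on tiny graphs. This is an illustrative helper only, not the paper's integer programming approach:

```python
from itertools import combinations

def is_dominating_s_club(adj, D, s):
    """Check whether D is a dominating s-club in the graph adj
    (dict: vertex -> set of neighbors)."""
    V = set(adj)
    D = set(D)
    # domination: every vertex is in D or adjacent to a member of D
    if any(v not in D and not (adj[v] & D) for v in V):
        return False
    # diameter of the subgraph induced by D, via BFS restricted to D
    for src in D:
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u] & D:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        if any(v not in dist or dist[v] > s for v in D):
            return False
    return True

def min_dominating_s_club(adj, s):
    """Smallest dominating s-club by exhaustive search, or None."""
    for k in range(1, len(adj) + 1):
        for D in combinations(adj, k):
            if is_dominating_s_club(adj, D, s):
                return set(D)
    return None
```

On the 4-vertex path 0-1-2-3, the minimum dominating 1-club (i.e., dominating clique) is {1, 2}: the middle edge dominates both endpoints.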
S0377221713009545 | This paper aims at resolving a major obstacle to the practical usage of time-consistent risk-averse decision models. The recursive objective function, generally used to ensure time consistency, is complex and has no clear or direct interpretation. Practitioners rather choose a simpler and more intuitive formulation, even though it may lead to a time-inconsistent policy. Based on rigorous mathematical foundations, we promote the practical usage of time-consistent models by providing practitioners with an intuitive economic interpretation for the referred recursive objective function. We also discourage time-inconsistent models by arguing that the associated policies are sub-optimal. We develop a new methodology to compute the sub-optimality gap associated with a time-inconsistent policy, providing practitioners with an objective method to quantify the practical consequences of time inconsistency. Our results hold for a quite general class of problems and we choose, without loss of generality, a CVaR-based portfolio selection application to illustrate the developed concepts. | Time consistency and risk averse dynamic decision models: Definition, interpretation and practical consequences