FileName | Abstract | Title |
---|---|---|
S0377221714004974 | Soft OR tools have increasingly been used to support the strategic development of companies at operational and managerial levels. However, we still lack OR applications that can be useful in dealing with the “implementation gap”, understood as the scarcity of resources available to organizations seeking to align their existing processes and structures with a new strategy. In this paper we contribute to filling that gap, describing an action research case study where we supported strategy implementation in a Latin American multinational corporation through a soft OR methodology. We enhanced the ‘Methodology to support organizational self-transformation’, inspired by the Viable System Model, with substantive improvements in data collection and analyses. Those adjustments became necessary to facilitate second order learning and agreements on required structural changes among a large number of participants. This case study contributes to the soft OR and strategy literature with insights about the promise and constraints of this soft OR methodology to collectively structure complex decisions that support organizational redesign and strategy implementation. | A methodology for supporting strategy implementation based on the VSM: A case study in a Latin-American multi-national |
S0377221714004986 | Given a set of players and the cost of each possible coalition, the question we address is which coalitions should be formed. We formulate mixed integer linear programming models for this problem, considering core stability and strong equilibrium. The objective function seeks to minimize the total cost allocated among the players. Concerned about the difficulties of managing large coalitions in practice, we also study the effect of a maximum cardinality constraint per coalition. We test the models in two applications. One is in collaborative forest transportation and the other in the inventory of spare parts for oil operations. In these situations, collaboration opportunities involving significant savings exist, but for several reasons, it may be better to group the players in different sub-coalitions rather than in the grand coalition. The models we propose are thus relevant for deciding how to partition the set of players. We also prove that if the strong equilibrium model is feasible, its optimal cost is equal to the optimal cost of the core stability model and, consequently, a coalition structure that solves one problem also solves the other problem. We present results that illustrate this property. We also present results where the core stability problem is feasible and the strong equilibrium problem is infeasible. Setting an upper bound on the maximum cardinality of the coalitions allows us to study the marginal savings of enlarging the cardinality of the coalitions. We find that the marginal savings of allowing one more player decrease significantly as the bound increases. | Operations research models for coalition structure in collaborative logistics |
S0377221714004998 | In defined benefit pension plans, allowances are independent of the financial performance of the fund, and the sponsoring firm pays regular contributions to limit deviations of fund assets from the mathematical reserve necessary for covering the promised liabilities. This research paper proposes a method to optimize the timing and size of contributions in a regime switching economy. The model takes into consideration important market frictions, like transaction costs, late payments and illiquidity. The problem is solved numerically using dynamic programming and impulse control techniques. Our approach is based on parallel grids, with trinomial links, discretizing the asset return in each economic regime. | Impulse control of pension fund contributions, in a regime switching economy |
S0377221714005001 | The stochastic variability measures the degree of uncertainty for random demand and/or price in various operations problems. Its ordering property under mean-preserving transformation allows us to study the impact of demand/price uncertainty on the optimal decisions and the associated objective values. Based on Chebyshev’s algebraic inequality, we provide a general framework for stochastic variability ordering under any mean-preserving transformation that can be parameterized by a single scalar, and apply it to a broad class of specific transformations, including the widely used mean-preserving affine transformation, truncation, and capping. The application to mean-preserving affine transformation rectifies an incorrect proof of an important result in the inventory literature, which has gone unnoticed for more than two decades. The application to mean-preserving truncation addresses inventory strategies in decentralized supply chains, and the application to mean-preserving capping sheds light on using option contracts for procurement risk management. | The stochastic ordering of mean-preserving transformations and its applications |
S0377221714005013 | Subcontracting can be an important means of overcoming capacity shortages and of workload balancing, especially in make-to-order companies characterized by high variety, high demand variation and a job shop configuration. But there is a lack of simple, yet powerful subcontracting rules suitable for such contexts. The few existing rules were developed for single work center shops and neglect the actual subcontracting lead time, meaning some subcontracted jobs are destined to become tardy. This study uses Workload Control theory on matching required and available capacity over time to propose four new rules that address these shortcomings. The new rules are compared against four existing rules using an assembly job shop simulation model where the final, assembled product consists of several sub-assemblies that either flow through an internal job shop or are subcontracted. The best new rules stabilize the direct load queuing in front of a work center and significantly improve performance compared to the existing rules. For example, when the workload exceeds capacity by 10%, a 50% reduction in percentage tardy can be achieved. By examining how the workload behaves over time, we reveal that improvements come from selectively subcontracting the sub-assemblies that would otherwise cause overloads, thereby cutting off peaks in the workload. | The design of simple subcontracting rules for make-to-order shops: An assessment by simulation |
S0377221714005025 | In this paper, we consider judgments provided by the decision makers (DMs) that cannot be aggregated or revised, and define them as hesitant judgments to describe the hesitancy experienced by the DMs in decision making. If there exist hesitant judgments in analytic hierarchy process-group decision making (AHP-GDM), then we call it AHP-hesitant group decision making (AHP-HGDM) as an extension of AHP-GDM. Based on hesitant multiplicative preference relations (HMPRs) to collect the hesitant judgments, we develop a hesitant multiplicative programming method (HMPM) as a new prioritization method to derive ratio-scale priorities from HMPRs. The HMPM is discussed in detail with examples to show its advantages and characteristics. The practicality and effectiveness of our methods are illustrated by an example of water conservancy in China. | Analytic hierarchy process-hesitant group decision making |
S0377221714005037 | Consider a general coherent system with independent or dependent components, and assume that the components are randomly chosen from two different stocks, with the components of the first stock having better reliability than the others. We provide sufficient conditions on the components’ lifetimes and on the random numbers of components chosen from the two stocks in order to improve the reliability of the whole system according to different stochastic orders. We also discuss several examples in which such conditions are satisfied and an application to the study of the optimal random allocation of components in series and parallel systems. As a novelty, our study includes the case of coherent systems with dependent components by using basic mathematical tools (and copula theory). | Orderings of coherent systems with randomized dependent components |
S0377221714005049 | Profiling engineered data with robust mining methods continues to attract attention in knowledge engineering systems. The purpose of this article is to propose a simple technique that deals with non-linear multi-factorial multi-characteristic screening suitable for knowledge discovery studies. The method is designed to proactively seek and quantify significant information content in engineered mini-datasets. This is achieved by deploying replicated fractional-factorial sampling schemes. Compiled multi-response data are converted to a single master-response effectuated by a series of distribution-free transformations and multi-compressed data fusions. The resulting amalgamated master response is deciphered by non-linear multi-factorial stealth stochastics intended for saturated schemes. The stealth properties of our method target processing datasets which might be overwhelmed by a lack of knowledge about the nature of reference distributions at play. Stealth features are triggered to overcome restrictions regarding the data normality conformance, the effect sparsity assumption and the inherent collapse of the ‘unexplainable error’ connotation in saturated arrays. The technique is showcased by profiling four ordinary controlling factors that influence webpage content performance by collecting data from a commercial browser monitoring service on a large scale web host. The examined effects are: (1) the number of Cascading Style Sheets files, (2) the number of JavaScript files, (3) the number of Image files, and (4) the Domain Name System Aliasing. The webpage performance level was screened against three popular characteristics: (1) the time to first visual, (2) the total loading time, and (3) the customer satisfaction. Our robust multi-response data mining technique is elucidated for a ten-replicate run study dictated by an L9(3^4) orthogonal array scheme where any uncontrolled noise embedded contribution has not been necessarily excluded. | Concurrent multiresponse non-linear screening: Robust profiling of webpage performance |
S0377221714005256 | An index satisfies the duality axiom if, whenever one agent who is uniformly more risk-averse than another accepts a gamble, the latter accepts any less risky gamble under the index. Aumann and Serrano (2008) show that only one index defined for so-called gambles satisfies the duality and positive homogeneity axioms. We call it a duality index. This paper extends the definition of the duality index to all outcomes, including all gambles, and considers a portfolio selection problem in a complete market, in which the agent’s target is to minimize the index of the utility of the relative investment outcome. By linking this problem to a series of Merton’s optimum consumption-like problems, the optimal solution is explicitly derived. It is shown that if the prior benchmark level is too high (which can be verified), then the investment risk will be beyond any agent’s risk tolerance. If the benchmark level is reasonable, then the optimal solution will be the same as that of one of the problems in Merton’s series, but with a particular value of absolute risk aversion, which is given by an explicit algebraic equation as a part of the optimal solution. According to our result, it is riskier to achieve the same surplus profit in a stable market than in a less-stable market, which is consistent with common financial intuition. | Investment under duality risk measure |
S0377221714005268 | Assigning tasks to work stations is an essential problem which needs to be addressed in assembly line design. The most basic model is called the simple assembly line balancing problem type 1 (SALBP-1). We provide a survey of 12 heuristics and 9 lower bounds for this model and test them on a traditional and a recently published benchmark dataset. The present paper focuses on algorithms published before 2011. We significantly improve an existing dynamic programming approach and a tabu search approach. These two are also identified as the most effective heuristics, each with advantages for certain problem characteristics. Additionally, we show that lower bounds for SALBP-1 can be distinctly sharpened by merging them and applying problem reduction techniques. | Heuristics and lower bounds for the simple assembly line balancing problem type 1: Overview, computational tests and improvements |
S0377221714005281 | This paper introduces a multi-project problem environment which involves multiple projects with assigned due dates; activities that have alternative resource usage modes; a resource dedication policy that does not allow sharing of resources among projects throughout the planning horizon; and a total budget. Three issues arise when investigating this multi-project environment. First, the total budget should be distributed among different resource types to determine the general resource capacities, which correspond to the total amount for each renewable resource to be dedicated to the projects. With the general resource capacities at hand, the next issue is to determine the amounts of resources to be dedicated to the individual projects. The dedication of resources reduces the scheduling of the projects’ activities to a multi-mode resource constrained project scheduling problem (MRCPSP) for each individual project. Finally, the last issue is the efficient solution of the resulting MRCPSPs. In this paper, this multi-project environment is modeled in an integrated fashion and designated as the resource portfolio problem. A two-phase and a monolithic genetic algorithm are proposed as two solution approaches, each of which employs new improvement moves designated as the combinatorial auction for resource portfolio and the combinatorial auction for resource dedication. A computational study using test problems demonstrated the effectiveness of the proposed solution approaches. | Multi-mode resource constrained multi-project scheduling and resource portfolio problem |
S0377221714005293 | Let P be an undirected path graph of n vertices. Each edge of P has a positive length and a constant capacity. Every vertex has a nonnegative supply, which is an unknown value but is known to be in a given interval. The goal is to find a point on P to build a facility and move all vertex supplies to the facility such that the maximum regret is minimized. The previous best algorithm solves the problem in O(n log^2 n) time and O(n log n) space. In this paper, we present an O(n log n) time and O(n) space algorithm, and our approach is based on new observations and algorithmic techniques. | Minmax regret 1-facility location on uncertain path networks |
S0377221714005311 | We consider a random yield inventory system, where a company has access to real time information about the actual yield realizations. To contribute to a better understanding of the value of this information, we develop a mathematical model of the inventory system and derive structural properties. We build on these properties to develop an optimal solution approach that can be used to solve small to medium sized problems. To solve large problems, we develop two heuristics. We conduct numerical experiments to test the performances of our approaches and to identify conditions under which real time yield information is particularly beneficial. Our research provides the approaches that are necessary to implement inventory control policies that utilize real time yield information. The results can also be used to estimate the cost savings that can be achieved by using real time yield information. The cost savings can then be compared against the required investments to decide if such an investment is profitable. | The value of real time yield information in multi-stage inventory systems – Exact and heuristic approaches |
S0377221714005323 | The emergence of B2B spot markets has greatly facilitated spot trading and impacted supply chain structures as well as the way commercial transactions take place between firms in many industries. While providing new opportunities, the B2B spot market also exposes participants to a price risk. This new business landscape raises some important questions on how the supplier and manufacturer should change their sales channel and procurement strategies and tailor their decisions to this changing environment. In this paper, we study the channel-choice, pricing and ordering/production decisions of the risk-averse supplier and manufacturer in a two-tier supply chain with a B2B spot market. Our analysis shows that, to benefit from the B2B spot market and control the risk exposure, the upstream supplier should develop an integrated channel-choice and pricing strategy. When the supplier adopts a dual-channel strategy, the equilibrium contract price decreases in the supplier’s risk attitude, but increases in the demand uncertainty. However, it first decreases and then increases in the manufacturer’s risk attitude and spot price volatility. We conclude that rather than simply being a second channel, the B2B spot market provides a strategic tool to supply chain members to achieve an advantageous position in their contract trading. | More than a second channel? Supply chain strategies in B2B spot markets |
S0377221714005335 | We consider a generalisation of the lot-sizing problem that includes an emission capacity constraint. Besides the usual financial costs, there are emissions associated with production, keeping inventory and setting up the production process. Because the capacity constraint on the emissions can be seen as a constraint on an alternative objective function, there is also a clear link with bi-objective optimisation. We show that lot-sizing with an emission capacity constraint is NP-hard and propose several solution methods. Our algorithms are not only able to handle a fixed-plus-linear cost structure, but also more general concave cost and emission functions. First, we present a Lagrangian heuristic to provide a feasible solution and lower bound for the problem. For costs and emissions such that the zero inventory property is satisfied, we give a pseudo-polynomial algorithm, which can also be used to identify the complete set of Pareto optimal solutions of the bi-objective lot-sizing problem. Furthermore, we present a fully polynomial time approximation scheme (FPTAS) for such costs and emissions and extend it to deal with general costs and emissions. Special attention is paid to an efficient implementation with an improved rounding technique to reduce the a posteriori gap, and a combination of the FPTASes and a heuristic lower bound. Extensive computational tests show that the Lagrangian heuristic gives solutions that are very close to the optimum. Moreover, the FPTASes have a much better performance in terms of their actual gap than the a priori imposed performance, and, especially if the heuristic’s lower bound is used, they are very fast. | The economic lot-sizing problem with an emission capacity constraint |
S0377221714005347 | The Choquet integral preference model is adopted in Multiple Criteria Decision Aiding (MCDA) to deal with interactions between criteria, while the Stochastic Multiobjective Acceptability Analysis (SMAA) is an MCDA methodology adopted to take into account uncertainty or imprecision in the considered data and preference parameters. In this paper, we propose to combine the Choquet integral preference model with the SMAA methodology in order to get robust recommendations taking into account all parameters compatible with the preference information provided by the Decision Maker (DM). In case the criteria are on a common scale, one has to elicit only a set of non-additive weights, technically a capacity, compatible with the DM’s preference information. Instead, if the criteria are on different scales, besides the capacity, one has to elicit also a common scale compatible with the preferences given by the DM. Our approach permits us to explore the whole space of capacities and common scales compatible with the DM’s preference information. | Stochastic multiobjective acceptability analysis for the Choquet integral preference model and the scale construction problem |
S0377221714005359 | A heuristic approach for the two-dimensional bin-packing problem is proposed. The algorithm is based on a sequential heuristic procedure that generates patterns one at a time, each producing some of the items, and repeats until all items are produced. Both guillotine and non-guillotine patterns can be used. Each pattern is obtained from calling a pattern-generation procedure, where the objective is to maximize the pattern value. The item values are adjusted after the generation of each pattern using a value correction formula. The algorithm is compared with five published algorithms, using 50 groups of benchmark instances. The results indicate that the algorithm is the most efficient in improving solution quality. | Sequential heuristic for the two-dimensional bin-packing problem |
S0377221714005360 | It has been reported that, since the year 2000, there have been an average of 700 water main breaks per day in Canada and the USA alone, costing more than CAD 10 billion per year. Moreover, water main leaks affect other neighboring infrastructure, which may lead to catastrophic failures. For this reason, municipal authorities and stakeholders are more concerned with preventive actions than with reacting to failure events. This paper presents a Bayesian Belief Network (BBN) model to evaluate the risk of failure of metallic water mains using structural integrity, hydraulic capacity, water quality, and consequence factors. A BBN is a probabilistic graphical model that represents a set of variables and their probabilistic relationships, and also captures historical information about these dependencies. The proposed model is capable of ranking water mains within a distribution network, identifying vulnerable and sensitive pipes to justify proper decisions on maintenance/rehabilitation/replacement (M/R/R). To demonstrate the application of the proposed model, the water distribution network of the City of Kelowna has been studied. The results indicate that almost 9% of the 259 metallic pipes are at high risk in both summer and winter. | Evaluating risk of water mains failure using a Bayesian belief network model |
S0377221714005372 | This paper examines the combined use of predictive analytics, optimization, and overbooking to schedule outpatient appointments in the presence of no-shows. We tackle the problem of optimally overbooking appointments given no-show predictions that depend on the individual appointment characteristics and on the appointment day. The goal is maximizing the number of patients seen while minimizing waiting time and overtime. Our analysis leads to the definition of a near-optimal and simple heuristic which consists of giving same-day appointments to likely shows and future-day appointments to likely no-shows. We validate our findings by performing extensive simulation tests based on an empirical data set of nearly fifty thousand appointments from a real outpatient clinic. The results suggest that our heuristic can lead to a substantial increase in performance and that it should be preferred to open access under most parameter configurations. Our paper will be of great interest to practitioners who want to improve their clinic performance by using individual no-show predictions to guide appointment scheduling. | Outpatient appointment scheduling given individual day-dependent no-show predictions |
S0377221714005384 | In some important group decision making, a moderator representing the collective interest, who is predetermined and possesses effective leadership and strong interpersonal communication and negotiation skills, is crucial to consensus reaching. In the process of consensus reaching, the moderator needs to persuade each individual to change his/her opinion towards a consensus opinion by paying a minimum cost, while the individuals have to modify and gradually approach this consensus opinion by expecting to obtain a maximum compensation. This paper proposes two kinds of minimum cost models, regarding all the individuals and one particular individual respectively; it shows the economic significance of these two models by exploring their dual models grounded in primal–dual linear programming theory, and establishes the conditions under which the two models have the same optimal consensus opinion. The validity of the theoretical analysis is confirmed by numerical examples. | Two consensus models based on the minimum cost and maximum return regarding either all individuals or one individual |
S0377221714005396 | In our previous work published in this journal, we showed how the Hit-And-Run (HAR) procedure enables efficient sampling of criteria weights from a space formed by restricting a simplex with arbitrary linear inequality constraints. In this short communication, we note that the method for generating a basis of the sampling space can be generalized to also handle arbitrary linear equality constraints. This enables the application of HAR to sampling spaces that do not coincide with the simplex, thereby allowing the combined use of imprecise and precise preference statements. In addition, it has come to our attention that one of the methods we proposed for generating a starting point for the Markov chain was flawed. To correct this, we provide an alternative method that is guaranteed to produce a starting point that lies within the interior of the sampling space. | Notes on ‘Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis’ |
S0377221714005402 | Wang et al. (2014) extended the model of Lou and Wang (2012) to incorporate credit period dependent demand and default risk for deteriorating items with maximum lifetime. However, the rates of demand, default risk and deterioration in the model of Wang et al. (2014) are assumed to be specific functions of the credit period, which limits the contribution. In this note, we first generalize the theoretical results of Wang et al. (2014) under certain conditions. Furthermore, we present structural results, instead of a numerical analysis, on the variation of optimal replenishment and trade credit strategies with respect to key parameters. | A note on “Seller’s optimal credit period and cycle time in a supply chain for deteriorating items with maximum lifetime” |
S0377221714005414 | In this article we generalize the aggregation theory in efficiency and productivity analysis by deriving solutions to the problem of aggregation of individual scale efficiency measures, primal and dual, into aggregate primal and dual scale efficiency measures of a group (e.g., industry). The new aggregation result is coherent with the aggregation framework and solutions that were earlier derived for other related efficiency measures, and can be used in practice for the estimation of the scale efficiency of an industry or other groups of firms within it. | Aggregation of scale efficiency |
S0377221714005426 | In most multi-objective optimization problems we aim at selecting the most preferred among the generated Pareto optimal solutions (a subjective selection among objectively determined solutions). In this paper we consider the robustness of the selected Pareto optimal solution in relation to perturbations within the weights of the objective functions. For this task we design an integrated approach that can be used in multi-objective discrete and continuous problems using a combination of Monte Carlo simulation and optimization. In the proposed method we introduce measures of robustness for Pareto optimal solutions. In this way we can compare them according to their robustness, introducing one more characteristic of Pareto optimal solution quality. In addition, especially in multi-objective discrete problems, we can detect the most robust Pareto optimal solution among neighboring ones. A computational experiment is designed in order to illustrate the method and its advantages. It is noteworthy that the Augmented Weighted Tchebycheff method proved to be much more reliable than the conventional weighted sum method in discrete problems, due to the existence of unsupported Pareto optimal solutions. | Robustness analysis in Multi-Objective Mathematical Programming using Monte Carlo simulation |
S0377221714005438 | This paper uses a fully nonparametric approach to estimate efficiency measures for primary care units incorporating the effect of (exogenous) environmental factors. This methodology allows us to account for different types of variables (continuous and discrete) describing the main characteristics of the patients served by those providers. In addition, we use an extension of this nonparametric approach to deal with the presence of undesirable outputs in the data, represented by the rates of hospitalization for ambulatory care sensitive conditions (ACSC). The empirical results show that all the exogenous variables considered have a significant and negative effect on the efficiency estimates. | Efficiency assessment of primary care providers: A conditional nonparametric approach |
S0377221714005451 | In this paper we review and propose different adaptations of the GRASP metaheuristic to solve multiobjective combinatorial optimization problems. In particular, we describe several alternatives to specialize the construction and improvement components of GRASP when two or more objectives are considered. GRASP has been successfully coupled with Path Relinking for single-objective optimization. Moreover, we propose different hybridizations of GRASP and Path Relinking for multiobjective optimization. We apply the proposed GRASP with Path Relinking variants to two combinatorial optimization problems, the biobjective orienteering problem and the biobjective path dissimilarity problem. We report on empirical tests with 70 instances and 30 algorithms, which show that the proposed heuristics are competitive with the state-of-the-art methods for these problems. | Multiobjective GRASP with Path Relinking |
S0377221714005463 | Loss given default modelling has become crucially important for banks due to the requirement that they comply with the Basel Accords and to their internal computations of economic capital. In this paper, support vector regression (SVR) techniques are applied to predict loss given default of corporate bonds, where improvements are proposed to increase prediction accuracy by modifying the SVR algorithm to account for heterogeneity of bond seniorities. We compare the predictions from SVR techniques with thirteen other algorithms. Our paper has three important results. First, at an aggregated level, the proposed improved versions of support vector regression techniques outperform other methods significantly. Second, at a segmented level, by bond seniority, least square support vector regression demonstrates significantly better predictive abilities compared with the other statistical models. Third, standard transformations of loss given default do not improve prediction accuracy. Overall, our empirical results show that support vector regression is a promising technique for banks to use when predicting loss given default. | Support vector regression for loss given default modelling |
S0377221714005475 | This study applies the Dynamic Slacks Based Model (DSBM) developed by Tone and Tsutsui (2010) in order to assess the evolution of input saving/output increasing potentials in major Brazilian Banks from 1996 to 2011. We propose that these potentials or slacks can be used as proxies for an eventual financial distress situation in the future. The main research objective is to determine whether or not different characteristics of bank type – related to ownership, size, and merger and acquisition processes – are significantly related to inefficiency levels and, by extension, to an eventual financial distress situation, since higher inefficiency levels also imply lower input saving/output decreasing potentials. Based on a balanced panel model, secondary data from Economatica were collected and analyzed. Results indicate higher inefficiency levels and slacks in small public and national banks. Policy implications are also addressed. | Financial distress drivers in Brazilian banks: A dynamic slacks approach |
S0377221714005487 | In the stochastic variant of the vehicle routing problem with time windows, known as the SVRPTW, travel times are assumed to be stochastic. In our chance-constrained approach to the problem, restrictions are placed on the probability that individual time window constraints are violated, while the objective remains based on traditional routing costs. In this paper, we propose a way to offer this probability, or service level, for all customers. Our approach carefully considers how to compute the start-service time and arrival time distributions for each customer. These distributions are used to create a feasibility check that can be “plugged” into any algorithm for the VRPTW and thus be used to solve large problems fairly quickly. Our computational experiments show how the solutions change for some well-known data sets across different levels of customer service, two travel time distributions, and several parameter settings. | Ensuring service levels in routing problems with time windows and stochastic travel times |
S0377221714005499 | This paper implements and tests a label-setting algorithm for finding optimal hyperpaths in large transit networks with realistic headway distributions. It has been commonly assumed in the literature that headway is exponentially distributed. To validate this assumption, the empirical headway data archived by the Chicago Transit Authority are fitted to various probability distributions. The results suggest that the headway data fit much better with the Loglogistic, Gamma and Erlang distributions than with the exponential distribution. Accordingly, we propose to model headway using the Erlang distribution in the proposed algorithm, because it best balances realism and tractability. When headway is not exponentially distributed, finding optimal hyperpaths may require enumerating all possible line combinations at each transfer stop, which is tractable only for a small number of alternative lines. To overcome this difficulty, a greedy method is implemented as a heuristic and compared to the brute-force enumeration method. The proposed algorithm is tested on a large-scale CTA bus network that has over 10,000 stops. The results show that (1) the assumption of exponentially distributed headway may lead to sub-optimal route choices and (2) the heuristic greedy method provides near optimal solutions in all tested cases. | Finding optimal hyperpaths in large transit networks with realistic headway distributions |
S0377221714005505 | This paper deals with robust optimization for the cyclic hoist scheduling problem with processing time window constraints. The robustness of a cyclic hoist schedule is defined as its ability to remain stable in the presence of perturbations or variations of a certain degree in the hoist transportation times. With such a definition, we propose a method to measure the robustness of a cyclic hoist schedule. A bi-objective mixed integer linear programming (MILP) model, which aims to optimize cycle time and robustness, is developed for the robust cyclic hoist scheduling problem. We prove that the optimal cycle time is a strictly increasing function of the robustness and that the problem has infinitely many Pareto optimal solutions. Furthermore, we derive the so-called ideal point and nadir point that define the lower and upper bounds for the objective values of the Pareto front. A Pareto optimal solution can be obtained by solving a single-objective MILP model to minimize the cycle time for a given value of robustness or maximize the robustness for a specific cycle time. The single-objective MILP models are solved using the commercial optimization software CPLEX. Computational results on several benchmark instances and randomly generated instances indicate that the proposed approach can solve large-scale problems within a reasonable amount of time. | Robust optimization for the cyclic hoist scheduling problem |
S0377221714005529 | This article discusses the steady state analysis of the M/G/2 queuing system with two heterogeneous servers under new queue disciplines when the classical First Come First Served (FCFS) queue discipline is to be violated. Customers are served either by server-I according to an exponential service time distribution with mean rate μ or by server-II with a general service time distribution B(t). Following objections raised in the literature on the use of the classical FCFS queue discipline in heterogeneous service systems, two alternative queue disciplines (Serial and Parallel) are considered in this work with the objective that if the FCFS discipline is violated then the violation is a minimum in the long run. Using the embedded method under the serial queue discipline and the supplementary variable technique under the parallel queue discipline, we present an exact analysis of the steady state number of customers in the system and, most importantly, the actual waiting time expectation of customers in the system. Our work shows that one can obtain all stationary probabilities and other vital measures for this queue under certain simple but realistic assumptions. | An M/G/2 queue where customers are served subject to a minimum violation of FCFS queue discipline |
S0377221714005530 | This paper deals with inverse Data Envelopment Analysis (DEA) under an inter-temporal dependence assumption. Both problems, input-estimation and output-estimation, are investigated. Necessary and sufficient conditions for input/output estimation are established utilizing Pareto and weak Pareto solutions of linear multiple-objective programming problems. Furthermore, we introduce a new optimality notion for multiple-objective programming problems, periodic weak Pareto optimality. These solutions are used in inverse DEA, and it is shown that they can be characterized by a simple modification of the weighted sum scalarization tool. | Inverse DEA under inter-temporal dependence using multiple-objective programming |
S0377221714005542 | We investigate two scheduling problems. The first is scheduling with agreements (SWA), which consists in scheduling a set of jobs non-preemptively on identical machines in minimum time, subject to constraints that only some specific jobs can be scheduled concurrently. These constraints are represented by an agreement graph. We extend the NP-hardness of SWA from three distinct processing time values to only two, which definitively settles the complexity status of SWA on two machines with two fixed processing times. The second problem is the so-called resource-constrained scheduling problem. We prove that SWA is polynomially equivalent to a special case of resource-constrained scheduling and deduce new complexity results for the latter. | Scheduling: Agreement graph vs resource constraints |
S0377221714005554 | This study considers a hybrid assembly-differentiation flowshop scheduling problem (HADFSP), in which there are three production stages: components manufacturing, assembly, and differentiation. All the components of a job are processed on different machines at the first stage. Subsequently, they are assembled together on a common single machine at the second stage. At the third stage, each job of a particular type is processed on a dedicated machine. The objective is to find a job schedule that minimizes total flow time (TFT). First, a mixed integer programming (MIP) model is formulated and some properties of the optimal solution are presented. Given the NP-hardness of the problem, two fast heuristics (an SPT-based heuristic and an NEH-based heuristic) and three hybrid meta-heuristics (HGA-VNS, HDDE-VNS and HEDA-VNS) are developed for solving medium- and large-size problems. In order to evaluate the performance of the proposed algorithms, a lower bound for the HADFSP with the TFT criterion (HADFSP-TFT) is established. The MIP model and the proposed algorithms are compared on randomly generated problems. Computational results show the effectiveness of the MIP model and the proposed algorithms. The computational analysis indicates that, on average, the HDDE-VNS performs better and more robustly than the other two meta-heuristics, whereas the NEH heuristic consumes little time and reaches reasonable solutions. | Scheduling a hybrid assembly-differentiation flowshop to minimize total flow time |
S0377221714005566 | Firms face a continuous process of technological and environmental changes that requires them to make managerial decisions in a dynamic context. However, costs and constraints prevent firms from making instant adjustments towards optimal conditions and may cause inefficiency to persist over time. We propose a dynamic inefficiency specification that captures differences in adjustment costs among firms and non-persistent effects of inefficiency heterogeneity. The model is fitted to a ten-year sample of Colombian banks. The new specification improves model fit and affects the efficiency estimates. Overall, Colombian banks present high inefficiency persistence, but important differences between institutions are found. In particular, merged banks present low adjustment costs that allow them to rapidly recover from efficiency losses derived from merging processes. | Dynamic effects in inefficiency: Evidence from the Colombian banking sector |
S0377221714005578 | A Passive Optical Network (PON) is a network technology for deploying access networks based on passive optical components. In a single PON access network, the client terminals are connected to a Central Office through optical splitters and interconnecting fibers, where each splitter splits the input optical signal coming from the Central Office into equal parts over its different output fibers. In this paper, we consider PON topology solutions where the splitting ratio and the number of splitting stages are not constrained to a given target design but, instead, are decided based on the cost of the solutions. We present different Integer Linear Programming formulations to model this problem and provide computational results showing that the optimal solutions can be computed for realistic problem instances. In addition, we describe how the formulations can be adapted for the traditional PON topology approaches and present computational results showing that significant cost gains are obtained with the unconstrained splitting stage approach. | Single PON network design with unconstrained splitting stages |
S0377221714005591 | Generating companies use the maintenance cost function as the sole or main objective for creating the maintenance schedule of power generators. Usually, only costs related to maintenance activities are considered when deriving the cost function. However, in deregulated markets, maintenance related costs alone do not represent the full costs of generators. This paper models various cost components that affect the maintenance activities in deregulated power markets. The costs that we model include direct and indirect maintenance, failures, interruptions, contractual compensation, rescheduling, and market opportunity. The loss of the firm’s reputation and the selection of a loyalty model are also considered using the Analytic Hierarchy Process (AHP) within an opportunity cost model. A case study is used to illustrate the modelling activities. The enhanced model is utilised in generator maintenance scheduling cases. The experimental results demonstrate the importance and impact of market related costs in maintenance schedules. | Modelling generator maintenance scheduling costs in deregulated power markets |
S0377221714005608 | Using a market share attraction structure of advertising competition and following a supermodular game approach, this article demonstrates, for an asymmetric oligopoly, the directional impact of changes in model parameters on the marketing controlled variables of all rivals (advertising budgets) and the operations controlled variables of all rivals (ordered quantities). Importantly, the various changes are examined analytically, empirically and numerically in both non-dominated and dominated asymmetric oligopolies. In this regard, the results indicate that firms in a dominated oligopoly (one firm with a market share larger than or equal to 50%) behave differently from firms in a non-dominated oligopoly (each firm with a market share less than 50%) in response to changes in model parameters. Furthermore, changes in model parameters are investigated in terms of their relative influence on a variety of equilibrium measures. In this regard, the findings indicate that for the analyzed model the marketing parameters exert much more influence on the equilibrium measures than the operations parameters. Additionally, a change in the mode of competition from non-cooperation (oligopoly) to cooperation (joint ownership) dictates that strong asymmetric firms (with favorable marketing and operations parameters) continue advertising (but at lower levels) and weak asymmetric firms (with less favorable parameters) cease advertising altogether. | On modeling the advertising-operations interface under asymmetric competition |
S0377221714005621 | This paper presents a multi-level Taguchi-factorial two-stage stochastic programming (MTTSP) approach for supporting water resources management under parameter uncertainties and their interactions. MTTSP is capable of performing uncertainty analysis, policy analysis, factor screening, and interaction detection in a comprehensive and systematic way. A water resources management problem is used to demonstrate the applicability of the proposed approach. The results indicate that interval solutions can be generated for the objective function and decision variables, and a variety of decision alternatives can be obtained under different policy scenarios. The experimental data obtained from the Taguchi’s orthogonal array design are helpful in identifying the significant factors affecting the total net benefit. Then the findings from the multi-level factorial experiment reveal the latent interactions among those important factors and their curvature effects on the model response. Such a sequential strategy of experimental designs is useful in analyzing the interactions for a large number of factors in a computationally efficient manner. | A multi-level Taguchi-factorial two-stage stochastic programming approach for characterization of parameter uncertainties and their interactions: An application to water resources management |
S0377221714005633 | Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academics and practitioners. This attention is evident from the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistics and closed-loop supply chains in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and suggest future research opportunities. | Reverse logistics and closed-loop supply chain: A comprehensive review to explore the future |
S0377221714005645 | This paper addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, and uses a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered to be uncertain, and are described by uncertainty convex sets. To solve this problem, two variants of a constraint generation algorithm are proposed, and their application and characteristics discussed. Both algorithms are used to solve two case studies based on two producers, each operating equivalent generation units, differing only in the thermal units’ characteristics. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers’ approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers’ strategy, in terms of scheduling, profit, forward contracting, and pool involvement. These findings are presented and analyzed in detail, and a tentative rationale is proposed to explain the less intuitive outcomes. Regarding the computational results, these show that for some instances the two variants of the algorithm have a similar performance, while for a particular subset of them one variant has a clear superiority. [The source abstract is followed by a nomenclature list of sets, parameters, and variables; the symbols were lost in extraction, so the list is omitted here.] | Weekly self-scheduling, forward contracting, and pool involvement for an electricity producer. An adaptive robust optimization approach |
S0377221714005803 | The aim of this paper is twofold. On the one hand, it provides a contribution to the debate on judicial efficiency by conducting applied research on the Italian tax judiciary, thanks to a database covering the activities of the Italian tax courts over a 3-year period (2009–2011). On the other hand, it also contributes to the methodological debate, as it compares results obtained with Data Envelopment Analysis (DEA) and the Directional Distance Function (DDF), two related non-parametric techniques which allow evaluating the efficiency of each observation as the radial distance from the efficient frontier defined by the best observations. While DEA has already been used to assess the mere technical efficiency of judicial systems, the DDF offers a valuable additional contribution, since it makes it possible to minimize the social cost of production of adjudication in the measurement. This feature makes it particularly attractive in those sectors in which production externalities may arise, such as judicial delays in the case investigated here. Additionally, the paper is the first to apply the bootstrap to the DDF procedure in order to provide more robust estimates and to compare them with the DEA results. | Judicial productivity, delay and efficiency: A Directional Distance Function (DDF) approach |
S0377221714005815 | This article is motivated by the case of a company manufacturing industrial equipment that faces two types of demand: on the one hand, there are the so-called regular orders for installations or refurbishing of existing facilities, which have a relatively long lead time; on the other hand, there are urgent orders, mostly related to spare parts when a facility has a breakdown, for which the delay is much shorter but higher margins can be obtained. We study the order acceptance problem for a firm that serves two classes of demand over an infinite horizon. The firm has to decide whether to accept a regular order (or equivalently how much capacity to set aside for urgent orders) in order to maximize its profit. We formulate this problem as a multi-dimensional Markovian Decision Process (MDP). We propose a family of approximate formulations to reduce the dimension of the state space via aggregation. We show how our approach can be used to compute bounds on the profit associated with the optimal order acceptance policy. Finally, we show that the value of revenue management is commensurate with the operational flexibility of the firm. | Revenue management for operations with urgent orders |
S0377221714005827 | We analyze the role of quality, which we define as an attribute of a product that increases consumers’ willingness to buy, as a competitive tool in a quality-price setting. We consider an incumbent’s entry-deterrence strategies using quality as a deterrent when faced by a potential entrant. We investigate settings motivating the incumbent to blockade the entrant (i.e., prevent entry without extra effort), deter the entrant (i.e., prevent entry with extra effort), or accommodate the entrant (i.e., allow the entry to take place). We identify conditions under which the incumbent may actually over-invest in quality to deter entrance. More interestingly, we also identify conditions under which the incumbent may decrease his quality investment to make it easier for the entrant to penetrate the market. Our model sheds light on entry scenarios of particular platform product markets such as the entry of Xbox to the video game console market. | Quality and entry deterrence |
S0377221714005839 | To impose the law of one price (LoOP) restrictions, which state that all firms face the same input prices, Kuosmanen, Cherchye, and Sipiläinen (2006) developed the top-down and bottom-up approaches to maximizing the industry-level cost efficiency. However, the optimal input shadow prices generated by the above approaches need not be unique, which influences the distribution of the efficiency indices at the individual firm level. To solve this problem, in this paper, we developed a pair of two-level mathematical programming models to calculate the upper and lower bounds of cost efficiency for each firm in the case of non-unique LoOP prices while keeping the industry cost efficiency optimal. Furthermore, a base-enumerating algorithm is proposed to solve the lower bound models of the cost efficiency measure, which are bi-level linear programs and NP-hard problems. Lastly, a numerical example is used to demonstrate the proposed approach. | Cost efficiency in data envelopment analysis under the law of one price |
S0377221714005840 | In this paper we compare the residual lifetime of a used coherent system of age t > 0 with the lifetime of the similar coherent system made up of used components of age t. Here ‘similar’ means that the system has the same structure and the component lifetimes have the same dependence (joint reliability copula). Some comparison results are obtained for the likelihood ratio order, failure rate order, reversed failure rate order and the usual stochastic order. Similar results are reported for comparing the inactivity time of a coherent system with the lifetime of a similar coherent system whose component lifetimes equal the inactivity times of the failed components. | Stochastic comparisons of residual lifetimes and inactivity times of coherent systems with dependent identically distributed components
S0377221714005852 | In the framework of spatial competition, two or more players strategically choose a location in order to attract consumers. It is standardly assumed that consumers with the same favorite location fully agree on the ranking of all possible locations. To investigate the necessity of this questionable and restrictive assumption, we model heterogeneity in consumers’ distance perceptions by individual edge lengths of a given graph. A profile of location choices is called a “robust equilibrium” if it is a Nash equilibrium in several games which differ only by the consumers’ perceptions of distances. For a finite number of players and any distribution of consumers, we provide a complete characterization of robust equilibria and derive structural conditions for their existence. Furthermore, we discuss whether the classical observations of minimal differentiation and inefficiency are robust phenomena. Thereby, we find strong support for an old conjecture that in equilibrium firms form local clusters. | Robust equilibria in location games
S0377221714005876 | We investigate a single-leg airline revenue management problem where an airline has limited demand information and uncensored no-show information. To use such hybrid information for simultaneous overbooking and booking control decisions, we combine expected overbooking cost with revenue. Then we take a robust optimization approach with a regret-based criterion. While the criterion is defined on a myriad of possible demand scenarios, we show that only a small number of them are necessary to compute the objective. We also prove that nested booking control policies are optimal among all deterministic ones. We further develop an effective computational method to find the optimal policy and compare our policy to others proposed in the literature. | Analysis of seat allocation and overbooking decisions with hybrid information |
S0377221714005888 | In this study, we investigate two important questions related to dynamic pricing in distribution channels: (i) Are coordinated pricing decisions efficient in a context where prices have carry-over effects on demand? (ii) Should firms practice a skimming or a penetration strategy if they choose to coordinate or to decentralize their activities? To answer these questions, we consider a differential game that takes place in a bilateral monopoly where the past retail prices paid by consumers contribute to the building of a reference price. The latter is used by consumers as a benchmark to evaluate the value of the product, and by firms to decide whether to adopt a skimming or a penetration strategy. We then compute and compare strategies, total channel profits and individual profits under vertical integration and decentralization at steady state and along the optimal time-paths. One of our main findings states that, for some values of the initial reference price, there is a time interval where channel decentralization performs better than coordination. During this transition period, at least one of the channel members could be tempted to end his cooperation, especially if he is not farsighted and if there are no binding agreements with the other channel partners. | Price coordination in distribution channels: A dynamic perspective |
S0377221714005906 | This study proposes two location–allocation models for handling uncertainty in the strategic planning of hospital networks. The models aim to inform how the hospital networking system may be (re)organized when the decision maker seeks to improve geographical access while minimizing costs. Key features relevant in the design of hospital networks, such as hospitals being multiservice providers operating within a hierarchical structure, are modelled throughout a planning horizon in which network changes may occur. The models hold different assumptions regarding decisions that have to be taken without full information on uncertain parameters and on the recourse decisions which will be made once uncertainty is disclosed. While the first model is in line with previous literature and considers location as first-stage decisions, the second model considers location and allocation as first-stage decisions. Uncertainty associated with demand is modelled through a set of discrete scenarios that illustrate future possible realizations. Both models are applied to a case study based on the Portuguese National Health Service. The results illustrate the information that can be obtained with each model, how the models can assist health care planners, and the consequences of different choices on decisions taken without complete information. The second model proves advantageous in that location–allocation decisions are not scenario dependent, and it appears more flexible for handling the planning problem at hand. | Location–allocation approaches for hospital network planning under uncertainty
S0377221714005918 | The classical objective function of the Vehicle Routing Problem (VRP) is to minimize the total distance traveled by all vehicles (Min–Sum). In several situations, such as disaster relief efforts, computer networks, and workload balance, the minimization of the longest route (Min–Max) is a better objective function. In this paper, we compare the optimal solution of several variants of the Min–Sum and the Min–Max VRP, from the worst-case point of view. Our aim is two-fold. First, we seek to motivate the design of heuristic, metaheuristic, and matheuristic algorithms for the Min–Max VRP, as even the optimal solution of the classical Min–Sum VRP can be very poor if used to solve the Min–Max VRP. Second, we aim to show that the Min–Max approach should be adopted only when it is well-justified, because the corresponding total distance can be very large with respect to the one obtained by optimally solving the classical Min–Sum VRP. | Min–Max vs. Min–Sum Vehicle Routing: A worst-case analysis |
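The Min-Sum/Min-Max gap described above is easy to see numerically; the sketch below brute-forces both objectives for two vehicles on a toy five-customer instance (coordinates are invented) and reports each optimum under both criteria.

```python
# Tiny illustration of the Min-Sum vs Min-Max trade-off: brute-force both
# objectives for 2 vehicles on a toy instance with invented coordinates.
import itertools, math

depot = (0.0, 0.0)
cust = [(2, 1), (3, 4), (-1, 2), (-3, -2), (1, -3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_len(route):
    pts = [depot] + [cust[i] for i in route] + [depot]
    return sum(dist(pts[k], pts[k + 1]) for k in range(len(pts) - 1))

best = {"minsum": (float("inf"), None), "minmax": (float("inf"), None)}
n = len(cust)
for mask in range(2 ** n):                  # split customers over 2 vehicles
    g1 = [i for i in range(n) if (mask >> i) & 1]
    g2 = [i for i in range(n) if not (mask >> i) & 1]
    for p1 in itertools.permutations(g1):
        for p2 in itertools.permutations(g2):
            l1, l2 = route_len(p1), route_len(p2)
            for key, val in (("minsum", l1 + l2), ("minmax", max(l1, l2))):
                if val < best[key][0]:
                    best[key] = (val, (p1, p2))

for key, (val, (p1, p2)) in best.items():
    l1, l2 = route_len(p1), route_len(p2)
    print(f"{key}: obj={val:.2f}, total={l1 + l2:.2f}, longest={max(l1, l2):.2f}")
```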
S0377221714005931 | In this paper, we measure the performance of each of the Brazilian Agricultural Research Corporation research centers by means of a Data Envelopment Analysis model. Performance data are available for a panel covering the period 2002–2009. The approach is instrumentalist, in the sense of Ramalho, Ramalho, and Henriques (2010). We investigate the effects on performance of contextual variable indicators related to the intensity of partnerships and revenue generation. For this purpose, we propose a fractional nonlinear regression model and dynamic GMM (Generalized Method of Moments) estimation. We do not rule out the endogeneity of the contextual variables, cross-sectional correlation or autocorrelation within the panel. We conclude that revenue generation and previous performance scores are statistically significant and positively associated with actual performance. | Management of agricultural research centers in Brazil: A DEA application using a dynamic GMM approach
S0377221714005943 | Cherchye, De Rock, Dierynck, Roodhooft, and Sabbe (2013) introduced a DEA methodology that is specially tailored for multi-output efficiency measurement. The methodology accounts for jointly used inputs and incorporates information on how inputs are allocated to outputs. In this paper, we present extensions that render the methodology useful to deal with undesirable (or “bad”) outputs in addition to desirable (or “good”) outputs. Interestingly, these extensions deal in a natural way with several limitations of existing DEA approaches to treat undesirable outputs. We also demonstrate the practical usefulness of our methodological extensions through an application to US electric utilities. | Multi-output efficiency with good and bad outputs |
S0377221714005955 | The paper contributes to the contemporary research on bank efficiency in India. We analyze the efficiency dynamics of the Indian banking industry from 2004 to 2012. Based on the recent methodological developments of the conditional and unconditional directional distances introduced by Daraio and Simar (2014), we apply a conditional directional distance estimator in order to analyze the dynamic effects of industry’s performance levels. The results indicate that foreign banks perform better compared to national and domestic private banks. There is also evidence of technological change at the period before the Global Financial Crisis. However during and after the Global Financial Crisis these gains diminished. The evidence suggests that national banks fail to sustain their high performance levels gained after the industry’s restructuring period. Finally, the findings support the view that ownership structure affects banks’ technical efficiency levels. | Efficiency dynamics in Indian banking: A conditional directional distance approach |
S0377221714005967 | From the practice of the Chinese consumer electronics market, we identify two key issues in supply chain management: the first is the contract type, either wholesale price contracts or consignment contracts with revenue sharing; the second is the decision right over sales promotion (such as advertising, on-site shopping assistance, rebates, and post-sales service), owned by either manufacturers or retailers. We model a supply chain with one manufacturer and one retailer who has limited capital and faces deterministic demand depending on retail price and sales promotion. The two issues interact with each other. We show that only the combination (called a chain business mode) of a consignment contract with the manufacturer’s right of sales promotion, or a wholesale price contract with the retailer’s right of sales promotion, is better for both members. Moreover, the latter chain business mode is realized only when the retailer has more power in the chain and has enough capital; otherwise the former one is realized. But which one is preferred by customers? We find that the former is preferred by customers who mainly enjoy low prices, while the latter is preferred by those who enjoy a high sales promotion level. | Contract type and decision right of sales promotion in supply chain management with a capital constrained retailer
S0377221714005979 | We present a method to solve the free-boundary problem that arises in the pricing of classical American options. Such free-boundary problems arise when one attempts to solve optimal-stopping problems set in continuous time. American option pricing is one of the most popular optimal-stopping problems considered in the literature. The method presented in this paper primarily shows how one can leverage a one-factor approximation and the moving boundary approach to construct a solution mechanism. The result is an algorithm with a superior runtime-accuracy balance relative to other computational methods available for solving free-boundary problems. Exhaustive comparisons to other pricing methods are provided. We also discuss a variant of the proposed algorithm that allows for the computation of only one option price rather than the entire price function, when the requirement is such. | An approximate moving boundary method for American option pricing
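The paper's moving-boundary algorithm is not reproduced here; as a point of reference, the sketch below prices an American put with the standard Cox-Ross-Rubinstein binomial tree, the kind of benchmark such runtime-accuracy comparisons are made against.

```python
# Standard CRR binomial benchmark for an American put; not the paper's
# moving-boundary method, but a common reference point for accuracy checks.
import math

def american_put_crr(S0, K, r, sigma, T, N=500):
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs at each node j (j = number of up moves).
    V = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    for n in range(N - 1, -1, -1):                # backward induction
        for j in range(n + 1):
            cont = disc * (p * V[j + 1] + (1 - p) * V[j])
            exer = K - S0 * u**j * d**(n - j)     # early-exercise value
            V[j] = max(cont, exer)
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))
```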
S0377221714005980 | Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown efficient as a base for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front. | Quantifying uncertainty on Pareto fronts with Gaussian process conditional simulations
S0377221714005992 | In this work a new benchmark of hard instances for the permutation flowshop scheduling problem with the objective of minimising the makespan is proposed. The new benchmark consists of 240 large instances and 240 small instances with up to 800 jobs and 60 machines. One of the objectives of the work is to generate a benchmark which satisfies the desired characteristics of any benchmark: comprehensive, amenable to statistical analysis and discriminating when several algorithms are compared. An exhaustive experimental procedure is carried out in order to select the hard instances, generating thousands of instances and selecting the hardest ones from the point of view of a gap computed as the difference between very good upper and lower bounds for each instance. Extensive generation and computational experiments, which have taken almost six years of combined CPU time, demonstrate that the proposed benchmark is harder and more discriminating than the most common benchmark from the literature. Moreover, a website is developed so that researchers can share sets of instances, best known solutions, lower bounds, etc. for any combinatorial optimisation problem. | New hard benchmark for flowshop scheduling problems minimising makespan
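For readers implementing solvers against such a benchmark, the makespan of a permutation is obtained from the classic completion-time recurrence C[j][i] = max(C[j-1][i], C[j][i-1]) + p[j][i]; the sketch below evaluates it with invented processing times.

```python
# Makespan of a permutation flowshop via the standard completion-time
# recurrence, using a rolling array over machines. Toy data.
def makespan(perm, p):
    """p[j][i] = processing time of job j on machine i."""
    m = len(p[0])
    C = [0.0] * m                      # completion times on each machine
    for j in perm:
        for i in range(m):
            # max(previous job on machine i, this job on machine i-1)
            C[i] = max(C[i], C[i - 1] if i > 0 else 0.0) + p[j][i]
    return C[-1]

p = [[3, 6, 2], [5, 1, 4], [2, 4, 6], [4, 3, 3]]  # 4 jobs, 3 machines
print(makespan([0, 1, 2, 3], p))
print(makespan([2, 0, 3, 1], p))
```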
S0377221714006006 | This paper describes a hybrid stock trading system based on Genetic Network Programming (GNP) and the Mean Conditional Value-at-Risk Model (GNP–CVaR). The proposed method, combining the advantages of evolutionary algorithms and statistical models, provides useful tools to construct portfolios and generate effective stock trading strategies for investors with different risk attitudes. Simulation results on five stock indices show that the model based on GNP and the maximum Sharpe ratio portfolio performs best in bull markets, and that based on GNP and the global minimum risk portfolio performs best in bear markets. Portfolios constructed by Markowitz’s mean–variance model perform the same as those from the mean-CVaR model. The proposed system significantly improves the function and efficiency of the original GNP and can help investors make profitable decisions. | A hybrid stock trading system using genetic network programming and mean conditional value-at-risk
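As a reminder of the risk measure inside the hybrid system, the sketch below computes empirical VaR and CVaR (expected shortfall) for a simulated return series; it is generic and not tied to the GNP component.

```python
# Empirical CVaR of a return series at level alpha: the mean loss in the
# worst (1 - alpha) tail. Returns are simulated, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=10_000)   # daily returns

def cvar(returns, alpha=0.95):
    losses = np.sort(-returns)[::-1]              # losses, largest first
    k = int(np.ceil((1 - alpha) * len(losses)))   # size of the worst tail
    return losses[:k].mean()

print(f"95% VaR  = {np.quantile(-returns, 0.95):.4f}")
print(f"95% CVaR = {cvar(returns):.4f}")
```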
S0377221714006018 | In multi-criteria group decision making, a key issue is aggregating individual preferences into a group one. In this study, we employ bilateral agreements to conduct group evaluation of alternatives. A bilateral agreement between a pair of individuals could be on the weight of a criterion, a criterion evaluation function, or willingness to pay. Any one of the three types of bilateral agreements can derive the group utility. To express the relationship between the bilateral agreements and individual evaluations, the quasi-arithmetic mean is used, which can ensure the consistency property of the pairwise comparison matrices. We explore the minimum requirements for obtaining the group preference and show that n − 1 bilateral agreements are necessary. Finally, several examples are provided to illustrate the proposed methods. | Multi-criteria group decision making based on bilateral agreements
S0377221714006031 | In this paper, we present a simple method for finding the extreme points of various types of incomplete attribute weights. Incomplete information about attribute weights is transformed by a sequence of changes of variables into a set whose extreme points are readily found. The method does not cover every type of incomplete attribute weights; nevertheless, it offers a flexible way of finding the extreme points of the most widely used forms. Finally, incomplete attribute values, expressed in various forms, are also analyzed to find their characterizing extreme points by applying procedures similar to those used for the incomplete attribute weights. | Extreme point-based multi-attribute decision analysis with incomplete information
S0377221714006043 | The container relocation problem (CRP) is one of the most crucial issues for container terminals. In a single bay, containers belonging to multiple groups should be retrieved by an equipped yard crane in accordance with their retrieval priorities. An operation of the crane can either relocate a container from the top of a stack to another within the bay, or remove a container with the highest retrieval priority among all remaining containers. The objective of the CRP is to find an optimized operation plan for the crane with the fewest container relocations. This paper proposes an improved greedy look-ahead heuristic for the CRP and conducts experiments on four existing data sets. The experimental results show that the proposed approach is able to provide better solutions for large-scale instances in shorter runtime, compared to the up-to-date approaches in the recent literature. | Solving the container relocation problem by an improved greedy look-ahead heuristic
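A minimal baseline helps fix ideas: the sketch below retrieves containers in priority order and relocates blockers with a common greedy rule (send each blocker to the stack whose smallest priority is largest); the paper's look-ahead heuristic refines exactly this kind of rule. The bay layout is invented.

```python
# Minimal greedy sketch of the container relocation problem: retrieve in
# priority order (1 first), relocating any blockers on top. Not the
# paper's look-ahead heuristic, just the baseline it improves upon.
def retrieve_all(bay, max_height=4):
    relocations = 0
    while any(bay):
        target = min(c for s in bay for c in s)       # next container to leave
        src = next(i for i, s in enumerate(bay) if target in s)
        while bay[src][-1] != target:                 # relocate blockers
            blocker = bay[src].pop()
            dests = [i for i, s in enumerate(bay)
                     if i != src and len(s) < max_height]
            # Prefer a stack whose earliest-retrieved container is latest.
            dest = max(dests, key=lambda i: min(bay[i], default=10**9))
            bay[dest].append(blocker)
            relocations += 1
        bay[src].pop()                                # retrieval itself
    return relocations

bay = [[3, 1], [5, 2], [4, 6]]                        # bottom -> top priorities
print(retrieve_all(bay), "relocations")
```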
S0377221714006055 | We consider the problem of scheduling a set of n jobs with arbitrary job sizes on a set of m identical and parallel batch machines so as to minimize the makespan. Motivated by the computational complexity of the problem, we propose a meta-heuristic based on the max–min ant system method. Computational experiments are performed with randomly generated test data. The results show that our algorithm outperforms several of the previously studied algorithms. | A meta-heuristic to minimize makespan for parallel batch machines with arbitrary job sizes |
S0377221714006067 | The problem of long-term production planning of open pit mines is a large combinatorial problem. Mathematical programming approaches suffer from reduced computational efficiency due to the large number of decision variables. This paper presents a new metaheuristic approximation approach based on Ant Colony Optimization (ACO) for solving the open-pit mine production planning problem. It is a three-dimensional optimization procedure capable of handling any type of objective function, non-linear constraints and real technical restrictions. The proposed procedure is programmed and tested through its application to a real-scale copper–gold deposit. The study revealed that the ACO approach is able to improve the value of the initial mining schedule relative to current commercial tools, both with and without penalties, in reasonable computational time. Several variants of ACO were examined to find the most suitable variants and the best parameter ranges. Results indicated that the Max–Min Ant System (MMAS) and the Ant Colony System (ACS) are the best variants, as they require the least memory. The MMAS also proved to be the most explorative variant, while the ACS is the fastest. | Long term production planning of open pit mines by ant colony optimization
S0377221714006079 | This paper provides a mathematical treatment of the NP-hard post enrolment-based course timetabling problem and presents a powerful two-stage metaheuristic-based algorithm to approximately solve it. We focus particularly on the issue of solution space connectivity and demonstrate that when this is increased via specialised neighbourhood operators, the quality of solutions achieved is generally enhanced. Across a well-known suite of benchmark problem instances, our proposed algorithm is shown to produce results that are superior to all other methods appearing in the literature; however, we also make note of those instances where our algorithm struggles in comparison to others and offer evidence as to why. | Analysing the effects of solution space connectivity with an effective metaheuristic for the course timetabling problem |
S0377221714006080 | This article describes a foundation for modelling generic cognitive structures, under the heading nomology, sometimes known as the “science of the processes of the mind”. It proposes some principles and axioms that are consistent with the evidence in management systems used in business practice. It then reviews previous research about nomology in philosophy, science and the humanities. It shows that the main issue preventing the completion of the foundation of nomology has been the lack of an explanation of the relationship between the objective “nom” part as in economics and the subjective “ology” part as in psychology. It resolves this problem by showing that there are four main objective activities: proposition, perception, pull and push, and for subjective decisions the pull activity becomes redundant. It then describes tests in China and Chinese culture to validate that the results are truly generic. It proposes that nomology will be useful in providing a rigorous foundation for criteria structures in multi-criteria decision-making, and beyond into wider fields, especially those that combine subjective and objective aspects such as in conflict, inter-cultural and inter-disciplinary studies, ethics, and group decision-making. | Foundation of Nomology |
S0377221714006092 | This paper focuses on the impact of consumer environmental awareness (CEA) on order quantities and channel coordination within a one-manufacturer and one-retailer supply chain. The manufacturer produces two types of products: the environmental and the traditional products. These two products differ in their price and environmental quality. Based on the multi-product newsvendor model, this study compares three decision scenarios: the centralized model (M1), the decentralized model (M2), and the decentralized model with the coordination of a return contract (M3). The closed-form expressions of optimal order quantities, wholesale prices and return credits are derived for each scenario. Extending these models, we incorporate a production capacity constraint of the manufacturer. Finally, sensitivity analyses on model parameters are performed and numerical examples are provided. Our study suggests (1) the retailer’s profit monotonically increases while the manufacturer’s profit is convex with respect to CEA; (2) a return contract can help both parties to achieve the profit they could expect in the centralized model; (3) order quantity of the environmental product increases with CEA; (4) the production capacity constraint of the manufacturer does not impact order quantities of the two products if it is sufficiently large (larger than two critical points); otherwise, the capacity constraint reduces the channel profit and order quantities; (5) our simulation study and sensitivity analyses indicate that the difference in environmental quality between the traditional and the environmental products determines whether the order quantity of the traditional product increases or remains constant with respect to CEA. The firms will benefit from product customization and consumer segmentation based on the distribution of CEA in the market. | Consumer environmental awareness and channel coordination with two substitutable products
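The single-product critical fractile underlying the multi-product newsvendor model is worth recalling; the sketch below computes the optimal order quantity under normally distributed demand with invented prices and costs.

```python
# Single-product newsvendor benchmark: under normal demand the optimal
# order quantity is mu + sigma * Phi^{-1}(cu / (cu + co)). Numbers invented.
from scipy.stats import norm

mu, sigma = 1000.0, 200.0        # demand for one product
price, cost, salvage = 15.0, 9.0, 4.0
cu = price - cost                # underage cost: margin lost per unit short
co = cost - salvage              # overage cost: loss per unsold unit
q_star = mu + sigma * norm.ppf(cu / (cu + co))
print(f"critical fractile = {cu / (cu + co):.3f}, Q* = {q_star:.1f}")
```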
S0377221714006109 | In this paper we consider a production process at the operative level on m identical parallel machines, which are subject to stochastic machine failures. To avoid long downtimes of the machines, caused by unexpected failures, preventive maintenance activities are planned and conducted, but if a failure cannot be averted a corrective maintenance has to be performed. Both maintenance activities are assumed to restore the machine to an ‘as good as new’ state. The maintenance activities, the number of jobs and their allocation to machines as well as their sequence have a large impact on the performance of the production process and the delivery dates. We study an order of n jobs, scheduled for a frozen period on the machines. It is assumed that jobs interrupted by a machine failure have to be repeated right after the corrective maintenance is finished (non-resumable case). We first derive an exact formula for the expected makespan. Since the exact evaluation of a schedule is highly complex, we propose two approximations for the expected makespan. The excellent performance of the approximations is illustrated in a numerical study. | Evaluation of the expected makespan of a set of non-resumable jobs on parallel machines with stochastic failures
S0377221714006110 | This paper proposes an analysis of the effects of consensus and preference aggregation on the consistency of pairwise comparisons. We define some boundary properties for the inconsistency of group preferences and investigate their relation with different inconsistency indices. Some results are presented on more general dependencies between properties of inconsistency indices and the satisfaction of boundary properties. Finally, given three boundary properties and nine of the most relevant indices, we present a complete analysis of which indices satisfy which properties and offer a reflection on the interpretation of the inconsistency of group preferences. | Boundary properties of the inconsistency of pairwise comparisons in group decisions
S0377221714006122 | While the cross entropy methodology has been applied to a fair number of combinatorial optimization problems with a single objective, its adaptation to multiobjective optimization has been sporadic. We develop a multiobjective optimization cross entropy (MOCE) procedure for combinatorial optimization problems for which there is a linear relaxation (obtained by ignoring the integrality restrictions) that can be solved in polynomial time. The presence of a relaxation that can be solved with modest computational time is an important characteristic of the problems under consideration because our procedure is designed to exploit relaxed solutions. This is done with a strategy that divides the objective function space into areas and a mechanism that seeds these areas with relaxed solutions. Our main interest is to tackle problems whose solutions are represented by binary variables and whose relaxation is a linear program. Our tests with multiobjective knapsack problems and multiobjective assignment problems show the merit of the proposed procedure. | Cross entropy for multiobjective combinatorial optimization problems with linear relaxations |
S0377221714006134 | The supply chain contracting literature has focused on incentive contracts designed to align supply chain members’ individual interests. A key finding of this literature is that members’ preferences for contractual forms are often at odds: the upstream supplier prefers relatively complex contracts that can coordinate the supply chain; however, the downstream retailer prefers a wholesale price-only contract because it leaves more surplus (than does a coordinating contract), which the retailer can capture. This paper addresses the following question: Under what circumstances do suppliers and retailers prefer the same contractual form? We study supply chain members’ preferences for contractual forms under three different competitive settings in which multiple supply chains compete to sell substitutable products in the same market. Our analysis suggests that both upstream and downstream sides of the supply chain may prefer the same “quantity discount” contract, which would eliminate the conflicts of interest that otherwise typify contracting situations. More interesting still is that both sides may also prefer the wholesale price-only contract; this finding provides a theoretical explanation for why that inefficient (but simple) contract is widely adopted in supply chain transactions. | Preferences for contractual forms in supply chains |
S0377221714006146 | Over the last years, several variants of multi-constrained Vehicle Routing Problems (VRPs) have been studied, forming a class of problems known as Rich Vehicle Routing Problems (RVRPs). The purpose of the paper is twofold: (i) to provide a comprehensive and relevant taxonomy for the RVRP literature and (ii) to propose an elaborate definition of RVRPs. To this end, selected papers addressing various cases are classified using the proposed taxonomy. Once the articles have been classified, a cluster analysis based on two discriminating criteria is performed and leads to the definition of RVRPs. | Rich vehicle routing problems: From a taxonomy to a definition |
S0377221714006171 | The eigenvector method (EM) is well-known to derive information from pairwise comparison matrices in decision making processes. However, this method is logically incomplete since its actual numerical error is unknown and its reliability is doubted by such phenomena as “right–left asymmetry”, “rank reversal”, and reversal of “order of intensity of preference”. In this paper, we associate EM with some standard measuring procedure, analyze this procedure from the viewpoint of measurement theory, and find the actual EM error. We show that the above phenomena have the same cause and are eliminated when the EM errors are taken into account. The full decision support tool, which has all components of a standard measuring tool, is composed of pairwise comparisons as an initial measuring procedure, EM as a data processor, and the obtained formulas for EM errors as an error indicator. We consider two versions of this tool based on the right and the left principal eigenvectors of a pairwise comparison matrix. Both versions are equally suitable to measure and rank any comparable elements with positive numerical values, and have the same mean relative errors equal to the square root of the double Saaty’s Consistency Index. Using the mean relative error, we find the simple upper estimate for maximum permissible errors not impeding the reliable ranking. This estimate imposes tight restrictions on inconsistency of expert judgements in decision making processes with a large number of alternatives. These restrictions are much stronger than previously thought. | Eigenvector ranking method as a measuring tool: Formulas for errors |
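A compact numerical companion to the abstract: the sketch below derives the priority vector from the principal right eigenvector of an illustrative pairwise comparison matrix, computes Saaty's consistency index CI, and evaluates sqrt(2*CI), the mean relative error formula quoted above.

```python
# Principal right eigenvector of a pairwise comparison matrix, Saaty's
# consistency index CI, and sqrt(2*CI) as the abstract's stated mean
# relative error. The judgement matrix is illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority vector

n = A.shape[0]
CI = (lam_max - n) / (n - 1)          # Saaty's consistency index
print("weights:", np.round(w, 3))
print(f"lambda_max = {lam_max:.4f}, CI = {CI:.4f}, "
      f"mean relative error ~ {np.sqrt(2 * CI):.4f}")
```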
S0377221714006183 | In this paper we investigate the problem of finding a safe transit of a ship through areas threatened by sea mines. The aim is to provide decision-making support by a tool that can be integrated into a naval command and control system. We present a route finding algorithm which avoids regions of risk higher than a given threshold. The algorithm takes into account the technical and operational restrictions of the ship’s movement. It can minimize the route length, the traveling time, the number of maneuvers, or other objectives. The basic idea is to embed a network in the operational area and compute a least-cost path. Instead of using a regular grid graph, which strongly restricts the types of maneuvers and necessitates path smoothing after optimization, we design a network which is especially tailored to the maneuverability of the vessel. Each path in this network represents a continuous-curvature track based on combinations of clothoids and straight line segments. The approach allows a large variety of maneuvers, hence high-quality solutions are achievable provided a sufficiently dense network. | Planning safe navigation routes through mined waters
S0377221714006195 | This paper addresses Markov Decision Processes over compact state and action spaces. We investigate the special case of linear dynamics and piecewise-linear and convex immediate costs for the average cost criterion. This model is very general and covers many interesting examples, for instance in inventory management. Due to the curse of dimensionality, the problem is intractable and optimal policies usually cannot be computed, not even for instances of moderate size. We show the existence of optimal policies and of convex and bounded relative value functions that solve the average cost optimality equation under reasonable and easy-to-check assumptions. Based on these insights, we propose an approximate relative value iteration algorithm based on piecewise-linear convex relative value function approximations. Besides computing good policies, the algorithm also provides lower bounds to the optimal average cost, which allow us to bound the optimality gap of any given policy for a given instance. The algorithm is applied to the well-studied Multiple Sourcing Problem as known from inventory management. Multiple sourcing is known to be a hard problem and usually tackled by parametric heuristics. We analyze several MSP instances with two and more suppliers and compare our results to state-of-the-art heuristics. For the considered scenarios, our policies are always at least as good as the best known heuristic, and strictly better in most cases. Moreover, by using the computed lower bounds we show for all instances that the optimality gap has never exceeded 5%, and that it has been much smaller for most of them. | Approximate dynamic programming for stochastic linear control problems on compact state spaces |
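On a finite state space the average-cost machinery reduces to relative value iteration; the sketch below runs it on a small random MDP. The paper's compact-space setting with piecewise-linear convex value function approximations is not reproduced here.

```python
# Relative value iteration for an average-cost MDP on a small finite
# state space; the finite-state special case of the paper's setting.
import numpy as np

n_states, n_actions = 5, 2
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a][s] -> s'
c = rng.uniform(0, 10, size=(n_actions, n_states))                # cost c(s, a)

h = np.zeros(n_states)               # relative value function
for _ in range(1000):
    Q = c + P @ h                    # Q[a, s] = c(s,a) + sum_s' P[a,s,s'] h(s')
    h_new = Q.min(axis=0)
    g = h_new[0]                     # average-cost estimate (reference state 0)
    h_new = h_new - g                # subtract to keep values bounded
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new

policy = Q.argmin(axis=0)
print(f"average cost g = {g:.4f}, policy = {policy}")
```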
S0377221714006201 | Nowadays, modern production patterns, such as batch production, have brought new challenges for multi-unit maintenance decision-making. The maintenance scheduling should not only consider individual machine deterioration, but also apply to batch production with variable lot size. An interactive bi-level maintenance strategy is thus proposed in a multi-unit batch production system with degrading machines. In the machine-level scheduling, a multi-attribute model (MAM) is used to obtain maintenance intervals according to individual machine degradation. In the system-level scheduling, a novel production-driven opportunistic maintenance strategy is developed by considering both machine degradation and characteristics of batch production. In this strategy, advance-postpone balancing (APB) utilizes set-up times as opportunities to make real-time schedules for system-level maintenance. The numerical example shows that the proposed MAM–APB methodology can efficiently eliminate unnecessary production breaks, achieve significant cost reduction and overcome complexity of system scheduling. | Production-driven opportunistic maintenance for batch production based on MAM–APB scheduling |
S0377221714006213 | The hazardous material routing problem from an origin to a destination in an urban area is addressed. We maximise the distance between the route and its closest vulnerable centre, weighted by the centre’s population. A vulnerable centre is a school, hospital, senior citizens’ residence or the like, concentrating a high population or one that is particularly vulnerable or difficult to evacuate in a short time. The potential consequences on the most exposed centre are thus minimized. Though previously studied in a continuous space, the problem is formulated here over a transport (road) network. We present an exact model for the problem, in which we manage to significantly reduce the required variables, as well as an optimal polynomial time heuristic. The integer programming formulation and the heuristic are tested in a real-world case study set in the transport network in the city of Santiago, Chile. | The maximin HAZMAT routing problem |
S0377221714006420 | Green product development has become a key strategic consideration for many companies due to regulatory requirements and the public awareness of environmental protection. Life cycle assessment (LCA) is a popular tool to measure the environmental impact of new product development. Nevertheless, it is often difficult to conduct a traditional LCA at the design phase due to uncertain and/or unknown data. This research adopts the concept of LCA and introduces a comprehensive method that integrates Fuzzy Extent Analysis and Fuzzy TOPSIS for the assessment of environmental performance with respect to different product designs. Methodologically, it exhibits the superiority of the hierarchical structure and the easiness of TOPSIS implementation whilst capturing the vagueness of uncertainty. A case study concerning a consumer electronic product was presented, and data collected through a questionnaire survey were used for the design evaluation. The approach presented in this research is expected to help companies decrease development lead time by screening out poor design options. | A case study of an integrated fuzzy methodology for green product development |
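The crisp TOPSIS core of the fuzzy methodology can be stated in a few lines; the sketch below ranks three hypothetical design options on three criteria with invented weights (the fuzzy extent analysis step that would produce those weights is omitted).

```python
# Crisp TOPSIS sketch: rank design options by closeness to the ideal
# solution. Decision matrix and weights are illustrative.
import numpy as np

X = np.array([[0.7, 0.5, 0.9],       # rows = design options
              [0.6, 0.8, 0.4],       # cols = environmental criteria
              [0.9, 0.4, 0.6]])
w = np.array([0.5, 0.3, 0.2])        # criteria weights (sum to 1)
benefit = np.array([True, True, False])   # False = cost-type criterion

R = X / np.linalg.norm(X, axis=0)    # vector-normalized decision matrix
V = R * w                            # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness:", np.round(closeness, 3), "best option:", int(closeness.argmax()))
```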
S0377221714006432 | Traditional methods of applying classification models to credit scoring may ignore the effect of censoring. Survival analysis has been introduced for its ability to deal with censored data. The mixture cure model, an important branch of survival models, is also applied in the context of credit scoring, assuming that the study population is a mixture of never-default and will-default customers. We extend the standard mixture cure model by: (1) relaxing the independence assumption of the probability and the time of default; (2) treating the missing defaulting labels as latent variables and applying an augmentation technique; and (3) introducing a discrete truncated exponential distribution to model the time of default. Our full model is written in a hierarchical form so that the Markov chain Monte Carlo method can be applied to estimate the corresponding parameters. Through an empirical analysis, we show that both mixture models, the standard mixture cure model and the hierarchical mixture cure model (HMCM), outperform logistic regression in identifying future defaulters. We also conclude that our hierarchical Bayesian extension improves the model’s predictive performance and provides meaningful output for risk management. | Identifying future defaulters: A hierarchical Bayesian method
S0377221714006444 | The widespread use of the Internet has significantly changed the behavior of homebuyers. Using online real estate agents, homebuyers can rapidly find modern houses that meet their needs; however, most current online housing systems provide limited features. In particular, existing systems fail to consider homebuyers’ housing goals and risk attitudes. To increase effectiveness, online real estate agents should provide an efficient matching mechanism, personalized service and house ranking with the aim of increasing both buyers’ satisfaction and the deal rate. An efficient online real estate agent should provide an easy way for homebuyers to find (rank) a suitable house (alternatives) in consideration of their different housing philosophies and risk attitudes. In order to capture these ambiguous housing goals and risk attitudes, it is also indispensable to determine a satisfaction level for each fuzzy goal and constraint. In this study, we propose fuzzy goal programming with an S-shaped utility function as a decision aid to help homebuyers easily choose their preferred house via the Internet. With this decision aid, homebuyers can specify their housing goals and constraints with different priority levels and thresholds as a matching mechanism for a fuzzy search, while the matching mechanism can be translated into a standard query language for a regular relational database. Moreover, a laboratory experiment is conducted on a real case to demonstrate the effectiveness of the proposed approach. The results indicate that the proposed method provides better customer satisfaction than manual systems in housing selection service. | House selection via the internet by considering homebuyers’ risk attitudes with S-shaped utility functions
S0377221714006456 | This paper presents a cyclical square-root model for the term structure of interest rates assuming that the spot rate converges to a certain time-dependent long-term level. This model incorporates the fact that the interest rate volatility depends on the interest rate level and specifies the mean reversion level and the interest rate volatility using harmonic oscillators. In this way, we incorporate a good deal of flexibility and provide a high analytical tractability. Under these assumptions, we compute closed-form expressions for the values of different fixed income and interest rate derivatives. Finally, we analyze the empirical performance of the cyclical model versus that proposed in Cox et al. (1985) and show that it outperforms this benchmark, providing a better fitting to market data. | A cyclical square-root model for the term structure of interest rates |
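A minimal simulation sketch of a square-root process whose long-term level oscillates harmonically, in the spirit of the model above; all parameter values are invented and none of the paper's closed-form pricing results are reproduced.

```python
# Euler (full-truncation) simulation of a square-root short-rate process
# with a cyclical long-term level theta(t). Parameters are illustrative.
import numpy as np

kappa, sigma = 1.5, 0.08

def theta(t):
    """Harmonic mean-reversion level, period one year."""
    return 0.04 + 0.01 * np.sin(2 * np.pi * t)

rng = np.random.default_rng(0)
T, n = 5.0, 5000
dt = T / n
r = np.empty(n + 1); r[0] = 0.03
for k in range(n):
    rp = max(r[k], 0.0)                              # full truncation at zero
    r[k + 1] = (r[k] + kappa * (theta(k * dt) - rp) * dt
                + sigma * np.sqrt(rp * dt) * rng.standard_normal())
print(f"mean rate over path: {r.mean():.4f}")
```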
S0377221714006468 | The paper examines a method to attribute hazardous waste streams to regional production and consumption activity, and to connect these same waste streams through to different management options. We argue that a method using an input–output framework provides useful intelligence for decision makers seeking to connect elements of the management of the hazardous waste hierarchy to production and to different patterns and types of final consumption (of which domestic household consumption is one). This paper extends application of conventional demand driven input–output attribution methods to identify hazardous waste ‘hotspots’ in the supply chains of different final consumption goods and consumption groups. Using a regional case study to exposit the framework and its use, we find that domestic government final consumption of public administration production indirectly drives hazardous waste generation that goes to landfill, particularly in the domestic construction and sanitary services sectors, but also in the manufacture of wood products. | Can hazardous waste supply chain ‘hotspots’ be identified using an input–output framework? |
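The attribution logic rests on the standard Leontief demand-pull identity x = (I - A)^{-1} f; the sketch below traces illustrative waste coefficients through a three-sector toy economy, not the paper's regional data.

```python
# Toy input-output attribution: hazardous waste per unit of gross output
# is traced to final demand via the Leontief inverse. Data are invented.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # inter-industry technical coefficients
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
w = np.array([0.02, 0.30, 0.08])     # waste per unit gross output, by sector
f = np.array([100.0, 50.0, 80.0])    # final demand by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
x = L @ f                            # gross output supporting final demand
total_waste = w @ x
footprint = (w @ L) * f              # waste attributed to each demand sector
print(f"total waste: {total_waste:.2f}; by final demand: {np.round(footprint, 2)}")
```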
S0377221714006481 | In this paper, we consider the duty scheduling of sensor activities in wireless sensor networks to maximize the lifetime. We address full target coverage problems contemplating sensors used to sense data and transmit it to the base station through multi-hop communication, as well as sensors used only for communication purposes. Subsets of sensors (also called covers) are generated. Those covers are able to satisfy the coverage requirements as well as the connection to the base station. Thus, maximum lifetime can be obtained by identifying the optimal covers and allocating an operation time to each of them. The problem is solved through a column generation approach decomposed into a master problem, used to allocate the optimal time intervals during which covers are used, and a pricing subproblem, used to identify the covers leading to maximum lifetime. Additionally, Branch-and-Cut based on Benders’ decomposition and constraint programming approaches are used to solve the pricing subproblem. The approach is tested on randomly generated instances. The computational results demonstrate the efficiency of the proposed approach in solving the maximum network lifetime problem in wireless sensor networks with up to 500 sensors. | Exact approaches for lifetime maximization in connectivity constrained wireless multi-role sensor networks
S0377221714006493 | This paper fills a noticeable gap in the current economic and penology literature by proposing new performance-enhancing policies based on an efficiency analysis of a sample of male prisons in England and Wales over the period 2009/10. In addition, we advance the empirical literature by integrating the managerialism of four strategic functions of prisons, employment and accommodation, capacity utilization, quality of life in prison and the rehabilitation and re-offending of prisoners. We find that by estimating multiple models focussing on these different areas some prisons are more efficient than other establishments. In terms of policy, it is therefore necessary to consider not just an overall performance metric for individual prisons, as currently undertaken annually by the UK Ministry of Justice, but to look into the administration and managerialism of their main functions in both a business and public policy perspective. Indeed, it is further necessary to view prisons together and not as single entities, so as to obtain a best practice frontier for the different operations that management undertakes in English and Welsh prisons. | An analysis of managerialism and performance in English and Welsh male prisons |
S0377221714006511 | The deterioration in profitability of listed companies not only threatens the interests of the enterprise and internal staff, but also makes investors face significant financial loss. It is important to establish an effective early warning system for prediction of financial crisis for better corporate governance. This paper studies the phenomenon of financial distress for 107 Chinese companies that received the label ‘special treatment’ from 2001 to 2008 by the Shanghai Stock Exchange and the Shenzhen Stock Exchange. We use data mining techniques to build financial distress warning models based on 31 financial indicators and three different time windows by comparing these 107 firms to a control group of firms. We observe that the performance of neural networks is more accurate than other classifiers, such as decision trees and support vector machines, as well as an ensemble of multiple classifiers combined using majority voting. An important contribution of the paper is to discover that financial indicators, such as net profit margin of total assets, return on total assets, earnings per share, and cash flow per share, play an important role in prediction of deterioration in profitability. This paper provides a suitable method for prediction of financial distress for listed companies in China. | Prediction of financial distress: An empirical study of listed Chinese companies using data mining |
S0377221714006547 | This article studies the role of social capital in cotton production efficiency and productivity for a sample of small farms in Maharashtra, India using data envelopment analysis. Input shadow prices are computed as an indicator of the importance of social capital relative to other inputs. Results suggest social capital to be the input with the highest contribution to production efficiency after land. The Luenberger indicator is used to assess the productivity improvement associated with an investment in social capital, which is found to be on the order of 12%. Undertaking collective production activities is found to play an important role in improving productivity. This is especially relevant to agricultural households facing important economic and institutional restrictions that make it difficult to increase conventional (expensive) inputs. | Shadow prices of social capital in rural India, a nonparametric approach
S0377221714006559 | Underground mine production scheduling possesses mathematical structure similar to and yields many of the same challenges as general scheduling problems. That is, binary variables represent the time at which various activities are scheduled. Typical objectives seek to minimize costs or some measure of production time, or to maximize net present value; two principal types of constraints exist: (i) resource constraints and (ii) precedence constraints. In our setting, we maximize “discounted metal production” for the remaining life of an underground lead and zinc mine that uses three different underground methods to extract the ore. Resource constraints limit the grade, tonnage, and backfill paste (used for structural stability) in each time period, while precedence constraints enforce the sequence in which extraction (and backfill) is performed in accordance with the underground mining methods used. We tailor exact and heuristic approaches to reduce model size, and develop an optimization-based decomposition heuristic; both of these methods transform a computationally intractable problem to one for which we obtain solutions in seconds, or, at most, hours for problem instances based on data sets from the Lisheen mine near Thurles, Ireland. | Optimization-based heuristics for underground mine scheduling |
S0377221714006560 | The role of decision support systems in mitigating operational risks in firms is well established. However, there is a lack of investment in decision support systems in emerging markets, even though inadequate operational risk management is a key deterrent to external investment. This has also been exacerbated by insufficient understanding of operational risk in emerging markets, which can be attributed to past operational risk measurement techniques, limited studies on emerging markets and inadequate data. In this paper, using current operational risk techniques, the operational risk of developed and emerging market firms is measured for 100 different companies, across 4 different industry sectors and 5 different countries. Firstly, it is found that operational risk is consistently higher in emerging market firms than in developed market firms. Secondly, it is found that operational risk depends not only on the industry sector but also, and more strongly, on market development. Thirdly, it is found that market development and sector influence the shape of the operational risk distribution, in particular tail and skewness risk. Furthermore, an operational risk measurement method is provided that is applicable to emerging markets. Our results are consistent with underinvestment in decision support systems in emerging markets and imply that operational risk management can be improved by increased investment. | Operational risk: Emerging markets, sectors and measurement
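Current operational risk measurement typically follows a loss distribution approach; the sketch below is a generic Monte Carlo version with Poisson frequency and lognormal severity, with invented parameters rather than the paper's calibrated model.

```python
# Loss-distribution-approach sketch: Poisson annual loss frequency,
# lognormal severity, Monte Carlo aggregate loss, and 99.9% VaR.
# All parameters are invented, not estimates from the paper's data.
import numpy as np

rng = np.random.default_rng(42)
lam = 25                              # expected loss events per year
mu, sig = 10.0, 2.0                   # lognormal severity parameters

n_sims = 50_000
counts = rng.poisson(lam, n_sims)
agg = np.array([rng.lognormal(mu, sig, k).sum() for k in counts])

print(f"mean annual loss: {agg.mean():.3e}")
print(f"99.9% VaR:        {np.quantile(agg, 0.999):.3e}")
print(f"tail heaviness (mean/median): {agg.mean() / np.median(agg):.2f}")
```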
S0377221714006572 | We present an information good pricing model with persistently heterogeneous consumers and a rising marginal propensity for them to pirate. The dynamic pricing problem faced by a legal seller is solved using a flexible numerical procedure with demand discretisation and sales tracking. Three offsetting pricing mechanisms occur: skimming, compressing price changes, and delaying product launch. A novel trade-off in piracy’s effect on welfare is identified. We find that piracy quickens sales times and raises welfare in fixed size markets, and does the opposite in growing markets. In our model, consumers benefit from very high rates of piracy, legal sellers always dislike it, and pirate providers like moderate but not very high rates. | Welfare implications of piracy with dynamic pricing and heterogeneous consumers |
S0377221714006584 | This paper introduces a two-phase approach to solve average cost Markov decision processes, which is based on state space embedding or time aggregation. In the first phase, time aggregation is applied for policy optimization in a prescribed subset of the state space, and a novel result is applied to expand the evaluation to the whole state space. This evaluation is then used in the second phase in a policy improvement step, and the two phases are then alternated until convergence is attained. Some numerical experiments illustrate the results. | Solving average cost Markov decision processes by means of a two-phase time aggregation algorithm |
S0377221714006596 | Multi-sensor data fusion is an evolving technology whereby data from multiple sensor inputs are processed and combined. The data derived from multiple sensors can, however, be uncertain, imperfect, and conflicting. The present study is undertaken to help contribute to the continuous search for viable approaches to overcome the problems associated with data conflict and imperfection. Sensor readings, represented by belief functions, have to be fused according to their corresponding weights. Previous studies have often estimated the weights of sensor readings based on a single criterion. Mono-criteria approaches for the assessment of sensor reading weights are, however, often unreliable and inadequate for the reflection of reality. Accordingly, this work opts for the use of a multi-criteria decision aid. A modified Analytical Hierarchy Process (AHP) that incorporates several criteria is proposed to determine the weights of a sensor reading set. The approach relies on the automation of pairwise comparisons to eliminate subjectivity and reduce inconsistency. It assesses the weight of each sensor reading, and fuses the weighed readings obtained using a modified average combination rule. The efficiency of this approach is evaluated in a target recognition context. Several tests, sensitivity analysis, and comparisons with other approaches available in the literature are described. | Analytic hierarchy process for multi-sensor data fusion based on belief function theory |
S0377221714006602 | An integrated microbiological–economic framework for policy support is developed to determine the cost-effectiveness of alternative intervention methods and strategies to reduce the risk of Campylobacter in broilers. Four interventions at the farm level and four interventions at the processing stage are considered. Cost analyses are conducted for different risk reduction targets and for three alternative scenarios concerning the acceptable range of interventions. Results demonstrate that using a system-wide policy approach to risk reduction can be more cost-effective than a policy focusing purely on farm-level interventions. Allowing for chemical decontamination methods may enhance cost-effectiveness of intervention strategies further. | Systemic cost-effectiveness analysis of food hazard reduction – Campylobacter in Danish broiler supply |
S0377221714006614 | Wind power has seen strong growth over the last decade and increasingly affects electricity spot prices. In particular, prices are more volatile due to the stochastic nature of wind, such that more generation of wind energy yields lower prices. Therefore, it is important to assess the value of wind power at different locations not only for an investor but for the electricity system as a whole. In this paper, we develop a stochastic simulation model that captures the full spatial dependence structure of wind power by using copulas, incorporated into a supply and demand based model for the electricity spot price. This model is calibrated with German data. We find that the specific location of a turbine – i.e., its spatial dependence with respect to the aggregated wind power in the system – is of high relevance for its value. Many of the locations analyzed show an upper tail dependence that adversely impacts the market value. Therefore, a model that assumes a linear dependence structure would systematically overestimate the market value of wind power in many cases. This effect becomes more important for increasing levels of wind power penetration and may render the large-scale integration into markets more difficult. | Spatial dependencies of wind power and interrelations with spot price dynamics |
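A minimal sketch of copula-based spatial simulation: correlated normals are mapped to uniforms (the copula) and then to Weibull wind-speed marginals. A Gaussian copula is used here for brevity, although it has no tail dependence, so it is precisely the kind of linear-dependence model the paper warns can overestimate market value (a t or Gumbel copula would capture the upper tail); all parameters are hypothetical:

```python
import numpy as np
from scipy import stats

def gaussian_copula_wind(R, shape_k, scale_c, n_samples, seed=0):
    """Sample spatially dependent wind speeds at several sites:
    correlated normals -> uniforms (the copula) -> Weibull marginals.
    R: site-by-site correlation matrix; shape_k, scale_c: Weibull params."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(R)), R, size=n_samples)
    u = stats.norm.cdf(z)                    # uniform marginals, dependence kept
    return stats.weibull_min.ppf(u, c=shape_k, scale=scale_c)

# Two hypothetical sites with correlation 0.6
R = np.array([[1.0, 0.6], [0.6, 1.0]])
speeds = gaussian_copula_wind(R, shape_k=2.0, scale_c=8.0, n_samples=10_000)
```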
S0377221714006626 | Assuming that the proportion defective p in a production process is variable and that component lifetimes are Weibull distributed, integer nonlinear programming problems are formulated and solved to determine the optimum component inspection scheme by attributes for k-out-of-n:F system reliability demonstration using available prior knowledge. A limited beta prior model is adopted to reflect random fluctuations in p. The required number of components to test and the permissible number of component failures up to a specified censoring time are found by solving a minimisation problem with nonlinear constraints related to the tolerable average producer and consumer risks. First-order Taylor polynomials of the operating characteristic function are used to derive a quite accurate approximate solution. Lower and upper bounds are also deduced. Optimal solutions are usually robust to small variations in the Weibull parameters and prior information. Existing technical knowledge and experience are shown to be of great practical value in designing optimal sampling plans for lot acceptance. The proposed approach generalises the classical viewpoint to cases in which appreciable prior information on the fraction of defective systems exists, and also allows the analyst to determine the acceptability of a k-out-of-n:F system before assembly and to limit the range of p. Moreover, practitioners may attain substantial savings in sample size and improved assessments of the true producer and consumer risks. A 4-out-of-5:F system of independent water pumps for cooling a reactor is considered to illustrate the suggested component test plans. | Optimum attributes component test plans for k-out-of-n:F Weibull systems using prior information
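A sketch of the core quantities behind such attribute test plans: the component failure probability at the censoring time, the binomial operating characteristic of an (n, c) plan, and k-out-of-n:F system reliability. The beta prior on p, the Taylor approximation, and the constrained optimisation are not reproduced, and the numbers are hypothetical:

```python
import numpy as np
from scipy import stats

def weibull_fail_prob(t0, beta, eta):
    """Component failure probability by censoring time t0 (Weibull shape beta, scale eta)."""
    return 1.0 - np.exp(-(t0 / eta) ** beta)

def accept_prob(n_test, c_max, t0, beta, eta):
    """Operating characteristic: P(accept) when n_test components are
    tested to t0 and at most c_max failures are tolerated."""
    return stats.binom.cdf(c_max, n_test, weibull_fail_prob(t0, beta, eta))

def system_reliability_koutofn_F(k, n, q):
    """A k-out-of-n:F system fails iff at least k components fail;
    q is the component failure probability."""
    return stats.binom.cdf(k - 1, n, q)

# Illustrative numbers (hypothetical): 4-out-of-5:F pumps, Weibull(beta=1.5, eta=1000 h)
q = weibull_fail_prob(t0=500.0, beta=1.5, eta=1000.0)
print(accept_prob(n_test=30, c_max=3, t0=500.0, beta=1.5, eta=1000.0),
      system_reliability_koutofn_F(k=4, n=5, q=q))
```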
S0377221714006638 | This paper proposes and estimates a globally flexible functional form for the cost function, which we call the Neural Cost Function (NCF). The proposed specification imposes a priori all the properties that economic theory dictates and satisfies them globally. The functional form can be estimated easily using Markov Chain Monte Carlo (MCMC) techniques or standard iterative SURE. We use a large panel of U.S. banks to illustrate our approach. The results are consistent with previous knowledge about the sector and in accordance with mathematical production theory. | Global approximation to arbitrary cost functions: A Bayesian approach with application to US banking
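As a loose illustration of one way such theoretical restrictions can be built in a priori, the sketch below imposes linear homogeneity in input prices on a one-hidden-layer log-cost approximation by normalising all prices by the first price (scaling every price by λ then scales cost by λ by construction). This is not the authors' NCF specification, monotonicity and concavity are not enforced here, and the parameters are random placeholders:

```python
import numpy as np

def softplus(z):
    """Numerically stable softplus log(1 + exp(z))."""
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def neural_log_cost(params, log_w, log_y):
    """One-hidden-layer approximation of a log cost function.
    C(w, y) = w0 * exp(f(log(w/w0), log y)) is homogeneous of degree 1 in w."""
    W1, b1, w2, b2 = params
    x = np.concatenate([log_w[1:] - log_w[0], [log_y]])  # price-normalized inputs
    h = softplus(W1 @ x + b1)                            # hidden layer
    return log_w[0] + (w2 @ h + b2)                      # log C

rng = np.random.default_rng(0)
params = (rng.normal(size=(5, 3)), np.zeros(5), rng.normal(size=5), 0.0)
print(neural_log_cost(params, log_w=np.log([1.0, 2.0, 0.5]), log_y=np.log(10.0)))
```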
S0377221714006651 | This is a review of the literature on variants and extensions of the standard location-routing problem published since the last survey, by Nagy and Salhi, appeared in 2006. We propose a classification of problem variants, provide concise paper excerpts that convey the central ideas of each work, discuss recent developments in the field, and list promising topics for further research. | A survey of variants and extensions of the location-routing problem |
S0377221714006870 | This paper examines the relation between dividend policy, managerial ownership and debt financing for a large sample of firms listed on NYSE, AMEX and NASDAQ. In addition to standard parametric estimation methods, we use a semi-parametric approach, which captures non-linearities in the data more effectively. In line with the alignment effect of managerial ownership, our results support a negative relationship between managerial ownership and dividends when managerial ownership is at relatively low levels. However, this negative relationship turns positive at very high levels of managerial ownership. We also find that the relationship between managerial ownership and dividends may be more complex than previously thought, and that it differs significantly across firms with different levels of debt/financial constraints. The results are consistent with the view that agency theory provides useful insights but cannot fully explain how firms determine their dividend policy. | Dividend policy, managerial ownership and debt financing: A non-parametric perspective
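The abstract does not specify the semi-parametric estimator, so as a generic stand-in the sketch below fits a Nadaraya–Watson kernel smoother, the simplest non-parametric device that lets the ownership–dividend relation bend (illustrated here with synthetic U-shaped data; all numbers are hypothetical):

```python
import numpy as np

def kernel_regression(x_grid, x, y, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    out = np.empty(len(x_grid))
    for i, x0 in enumerate(x_grid):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # kernel weights
        out[i] = (w @ y) / w.sum()                      # locally weighted mean
    return out

# Synthetic U-shaped relation: dividends fall then rise with ownership
rng = np.random.default_rng(1)
own = rng.uniform(0, 1, 300)
div = (own - 0.5) ** 2 + rng.normal(0, 0.02, 300)
fit = kernel_regression(np.linspace(0, 1, 50), own, div, bandwidth=0.1)
```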
S0377221714006882 | The use of Markov Decision Processes for Inspection, Maintenance and Rehabilitation of civil engineering structures relies on several transition matrices related to the stochastic degradation process, maintenance actions and imperfect inspections. Point estimators for these matrices are usually used, evaluated by statistical inference and/or expert judgement. Thus, considerable epistemic uncertainty often veils the true values of these matrices. Our contribution through this paper is threefold. First, we present a methodology for incorporating epistemic uncertainties into the dynamic programming algorithms used to solve finite-horizon Markov Decision Processes (which may be partially observable). Second, we propose a methodology based on Dirichlet distributions which addresses, in our view, much of the controversy found in the literature about estimating Markov transition matrices. Third, we show how the complexity resulting from the use of Monte-Carlo simulations over the transition matrices can be greatly reduced in the framework of dynamic programming. The proposed model is applied to a concrete bridge under degradation in order to provide the optimal inspection and maintenance strategy. The influence of epistemic uncertainties on the optimal solution is underlined through a sensitivity analysis of the input data. | Partially Observable Markov Decision Processes incorporating epistemic uncertainties
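A minimal sketch of the Dirichlet treatment of transition-matrix uncertainty: each row of the degradation matrix is drawn from a Dirichlet whose parameters act as pseudo-counts (e.g. expert judgement plus inspection data), and Monte-Carlo draws propagate the epistemic uncertainty into the cost of a fixed policy. The dynamic programming and partial-observability machinery are not reproduced, and the three-state model is hypothetical:

```python
import numpy as np

def sample_transition_matrix(alpha, rng):
    """Draw one transition matrix: each row ~ Dirichlet(alpha_row)."""
    return np.vstack([rng.dirichlet(a) for a in alpha])

def mc_policy_cost(alpha, costs, horizon, n_draws=1000, seed=0):
    """Epistemic Monte-Carlo: distribution of the cumulative expected cost of a
    fixed policy over the Dirichlet-uncertain degradation matrix."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_draws)
    for i in range(n_draws):
        P = sample_transition_matrix(alpha, rng)
        dist = np.zeros(len(costs)); dist[0] = 1.0   # start in best condition state
        total = 0.0
        for _ in range(horizon):
            total += dist @ costs                    # expected cost this period
            dist = dist @ P                          # propagate state distribution
        out[i] = total
    return out

# Hypothetical 3-state degradation model (pseudo-counts) and state costs
alpha = np.array([[8.0, 2.0, 0.5], [0.1, 6.0, 3.0], [0.1, 0.1, 9.0]])
costs = np.array([0.0, 1.0, 10.0])
samples = mc_policy_cost(alpha, costs, horizon=20)
```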
S0377221714007103 | Dynamic pricing has become a common form of electricity tariff, where the price of electricity varies in real time based on realized electricity supply and demand. Hence, optimizing industrial operations to benefit from periods with low electricity prices is vital to maximizing the benefits of dynamic pricing. In the case of water networks, energy consumed by pumping is a substantial cost for water utilities, and optimizing pump schedules to accommodate the changing price of energy while ensuring a continuous supply of water is essential. In this paper, a Mixed-Integer Non-linear Programming (MINLP) formulation of the optimal pump scheduling problem is presented. Due to the non-linearities, the typical size of water networks, and the discretization of the planning horizon, the problem is not solvable within reasonable time using standard optimization software. We present a Lagrangian decomposition approach that exploits the structure of the problem, leading to smaller problems that are solved independently. The Lagrangian decomposition is coupled with a simulation-based, improved limited discrepancy search algorithm that is capable of finding high-quality feasible solutions. The proposed approach finds solutions with guaranteed upper and lower bounds. These solutions are compared to those found by a mixed-integer linear programming approach, which uses a piecewise linearization of the non-linear constraints to find a global optimum of the relaxation. Numerical testing is conducted on two real water networks, and the results illustrate significant cost savings due to optimized pump schedules. | A Lagrangian decomposition approach for the pump scheduling problem in water networks
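A toy sketch of the Lagrangian idea: dualising the coupling (demand) constraint makes the relaxed problem separate by time period, and a projected subgradient step updates the multiplier. The paper's MINLP with hydraulic non-linearities and its limited discrepancy search are not reproduced; one pump, linear costs, and all numbers here are hypothetical:

```python
import numpy as np

def lagrangian_pump_sketch(prices, q, demand, n_iter=200):
    """Toy Lagrangian relaxation: one pump, binary on/off x[t] per period,
    coupling constraint q * sum(x) >= demand dualized with multiplier lam.
    The relaxed problem separates by period: run iff price < lam."""
    lam, best_lb = 0.0, -np.inf
    for it in range(1, n_iter + 1):
        x = (prices < lam).astype(float)                    # per-period subproblems
        lb = np.sum((prices - lam) * q * x) + lam * demand  # dual (lower-bound) value
        best_lb = max(best_lb, lb)
        g = demand - q * x.sum()                            # subgradient of the dual
        lam = max(0.0, lam + (1.0 / it) * g)                # projected, diminishing step
    return best_lb, lam

prices = np.array([30., 22., 55., 18., 40., 25.])  # hypothetical spot prices
lb, lam = lagrangian_pump_sketch(prices, q=100.0, demand=350.0)
```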