FileName | Abstract | Title |
---|---|---|
S0377221716301515 | We study how retailers can time their service investments when demand for a product is uncertain and consumers care both about price and service when choosing which retailer to buy from. By “service” we mean activities a retailer can invest in and which can drive traffic into the store. We consider offering extended operating hours as an example of such service and examine the timing of service investments for two competing retailers. Specifically, we analyze two retailers who compete on price and service level, and characterize both the prices and the service levels, as well as the timing of their service investment decisions. Our model also considers two effects of retailer service—the effect on total demand for the product and the effect on a retailer’s market share. We show that investing in service before demand realization, although counterintuitive, can be beneficial for competing retailers. On the other hand, a large mismatch between actual and expected demand and a low probability of high demand justify postponing service investments until after demand is observed. We also show that the incentive to invest in service before demand realization becomes more pronounced when service investments can increase the overall demand for the product in addition to protecting market share. Our findings have important implications for retailers with regard to the timing of their service investment decisions. | Timing of service investments for retailers under competition and demand uncertainty |
S0377221716301527 | When multiple products compete for the same storage space, their optimal individual lot sizes may need to be reduced to accommodate the storage needs of other products. This challenge is exacerbated with the presence of quantity discounts, which tend to entice larger lot sizes. Under such circumstances, firms may wish to consider storage capacity expansion as an option to take full advantage of quantity discounts. This paper aims to simultaneously determine the optimal storage capacity level along with individual lot sizes for multiple products being offered quantity discounts (either all-units discounts, incremental discounts, or a mixture of both). By utilizing Lagrangian techniques along with a piecewise-linear approximation for capacity cost, our algorithms can generate precise solutions regardless of the functional form of capacity cost (i.e., concave or convex). The algorithms can incorporate simultaneous lot-sizing decisions for thousands of products in a reasonable solution time. We utilize numerical examples and sensitivity analysis to understand the key factors that influence the capacity expansion decision and the performance of the algorithms. The primary characteristic that influences the capacity expansion decision is the size of the quantity discount offered, but variability in demand and capacity per unit influence the expansion decision as well. Furthermore, we discover that all-units quantity discounts are more likely to lead to capacity expansion compared to incremental quantity discounts. Our analysis illuminates the potential for significant savings available to companies willing to explore the option of increasing storage capacity to take advantage of quantity discount offerings for their purchased products. | Shared resource capacity expansion decisions for multiple products with quantity discounts |
S0377221716301539 | In this paper we search for conditions on age-structured differential games to make their analysis more tractable. We focus on a class of age-structured differential games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are sub-game perfect. By means of a simple age-structured advertising problem, we provide an application of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium. | Age-structured linear-state differential games |
S0377221716301540 | We present the Incident Based-Boat Allocation Model (IB-BAM), a multi-objective model designed to allocate search and rescue resources. The decision of where to locate search and rescue boats depends upon a set of criteria that are unique to a given problem, such as the density and types of incidents responded to in the area of interest, resource capabilities, geographical factors and governments’ business rules. Thus, traditional models that incorporate only political decisions are no longer appropriate. IB-BAM considers all these criteria and determines optimal boat allocation plans with the objectives of minimizing response time to incidents, fleet operating cost and the mismatch between boats’ workload and operation capacity hours. The IB-BAM methodology includes three major steps: In step 1, the model ranks and assigns a weight to each incident type according to its severity, using the Analytical Hierarchy Process technique. In step 2, considering historical incident data, a Zonal Distribution Model generates aggregated weighted demand locations. In step 3, a multi-objective mixed integer program determines locations and responsibility zones of search and rescue boats. We demonstrate the effectiveness of the proposed model with respect to the Aegean Sea responsibility area of the Turkish Coast Guard boats. The results show that IB-BAM implementation led to a more effective utilization of boats considering the three objectives of the model. | A multi-objective model for locating search and rescue boats |
S0377221716301552 | This paper presents a novel approach to the problem of infrastructure development by integrating technical, economic and operational aspects, as well as the interactions between the entities who jointly carry out the project. The problem is defined within the context of a Public Private Partnership (PPP), where a public entity delegates the design, construction and maintenance of an infrastructure system to a private entity. Despite the benefits of this procurement method, the relationship between the two entities is inherently conflictive. Three main factors give rise to such conflict: the goals of the public and private party do not coincide, there is information asymmetry between them and their interaction unfolds in environments under uncertainty. The theory of contracts refers to this problem as a principal-agent problem; however, due to the complexity of the problem, it is necessary to recreate a dynamic interaction between the principal (i.e., the public entity) and the agent (i.e., the private entity) while including the monitoring of the infrastructure performance as an essential part of the interaction. The complex relationship between the sequential actions of players and the time-dependent behavior of a physical system is explored using a hybrid agent-based simulation model. The model is illustrated with several examples that show the versatility of the approach and its ability to accommodate the different decision strategies of the players (i.e., principal, agent) and the model of a physical infrastructure system. | A dynamic principal-agent framework for modeling the performance of infrastructure |
S0377221716301564 | We introduce in this paper a new definition for the overall system efficiency in network DEA, which is inspired by the “weak link” notion in supply chains and the maximum-flow/minimum-cut problem in networks. We use a two-phase max-min optimization technique in a multi-objective programming framework to estimate the individual stage efficiencies and the overall system efficiency in two-stage processes of varying complexity. This enables us to estimate unique and unbiased efficiency scores and, if required, to drive the efficiency assessments effectively in line with specific priorities given to the stages. A comparison with the multiplicative decomposition approach on data drawn from the literature brings to light the advantages of our method. | The “weak-link” approach to network DEA for two-stage processes |
S0377221716301576 | The paper suggests a differential game of advertising competition among three symmetric firms, played over an infinite horizon. The objective of the research is to see if a cooperative agreement among the firms can be sustained over time. For this purpose the paper determines the characteristic functions (value functions) of individual players and all possible coalitions. We identify an imputation that belongs to the core. Using this imputation guarantees that, in any subgame starting out on the cooperative state trajectory, no coalition has an incentive to deviate from what was prescribed by the solution of the grand coalition’s optimization problem. | Sustaining cooperation in a differential game of advertising goodwill accumulation |
S0377221716301588 | Attracting customers in the online-to-offline (O2O) business is increasingly difficult as more competitors are entering the O2O market. Creating and maintaining a sustainable competitive advantage in crowded O2O markets requires optimizing the joint pricing-location decision and understanding customers’ behaviors. To investigate the evolutionary location and pricing behaviors of service merchants, this paper proposes an agent-based competitive O2O model in which the service merchants are modeled as profit-maximizing agents and customers as utility-maximizing agents that are connected by social networks through which they can share their service experiences by word of mouth (WOM). It is observed that the service merchant should standardize its service management to offer a stable expectation to customers if their WOM can be ignored. On the other hand, when facing more socialized customers, firms with variable service quality should adopt aggressive pricing and location strategies. Although customers’ social learning facilitates the diversity of services in O2O markets, their online herd behaviors would lead to unpredictable offline demand variations, which consequently pose performance risk to the service merchants. | Evolutionary location and pricing strategies for service merchants in competitive O2O markets |
S037722171630159X | In this paper we consider the Asymmetric Quadratic Traveling Salesman Problem (AQTSP). Given a directed graph and a function that maps every pair of consecutive arcs to a cost, the problem consists in finding a cycle that visits every vertex exactly once and such that the sum of the costs is minimal. We propose an extended Linear Programming formulation that has a variable for each cycle in the graph. Since the number of cycles is exponential in the graph size, we propose a column generation approach. Moreover, we apply a particular reformulation-linearization technique on a compact representation of the problem, and compute lower bounds based on Lagrangian relaxation. We compare our new bounds with those obtained by some linearization models proposed in the literature. Computational results on benchmark sets used in the literature show that our lower bounding procedures are very promising. | Lower bounding procedure for the asymmetric quadratic traveling salesman problem |
S0377221716301606 | The sampling information for the cost-effectiveness analysis typically comes from different health care centers, and, as far as we know, it is taken for granted that the distribution of the cost and the effectiveness does not vary across centers. We argue that this assumption is unrealistic, and prove that failing to account for sample heterogeneity will typically give misleading results. Consequently, a cost-effectiveness procedure for heterogeneous samples is proposed here. The proposed cost-effectiveness procedure consists of a Bayesian clustering to measure the sample heterogeneity, and a meta-analysis to account for the specific clustering structure of the data. Examples with real data illustrate this methodology for normal and lognormal models, and the results are compared with those we would obtain if homogeneity of the samples were assumed. | Cost-effectiveness analysis for heterogeneous samples |
S0377221716301618 | Clustering algorithms partition a set of n objects into p groups (called clusters), such that objects assigned to the same groups are homogeneous according to some criteria. To derive these clusters, the data input required is often a single n × n dissimilarity matrix. Yet for many applications, more than one instance of the dissimilarity matrix is available and so, to conform to model requirements, it is common practice to aggregate (e.g., sum up, average) the matrices. This aggregation practice results in clustering solutions that mask the true nature of the original data. In this paper we introduce a clustering model which, to handle the heterogeneity, uses all available dissimilarity matrices and identifies groups of individuals that cluster the objects in a similar way. The model is a nonconvex problem and difficult to solve exactly, and we thus introduce a Variable Neighborhood Search heuristic to provide solutions efficiently. Computational experiments and an empirical application to perception of chocolate candy show that the heuristic algorithm is efficient and that the proposed model is suited for recovering heterogeneous data. Implications for clustering researchers are discussed. | A model for clustering data from heterogeneous dissimilarities |
S037722171630162X | Discontinuities are common in the pricing of financial derivatives and have a tremendous impact on the accuracy of the quasi-Monte Carlo (QMC) method. If, however, the discontinuities are parallel to the axes, good efficiency of the QMC method can still be expected. By realigning the discontinuities to be axes-parallel, Wang and Tan (2013) succeeded in recovering the high efficiency of the QMC method for a special class of functions. Motivated by this work, we propose an auto-realignment method to deal with more general discontinuous functions. The k-means clustering algorithm, a classical algorithm of machine learning, is used to select the most representative normal vectors of the discontinuity surface. By applying this new method, the discontinuities of the resulting function are realigned to be friendly for the QMC method. Numerical experiments demonstrate that the proposed method significantly improves the performance of the QMC method. | An auto-realignment method in quasi-Monte Carlo for pricing financial derivatives with jump structures |
S0377221716301631 | We present a new variant of the full 2-split algorithm, the Quadrant Shrinking Method (QSM), for finding all nondominated points of a tri-objective integer program. The algorithm is easy to implement and solves at most 3|Y_N| + 1 single-objective integer programs when computing the nondominated frontier, where Y_N is the set of all nondominated points. A computational study demonstrates the efficacy of QSM. | The Quadrant Shrinking Method: A simple and efficient algorithm for solving tri-objective integer programs |
S0377221716301643 | A modal shift from road transport towards inland water or rail transport could reduce the total greenhouse gas emissions and societal impact associated with Municipal Solid Waste management. However, this shift will take place only if demonstrated to be at least cost-neutral for the decision makers. In this paper we examine the feasibility of using multimodal truck and inland water transport, instead of truck transport, for shipping separated household waste in bulk from collection centres to waste treatment facilities. We present a dynamic tactical planning model that minimises the sum of transportation costs, external environmental and societal costs. The Municipal Solid Waste Service Network Design Problem allocates Municipal Solid Waste volumes to transport modes and determines transportation frequencies over a planning horizon. This generic model is applied to a real-life case in Flanders, the northern region of Belgium. Computational results show that multimodal truck and inland water transportation can compete with truck transport by avoiding or reducing transhipments and using barge convoys. | A service network design model for multimodal municipal solid waste transport |
S0377221716301655 | We study due-date assignment problems with common flow allowance on a proportionate flowshop. We focus on both minsum and minmax objectives. Both cases are extended to a setting of a due-window. The proposed solution procedures are shown to be significantly different from those of the single machine problems. All the problems studied here are solved in polynomial time: the minsum problems in O(n^2) and the minmax problems in O(n log n), where n is the number of jobs. | Minsum and minmax scheduling on a proportionate flowshop with common flow-allowance |
S0377221716301667 | In this work, we consider some basic sports scheduling problems and introduce the notions of graph theory which are needed to build adequate models. We show, in particular, how edge coloring can be used to construct schedules for sports leagues. Due to the emergence of various practical requirements, one cannot be restricted to classical schedules given by standard constructions, such as the circle method, to color the edges of complete graphs. The need of exploring the set of all possible colorings inspires the design of adequate coloring procedures. In order to explore the solution space, local search procedures are applied. The standard definitions of neighborhoods that are used in such procedures need to be extended. Graph theory provides efficient tools for describing various move types in the solution space. We show how formulations in graph theoretical terms give some insights to conceive more general move types. This leads to a series of open questions which are also presented throughout the text. | Edge coloring: A natural model for sports scheduling |
S0377221716301679 | This paper models and analyzes tier-captive autonomous vehicle storage and retrieval systems. While previous models assume sequential commissioning of the lift and vehicles, we propose a parallel processing policy for the system, under which an arrival transaction can request the lift and the vehicle simultaneously. To investigate the performance of this policy, we formulate a fork-join queueing network in which an arrival transaction will be split into a horizontal movement task served by the vehicle and a vertical movement task served by the lift. We develop an approximation method based on decomposition of the fork-join queueing network to estimate the system performance. We build simulation models to validate the effectiveness of the analytical models. The results show that the fork-join queueing network is accurate in estimating the system performance under the parallel processing policy. Numerical experiments and a real case are carried out to compare the system response time of retrieval transactions under parallel and sequential processing policies. The results show that, in systems with fewer than 10 tiers, the parallel processing policy outperforms the sequential processing policy by at least 5.51 percent. The advantage of the parallel processing policy decreases with rack height and aisle length. In systems with more than 10 tiers and a length-to-height ratio larger than 7, we can find a critical retrieval transaction arrival rate, below which the parallel processing policy outperforms the sequential processing policy. | Modeling parallel movement of lifts and vehicles in tier-captive vehicle-based warehousing systems |
S0377221716301862 | The two-echelon vehicle routing problem (2E-VRP) consists in making deliveries to a set of customers using two distinct fleets of vehicles. First-level vehicles pick up requests at a distribution center and bring them to intermediate sites. At these locations, the requests are transferred to second-level vehicles, which deliver them. This paper addresses a variant of the 2E-VRP that integrates constraints arising in city logistics such as time window constraints, synchronization constraints, and multiple trips at the second level. The corresponding problem is called the two-echelon multiple-trip vehicle routing problem with satellite synchronization (2E-MTVRP-SS). We propose an adaptive large neighborhood search to solve this problem. Custom destruction and repair heuristics and an efficient feasibility check for moves have been designed and evaluated on modified benchmarks for the VRP with time windows. | An adaptive large neighborhood search for the two-echelon multiple-trip vehicle routing problem with satellite synchronization |
S0377221716301874 | We study the predictive power of industry-specific economic sentiment indicators for future macro-economic developments. In addition to the sentiment of firms towards their own business situation, we study their sentiment with respect to the banking sector – their main credit providers. The use of industry-specific sentiment indicators results in a high-dimensional forecasting problem. To identify the most predictive industries, we present a bootstrap Granger Causality test based on the Adaptive Lasso. This test is more powerful than the standard Wald test in such high-dimensional settings. Forecast accuracy is improved by using only the most predictive industries rather than all industries. | The predictive power of the business and bank sentiment of firms: A high-dimensional Granger Causality approach |
S0377221716301886 | We devote this paper to study a class of interval valued multiobjective programming problems. For this we consider two order relations LU and LS on the set of all closed intervals and propose many concepts of Pareto optimal solutions. Based on convexity concepts (viz. LU and LS-convexity) and generalized differentiability (viz. gH-differentiability) of interval valued functions, the KKT optimality conditions for aforesaid problems are obtained. In addition, we compare our results with the results given in Wu (2009) and we show some advantages of our results. The theoretical development is illustrated by suitable examples. | KKT optimality conditions in interval valued multiobjective programming with generalized differentiable functions |
S0377221716301898 | A recently proposed metaheuristic called the Wolf Search Algorithm (WSA) has demonstrated its efficacy for various hard-to-solve optimization problems. In this paper, an improved version of WSA, namely Eidetic-WSA with a global memory structure (GMS), or eWSA for short, is presented. eWSA makes use of the GMS to improve its search for the optimal fitness value by preventing mediocre visited places in the search space from being visited again in future iterations. Inherited from swarm intelligence, search agents in eWSA and the traditional WSA converge on an optimal solution although the agents behave and make decisions autonomously. Heuristic information gathered from the collective memory of the swarm search agents is stored in the GMS. This heuristic information eventually leads to faster convergence and an improved optimal fitness. The concept is similar to a hybrid metaheuristic based on WSA and Tabu Search. eWSA is rigorously tested on seven standard optimization functions. In particular, eWSA is compared with two state-of-the-art metaheuristics, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). eWSA shares some similarity with both approaches with respect to directed-random search. The similarity with ACO is, however, stronger as ACO uses pheromones as global information references that allow a balance between using previous knowledge and exploring new solutions. Under comparable experimental settings (identical population size and number of generations), eWSA is shown to outperform both ACO and PSO with statistical significance. When the same computation time is allotted, only ACO can be outperformed, owing to eWSA's comparatively long run time per iteration. | Eidetic Wolf Search Algorithm with a global memory structure |
S0377221716301904 | We propose an equilibrium model that allows us to analyze the long-run impact of the electricity market design on transmission line expansion by the regulator and investment in generation capacity by private firms in liberalized electricity markets. The model incorporates investment decisions of the transmission system operator and private firms in expectation of an energy-only market and cost-based redispatch. In different specifications we consider the cases of one vs. multiple price zones (market splitting) and analyze different approaches to recovering network cost, in particular lump-sum, generation-capacity-based, and energy-based fees. In order to compare the outcomes of our multilevel market model with a first-best benchmark, we also solve the corresponding integrated planner problem. Using two test networks we illustrate that energy-only markets can lead to suboptimal locational decisions for generation capacity and thus imply excessive network expansion. Market splitting alleviates these problems only partially. These results are valid for all considered types of network tariffs, although investment slightly differs across those regimes. | Transmission and generation investment in electricity markets: The effects of market splitting and network fee regimes |
S0377221716301916 | Färe, Grosskopf, and Lovell (1985) merged Farrell’s input and output oriented technical efficiency measures into a new graph-type approach known as the hyperbolic distance function (HDF). In spite of its appealing special structure in allowing for the simultaneous and equiproportionate reduction in inputs and increase in outputs, the HDF is a non-linear optimization problem that is hard to solve, particularly when dealing with technologies operating under variable returns to scale. By connecting the HDF to the directional distance function, we propose a linear programming based procedure for estimating the exact value of the HDF within the non-parametric framework of data envelopment analysis. We illustrate the computational effectiveness of the algorithm on several real-world and simulated data sets, generating the optimal value of the HDF by generally solving at most two linear programs. Moreover, our approach has several desirable properties such as: (1) introducing a computational dual formulation for the HDF and providing an economic interpretation in terms of shadow prices; (2) being readily adaptable to measure hyperbolic-oriented super-efficiency; and (3) being flexible enough to deal with HDF-based efficiency measures on environmental technologies. | Estimating the hyperbolic distance function: A directional distance function approach |
S0377221716301928 | We consider a fluid queue with two modes of service, that represents a production facility, where the processing of the customers (units) is typically carried out at a much faster time-scale than the machine-related processes. We examine the strategic behavior of the customers, regarding the joining/balking dilemma, under two levels of information upon arrival. Specifically, just after arriving and before making the decision, a customer observes the level of the fluid, but may or may not get informed about the state of the server (fast/slow). Assuming that the customers evaluate their utilities based on a natural reward/cost structure, which incorporates their desire for processing and their unwillingness to wait, we derive symmetric equilibrium strategy profiles. Moreover, we illustrate various effects of the information level on the strategic behavior of the customers. The corresponding social optimization problem is also studied and the inefficiency of the equilibrium strategies is quantified via the Price of Anarchy (PoA) measure. | Strategic behavior in an observable fluid queue with an alternating service process |
S0377221716301953 | We consider a setting in which a company not only has a fleet of capacitated vehicles and drivers available to make deliveries, but may also use the services of occasional drivers who are willing to make a single delivery using their own vehicle in return for a small compensation if the delivery location is not too far from their own destination. The company seeks to make all the deliveries at minimum total cost, i.e., the cost associated with its own vehicles and drivers plus the compensation paid to the occasional drivers. The option to use occasional drivers to make deliveries gives rise to a new and interesting variant of the classical capacitated vehicle routing problem. We design and implement a multi-start heuristic which produces solutions with small errors when compared with optimal solutions obtained by solving an integer programming formulation with a commercial solver. A comprehensive computational study provides valuable insight into the potential of using occasional drivers to reduce delivery costs, focusing primarily on the number and flexibility of occasional drivers and the compensation scheme employed. | The Vehicle Routing Problem with Occasional Drivers |
S0377221716301965 | Model verification and validation (V&V) is one of the most important activities in simulation modelling. Model validation is especially challenging for Agent-Based Simulation (ABS). Techniques that can help to improve V&V in simulation modelling are needed. This paper proposes a V&V technique called Test-Driven Simulation Modelling (TDSM) which applies techniques from Test-Driven Development in software engineering to simulation modelling. The main principle in TDSM is that a unit test for a simulation model has to be specified before the simulation model is implemented. Hence, TDSM explicitly embeds V&V in simulation modelling. We use a case study in maritime search operations to demonstrate how TDSM can be used in practice. Maritime search operations (and search operations in general) are one of the classic applications of Operational Research (OR). Hence, we can use analytical models from the vast search theory literature for unit tests in TDSM. The results show that TDSM is a useful technique in the verification and validation of simulation models, especially ABS models. This paper also shows that ABS can offer an alternative modelling approach in the analysis of maritime search operations. | Test-driven simulation modelling: A case study using agent-based maritime search-operation simulation |
S0377221716301989 | As a bioterrorist anthrax attack has serious consequences, an emergency management plan that can reduce the number of casualties should be studied. However, few papers have studied this area. This paper proposes a model which links the disease progression, the related medical intervention actions and the logistics deployment together to help crisis managers to extract crucial insights on emergency logistics management from a strategic level standpoint. The model is a multi-period one that accounts for the period in which patients progress into the different disease stages, the period in which medical intervention begins, and the change in the recovery rate caused by the time lag between these two periods. Our model can support the decision making process in case of a real anthrax attack and evaluate the important factors that can have a great impact on the number of casualties. | Modeling the logistics response to a bioterrorist anthrax attack |
S0377221716302016 | The inventory-staggering problem is a multi-item inventory problem in which replenishment cycles are scheduled or offset in order to minimize the maximum inventory level over a given planning horizon. We incorporate symmetry-breaking constraints in a mixed-integer programming model to determine optimal and near-optimal solutions. Local-search heuristics and evolutionary polishing heuristics are also presented to achieve effective and efficient solutions. We examine extensions of the problem that include a continuous-time framework as well as the effect of stochastic demand. | Offsetting inventory replenishment cycles |
S037722171630203X | Given the evolution in the agricultural sector and the new challenges it faces, managing agricultural supply chains efficiently has become an attractive topic for researchers and practitioners. Against this background, the integration of uncertain aspects has continuously gained importance for managerial decision making since it can lead to an increase in efficiency, responsiveness, business integration, and ultimately in market competitiveness. In order to capture appropriately the uncertain conjuncture of most agricultural real-life applications, an increasing amount of research effort is especially dedicated to treating uncertainty. In particular, quantitative modeling approaches have found extensive use in agricultural supply chain management. This paper provides an overview of the latest advances and developments in the application of operations research methodologies to handling uncertainty occurring in the agricultural supply chain management problems. It seeks to: (i) offer a representative overview of the predominant research topics, (ii) highlight the most pertinent and widely used frameworks, and (iii) discuss the emergence of new operations research advances in the agricultural sector. The broad spectrum of reviewed contributions is classified and presented with respect to three most relevant discerned features: uncertainty modeling types, programming approaches, and functional application areas. Ultimately, main review findings are pointed out and future research directions which emerge are suggested. | Handling uncertainty in agricultural supply chain management: A state of the art |
S0377221716302041 | In this paper, we present a general EOQ model for items that are subject to inspection for imperfect quality. Each lot that is delivered to the sorting facility undergoes 100 per cent screening and the percentage of defective items per lot reduces according to a learning curve. The generality of the model is viewed as important from both an academic and a practitioner perspective. The mathematical formulation considers arbitrary functions of time that allow the decision maker to assess the consequences of a diverse range of strategies by employing a single inventory model. A rigorous methodology is utilised to show that the solution is a unique global optimum, and a general step-by-step solution procedure is presented for continuous intra-cycle periodic review applications. The value of the temperature history and flow time through the supply chain is also used to determine an efficient policy. Furthermore, coordination mechanisms that may affect the supplier and the retailer are explored to improve inventory control at both echelons. The paper provides illustrative examples that demonstrate the application of the theoretical model in different settings and lead to the generation of interesting managerial insights. | Efficient inventory control for imperfect quality items |
S0377221716302053 | Inverse data envelopment analysis (DEA) is a reversed optimization problem that can serve as a useful planning tool for managerial decisions by providing information such as what level of resources (or outcomes) should be invested (or produced) to achieve a desired level of competitiveness, whereas conventional DEA focuses mainly on a post-hoc assessment of organizational performance. Inverse DEA studies, however, are based on an assumption that the efficiency level of observed decision making units (DMUs) will not change within the period of interest, which in fact confines the use of inverse DEA to a sensitivity analysis by simply addressing what alternative levels of input and/or output would have resulted in the same efficiency score. In this paper, we discuss an inverse DEA problem considering expected changes of the production frontier in the future by integrating the inverse optimization problem with a time series application of DEA so that it can be an ex-ante decision support tool for new product target setting practices. We use a vehicle engine development case to demonstrate the proposed method. | Inverse DEA with frontier changes for new product target setting |
S037722171630220X | The linguistic term set is an applicable and flexible technique in qualitative decision making (QDM). To further develop the linguistic term set, this paper proposes a generalized asymmetric linguistic term set (GALTS) based on the asymmetric sigmoid semantics, which belongs to an asymmetric and non-uniform linguistic term set, and can be used to address the QDM problems involving risk appetites of the decision maker (DM). Then, a value-at-risk fitting (VARF) approach is designed for obtaining the risk appetite parameters of the GALTS and six desirable properties of the GALTS are analyzed, i.e., asymmetry, non-uniformity, generality, variability, range consistency, and diminishing-utility. Based on the above approaches and the generalized asymmetric linguistic preference relations (GALPRs), a QDM process involving risk appetites of the DM is designed. Because the GALPRs consist of subjective information provided by the DM, the process is not perfectly consistent and is usually difficult to change or repeat. Thus, a transitivity improvement approach is investigated, and the corresponding calculation steps are provided. Finally, an example dealing with the problem of investment decision making is provided, and the results fully demonstrate the validity of the proposed methods for QDM involving risk appetites. | Generalized asymmetric linguistic term set and its application to qualitative decision making involving risk appetites |
S0377221716302223 | We provide a new portfolio decomposition formula that sheds light on the economics of portfolio choice for investors following the mean-variance (MV) criterion. We show that the number of components of a dynamic portfolio strategy can be reduced to two: the first is preference free and hedges the risk of a discount bond maturing at the investor’s horizon while the second hedges the time variation in pseudo relative risk tolerance. Both components entail strong horizon effects in the dynamic asset allocation as a result of time-varying risk tolerance and investment opportunity sets. We also provide closed-form solutions for the optimal portfolio strategy in the presence of market return predictability. The model parameters are estimated over the period 1963 to 2012 for the U.S. market. We show that (i) intertemporal hedging can be very large, (ii) the MV criterion hugely understates the true extent of risk aversion for high values of the risk aversion parameter, and the more so the shorter the investment horizon, and (iii) the efficient frontiers seem problematic for investment horizons shorter than one year but satisfactory for large horizons. Overall, adopting the MV model leads to acceptable results for medium and long term investors endowed with medium or high risk tolerance, but to very problematic ones otherwise. | Understanding dynamic mean variance asset allocation |
S0377221716302247 | Conventional data envelopment analysis (DEA) models are designed for measuring the productive efficiency of decision making units (DMUs) based merely on historical data. However, in many practical applications, such past results are not sufficient for evaluating a DMU's performance in highly volatile operating environments, such as those with highly volatile crude oil prices and currency exchange rates. That is, in such environments, a DMU's whole performance may be seriously distorted if its future performance, which is sensitive to crude oil price volatility and/or currency fluctuations, is ignored in the evaluation process. However, despite its importance, to our knowledge, there are no DEA models proposed in the literature that explicitly take future performance volatility into account. Hence, this research aims at developing a new system of DEA models that incorporate a DMU's uncertain future performance, and thus can be applied to fully measure its efficiency. | DEA models incorporating uncertain future performance |
S0377221716302259 | Distance functions in production theory are mathematical structures that characterize the belonging to the reference technology through a numerical value, behave as technical efficiency measures when the focus is analyzing an observed input–output vector within its production possibility set and present a dual relationship with some support function (profit, revenue, cost function). In this paper, we endow the well-known weighted additive models in Data Envelopment Analysis with a distance function structure, introducing the Weighted Additive Distance Function and showing its main properties. | The weighted additive distance function |
S0377221716302260 | The transportation of biomedical samples is a key component of healthcare supply chains. The samples are collected, consolidated into cooler boxes, and then transported to be analyzed in a specialized laboratory. Since many hospitals and samples’ collection points are assigned to the same laboratory, it is important to manage the flow of samples arriving at the laboratory to avoid congestion. In other words, it is preferable to try to desynchronize the samples’ arrivals by managing the vehicles’ departure times and the routes ordering. We propose a mathematical model and a multi-start heuristic to minimize the route duration times and the maximum number of samples’ boxes arriving at the laboratory within a given time period. Based on real data, we demonstrate that both the model and the heuristic are very efficient in solving real-size instances. | A practical vehicle routing problem with desynchronized arrivals to depot |
S0377221716302284 | This paper considers a firm that faces a declining profit stream for its established product. The firm has the option to invest in a new technology with which it can produce an innovative product while having the option to exit at any point in time. In the presence of an exit option, earlier work determined the optimal timing to invest, where it was shown that higher uncertainty might accelerate investment timing. In the present paper the firm also decides on capacity. This extension leads to monotonicity, i.e. higher uncertainty delays investment timing. We also find that higher potential profitability of the innovative product market increases the incentive to invest earlier, where, however, we get the counterintuitive result that the firm invests in smaller capacity. Finally, if quantity has a smaller negative effect on price, the firm wants to acquire a larger capacity at a lower investment threshold. | How to escape a declining market: Capacity investment or Exit? |
S0377221716302302 | An analysis of the open queueing network MAP−(GI/∞)^K is presented in this paper. The MAP−(GI/∞)^K network implements Markov routing, general service time distribution, and an infinite number of servers at each node. Analysis is performed under the condition of a growing fundamental rate for the Markovian arrival process. It is shown that the stationary probability distribution of the number of customers at the nodes can be approximated by a multi-dimensional Gaussian distribution. Parameters of this distribution are presented in the paper. Numerical results validate the applicability of the obtained approximations under relevant conditions. The results of the approximations are applied to estimate the optimal number of servers for a network with finite-server nodes. In addition, an approximation of higher-order accuracy is derived. | Queueing network MAP−(GI/∞)^K with high-rate arrivals |
S0377221716302314 | In this work we consider dynamic pricing for the case of continuous replenishment. An essential ingredient in such a formulation is the use of a time-normalized revenue or profit function, in other words revenue or profit per unit time. This provides the incentive to sell many items in the shortest time (and of course at a high price). Moreover, for most firms what matters most is how much revenue or profit is achieved in a certain time frame, for example per year. This changes the problem qualitatively and methodologically. We develop a new dynamic pricing model for this formulation. We derive an analytical solution to the pricing problem in the form of a simple-to-solve ordinary differential equation (ODE). The trajectory of this ODE gives the optimal pricing curve. Unlike many of the models existing in the literature that rely on computationally demanding dynamic programming type solutions, our model is relatively simple to solve. Also, we apply the derived equation to two commonly used price-demand functions (the exponential and the power functions), and derive closed-form pricing curves for these functions. | Analytical solutions to the dynamic pricing problem for time-normalized revenue |
S037722171630234X | We propose a novel meta-approach to support collaborative multi-objective supplier selection and order allocation (SSOA) decisions by integrating multi-criteria decision analysis and linear programming (LP). The proposed model accounts for suppliers’ performance synergy effects within a hierarchical decision structure. It incorporates both heterogeneous objective data and subjective judgments of the decision makers (DMs) representing various groups with different voting powers (VPs). We maximize the total value of purchasing (TVP) by optimizing order quantity assignment to suppliers and taking into consideration their synergies encountered in different time horizons. We apply the proposed model to a contractor selection and order quantity assignment problem in an agricultural commodity trading (ACT) company. We maximize the strategic effectiveness of both the customers and the suppliers, minimize risks, increase the degree of cooperation between trading partners on all levels of supply chain integration, enhance transparent knowledge sharing and aggregation, and support collaborative decision making. | Modeling synergies in multi-criteria supplier selection and order allocation: An application to commodity trading |
S0377221716302363 | Military medical planners must consider the dispatching of aerial military medical evacuation (MEDEVAC) assets when preparing for and executing major combat operations. The launch authority seeks to dispatch MEDEVAC assets such that prioritized battlefield casualties are transported quickly and efficiently to nearby medical treatment facilities. We formulate a Markov decision process (MDP) model to examine the MEDEVAC dispatching problem. The large size of the problem instance motivating this research suggests that conventional exact dynamic programming algorithms are inappropriate. As such, we employ approximate dynamic programming (ADP) techniques to obtain high-quality dispatch policies relative to current practices. An approximate policy iteration algorithmic strategy is applied that utilizes least squares temporal differencing for policy evaluation. We construct a representative planning scenario based on contingency operations in northern Syria both to demonstrate the applicability of our MDP model and to examine the efficacy of our proposed ADP solution methodology. A designed computational experiment is conducted to determine how selected problem features and algorithmic features affect the quality of solutions attained by our ADP policies. Results indicate that the ADP policy outperforms the myopic policy (i.e., the default policy in practice) by up to nearly 31% with regard to a lifesaving performance metric in a baseline scenario. Moreover, the ADP policy provides decreased MEDEVAC response times and utilization rates. These results benefit military medical planners interested in the development and implementation of cogent MEDEVAC tactics, techniques, and procedures for application in combat situations with a high operations tempo. | Approximate dynamic programming for the dispatch of military medical evacuation assets |
S0377221716302375 | This paper studies a manufacturer marketing a product through a dual-channel supply chain, comprised of an online channel and a brick-and-mortar retail channel. In particular, we consider the pricing and channel priority strategies of the dual-channel supply chain in the presence of a supply shortage caused by random yields. To this end, we develop game theoretic models to investigate the price decisions and the channel priority strategy, as well as examine the impacts of channel coordination and the time sequence of decisions, i.e., ex-ante and ex-post production yield, on the channel priority strategy. When faced with a potential supply shortage, the manufacturer has two channel-allocation priority strategies: direct channel priority and retail channel priority. Our study shows that: (i) coordination of the dual-channel supply chain can alleviate the retailer's complaint of insufficient supply; (ii) counter-intuitively, the retail channel priority is adopted only when the total surplus in the retail channel is low in the decentralized setting; and (iii) the effect of the unit cost of sales of the direct channel on the motivation to use retail channel priority depends on the effect of channel priority on the demand. In addition, we find that the main results of pricing and channel priority strategies remain robust to the time sequence of the channel priority decision (yield ex-ante or ex-post). | Pricing and supply priority in a dual-channel supply chain |
S0377221716302399 | This paper considers periodic preventive maintenance policies for a deteriorating repairable system. On each failure the system is repaired and, at the planned times, it is periodically maintained to improve its reliability performance. Most periodic preventive maintenance (PM) models for repairable systems have been studied assuming that the failure process between two PMs follows the nonhomogeneous Poisson process (NHPP), implying minimal repair on each failure. However, in this paper, we assume that the failure process between two PMs follows a new counting process which is a generalized version of the NHPP. We develop two types of PM models and study detailed properties of the optimal policies which minimize the long-run expected cost rates. Numerical examples are also provided. | New stochastic models for preventive maintenance and maintenance optimization |
S0377221716302405 | We consider the admission control and inventory management problems of a single-component make-to-order production system. Components are purchased from suppliers in batches of fixed size subject to stochastic lead times and setup costs. A control policy specifies when a batch of components is purchased, and whether the demand for each MTO production is accepted upon arrival. We formulate the problem as a Markov decision process (MDP) model, and characterize the structure of optimal admission control and inventory replenishment policies. We show that a state dependent base-stock policy is optimal for the inventory replenishment, although the MDP value function is not necessarily convex. We also show that the optimal admission control can be identified as a lattice dependent policy. A sensitivity analysis is conducted to show how the optimal policy changes as a function of the system parameters. To effectively coordinate admission and inventory control decisions, we propose simple, implementable, and yet effective heuristic policies. Our extensive numerical results suggest that the proposed heuristics can greatly help firms to effectively coordinate their admission and inventory control activities. | Admission and inventory control of a single-component make-to-order production system with replenishment setup cost and lead time |
S0736584514000295 | Cable winding is an alternative technology to create stator windings in large electrical machines. Today such cable winding is performed manually, which is very repetitive, time-consuming and therefore also expensive. This paper presents the design, function and control system of a developed cable feeder tool for robotized stator cable winding. The presented tool was able to catch a cable inside a cable guiding system and to grab the cable between two wheels. One of these wheels was used to feed cable through the feeder. A control system was integrated in the tool to detect feeding slippage and to supervise the feeding force on the cable. Functions to calculate the cable feed length, to release the cable from the tool and for positional calibration of the stator to be wound were also integrated in the tool. In validating the function of the cable feeder tool, the stator of the linear generator used in the Wave Energy Converter generator developed at Uppsala University was used as an example. Through these experiments, it was shown that the developed robot tool design could be used to achieve automated robotized cable winding. These results also complied with the cycle time assumptions for automated cable winding from earlier research. Hence, it was theoretically indicated that the total winding cycle time for one Uppsala University Wave Energy Converter stator could be reduced from about 80 h for manual winding with four personnel to less than 20 h in a fully developed cable winding robot cell. The same robot tool and winding automation could also be used, with minor adjustments, for other stator designs. | A cable feeder tool for robotized cable winding |
S0736584514000465 | Recently, the number of high-rise buildings has increased along with the development of technology to cope with the increase in population. Because of this, much research on automatic building façade maintenance systems has been conducted to satisfy the increasing demand for façade maintenance. However, most research has focused on the mechanism and system composition, while working safety issues have not been sufficiently dealt with. This paper deals with the motion control issues of the building façade maintenance robot system, which is composed of a vertical robot and a horizontal robot moving along the rail of the façade. For the vertical robot, these issues include the safety of the docking process and the stability of vertical motion. During the docking process for the inter-floor circulation of the horizontal robot, shocks and positioning errors are generated due to increasing load. To solve this, the rail brake system is operated to suppress the shock during the docking process, and a re-leveling process is conducted to compensate for the gap, which is equal to the positioning error between the built-in transom rail of the robot and the transom rail of the building. In addition, much noise is generated from the surroundings, which significantly affects the motion of the vertical robot due to vibration. To enhance the motion stability of the vertical robot, vibration suppression control is developed in this paper, using state estimation that considers the dynamic properties of the wire rope. To verify the feasibility of this algorithm, a field experiment with the building façade maintenance robot is conducted. | Vertical motion control of building façade maintenance robot with built-in guide rail |
S0736584515000022 | Optimising performance of complex engineering artefacts, which are typically designed to have a useful life of several decades, becomes very difficult during the in-service phase if lessons learnt are not used properly. The authors argue that performance of long-lived complex artefacts can be improved if adequate product in-service data is fed back to the early stages of the product life cycle. This paper discusses an inclusive life cycle approach to optimising product performance by using knowledge and experience gained during the in-service phase. The problem is presented alongside a review of the literature in relevant subject areas. A framework for in-service knowledge management is then presented and operationalised through an industrial case study. The framework is developed from the point of view of an integrated product and service provider. The findings from the case study demonstrate how in-service knowledge can be captured, fed back and reused for the design and manufacture stages of the product lifecycle. This enables designers to learn from in-service product performance by informing subsequent designs with in-service knowledge, and consequently improving the through-life product performance. | A framework for optimising product performance through feedback and reuse of in-service experience |
S0736584515000666 | The requirement to increase inspection speeds for non-destructive testing (NDT) of composite aerospace parts is common to many manufacturers. The prevalence of complex curved surfaces in the industry provides motivation for the use of 6-axis robots in these inspections. The purpose of this paper is to present work undertaken for the development of a KUKA robot manipulator based automated NDT system. A new software solution is presented that enables flexible trajectory planning to be accomplished for the inspection of complex curved surfaces often encountered in engineering production. The techniques and issues associated with conventional manual inspection techniques and automated systems for the inspection of large complex surfaces were reviewed. This approach has directly influenced the development of a MATLAB toolbox targeted to NDT automation, capable of complex path planning, obstacle avoidance, and external synchronization between robots and associated external NDT systems. This paper highlights the advantages of this software over conventional off-line-programming approaches when applied to NDT measurements. An experimental validation of path trajectory generation, on a large and curved composite aerofoil component, is presented. Comparative metrology experiments were undertaken to evaluate the real path accuracy of the toolbox when inspecting a curved 0.5 m² and a 1.6 m² surface using a KUKA KR16 L6-2 robot. The results have shown that the deviation of the distance between the commanded TCPs and the feedback positions was within 2.7 mm. The variance of the standoff between the probe and the scanned surfaces was smaller than the variance obtainable via commercial path-planning software. Tool paths were generated directly on the triangular mesh imported from the CAD models of the inspected components without the need for an approximating analytical surface. By implementing full external control of the robotic hardware, it has been possible to synchronise the NDT data collection with positions at all points along the path, and our approach allows for the future development of additional functionality that is specific to NDT inspection problems. For the current NDT application, the deviations from CAD design and the requirements for both coarse and fine inspections, dependent on measured NDT data, demand flexibility in path planning beyond what is currently available from existing off-line robot programming software. | Robotic path planning for non-destructive testing – A custom MATLAB toolbox approach |
S0736584515000794 | Additive manufacturing methods such as three-dimensional printing (3DP) show great potential for the production of porous structures with complex internal and external geometries for bone tissue engineering applications. To optimize the 3DP manufacturing process and to produce 3D printed parts with the requisite architecture and strength, there was a need to fine-tune the printing parameters. The purpose of this study was to develop optimal processing parameters based on a design of experiments approach to evaluate the ability of 3DP for making calcium sulfate-based scaffold prototypes. The major printing parameters examined in this study were layer thickness, delay time of spreading the next layer, and build orientation of the specimens. Scaffold dimensional accuracy, porosity, and mechanical stiffness were systematically investigated using a design of experiments approach. Resulting macro-porous structures were also studied to evaluate the potential of 3DP technology for meeting the small-scale geometric requirements of bone scaffolds. Signal-to-noise ratio and analysis of variance (ANOVA) were employed to identify the important factors that influence optimal 3D printed part characteristics. The results showed that samples built using the minimum layer thickness (89 µm) and the x-direction of the build bed with a 300 ms delay time between spreading each layer yielded the highest quality scaffold prototypes; thus, these parameters are suggested for fabrication of an engineered bone tissue scaffold. Furthermore, this study identified orientation and new-layer spreading delay time as the most important factors influencing the dimensional accuracy, compressive strength, and porosity of the samples. | Effect of technical parameters on porous structure and strength of 3D printed calcium sulfate prototypes
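A brief illustration of the Taguchi-style analysis mentioned in the abstract above: the snippet below computes a larger-the-better signal-to-noise ratio, a plausible criterion for a response such as compressive strength. The abstract does not state which S/N formulation the authors used, and the replicate values are hypothetical.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio (in dB) for the replicate
    responses y of one parameter setting, e.g. compressive strength measurements."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicate strengths (MPa) for two print settings:
# a higher S/N indicates the more robust setting.
print(sn_larger_is_better([4.1, 4.4, 4.0]))
print(sn_larger_is_better([2.9, 3.3, 3.1]))
```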
S0736584516300606 | Large-scale offshore wind turbine blades need careful dimensional inspection at the production stage. This paper aims to establish an accurate measurement technique using Coherent Laser Radar technology combined with B-Spline point generation and alignment. Through varying the Degrees of Freedom (DoF), used for data point transformation, within the Spatial Analyser software package, erroneous inspection results generated by unconstrained blade flexing can be eradicated. The paper concludes that implementing a single B-Spline point generation and alignment method, whilst allowing transformation with DoF in X, Y and Rz, provides confidence to wind turbine blade manufacturers that inspection data is accurate. The experimental procedure described in this paper can also be applied to the precision inspection of other large-scale non-rigid, unconstrained objects. | Investigating the measurement of offshore wind turbine blades using coherent laser radar |
S0736585316300661 | Since 2009, the Croatian mobile market has been subject to various negative influences, including the economic crisis and a 6% telecommunication tax on mobile operator gross revenues. This paper took advantage of this tumultuous period to explore the effects of quality change on monthly price of cell phone plans in Croatia during 2009–2013. We used hedonic modeling to calculate hedonic price and quality indices over the period. The results indicate that cell phone plan prices slowly fell rather than rose, and that plan quality increased. The results are consistent with trends in other European countries with similar market conditions. | Hedonic modeling to explore the relationship of cell phone plan price and quality in Croatia |
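The hedonic modelling referred to in the abstract above can be sketched with the standard time-dummy approach: regress log price on plan attributes plus period dummies, then read the quality-adjusted price index off the exponentiated period coefficients. The sketch below uses invented plan attributes and figures, not the Croatian data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cell phone plans: monthly price plus quality attributes per period.
df = pd.DataFrame({
    "price":   [22, 26, 30, 20, 24, 28, 18, 22, 26],
    "period":  ["2009", "2009", "2009", "2011", "2011", "2011", "2013", "2013", "2013"],
    "minutes": [100, 200, 300, 150, 300, 450, 200, 400, 600],
    "data_mb": [0, 0, 50, 250, 500, 750, 500, 1000, 2000],
})

# Time-dummy hedonic regression: log(price) on attributes and period dummies.
model = smf.ols("np.log(price) ~ C(period) + minutes + data_mb", data=df).fit()

# Quality-adjusted price index for each period relative to the 2009 base.
for p in ["2011", "2013"]:
    print(p, np.exp(model.params[f"C(period)[T.{p}]"]))
```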
S0743731514001440 | Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. | A uniform approach for programming distributed heterogeneous computing systems |
S0743731514001518 | Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. Here, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for 2.56 M sequences on up to 8K cores show parallel efficiencies of ∼75–100%, a time-to-solution of 33 s, and a rate of ∼2.0 M alignments per second. | A work stealing based approach for enabling scalable optimal sequence homology detection
S0743731515001434 | The increasing failure rate in High Performance Computing encourages the investigation of fault tolerance mechanisms to guarantee the execution of an application in spite of node faults. This paper presents an automatic and scalable fault tolerant model designed to be transparent for applications and for message passing libraries. The model consists of detecting failures in the communication socket caused by a faulty node. In those cases, the affected processes are recovered on a healthy node and the connections are reestablished without losing data. The Redundant Array of Distributed Independent Controllers architecture proposes a decentralized model for all the tasks required in a fault tolerance system: protection, detection, recovery and masking. Decentralized algorithms allow the application to scale, which is a key property for current HPC systems. Three different rollback recovery protocols are defined and discussed with the aim of offering alternatives to reduce overhead when multicore systems are used. A prototype has been implemented to carry out an exhaustive experimental evaluation through Master/Worker and Single Program Multiple Data execution models. Multiple workloads and an increasing number of processes have been taken into account to compare the above mentioned protocols. The executions take place on two multicore Linux clusters with different socket communication libraries. | Fault tolerance at system level based on RADIC architecture
S0743731516000174 | MapReduce-like frameworks have achieved tremendous success for large-scale data processing in data centers. A key feature distinguishing MapReduce from previous parallel models is that it interleaves parallel and sequential computation. Past scheduling schemes for general parallel models, and especially their theoretical bounds, are therefore unlikely to apply directly to MapReduce. There are many recent studies on MapReduce job and task scheduling. These studies assume that the servers are assigned in advance. In current data centers, multiple MapReduce jobs of different importance levels run together. In this paper, we investigate a scheduling problem for MapReduce that takes server assignment into consideration as well. We formulate a MapReduce server-job organizer problem (MSJO) and show that it is NP-complete. We develop a 3-approximation algorithm and a fast heuristic design. Moreover, we further propose a novel fine-grained practical algorithm for the general MapReduce-like task scheduling problem. Finally, we evaluate our algorithms through both simulations and experiments on Amazon EC2 with a Hadoop implementation. The results confirm the superiority of our algorithms. | Joint scheduling of MapReduce jobs with servers: Performance bounds and experiments
S0743731516300077 | The paper presents a novel approach and algorithm with a mathematical formula for obtaining the exact optimal number of task resources for any workload running on Hadoop MapReduce. In the era of Big Data, energy efficiency has become an important issue for the ubiquitous Hadoop MapReduce framework. However, the question of what is the optimal number of tasks required for a job to get the most efficient performance from MapReduce still has no definite answer. Our algorithm for optimal resource provisioning allows users to identify the best trade-off point between performance and energy efficiency on the runtime elbow curve fitted from sampled executions on the target cluster for subsequent behavioral replication. Our verification and comparison show that the currently well-known rules of thumb for calculating the required number of reduce tasks are inaccurate and could lead to significant waste of computing resources and energy, with no further improvement in execution time. | Towards efficient resource provisioning in MapReduce
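The "elbow" of a runtime-versus-task-count curve mentioned in the abstract above can be located with a generic knee-detection heuristic, for instance by picking the sampled point farthest from the chord joining the first and last samples. The sketch below is only such a heuristic applied to hypothetical sampled executions; it is not the paper's closed-form formula.

```python
import numpy as np

def elbow_point(tasks, runtimes):
    """Return the task count at the 'elbow' of a runtime curve: the sampled point
    farthest (perpendicular distance) from the line joining the first and last samples."""
    x = np.asarray(tasks, dtype=float)
    y = np.asarray(runtimes, dtype=float)
    p1 = np.array([x[0], y[0]])
    d = np.array([x[-1], y[-1]]) - p1
    d = d / np.linalg.norm(d)
    rel = np.column_stack([x, y]) - p1
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])   # distance from the chord
    return tasks[int(np.argmax(dist))]

# Hypothetical sampled executions: diminishing returns as the task count grows.
print(elbow_point([4, 8, 16, 32, 64, 128], [1200, 640, 360, 250, 235, 230]))
```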
S0747563213003075 | This paper introduces a new perspective on information behavior in Web 2.0 environments, including the role of mobile access in bridging formal to informal learning. Kuhlthau’s (1991, 2007) Information Search Process (ISP) model is identified as a theoretical basis for exploring Information Seeking attitudes and behaviors, while social learning and literacy concepts of Vygotsky (1962, 1978), Bruner (1962, 1964) and Jenkins (2010) are identified as foundations for Information Sharing. The Guided Inquiry Spaces model (Maniotes, 2005) is proposed as an approach to bridging the student’s informal learning world and the curriculum-based teacher’s world. Research within this framework is operationalized through a recently validated Information and Communications Technology Learning (ICTL) survey instrument measuring learners’ preferences for self-expression, sharing, and knowledge acquisition interactions in technology-pervasive environments. Stepwise refinement of ICTL produced two reliable and valid psychometric scales, Information Sharing (alpha =.77) and Information Seeking (alpha =.72). Cross-validation with an established Mobile Learning Scale (Khaddage & Knezek, 2013) indicates that Information Sharing aligns significantly (p <.05) with Mobile Learning. Information Seeking, Information Sharing, and mobile access are presented as important, complementary components in the shift along the formal to informal learning continuum. Therefore, measures of these constructs can assist in understanding students’ preferences for 21st century learning. | Information Seeking, Information Sharing, and going mobile: Three bridges to informal learning
S0747563213004093 | Internet addiction is a rapidly growing field of research, receiving attention from researchers, journalists and policy makers. Despite much empirical data being collected and analyzed clear results and conclusions are surprisingly absent. This paper argues that conceptual issues and methodological shortcomings surrounding internet addiction research have made theoretical development difficult. An alternative model termed compensatory internet use is presented in an attempt to properly theorize the frequent assumption that people go online to escape real life issues or alleviate dysphoric moods and that this sometimes leads to negative outcomes. An empirical approach to studying compensatory internet use is suggested by combining the psychological literature on internet addiction with research on motivations for internet use. The theoretical argument is that by understanding how motivations mediate the relationship between psychosocial well-being and internet addiction, we can draw conclusions about how online activities may compensate for psychosocial problems. This could help explain why some people keep spending so much time online despite experiencing negative outcomes. There is also a methodological argument suggesting that in order to accomplish this, research needs to move away from a focus on direct effects models and consider mediation and interaction effects between psychosocial well-being and motivations in the context of internet addiction. This is key to further exploring the notion of internet use as a coping strategy; a proposition often mentioned but rarely investigated. | A conceptual and methodological critique of internet addiction research: Towards a model of compensatory internet use |
S0747563213004743 | This article reports new findings on the incidence of risk and the associated experience of harm reported by children and adolescents aged 11–16, regarding receipt of sexual messages on the internet (known popularly as sexting). Findings showed that the main predictors of the risk of seeing or receiving sexual messages online are age (older), psychological difficulties (higher), sensation seeking (higher) and risky online and offline behavior (higher). By contrast, the main predictors of harm resulting from receiving such messages were age (younger), gender (girls), psychological difficulties (higher) and sensation seeking (lower), with no effect for risky online or offline behavior. The findings suggest that accounts of internet-related risks should distinguish between predictors of risk and harm. Since some exposure to risk is necessary to build resilience, rather than aiming to reduce risk through policy and practical interventions, the findings can be used to more precisely target those who experience harm in order to reduce harm overall from internet use. | When adolescents receive sexual messages on the internet: Explaining experiences of risk and harm |
S0747563214001411 | Understanding Veterans’ narrated experience as they navigate a web-based intervention is important because it can inform the content, layout and format of these therapies. Using the “Think Aloud” method, twenty-five Veterans of military service expressed thoughts and reactions while navigating through a web-based Motivational Interviewing intervention. The intervention encouraged Veterans applying for Compensation for military-related psychiatric conditions to engage in work related activities. They then completed quantitative ratings of the site. Overall, the site was rated highly, and ratings were in the neutral range as to whether internet delivery of the material was preferable to in-person counseling. Comments revealed the complexity of adapting Motivational Interviewing for a web-based intervention. The intervention provided reflections and non-judgmental statements to Veterans accustomed to more directive statements, and receiving reflections from a computer-therapist evoked mixed responses. Veterans answered questions with intuitive formats quickly, and usually did not read directions concerning how to answer questions. Veterans felt frustrated by the lack of support throughout the Compensation process. They advocated for further development of this web-based intervention as a support for people awaiting their claim determination. | Developing a benefits counseling website for Veterans using Motivational Interviewing techniques |
S0747563214002271 | The comprehension of micro-worlds has always been the focus and the challenge of chemistry learning. Junior high school students’ imaginative abilities are not yet mature. As a result, they are not able to visualize microstructures correctly during the beginning stage of chemistry learning. This study targeted “the composition of substances” segment of junior high school chemistry classes and, furthermore, involved the design and development of a set of inquiry-based Augmented Reality learning tools. Students could control, combine and interact with a 3D model of micro-particles using markers and conduct a series of inquiry-based experiments. The AR tool was tested in practice at a junior high school in Shenzhen, China. Through data analysis and discussion, we conclude that (a) the AR tool has a significant supplemental learning effect as a computer-assisted learning tool; (b) the AR tool is more effective for low-achieving students than high-achieving ones; (c) students generally have positive attitudes toward this software; and (d) students’ learning attitudes are positively correlated with their evaluation of the software. | A case study of Augmented Reality simulation system application in a chemistry course |
S0747563214003227 | A field experiment examined whether increasing opportunities for face-to-face interaction while eliminating the use of screen-based media and communication tools improved nonverbal emotion–cue recognition in preteens. Fifty-one preteens spent five days at an overnight nature camp where television, computers and mobile phones were not allowed; this group was compared with school-based matched controls (n =54) that retained usual media practices. Both groups took pre- and post-tests that required participants to infer emotional states from photographs of facial expressions and videotaped scenes with verbal cues removed. Change scores for the two groups were compared using gender, ethnicity, media use, and age as covariates. After five days interacting face-to-face without the use of any screen-based media, preteens’ recognition of nonverbal emotion cues improved significantly more than that of the control group for both facial expressions and videotaped scenes. Implications are that the short-term effects of increased opportunities for social interaction, combined with time away from screen-based media and digital communication tools, improve a preteen’s understanding of nonverbal emotional cues. | Five days at outdoor education camp without screens improves preteen skills with nonverbal emotion cues
S0747563214004087 | An online survey (N =461) investigated how individuals’ interpersonal need and ability affect their motivations of Twitter use and how different motivations predict specific usage behavior. Based on the two competing views concerning the antecedents and consequences of online communication (social enhancement vs. social compensation), the joint effect of affiliative tendency and communication competence was hypothesized. For those high on affiliative tendency, communication competence positively predicted Twitter use for network expansion and negatively predicted more self-focused, intrapersonal Twitter use, but no such effect was found for less affiliative individuals. Those using Twitter for surveillance spent more time on Twitter and maintained a larger Twitter network, while those using Twitter for network expansion posted tweets and retweeted others’ posts more frequently. | How social is Twitter use? Affiliative tendency and communication competence as predictors |
S0747563214004324 | The promise of computer-mediated communication (CMC) to reduce intergroup prejudice has generated mixed results. Theories of CMC yield alternative and mutually exclusive explanations about mechanisms by which CMC fosters relationships online with potential to ameliorate prejudice. This research tests contact-hypothesis predictions and two CMC theories on multicultural, virtual groups who communicated during a yearlong online course focusing on educational technology. Groups included students from the three major Israeli education sectors—religious Jews, secular Jews, and Muslims—who completed pretest and posttest prejudice measures. Two sets of control subjects who did not participate in virtual groups provided comparative data. An interaction of the virtual groups experience×religious/cultural membership affected prejudice toward different religious/cultural target groups, by reducing prejudice toward the respective outgroups for whom the greatest initial enmity existed. Comparisons of virtual group participants to control subjects further support the influence of the online experience. Correlations between prejudice with group identification and with interpersonal measures differentiate which theoretical processes pertained. | Computer-mediated communication and the reduction of prejudice: A controlled longitudinal field experiment among Jews and Arabs in Israel |
S0747563214005366 | For hybrid merchants, who sell goods simultaneously through digital media and conventional channels, creating a price proposition is a major and controversial decision. We model the interaction between hybrid merchants and their customers within the context of an experience goods market; and we study how merchants and customers both learn from this interaction to make optimum decisions. The equilibrium solution of the proposed game shows that experience goods’ loyal customers tend to switch channels, make repeat purchases online, and avoid learning alternative value propositions. And the optimum strategy for hybrid merchants involves higher prices that rely on solid branding and knowledge of the clientele. The findings also yield important managerial implications. | Learning from customer interaction: How merchants create price-level propositions for experience goods in hybrid market environments |
S074756321400538X | For acquiring new skills or knowledge, contemporary learners frequently rely on the help of educational technologies supplementing human teachers as a learning aid. In the interaction with such systems, speech-based communication between the human user and the technical system has increasingly gained importance. Since spoken computer output can take on a variety of forms depending on the method of speech generation and the employment of prosodic modulations, the effects of such auditory variations on the user’s learning achievement require systematic investigation. The experiment reported here examined the specific effects of speech generation method and prosody of spoken system feedback in a computer-supported learning environment, and may serve as a validation tool for future investigations of the effects of spoken computer feedback on learning. Learning performance in a basic cognitive task was compared between users receiving pre-recorded, naturally spoken system feedback with neutral prosody, pre-recorded feedback with motivating (praising or blaming) prosody, or computer-synthesized feedback. The observed results provide empirical evidence that users of technical tutoring systems benefit from pre-recorded, naturally spoken feedback, and even more so from feedback with motivational prosodic modulations matching their performance success. Theoretical implications and considerations for future implementations of spoken feedback in computer-based educational systems are discussed. | Carrot and stick 2.0: The benefits of natural and motivational prosody in computer-assisted learning
S0747563214005652 | This study investigates the now-common action of looking at a mobile phone display, thereby offering insight into the present communication situation in an era in which the use of high-performance mobile phones has become ubiquitous. In this study, the action of looking at a mobile phone display is considered nonverbal behavior/communication. This study applies a basic, general model to elucidate the present situation of face-to-face communication in light of the increasing prevalence of social interaction via mobile phone use. The results derived from the model include mobile phone users’ increasing social power and an accumulation of potential discontent in relation to different interpretations. This study concludes that in an era of high-performance mobile phones, the social context in face-to-face communication can be influenced by the act of looking at a mobile phone display. | The action of looking at a mobile phone display as nonverbal behavior/communication: A theoretical perspective |
S0747563214006207 | This study investigates the characteristics of what users observe when visiting a media website, as well as the predicted impact on oneself, friends and others. The influence that this information has over their opinion verifies the existence of the Web Third-person effect (WTPE). Using an online survey (N =9150) across media websites, it was found that the variables that have a greater impact either on others or on our friends than on ourselves are: the number of users concurrently online on the same media website, the exact number of users having read each article on a media website, and the number of users having shared a news article on Facebook, Twitter, or other social networks. Moreover, age is a significant factor that explains the findings and is important to the effect. Additionally, factors that make user-generated messages more influential on others than on oneself were identified. Furthermore, the WTPE is absent when the news is perceived as more credible and when there is no particular mediated message, confirming the existing theory. | Web Third-person effect in structural aspects of the information on media websites
S0747563214007559 | Location-based social networks (LBSNs) are a recent phenomenon for sharing a presence at everyday locations with others and have the potential to give new insights into human behaviour. To date, due to barriers in data collection, there has been little research into how our personality relates to the categories of place that we visit. Using the Foursquare LBSN, we have released a web-based participatory application that examines the personality characteristics and checkin behaviour of volunteer Foursquare users. Over a four-month period, we examine the behaviour and the “Big Five” personality traits of 174 anonymous users who had collectively checked in 487,396 times at 119,746 venues. Significant correlations are found for Conscientiousness, Openness and Neuroticism. In contrast to some previous findings about online social networks, Conscientiousness is positively correlated with LBSN usage. Openness correlates mainly with location-based variables (average distance between venues visited, venue popularity, number of checkins at sociable venues). For Neuroticism, further negative correlations are found (number of venues visited, number of sociable venues visited). No correlations are found for the other personality traits, which is surprising for Extroversion. The study concludes that personality traits help to explain individual differences in LBSN usage and the type of places visited. | Personality and location-based social networks |
S0747563215000102 | The treadmill desk is a new human–computer interaction (HCI) setup intended to reduce the time workers spend sitting. As most workers will not choose to spend their entire workday walking, this study investigated the short-term delayed effect of treadmill desk usage. An experiment was conducted in which participants either sat or walked while they read a text and received emails. Afterward, all participants performed a task to evaluate their attention and memory. Behavioral, neurophysiological, and perceptual evidence showed that participants who walked had a short-term increase in memory and attention, indicating that the use of a treadmill desk has a delayed effect. These findings suggest that the treadmill desk, in addition to having health benefits for workers, can also be beneficial for businesses by enhancing workforce performance. | The delayed effect of treadmill desk usage on recall and attention |
S0747563215000527 | Games are important vehicles for learning and behavior change as long as players are motivated to continue playing. We study the impact of verbal feedback in stimulating player motivation and future play in a brain-training game. We conducted a 2 (feedback valence: positive vs. negative)×3 (feedback type: descriptive, comparative, evaluative) between-subjects experiment (N =157, 69.4% female, M age =32.07). After playing a brain-training game and receiving feedback, we tapped players’ need satisfaction, motivation and intention to play the game again. Results demonstrate that evaluative feedback increases, while comparative feedback decreases future game play. Furthermore, negative feedback decreases players’ feeling of competence, but also increases immediate game play. Positive feedback, in contrast, satisfies competence and autonomy needs, thereby boosting intrinsic motivation. Negative feedback thus motivates players to repair poor short-term performances, while positive feedback is more powerful in fostering long-term motivation and play. | How feedback boosts motivation and play in a brain-training game |
S0747563215001156 | This study investigated how a job seeker’s self-presentation affects recruiters’ hiring recommendations in online communities and which categories of self-presentation contribute to the fit perceptions underlying hiring recommendations. The study participants viewed potential candidates’ LinkedIn profiles and responded to questions regarding the argument quality and source credibility of their self-presentations, fit perceptions, and hiring recommendations. The results show that recruiters make inferences about job seekers’ person–job fit and person–organisation fit on the basis of argument quality in specific self-presentation categories, which in turn predict recruiters’ intentions to recommend job seekers for hiring. Although certain specific categories of self-presentation offering source credibility have positive associations with person–person (P–P) fit perception, there is a non-significant relationship between perceived P–P fit and hiring recommendations. | Self-presentation and hiring recommendations in online communities: Lessons from LinkedIn
S0747563215001417 | Emotion-aware applications are getting a lot of attention as a way to improve the user experience, thanks also to increasingly affordable Brain–Computer Interfaces (BCI). Thus, projects collecting emotion-related data are proliferating, such as social network sentiment analysis or tracking students’ engagement to reduce Massive Open Online Course (MOOC) dropout rates. All of them require a common way to represent emotions so that the data can be more easily integrated, shared and reused by applications that improve the user experience. Due to the complexity of this data, our proposal is to use rich semantic models based on ontologies. EmotionsOnto is a generic ontology for describing emotions and their detection and expression systems, taking contextual and multimodal elements into account. The ontology has been applied in the context of EmoCS, a project that collaboratively collects emotion common sense and models it using EmotionsOnto and other ontologies. Currently, emotion input is provided manually by users. However, experiments are being conducted to automatically measure users’ emotional states using Brain–Computer Interfaces. | Emotions ontology for collaborative modelling and learning of emotional responses
S074756321500268X | There has been much debate surrounding the potential benefits and costs of online interaction. The present research argues that engagement with online discussion forums can have underappreciated benefits for users’ well-being and engagement in offline civic action, and that identification with other online forum users plays a key role in this regard. Users of a variety of online discussion forums participated in this study. We hypothesized and found that participants who felt their expectations had been exceeded by the forum reported higher levels of forum identification. Identification, in turn, predicted their satisfaction with life and involvement in offline civic activities. Formal analyses confirmed that identification served as a mediator for both of these outcomes. Importantly, whether the forum concerned a stigmatized topic moderated certain of these relationships. Findings are discussed in the context of theoretical and applied implications. | Individual and social benefits of online discussion forums |
S0747563215003180 | Online syndicated text-based advertising is ubiquitous on news sites, blogs, personal websites, and search result pages. Until recently, a common distinguishing feature of these text-based advertisements has been their background color. Following intervention by the Federal Trade Commission (FTC), the format of these advertisements has undergone a subtle change in design and presentation. In three large-scale empirical experiments (N1 = 101, N2 = 84, N3 = 176), we investigate the effect of industry-standard advertising practices on click rates and demonstrate changes in user behavior when this familiar differentiator is modified. We find that displaying advertisement and content results against a differentiated background leads to significantly lower click rates. Our results demonstrate the strong link between background color differentiation and advertising, and reveal how alternative differentiation techniques influence user behavior. | Differentiation of online text-based advertising and the effect on users’ click behavior
S0747563215003489 | We study how categories form and develop over time in a sensemaking task by groups of students employing a collaborative tagging system. In line with distributed cognition theories, we look at both the tags students use and their strength of representation in memory. We hypothesize that categories get more differentiated over time as students learn, and that semantic stabilization on the group level (i.e. the convergence in the use of tags) mediates this relationship. Results of a field experiment that tested the impact of topic study duration on the specificity of tags confirms these hypotheses, although it was not study duration that produced this effect, but rather the effectiveness of the collaborative taxonomy the groups built. In the groups with higher levels of semantic stabilization, we found use of more specific tags and better representation in memory. We discuss these findings with regard to the important role of the information value of tags that would drive both the convergence on the group level as well as a shift to more specific levels of categorization. We also discuss the implication for cognitive science research by highlighting the importance of collaboratively built artefacts in the process of how knowledge is acquired, and implications for educational applications of collaborative tagging environments. | Dynamics of human categorization in a collaborative tagging system: How social processes of semantic stabilization shape individual sensemaking |
S074756321500360X | On Facebook, users are exposed to posts from both strong and weak ties. Even though several studies have examined the emotional consequences of using Facebook, less attention has been paid to the role of tie strength. This paper aims to explore the emotional outcomes of reading a post on Facebook and examine the role of tie strength in predicting happiness and envy. Two studies – one correlational, based on a sample of 207 American participants and the other experimental, based on a sample of 194 German participants – were conducted in 2014. In Study 2, envy was further distinguished into benign and malicious envy. Based on a multi-method approach, the results showed that positive emotions are more prevalent than negative emotions while browsing Facebook. Moreover, tie strength is positively associated with the feeling of happiness and benign envy, whereas malicious envy is independent of tie strength after reading a (positive) post on Facebook. | The emotional responses of browsing Facebook: Happiness, envy, and the role of tie strength |
S0747563215300546 | Technological development has influenced the ways in which learning and reading takes place, and a variety of technological tools now supplement and partly replace paper books. Previous studies have suggested that digital study media impair metacognitive monitoring and regulation (Ackerman & Goldsmith, 2011; Ackerman & Lauterman, 2012; Lauterman & Ackerman 2014). The aim of the current study was to explore the relationship between metacognitive experiences and learning for digital versus non-digital texts in a test situation where metacognitive experiences were assessed more broadly compared to previous studies, and where a larger number of potentially confounding factors were controlled for. Experiment 1 (N = 100) addressed the extent to which metacognitive monitoring accuracy for 4 factual texts was influenced by whether texts were presented on a paper sheet, a PC, an iPad, or a Kindle. Metacognitive experiences were measured by Predictions of Performance (PoP), Judgements of Learning (JoL), and Confidence Ratings (CR), and learning outcome was measured by recognition performance. Experiment 2 (N = 50) applied the same basic procedure, comparing a paper condition with a PC condition with the opportunity to take notes and highlight text. In both experiments, study media had no consistent effect on metacognitive calibration or resolution. The results give little support to previous claims that digital learning impairs metacognitive regulation. | The relationship between metacognitive experiences and learning: Is there a difference between digital and non-digital study media? |
S0747563215301655 | Multi-player online battle arena games (MOBAs) are large virtual environments requiring complex problem-solving and social interaction. We asked whether these games generate psychologically interesting data about the players themselves. Specifically, we asked whether user names, which are chosen by players outside of the game itself, predicted in-game behaviour. To examine this, we analysed a large anonymized dataset from a popular MOBA (‘League of Legends’) – by some measures the most popular game in the world. We find that user names contain two pieces of information that correlate with in-game social behaviour. Both player age (estimated from numerical sequences within name) and the presence of highly anti-social words are correlated with the valences of player/player interactions within the game. Our findings suggest that players' real-world characteristics influence behaviour and interpersonal interactions within online games. Anonymized statistics derived from such games may therefore be a valuable tool for studying psychological traits across global populations. | What's in a name? Ages and names predict the valence of social interactions in a massive online game |
S0747563215301850 | Concerns exist that Internet gambling may increase rates of gambling harms, yet research to date has found inconsistent results. Internet gamblers are a heterogeneous group, and considering this population as a whole may miss important differences between gamblers. The relationship of using mobile and other devices for online gambling, as compared to the use of computers, has not been considered. The true effect of Internet gambling on related problems, and differences between preferred modes for accessing online gambling, may be obscured by confounding personal and behavioural factors. This paper therefore uses the innovative approach of propensity score matching to estimate the consequences of gambling offline, or online through a computer, as compared to mobile or other supplementary devices, by accounting for the confounding effects of differences among groups of Australian gamblers (N = 4482). Gamblers who prefer to gamble online using computers had lower rates of gambling problems compared to those using mobile and supplementary devices. Individual life cycle was useful to differentiate between groups, indicating that age, marital status, and employment status should be considered together to predict how people gamble online. This is the first empirical study to suggest that the mode of accessing Internet gambling may be related to subsequent harms. | Is all Internet gambling equally problematic? Considering the relationship between mode of access and gambling problems
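Propensity score matching, as applied in the study above, can be summarised as fitting a model of "treatment" (here, preferred gambling mode) on observed covariates and then pairing each treated unit with the control unit closest on the estimated score. The sketch below is a generic 1:1 nearest-neighbour version with entirely hypothetical column names and synthetic data, not the study's specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_effect(df, treat_col, outcome_col, covariates):
    """1:1 nearest-neighbour propensity score matching; returns the mean
    outcome difference between treated units and their matched controls."""
    X = df[covariates].to_numpy(dtype=float)
    t = df[treat_col].to_numpy()
    y = df[outcome_col].to_numpy(dtype=float)
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]
    return float(np.mean(y[treated] - y[matched]))

# Synthetic illustration: device preference as 'treatment', a problem-gambling score as outcome.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"age": rng.integers(18, 70, n), "employed": rng.integers(0, 2, n),
                   "mobile_user": rng.integers(0, 2, n)})
df["problem_score"] = 2 * df["mobile_user"] + 0.05 * (70 - df["age"]) + rng.normal(0, 1, n)
print(psm_effect(df, "mobile_user", "problem_score", ["age", "employed"]))
```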
S0747563215301904 | In high-risk domains such as human space flight, professionals’ cognitive performance can be negatively affected by emotional responses to events and conditions in their working environment (e.g., isolation and health incidents). The COgnitive Performance and Error (COPE) model distinguishes effects of work content on cognitive task load and emotional state, and their effect on the professional's performance. This paper examines the relationships between these variables for a simulated Mars mission. Six volunteers (well-educated and -motivated men) were isolated for 520 days in a simulated spacecraft in which they had to execute a (virtual) mission to Mars. As part of this mission, several computer tasks were performed every other week. These tasks consisted of a negotiation game, a chat-based learning activity and an entertainment game. Before and after these tasks, and after post-task questionnaires, the participants rated their emotional state, consisting of arousal, valence and dominance, and their cognitive task load, consisting of level of information processing, time occupied and task-set switches. Results revealed significant differences between cognitive task load and emotional state levels when work content varied. Significant regression models were also found that could explain variation in task performance. These findings contribute to the validation of the COPE model and suggest that differences in appraisals for tasks may bring about different emotional states and task performances. | Work content influences on cognitive task load, emotional state and performance during a simulated 520-days' Mars mission
S0747563215302636 | This study examines (1) the associations between multiple types of child maltreatment and Internet addiction, and (2) the mediating effects of post-traumatic stress disorder (PTSD) on these associations. We collected data from a national proportionately stratified random sample of 6233 fourth-grade students in Taiwan in 2014. We conducted bivariate correlations and sets of multiple regression analyses to examine the associations between multiple types of maltreatment (5 types in total) and Internet addiction, and to identify the mediating role of PTSD. The results reveal that being male and experiencing abuse (psychological neglect, physical neglect, paternal physical violence, sexual violence) were associated with increased risk among children of developing PTSD and Internet addiction. Moreover, PTSD mediated the associations between multiple types of maltreatment (except maternal physical violence) and Internet addiction. This study demonstrates (1) the effects of multiple types of maltreatment on the PTSD and Internet addiction of children and (2) the importance of early prevention and intervention in addressing related public-health concerns. | Associations between child maltreatment, PTSD, and internet addiction among Taiwanese students |
S0747563215303101 | This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most “intersubjective effort” toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction. “Intersubjectivity has [ … ] to be taken for granted in order to be achieved.” – Rommetveit (1974, p. 56) | Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human |
S0747563216300036 | While cyberbullying among children and adolescents is a well-investigated phenomenon, few studies have centred on adults' exposure to cyberbullying in working life. Drawing on a large sample of 3371 respondents, this study investigates the prevalence of cyberbullying and face-to-face bullying in Swedish working life and its relation to gender and organisational position. Using a cyberbullying behaviour questionnaire (CBQ), the result shows that 9.7% of the respondents can be labelled as cyberbullied in accordance with Leymann's cut-off criterion. Fewer respondents, .7%, labelled themselves as cyberbullied and 3.5% labelled themselves as bullied face-to-face. While no significant relationships with gender or organisational position was found for individuals exposed to face-to-face bullying, this study showed that men to a higher degree than women were exposed to cyberbullying. Moreover, individuals with a supervisory position were more exposed to cyberbullying than individuals with no managerial responsibility. | Exploring cyberbullying and face-to-face bullying in working life – Prevalence, targets and expressions |
S0747563216300097 | In this paper we consider whether people with similar personality traits have a preference for common locations. Due to the difficulty in tracking and categorising the places that individuals choose to visit, this is largely unexplored. However, the recent popularity of location-based social networks (LBSNs) provides a means to gain new insight into this question through checkins - records that are made by LBSN users of their presence at specific street level locations. A web-based participatory survey was used to collect the personality traits and checkin behaviour of 174 anonymous users, who, through their common check-ins, formed a network with 5373 edges and an approximate edge density of 35%. We assess the degree of overlap in personality traits for users visiting common locations, as detected by user checkins. We find that people with similar high levels of conscientiousness, openness or agreeableness tended to have checked-in locations in common. The findings for extraverts were unexpected in that they did not provide evidence of individuals assorting at the same locations, contrary to predictions. Individuals high in neuroticism were in line with expectations, they did not tend to have locations in common. Unanticipated results concerning disagreeableness are of particular interest and suggest that different venue types and distinctive characteristics may act as attractors for people with particularly selective tendencies. These findings have important implications for decision-making and location. | Birds of a feather locate together? Foursquare checkins and personality homophily |
S0747563216300899 | Ambient awareness refers to the awareness social media users develop of their online network as a result of being constantly exposed to social information, such as microblogging updates. Although each individual bit of information can seem like random noise, its incessant reception can amass into a coherent representation of social others. Despite its growing popularity and important implications for social media research, ambient awareness on public social media has not been studied empirically. We provide evidence for the occurrence of ambient awareness and examine key questions related to its content and functions. A diverse sample of participants reported experiencing awareness, both as a general feeling towards their network as a whole, and as knowledge of individual members of the network, whom they had not met in real life. Our results indicate that ambient awareness can develop peripherally, from fragmented information and in the relative absence of extensive one-to-one communication. We report the effects of demographics, media use, and network variables and discuss the implications of ambient awareness for relational and informational processes online. | Ambient awareness: From random noise to digital closeness in online social networks
S0747563216301005 | This exploratory study drew upon the social compensation/social enhancement hypotheses and weak tie network theory to predict what kind of people supplement offline coping resources with online coping resources more than others. Using a large, representative survey the authors found that low self-esteem, lonely, and socially isolated individuals add more online resources to their mix of preferred coping strategies than their counterparts. These groups benefit from the fact that online coping resources are not as strongly entangled with online social ties as are offline coping resources with offline ties, and from the fact that online coping resources can sometimes be mobilized without any social interactions. In contrast to offline coping, the researchers also found that men mobilize more online coping resources than women. The authors discuss the implications of these findings in terms of the social compensation hypothesis and online weak tie networks. | Predictors of mobilizing online coping versus offline coping resources after negative life events |
S074756321630125X | Provocative messages targeting childhood obesity are a central means to increase problem awareness. But what happens when different online media platforms take up the campaign, comment, re-contextualize, and evaluate it? Relating to preliminary findings of persuasion research, we postulate that source credibility perceptions vary across types of online media platforms and contextualization of the message. Individual characteristics, in particular weight-related factors, are assumed to influence message effects. A 3 (media type: blog, online news, Facebook) × 2 (reinforcement versus impairment context) experimental design with students (N = 749) aged between 13 and 18 years was conducted. Results show an interaction between media type and argumentation for affective self-perceptions of weight. Self-relevance varies based on different source credibility perceptions. Overall, campaign re-contextualization of provocative messages may result in negative persuasion effects and needs to be considered in campaign development. | Source does matter: Contextual effects on online media-embedded health campaigns against childhood obesity |
S0747563216302059 | There is growing evidence that social media addiction is an evolving problem, particularly among adolescents. However, the absence of an instrument measuring social media addiction hinders further development of the research field. The present study, therefore, aimed to test the reliability and validity of a short and easy to administer Social Media Disorder (SMD) Scale that contains a clear diagnostic cut-off point to distinguish between disordered (i.e. addicted) and high-engaging non-disordered social media users. Three online surveys were conducted among a total of 2198 Dutch adolescents aged 10 to 17. The 9-item scale showed solid structural validity, appropriate internal consistency, good convergent and criterion validity, sufficient test-retest reliability, and satisfactory sensitivity and specificity. In sum, this study generated evidence that the short 9-item scale is a psychometrically sound and valid instrument to measure SMD. | The Social Media Disorder Scale
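The internal consistency reported for the 9-item SMD scale above is conventionally estimated with Cronbach's alpha. The snippet below shows the standard formula applied to hypothetical item responses; it is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical yes/no answers to nine items from five adolescents.
scores = np.array([[1, 1, 0, 1, 0, 1, 1, 0, 1],
                   [0, 0, 0, 1, 0, 0, 0, 0, 0],
                   [1, 1, 1, 1, 1, 1, 0, 1, 1],
                   [0, 0, 0, 0, 0, 0, 0, 0, 0],
                   [1, 0, 1, 1, 0, 1, 1, 0, 1]])
print(cronbach_alpha(scores))
```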
S0747563216302448 | The emerging retail culture is characterized by the extensive use of mobile technologies, high connectivity, ubiquitous computing and contactless technologies, which enable consumers to experience shopping differently. In fact, innovative mobile technologies provide new tools (apps) which are able to separate the moment of purchase from the moment of effective consumption, by allowing consumers to make purchases by mobile phone and collect them at home or at a store (a pick-up boutique or collection point), in addition to the traditional in-store service (purchase in the store and collect/consume in the store). The aim of this paper is to understand the extent to which mobile technologies have an impact on consumer behaviour, with emphasis on the drivers motivating consumers to adopt mobile shopping. To achieve this goal we used a qualitative approach involving 29 consumers in the Italian market, where mobile shopping is still at an early stage. The findings shed light on the extent to which consumers are moving from e-channels to mobile channels and take into account the effect of these technological innovations in retail settings from a cognitive standpoint, where studies are limited. The implications for researchers and practitioners are then discussed, with emphasis on retailers' need to develop new mobile service competences and to integrate and synthesize physical retail settings with mobile opportunities and functionalities. | The effect of mobile retailing on consumers' purchasing experiences: A dynamic perspective
S0747563216303089 | Virtual reality appears to be a promising and motivating platform for safely practicing and rehearsing social skills for children with Autism Spectrum Disorders (ASD). However, the literature to date is subject to limitations in elucidating the effectiveness of these virtual reality interventions. This study investigated the impact of Virtual Reality Social Cognition Training on enhancing social skills in children with ASD. Thirty children between the ages of 7 and 16 diagnosed with ASD completed ten 1-h sessions across 5 weeks. Three primary domains were measured pre-post: emotion recognition, social attribution, and attention and executive function. Results revealed improvements on measures of emotion recognition, social attribution, and executive function of analogical reasoning. These preliminary findings suggest that the use of a virtual reality platform offers an effective treatment option for improving social impairments commonly found in ASD. | Virtual Reality Social Cognition Training for children with high functioning autism
S0747563216303351 | Technology plays an almost ubiquitous role in contemporary British society. Despite this, we do not have a well-theorised understanding of the ways adolescent girls use digital devices in the context of their developing secure relationships with their families and friends. This study aims to address this gap in understanding. Fifteen young women based in the Midlands and from across the socio-economic spectrum participated between 2012 and 2013. Participants completed three research tools exploring technology-mediated attachment and relationships, and participated in a face-to-face interview. The findings suggest that it is possible for girls to develop attachments with others through, and with, technology; technology use brings people together and mediates relationships in a range of ways encapsulated by attachment functions. The study highlights the ongoing importance of parental and peer relationships by suggesting that technology can act as a means by which the positive and negative attributes of existing relationships can be amplified. | So why have you added me? Adolescent girls’ technology-mediated attachments and relationships |
S074756321630348X | The increasing convergence of the gambling and gaming industries has raised questions about the extent to which social casino game play may influence gambling. This study aimed to examine the relationship between social casino gaming and gambling through an online survey of 521 adults who played social casino games in the previous 12 months. Most social casino game users (71.2%) reported that these games had no impact on how much they gambled. However, 9.6% reported that their gambling overall had increased and 19.4% reported that they had gambled for money as a direct result of these games. Gambling as a direct result of social casino games was more common among males, younger users, those with higher levels of problem gambling severity and more involved social casino game users in terms of game play frequency and in-game payments. The most commonly reported reason for gambling as a result of playing social casino games was to win real money. As social casino games increased gambling for some users, this suggests that simulated gambling may influence actual gambling expenditure particularly amongst those already vulnerable to or affected by gambling problems. | Migration from social casino games to gambling: Motivations and characteristics of gamers who gamble |
S0747563216303521 | Introduction Web-based neuropsychological testing can be an important tool in meeting the increasing demands for neuropsychological assessment in the clinic and in large research studies. The primary aim of this study was to investigate practice effects and reliability of self-administered web-based neuropsychological tests in Memoro. Due to lack of consistent analysis and reporting of reliability in the literature, especially intraclass correlation coefficients (ICC), we highlight how using different ICC measures results in different estimates of reliability. Method 61 (31 females) participants (mean age 53.3 years) completed the Memoro tests twice with a median of 14 days between testing. Results Practice effects were detected for all cognitive measures (d = 0.32–0.61), most pronounced for memory measures. Reliability estimated using two-way random effects single measure absolute agreement ICC(2,1) were between 0.55 and 0.74. Two-way mixed effects average measure consistency ICC(3,2), ranged from 0.79 to 0.89. Reliability was highest for the processing speed task and lower for the memory tasks. Conclusions Memoro tests had test-retest reliability similar to that of traditional, computerized and web-based test batteries used clinically and in research. It is important to carefully choose and specify the ICC implemented, as ICC(2,1) and ICC(3,2) give different results and reflect reliability of different measures. | Initial assessment of reliability of a self-administered web-based neuropsychological test battery |
S0747563216303697 | This paper reports on qualitative insights generated from 46 semi-structured interviews with adults ranging in age from 18 to 70. It focuses on an online social behaviour, ‘fraping’, which involves the unauthorised alteration of content on a person’s social networking site (SNS) profile by a third party. Our exploratory research elucidates what constitutes a frape, who is involved in it, and what the social norms surrounding the activity are. We provide insights into how frape contributes to online sociality and the co-construction of online identity, and identify opportunities for further work in understanding the interplay between online social identities, social groups and social norms. | Fraping, social norms and online representations of self |
S088523081300017X | SAMAR is a system for subjectivity and sentiment analysis (SSA) for Arabic social media genres. Arabic is a morphologically rich language, which presents significant complexities for standard approaches to building SSA systems designed for the English language. Apart from the difficulties presented by processing social media genres, the Arabic language inherently has a high number of variable word forms, leading to data sparsity. In this context, we address the following 4 pertinent issues: how to best represent lexical information; whether standard features used for English are useful for Arabic; how to handle Arabic dialects; and, whether genre-specific features have a measurable impact on performance. Our results show that using either lemma or lexeme information is helpful, as well as using the two part-of-speech tagsets (RTS and ERTS). However, the results show that we need individualized solutions for each genre and task, but that lemmatization and the ERTS POS tagset are present in a majority of the settings. | SAMAR: Subjectivity and sentiment analysis for Arabic social media
S0885230813000181 | Recent research on English word sense subjectivity has shown that the subjective aspect of an entity is a characteristic that is better delineated at the sense level, instead of the traditional word level. In this paper, we seek to explore whether senses aligned across languages exhibit this trait consistently, and if this is the case, we investigate how this property can be leveraged in an automatic fashion. We first conduct a manual annotation study to gauge whether the subjectivity trait of a sense can be robustly transferred across language boundaries. An automatic framework is then introduced that is able to predict subjectivity labeling for unseen senses using either cross-lingual or multilingual training enhanced with bootstrapping. We show that the multilingual model consistently outperforms the cross-lingual one, with an accuracy of over 73% across all iterations. | Sense-level subjectivity in a multilingual setting |
S0885230813000193 | This paper studies the synthesis of speech over a wide vocal effort continuum and its perception in the presence of noise. Three types of speech are recorded and studied along the continuum: breathy, normal, and Lombard speech. Corresponding synthetic voices are created by training and adapting the statistical parametric speech synthesis system GlottHMM. Natural and synthetic speech along the continuum is assessed in listening tests that evaluate the intelligibility, quality, and suitability of speech in three different realistic multichannel noise conditions: silence, moderate street noise, and extreme street noise. The evaluation results show that the synthesized voices with varying vocal effort are rated similarly to their natural counterparts both in terms of intelligibility and suitability. | Synthesis and perception of breathy, normal, and Lombard speech in the presence of noise
S088523081300020X | Sentiment analysis is the natural language processing task dealing with sentiment detection and classification from texts. In recent years, due to the growth in the quantity and fast spreading of user-generated contents online and the impact such information has on events, people and companies worldwide, this task has been approached in an important body of research in the field. Despite different methods having been proposed for distinct types of text, the research community has concentrated less on developing methods for languages other than English. In the above-mentioned context, the present work studies the possibility to employ machine translation systems and supervised methods to build models able to detect and classify sentiment in languages for which fewer or no resources are available for this task when compared to English, stressing the impact of translation quality on sentiment classification performance. Our extensive evaluation scenarios show that machine translation systems are approaching a good level of maturity and that, in combination with appropriate machine learning algorithms and carefully chosen features, they can be used to build sentiment analysis systems whose performance is comparable to that obtained for English. | Comparative experiments using supervised learning and machine translation for multilingual sentiment analysis
S0885230813000211 | Post-filtering can be used in mobile communications to improve the quality and intelligibility of speech. Energy reallocation with a high-pass type filter has been shown to work effectively in improving the intelligibility of speech in difficult noise conditions. This paper introduces a post-filtering algorithm that adapts to the background noise level as well as to the fundamental frequency of the speaker and models the spectral effects observed in natural Lombard speech. The introduced method and another post-filtering technique were compared to unprocessed telephone speech in subjective listening tests in terms of intelligibility and quality. The results indicate that the proposed method outperforms the reference method in difficult noise conditions. | An adaptive post-filtering method producing an artificial Lombard-like effect for intelligibility enhancement of narrowband telephone speech |