title | abstract |
---|---|
Efficacy and safety of paliperidone palmitate in adult patients with acutely symptomatic schizophrenia: a randomized, double-blind, placebo-controlled, dose-response study. | This 13-week, double-blind study evaluated the efficacy and safety of the atypical antipsychotic paliperidone palmitate (recently approved in the United States) versus placebo administered as monthly gluteal injections (after two initial doses given 1 week apart) in acutely symptomatic patients with schizophrenia. Patients (N=388) were randomly assigned (1:1:1:1) to paliperidone palmitate 50, 100, or 150 mg eq. or placebo. As the 150 mg eq. dose was administered to fewer patients (n=30) than planned, meaningful and definitive conclusions cannot be drawn from the results of this group. The change from baseline in Positive and Negative Syndrome Scale total score at endpoint showed improvement in both paliperidone palmitate 50 and 100 mg eq. groups but was significant only in the 100 mg eq. group (P=0.019). The paliperidone palmitate 50 (P=0.004) and 100 mg eq. (P<0.001) groups showed significant improvement in the Personal and Social Performance score from baseline to endpoint versus placebo. Common adverse events (in ≥2% of patients in any group) more frequent with paliperidone palmitate 50 or 100 mg eq. than placebo (≥5% difference) were headache, vomiting, extremity pain, and injection site pain. Treatment with paliperidone palmitate (100 mg eq.) was efficacious and all doses tested were tolerable. |
Entity Extraction, Linking, Classification, and Tagging for Social Media: A Wikipedia-Based Approach | Many applications that process social data, such as tweets, must extract entities from tweets (e.g., “Obama” and “Hawaii” in “Obama went to Hawaii”), link them to entities in a knowledge base (e.g., Wikipedia), classify tweets into a set of predefined topics, and assign descriptive tags to tweets. Few solutions exist today to solve these problems for social data, and they are limited in important ways. Further, even though several industrial systems such as OpenCalais have been deployed to solve these problems for text data, little if any has been published about them, and it is unclear if any of the systems has been tailored for social media. In this paper we describe in depth an end-to-end industrial system that solves these problems for social data. The system has been developed and used heavily in the past three years, first at Kosmix, a startup, and later at WalmartLabs. We show how our system uses a Wikipedia-based global “real-time” knowledge base that is well suited for social data, how we interleave the tasks in a synergistic fashion, how we generate and use contexts and social signals to improve task accuracy, and how we scale the system to the entire Twitter firehose. We describe experiments that show that our system outperforms current approaches. Finally we describe applications of the system at Kosmix and WalmartLabs, and lessons learned. |
Collision Avoidance for Cooperative UAVs with Rolling Optimization Algorithm Based on Predictive State Space | Unmanned Aerial Vehicles (UAVs) have recently received notable attention because of their wide range of applications in urban civilian use and in warfare. With air traffic densities increasing, it is more and more important for UAVs to be able to predict and avoid collisions. The main goal of this research effort is to adjust real-time trajectories for cooperative UAVs to avoid collisions in three-dimensional airspace. To explore potential collisions, a predictive state space is utilized to represent the waypoints of UAVs in upcoming situations, which enables the proposed method to generate initial collision-free trajectories satisfying the necessary constraints in a short time. Further, a rolling optimization algorithm (ROA) improves the initial waypoints by minimizing the total trajectory distance. Several scenarios are illustrated to verify the proposed algorithm, and the results show that our algorithm can generate initial collision-free trajectories more efficiently than other methods in the common airspace. |
The Garnet user interface development environment: a proposal | The Garnet project aims to create a set of tools that will help user interface designers create, modify and maintain highly-interactive, graphical, direct manipulation user interfaces. These tools will form a "User Interface Development Environment" (UIDE), which is sometimes called a "User Interface Management System" (UIMS). Garnet takes a new approach to UIDEs by concentrating on a particular class of programs: those whose primary focus is creating and editing graphical objects. Garnet is composed of six major parts: an object-oriented graphics package, a constraint system, encapsulated input device handlers called "interactors," a user interface tool kit, user interface construction tools, and a "graphical editor shell" to help build editor-style programs. This document presents an overview of the approach that we propose to take for the Garnet project. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976, Amendment 20, under contract F33615-87-C-1499, monitored by the Avionics Laboratory, Air Force Wright Aeronautical Laboratories, Aeronautical Systems Division (AFSC), Wright-Patterson AFB, Ohio 45433-6543. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government. |
Learning Spatial-Temporal Varying Graphs with Applications to Climate Data Analysis | An important challenge in understanding climate change is to uncover the dependency relationships between various climate observations and forcing factors. Graphical lasso, a recently proposed ℓ1 penalty based structure learning algorithm, has been proven successful for learning underlying dependency structures for data drawn from a multivariate Gaussian distribution. However, climatological data often turn out to be non-Gaussian, e.g. cloud cover, precipitation, etc. In this paper, we examine nonparametric learning methods to address this challenge. In particular, we develop a methodology to learn dynamic graph structures from spatial-temporal data so that the graph structures at adjacent times or locations are similar. Experimental results demonstrate that our method not only recovers the underlying graph well but also captures the smooth variation properties on both synthetic data and climate data. Introduction: Climate change poses many critical socio-technological issues in the new century (IPCC 2007). An important challenge in understanding climate change is to uncover the dependency relationships between the various climate observations and forcing factors, which can be of either natural or anthropogenic (human) origin, e.g. to assess which parameters are mostly responsible for climate change. A graph is one of the most natural representations of dependency relationships among multiple variables. There have been extensive studies on learning graph structures that are invariant over time. In particular, ℓ1 penalty based learning algorithms, such as graphical lasso, have established themselves as one of the most promising techniques for structure learning, especially for data with inherent sparse graph structures (Meinshausen and Bühlmann 2006; Yuan and Lin 2007), and have been successfully applied in diverse areas, such as gene regulatory network discovery (Friedman 2004), social network analysis (Goldenberg and Moore 2005) and so on. Very recently, several methods have been proposed to model time-evolving graphs, with applications from gene regulatory network analysis (Song, Kolar, and Xing 2009) and financial data analysis (Xuan and Murphy 2007) to oil-production monitoring systems (Liu, Kalagnanam, and Johnsen 2009). Most of the existing methods assume that the data are drawn from a multivariate Gaussian distribution at each time stamp and then estimate the graphs on a chain of time. Compared with the existing approaches for graph structure learning, there are two major challenges associated with climate data: one is that meteorological or climatological data often turn out to be non-Gaussian, e.g. precipitation, cloud cover, and relative humidity, which follow bounded or skewed distributions (Boucharel et al. 2009); the other is the smooth variation property, i.e. the graph structures may vary over temporal and/or spatial scales, but the graphs at adjacent times or locations should be similar. In this paper, we present a nonparametric approach with kernel weighting techniques to address these two challenges for spatial-temporal data in climate applications.
Specifically, for a fixed time t and location s, we propose to adopt a two-stage procedure: (1) instead of blindly assuming that the data follow Gaussian or any other parametric distributions, we learn a set of marginal functions which can transform the original data into a space where they are normally distributed; (2) we construct the covariance matrix for t and s via a kernel weighted combination of all the data at different times and locations. Then the state-of-the-art graph structure learning algorithm, "graphical lasso" (Yuan and Lin 2007; Friedman, Hastie, and Tibshirani 2008), can be applied to uncover the underlying graph structure. It is worthwhile noting that our kernel weighting techniques are very flexible, i.e. they can account for smooth variation of many types (e.g. altitude) besides time and space. To the best of our knowledge, this is the first practical method for learning nonstationary graph structures without assuming any parametric underlying distributions. Preliminaries: We concern ourselves with the problem of learning graph structures which vary in both the temporal and spatial domains. At each time t and location s, we take n i.i.d. observations on p random variables, denoted $\{X^{ts}_i\}_{i=1}^{n}$, where each $X^{ts}_i := (X^{ts}_{i1}, \ldots, X^{ts}_{ip})^T \in \mathbb{R}^p$ is a p-dimensional vector. Taking climate data as an example, we may independently measure several factors (variables), such as temperature, precipitation and carbon dioxide (CO2), at each location and at different times in a year. Our goal is to explore the dependency relationships among these variables over time and locations. Markov Random Fields (MRFs) have been widely adopted for modeling dependency relationships (Kinderman and Snell 1980). For a fixed time and location, denote each observation as a p-dimensional random vector $X = (X_1, \ldots, X_p)$. We encode the structure of X with an undirected graph $G = (V, E)$, where each node u in the vertex set $V = \{v_1, \ldots, v_p\}$ corresponds to a component of X. The edge set encodes conditional independencies among the components of X. More precisely, the edge between (u, v) is excluded from E if and only if $X_u$ is conditionally independent of $X_v$ given the remaining variables $X_{V \setminus \{u,v\}} \equiv \{X_i, 1 \le i \le p, i \ne u, v\}$: $(u, v) \notin E \Leftrightarrow X_u \perp\!\!\!\perp X_v \mid X_{V \setminus \{u,v\}} \quad (1)$. A large body of literature assumes that X follows a multivariate Gaussian distribution $N(\mu, \Sigma)$ with mean vector $\mu$ and covariance matrix $\Sigma$. Let $\Omega = \Sigma^{-1}$ be the inverse of the covariance matrix (a.k.a. the precision matrix). One good property of multivariate Gaussian distributions is that $X_u \perp\!\!\!\perp X_v \mid X_{V \setminus \{u,v\}}$ if and only if $\Omega_{uv} = 0$ (Lauritzen 1996). Under the Gaussian assumption, we may deduce conditional independencies by estimating the inverse covariance matrix. In real-world applications, many variables are conditionally independent given the others; therefore, only a few essential edges should appear in the estimated graph. In other words, the estimated inverse covariance matrix $\hat{\Omega}$ should be sparse, with many zero elements. Inspired by the success of the "lasso" for linear models, Yuan and Lin proposed the "graphical lasso" to obtain a sparse $\hat{\Omega}$ by minimizing the negative log-likelihood with $\ell_1$ penalization on $\hat{\Omega}$ (Yuan and Lin 2007). More precisely, let $\{X_1, X_2, \ldots, X_n\}$ be n random samples from $N(\mu, \Sigma)$, where each $X_i \in \mathbb{R}^p$, and let $\hat{\Sigma}$ be the covariance matrix estimated by maximum likelihood. |
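The two-stage procedure described above lends itself to a short illustration. The following Python fragment is a minimal sketch, assuming a Gaussian kernel over time, a SciPy/scikit-learn backend, and illustrative function names; it is not the authors' implementation.

```python
# Minimal sketch of the two-stage procedure (kernel choice and function names are assumptions).
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import graphical_lasso

def gaussian_scores(X):
    """Stage 1: transform each column to normal scores (nonparametric marginal transform)."""
    n, p = X.shape
    Z = np.empty((n, p))
    for j in range(p):
        u = rankdata(X[:, j]) / (n + 1.0)   # ranks scaled into (0, 1)
        Z[:, j] = norm.ppf(u)               # Gaussianized column
    return Z

def kernel_weighted_graph(data_by_time, t, bandwidth=1.0, alpha=0.1):
    """Stage 2: kernel-weighted covariance around time t, then graphical lasso."""
    keys = sorted(data_by_time)                         # data_by_time: {time: (n x p) array}
    times = np.array(keys, dtype=float)
    w = np.exp(-0.5 * ((times - t) / bandwidth) ** 2)   # Gaussian kernel weights
    w /= w.sum()
    p = next(iter(data_by_time.values())).shape[1]
    S = np.zeros((p, p))
    for weight, key in zip(w, keys):
        Z = gaussian_scores(np.asarray(data_by_time[key], dtype=float))
        S += weight * np.cov(Z, rowvar=False)           # weighted combination of covariances
    _, precision = graphical_lasso(S, alpha=alpha)      # sparse inverse covariance
    return precision   # zero entries correspond to estimated conditional independencies
```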
Du Fu in the Poetry Standards (Shige) and the Origins of the Earliest Du Fu Commentary | This article explores the relation between the late Tang genre of poetry criticism known as shige — poetry standards or poetry models — and the earliest interlinear commentary on the Tang poet Du Fu. Representative passages from surviving shige that explicate Du Fu couplets are examined. Eleventh-century commentators of Du Fu applied metaphorical correspondences between natural objects and human moral and political qualities, a major feature of shige exegesis, to formulate political interpretations of selected, enigmatic Du Fu poems. These interpretations accorded with Song literati, especially Qingli and Yuanyou period, notions of political culture. The article concludes that this process aided in the creation of an image of Du Fu that projects him forward toward the Song half of the Tang-Song transition, from an aristocratic to a literati society. However, much of his greatest poetry looks back in lament toward the vanished glories of the Tang aristocratic world. |
Practical Linear Models for Large-Scale One-Class Collaborative Filtering | Collaborative filtering has emerged as the de facto approach to personalized recommendation problems. However, a scenario that has proven difficult in practice is the one-class collaborative filtering case (OC-CF), where one has examples of items that a user prefers, but no examples of items they do not prefer. In such cases, it is desirable to have recommendation algorithms that are personalized, learning-based, and highly scalable. Existing linear recommenders for OC-CF achieve good performance in benchmarking tasks, but they involve solving a large number of regression subproblems, limiting their applicability to large-scale problems. We show that it is possible to scale up linear recommenders to big data by learning an OC-CF model in a randomized low-dimensional embedding of the user-item interaction matrix. Our algorithm, Linear-FLow, achieves state-of-the-art performance in a comprehensive set of experiments on standard benchmarks as well as real data. |
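As a rough, hedged sketch of the idea summarized above, the fragment below fits a single low-rank item-item model in a randomized embedding of the user-item matrix instead of solving one regression per item; the rank, regularization, and closed-form solve are illustrative assumptions, not the paper's exact Linear-FLow formulation.

```python
# Hedged sketch: low-rank linear recommender in a randomized embedding
# (rank, regularization, and the closed-form solve are illustrative choices).
import numpy as np
from sklearn.utils.extmath import randomized_svd

def low_rank_linear_recommender(R, rank=50, reg=1.0):
    """R: dense (users x items) implicit-feedback matrix of 0/1 interactions."""
    _, _, Vt = randomized_svd(R, n_components=rank, random_state=0)
    V = Vt.T                               # item embedding, shape (items x rank)
    RV = R @ V                             # users projected into the embedding
    # Solve min_W ||R - RV W^T||_F^2 + reg ||W||_F^2 in closed form.
    A = RV.T @ RV + reg * np.eye(rank)
    B = RV.T @ R
    W = np.linalg.solve(A, B).T            # shape (items x rank)
    scores = RV @ W.T                      # predicted preference score for every user/item pair
    return scores
```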
Player Analysis, Input Frame, Depth Estimation, Scene Reconstruction | We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage. |
TESS: Temporal event sequence summarization | We suggest a novel method for clustering and exploratory analysis of temporal event sequence data (also known as categorical time series) based on three-dimensional data grid models. A data set of temporal event sequences can be represented as a data set of three-dimensional points, where each point is defined by three variables: a sequence identifier, a time value and an event value. Instantiating data grid models on the 3D points turns the problem into 3D co-clustering. The sequences are partitioned into clusters, the time variable is discretized into intervals and the events are partitioned into clusters. The cross-product of the univariate partitions forms a multivariate partition of the representation space, i.e., a grid of cells, and it also represents a nonparametric estimator of the joint distribution of the sequence, time and event dimensions. Thus, sequences are grouped together because they have a similar joint distribution of time and events, i.e., a similar distribution of events along the time dimension. The best data grid is computed using a parameter-free Bayesian model selection approach. We also suggest several criteria for exploiting the resulting grid through agglomerative hierarchies, for interpreting the clusters of sequences and for characterizing their components through insightful visualizations. Extensive experiments on both synthetic and real-world data sets demonstrate that our approach is efficient and effective, and discovers meaningful underlying patterns in sets of temporal event sequences. |
A Multi-tiered Recommender System Architecture for Supporting E-Commerce | Nowadays, many e-Commerce tools support customers with automatic recommendations. Many of them are centralized and lack efficiency and scalability, while others are distributed and require a computational overhead excessive for many devices. Moreover, none of the past proposals is open or allows new personalized terms to be introduced into the domain ontology. In this paper, we present a distributed recommender, based on a multi-tiered agent system, that tries to address the issues outlined above. The proposed system is able to generate very effective suggestions without an overly onerous computational task. We show that our system introduces significant advantages in terms of openness, privacy and security. |
Consequences of physical inactivity in chronic obstructive pulmonary disease. | The many health benefits of regular physical activity underline the importance of this topic, especially at a time when the prevalence of a sedentary lifestyle in the population is increasing. Physical activity levels are especially low in patients with chronic obstructive pulmonary disease (COPD). Regular physical activity and an active lifestyle have been shown to be positively associated with outcomes such as exercise capacity and health-related quality of life, and therefore could be beneficial for the individual COPD patient. An adequate level of physical activity needs to be integrated into daily life, and stimulation of physical activity when absent is important. This article aims to discuss in more detail the possible role of regular physical activity for a number of well-known outcome parameters in COPD. |
‘Institutional Thickness’: Local Governance and Economic Development in Birmingham, England | This article uses the concept of institutional thickness to describe key features of the local governance of economic development. For this purpose, a methodology for the empirical assessment of institutional thickness is developed and applied to the case of Birmingham, England. The results from this empirical analysis are threefold. First, they make it possible to draw some conclusions on the role that local governments can play to promote local economic development. Second, they suggest that institutional thickness is a useful organizing concept for analyses of the local governance of economic development. Finally, they demonstrate the value of a verifiable and replicable methodology for the detection and measurement of local institutional conditions and of governance arrangements. |
Cyber physical systems in the context of Industry 4.0 | We are currently experiencing the fourth Industrial Revolution in terms of cyber physical systems. These systems are industrial automation systems that enable many innovative functionalities through their networking and their access to the cyber world, thus changing our everyday lives significantly. In this context, new business models, work processes and development methods that are currently unimaginable will arise. These changes will also strongly influence society and people. Family life, globalization, markets, etc. will have to be redefined. However, Industry 4.0 also presents challenges for the development of cyber-physical systems, such as reliability, security and data protection. Following a brief introduction to Industry 4.0, this paper presents a prototypical application that demonstrates its essential aspects. |
Effective Techniques for Message Reduction and Load Balancing in Distributed Graph Computation | Massive graphs, such as online social networks and communication networks, have become common today. To efficiently analyze such large graphs, many distributed graph computing systems have been developed. These systems employ the "think like a vertex" programming paradigm, where a program proceeds in iterations and at each iteration, vertices exchange messages with each other. However, using Pregel's simple message passing mechanism, some vertices may send/receive significantly more messages than others due to either the high degree of these vertices or the logic of the algorithm used. This forms the communication bottleneck and leads to imbalanced workload among machines in the cluster. In this paper, we propose two effective message reduction techniques: (1) vertex mirroring with message combining, and (2) an additional request-respond API. These techniques not only reduce the total number of messages exchanged through the network, but also bound the number of messages sent/received by any single vertex. We theoretically analyze the effectiveness of our techniques, and implement them on top of our open-source Pregel implementation called Pregel+. Our experiments on various large real graphs demonstrate that our message reduction techniques significantly improve the performance of distributed graph computation. |
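For illustration only, the toy sketch below shows the message-combining idea in isolation: messages addressed to the same destination vertex are merged on the sending worker (here with an additive combiner) before crossing the network. It is a generic sketch, not Pregel+'s actual API.

```python
# Toy sketch of worker-side message combining (the additive combiner is an illustrative choice).
def combine_outgoing(messages, combine=lambda a, b: a + b):
    """messages: iterable of (destination_vertex, value) pairs produced by one worker."""
    combined = {}
    for dst, value in messages:
        combined[dst] = combine(combined[dst], value) if dst in combined else value
    return combined  # at most one message per destination leaves this worker
```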
Thermodynamics of liquid/liquid distribution | The thermodynamics of aliphatic and aromatic solute distribution between various polar and nonpolar liquid phases have been determined using a novel flow microcalorimetry approach for the enthalpy term, and conventional shake-flask procedures for the free-energy term. Data have been examined using enthalpy/entropy compensation analysis, and the origin of the thermodynamics found has been studied by examination of the corresponding enthalpies and free energies of solute solvation. |
A Survey on Transfer Learning | A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling effort. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. |
New strategy for the prenatal detection/exclusion of paternal cystic fibrosis mutations in maternal plasma. | BACKGROUND
Since the presence of fetal DNA was discovered in maternal blood, different investigations have focused on non-invasive prenatal diagnosis. The analysis of fetal DNA in maternal plasma may allow the diagnosis of fetuses at risk of cystic fibrosis (CF) without any risk of fetal loss. Here, we present a new strategy for the detection of fetal mutations causing CF in maternal plasma.
METHODS
We have used a mini-sequencing based method, the SNaPshot, for fetal genotyping of the paternal mutation in maternal blood from three pregnancies at risk of CF.
RESULTS
The paternal mutation was detected in the analysis of plasma samples from cases 1 and 3 but not in case 2. Results of a subsequent conventional molecular analysis of chorionic biopsies were in full agreement with those obtained from analysis of the plasma samples.
CONCLUSIONS
The knowledge about the inheritance of the paternal mutation in a fetus may avoid conventional prenatal diagnosis in some cases. The SNaPshot technique has been shown to be a sensitive and accurate method for the detection of fetal mutations in maternal plasma. Its ease of handling, rapidity and low cost make it appropriate for future routine clinical use in non-invasive prenatal diagnosis of cystic fibrosis. |
Classification of Passes in Football Matches Using Spatiotemporal Data | A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.
Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.
Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers. |
Mutual information-based SVM-RFE for diagnostic classification of digitized mammograms | Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues, while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. |
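A minimal sketch of a mutual information-weighted SVM-RFE loop is shown below, assuming scikit-learn; the blending weight and the one-feature-per-step elimination schedule are illustrative choices, not necessarily the paper's exact ranking criterion.

```python
# Hedged sketch: SVM-RFE whose ranking blends the linear-SVM weight magnitude with
# mutual information between each feature and the class label (beta is illustrative).
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import mutual_info_classif

def mi_svm_rfe(X, y, n_features_to_keep, beta=0.5):
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_features_to_keep:
        svm = SVC(kernel="linear", C=1.0).fit(X[:, remaining], y)
        w2 = np.square(svm.coef_).sum(axis=0)              # standard SVM-RFE criterion
        mi = mutual_info_classif(X[:, remaining], y)       # relevance to the benign/malignant label
        score = beta * w2 / (w2.max() + 1e-12) + (1 - beta) * mi / (mi.max() + 1e-12)
        remaining.pop(int(np.argmin(score)))               # eliminate the weakest feature
    return remaining                                       # indices of the selected features
```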
AIR SCORE ASSESSMENT FOR ACUTE APPENDICITIS | BACKGROUND
Acute appendicitis is the most common cause of acute abdomen. Approximately 7% of the population will be affected by this condition during their lifetime. The development of the AIR score may contribute to diagnosis by associating simple clinical criteria with two simple laboratory tests.
AIM
To evaluate the score AIR (Appendicitis Inflammatory Response score) as a tool for the diagnosis and prediction of severity of acute appendicitis.
METHOD
All patients undergoing appendectomy were evaluated. Of 273 patients, 126 were excluded according to the exclusion criteria. All remaining patients were assessed with the AIR score.
RESULTS
The C-reactive protein value and the percentage of segmented leukocytes in the blood count showed a direct relationship with the phase of acute appendicitis.
CONCLUSION
As for the laboratory criteria, serum C-reactive protein and the percentage of polymorphonuclear leukocytes in the blood count were important for diagnosis and disease stratification. |
Primary stability and histomorphometric bone-implant contact of self-drilling and self-tapping orthodontic microimplants. | INTRODUCTION
The aim of this study was to evaluate the primary stability and the histomorphometric measurements of self-drilling and self-tapping orthodontic microimplants and the correlations between factors related to host, implant, and measuring technique.
METHODS
Seventy-two self-drilling and self-tapping implants were placed into bovine iliac crest blocks after computed tomography assessments. Insertion torque values, subjective assessments of stability, and Periotest (Medizintechnik Gulden, Modautal, Germany) measurements were performed for each implant. Twelve specimens of each group were assigned to histologic and histomorphometric assessments.
RESULTS
The differences between insertion torque values, most Periotest values, and subjective assessments of stability scores were insignificant (P >0.05). The bone-implant contact percentage of the self-drilling group (87.60%) was higher than that of the self-tapping group (80.73%) (P <0.05). Positive correlations were found between insertion torque value, cortical bone thickness, and density in both groups (P <0.05). Negative correlations between insertion torque values and Periotest values were mostly observed in the self-drilling group (P <0.05). Positive correlations were found between bone-implant contact percentages, cortical bone densities, and insertion torque values in both groups (P <0.05). The differences between insertion torque values and corresponding subjective assessments of stability scores were different in both groups (P <0.05).
CONCLUSIONS
The differences in insertion torque values, Periotest values, and subjective assessments of stability scores of self-drilling and self-tapping implants were insignificant. Self-drilling implants had higher bone-implant contact percentages than did self-tapping implants. Significant correlations were found between parameters influencing the primary stability of the implants. |
An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets | Crowdsourced labor markets represent a powerful new paradigm for accomplishing work. Understanding the motivating factors that lead to high quality work could have significant benefits. However, researchers have so far found that motivating factors such as increased monetary reward generally increase workers’ willingness to accept a task or the speed at which a task is completed, but do not improve the quality of the work. We hypothesize that factors that increase the intrinsic motivation of a task – such as framing a task as helping others – may succeed in improving output quality where extrinsic motivators such as increased pay do not. In this paper we present an experiment testing this hypothesis along with a novel experimental design that enables controlled experimentation with intrinsic and extrinsic motivators in Amazon’s Mechanical Turk, a popular crowdsourcing task market. Results suggest that intrinsic motivation can indeed improve the quality of workers’ output, confirming our hypothesis. Furthermore, we find a synergistic interaction between intrinsic and extrinsic motivators that runs contrary to previous literature suggesting “crowding out” effects. Our results have significant practical and theoretical implications for crowd work. |
Estimation of V̇O2max from the ratio between HRmax and HRrest – the Heart Rate Ratio Method | The effects of training and/or ageing upon maximal oxygen uptake (V̇O2max) and heart rate values at rest (HRrest) and maximal exercise (HRmax), respectively, suggest a relationship between V̇O2max and the HRmax-to-HRrest ratio which may be of use for indirect testing of V̇O2max. Fick principle calculations supplemented by literature data on maximum-to-rest ratios for stroke volume and the arterio-venous O2 difference suggest that the conversion factor between mass-specific V̇O2max (ml·min−1·kg−1) and HRmax·HRrest−1 is ~15. In the study we experimentally examined this relationship and evaluated its potential for prediction of V̇O2max. V̇O2max was measured in 46 well-trained men (age 21–51 years) during a treadmill protocol. A subgroup (n=10) demonstrated that the proportionality factor between HRmax·HRrest−1 and mass-specific V̇O2max was 15.3 (0.7) ml·min−1·kg−1. Using this value, V̇O2max in the remaining 36 individuals could be estimated with an SEE of 0.21 l·min−1 or 2.7 ml·min−1·kg−1 (~4.5%). This compares favourably with other common indirect tests. When replacing measured HRmax with an age-predicted one, SEE was 0.37 l·min−1 and 4.7 ml·min−1·kg−1 (~7.8%), which is still comparable with other indirect tests. We conclude that the HRmax-to-HRrest ratio may provide a tool for estimation of V̇O2max in well-trained men. The applicability of the test principle in relation to other groups will have to await direct validation. V̇O2max can be estimated indirectly from the measured HRmax-to-HRrest ratio with an accuracy that compares favourably with that of other common indirect tests. The results also suggest that the test may be of use for V̇O2max estimation based on resting measurements alone. |
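Since the abstract reports the proportionality factor explicitly, the estimate is easy to reproduce; the snippet below is a worked example of the Heart Rate Ratio Method using the reported factor of 15.3 ml·min−1·kg−1 (the subject values in the example are hypothetical).

```python
# Worked example of the Heart Rate Ratio Method (example subject values are hypothetical).
def estimate_vo2max(hr_max, hr_rest, body_mass_kg, factor=15.3):
    """Return (mass-specific VO2max in ml/min/kg, absolute VO2max in l/min)."""
    vo2max_rel = factor * hr_max / hr_rest            # ml per minute per kg
    vo2max_abs = vo2max_rel * body_mass_kg / 1000.0   # litres per minute
    return vo2max_rel, vo2max_abs

# HRmax 190 bpm, HRrest 50 bpm, 75 kg -> about 58 ml/min/kg, or about 4.4 l/min
print(estimate_vo2max(190, 50, 75))
```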
Internal Validation of Predictive Logistic Regression Models for Decision-Making in Wildlife Management | Predictive logistic regression models are commonly used to make informed decisions related to wildlife management and conservation, such as predicting favourable wildlife habitat for land conservation objectives and predicting vital rates for use in population models. Frequently, models are developed for use in the same population from which sample data were obtained, and thus, they are intended for internal use within the same population. Before predictions from logistic regression models are used to make management decisions, predictive ability should be validated. We describe a process for conducting an internal model validation, and we illustrate the process of internal validation using logistic regression models for predicting the number of successfully breeding wolf packs in six areas in the US northern Rocky Mountains. We start by defining the major components of accuracy for binary predictions as calibration and discrimination, and we describe methods for quantifying the calibration and discrimination abilities of a logistic regression model. We also describe methods for correcting problems of calibration and future predictive accuracy in a logistic regression model. We then show how bootstrap simulations can be used to obtain unbiased estimates of prediction accuracy when models are calibrated and evaluated within the same population from which they were developed. We also show how bootstrapping can be used to assess coverage rates and recalibrate the endpoints of confidence intervals for predictions from a logistic regression model, to achieve nominal coverage rates. Using the data on successfully breeding wolf packs in the northern Rocky Mountains, we validate that predictions from a model developed with data specific to each of six analysis areas are better calibrated to each population than a global model developed using all data simultaneously. We then use shrinkage of model coefficients to improve calibration and future predictive accuracy for the area-specific model, and recalibrate confidence interval endpoints to provide better coverage properties. Following this validation, managers can be confident that logistic regression predictions will be reliable in this situation, and thus that management decisions will be based on accurate predictions. |
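For concreteness, the sketch below applies the bootstrap optimism correction described above to one discrimination measure (AUC); calibration measures such as the calibration slope can be corrected in the same way. The use of scikit-learn, NumPy arrays, and 200 replicates are assumptions, not the authors' exact procedure.

```python
# Hedged sketch: bootstrap optimism correction for the apparent AUC of a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """X, y: NumPy arrays drawn from the population in which the model will be used."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism, n = [], len(y)
    while len(optimism) < n_boot:
        idx = rng.integers(0, n, n)                      # resample with replacement
        if len(np.unique(y[idx])) < 2:                   # AUC needs both outcome classes
            continue
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)             # how much the refit flatters itself
    return apparent - float(np.mean(optimism))           # optimism-corrected estimate
```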
Mining of Massive Datasets | The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike |
Influence Factors of Understanding Business Process Models | The increasing utilization of business process models both in business analysis and information systems development raises several issues regarding quality measures. In this context, this paper discusses understandability as a particular quality aspect and its connection with personal, model, and content related factors. We use an online survey to explore the ability of the model reader to draw correct conclusions from a set of process models. For the first group of participants we used models with abstract activity labels (e.g. A, B, C) while the second group received the same models with illustrative labels such as “check credit limit”. The results suggest that all three categories indeed have an impact on understandability. |
Empirical vulnerability analysis of automated smart contracts security testing on blockchains | The emerging blockchain technology supports a decentralized computing paradigm shift and is a rapidly approaching phenomenon. While blockchain is thought of primarily as the basis of Bitcoin, its application has grown far beyond cryptocurrencies due to the introduction of smart contracts. Smart contracts are self-enforcing pieces of software, which reside and run over a hosting blockchain. Using blockchain-based smart contracts for secure and transparent management to govern interactions (authentication, connection, and transaction) in Internet-enabled environments, mostly IoT, is a niche area of research and practice. However, writing trustworthy and safe smart contracts can be tremendously challenging because of the complicated semantics of the underlying domain-specific languages and their testability. There have been high-profile incidents indicating that blockchain smart contracts can contain various code-security vulnerabilities, instigating financial harm. When it comes to the security of smart contracts, developers writing the contracts should be capable of testing their code, to diagnose security vulnerabilities, before deploying them to the immutable environments on blockchains. However, there are only a handful of security testing tools for smart contracts. This implies that the existing research on automatic smart contracts security testing is not adequate and remains in a very early stage of infancy. With a specific goal to more readily realize the application of blockchain smart contracts in security and privacy, we should first understand their vulnerabilities before widespread implementation. Accordingly, the goal of this paper is to carry out a far-reaching experimental assessment of current static smart contract security testing tools, for the most widely used blockchain, Ethereum, and its domain-specific programming language, Solidity, to provide the first body of knowledge for creating more secure blockchain-based software. |
Double ridged horn antenna designs for wideband applications | In this paper, different double ridged horn antenna (DRHA) designs are investigated for wideband applications. A classic design of 1–18 GHz DRHA with exponential ridges is modelled and the antenna pattern deficiencies are detected at frequencies above 12 GHz. The antenna pattern is optimized by modification of the antenna structure. However, the impedance matching is affected and the VSWR is increased at the frequencies below 2 GHz. The matching problem can be resolved by adding lossy materials to the back cavity of antenna. We have shown reduction of the antenna efficiency by 15% over the whole frequency range, except at the lower frequencies. |
Molecular marker-assisted breeding options for maize improvement in Asia | Maize is one of the most important food and feed crops in Asia, and is a source of income for several million farmers. Despite impressive progress made in the last few decades through conventional breeding in the “Asia-7” (China, India, Indonesia, Nepal, Philippines, Thailand, and Vietnam), average maize yields remain low and the demand is expected to increasingly exceed the production in the coming years. Molecular marker-assisted breeding is accelerating yield gains in USA and elsewhere, and offers tremendous potential for enhancing the productivity and value of Asian maize germplasm. We discuss the importance of such efforts in meeting the growing demand for maize in Asia, and provide examples of the recent use of molecular markers with respect to (i) DNA fingerprinting and genetic diversity analysis of maize germplasm (inbreds and landraces/OPVs), (ii) QTL analysis of important biotic and abiotic stresses, and (iii) marker-assisted selection (MAS) for maize improvement. We also highlight the constraints faced by research institutions wishing to adopt the available and emerging molecular technologies, and conclude that innovative models for resource-pooling and intellectual-property-respecting partnerships will be required for enhancing the level and scope of molecular marker-assisted breeding for maize improvement in Asia. Scientists must ensure that the tools of molecular marker-assisted breeding are focused on developing commercially viable cultivars, improved to ameliorate the most important constraints to maize production in Asia. |
Oxidative damage to collagen and related substrates by metal ion/hydrogen peroxide systems: random attack or site-specific damage? | Degradation of collagen by oxidant species may play an important role in the progression of rheumatoid arthritis. Whilst the overall effects of this process are reasonably well defined, little is known about the sites of attack, the nature of the intermediates, or the mechanism(s) of degradation. In this study electron paramagnetic resonance spectroscopy with spin trapping has been used to identify radicals formed on collagen and related materials by metal ion-H2O2 mixtures. Attack of the hydroxyl radical, from a Fe(II)-H2O2 redox couple, on collagen peptides gave signals from both side chain (.CHR'R"), and alpha-carbon[.C(R)(NH-)CO-,R = side-chain]radicals. Reaction with collagen gave both broad anisotropic signals, from high-molecular-weight protein-derived radicals, and isotropic signals from mobile species. The latter may be low-molecular-weight fragments, or mobile side-chain species; these signals are similar to those from the alpha-carbon site of peptides and the side-chain of lysine. Enzymatic digestion of the large, protein-derived, species releases similar low-molecular-weight adducts. The metal ion employed has a dramatic effect on the species observed. With Cu(I)-H2O2 or Cu(II)-H2O2 instead of Fe(II)-H2O2, evidence has been obtained for: i) altered sites of attack and fragmentation, ii) C-terminal decarboxylation, and iii) hydrogen abstraction at N-terminal alpha-carbon sites. This altered behaviour is believed to be due to the binding of copper ions to some substrates and hence site-specific damage. This has been confirmed in some cases by electron paramagnetic resonance studies of the Cu(II) ions. |
Linear scaling electronic structure methods in chemistry and physics | Scientists have known the nonrelativistic equations of quantum mechanics since 1926, when Austrian physicist Erwin Schrödinger published a series of papers on quantum mechanics. As Paul Dirac pointed out in 1929, questions in quantum mechanics are in principle just questions in applied mathematics. In practice, however, solving these equations has proved challenging. In spite of the impressive computer power at our disposal, solving the basic equation of quantum mechanics—the many-electron Schrödinger equation, which rules the quantum mechanical behavior of molecules and materials at the atomic level and determines their basic properties—remains a difficult task, and will require the interplay of physics, chemistry, mathematics, and computational science. Developing new methods for electronic structure calculations is more than just developing new algorithms: it requires a deep physical and chemical understanding of many-electron systems. Combining this understanding with modern mathematical concepts leads to algorithms that exploit the peculiarities of electronic systems to yield powerful new electronic structure methods. Adapting these methods for modern computer architectures will result in powerful programs to aid the research of many scientists. In this drive for better methods, algorithms in which computing time increases linearly with respect to the number of atoms in the system are the ultimate goal. Most physical quantities are extensive—that is, they grow linearly with system size. We might therefore expect that the computational effort will grow linearly with system size as well. An even slower increase in computing time is certainly not possible unless we ignore the basic physics of the electronic system. In this article, we review the physical principles and algorithms behind the quest for electronic structure computational methods that scale linearly with respect to system size. |
Reinforcement learning: The Good, The Bad and The Ugly | Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters. |
Stronger Baselines for Trustable Results in Neural Machine Translation | Interest in neural machine translation has grown rapidly as its effectiveness has been demonstrated across language and data scenarios. New research regularly introduces architectural and algorithmic improvements that lead to significant gains over “vanilla” NMT implementations. However, these new techniques are rarely evaluated in the context of previously published techniques, specifically those that are widely used in state-of-the-art production and shared-task systems. As a result, it is often difficult to determine whether improvements from research will carry over to systems deployed for real-world use. In this work, we recommend three specific methods that are relatively easy to implement and result in much stronger experimental systems. Beyond reporting significantly higher BLEU scores, we conduct an in-depth analysis of where improvements originate and what inherent weaknesses of basic NMT models are being addressed. We then compare the relative gains afforded by several other techniques proposed in the literature when starting with vanilla systems versus our stronger baselines, showing that experimental conclusions may change depending on the baseline chosen. This indicates that choosing a strong baseline is crucial for reporting reliable experimental results. |
Debriefing after failed paediatric resuscitation: a survey of current UK practice. | OBJECTIVES
Debriefing is a form of psychological "first aid" with origins in the military. It moved into the spotlight in 1983, when Mitchell described the technique of critical incident stress debriefing. To date little work has been carried out relating to the effectiveness of debriefing hospital staff after critical incidents. The aim of this study was to survey current UK practice in order to develop some "best practice" guidelines.
METHODS
This study was a descriptive evaluation based on a structured questionnaire survey of 180 lead paediatric and emergency medicine consultants and nurses, selected from 50 UK trusts. Questions collected data about trust policy and events and also about individuals' personal experience of debrief. Free text comments were analyzed using the framework method described for qualitative data.
RESULTS
Overall, the response rate was 80%. 62% said a debrief would occur most of the time. 85% reported that the main aim was to resolve both medical and psychological and emotional issues. Nearly all involve both doctors and nurses (88%); in over half (62%) other healthcare workers would be invited, e.g., paramedics, students. Sessions are usually led by someone who was involved in the resuscitation attempt (76%). This was a doctor in 80%, but only 18% of responders said that a specifically trained person had led the session. Individuals' psychological issues would be discussed further on a one-to-one basis and the person directed to appropriate agencies. Any strategic working problems highlighted would be discussed with a senior member of staff and resolved via clinical governance pathways.
CONCLUSIONS
Little is currently known about the benefits of debriefing hospital staff after critical incidents such as failed resuscitation. Debriefing is, however, widely practised and the results of this study have been used to formulate some best practice guidelines while awaiting evidence from further studies. |
Managing the requirements flow from strategy to release in large-scale agile development: a case study at Ericsson | In a large organization, informal communication and simple backlogs are not sufficient for the management of requirements and development work. Many large organizations are struggling to successfully adopt agile methods, but there is still little scientific knowledge on requirements management in large-scale agile development organizations. We present an in-depth study of an Ericsson telecommunications node development organization which employs a large scale agile method to develop telecommunications system software. We describe how the requirements flow from strategy to release, and related benefits and problems. Data was collected by 43 interviews, which were analyzed qualitatively. The requirements management was done in three different processes, each of which had a different process model, purpose and planning horizon. The release project management process was plan-driven, feature development process was continuous and implementation management process was agile. The perceived benefits included reduced development lead time, increased flexibility, increased planning efficiency, increased developer motivation and improved communication effectiveness. The recognized problems included difficulties in balancing planning effort, overcommitment, insufficient understanding of the development team autonomy, defining the product owner role, balancing team specialization, organizing system-level work and growing technical debt. The study indicates that agile development methods can be successfully employed in organizations where the higher level planning processes are not agile. Combining agile methods with a flexible feature development process can bring many benefits, but large-scale software development seems to require specialist roles and significant coordination effort. |
Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels | When human annotators are given a choice about what to label in an image, they apply their own subjective judgments on what to ignore and what to mention. We refer to these noisy "human-centric" annotations as exhibiting human reporting bias. Examples of such annotations include image tags and keywords found on photo sharing sites, or in datasets containing image captions. In this paper, we use these noisy annotations for learning visually correct image classifiers. Such annotations do not use consistent vocabulary, and miss a significant amount of the information present in an image, however, we demonstrate that the noise in these annotations exhibits structure and can be modeled. We propose an algorithm to decouple the human reporting bias from the correct visually grounded labels. Our results are highly interpretable for reporting "what's in the image" versus "what's worth saying." We demonstrate the algorithm's efficacy along a variety of metrics and datasets, including MS COCO and Yahoo Flickr 100M. We show significant improvements over traditional algorithms for both image classification and image captioning, doubling the performance of existing methods in some cases. |
Predicting personality with social media | Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions; it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Facebook profile. We will describe the type of data collected, our methods of analysis, and the results of predicting personality traits through machine learning. We then discuss the implications this has for social media design, interface design, and broader domains. |
Challenges and Pitfalls of Partitioning Blockchains | Blockchain has received much attention in recent years. This immense popularity has raised a number of concerns, scalability of blockchain systems being a common one. In this paper, we seek to understand how Ethereum, a well-established blockchain system, would respond to sharding. Sharding is a prevalent technique to increase the scalability of distributed systems. To understand how sharding would affect Ethereum, we model Ethereum blockchain as a graph and evaluate five methods to partition the graph. We assess methods using three metrics: the balance among shards, the number of transactions that would involve multiple shards, and the amount of data that would be relocated across shards upon repartitioning of the graph. |
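The three metrics are straightforward to compute once a partition is fixed; the sketch below evaluates them for a given account-to-shard assignment over a set of transactions (the data structures and the balance definition are illustrative assumptions).

```python
# Illustrative sketch of the three sharding metrics (data structures are assumptions).
from collections import Counter

def shard_metrics(partition_old, partition_new, transactions):
    """partition_*: dict account -> shard id; transactions: iterable of (sender, receiver)."""
    sizes = Counter(partition_new.values())
    balance = max(sizes.values()) / (sum(sizes.values()) / len(sizes))   # 1.0 = perfectly balanced
    cross_shard = sum(1 for a, b in transactions
                      if partition_new[a] != partition_new[b])           # transactions touching >1 shard
    relocated = sum(1 for acct, shard in partition_new.items()
                    if acct in partition_old and partition_old[acct] != shard)
    return balance, cross_shard, relocated
```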
Design of CPW fed bow-tie slot antenna for ground penetrating radar application | This paper presents an antenna for ground penetrating radar (GPR) for soil characteristics measurement applications. The objective is an operating frequency range of 250-1250 MHz while restricting the antenna lobes to radiate toward the ground. Ground penetrating radar design demands an antenna with ultra-wide bandwidth (UWB). With a target gain of 5 dBi, conventional antennas such as dipoles cannot be used effectively throughout the frequency band. The main objective of the proposed antenna is its use for the extraction of agricultural soil characteristics. This paper explores the practicability of building an ultra-wideband bow-tie slot antenna for a GPR system. An aluminum back cavity is used to achieve a unidirectional radiation pattern and enhance the gain of the antenna. The antenna is configured and implemented based on the best simulation results. |
The use of pornography during the commission of sexual offenses. | The goal of this study was to examine the use of pornographic materials by sex offenders during the commission of their crimes. A sample of 561 sex offenders was examined. There were 181 offenders against children, 144 offenders against adults, 223 incest offenders, 8 exhibitionists, and 5 miscellaneous cases. All but four cases were men. A total of 96 (17%) offenders had used pornography at the time of their offenses. More offenders against children than against adults used pornography in the offenses. Of the users, 55% showed pornographic materials to their victims and 36% took pictures, mostly of child victims. Nine cases were involved in the distribution of pornography. Results showed that pornography plays only a minor role in the commission of sexual offenses, however the current findings raise a major concern that pornography use in the commission of sexual crimes primarily involved child victims. |
Dressed to Kill | Technology is often defined in terms of tools or machines but, in this article, it is treated as the human capacity to make. The author focuses on clothing as an instance of making in war. Specific attention is paid to the junction between the power to make (or unmake) and the social and ritual capacities for regulation through which making is governed. In this sense, the study is intended as a contribution to a revived interest in the incomplete Durkheimian project on elementary forms of technique, and techniques of the body in particular. The case-study material derives from the civil war in Sierra Leone (1991—2002) in which dress was as important an aspect of making war as weaponry. Various functions and social and material entailments of battle dress are described and differentiated, and the central role of magic for understanding clothing (and technology more generally) is underlined. |
Production-level facial performance capture using deep convolutional neural networks | We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips. |
Data Mining Model for Predicting Student Enrolment in STEM Courses in Higher Education Institutions | Educational data mining is the process of applying data mining tools and techniques to analyze data at educational institutions. In this paper, educational data mining was used to predict enrollment of students in Science, Technology, Engineering and Mathematics (STEM) courses in higher educational institutions. The study examined the extent to which individual, sociodemographic and school-level contextual factors help in pre-identifying successful and unsuccessful students in enrollment in STEM disciplines in Higher Education Institutions in Kenya. The Cross Industry Standard Process for Data Mining framework was applied to a dataset drawn from the first, second and third year undergraduate female students enrolled in STEM disciplines in one University in Kenya to model student enrollment. Feature selection was used to rank the predictor variables by their importance for further analysis. Various predictive algorithms were evaluated in predicting enrollment of students in STEM courses. Empirical results showed the following: (i) the most important factors separating successful from unsuccessful students are: High School final grade, teacher inspiration, career flexibility, pre-university awareness and mathematics grade. (ii) among classification algorithms for prediction, decision tree (CART) was the most successful classifier with an overall percentage of correct classification of 85.2%. This paper showcases the importance of Prediction and Classification based data mining algorithms in the field of education and also presents some promising future lines. |
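As a concrete illustration of the two steps described above (ranking predictors, then CART classification), the sketch below uses scikit-learn with hypothetical column names mirroring the factors named in the abstract; the file layout and variable names are assumptions, not the study's actual dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical predictors mirroring those named in the abstract.
features = ["hs_final_grade", "teacher_inspiration", "career_flexibility",
            "pre_university_awareness", "math_grade"]

df = pd.read_csv("stem_enrolment.csv")           # assumed file layout
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["enrolled_stem"], test_size=0.3, random_state=0)

cart = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=0)
cart.fit(X_train, y_train)

# Rank predictors by importance, then evaluate classification accuracy.
ranking = sorted(zip(features, cart.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(ranking)
print("accuracy:", accuracy_score(y_test, cart.predict(X_test)))
```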
Redirecting philosophy : reflections of the nature of knowledge from Plato to Lonergan | In a contemporary climate that tends to dismiss philosophy as an outmoded and increasingly useless discipline, philosophers have been forced to reconsider much of what they have formerly taken for granted. Redirecting Philosophy, Hugo Meynell's reassessment of the foundations and nature of knowledge, is a compelling response to this trend. This illuminating study surveys and analyses the views of the most influential contemporary thinkers in the English-speaking world (Wittgenstein, Strawson, Searle, Popper, Feyerabend, Kuhn, Rorty, Lonergan) and in continental philosophy (Husserl, Heidegger, Derrida, Foucault, Habermas). In setting those views against the background of classical philosophy, Meynell offers fresh perspectives on the basic problems that occupy philosophers today - problems such as scepticism, truth, experience, metaphysics, method, power, humane values, and the role of science. An insightful, up-to-date guide to philosophy and the theory of science, Meynell's book will be stimulating and valuable reading both in and out of the classroom. |
Evolutionary Function Approximation for Reinforcement Learning | Temporal difference methods are theoretically grounded and empirically effective methods for addressing reinforcement learning problems. In most real-world reinforcement learning tasks, TD methods require a function approximator to represent the value function. However, using function approximators requires manually making crucial representational decisions. This thesis investigates evolutionary function approximation, a novel approach to automatically selecting function approximator representations that enable efficient individual learning. This method evolves individuals that are better able to learn. I present a fully implemented instantiation of evolutionary function approximation which combines NEAT, a neuroevolutionary optimization technique, with Q-learning, a popular TD method. The resulting NEAT+Q algorithm automatically discovers effective representations for neural network function approximators. This thesis also presents on-line evolutionary computation, which improves the on-line performance of evolutionary computation by borrowing selection mechanisms used in TD methods to choose individual actions and using them in evolutionary computation to select policies for evaluation. I evaluate these contributions with extended empirical studies in two domains: 1) the mountain car task, a standard reinforcement learning benchmark on which neural network function approximators have previously performed poorly and 2) server job scheduling, a large probabilistic domain drawn from the field of autonomic computing. The results demonstrate that evolutionary function approximation can significantly improve the performance of TD methods and on-line evolutionary computation can significantly improve evolutionary methods. |
Clarification Questions with Feedback | In this paper, we investigate how people construct clarification questions. Our goal is to develop similar strategies for handling errors in automatic spoken dialogue systems in order to make error recovery strategies more efficient. Using a crowd-sourcing tool [7], we collect a dataset of user responses to clarification questions when presented with sentences in which some words are missing. We find that, in over 60% of cases, users choose to continue the conversation without asking a clarification question. However, when users do ask a question, our findings support earlier research showing that users are more likely to ask a targeted clarification question than a generic question. Using the dataset we have collected, we are exploring machine learning approaches for determining which system responses are most appropriate in different contexts and developing strategies for constructing clarification questions.1 |
Quality expectations of machine translation | Machine Translation (MT) is being deployed for a range of use-cases by millions of people on a daily basis. There should, therefore, be no doubt as to the utility of MT. However, not everyone is convinced that MT can be useful, especially as a productivity enhancer for human translators. In this chapter, I address this issue, describing how MT is currently deployed, how its output is evaluated and how this could be enhanced, especially as MT quality itself improves. Central to these issues is the acceptance that there is no longer a single ‘gold standard’ measure of quality, such that the situation in which MT is deployed needs to be borne in mind, especially with respect to the expected ‘shelf-life’ of the translation itself. 1 Machine Translation Today Machine Translation (MT) is being deployed for a range of use-cases by millions of people on a daily basis. I will examine the reasons for this later in this chapter, but one inference is very clear: those people using MT in those use-cases must already be satisfied with the level of quality emanating from the MT systems they are deploying, otherwise they would stop using them. That is not the same thing at all as saying that MT quality is perfect, far from it. The many companies and academic researchers who develop and deploy MT engines today continue to strive to improve the quality of the translations produced. This too is an implicit acceptance of the fact that the level of quality is sub-optimal – for some use-cases at least – and can be improved. If MT system output is good enough for some areas of application, yet at the same time system developers are trying hard to improve the level of translations produced by their engines, then translation quality – whether produced by a machine or by a human – needs to be measurable. Note that this applies also to translators who complain that MT quality is too poor to be used in their workflows; in order to decide that with some certainty – rather than rejecting MT out-of-hand merely as a knee-jerk reaction to the onset of this new technology – the impact of MT on translators’ work needs to be measurable. In Way (2013), I appealed to two concepts, which are revisited here, namely: |
Thermoremanence in red sandstone clasts and emplacement temperature of a quaternary pyroclastic deposit (Catalan Volcanic Zone, NE Spain) | The application of the progressive thermal demagnetization procedure to volcanic rock debris has been frequently used to determine the emplacement temperatures of pyroclastic deposits and thus to characterize the nature of these volcanic deposits. This debris consists of a mixture of juvenile fragments derived from the explosive fragmentation of erupting magma and an assortment of lithic clasts derived mainly from the walls of a volcanic conduit, as well as from the ground. The temperature at which the clasts were deposited can be estimated by analyzing their remanent magnetization. To do this, oriented samples of clasts are subjected to progressive thermal demagnetization and the directions of the resulting remanent vectors provide the necessary information. Clasts of basalt, andesite, limestone, pumice and homebricks have previously been used to estimate the emplacement temperatures of pyroclastic deposits. According to our data, clasts of red sandstones also seem to be good carriers of thermoremanent magnetization. We have carried out a paleomagnetic study on a Quaternary, lithic-rich, massive, pyroclastic deposit from the Puig d'Adri volcano (Catalan Volcanic Zone), which contains a large number of red sandstone clasts. It is concluded that the studied deposit cannot be considered as a lahar or as a pyroclastic surge deposit, considering both the emplacement temperature and the morphological features. |
A randomized, double-blind, placebo-controlled study of the efficacy and safety of 2 doses of vortioxetine in adults with major depressive disorder. | BACKGROUND
This 8-week, randomized, double-blind, placebo-controlled study, conducted August 2010-May 2012 in the United States, evaluated the safety and efficacy of vortioxetine 10 mg and 15 mg in patients with major depressive disorder (MDD). The mechanism of action of vortioxetine is thought to be related to direct modulation of serotonin (5-HT) receptor activity and inhibition of the serotonin transporter.
METHOD
Adults aged 18-75 years with MDD (DSM-IV-TR) and Montgomery-Asberg Depression Rating Scale (MADRS) total score ≥ 26 were randomized (1:1:1) to receive vortioxetine 10 mg or 15 mg or placebo once daily, with the primary efficacy end point being change from baseline at week 8 in MADRS analyzed by mixed model for repeated measures. Adverse events were recorded during the study, suicidal ideation and behavior were assessed using the Columbia-Suicide Severity Rating Scale (C-SSRS), and sexual dysfunction was assessed using the Arizona Sexual Experience (ASEX) scale.
RESULTS
Of the 1,111 subjects screened, 469 subjects were randomized: 160 to placebo, 157 to vortioxetine 10 mg, and 152 to vortioxetine 15 mg. Differences from placebo in the primary efficacy end point were not statistically significant for vortioxetine 10 mg or vortioxetine 15 mg. Nausea, headache, dry mouth, constipation, diarrhea, vomiting, dizziness, and flatulence were reported in ≥ 5% of subjects receiving vortioxetine. Discontinuation due to adverse events occurred in 7 subjects (4.4%) in the placebo group, 8 (5.2%) in the vortioxetine 10 mg group, and 12 (7.9%) in the vortioxetine 15 mg group. ASEX total scores were similar across groups. There were no clinically significant trends within or between treatment groups on the C-SSRS, laboratory values, electrocardiogram, or vital sign parameters.
CONCLUSIONS
In this study, vortioxetine did not differ significantly from placebo on MADRS total score after 8 weeks of treatment in MDD subjects.
TRIAL REGISTRATION
ClinicalTrials.gov identifier: NCT01179516. |
Customers Churn Prediction and Attribute Selection in Telecom Industry Using Kernelized Extreme Learning Machine and Bat Algorithms | With the fast development of digital systems and the accompanying information technologies, there is a growing push across the wider economy to put together digital Customer Relationship Management (CRM) systems. This trend is even more palpable in the telecommunications industry, where businesses are becoming increasingly digitalized. Customer churn prediction is a foremost aspect of a contemporary telecom CRM system: a churn prediction model helps customer relationship management retain the customers who are most likely to leave. Currently, many ensemble and supervised classifiers and data mining techniques are employed to model churn prediction in telecom. In this paper, a Kernelized Extreme Learning Machine (KELM) algorithm is proposed to categorize customer churn patterns in the telecom industry. The first step of the proposed work is to organize the data from a telecommunication mobile customers dataset. Data preparation is conducted by preprocessing with the Expectation Maximization (EM) clustering algorithm. After that, customer churn behavior is examined using a Naive Bayes Classifier (NBC) according to four conditions: customer dissatisfaction (H1), switching costs (H2), service usage (H3) and customer status (H4). The attributes originate from call details and customer profiles, which enhances the precision of customer churn prediction in the telecom industry. The attributes are selected using the BAT algorithm, and the KELM algorithm is used for churn prediction. The experimental results show that the proposed model is better than the AdaBoost and Hybrid Support Vector Machine (HSVM) models in terms of ROC performance, sensitivity, specificity, accuracy and processing time. |
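For readers unfamiliar with kernelized extreme learning machines, the sketch below shows the standard closed-form KELM solution with an RBF kernel on toy data; it illustrates only the classifier itself, not the paper's full pipeline (EM preprocessing, NBC analysis and BAT attribute selection are omitted), and the hyperparameters are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise squared distances, then Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernelized Extreme Learning Machine (standard closed-form solution)."""
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(2)[y]                       # one-hot targets: churn / no churn
        K = rbf_kernel(X, X, self.gamma)
        n = K.shape[0]
        # Output weights beta = (K + I/C)^{-1} T
        self.beta = np.linalg.solve(K + np.eye(n) / self.C, T)
        return self

    def predict(self, Xnew):
        K = rbf_kernel(Xnew, self.X, self.gamma)
        return (K @ self.beta).argmax(axis=1)

# Toy usage with random data standing in for customer attributes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 6)), rng.integers(0, 2, size=100)
print(KELM().fit(X, y).predict(X[:5]))
```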
MATLAB based code for 3 D joint inversion of Magnetotelluric and Direct Current resistivity imaging data | 24th EM Induction Workshop, Helsingør, Denmark, August 12-19, 2018. |
Applications of Causally Defined Direct and Indirect Effects in Mediation Analysis using SEM in Mplus | This paper summarizes some of the literature on causal effects in mediation analysis. It presents causally-defined direct and indirect effects for continuous, binary, ordinal, nominal, and count variables. The expansion to non-continuous mediators and outcomes offers a broader array of causal mediation analyses than previously considered in structural equation modeling practice. A new result is the ability to handle mediation by a nominal variable. Examples with a binary outcome and a binary, ordinal or nominal mediator are given using Mplus to compute the effects. The causal effects require strong assumptions even in randomized designs, especially sequential ignorability, which is presumably often violated to some extent due to mediator-outcome confounding. To study the effects of violating this assumption, it is shown how a sensitivity analysis can be carried out. This can be used both in planning a new study and in evaluating the results of an existing study. |
Determinants of modern contraceptive utilization among married women of reproductive age group in North Shoa Zone, Amhara Region, Ethiopia | BACKGROUND
Ethiopia is the second most populous country in Africa, with high fertility and a fast population growth rate. It is also one of the countries with a high maternal and child mortality rate in sub-Saharan Africa. Family planning is a crucial strategy to halt the fast population growth, to reduce child mortality and to improve maternal health (Millennium Development Goals 4 and 5). Therefore, this study aimed to assess the prevalence and determinants of modern contraceptive utilization among married women of reproductive age group.
METHODS
A community based cross-sectional study was conducted from August 15 to September 1, 2010 among married women aged 15-49 years in Debre Birhan District. Multistage sampling technique was used to select a total of 851 study participants. A pre-tested structured questionnaire was used for gathering data. Bivariate and multivariate logistic regression analyses were performed using SPSS version 16.0 statistical package.
RESULTS
Modern contraceptive prevalence rate among currently married women was 46.9%. Injectable contraceptives were the most frequently used method (62.9%), followed by intrauterine device (16.8%), pills (14%), norplant (4.3%), male condom (1.2%) and female sterilization (0.8%). The multiple logistic regression model revealed that the need for more children (AOR 9.27, 95% CI 5.43-15.84), husband's approval (AOR 2.82, 95% CI 1.67-4.80) and couple's discussion about family planning issues (AOR 7.32, 95% CI 3.60-14.86) were significantly associated with modern contraceptive use. Similarly, monthly family income and number of living children were significantly associated with the use of modern contraceptives.
CONCLUSION
Modern contraceptive use was high in the district. Couple's discussion and husband approval of contraceptives use were significantly associated with the use of modern contraceptives. Therefore, district health office and concerned stakeholders should focus on couples to encourage communication and male involvement for family planning. |
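The adjusted odds ratios (AOR) with 95% confidence intervals reported in the abstract above come from multivariable logistic regression (performed in SPSS in the study). The sketch below shows an equivalent computation in statsmodels; the variable names are hypothetical stand-ins for the study's covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("contraceptive_survey.csv")     # assumed variable names

# Multivariable logistic regression: outcome is current modern contraceptive use.
model = smf.logit(
    "modern_use ~ want_more_children + husband_approves + couple_discussion"
    " + family_income + living_children",
    data=df).fit()

# Adjusted odds ratios and 95% confidence intervals.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```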
[Importance-Performance Analysis for services management]. | Importance-Performance Analysis (IPA) is an indirect approach to measuring user satisfaction that makes it possible to represent, in an easy and functional way, the main strengths and improvement areas of a specific product or service. Starting from the importance and performance judgements that users assign to each salient attribute of a service, a graphic divided into four quadrants is obtained, from which recommendations for managing the organization's economic resources are derived. Nevertheless, this tool has raised controversies since its origins, referring fundamentally to the placement of the axes that define the quadrants and to the conception and measurement of the importance of the attributes that compose the service. The primary goal of this article is to propose an alternative to the IPA representation that overcomes the limitations and contradictions derived from the original technique, without rejecting the classical graph. The analysis is applied to data obtained in a survey on satisfaction with primary health care services in Galicia. The results will make it possible to advise primary health care managers with a view toward planning future strategic actions. |
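For orientation, the sketch below draws a classical IPA grid with quadrant boundaries placed at the grand means of importance and performance, which is one common convention and exactly the kind of axis-placement choice the article debates. The attribute ratings are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical mean (importance, performance) ratings for service attributes.
attributes = {"waiting time": (4.6, 2.8), "staff courtesy": (4.2, 4.1),
              "facilities": (3.1, 3.9), "information given": (3.4, 2.6)}

imp = [v[0] for v in attributes.values()]
perf = [v[1] for v in attributes.values()]

fig, ax = plt.subplots()
ax.scatter(perf, imp)
for name, (i, p) in attributes.items():
    ax.annotate(name, (p, i))

# Crosshairs at the grand means split the plot into the four IPA quadrants
# ("concentrate here", "keep up the good work", "low priority", "possible overkill").
ax.axvline(sum(perf) / len(perf), linestyle="--")
ax.axhline(sum(imp) / len(imp), linestyle="--")
ax.set_xlabel("Performance")
ax.set_ylabel("Importance")
plt.show()
```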
Interproximal gingivitis and plaque reduction by four interdental products. | OBJECTIVE
The study was conducted to compare the performance of three interdental products to dental floss in the control and removal of plaque, and in the reduction of gingivitis.
METHODOLOGY
One-hundred and twenty subjects were screened for the presence of interproximal sites of a size suitable for a GUM Go-Betweens cleaner, and for being in compliance with inclusion and exclusion criteria. They were then assessed with the Plaque, Gingivitis, and Eastman Interdental Bleeding Indices (EIBI) at baseline, given a prophylaxis, randomly assigned to one of four products (Glide dental floss, Butler flossers, GUM Go-Betweens cleaners, and GUM Soft-Picks cleaners), and given product use instructions. Subjects returned at three weeks for a compliance review and at six weeks for a final visit. Plaque was assessed at the final visit before and after using the assigned products. Plaque, gingivitis, and bleeding scores were evaluated by analysis of covariance using the baseline measurements as the covariate.
RESULTS
All four interdental products significantly reduced interdental plaque from baseline to before-use at the final visit (after six weeks) employing baseline plaque as a covariate. Reductions were 16% to 24%. Similarly, use of the products at the final visit resulted in 26% to 31% reductions in plaque with the before-use plaque as a covariate. Interdental gingivitis scores showed a reduction both lingually and buccally, with reductions ranging from 27% to 36% for the former and 34% to 53% for the latter (baseline was the covariate). No statistical differences were found between the products on the lingual interdental sites. The Go-Betweens cleaners showed a statistically greater reduction in the Gingival Index score buccally than the other three products. No differences were noted among the products for the EIBI.
CONCLUSION
In this study, dental floss, the recognized "gold standard" for gingivitis reduction, was matched in performance by flossers and an interdental cleaner with small elastomeric fingers, and surpassed by an interdental brush. All products performed comparably for plaque reduction and removal. |
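The methodology above relies on analysis of covariance with the baseline measurement as covariate. The following is a minimal statsmodels sketch of that analysis for the six-week plaque score; the column names are hypothetical and the code is an illustration, not the trial's statistical software.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("interdental_trial.csv")        # assumed columns

# ANCOVA: six-week plaque score modelled by product group with baseline as covariate.
fit = ols("plaque_week6 ~ plaque_baseline + C(product)", data=df).fit()
print(anova_lm(fit, typ=2))                      # F-test for the product effect

# Baseline-adjusted group means (predicted at the overall mean baseline score).
grid = pd.DataFrame({"product": df["product"].unique(),
                     "plaque_baseline": df["plaque_baseline"].mean()})
print(grid.assign(adjusted_mean=fit.predict(grid)))
```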
Psychotic-like Experiences and Substance Use in College Students. | Psychotic disorders, as well as psychotic-like experiences and substance use, have been found to be associated. The main goal of the present study was to analyse the relationship between psychotic-like experiences and substance use in college students. The sample comprised a total of 660 participants (M = 20.3 years, SD = 2.6). The results showed that 96% of the sample reported some delusional experience, while 20.3% reported at least one positive psychotic-like experience. Some substance use was reported by 41.1% of the sample, differing in terms of gender. Substance users reported more psychotic-like experiences than non-users, especially in the positive dimension. Also, alcohol consumption predicted in most cases extreme scores on measures of delusional ideation and psychotic experiences. The association between these two variables showed a differentiated pattern, with a stronger relationship between substance use and cognitive-perceptual psychotic-like experiences. To some extent, these findings support the dimensional models of the psychosis phenotype and contribute to a better understanding of the links between psychotic-like experiences and substance use in young adults. Future studies should further explore the role of different risk factors for psychotic disorders and include models of the gene-environment interaction. |
Smart Supervisory Control for Optimized Power Management System of hybrid Micro-Grid | Microgrids have become a widely accepted concept for the better interconnection of distributed generators (DGs). Corresponding to the conventional power system, AC microgrids have been proposed; at the same time, the increasing use of renewable energy sources that generate DC power (and therefore need a DC link for grid connection) and the growth of modern DC loads have led to the recent emergence of DC microgrids, which offer benefits in terms of efficiency, cost and number of conversion stages. During islanded operation of the hybrid AC/DC microgrid, the interlinking converter (IC) is intended to take the role of supplier to one microgrid and at the same time act as a load to the other microgrid, and the power management system should be able to share the power demand between the existing AC and DC sources in both microgrids. This paper considers the power flow control and management issues among multiple sources distributed throughout both AC and DC microgrids. The paper proposes a decentralized power sharing method in order to eliminate the need for communication between DGs or microgrids. The performance of the proposed power control strategy is validated for different operating conditions using the MATLAB/Simulink environment. Keywords: mixed integer linear programming, hybrid AC/DC microgrid, interlinking AC/DC converter, power management. |
Selective Sensitization to the Psychosis-Inducing Effects of Cocaine: A Possible Marker for Addiction Relapse Vulnerability? | Patients in inpatient rehabilitation for uncomplicated cocaine dependence were asked whether, compared with the time of their first regular use, they could now identify changes in the effects of similar doses of cocaine. We asked about a spectrum of cocaine effects “then” and “now” and whether the same amount of drug caused effects to occur to about the same degree, less intensely (tolerance), or more intensely (sensitization). Nearly half our sample developed predominantly paranoid psychoses in the context of cocaine use. Sensitization was consistently linked only to psychosis-related cocaine effects.It has been proposed that mesolimbic dopaminergic sensitization might contribute to addiction severity. A preliminary followup of patients who were sensitized or nonsensitized to psychosis development suggests that rehospitalization for treatment of addiction may be more frequent in the sensitized group. |
Effectiveness of Front-Of-Pack Nutrition Labels in French Adults: Results from the NutriNet-Santé Cohort Study | BACKGROUND
To date, no consensus has emerged on the most appropriate front-of-pack (FOP) nutrition label to help consumers in making informed choices. We aimed to compare the effectiveness of the label formats currently in use: nutrient-specific, graded and simple summary systems, in a large sample of adults.
METHODS
The FOP label effectiveness was assessed by measuring the label acceptability and understanding among 13,578 participants of the NutriNet-Santé cohort study, representative of the French adult population. Participants were exposed to five conditions, including four FOP labels: Guideline Daily Amounts (GDA), Multiple Traffic Lights (MTL), 5-Color Nutrition Label (5-CNL), Green Tick (Tick), and a "no label" condition. Acceptability was evaluated by several indicators: attractiveness, liking and perceived cognitive workload. Objective understanding was assessed by the percentage of correct answers when ranking three products according to their nutritional quality. Five different product categories were tested: prepared fish dishes, pizzas, dairy products, breakfast cereals, and appetizers. Differences among the label effectiveness were compared with chi-square tests.
RESULTS
The 5-CNL was viewed as the easiest label to identify and as the one requiring the lowest amount of effort and time to understand. GDA was considered as the least easy to identify and to understand, despite being the most attractive and liked label. All FOP labels were found to be effective in ranking products according to their nutritional quality compared with the "no label" situation, although they showed differing levels of effectiveness (p<0.0001). Globally, the 5-CNL performed best, followed by MTL, GDA and Tick labels.
CONCLUSIONS
The graded 5-CNL label was considered as easy to identify, simple and rapid to understand; it performed well when comparing the products' nutritional quality. Therefore, it is likely to present advantages in real shopping situations where choices are usually made quickly. |
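The abstract above compares label formats with chi-square tests on objective understanding (correct versus incorrect product rankings). The sketch below shows such a comparison with scipy; the counts are invented purely for illustration and do not reproduce the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of correct / incorrect rankings per front-of-pack label.
counts = {          # correct, incorrect
    "5-CNL": [950, 250],
    "MTL":   [880, 320],
    "GDA":   [820, 380],
    "Tick":  [790, 410],
    "None":  [600, 600],
}

table = list(counts.values())
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2e}")
```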
Reducing the Crime Problem: A Not So Dismal Criminology | The New Criminology appeared when I was a PhD student at the University of Queensland. It was the most important criminology book of the decade. More generally, the 1970s was the British decade in criminology, just as the 1960s had been in music. American criminology, which had dominated our thinking in the colonies during the 1960s, seemed theoretically uninspired in comparison. But by the 1980s the Cliff Richards of British criminology were back. Places such as Canada, Scandinavia and even Australia became more interesting intellectual communities for criminologists. Some of those involved in the great British criminology books of the 1970s — Maureen Cain, Stan Cohen, Frank Pearce, Ian Taylor, Paul Walton — actually left the country. Among the others who left were Kit Carson and Barry Hindess, who, while they did not write central criminological books during the 1970s, in different ways significantly influenced the British intellectual leadership of the field. |
Causal inference in economics and marketing. | This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual: a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference. |
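As a toy illustration of the counterfactual idea described above, the sketch below fits an outcome model on untreated units only and uses its predictions for treated units as the estimate of "what would have happened without the treatment". Column names are hypothetical, and the example deliberately glosses over the identification assumptions (e.g., unconfoundedness) a real analysis must justify.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("campaign_data.csv")            # assumed columns
covariates = ["past_spend", "visits", "region_code"]

# Outcome model trained only on untreated (control) units.
control = df[df["treated"] == 0]
model = GradientBoostingRegressor().fit(control[covariates], control["spend"])

# Counterfactual prediction for the treated units, and a simple effect estimate.
treated = df[df["treated"] == 1]
counterfactual = model.predict(treated[covariates])
effect = (treated["spend"] - counterfactual).mean()
print("estimated average treatment effect on the treated:", effect)
```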
Why is "SXSW" trending? Exploring Multiple Text Sources for Twitter Topic Summarization | User-contributed content is creating a surge on the Internet. A list of “buzzing topics” can effectively monitor the surge and lead people to their topics of interest. Yet a topic phrase alone, such as “SXSW”, can rarely present the information clearly. In this paper, we propose to explore a variety of text sources for summarizing the Twitter topics, including the tweets, normalized tweets via a dedicated tweet normalization system, web contents linked from the tweets, as well as integration of different text sources. We employ the concept-based optimization framework for topic summarization, and conduct both automatic and human evaluation regarding the summary quality. Performance differences are observed for different input sources and types of topics. We also provide a comprehensive analysis regarding the task challenges. |
Effectiveness of structured hourly nurse rounding on patient satisfaction and clinical outcomes. | Structured hourly nurse rounding is an effective method to improve patient satisfaction and clinical outcomes. This program evaluation describes outcomes related to the implementation of hourly nurse rounding in one medical-surgical unit in a large community hospital. Overall Hospital Consumer Assessment of Healthcare Providers and Systems domain scores increased with the exception of responsiveness of staff. Patient falls and hospital-acquired pressure ulcers decreased during the project period. |
Stability analysis of negative impedance converter | Negative Impedance Converters (NICs) have received much attention for their ability to overcome the restriction of the antenna gain-bandwidth trade-off. However, a significant problem with NICs is potential instability, i.e., unwanted oscillation due to the presence of positive feedback. To solve this problem, we propose a NIC circuit technique that is stable and has wideband negative impedance characteristics. |
CEFAM: Comprehensive Evaluation Framework for Agile Methodologies | Agile software development is regarded as an effective and efficient approach, mainly due to its ability to accommodate rapidly changing requirements, and to cope with modern software development challenges. There is therefore a strong tendency to use agile software development methodologies where applicable; however, the sheer number of existing agile methodologies and their variants hinders the selection of an appropriate agile methodology or method chunk. Methodology evaluation tools address this problem through providing detailed evaluations, yet no comprehensive evaluation framework is available for agile methodologies. We introduce the comprehensive evaluation framework for agile methodologies (CEFAM) as an evaluation tool for project managers and method engineers. The hierarchical (and mostly quantitative) evaluation criterion set introduced in this evaluation framework enhances the usability of the framework and provides results that are precise enough to be useful for the selection, adaptation and construction of agile methodologies. |
A two-warehouse inventory model for items with three-parameter Weibull distribution deterioration, shortages and linear trend in demand | Depending on the type of goods and storage facilities available, perishable goods decay in different manners in terms of the initial point and rate of deterioration. The three-parameter Weibull distribution is an excellent generalization of exponential decay, with the flexibility of modeling various types of deteriorations. Since inventory management of perishable goods involves expensive storage facilities, the retailer with small storage may have to rent a warehouse. In this paper, we discuss a two-warehouse inventory model where deteriorations in the two warehouses follow independent three-parameter Weibull distributions. Transfer of units is from the rented warehouse to the own warehouse, and incurs a positive cost per unit. Demand is a non-decreasing linear function of time, shortages are backlogged and replenishment is instantaneous. A solution procedure for obtaining optimal values of initial inventory level and cycle time is presented. Sensitivity analysis is carried out. The effect of using other related deterioration distributions is illustrated. |
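For reference, the block below sketches the deterioration rate that a three-parameter Weibull assumption typically implies in such inventory models, using the common notation of scale α, shape β and location (delay) γ; the paper's exact symbols and formulation may differ.

```latex
% Instantaneous deterioration rate implied by a three-parameter Weibull
% lifetime with scale \alpha, shape \beta and location (delay) \gamma:
\theta(t) = \alpha \beta \,(t-\gamma)^{\beta-1}, \qquad t > \gamma,
% so deterioration starts only after time \gamma; \beta = 1 recovers a
% constant (exponential) decay rate, while \beta > 1 gives a rate that
% increases with the age of the stock. The inventory level I(t) in a
% warehouse during the stock-holding period then satisfies
\frac{dI(t)}{dt} + \theta(t)\, I(t) = -D(t),
% with D(t) = a + b t the non-decreasing linear demand assumed above.
```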
Training of Airport Security Screeners | A scientifically based training system that is effective and efficient allows screeners to achieve an excellent level of detection performance. Object recognition is a very complex process, but essentially it means comparing visual information to object representations stored in visual memory. The ability to recognize an object class depends on whether it, or a similar instance, has been stored previously in visual memory. In other words, you can only recognize what you have learned. This explains why training is so important. Identifying the threat items in Fig. 1a and 1b is difficult without training because the objects are depicted in a view that is rather unusual in everyday life. Detecting a bomb such as in Fig. 1c is difficult for untrained people because we do not usually encounter bombs in everyday life. Therefore, a good training system must contain many forbidden objects in many viewpoints in order to train screeners to detect them reliably. Indeed, several studies from our lab and many others worldwide have found that object recognition is often dependent on viewpoint. Moreover, there are numerous studies from neuroscience suggesting that objects are stored in a view-based format in the brain. As can be seen in Fig. 2, the hammer, dirk, grenade and gun visible in the bags of Fig. 1a and 1b are indeed much easier to recognize if they are shown in a view that is more often encountered in real life. Because you never know how terrorists place their threat items in a bag, airport security screeners should be trained to detect prohibited items from all kinds of different viewpoints. In close collaboration with Zurich State Police, Airport Division, we have developed such a training system. Current x-ray machines provide high resolution images, many image processing features and even automatic explosive detection. But the machine is only one half of the whole system. The last and most important decision is always taken by the human operator. In fact, the best and most expensive equipment is of limited use if a screener fails to recognize a threat in the x-ray image. This is of special importance because, according to several aviation security experts, the human operator is currently the weakest link in airport security. This is being realized more and more, and several authorities as well as airports are planning to increase investments into a very important element of aviation security: effective and efficient training of screeners. Indeed, … |
Optical coherence tomography. | A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as approximately 10(-10) of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively. |
CSI: Community-Level Social Influence Analysis | Modeling how information propagates in social networks driven by peer influence, is a fundamental research question towards understanding the structure and dynamics of these complex networks, as well as developing viral marketing applications. Existing literature studies influence at the level of individuals, mostly ignoring the existence of a community structure in which multiple nodes may exhibit a common influence pattern. In this paper we introduce CSI, a model for analyzing information propagation and social influence at the granularity of communities. CSI builds over a novel propagation model that generalizes the classic Independent Cascade model to deal with groups of nodes (instead of single nodes) influence. Given a social network and a database of past information propagation, we propose a hierarchical approach to detect a set of communities and their reciprocal influence strength. CSI provides a higher level and more intuitive description of the influence dynamics, thus representing a powerful tool to summarize and investigate patterns of influence in large social networks. The evaluation on various datasets suggests the effectiveness of the proposed approach in modeling information propagation at the level of communities. It further enables to detect interesting patterns of influence, such as the communities that play a key role in the overall diffusion process, or that are likely to start information cascades. |
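To make the propagation model above concrete, the sketch below runs a toy Independent Cascade at the granularity of communities: each newly activated community gets a single chance to activate each neighbouring community with a given influence probability. This illustrates the cascade model the abstract generalizes, not the CSI inference algorithm itself, and the influence values are invented.

```python
import random

def community_cascade(influence, seeds, rng=random.Random(0)):
    """influence: dict {(c_from, c_to): probability}; seeds: initially active communities."""
    active, frontier = set(seeds), set(seeds)
    while frontier:
        new = set()
        for c in frontier:
            for (src, dst), p in influence.items():
                # Each newly active community gets one activation attempt per neighbour.
                if src == c and dst not in active and rng.random() < p:
                    new.add(dst)
        active |= new
        frontier = new
    return active

# Toy community-level influence strengths.
influence = {("A", "B"): 0.6, ("B", "C"): 0.4, ("A", "C"): 0.1, ("C", "D"): 0.7}
print(community_cascade(influence, {"A"}))
```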
DeepMimic: example-guided deep reinforcement learning of physics-based character skills | A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts. |
Classification of Histopathology Images of Breast into Benign and Malignant using a Single-layer Convolutional Neural Network | Breast cancer is known as the second most prevalent cancer among women worldwide, and an accurate and fast diagnosis of it requires a pathologist to go through a time-consuming process examining different captured images under varying magnifications. Computer vision and machine learning techniques are used by many scholars to automate this process and provide a faster and more accurate diagnosis of such cancers, but most of them have utilized hand-engineered feature descriptors to classify the type of images (whether benign or malignant). Deep learning techniques have made significant progress in the world of pattern recognition, image classification, object detection, etc. Convolutional Neural Networks (CNNs), a special kind of deep learning method, are best known for identifying patterns in images; they try to represent an abstract form of images containing the most salient information needed for distinguishing them from different similar-looking images. The main aim of this paper is to employ a CNN for the task of breast cancer classification given an unknown image of the patient for an accurate diagnosis. A new network design is proposed to extract the most informative features from a collection of histopathology images provided by the BreakHis database of microscopic breast tumor images. The experimental results, carried out on 1,995 histopathological images (with a 40× magnifying factor), demonstrated an improved accuracy compared to some prior works, and a comparable performance regarding one of the previous works. |
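For readers who want a concrete picture of a single-convolutional-layer classifier, the PyTorch sketch below shows one possible shape of such a network for benign vs. malignant patches; the channel counts, kernel size and 64×64 input resolution are assumptions for illustration and not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SingleLayerCNN(nn.Module):
    """One convolutional block followed by a fully connected classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=2),  # the single conv layer
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)       # assumes 64x64 RGB input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SingleLayerCNN()
dummy = torch.randn(8, 3, 64, 64)        # batch of 8 RGB patches
print(model(dummy).shape)                # -> torch.Size([8, 2])
```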
Designing and Packaging Wide-Band PAs: Wideband PA and Packaging, History, and Recent Advances: Part 1 | In current applications such as communication, aerospace and defense, electronic warfare (EW), electromagnetic compatibility (EMC), and sensing, among others, there is an ever-growing demand for more linear power with increasingly greater bandwidth and efficiency. Critical for these applications are the design and packaging of wide-band, high-power amplifiers that are both compact in size and low in cost [1]. Most such applications, including EW, radar, and EMC testers, require high 1-dB compression point (P1dB) power with good linearity across a wide band (multi-octave to decade bandwidth) [2]. In addition to linear power, wide bandwidth is essential for high-data-rate communication and high resolution in radar and active imagers [3]. In modern electronics equipment such as automated vehicles, rapidly increasing complexity imposes strict EMC regulations for human safety and security [4]. This often requires challenging specifications for the power amplifier (PA), such as very high P1dB power [to kilowatt, continuous wave (CW)] across approximately a decade bandwidth with high linearity, reliability, and long life, even for 100% load mismatch [5]. |
Union and Difference of Models, 10 years later | This paper contains a summary of the talk given by the author on the occasion of the MODELS 2013 most influential paper award. The talk discussed the original paper as published in 2003, the research work done by others afterwards and the author's personal reflection on the award. 1 Version Control of Software and System Models There are two main usage scenarios for design models in software and system development: models as sketches, that serve as a communication aid in informal discussions, and models as formal artifacts, to be analyzed, transformed into other artifacts, maintained and evolved during the whole software and system development process. In this second scenario, models are valuable assets that should be kept in a trusted repository. In a complex development project, these models will be updated often and concurrently by different developers. Therefore, there is a need for a version control system for models with optimistic locking. This is a system to compare, merge and store all versions of all models created within a development project. We can illustrate the use of a version control system for models as follows. Let us assume that the original model shown at the top of Figure 1 is edited simultaneously by two developers. One developer has decided that the subclass B is no longer necessary in the model. Simultaneously, the other developer has decided that class C should have a subclass D. The problem is to combine the contributions of both developers into a single model. This is the model shown at the bottom of Fig. 1. We presented the basic algorithms to solve this problem in the original paper published in the proceedings of the UML 2003 conference [1]. The proposed solution is based on calculating the final model as the merge of the differences between the original and the edited models. Figure 2 shows an example of the difference of two models, in this case the difference between the models edited by the developers and the original model. The result of the difference is not always a model, in a similar way that the difference between two natural numbers is not a natural number but a negative one. An example of this is shown at the bottom of Figure 2. |
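As a toy sketch of the union/difference idea summarized above, the code below represents a model as a set of facts (here, (subclass, superclass) pairs), computes each developer's edit as a difference against the original, and merges the original with both differences. It is only an illustration of the concept, not the algorithm from the 2003 paper.

```python
def diff(original, edited):
    # A difference is a pair (added facts, removed facts); removals are what
    # make the result "not always a model", like a negative number.
    return edited - original, original - edited

def merge(original, *edits):
    added, removed = set(), set()
    for edited in edits:
        a, r = diff(original, edited)
        added |= a
        removed |= r
    return (original | added) - removed

# Facts are (subclass, superclass) pairs of a tiny class diagram.
original = {("B", "A"), ("C", "A")}
dev1 = {("C", "A")}                                 # developer 1 deletes subclass B
dev2 = {("B", "A"), ("C", "A"), ("D", "C")}         # developer 2 adds D under C

print(merge(original, dev1, dev2))                  # -> {('C', 'A'), ('D', 'C')}
```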
Robust Instance Recognition in Presence of Occlusion and Clutter | We present a robust learning based instance recognition framework from single view point clouds. Our framework is able to handle real-world instance recognition challenges, i.e, clutter, similar looking distractors and occlusion. Recent algorithms have separately tried to address the problem of clutter [9] and occlusion [16] but fail when these challenges are combined. In comparison we handle all challenges within a single framework. Our framework uses a soft label Random Forest [5] to learn discriminative shape features of an object and use them to classify both its location and pose. We propose a novel iterative training scheme for forests which maximizes the margin between classes to improve recognition accuracy, as compared to a conventional training procedure. The learnt forest outperforms template matching, DPM [7] in presence of similar looking distractors. Using occlusion information, computed from the depth data, the forest learns to emphasize the shape features from the visible regions thus making it robust to occlusion. We benchmark our system with the state-of-the-art recognition systems [9, 7] in challenging scenes drawn from the largest publicly available dataset. To complement the lack of occlusion tests in this dataset, we introduce our Desk3D dataset and demonstrate that our algorithm outperforms other methods in all settings. |
Research synthesis in software engineering: A tertiary study | Context: Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline. Objective: The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice. Method: A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering. Results: As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis. Conclusion: We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice. |
An Experimental Investigation of Influence of Process Parameters on Cutting Tool Chatter | Cutting tool chatter, a relative movement between the cutting tool and the workpiece during machining, is an important parameter that influences cutting tool life and the surface finish of the machined part. Further, chatter is influenced by self-excitation of the cutting tool, tool tip temperature and several controlled process parameters, including depth of cut, feed rate and spindle speed. In the present study, experiments were carried out to investigate the influence of cutting tool chatter on surface roughness during the machining of mild steel with a carbide insert cutting tool under different combinations of process parameters. The experimental work was carried out on a conventional lathe under combinations of process parameters designed through a 3^k factorial design. The cutting tool chatter was captured with the help of a tri-axial accelerometer mounted on the cutting tool, through a four-channel FFT analyser and NVGate 9.10 software. The surface roughness of the machined part was measured with a Mitutoyo SJ-201 instrument. The combined effects of the process parameters and surface roughness on cutting tool chatter were analysed using the analysis of variance (ANOVA) tool. Keywords: cutting tool chatter, surface roughness, tri-axial accelerometer, FFT analyser, ANOVA |
Improving parenting skills for families of young children in pediatric settings: a randomized clinical trial. | IMPORTANCE
Disruptive behavior disorders, such as attention-deficit/hyperactivity disorder and oppositional defiant disorder, are common and stable throughout childhood. These disorders cause long-term morbidity but benefit from early intervention. While symptoms are often evident before preschool, few children receive appropriate treatment during this period. Group parent training, such as the Incredible Years program, has been shown to be effective in improving parenting strategies and reducing children's disruptive behaviors. Because they already monitor young children's behavior and development, primary care pediatricians are in a good position to intervene early when indicated.
OBJECTIVE
To investigate the feasibility and effectiveness of parent-training groups delivered to parents of toddlers in pediatric primary care settings.
DESIGN, SETTING, AND PARTICIPANTS
This randomized clinical trial was conducted at 11 diverse pediatric practices in the Greater Boston area. A total of 273 parents of children between 2 and 4 years old who acknowledged disruptive behaviors on a 20-item checklist were included.
INTERVENTION
A 10-week Incredible Years parent-training group co-led by a research clinician and a pediatric staff member.
MAIN OUTCOMES AND MEASURES
Self-reports and structured videotaped observations of parent and child behaviors conducted prior to, immediately after, and 12 months after the intervention.
RESULTS
A total of 150 parents were randomly assigned to the intervention or the waiting-list group. An additional 123 parents were assigned to receive intervention without a randomly selected comparison group. Compared with the waiting-list group, greater improvement was observed in both intervention groups (P < .05). No differences were observed between the randomized and the nonrandomized intervention groups.
CONCLUSIONS AND RELEVANCE
Self-reports and structured observations provided evidence of improvements in parenting practices and child disruptive behaviors that were attributable to participation in the Incredible Years groups. This study demonstrated the feasibility and effectiveness of parent-training groups conducted in pediatric office settings to reduce disruptive behavior in toddlers.
TRIAL REGISTRATION
clinicaltrials.gov Identifier: NCT00402857. |
Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG | Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, i.e. learning from the raw data. Now, there is increasing interest in using deep ConvNets for end-to-end EEG analysis. However, little is known about many important aspects of how to design and train ConvNets for end-to-end EEG decoding, and there is still a lack of techniques to visualize the informative EEG features the ConvNets learn. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed movements from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching or surpassing that of the widely-used filter bank common spatial patterns (FBCSP) decoding algorithm. While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta and high gamma frequencies. These methods also proved useful as a technique for spatially mapping the learned features, revealing the topography of the causal contributions of features in different frequency bands to decoding the movement classes. Our study thus shows how to design and train ConvNets to decode movement-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. |
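As a rough illustration of the kind of architecture discussed above (a temporal convolution followed by a spatial convolution across electrodes, batch normalization and ELU activations), here is a minimal PyTorch sketch; the layer sizes, pooling, the 22-channel/4-class setup and the crude output averaging are illustrative assumptions, not the paper's exact ConvNet designs or its cropped-training procedure.

```python
import torch
import torch.nn as nn

# Minimal EEG ConvNet sketch: temporal filter -> spatial filter -> BatchNorm -> ELU.
class EEGConvNet(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(1, 11)),            # temporal filtering
            nn.Conv2d(25, 25, kernel_size=(n_channels, 1)),   # spatial filtering over electrodes
            nn.BatchNorm2d(25),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 15)),
        )
        self.classifier = nn.Conv2d(25, n_classes, kernel_size=(1, 20))

    def forward(self, x):            # x: (batch, 1, channels, time samples)
        out = self.classifier(self.features(x))
        return out.mean(dim=(2, 3))  # average class scores over the remaining time axis

# Two 4-second trials at 250 Hz with 22 EEG channels.
logits = EEGConvNet()(torch.randn(2, 1, 22, 1000))
print(logits.shape)   # torch.Size([2, 4])
```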
Dynamic properties and liquefaction potential of soils | The design of geotechnical engineering problems that involve dynamic loading of soils and soil–structure interaction systems requires the determination of two important parameters: the shear modulus and the damping of the soils. Recent developments in numerical analyses of the nonlinear dynamic response of grounds due to strong earthquake motions have increased the demand for dynamic soil properties at large strain levels as well. Further, the most common cause of ground failure during earthquakes is the liquefaction phenomenon, which has produced severe damage all over the world. This paper summarizes the methods of determining the dynamic properties as well as the liquefaction potential of soils. Parameters affecting the dynamic properties and liquefaction have been brought out. A simple procedure for obtaining the dynamic properties of layered ground has been highlighted. Results of a series of cyclic triaxial tests on liquefiable sands collected from sites close to the Sabarmati river belt have been presented. |
Distributed multi-agent algorithm for residential energy management in smart grids | Distributed renewable power generators, such as solar cells and wind turbines, are difficult to predict, making the demand-supply problem more complex than in the traditional energy production scenario. They also introduce bidirectional energy flows in the low-voltage power grid, possibly causing voltage violations and grid instabilities. In this article we describe a distributed algorithm for residential energy management in smart power grids. This algorithm consists of a market-oriented multi-agent system using virtual energy prices, levels of renewable energy in the real-time production mix, and historical price information to shift loads to periods with a high production of renewable energy. Evaluations in our smart grid simulator for three scenarios show that the designed algorithm is capable of improving the self-consumption of renewable energy in a residential area and reducing the average and peak loads for externally supplied power. |
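To make the load-shifting mechanism concrete, here is a minimal sketch of a single household agent choosing the cheapest feasible window for a deferrable appliance given a virtual price signal; the price vector, the appliance duration and the deadline are illustrative assumptions, not the algorithm or data from the article.

```python
# Minimal load-shifting sketch: schedule a deferrable appliance into the cheapest
# window of a virtual price signal (prices assumed lower when renewables are abundant).

def cheapest_start(prices, duration, earliest, deadline):
    """Pick the start slot minimizing total price for `duration` consecutive slots."""
    best_start, best_cost = None, float("inf")
    for start in range(earliest, deadline - duration + 1):
        cost = sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# 24 hourly virtual prices; the midday dip mimics high solar production.
prices = [30, 30, 29, 28, 28, 27, 26, 24, 20, 15, 10, 8,
          7, 8, 12, 18, 24, 28, 32, 34, 33, 32, 31, 30]
start, cost = cheapest_start(prices, duration=3, earliest=8, deadline=18)
print(start, cost)   # starts at hour 11, inside the assumed high-renewable window
```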
Cannabinoids and the skeleton: from marijuana to reversal of bone loss. | The active component of marijuana, Delta(9)-tetrahydrocannabinol, activates the CB1 and CB2 cannabinoid receptors, thus mimicking the action of endogenous cannabinoids. CB1 is predominantly neuronal and mediates the cannabinoid psychotropic effects. CB2 is predominantly expressed in peripheral tissues, mainly in pathological conditions. So far the main endocannabinoids, anandamide and 2-arachidonoylglycerol, have been found in bone at 'brain' levels. The CB1 receptor is present mainly in skeletal sympathetic nerve terminals, thus regulating the adrenergic tonic restrain of bone formation. CB2 is expressed in osteoblasts and osteoclasts, stimulates bone formation, and inhibits bone resorption. Because low bone mass is the only spontaneous phenotype so far reported in CB2 mutant mice, it appears that the main physiologic involvement of CB2 is associated with maintaining bone remodeling at balance, thus protecting the skeleton against age-related bone loss. Indeed, in humans, polymorphisms in CNR2, the gene encoding CB2, are strongly associated with postmenopausal osteoporosis. Preclinical studies have shown that a synthetic CB2-specific agonist rescues ovariectomy-induced bone loss. Taken together, the reports on cannabinoid receptors in mice and humans pave the way for the development of 1) diagnostic measures to identify osteoporosis-susceptible polymorphisms in CNR2, and 2) cannabinoid drugs to combat osteoporosis. |
Porous NiTi for bone implants: a review. | NiTi foams are unique among biocompatible porous metals because of their high recovery strain (due to the shape-memory or superelastic effects) and their low stiffness facilitating integration with bone structures. To optimize NiTi foams for bone implant applications, two key areas are under active study: synthesis of foams with optimal architectures, microstructure and mechanical properties; and tailoring of biological interactions through modifications of pore surfaces. This article reviews recent research on NiTi foams for bone replacement, focusing on three specific topics: (i) surface modifications designed to create bio-inert porous NiTi surfaces with low Ni release and corrosion, as well as bioactive surfaces to enhance and accelerate biological activity; (ii) in vitro and in vivo biocompatibility studies to confirm the long-term safety of porous NiTi implants; and (iii) biological evaluations for specific applications, such as in intervertebral fusion devices and bone tissue scaffolds. Possible future directions for bio-performance and processing studies are discussed that could lead to optimized porous NiTi implants. |
Medication augmentation after the failure of SSRIs for depression. | BACKGROUND
Although clinicians frequently add a second medication to an initial, ineffective antidepressant drug, no randomized controlled trial has compared the efficacy of this approach.
METHODS
We randomly assigned 565 adult outpatients who had nonpsychotic major depressive disorder without remission despite a mean of 11.9 weeks of citalopram therapy (mean final dose, 55 mg per day): 279 to receive sustained-release bupropion (at a dose of up to 400 mg per day) as augmentation and 286 to receive buspirone (at a dose of up to 60 mg per day) as augmentation. The primary outcome of remission of symptoms was defined as a score of 7 or less on the 17-item Hamilton Rating Scale for Depression (HRSD-17) at the end of this study; scores were obtained over the telephone by raters blinded to treatment assignment. The 16-item Quick Inventory of Depressive Symptomatology--Self-Report (QIDS-SR-16) was used to determine the secondary outcomes of remission (defined as a score of less than 6 at the end of this study) and response (a reduction in baseline scores of 50 percent or more).
RESULTS
The sustained-release bupropion group and the buspirone group had similar rates of HRSD-17 remission (29.7 percent and 30.1 percent, respectively), QIDS-SR-16 remission (39.0 percent and 32.9 percent), and QIDS-SR-16 response (31.8 percent and 26.9 percent). Sustained-release bupropion, however, was associated with a greater reduction (from baseline to the end of this study) in QIDS-SR-16 scores than was buspirone (25.3 percent vs. 17.1 percent, P<0.04), a lower QIDS-SR-16 score at the end of this study (8.0 vs. 9.1, P<0.02), and a lower dropout rate due to intolerance (12.5 percent vs. 20.6 percent, P<0.009).
CONCLUSIONS
Augmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings. Augmentation with sustained-release bupropion does have certain advantages, including a greater reduction in the number and severity of symptoms and fewer side effects and adverse events. (ClinicalTrials.gov number, NCT00021528.). |
Online Self-Tracking Groups to Increase Fruit and Vegetable Intake: A Small-Scale Study on Mechanisms of Group Effect on Behavior Change | BACKGROUND
Web-based interventions with a self-tracking component have been found to be effective in promoting adults' fruit and vegetable consumption. However, these interventions primarily focus on individual- rather than group-based self-tracking. The rise of social media technologies enables sharing and comparing self-tracking records in a group context. Therefore, we developed an online group-based self-tracking program to promote fruit and vegetable consumption.
OBJECTIVE
This study aims to examine (1) the effectiveness of online group-based self-tracking on fruit and vegetable consumption and (2) characteristics of online self-tracking groups that make the group more effective in promoting fruit and vegetable consumption in early young adults.
METHODS
During a 4-week Web-based experiment, 111 college students self-tracked their fruit and vegetable consumption either individually (ie, the control group) or in an online group characterized by a 2 (demographic similarity: demographically similar vs demographically diverse) × 2 (social modeling: incremental change vs ideal change) experimental design. Each online group consisted of one focal participant and three confederates as group members or peers, who had their demographics and fruit and vegetable consumption manipulated to create the four intervention groups. Self-reported fruit and vegetable consumption was assessed using the Food Frequency Questionnaire at baseline and after the 4-week experiment.
RESULTS
Participants who self-tracked their fruit and vegetable consumption collectively with other group members consumed more fruits and vegetables than participants who self-tracked individually (P=.01). The results did not show significant main effects of demographic similarity (P=.32) or types of social modeling (P=.48) in making self-tracking groups more effective in promoting fruit and vegetable consumption. However, additional analyses revealed the main effect of performance discrepancy (ie, difference in fruit and vegetable consumption between a focal participant and his/her group members during the experiment), such that participants who had a low performance discrepancy from other group members had greater fruit and vegetable consumption than participants who had a high performance discrepancy from other group members (P=.002). A mediation test showed that low performance discrepancy led to greater downward contrast (b=-0.78, 95% CI -2.44 to -0.15), which in turn led to greater fruit and vegetable consumption.
CONCLUSIONS
Online self-tracking groups were more effective than self-tracking alone in promoting fruit and vegetable consumption for early young adults. Low performance discrepancy from other group members led to downward contrast, which in turn increased participants' fruit and vegetable consumption over time. The study highlighted social comparison processes in online groups that allow for sharing personal health information. Lastly, given the small scale of this study, nonsignificant results with small effect sizes might be subject to bias. |
Computing with the Leaky Integrate-and-Fire Neuron: Logarithmic Computation and Multiplication | The leaky integrate-and-fire (LIF) model of neuronal spiking (Stein 1967) provides an analytically tractable formalism of neuronal firing rate in terms of a neuron's membrane time constant, threshold, and refractory period. LIF neurons have mainly been used to model physiologically realistic spike trains, but little application of the LIF model appears to have been made in explicitly computational contexts. In this article, we show that the transfer function of a LIF neuron provides, over a wide parameter range, a compressive nonlinearity sufficiently close to that of the logarithm so that LIF neurons can be used to multiply neural signals by mere addition of their outputs, yielding the logarithm of the product. A simulation of the LIF multiplier shows that under a wide choice of parameters, a LIF neuron can log-multiply its inputs to within a 5% relative error. |
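The following is a minimal numerical sketch of the multiplication-by-addition idea described above: the steady-state LIF rate is fit by an affine function of log(I) over a working range, and the sum of two rates then estimates the log of the product. The parameters, input range and fitting procedure are illustrative assumptions, not the ones analyzed in the article.

```python
import numpy as np

# Steady-state LIF firing rate for a constant suprathreshold input current I.
# Parameter values here are illustrative choices, not taken from the paper.
def lif_rate(I, tau=0.010, R=1.0, theta=1.0, t_ref=0.002):
    I = np.atleast_1d(np.asarray(I, dtype=float))
    t_to_threshold = tau * np.log(I * R / (I * R - theta))   # assumes I*R > theta
    rate = 1.0 / (t_ref + t_to_threshold)
    return rate if rate.size > 1 else float(rate[0])

# Calibrate an affine log fit  rate(I) ~= a*log(I) + b  on the working range.
I_grid = np.linspace(1.2, 3.0, 100)
a, b = np.polyfit(np.log(I_grid), lif_rate(I_grid), 1)

# "Multiply by adding": sum of rates -> estimate of log(x*y) -> estimate of x*y.
x, y = 1.8, 2.4
log_xy_est = (lif_rate(x) + lif_rate(y) - 2 * b) / a
print(np.exp(log_xy_est), x * y)   # agreement within a few percent for these settings
```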
Increased bacterial infections after transfusion of leukoreduced non-irradiated blood products in recipients of allogeneic stem cell transplants after reduced-intensity conditioning. | Blood components transfused to hematopoietic stem cell transplant (HSCT) recipients are irradiated to prevent transfusion-associated graft-versus-host disease (TA-GVHD). The effect of transfusing non-irradiated blood products on HSCT outcome, including the incidence of transplant complications, bacterial infections, and acute and chronic GVHD presentation and characteristics, has not been documented. Clinical records as well as blood bank and electronic databases of HSCT patients grafted after reduced-intensity conditioning who received irradiated versus non-irradiated blood products, after blood irradiation became unavailable at our center, were scrutinized for transplant outcome, clinical evolution, engraftment characteristics including days to neutrophil and platelet recovery, acute and chronic GVHD, rate and type of infections, and additional transplant-related comorbidities. All transfused blood products were leukoreduced. A total of 156 HSCT recipients was studied; 73 received irradiated and 83 non-irradiated blood components. Bacterial infections were significantly more frequent in patients transfused with non-irradiated blood products, P = .04. Clinically relevant increased rates of fever and neutropenia and mucositis were also documented in these patients. No cases of TA-GVHD occurred. Classical GVHD developed in 37 patients (50.7%) who received irradiated blood products and 36 (43.9%) who received non-irradiated blood products, P = .42. Acute GVHD developed in 28 patients (38.4%) in the blood-irradiated and 33 patients (39.8%) in the non-irradiation group, P = .87. The 2-year GVHD-free survival rate was 40% in the irradiated versus 40.6% in the non-irradiation group, P = .071. Increased bacterial infections were found in HSCT recipients transfused with non-irradiated blood products, which ideally must always be irradiated. |
Implementing Customizable Online Food Ordering System Using Web Based Application | In a typical restaurant, the food ordering process involves several steps: the customer first browses a paper-based menu and then informs the waiter of the items to order. Usually the process requires that the customer be seated before ordering can start. An alternative for customers is a food pre-order system using a web-based application, in which the customer can create an order before reaching the restaurant, using a smartphone. When the customer arrives at the restaurant, the saved order can be confirmed by touching the smartphone. The list of selected pre-ordered items is shown on the kitchen screen and, once confirmed, an order slip is printed for further order processing. The solution provides an easy and convenient way for customers to place pre-order transactions. |
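A minimal sketch of such a pre-order flow is shown below, using Flask and an in-memory store; the endpoints, data shapes and the kitchen queue are illustrative assumptions rather than the system's actual implementation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}          # order_id -> {"items": [...], "status": "saved" or "confirmed"}
kitchen_queue = []   # ids of confirmed orders, shown on the kitchen screen

@app.route("/orders", methods=["POST"])
def create_preorder():
    """Save an order created from the smartphone before the customer arrives."""
    order_id = len(orders) + 1
    orders[order_id] = {"items": request.get_json()["items"], "status": "saved"}
    return jsonify({"order_id": order_id, "status": "saved"}), 201

@app.route("/orders/<int:order_id>/confirm", methods=["POST"])
def confirm_order(order_id):
    """Confirm the saved order on arrival and release it to the kitchen."""
    orders[order_id]["status"] = "confirmed"
    kitchen_queue.append(order_id)
    return jsonify({"order_id": order_id, "status": "confirmed"})

if __name__ == "__main__":
    app.run(debug=True)
```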
Event Structure Influences Language Production: Evidence from Structural Priming in Motion Event Description. | This priming study investigates the role of conceptual structure during language production, probing whether English speakers are sensitive to the structure of the event encoded by a prime sentence. In two experiments, participants read prime sentences aloud before describing motion events. Primes differed in 1) syntactic frame, 2) degree of lexical and conceptual overlap with target events, and 3) distribution of event components within frames. Results demonstrate that conceptual overlap between primes and targets led to priming of (a) the information that speakers chose to include in their descriptions of target events, (b) the way that information was mapped to linguistic elements, and (c) the syntactic structures that were built to communicate that information. When there was no conceptual overlap between primes and targets, priming was not successful. We conclude that conceptual structure is a level of representation activated during priming, and that it has implications for both Message Planning and Linguistic Formulation. |
Identifying Expectation Errors in Value/Glamour Strategies: A Fundamental Analysis Approach | It is well established that value stocks outperform glamour stocks, yet considerable debate exists about whether the return differential reflects compensation for risk or mispricing. Under mispricing explanations, prices of glamour (value) firms reflect systematically optimistic (pessimistic) expectations; thus, the value/glamour effect should be concentrated (absent) among firms with (without) ex ante identifiable expectation errors. Classifying firms based upon whether expectations implied by current pricing multiples are congruent with the strength of their fundamentals, we document that value/glamour returns and ex post revisions to market expectations are predictably concentrated (absent) among firms with ex ante biased (unbiased) market expectations. |
Social media marketing in tourism and hospitality | Now we come to offer you the right catalogue of books to open. Social Media Marketing in Tourism and Hospitality is a literary work suited to be reading material. Not only does this book give references, it will also show you the amazing benefits of reading a book. Developing your mind is needed; moreover, you are the kind of person with great curiosity. So, this book is very appropriate for you. |
Comprehensive school-based behavioral assessment of the effects of methylphenidate. | Individualized assessments of the effects of three doses of methylphenidate (MPH) were conducted for 2 students with attention deficit hyperactivity disorder within each child's classroom using behavioral, academic, and social measures. A double-blind, placebo-controlled, multielement design was used to evaluate the results. Results suggested that at least one or more dosages of MPH were associated with some degree of improvement for both children in each area of functioning as compared to placebo. However, the degree of improvement at times varied substantially across dosage and area of functioning. Results suggest that MPH dosage and area of child functioning are critical assessment parameters and that controlled clinical trials are necessary to optimize the effectiveness of treatment with MPH for the individual child. |
Ethics of Belief | The broad question asked under the heading “Ethics of Belief” is: What ought one believe? An ethics of belief attempts to uncover the norms that guide belief formation and maintenance. The dominant view among contemporary philosophers is that evidential norms do; I should always follow my evidence and only believe when the evidence is sufficient to support my belief. This view is called “evidentialism,” although, as we shall see, this term gets applied to a number of views that can be distinguished from one another. Evidentialists often cite David Hume (1999: 110) as their historic exemplar who said “a wise man … proportions his beliefs to the evidence” and thus argued against the reasonableness of believing in miracles (see Hume, David; Wisdom). Those who argue that there can be good practical reasons for believing, independent of one's evidence, can turn for inspiration to Blaise Pascal (1966: 124), who argued that the best reason to form a belief in God was a practical one, namely the possibility of avoiding eternal suffering (see Reasons; Reasons for Action, Morality and; Faith).
Keywords: ethics; James, William; philosophy; Williams, Bernard; duty and obligation; knowledge; rationality; responsibility |
Antibody response to BK polyomavirus as a prognostic biomarker and potential therapeutic target in prostate cancer | Infectious agents, including the BK polyomavirus (BKPyV), have been proposed as important inflammatory pathogens in prostate cancer. Here, we evaluated whether the preoperative antibody response to BKPyV large T antigen (LTag) and viral capsid protein 1 (VP1) was associated with the risk of biochemical recurrence in 226 patients undergoing radical prostatectomy for primary prostate cancer. Essentially, the multivariate Cox regression analysis revealed that preoperative seropositivity to BKPyV LTag significantly reduced the risk of biochemical recurrence, independently of established predictors of biochemical recurrence such as tumor stage, Gleason score and surgical margin status. The predictive accuracy of the regression model was notably increased by the inclusion of the BKPyV LTag serostatus. In contrast, the VP1 serostatus was of no prognostic value. Finally, the BKPyV LTag serostatus was associated with a peculiar cytokine gene expression profile upon assessment of the cellular immune response elicited by LTag. Taken together, our findings suggest that the BKPyV LTag serology may serve as a prognostic factor in prostate cancer. If validated in additional studies, this biomarker may allow for better treatment decisions after radical prostatectomy. Finally, the favorable outcome of LTag seropositive patients may provide a potential opportunity for novel therapeutic approaches targeting a viral antigen. |
Self-compassion and adaptive psychological functioning | Two studies are presented to examine the relation of self-compassion to psychological health. Self-compassion entails being kind and understanding toward oneself in instances of pain or failure rather than being harshly self-critical; perceiving one's experiences as part of the larger human experience rather than seeing them as isolating; and holding painful thoughts and feelings in mindful awareness rather than over-identifying with them. Study 1 found that self-compassion (unlike self-esteem) helps buffer against anxiety when faced with an ego-threat in a laboratory setting. Self-compassion was also linked to connected versus separate language use when writing about weaknesses. Study 2 found that increases in self-compassion occurring over a one-month interval were associated with increased psychological well-being, and that therapist ratings of self-compassion were significantly correlated with self-reports of self-compassion. Self-compassion is a potentially important, measurable quality that offers a conceptual alternative to Western, more egocentric concepts of self-related processes and feelings. |
Mobility Management for Femtocells in LTE-Advanced: Key Aspects and Survey of Handover Decision Algorithms | Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its wide adoption in a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by the consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network. In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with the emphasis given on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities. |
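As a concrete baseline for the HO decision phase surveyed above, the sketch below implements a simple received-signal-strength rule with a hysteresis margin and a time-to-trigger window, in the spirit of an LTE A3-style event; the margin, window length and measurement traces are illustrative assumptions, not values from the paper or the standard.

```python
# Minimal handover decision sketch: trigger when the target cell's RSRP exceeds the
# serving cell's by a hysteresis margin for a full time-to-trigger (TTT) window.

def handover_decision(serving_rsrp_dbm, target_rsrp_dbm, hysteresis_db=3.0,
                      time_to_trigger=3):
    """Return the first sample index at which a handover would trigger, else None."""
    consecutive = 0
    for i, (s, t) in enumerate(zip(serving_rsrp_dbm, target_rsrp_dbm)):
        if t > s + hysteresis_db:        # entering condition of the A3-style event
            consecutive += 1
            if consecutive >= time_to_trigger:
                return i                 # condition held for the whole TTT window
        else:
            consecutive = 0              # leaving condition resets the timer
    return None

macro = [-92, -93, -95, -97, -99, -100, -101]     # serving macrocell RSRP (dBm)
femto = [-110, -104, -99, -93, -90, -88, -86]     # candidate femtocell RSRP (dBm)
print(handover_decision(macro, femto))            # triggers at sample index 5
```

The surveyed algorithms extend this baseline with additional criteria such as user speed, interference level, energy efficiency or access-control state of the femtocell.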
Case Report: Induced Lactation in a Transgender Woman | Objective: Our report describes a case of nonpuerperal induced lactation in a transgender woman. Methods: We present the relevant clinical and laboratory findings, along with a review of the relevant literature. Results: A 30-year-old transgender woman who had been receiving feminizing hormone therapy for the past 6 years presented to our clinic with the goal of being able to breastfeed her adopted infant. After implementing a regimen of domperidone, estradiol, progesterone, and breast pumping, she was able to achieve sufficient breast milk volume to be the sole source of nourishment for her child for 6 weeks. This case illustrates that, in some circumstances, modest but functional lactation can be induced in transgender women. |
Matchmaking for online games and other latency-sensitive P2P systems | The latency between machines on the Internet can dramatically affect users' experience for many distributed applications. Particularly, in multiplayer online games, players seek to cluster themselves so that those in the same session have low latency to each other. A system that predicts latencies between machine pairs allows such matchmaking to consider many more machine pairs than can be probed in a scalable fashion while users are waiting. Using a far-reaching trace of latencies between players on over 3.5 million game consoles, we designed Htrae, a latency prediction system for game matchmaking scenarios. One novel feature of Htrae is its synthesis of geolocation with a network coordinate system. It uses geolocation to select reasonable initial network coordinates for new machines joining the system, allowing it to converge more quickly than standard network coordinate systems and produce substantially lower prediction error than state-of-the-art latency prediction systems. For instance, it produces 90th percentile errors less than half those of iPlane and Pyxida. Our design is general enough to make it a good fit for other latency-sensitive peer-to-peer applications besides game matchmaking. |
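The following is a minimal sketch of the general idea behind Htrae as described above: a Vivaldi-style spring-model coordinate system whose coordinates are seeded from geolocation so that a new machine starts from a sensible position. The 2-D Euclidean space, constants and update rule are simplifying assumptions; the actual system uses a different geometry and many refinements.

```python
import math

# Vivaldi-style network coordinates seeded from geolocation (illustrative sketch).
class Node:
    def __init__(self, lat, lon, scale=50.0):
        # Seed the virtual coordinate from geolocation instead of a random point.
        self.pos = [lon * scale, lat * scale]
        self.error = 1.0                      # local confidence estimate

    def predict(self, other):
        """Predicted latency = Euclidean distance between virtual coordinates."""
        return math.hypot(self.pos[0] - other.pos[0], self.pos[1] - other.pos[1])

    def update(self, other, rtt, cc=0.25, ce=0.25):
        """Nudge our coordinate toward/away from `other` based on a measured RTT."""
        predicted = self.predict(other) or 1e-9
        sample_err = abs(predicted - rtt) / rtt
        w = self.error / (self.error + other.error)          # weight by relative confidence
        self.error = ce * w * sample_err + (1 - ce * w) * self.error
        force = cc * w * (rtt - predicted)                   # spring displacement
        self.pos[0] += force * (self.pos[0] - other.pos[0]) / predicted
        self.pos[1] += force * (self.pos[1] - other.pos[1]) / predicted

a = Node(lat=47.6, lon=-122.3)    # e.g. a console near Seattle
b = Node(lat=51.5, lon=-0.1)      # e.g. a console near London
for _ in range(50):               # repeated RTT measurements of 150 ms
    a.update(b, 150.0)
    b.update(a, 150.0)
print(round(a.predict(b), 1))     # converges to roughly the measured 150 ms
```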
Validating UML Models and OCL Constraints | The UML has been widely accepted as a standard for modeling software systems and is supported by a great number of CASE tools. However, UML tools often provide only little support for validating models early during the design stage. Also, there is generally no substantial support for constraints written in the Object Constraint Language (OCL). We present an approach for the validation of UML models and OCL constraints that is based on animation. The USE tool (UML-based Specification Environment) supports developers in this process. It has an animator for simulating UML models and an OCL interpreter for constraint checking. Snapshots of a running system can be created, inspected, and checked for conformance with the model. As a special case study, we have applied the tool to parts of the UML 1.3 metamodel and its well-formedness rules. The tool enabled a thorough and systematic check of the OCL well-formedness rules in the UML standard. |
Assessment of sexual function in patients with cancer undergoing radiotherapy--a single centre prospective study. | AIM
The main objective was to delineate the rates and clinical course of sexual function and depression in cancer patients undergoing radiotherapy.
PATIENTS AND METHODS
Forty-eight male and 90 female radiotherapy-naive outpatients with breast or pelvic cancer completed the International Index of Erectile Function (IIEF) or the Female Sexual Function Index (FSFI), and the Hamilton Depression Scale (HDS) prior to (phase 1), at the end of (phase 2) and 12 months after radiotherapy (phase 3).
RESULTS
Overall, the majority of patients (93.8% of males and 80% of females) experienced intense sexual dysfunction. At presentation, males reported severe erectile dysfunction that was significantly associated with age. However, only in sexual desire was the difference between baseline and phase 3 significant. In females, an improvement was observed in all parameters of FSFI between phase 1 and 3. Females with stage III disease achieved lower scores in almost all parameters of FSFI than those with stage II. Finally, although a quarter of patients reported elevated depression scores, depression was not related to sexual function.
CONCLUSION
A significant proportion of cancer patients experience intense levels of sexual dysfunction and depression throughout radiotherapy and the subsequent year. Pelvic radiotherapy affected sexual function to a higher degree than did breast radiotherapy. |
Immunity-Based Intrusion Detection System: A General Framework | This paper focuses on investigating immunological principles in designing a multi-agent system for intrusion/anomaly detection and response in networked computers. In this approach, the immunity-based agents roam around the machines (nodes or routers) and monitor the situation in the network (i.e. look for changes such as malfunctions, faults, abnormalities, misuse, deviations, intrusions, etc.). These agents can mutually recognize each other's activities and can take appropriate actions according to the underlying security policies. Specifically, their activities are coordinated in a hierarchical fashion while sensing, communicating and generating responses. Such an agent can learn and adapt to its environment dynamically and can detect both known and unknown intrusions. This research is part of an effort to develop a multi-agent detection system that can simultaneously monitor networked computers' activities at different levels (such as user level, system level, process level and packet level) in order to determine intrusions and anomalies. The proposed intrusion detection system is designed to be flexible, extendible, and adaptable, so that it can perform real-time monitoring in accordance with the needs and preferences of network administrators. This paper provides the conceptual view and a general framework of the proposed system. 1. Inspiration from nature: Every organism in nature is constantly threatened by other organisms, and each species has evolved an elaborate set of protective measures called, collectively, the immune system. The natural immune system is an adaptive learning system that is highly distributive in nature. It employs multi-level defense mechanisms to make rapid, highly specific and often very protective responses against a wide variety of pathogenic microorganisms. The immune system is a subject of great research interest because of its powerful information processing capabilities [5,6]. Specifically, its mechanisms to extract unique signatures from antigens and its ability to recognize and classify dangerous antigenic peptides are very important. It also uses memory to remember signature patterns that have been seen previously, and uses combinatorics to construct antibodies for efficient detection. It is observed that the overall behavior of the system is an emergent property of several local interactions. Moreover, the immune response can be either local or systemic, depending on the route and property of the antigenic challenge [19]. The immune system consists of different populations of immune cells (mainly B or T cells) which circulate among various primary and secondary lymphoid organs of the body. They are carefully controlled to ensure that appropriate populations of B and T cells (naive, effector, and memory) are recruited into different locations [19]. This differential migration of lymphocyte subpopulations to different locations (organs) of the body is called trafficking or homing. The lymph nodes and organs provide a specialized local environment (called the germinal center) during a pathogenic attack in any part of the body. This dynamic mechanism supports the creation of a large number of antigen-specific lymphocytes (as effector and memory cells) for stronger defense through the process of clonal expansion and differentiation. Interestingly, memory cells exhibit selective homing to the type of tissue in which they first encountered an antigen.
Presumably this ensures that a particular memory cell will return to the location where it is most likely to re-encounter a subsequent antigenic challenge. The mechanisms of immune responses are self-regulatory in nature. There is no central organ that controls the functions of the immune system. The clonal expansion and proliferation of B cells are closely regulated (with co-stimulation) in order to prevent an uncontrolled immune response. This second signal helps to ensure tolerance and judge between dangerous and harmless invaders. So the purpose of this accompanying signal in identifying a non-self is to minimize false alarms and to generate a decisive response in case of a real danger [19]. 2. Existing works in Intrusion Detection: The study of security in computer networks is a rapidly growing area of interest because of the proliferation of networks (LANs, WANs, etc.), the greater deployment of shared computer databases (packages) and the increasing reliance of companies, institutions and individuals on such data. Though there are many levels of access protection for computing and network resources, intruders are finding ways to enter many sites and systems and cause major damage. So the task of providing and maintaining proper security in a network system becomes a challenging issue. Intrusion/anomaly detection is an important part of computer security. It provides an additional layer of defense against computer misuse (abuse) after physical protection, authentication and access control. There exist different methods for intrusion detection [7,23,25,29], and the early models include IDES (later versions NIDES and MIDAS), W & S, AudES, NADIR, DIDS, etc. These approaches monitor audit trails generated by systems and user applications and perform various statistical analyses in order to derive regularities in behavior patterns. These works are based on the hypothesis that an intruder's behavior will be noticeably different from that of a legitimate user, and that security violations can be detected by monitoring these audit trails. Most of these methods, however, were used to monitor a single host [13,14], though NADIR and DIDS can collect and aggregate audit data from a number of hosts to detect intrusions. However, in all cases, there is no real analysis of patterns of network activities and they only perform centralized analysis. Recent works include GrIDS [27], which used hierarchical graphs to detect attacks on networked systems. Other approaches used autonomous agent architectures [1,2,26] for distributed intrusion detection. 3. Computer Immune Systems: Security in the field of computing may be considered analogous to immunity in natural systems. In computing, threats and dangers (of compromising privacy, integrity, and availability) may arise because of malfunctions of components or intrusive activities (both internal and external). The idea of using immunological principles in computer security [9-11,15,16,18] dates back to 1994. Stephanie Forrest and her group at the University of New Mexico have been working on a research project with the long-term goal of building an artificial immune system for computers [9-11,15,16]. This immunity-based system has much more sophisticated notions of identity and protection than those afforded by current operating systems, and it is supposed to provide a general-purpose protection system to augment current computer security systems.
The security of computer systems depends on such activities as detecting unauthorized use of computer facilities, maintaining the integrity of data files, and preventing the spread of computer viruses. The problem of protecting computer systems from harmful viruses is viewed as an instance of the more general problem of distinguishing self (legitimate users, uncorrupted data, etc.) from dangerous others (unauthorized users, viruses, and other malicious agents). This method (called the negative-selection algorithm) is intended to be complementary to the more traditional cryptographic and deterministic approaches to computer security. As an initial step, the negative-selection algorithm has been used as a file-authentication method on the problem of computer virus detection [9]. |
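Since the negative-selection algorithm is mentioned only briefly above, here is a minimal sketch of its classic form: random candidate detectors are censored against a set of "self" strings under an r-contiguous-bits matching rule, and the surviving detectors flag non-self data. The string length, matching threshold and random self set are illustrative textbook choices, not parameters of the system described in the paper.

```python
import random

def matches(a: str, b: str, r: int) -> bool:
    """True if bit strings a and b agree in at least r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r):
    """Keep only random candidates that match no self string (the censoring step)."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(random.choice("01") for _ in range(length))
        if not any(matches(candidate, s, r) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample is flagged as non-self if any surviving detector matches it."""
    return any(matches(sample, d, r) for d in detectors)

random.seed(0)
self_set = {"".join(random.choice("01") for _ in range(16)) for _ in range(30)}
detectors = generate_detectors(self_set, n_detectors=50, length=16, r=9)
probe = "".join(random.choice("01") for _ in range(16))
print(is_anomalous(probe, detectors, r=9))
```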