title | abstract |
---|---|
Designing a hybrid AI system as a forex trading decision support tool | In this study, a hybrid artificial intelligence (AI) system integrating a neural network and an expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factors and to provide forex trading decision suggestions for traders, incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments. |
Limiting fake accounts in large-scale distributed systems through adaptive identity management | Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g., cryptographic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers). |
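The trust-scaled pricing idea above lends itself to a compact illustration. Below is a minimal sketch assuming a hashcash-style puzzle and a simple linear mapping from trust score to difficulty; the function names and the mapping are illustrative assumptions, not the paper's actual scheme.

```python
import hashlib
import itertools

def puzzle_difficulty(trust_score: float, max_bits: int = 20) -> int:
    """Map a trust score in [0, 1] to a puzzle difficulty in leading zero
    bits; low-trust requests pay more work (the linear map is an assumption)."""
    return int((1.0 - trust_score) * max_bits)

def solve_puzzle(request_id: str, difficulty: int) -> int:
    """Find a nonce such that SHA-256(request_id || nonce) starts with
    `difficulty` zero bits -- a standard hashcash-style proof of work."""
    target = 1 << (256 - difficulty)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{request_id}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# A trusted requester (score 0.9) solves a cheap puzzle; a suspect one pays more.
for score in (0.9, 0.1):
    d = puzzle_difficulty(score)
    print(f"trust={score}: {d} bits, nonce={solve_puzzle('req-42', d)}")
```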
Research on Vessels' Portable Ammunition Support System Based on Data Warehouse | There are many problems in the management system for vessels' portable ammunition, which uses a traditional transactional database. The shortcomings of the traditional database are analyzed first. To address these problems, a new ammunition support system based on a data warehouse is proposed. The structure of the data warehouse is discussed in detail. |
Can Beach Cleans Do More Than Clean-Up Litter? Comparing Beach Cleans to Other Coastal Activities | Coastal visits not only provide psychological benefits but can also contribute to the accumulation of rubbish. Volunteer beach cleans help address this issue, but may only have limited, local impact. Consequently, it is important to study any broader benefits associated with beach cleans. This article examines the well-being and educational value of beach cleans, as well as their impacts on individuals' behavioral intentions. We conducted an experimental study that allocated students (n = 90) to a beach cleaning, rock pooling, or walking activity. All three coastal activities were associated with positive mood and pro-environmental intentions. Beach cleaning and rock pooling were associated with higher marine awareness. The unique impacts of beach cleaning were that they were rated as most meaningful but linked to lower restorativeness ratings of the environment compared with the other activities. This research highlights the interplay between environment and activities, raising questions for future research on the complexities of person-environment interactions. |
Analytic performance of immunoassays for drugs of abuse below established cutoff values. | BACKGROUND
The analytic performance and accuracy of drug detection below Substance Abuse and Mental Health Services Administration (SAMHSA) cutoffs is not well known. In some patient populations, clinically significant concentrations of abused drugs in urine may not be detected when current SAMHSA cutoffs are used. Our objectives were to define the precision profiles of three immunoassay systems for drugs of abuse and to evaluate the accuracy of testing at concentrations at which the CV was <20%.
METHODS
Drug-free urine was supplemented with analytes to assess the precision in three commercial drugs-of-abuse immunoassay systems below the SAMHSA-dictated cutoffs for amphetamines, opiates, benzoylecgonine, phencyclidine, and cannabinoids. Consecutive urine samples with signals associated with a CV <20% by Emit immunoassay and below SAMHSA cutoffs were then subjected to confirmatory analysis.
RESULTS
The CV of all immunoassay systems tested remained <20% down to drug concentrations well below SAMHSA cutoffs. The accuracy of urine drug-screening results between the SAMHSA-specified cutoffs and the precision-based cutoffs was lower than the accuracy for specimens above the SAMHSA cutoffs, but the use of the precision-based cutoff produced a 15.6% increase in the number of screen-positive specimens and a 7.8% increase in the detection of specimens that yielded positive results on confirmatory testing.
CONCLUSION
The precision of three commercial immunoassay systems for drugs-of-abuse screening is adequate to detect drugs below SAMHSA cutoffs. Knowledge of the positive predictive values of screening immunoassays at lower cutoff concentrations could enable efficient use of confirmatory testing resources and improved detection of illicit drug use. |
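As a rough illustration of the precision-profile idea above, the sketch below computes a per-concentration CV from replicate signals and picks the lowest tested concentration with CV < 20%. The concentrations, noise model, and replicate counts are hypothetical, not the study's measurements.

```python
import numpy as np

def precision_profile(concentrations, replicate_signals):
    """Coefficient of variation (CV = SD / mean, in percent) at each
    tested concentration, from replicate immunoassay signals."""
    return {c: np.std(s, ddof=1) / np.mean(s) * 100.0
            for c, s in zip(concentrations, replicate_signals)}

def precision_based_cutoff(profile, max_cv=20.0):
    """Lowest tested concentration whose CV stays below max_cv percent."""
    ok = [c for c, cv in sorted(profile.items()) if cv < max_cv]
    return ok[0] if ok else None

# Hypothetical replicates at decreasing fractions of a screening cutoff.
rng = np.random.default_rng(0)
concs = [2000, 1000, 500, 250, 125]   # ng/mL, illustrative only
signals = [rng.normal(loc=c, scale=0.02 * c + 8, size=20) for c in concs]
profile = precision_profile(concs, signals)
print(profile, precision_based_cutoff(profile))
```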
Vision-assisted image editing | What Is It? When I think of image editing, packages such as Photoshop and Paint Shop Pro come to mind. These packages are used to edit, transform or manipulate, typically with extensive user guidance, one or more images to produce a desired result. Tasks such as selection, matting, blending, warping, morphing, etc. are often tedious and time consuming. Vision-assisted editing can lighten the burden, whether the goal is a simple cut-and-paste composition or a major special effect for a movie (see Doug Roble's article in this issue). Thus, this article focuses on computer vision techniques that reduce (often significantly) the time and effort involved in editing images and video. The goal of vision systems is to detect edges, regions, shapes, surface features, lighting properties, 3D geometry, etc. Most currently available image editing tools and filters utilize low-level, 2D geometric or image processing operations that manipulate pixels. However, vision techniques extract descriptive object or scene information, thus allowing a user to edit in terms of higher-level features. Fully automatic computer vision remains a major focus in the computer vision community. Complete automation is certainly preferred for such tasks as robotic navigation, image/video compression, model-driven object delineation, multiple image correspondence, image-based modeling, or anytime autonomous interpretation of images/video is desired. However, general purpose image editing will continue to require human guidance due to the essential role of the user in the creative process and in identifying which image components are of interest. Most vision-assisted image editing techniques fall somewhere between user-assisted vision and vision-based interaction. User-assisted vision describes those techniques where the user interacts in image (or parameter) space to begin and/or guide a vision algorithm so that it produces a desired result. For example, Photoshop's magic wand computes a connected region of similar pixels based on a mouse click in the area to be selected. Vision-based interaction refers to those methods where the computer has done some or all of the "vision" part and the user interacts within the resulting vision-based feature space. One example is the ICE (Interactive Contour Editing) system [4] that computes an image's edge representation and then allows a user to interactively select edge groupings to extract or remove image features. A tool is classified based on where a user can "touch" the data of the underlying vision function, the process that computes results from inputs. User-assisted vision manipulates the input (or domain) space of the vision function while vision-based interaction provides access to the result (or range). Some tools allow intervention at several steps in the process, including the ability to adjust partial or intermediate results. Regardless of a tool's classification, there are algorithmic properties that are desirable for image editing tools. These tools should be: |
H89, an inhibitor of PKA and MSK, inhibits cyclic-AMP response element binding protein-mediated MAPK phosphatase-1 induction by lipopolysaccharide | Lipopolysaccharide (LPS) stimulates the production of inflammatory cytokines and the amplification of immune responses via MAPK pathways. MAPK phosphatases (MKPs) feedback-regulate the activities of MAPKs to prevent excessive immunological functions. H89 has been used as an inhibitor of the protein kinase A (PKA) and mitogen- and stress-activated protein kinase (MSK) pathways. In view of the potential roles of PKA and MSK for MKP-1 induction and the ability of H89 to inhibit these kinases, this study examined the effect of H89 on MKP-1 induction by LPS and the role of cyclic-AMP response element binding protein (CREB) in the MKP-1 induction. H89 treatment inhibited increases in MKP-1 protein and mRNA levels, and gene transcription by LPS in Raw264.7 cells. Immunoblot, gel-shift, and chromatin-immunoprecipitation assays showed the activation of CREB by LPS, and the ability of H89 to inhibit it, suggesting that H89’s inhibition of CREB may affect MKP-1 induction. In addition, H89 prevented the ability of LPS to induce other MKP genes (Dusp-2, 4, 8, and 16). Experiments using MAPK inhibitors showed that MAPKs are involved in CREB phosphorylation and MKP-1 induction, suggesting that CREB-mediated MKP-1 induction serves in part as a feedback-inhibitory loop of MAPKs. Our results demonstrate that H89 inhibits the activation of CREB and the CREB-mediated MKP-1 induction by LPS, which may result from its inhibition of PKA and MSK. |
Real-time elderly activity monitoring system based on a tri-axial accelerometer. | PURPOSE
The purpose of this study is to develop an automatic human movement classification system for the elderly using a single waist-mounted tri-axial accelerometer.
METHODS
A real-time movement classification algorithm was developed using a hierarchical binary tree, which can classify activities of daily living into four general states: (1) a resting state such as sitting, lying, and standing; (2) a locomotion state such as walking and running; (3) an emergency state such as a fall; and (4) a transition state such as sit to stand, stand to sit, stand to lie, lie to stand, sit to lie, and lie to sit. To evaluate the proposed algorithm, experiments were performed on five healthy young subjects with several activities, such as falls, walking, running, etc.
RESULTS
The experimental results showed that the successful detection rate of the system across all activities was about 96%. To evaluate long-term monitoring, a 3-hour experiment in a home environment was performed on one healthy subject, and 98% of the movements were successfully classified.
CONCLUSIONS
The experimental results show that this system can monitor and classify activities of daily living. For further improvement of the system, a more detailed classification algorithm is necessary to distinguish additional daily activities. |
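A minimal sketch of a hierarchical, threshold-based classifier of the kind this abstract outlines is shown below; the thresholds, features, and window length are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def classify_window(acc_window):
    """Classify one window of tri-axial accelerometer samples (N x 3, in g)
    into the four general states from the abstract, via a small binary tree.
    All thresholds are illustrative assumptions."""
    mag = np.linalg.norm(acc_window, axis=1)
    if mag.max() > 3.0:                 # large impact peak -> possible fall
        return "emergency (fall)"
    activity = mag.std()                # signal variability tracks movement
    if activity < 0.05:
        return "resting"                # sitting, lying, standing
    if activity > 0.4:
        return "locomotion"             # walking, running
    return "transition"                 # e.g. sit-to-stand postural change

# 2-second window of simulated quiet sitting at 50 Hz: gravity plus noise.
rng = np.random.default_rng(1)
window = np.tile([0.0, 0.0, 1.0], (100, 1)) + rng.normal(0, 0.01, (100, 3))
print(classify_window(window))   # -> "resting"
```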
SSD-Sface: Single shot multibox detector for small faces | In this thesis we present an approach to adapting the Single Shot multibox Detector (SSD) to face detection. Our experiments are performed on the WIDER dataset, which contains a large number of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition changes the SSD architecture to an image pyramid architecture. The second addition creates a selection criterion on each of the branches of the image pyramid architecture. The results show that increasing the resolution, even during inference only, increases performance on the small/hard subset. By combining resolutions in an image pyramid structure we observe that performance remains consistent across different face sizes. Finally, the results show that adding a selection criterion on each branch of the image pyramid further increases performance, because the selection criterion negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but continues to perform well on the large subset. |
Discriminative shape from shading in uncalibrated illumination | Estimating surface normals from just a single image is challenging. To simplify the problem, previous work focused on special cases, including directional lighting, known reflectance maps, etc., making shape from shading impractical outside the lab. To cope with more realistic settings, shading cues need to be combined and generalized to natural illumination. This significantly increases the complexity of the approach, as well as the number of parameters that require tuning. Enabled by a new large-scale dataset for training and analysis, we address this with a discriminative learning approach to shape from shading, which uses regression forests for efficient pixel-independent prediction and fast learning. Von Mises-Fisher distributions in the leaves of each tree enable the estimation of surface normals. To account for their expected spatial regularity, we introduce spatial features, including texton and silhouette features. The proposed silhouette features are computed from the occluding contours of the surface and provide scale-invariant context. Aside from computational efficiency, they enable good generalization to unseen data and importantly allow for a robust estimation of the reflectance map, extending our approach to the uncalibrated setting. Experiments show that our discriminative approach outperforms state-of-the-art methods on synthetic and real-world datasets. |
Refusing ‘Slave Man's Revenge’: Reading the Politics of the Resisting Body in Zee Edgell's Beka Lamb and Brenda Flanagan's You Alone Are Dancing | In this article I focus on the black woman's body and its use as a political and cultural signifier and as a site where the politics of gender power is enacted. I argue that two first novels of two contemporary Caribbean women writers, Zee Edgell and Brenda Flanagan, intervene in a history of fictional representation which uses the figure of the ‘native’ woman to signify territorial, economic and sexual conquest and exploitation. The black woman, in early twentieth‐century Caribbean anti‐colonial fiction, silently concedes to her aggressor, whose actions ultimately end in her destruction. The novels of these writers revise the themes and forms of representation that characterise these earlier, predominantly male‐authored texts: in its focus on resistance to both colonial and patriarchal dominance, their work presents an alternative to either victimhood or to the suggestion that, for the poor, working‐class black woman, her body is her only capital. |
Using Kullback-Leibler Distance for Text Categorization | A system that performs text categorization aims to assign appropriate categories from a predefined classification scheme to incoming documents. These assignments might be used for varied purposes such as filtering or retrieval. This paper introduces a new, effective model for text categorization on large corpora (roughly 1 million documents). Text categorization is performed using the Kullback-Leibler distance between the probability distribution of the document to classify and the probability distribution of each category. Using the same representation of categories, experiments show a significant improvement when this method is used; the KLD method achieves substantial improvements over the tf-idf method. |
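A toy sketch of KLD-based categorization follows: it assigns the category whose smoothed unigram distribution is closest, in KL divergence, to the document's. The smoothing scheme and the direction of the divergence are assumptions for illustration, not necessarily the paper's exact choices.

```python
import math
from collections import Counter

def distribution(tokens, vocab, eps=1e-9):
    """Smoothed unigram probability distribution over a shared vocabulary."""
    counts = Counter(tokens)
    total = sum(counts.values()) + eps * len(vocab)
    return {w: (counts[w] + eps) / total for w in vocab}

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

def categorize(doc_tokens, category_tokens):
    """Assign the category whose distribution is closest (minimum KLD)
    to the document's distribution."""
    vocab = set(doc_tokens).union(*category_tokens.values())
    p = distribution(doc_tokens, vocab)
    return min(category_tokens,
               key=lambda c: kld(p, distribution(category_tokens[c], vocab)))

cats = {"sports": "game team goal score win".split(),
        "finance": "stock market price trade index".split()}
print(categorize("the team won the game".split(), cats))  # -> "sports"
```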
WordNet : an electronic lexical database | WordNet is perhaps the most important and widely used lexical resource for natural language processing systems up to now. WordNet: An Electronic Lexical Database, edited by Christiane Fellbaum, discusses the design of WordNet from both theoretical and historical perspectives, provides an up-to-date description of the lexical database, and presents a set of applications of WordNet. The book contains a foreword by George Miller, an introduction by Christiane Fellbaum, seven chapters from the Cognitive Sciences Laboratory of Princeton University, where WordNet was produced, and nine chapters contributed by scientists from elsewhere. Miller's foreword offers a fascinating account of the history of WordNet. He discusses the presuppositions of such a lexical database, how the top-level noun categories were determined, and the sources of the words in WordNet. He also writes about the evolution of WordNet from its original incarnation as a dictionary browser to a broad-coverage lexicon, and the involvement of different people during its various stages of development over a decade. It makes very interesting reading for casual and serious users of WordNet and anyone who is grateful for the existence of WordNet. The book is organized in three parts. Part I is about WordNet itself and consists of four chapters: "Nouns in WordNet" by George Miller, "Modifiers in WordNet" by Katherine Miller, "A semantic network of English verbs" by Christiane Fellbaum, and "Design and implementation of the WordNet lexical database and search software" by Randee Tengi. These chapters are essentially updated versions of four papers from Miller (1990). Compared with the earlier papers, the chapters in this book focus more on the underlying assumptions and rationales behind the design decisions. The description of the information contained in WordNet, however, is not as detailed as in Miller (1990). The main new additions in these chapters include an explanation of sense grouping in George Miller's chapter, a section about adverbs in Katherine Miller's chapter, observations about autohyponymy (one sense of a word being a hyponym of another sense of the same word) and autoantonymy (one sense of a word being an antonym of another sense of the same word) in Fellbaum's chapter, and Tengi's description of the Grinder, a program that converts the files the lexicographers work with to searchable lexical databases. The three papers in Part II are characterized as "extensions, enhancements and |
Accelerating t-SNE using tree-based algorithms | The paper investigates the acceleration of t-SNE—an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots—using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant. |
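An off-the-shelf implementation of the Barnes-Hut variant studied here is available in scikit-learn; a brief usage sketch (dataset and parameters chosen purely for illustration):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)

# method="barnes_hut" selects the O(N log N) gradient approximation;
# method="exact" is the O(N^2) baseline. `angle` trades speed for accuracy.
emb = TSNE(n_components=2, method="barnes_hut", angle=0.5,
           init="pca", random_state=0).fit_transform(X)
print(emb.shape)  # (1797, 2) low-dimensional embedding for a scatter plot
```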
A Survey of Automatic Query Expansion in Information Retrieval | The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)? |
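As a concrete instance of one classic AQE technique that such surveys cover, here is a minimal Rocchio-style pseudo-relevance-feedback sketch; the weights, vocabulary, and tiny vectors are illustrative assumptions.

```python
import numpy as np

def rocchio_expand(query_vec, feedback_docs, alpha=1.0, beta=0.75, top_k=5):
    """Pseudo-relevance feedback: shift the query toward the centroid of
    the top retrieved documents, then keep only the strongest terms."""
    centroid = np.mean(feedback_docs, axis=0)
    expanded = alpha * query_vec + beta * centroid
    keep = np.argsort(expanded)[::-1][:top_k]      # indices of top terms
    out = np.zeros_like(expanded)
    out[keep] = expanded[keep]
    return out

# Tiny tf-idf-like vectors over a 6-term vocabulary (illustrative numbers).
q = np.array([1.0, 0, 0, 0, 0, 0])
docs = np.array([[0.8, 0.5, 0.0, 0.3, 0, 0],
                 [0.9, 0.4, 0.2, 0.0, 0, 0]])
print(rocchio_expand(q, docs, top_k=3))  # query gains two expansion terms
```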
Unsaturated soils: From constitutive modelling to numerical algorithms | This paper presents an overview of constitutive modelling of unsaturated soils and the numerical algorithms for solving the associated boundary value problems. It first discusses alternative stress and strain variables that can be used in constitutive models for unsaturated soils. The paper then discusses the key issues in unsaturated soil modelling and how these issues can be incorporated into an existing model for saturated soils. These key issues include (1) volumetric behaviour associated with saturation or suction changes; (2) strength behaviour associated with saturation and suction changes; and (3) hydraulic behaviour associated with saturation or suction changes. The paper also shows how hysteresis in soil–water characteristics can be incorporated into the elasto-plastic framework, leading to coupled hydro-mechanical models. Finally, the paper demonstrates the derivation of the incremental stress–strain relations for unsaturated soils and discusses briefly the new challenges in implementing these relations into the finite element method. |
Learning Recurrent Neural Networks with Hessian-Free Optimization | In this work we resolve the long-outstanding problem of how to effectively train recurrent neural networks (RNNs) on complex and difficult sequence modeling problems which may contain long-term data dependencies. Utilizing recent advances in the Hessian-free optimization approach (Martens, 2010), together with a novel damping scheme, we successfully train RNNs on two sets of challenging problems. First, a collection of pathological synthetic datasets which are known to be impossible for standard optimization approaches (due to their extremely long-term dependencies), and second, on three natural and highly complex real-world sequence datasets where we find that our method significantly outperforms the previous state-of-the-art method for training neural sequence models: the Long Short-term Memory approach of Hochreiter and Schmidhuber (1997). Additionally, we offer a new interpretation of the generalized Gauss-Newton matrix of Schraudolph (2002) which is used within the HF approach of Martens. |
The Area Processing Unit of Caroline - Finding the Way through DARPA's Urban Challenge | This paper presents a vision-based color segmentation algorithm suitable for urban environments that separates an image into areas of drivable and non-drivable terrain. Assuming that a part of the image is known to be drivable terrain, other parts of the image are classified by comparing the Euclidean distance of each pixel’s color to the mean colors of the drivable area in real-time. Moving the search area depending on each frame’s result ensures temporal consistency and coherence. Furthermore, the algorithm classifies artifacts such as white and yellow lane markings and hard shadows as areas of unknown drivability. The algorithm was thoroughly tested on the autonomous vehicle ’Caroline’, which was a finalist in the 2007 DARPA Urban Challenge. |
Overview of the Photovoltaic System Topologies | Due to the increased interest in solar energy harvesting systems in recent years, the number of developed system types is large. In order to choose the optimal one for future system development, an analysis of common architectures used for photovoltaic systems has been performed. The paper contains a short description of different converter architectures and an analysis of prototyped or simulated systems proposed in the references. Systems at different distribution levels are considered, from more integrated to more distributed topologies. The distribution level goes hand in hand with the overall system efficiency. Less distributed systems show better performance under low irradiation disturbance, because higher efficiency can be achieved relatively easily. More distributed systems perform better under frequent partial shading, for example in building-integrated photovoltaic systems where relatively tall objects (such as trees, chimneys and other buildings) close to the PV-panel mounting place can partially shade the PV. However, this type of system requires a large number of small-power converters, which usually have lower efficiency, as the design of efficient small-power converters is a hard technical task. The reason for the better performance is that distributed PV systems have a larger number of maximum power point tracking converters and therefore increase the energy harvest by keeping the photovoltaic power generation process optimal. This paper supports the choice of the most suitable system topology for a future PV-system implementation. |
Association between adult weight gain and colorectal cancer: a dose-response meta-analysis of observational studies. | This study investigated the association between adult weight gain and risk of colorectal cancer (CRC). Using terms related to weight gain and CRC, we searched PubMed, Embase and Web of Science for relevant studies published before June 2014. Two evaluators independently selected studies according to the selection criteria, and eight studies were included (three case-control and five cohort studies). Summary estimates were obtained using fixed- or random-effects models. The relative risk (RR) of the association between adult weight gain and CRC was 1.25 (95% confidence interval [CI], 1.10-1.43); the RR was 1.30 (95% CI, 1.14-1.49) for colon cancer (CC) and 1.27 (95% CI, 1.02-1.58) for rectal cancer (RC) for the highest versus lowest category. For every 5-kg increase in adult weight, the risk increased by 5% (RR, 1.05; 95% CI, 1.02-1.09) for CRC, 6% (RR, 1.06; 95% CI, 1.02-1.11) for CC and 6% (RR, 1.06; 95% CI, 1.03-1.08) for RC. The subgroup analyses showed a positive association between adult weight gain and risk of CRC only in men, and the RR was 1.65 (95% CI, 1.42-1.92) for the highest versus lowest category of adult weight gain and 1.10 (95% CI, 1.06-1.15) for a 5-kg increase in adult weight. In conclusion, there is evidence that adult weight gain is associated with an increased risk of CRC. However, the positive association between adult weight gain and risk of CRC is stronger among men than among women. |
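For readers unfamiliar with the pooling step behind such summary estimates, the sketch below reproduces a basic fixed-effect (inverse-variance) computation on hypothetical per-study relative risks and 95% confidence intervals; these are not the eight studies analyzed in the review.

```python
import math

def pooled_rr(rrs, cis, z=1.96):
    """Fixed-effect (inverse-variance) pooling of relative risks, working
    on the log scale; each study's SE is recovered from its 95% CI."""
    logs = [math.log(r) for r in rrs]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    ws = [1 / se ** 2 for se in ses]                  # inverse-variance weights
    m = sum(w, l) if False else sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))
    return math.exp(m), (math.exp(m - z * se), math.exp(m + z * se))

# Hypothetical per-study estimates, for illustration only.
rr, ci = pooled_rr([1.2, 1.4, 1.1], [(1.0, 1.44), (1.1, 1.78), (0.9, 1.34)])
print(round(rr, 2), tuple(round(x, 2) for x in ci))
```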
Finding and assessing social media information sources in the context of journalism | Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR ("Seriously Rapid Source Review"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution. |
Precise determination of the critical threshold and exponents in a three-dimensional continuum percolation model | We present a large-scale computer simulation of the prototypical three-dimensional continuum percolation model consisting of a distribution of overlapping (spatially uncorrelated) spheres. By using simulations of up to 10^5 particles and studying the finite-size scaling of various effective percolation thresholds, we obtain a value of p_c = 0.2895 ± 0.0005. This value is significantly smaller than the values obtained for simulations that have been carried out using smaller systems. Employing this value of p_c and systems of size L = 160 (relative to a sphere of unit radius), we also obtain estimates of the critical exponents ν, β, and γ for the continuum system and show that the values are different from those obtained using previous values of p_c. |
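The model is easy to reproduce at small scale. The sketch below places overlapping unit spheres at the reported critical occupied volume fraction φ_c = 0.2895 (so the number density is n_c = -ln(1 - φ_c) / (4πr³/3)) and tests face-to-face spanning with a union-find; the box size and the simple spanning rule are simplifications of the paper's finite-size-scaling study.

```python
import numpy as np
from scipy.spatial import cKDTree

def spans(N, L, radius=1.0, seed=0):
    """Place N unit spheres uniformly in an L^3 box and test whether a
    connected cluster of overlapping spheres joins the x=0 and x=L faces."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, L, size=(N, 3))
    parent = list(range(N))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = cKDTree(pts)
    for i, j in tree.query_pairs(2 * radius):   # spheres overlap iff d < 2r
        parent[find(i)] = find(j)
    left = {find(i) for i in range(N) if pts[i, 0] < radius}
    right = {find(i) for i in range(N) if pts[i, 0] > L - radius}
    return bool(left & right)

# At phi_c = 0.2895 the density is n_c ~ 0.0816 spheres per unit volume;
# near threshold, spanning occurs in roughly half of the realizations.
L = 40.0
n_c = -np.log(1 - 0.2895) / (4 / 3 * np.pi)
print(spans(int(n_c * L ** 3), L))
```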
Using rich social media information for music recommendation via hypergraph model | There are various kinds of social media information, including different types of objects and relations among these objects, in music social communities such as Last.fm and Pandora. This information is valuable for music recommendation. However, there are two main challenges to exploit this rich social media information: (a) There are many different types of objects and relations in music social communities, which makes it difficult to develop a unified framework taking into account all objects and relations. (b) In these communities, some relations are much more sophisticated than pairwise relation, and thus cannot be simply modeled by a graph. We propose a novel music recommendation algorithm by using both multiple kinds of social media information and music acoustic-based content. Instead of graph, we use hypergraph to model the various objects and relations, and consider music recommendation as a ranking problem on this hypergraph. While an edge of an ordinary graph connects only two objects, a hyperedge represents a set of objects. In this way, hypergraph can be naturally used to model high-order relations. |
Where Has All the Education Gone ? | Cross-national data show no association between increases in human capital attributable to the rising educational attainment of the labor force and the rate of growth of output per worker. This implies that the association of educational capital growth with conventional measures of total factor productivity is large, strongly statistically significant, and negative. These are “on average” results, derived from imposing a constant coefficient. However, the development impact of education varied widely across countries and has fallen short of expectations for three possible reasons. First, the institutional/governance environment could have been sufficiently perverse that the accumulation of educational capital lowered economic growth. Second, marginal returns to education could have fallen rapidly as the supply of educated labor expanded while demand remained stagnant. Third, educational quality could have been so low that years of schooling created no human capital. The extent and mix of these three phenomena vary from country to country in explaining the actual economic impact of education, or the lack thereof. |
Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints | We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity if deployed on either conventional systems utilizing spatial-domain convolution or lower-complexity systems designed for Winograd convolution. Furthermore, we explore the universal quantization and compression of these networks. In particular, the proposed framework produces one compressed model whose convolutional filters can be made sparse either in the spatial domain or in the Winograd domain. Hence, one compressed model can be deployed universally on any platform, without the need for re-training on the deployed platform, and the sparsity of its convolutional filters can be exploited for further complexity reduction in either domain. To get a better compression ratio, the sparse model is compressed in the spatial domain, which has fewer parameters. From our experiments, we obtain 24.2×, 47.7× and 35.4× compressed models for ResNet-18, AlexNet and CT-SRCNN, while their computational cost is also reduced by 4.5×, 5.1× and 23.5×, respectively. |
Arc-flash in large battery energy storage systems — Hazard calculation and mitigation | This paper deals with the arc-flash hazard calculation in battery energy storage systems (BESSs). The lack of internationally harmonized standards, coupled with a foreseeable increasing use of BESSs, makes this subject very interesting, especially due to the practical implications related to the arc-flash hazard associated with BESS maintenance operations. A Li-ion battery circuit model, based on experimental short-circuit tests conducted by TERNA (the Italian transmission system operator), and a non-linear arc resistance model are used to assess the incident energy related to arc-flash. Efficient and cost-effective solutions, able to reduce incident energy below the safe value of 8 cal/cm² and based on the installation of fast-acting fuses, are then proposed and discussed. |
Narco-Terrorism : The Merger of the War on Drugs and the War on Terror | The aim of this article is to analyse the phenomena of narco-terrorism and the practical measures utilised to counter this threat. By adopting the model of the crime-terror continuum developed by Tamara Makarenko, the article will outline the similarities and dissimilarities of narcotics trafficking and terrorism in order to provide a more nuanced perspective on the concept of narco-terrorism. By doing so, the article will evaluate the kind of approach taken in combating the threat of narco-terrorism. |
A Simple Operator for Very Precise Estimation of Ellipses | This paper presents a simple linear operator that accurately estimates the position and parameters of ellipse features. Based on the dual conic model, the operator avoids the intermediate stage of precisely extracting individual edge points by exploiting directly the raw gradient information in the neighborhood of an ellipse's boundary. Moreover, under the dual representation, the dual conic can easily be constrained to a dual ellipse when minimizing the algebraic distance. The new operator is assessed and compared to other estimation approaches in simulation as well as in real situation experiments and shows better accuracy than the best approaches, including those limited to the center position. |
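For contrast with the paper's gradient-based dual-conic operator, the following is the generic algebraic-distance conic fit that such operators improve upon; unlike the paper's method, it requires explicitly extracted boundary points (here, synthetic noisy samples), and the least-squares solution via SVD is a standard baseline, not the paper's algorithm.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares algebraic fit of a conic
    a x^2 + b xy + c y^2 + d x + e y + f = 0: the coefficient vector is the
    right singular vector of the design matrix with smallest singular value."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]            # conic coefficients, defined up to scale

# Noisy samples of the ellipse x^2/4 + y^2 = 1.
t = np.linspace(0, 2 * np.pi, 100)
rng = np.random.default_rng(0)
x = 2 * np.cos(t) + rng.normal(0, 0.01, t.size)
y = np.sin(t) + rng.normal(0, 0.01, t.size)
coeffs = fit_conic(x, y)
print(np.round(coeffs / coeffs[0], 3))  # ~ [1, 0, 4, 0, 0, -4]
```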
Ellipsoidal Toolbox (ET) | Ellipsoidal Toolbox is the first free MATLAB package that implements the operations of ellipsoidal calculus: geometric (Minkowski) sums and differences of ellipsoids, intersections of ellipsoids, and ellipsoids with hyperplanes and polyhedra. The toolbox uses ellipsoidal methods to compute forward and backward reach sets of continuous- and discrete-time piecewise affine systems. Forward and backward reach sets can be also computed for continuous-time piece-wise linear systems with disturbances |
Psychological capital : internal and external validity of the Psychological Capital Questionnaire (PCQ-24) on a South African sample : original research | Occupational stress and burnout are serious problems in modern organisations. The costs of high stress and burnout levels to employers include higher staff turnover, lower morale, excessive sick leave and reduced productivity and efficiency (e.g. Cordes & Dougherty, 1993; Lee & Ashforth, 1996; Schaufeli & Enzmann, 1998; Wright & Bonett, 1997). Studies from the emerging field of positive organisational behaviour (POB) (Luthans, 2002) have shown that the construct of psychological capital (PsyCap) (i.e. a higher order constellation of positive psychological components that consists of hope, optimism, self-efficacy and resilience) may contribute to decreased stress (e.g. Avey, Luthans & Jensen, 2009) and increased work engagement (Avey, Wernsing & Luthans, 2008). Within the framework of Hobfoll’s (2002) psychological resources theory, Luthans, Youssef and Avolio (2007, p. 10) define PsyCap as ‘an individual’s positive psychological state of development, characterised by: (1) having confidence (self-efficacy) to take on and put in the necessary effort to succeed at challenging tasks; (2) making a positive attribution (optimism) about succeeding now and in the future; (3) persevering towards goals, and when necessary, redirecting paths to goals (hope) in order to succeed; and (4) when beset by problems and adversity, sustaining and bouncing back and even beyond (resiliency) to attain success’. In essence, PsyCap represents an individual’s ‘positive appraisal of circumstances and probability for success based on motivated effort and perseverance’ (Luthans, Avolio, Avey & Norman, 2007, p. 550). PsyCap has been shown to impact a range of workplace outcomes like job performance (Luthans, Avolio, Avey & Norman, 2007; Luthans, Avolio, Walumbwa & Li, 2005), stress (e.g. Avey et al., 2009) and well-being (Culberson, Fullagar & Mills, 2010). |
A dynamic over-sampling procedure based on sensitivity for multi-class problems | Classification with imbalanced datasets poses a new challenge for researchers in the framework of machine learning. This problem appears when the number of patterns that represents one of the classes of the dataset (usually the concept of interest) is much lower than in the remaining classes. Thus, the learning model must be adapted to this situation, which is very common in real applications. In this paper, a dynamic over-sampling procedure is proposed for improving the classification of imbalanced datasets with more than two classes. This procedure is incorporated into a memetic algorithm (MA) that optimizes radial basis function neural networks (RBFNNs). To handle class imbalance, the training data are resampled in two stages. In the first stage, an over-sampling procedure is applied to the minority class to partially balance the size of the classes. Then, the MA is run and the data are over-sampled in different generations of the evolution, generating new patterns of the minimum sensitivity class (the class with the worst accuracy for the best RBFNN of the population). The proposed methodology is tested using 13 imbalanced benchmark classification datasets from well-known machine learning problems and one complex problem of microbial growth. It is compared to other neural network methods specifically designed for handling imbalanced data. These methods include different over-sampling procedures in the preprocessing stage, a threshold-moving method where the output threshold is moved toward inexpensive classes, and ensemble approaches combining the models obtained with these techniques. The results show that our proposal is able to improve the sensitivity in the generalization set and obtains both a high accuracy level and a good classification level for |
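The sketch below shows plain random over-sampling of under-represented classes, the static building block that the paper's dynamic procedure applies repeatedly, inside a memetic algorithm, to the minimum-sensitivity class; the target ratio and data are illustrative.

```python
import numpy as np

def oversample_minority(X, y, target_ratio=1.0, seed=0):
    """Randomly duplicate patterns of each under-represented class until
    its size reaches target_ratio times the majority class size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_target = int(target_ratio * counts.max())
    parts_X, parts_y = [X], [y]
    for c, n in zip(classes, counts):
        if n < n_target:
            idx = rng.choice(np.flatnonzero(y == c), n_target - n, replace=True)
            parts_X.append(X[idx])
            parts_y.append(y[idx])
    return np.concatenate(parts_X), np.concatenate(parts_y)

# 8-vs-2 imbalanced toy set becomes balanced after over-sampling.
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xb, yb = oversample_minority(X, y)
print(np.bincount(yb))   # [8 8]
```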
A Highly Parallel Framework for HEVC Coding Unit Partitioning Tree Decision on Many-core Processors | High Efficiency Video Coding (HEVC) uses a very flexible tree structure to organize coding units, which leads to superior coding efficiency compared with previous video coding standards. However, such a flexible coding unit tree structure also poses a great challenge for encoders. In order to fully exploit the coding efficiency brought by this structure, a huge amount of computation is needed for an encoder to decide the optimal coding unit tree for each image block. One way to achieve this is to use parallel computing enabled by many-core processors. In this paper, we analyze the challenge of using many-core processors to make coding unit tree decisions. Through an in-depth understanding of the dependency among different coding units, we propose a parallel framework to decide coding unit trees. Experimental results show that, on the Tile64 platform, our proposed method achieves on average more than 11× and 16× speedup for 1920x1080 and 2560x1600 video sequences, respectively, without any coding efficiency degradation. |
Effects of dance on movement control in Parkinson's disease: a comparison of Argentine tango and American ballroom. | OBJECTIVE
The basal ganglia may be selectively activated during rhythmic, metered movement such as tango dancing, which may improve motor control in individuals with Parkinson's disease. Other partner dances may be more suitable and preferable for those with Parkinson's disease. The purpose of this study was to compare the effects of tango, waltz/foxtrot and no intervention on functional motor control in individuals with Parkinson's disease.
DESIGN
This study employed a randomized, between-subject, prospective, repeated-measures design.
SUBJECTS/PATIENTS
Fifty-eight people with mild-moderate Parkinson's disease participated.
METHODS
Participants were randomly assigned to tango, waltz/foxtrot or no intervention (control) groups. Those in the dance groups attended 1-h classes twice a week, completing 20 lessons in 13 weeks. Balance, functional mobility, forward and backward walking were evaluated before and after the intervention.
RESULTS
Both dance groups improved more than the control group, which did not improve. The tango and waltz/foxtrot groups improved significantly on the Berg Balance Scale, 6-minute walk distance, and backward stride length. The tango group improved as much or more than those in the waltz/foxtrot group on several measures.
CONCLUSION
Tango may target deficits associated with Parkinson's disease more than waltz/foxtrot, but both dances may benefit balance and locomotion. |
Nursing Theory in Holistic Nursing Practice | Concept: An abstract idea or notion. Conceptual model: A group of interrelated concepts described to suggest relationships among them. Framework: A basic structure; the context in which theory is developed; the structure that permits theory to be understood. Grand theory: A theory that covers a broad area of the discipline's concerns. Metaparadigm: Concepts that identify the domain of a discipline. Metatheory: Theory about theory development; theory about theory. Midrange theory: A focused theory for nursing that deals with a portion of nurses' concerns or that is oriented to patient outcomes. Model: A representation of interactions between and among concepts. Nursing theory: A framework; a set of interrelated concepts that are testable; a way of |
General Control Considerations for Input-Series Connected DC/DC Converters | This paper discusses the general control problems of dc/dc converters connected in series at the input. As the input voltage is shared by a number of dc/dc converters, the resulting converter relieves the voltage stresses of individual devices and hence is suitable for high input-voltage applications. At the output side, parallel connection provides current sharing and is suitable for high output-current applications. Moreover, series connection at the output side is also possible, resulting in output voltage sharing. Theoretically, from a power balance consideration, one can show that fulfillment of input-voltage sharing implies fulfillment of output-current or of output-voltage sharing, and vice versa. However, the presence of right-half-plane poles can cause instability when the sharing is implemented at the output side. As a consequence, control should be directed to input-voltage sharing in order to ensure a stable sharing of the input voltage and of the output current (parallel connection at output) or output voltage (series connection at output). In this paper, general problems in input-series connected converter systems are addressed. Minimal control structures are then derived and some practical design considerations are discussed in detail. Illustrative examples are given for addressing these general control considerations. Finally, experimental prototypes are built to validate these considerations. |
America is like Metamucil: fostering critical and creative thinking about metaphor in political blogs | Blogs are becoming an increasingly important medium -- socially, academically, and politically. Much research has involved analyzing blogs, but less work has considered how such analytic techniques might be incorporated into tools for blog readers. A new tool, metaViz, analyzes political blogs for potential conceptual metaphors and presents them to blog readers. This paper presents a study exploring the types of critical and creative thinking fostered by metaViz as evidenced by user comments and discussion on the system. These results indicate the effectiveness of various system features at fostering critical thinking and creativity, specifically in terms of deep, structural reasoning about metaphors and creatively extending existing metaphors. Furthermore, the results carry broader implications beyond blogs and politics about exploring alternate configurations between computation and human thought. |
Interference cancellation for cellular systems: a contemporary overview | Cellular networks today are interference-limited and will only become more so in the future, due to the many users that need to share the spectrum to achieve high-rate multimedia communication. Despite the enormous amount of academic and industrial research in the past 20 years on interference-aware receivers and the large performance improvements promised by these multi-user techniques, today's receivers still generally treat interference as background noise. In this article, we enumerate the reasons for this widespread scepticism, and discuss how current and future trends increase the need for and viability of multi-user receivers for both the uplink, where many asynchronous users are simultaneously detected, and the downlink, where users are scheduled and largely orthogonalized, but the mobile handset still needs to cope with a few dominant interfering base stations. New results for interference-cancelling receivers that use conventional front-ends are shown to alleviate many of the shortcomings of prior techniques, particularly for the challenging uplink. This article gives an overview of key recent research breakthroughs on interference cancellation and highlights system-level considerations for future multi-user receivers. |
Habermas and the Unfinished Project of Modernity: Critical Essays on the Philosophical Discourse of Modernity | Introduction, Maurizio Passerin d'Entreves - "Modernity Versus Postmodernity", J. Habermas. Part 1 Critical rejoinders: the discourse of modernity - Hegel, Nietzsche, Heidegger and Habermas, F. Dallmayr deconstruction, postmodernism and philosophy - Habermas on Derrida, C. Norris splitting the difference - Habermas's critique of Derrida, D. Hoy Habermas and Foucault, J. Schmidt intersubjectivity and the monadic core of the psyche - Habermas and Castoriadis on the unconscious, J. Whitebook. Part 2 Thematic reformulations: two versions of the linguistic turn - Habermas and poststructuralism, J. Bohman Habermas and the question of alterity, D. Coole the causality of fate - modernity and modernism in Habermas, J.M. Bernstein the subject of justice in postmodern discourse - aesthetic judgment and political rationality, D. Ingram. |
An AOI algorithm for PCB based on feature extraction | With the development of the micro-electronics industry, the electronic components assembled on printed circuit boards (PCBs) are becoming smaller and smaller, and their mounting density is increasing. It is no longer feasible to depend on manual inspection to assure joint quality. Instead, automated optical inspection (AOI) of solder joints based on machine vision has become more and more important. In this paper, based on images acquired from a 3-CCD color camera and a 3-color hemispherical LED array light source (red, green, and blue), the place, shape, and logical features of the solder joints of chip components are extracted. The place features relate to solder joint location. As for the shape features, we divide the solder joint into several regions and extract shape features from each region's color, occupancy ratio of area, center of gravity, and continuous pixels. The logical features come from the close relationships among the different regions' shape features, color distribution, and place features. On the basis of these features, an AOI algorithm is developed. Defects such as insufficient solder, excess solder, no solder, pseudo joints, wrong components, damaged components, missing components, shift, tombstoning, wrong polarity, etc. can be identified properly using the proposed algorithm. Finally, some experimental results are presented to show the validity of the algorithm. |
Graph-Based Network Analysis of Resting-State Functional MRI | In the past decade, resting-state functional MRI (R-fMRI) measures of brain activity have attracted considerable attention. Based on changes in the blood oxygen level-dependent signal, R-fMRI offers a novel way to assess the brain's spontaneous or intrinsic (i.e., task-free) activity with both high spatial and temporal resolutions. The properties of both the intra- and inter-regional connectivity of resting-state brain activity have been well documented, promoting our understanding of the brain as a complex network. Specifically, the topological organization of brain networks has been recently studied with graph theory. In this review, we will summarize the recent advances in graph-based brain network analyses of R-fMRI signals, both in typical and atypical populations. Application of these approaches to R-fMRI data has demonstrated non-trivial topological properties of functional networks in the human brain. Among these is the knowledge that the brain's intrinsic activity is organized as a small-world, highly efficient network, with significant modularity and highly connected hub regions. These network properties have also been found to change throughout normal development, aging, and in various pathological conditions. The literature reviewed here suggests that graph-based network analyses are capable of uncovering system-level changes associated with different processes in the resting brain, which could provide novel insights into the understanding of the underlying physiological mechanisms of brain function. We also highlight several potential research topics in the future. |
Structural Health Monitoring Framework Based on Internet of Things: A Survey | The Internet of Things (IoT) has recently received great attention due to its potential and capacity to be integrated into any complex system. As a result of the rapid development of sensing technologies such as radio-frequency identification and sensors, and the convergence of information technologies such as wireless communication and the Internet, IoT is emerging as an important technology for monitoring systems. This paper reviews and introduces a framework for structural health monitoring (SHM) using IoT technologies for intelligent and reliable monitoring. Specifically, technologies involved in IoT and SHM system implementation, as well as the data routing strategy in an IoT environment, are presented. As the amount of data generated by sensing devices is voluminous and produced faster than ever, big data solutions are introduced to deal with the complex and large amounts of data collected from sensors installed on structures. |
Progress in nonequilibrium quantum field theory III | We review recent developments and open questions for the description of nonequilibrium quantum fields, continuing hep-ph/0302210 and hep-ph/0410330 [J. Berges and J. Serreau, in SEWM02, ed. M.G. Schmidt, World Scientific, Singapore, 2003, arXiv:hep-ph/0302210; in SEWM04, eds. K.J. Eskola, K. Kainulainen, K. Kajantie, K. Rummukainen, World Scientific, Singapore, 2005, arXiv:hep-ph/0410330]. |
Programmable self-assembly in a thousand-robot swarm | Self-assembly enables nature to build complex forms, from multicellular organisms to complex animal structures such as flocks of birds, through the interaction of vast numbers of limited and unreliable individuals. Creating this ability in engineered systems poses challenges in the design of both algorithms and physical systems that can operate at such scales. We report a system that demonstrates programmable self-assembly of complex two-dimensional shapes with a thousand-robot swarm. This was enabled by creating autonomous robots designed to operate in large groups and to cooperate through local interactions and by developing a collective algorithm for shape formation that is highly robust to the variability and error characteristic of large-scale decentralized systems. This work advances the aim of creating artificial swarms with the capabilities of natural ones. |
Issues in building general letter to sound rules | In general text-to-speech systems, it is not possible to guarantee that a lexicon will contain all words found in a text; therefore some system for predicting pronunciation from the word itself is necessary. Here we present a general framework for building letter to sound (LTS) rules from a word list in a language. The technique can be fully automatic, though a small amount of hand seeding can give better results. We have applied this technique to English (UK and US), French and German. The generated models achieve 75%, 58%, 93% and 89% words correct, respectively, for held-out data from the word lists. To test our models on more typical data we also analyzed general text to find which words do not appear in our lexicon. These unknown words were used as a more realistic test corpus for our models. We also discuss the distribution and type of such unknown words. |
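A minimal sketch of the letter-context idea behind such LTS rules: a decision tree predicts each letter's phone from a window of surrounding letters. The toy alignment, window width, and phone labels are assumptions for illustration; real systems learn the letter-phone alignment automatically, as the abstract's "hand seeding" discussion implies.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Toy training data with a hand-made one-to-one letter-to-phone alignment
# ("_" marks a silent letter).
aligned = [("cat", ["k", "ae", "t"]), ("cent", ["s", "eh", "n", "t"]),
           ("cab", ["k", "ae", "b"]), ("cell", ["s", "eh", "l", "_"])]

def windows(word, width=1):
    """One (previous, current, next) letter window per letter, '#'-padded."""
    w = "#" * width + word + "#" * width
    return [list(w[i:i + 2 * width + 1]) for i in range(len(word))]

X = [win for word, _ in aligned for win in windows(word)]
y = [ph for _, phones in aligned for ph in phones]
enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
clf = DecisionTreeClassifier(random_state=0).fit(enc.fit_transform(X), y)

# The tree learns that "c" sounds /s/ before "e" and /k/ before "a".
print(clf.predict(enc.transform(windows("ce")))[0],   # -> "s"
      clf.predict(enc.transform(windows("ca")))[0])   # -> "k"
```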
Output impedance design of parallel-connected UPS inverters with wireless load-sharing control | This paper deals with the design of the output impedance of uninterruptible power system (UPS) inverters with parallel-connection capability. In order to avoid the need for any communication among modules, the power-sharing control loops are based on the P/Q droop method. Since in these systems the power-sharing accuracy is highly sensitive to the inverters output impedance, novel control loops to achieve both stable output impedance and proper power balance are proposed. In this sense, a novel wireless controller is designed by using three nested loops: 1) the inner loop is performed by using feedback linearization control techniques, providing a good quality output voltage waveform; 2) the intermediate loop enforces the output impedance of the inverter, achieving good harmonic power sharing while maintaining low output voltage total harmonic distortion; and 3) the outer loop calculates the output active and reactive powers and adjusts the output impedance value and the output voltage frequency during the load transients, obtaining excellent power sharing without deviations in either the frequency or the amplitude of the output voltage. Simulation and experimental results are reported from a parallel-connected UPS system sharing linear and nonlinear loads. |
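For reference, the conventional P/Q droop law underlying such wireless (communication-free) load sharing can be sketched in a few lines; the gains and set-points below are illustrative, and the paper's actual contribution is the additional output-impedance and voltage/frequency restoration loops wrapped around this basic law.

```python
def droop(P, Q, f0=50.0, V0=230.0, m=1e-4, n=1e-3):
    """Conventional P/f and Q/V droop: each inverter lowers its frequency
    with active power and its amplitude with reactive power, so paralleled
    modules converge toward proportional sharing without communication.
    Gains m, n and set-points f0, V0 are illustrative values."""
    return f0 - m * P, V0 - n * Q

# Two identical modules seeing unequal loads drift apart in frequency,
# which redistributes power until the sharing equalizes (steady state
# not simulated here).
print(droop(P=5000.0, Q=1000.0))   # (49.5, 229.0)
print(droop(P=4000.0, Q=500.0))    # (49.6, 229.5)
```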
Learning Discriminative Projections for Text Similarity Measures | Traditional text similarity measures consider each term similar only to itself and do not model semantic relatedness of terms. We propose a novel discriminative training method that projects the raw term vectors into a common, low-dimensional vector space. Our approach operates by finding the optimal matrix to minimize the loss of the pre-selected similarity function (e.g., cosine) of the projected vectors, and is able to efficiently handle a large number of training examples in the high-dimensional space. Evaluated on two very different tasks, cross-lingual document retrieval and ad relevance measure, our method not only outperforms existing state-of-the-art approaches, but also achieves high accuracy at low dimensions and is thus more efficient. |
A Framework for Protecting Worker Location Privacy in Spatial Crowdsourcing | Spatial Crowdsourcing (SC) is a transformative platform that engages individuals, groups and communities in the act of collecting, analyzing, and disseminating environmental, social and other spatio-temporal information. The objective of SC is to outsource a set of spatio-temporal tasks to a set of workers, i.e., individuals with mobile devices that perform the tasks by physically traveling to specified locations of interest. However, current solutions require the workers, who in many cases are simply volunteering for a cause, to disclose their locations to untrustworthy entities. In this paper, we introduce a framework for protecting location privacy of workers participating in SC tasks. We argue that existing location privacy techniques are not sufficient for SC, and we propose a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers. We investigate analytical models and task assignment strategies that balance multiple crucial aspects of SC functionality, such as task completion rate, worker travel distance and system overhead. Extensive experimental results on real-world datasets show that the proposed technique protects workers’ location privacy without incurring significant performance metrics penalties. |
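As a simplified illustration of the privacy building block mentioned above, the sketch below releases ε-differentially-private worker counts over a grid (Laplace mechanism, sensitivity 1, since each worker falls in exactly one cell), from which a geocast cell could then be chosen; the grid size, ε, and the cell-selection rule are illustrative, not the paper's full mechanism.

```python
import numpy as np

def private_worker_counts(locations, grid=8, extent=1.0, epsilon=0.5, seed=0):
    """Laplace mechanism on a worker-count grid: adding Laplace(1/epsilon)
    noise to each cell count satisfies epsilon-differential privacy, so
    tasks can be geocast to dense cells instead of exact worker locations."""
    rng = np.random.default_rng(seed)
    counts, _, _ = np.histogram2d(locations[:, 0], locations[:, 1],
                                  bins=grid, range=[[0, extent], [0, extent]])
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

# 200 hypothetical workers scattered over a unit square.
workers = np.random.default_rng(1).uniform(0, 1, size=(200, 2))
noisy = private_worker_counts(workers)
print(np.unravel_index(noisy.argmax(), noisy.shape))  # densest geocast cell
```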
Learning to Rank Non-Factoid Answers: Comment Selection in Web Forums | Recent initiatives in the IR community have shown the importance of going beyond factoid Question Answering (QA) in order to design useful real-world applications. Questions asking for descriptions or explanations are much more difficult to solve, e.g., machine learning models cannot focus on specific answer words or their lexical type. Thus, researchers have started to explore powerful methods for feature engineering. Two of the most promising methods are convolution tree kernels (CTKs) and convolutional neural networks (CNNs), as they have been shown to obtain high performance in the task of answer sentence selection in factoid QA. In this paper, we design state-of-the-art models for non-factoid QA, also carried out on noisy data. In particular, we study and compare models for comment selection in a community QA (cQA) scenario, where the majority of questions regard descriptions or explanations. To deal with such a complex task, we incorporate relational information holding between questions and comments as well as domain-specific features into both convolutional models above.
Our experiments on a cQA corpus show that both CTK and CNN achieve the state of the art, also according to a direct comparison with the results obtained by the best systems of the SemEval cQA challenge. |
Information Theory and Statistical Mechanics | Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum-entropy estimate. It is the least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information. If one considers statistical mechanics as a form of statistical inference rather than as a physical theory, it is found that the usual computational rules, starting with the determination of the partition function, are an immediate consequence of the maximum-entropy principle. In the resulting "subjective statistical mechanics," the usual rules are thus justified independently of any physical argument, and in particular independently of experimental verification. |
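As a compact illustration of the claim above, maximizing the entropy subject to normalization and one expectation constraint yields the partition-function formalism directly (standard derivation, shown for a single constraint):

```latex
% Maximize entropy subject to normalization and one expectation constraint:
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i\, f(x_i) = \langle f \rangle .

% Lagrange multipliers give the Gibbs form, with Z the partition function:
p_i = \frac{e^{-\lambda f(x_i)}}{Z(\lambda)}, \qquad
Z(\lambda) = \sum_i e^{-\lambda f(x_i)}, \qquad
\langle f \rangle = -\frac{\partial \ln Z}{\partial \lambda} .
```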
Electromigration characteristics for Al-Ge-Cu | Aluminum-Germanium-Copper (Al-Ge-Cu) alloy is a promising material for interconnections to fill contact holes and vias using low-temperature reflow sputtering, due to its lower melting point than conventional Al alloys. The reflow temperature for contact- and via-filling decreases as the Ge concentration in Al increases. The suitable Ge concentration for reflow sputtering at around 400 |
Arbitrary Analog/RF Spatial Filtering for Digital MIMO Receiver Arrays | Traditional digital multiple-input multiple-output (MIMO) receivers that feature element-level digitization face high instantaneous dynamic range challenges in the analog/RF domain due to the absence of analog/RF spatial filtering. Existing analog/RF spatial notch filtering techniques are limited in their noise, linearity, and spatial filtering bandwidth performance. More importantly, only single spatial notches have been demonstrated, providing insufficient filtering in practical scenarios. We propose a frequency-translational arbitrary spatial filtering technique that features not only arbitrary spatial filtering response generation at baseband for the protection of the following analog-to-digital converters, but modulated baseband input impedance that can be translated by passive mixers to achieve arbitrary spatial filtering at RF as well. This technique allows the synthesis and independent steering of an arbitrary number of spatial notches, and the independent adjustment of notch depths. Current-mode operation leads to superior linearity performance and ultra-wideband rejection. A four-element 65-nm CMOS 0.1–3.1 GHz prototype MIMO receiver array shows arbitrary spatial response formation, more than 50-dB spatial rejection across all measured directions, and an ultra-wide 320-MHz 20-dB rejection bandwidth for a single-notch setting at 500-MHz local oscillator (LO) frequency. Formation of a single spatial notch only moderately degrades the equivalent single-element double-sideband noise figure from 2.1–3.7 dB to 3.4–5.8 dB. In the notch direction, +34 dBV in-band output-referred IP3 is measured, an improvement of 33 dB compared with outside-notch directions. A wireless demonstration shows the receiver array demodulating a weak spatial signal in the presence of two strong in-band spatial signals, verifying the arbitrary spatial filtering functionality. |
VIP-Tree: An Effective Index for Indoor Spatial Queries | Due to the growing popularity of indoor location-based services, indoor data management has received significant research attention in the past few years. However, we observe that the existing indexing and query processing techniques for indoor spaces do not fully exploit the properties of the indoor space. Consequently, they deliver below-par performance, which makes them unsuitable for large indoor venues with high query workloads. In this paper, we propose two novel indexes called the Indoor Partitioning Tree (IP-Tree) and the Vivid IP-Tree (VIP-Tree) that are carefully designed by utilizing the properties of indoor venues. The proposed indexes are lightweight, have small pre-processing cost and provide near-optimal performance for shortest distance and shortest path queries. We also present efficient algorithms for other spatial queries such as k nearest neighbor queries and range queries. Our extensive experimental study on real and synthetic data sets demonstrates that our proposed indexes outperform the existing algorithms by several orders of magnitude. |
Reference-Dependent Preferences and Labor Supply : The Case of New York City Taxi Drivers | Increased attention has been paid in recent years to deviations from the standard neoclassical model of consumer behavior. A substantial segment of this work focuses on reference-dependent preferences where there is a change in the shape of the utility function at some base (reference) level of income or consumption. These models have strong predictions for how responses to changes in prices are affected by the actual level of consumption or income relative to the reference level. A difficulty in bringing this class of models to the data is that the reference level of income or consumption is seldom observed, and we have only weak information on what determines the reference point. There are important reasons to understand how these considerations affect estimation of labor supply elasticities. Evaluation of much government policy regarding tax and transfer programs depends on having reliable estimates of the sensitivity of labor supply to wage rates and income levels. To the extent that individuals’ levels of labor supply are the result of optimization with reference-dependent preferences, the usual estimates of wage and income elasticities are likely to be misleading. In this study, I develop an empirical model of daily labor supply that incorporates referencedependent preferences but does not require that the reference level of income be observed or defined in advance. I apply this model to data on the daily labor supply of New York City taxi drivers by allowing taxi drivers to have a reference level of daily income. The estimates suggest that, while there may be a reference level of income on a given day such that there is a discrete increase in the probability of stopping when that income level is reached, the reference level varies substantially day to day for a particular driver. Additionally, most shifts end before the reference income level is reached. Essentially, the data show more smoothness in the relationship between income and the continuation and stopping probabilities than seems consistent with an important role for reference-dependent preferences. |
Interactive augmented reality using Scratch 2.0 to improve physical activities for children with developmental disabilities. | This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real-world and virtual-reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research design using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phases. The developmental applications of these results are also discussed. |
Affective Video Content Analysis: A Multidisciplinary Insight | In present-day society, the cinema has become one of the major forms of entertainment, providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology and neuroscience to computer science. However, affective multimedia content analysis work from the computer science community has benefited little from the progress achieved in these other research fields. In this paper, a multidisciplinary state of the art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions, are introduced, based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, is also presented, in order to point out the limitations of current affective video content analysis work. |
System structure for software fault tolerance | The paper presents, and discusses the rationale behind, a method for structuring complex computing systems by the use of what we term “recovery blocks”, “conversations” and “fault-tolerant interfaces”. The aim is to facilitate the provision of dependable error detection and recovery facilities which can cope with errors caused by residual design inadequacies, particularly in the system software, rather than merely the occasional malfunctioning of hardware components. |
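A minimal sketch of the recovery block structure the paper describes: execute a primary routine, validate its result with an acceptance test, and on failure restore the saved state and try an alternate. Names and the toy example are illustrative:

```python
# Recovery block: primary routine, acceptance test, alternates with
# state restoration between attempts. Names are illustrative.

def recovery_block(state, primary, alternates, acceptance_test):
    checkpoint = dict(state)                 # save a recovery point
    for routine in [primary, *alternates]:
        try:
            result = routine(state)
            if acceptance_test(result):
                return result                # passed: commit and exit
        except Exception:
            pass                             # treat a crash as a failure
        state.clear(); state.update(checkpoint)   # restore and retry
    raise RuntimeError("all alternates failed the acceptance test")

# Example: a flaky primary sorter with a trusted (slower) alternate.
flaky = lambda s: s["xs"]                    # "forgot" to sort
safe = lambda s: sorted(s["xs"])
ok = lambda r: all(a <= b for a, b in zip(r, r[1:]))
print(recovery_block({"xs": [3, 1, 2]}, flaky, [safe], ok))
```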
Growth performance, feed utilization, digestive enzyme activity, innate immunity and protection against Vibrio harveyi of freshwater prawn, Macrobrachium rosenbergii fed diets supplemented with Bacillus coagulans | The present study was conducted to investigate the effects of dietary supplementation of Bacillus coagulans on growth, feed utilization, digestive enzyme activity, innate immune response and disease resistance of the freshwater prawn Macrobrachium rosenbergii. Three treatment groups (designated as T1, T2 and T3) and a control group (C), each in triplicate, were established. The prawn in the control were fed a basal diet and those in T1, T2 and T3 were fed the basal diet containing B. coagulans at 10^5, 10^7 and 10^9 cfu g−1, respectively. After 60 days, growth performance and feed utilization were found to be significantly higher (P < 0.05) in prawn fed the T3 diet. The specific activities of the protease, amylase and lipase digestive enzymes were significantly higher (P < 0.05) for T3. Innate immunity in terms of lysozyme and respiratory burst activities was significantly elevated (P < 0.05) in all the probiotic treatment groups as compared to the control. A challenge study with Vibrio harveyi revealed a significant increase (P < 0.05) in disease resistance of freshwater prawn in the T2 and T3 groups. The results collectively suggest that supplementation of B. coagulans as a probiotic in the diet at approximately 10^9 cfu g−1 can improve the growth performance, feed utilization, digestive enzyme activity, innate immune response and disease resistance of freshwater prawn. |
A federated architecture for database systems | The contemporary approach to database system architecture requires the complete integration of data into a single, centralized database; while multiple logical databases can be supported by current database management software, techniques for relating these databases are strictly ad hoc. This problem is aggravated by the trend toward networks of small to medium size computer systems, as opposed to large, stand-alone main-frames. Moreover, while current research on distributed databases aims to provide techniques that support the physical distribution of data items in a computer network environment, current approaches require a distributed database to be logically centralized. |
Safety and efficacy of prasugrel use in patients undergoing percutaneous coronary intervention and anticoagulated with bivalirudin. | The randomized TRial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet InhibitioN with Prasugrel-Thrombolysis In Myocardial Infarction (TRITON-TIMI) 38 trial compared prasugrel and clopidogrel in patients with acute coronary syndrome undergoing percutaneous coronary intervention (PCI). Patients treated with prasugrel had fewer ischemic events but more procedure-related bleeding. In the present study, we aimed to determine the effect of bivalirudin on bleeding in patients treated with prasugrel. A total of 692 patients with consecutive acute coronary syndrome underwent PCI with stent implantation and were anticoagulated with bivalirudin. The patients were divided into 2 groups according to the antiplatelet regimen (clopidogrel or prasugrel) chosen during or just after PCI. The bleeding complications during hospitalization were tabulated. Ischemic events were analyzed during hospitalization and at 30 days. Prasugrel was used in 96 patients (13.9%) and clopidogrel in 596 (86.1%). The clinical and procedural characteristics were similar, although the clopidogrel patients more often reported systemic hypertension (p = 0.01), previous PCI (p <0.001), and chronic renal insufficiency (p = 0.05). During hospitalization, the bleeding and ischemic complication rates were similar and low in both groups (major in-hospital complications 4.2% for clopidogrel vs 2.1% for prasugrel, p = 0.6; Thrombolysis In Myocardial Infarction major bleeding 2.5% vs 2.1%, p = 1.00; Thrombolysis In Myocardial Infarction minor bleeding 4.2% vs 5.2%, p = 0.6). At 30 days, no differences were found in ischemic events between both groups (target vessel revascularization/major adverse cardiac events 5.4% vs 2.1%, p = 0.2). In conclusion, prasugrel, when given after bivalirudin as the intraprocedural antithrombin agent for patients with acute coronary syndrome undergoing PCI, is as safe and effective as clopidogrel. |
Exercise training enhances autonomic function after acute myocardial infarction: a randomized controlled study. | INTRODUCTION
Heart rate recovery, defined as the fall in heart rate during the first minute after exercise, is an indicator of autonomic function, and has been found to be an independent predictor of mortality after acute myocardial infarction. Exercise training has several well-known benefits in terms of cardiorespiratory fitness, modifiable cardiovascular risk factors and prognosis after acute coronary events. However, there are no randomized controlled studies in the literature evaluating the effects of exercise training per se, controlling for changes in medication and diet, on heart rate recovery. Thus, this study aims to assess the effects of exercise training on autonomic function in coronary artery disease patients recovering from acute myocardial infarction.
METHODS
Thirty-eight patients following a first acute myocardial infarction participated in this prospective randomized clinical trial. Patients were randomized into two groups: exercise training or control. The exercise group participated in an 8-week aerobic exercise program, while the control received standard medical care and follow-up. Changes in hemodynamics at rest and at peak exercise (heart rate, systolic and diastolic blood pressure, and rate pressure product), dietary intake, cardiorespiratory fitness, and heart rate recovery were assessed.
RESULTS
Medication and diet remained unchanged in both groups during the study period. The exercise-training group improved resting hemodynamics, particularly resting heart rate (from 68.0 ± 9.2 to 62.6 ± 8.7 bpm, p=0.030) and systolic blood pressure (from 135 ± 7.1 to 125.6 ± 11.3 mm Hg, p=0.012), cardiorespiratory fitness (from 30.8 ± 7.8 to 33.9 ± 8.3 ml/min/kg, p=0.016), and heart rate recovery (from 20 ± 6 to 24 ± 5 bpm, p=0.007). No significant changes were observed in the control group.
CONCLUSIONS
Exercise training improved autonomic function, assessed by heart rate recovery, resting heart rate and systolic blood pressure, in the absence of changes in diet or medication. |
Image-based air quality analysis using deep convolutional neural network | Air pollution may cause many severe diseases. An efficient air quality monitoring system is of great benefit for human health and air pollution control. In this paper, we study image-based air quality analysis, in particular, the concentration estimation of particulate matter with diameters less than 2.5 micrometers (PM2.5). The proposed method uses a deep Convolutional Neural Network (CNN) to classify natural images into different categories based on their PM2.5 concentrations. In order to evaluate the proposed method, we created a dataset that contains a total of 591 images taken in Beijing with corresponding PM2.5 concentrations. The experimental results demonstrate that our method is valid for image-based PM2.5 concentration estimation. |
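A hedged sketch of the kind of model the abstract describes: a small CNN that maps a natural image to one of a few PM2.5 concentration categories. The architecture below is an illustrative assumption, not the authors' network:

```python
import torch
import torch.nn as nn

# Toy image classifier: natural images in, one of a few PM2.5
# concentration bands out. Layer sizes are illustrative.

class PM25Net(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),       # global pooling -> 32 features
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = PM25Net()(torch.rand(2, 3, 128, 128))   # 2 images -> 2 x 4 scores
print(logits.shape)
```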
Nitroglycerin sensitises in healthy subjects CNS structures involved in migraine pathophysiology: Evidence from a study of nociceptive blink reflexes and visual evoked potentials | Nitroglycerin (NTG), an NO donor, induces an attack in migraine patients approximately 4-6 h after administration. The causative mechanisms are not known, but the long delay leaves room for a central effect, such as a change in neuronal excitability and synaptic transmission of various CNS areas involved in pain and behaviour, including the trigeminal nucleus caudalis and monoaminergic brain stem nuclei. To explore the central action of NTG, we have studied its effects on amplitude and habituation of the nociceptive blink reflex (nBR) and the visual evoked potential (VEP) before, 1 h and 4 h after administration of NTG (1.2 mg sublingual) or placebo (vehicle sublingual) in two groups of 10 healthy volunteers. We found a significant decrease in nBR pain and reflex thresholds both 1 and 4 h post-NTG. At the 4 h time point R2 latency was shorter (p=0.04) and R2 response area increased (p<0.01) after NTG but not after placebo. Habituation tended to become more pronounced after both NTG and placebo administration. There was a significant amplitude increase in the 5th VEP block (p=0.03) at 1 h after NTG and in the 1st block (p=0.04) at 4 h. VEP habituation was replaced by potentiation at both delays after NTG; the change in habituation slope was significant at 1 h (p=0.02). There were no significant VEP changes in subjects who received sublingual placebo. In conclusion, we found that in healthy subjects sublingual NTG, but not its vehicle, induces changes in a trigeminal nociceptive reflex and an evoked cortical response which are comparable to those found immediately before and during an attack of migraine. These changes could be relevant for the attack-triggering effect of NTG in migraineurs. |
An empirical study of the effects of test-suite reduction on fault localization | Fault-localization techniques that utilize information about all test cases in a test suite have been presented. These techniques use various approaches to identify the likely faulty part(s) of a program, based on information about the execution of the program with the test suite. Researchers have begun to investigate the impact that the composition of the test suite has on the effectiveness of these fault-localization techniques. In this paper, we present the first experiment on one aspect of test-suite composition--test-suite reduction. Our experiment studies the impact of the test-suite reduction on the effectiveness of fault-localization techniques. In our experiment, we apply 10 test-suite reduction strategies to test suites for eight subject programs. We then measure the differences between the effectiveness of four existing fault-localization techniques on the unreduced and reduced test suites. We also measure the reduction in test-suite size of the 10 test-suite reduction strategies. Our experiment shows that fault-localization effectiveness varies depending on the test-suite reduction strategy used, and it demonstrates the trade-offs between test-suite reduction and fault-localization effectiveness. |
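Two ingredients of such experiments can be sketched compactly: a coverage-based greedy test-suite reduction strategy and a spectrum-based suspiciousness score in the style of Tarantula. Both are standard techniques of the kind studied; the toy coverage data is illustrative:

```python
# Greedy coverage-based test-suite reduction plus a Tarantula-style
# suspiciousness score. Coverage data below is illustrative.

coverage = {          # test -> set of covered statements
    "t1": {1, 2, 3}, "t2": {2, 4}, "t3": {1, 4, 5}, "t4": {5},
}
failing = {"t2"}

def greedy_reduce(coverage):
    """Keep adding the test covering the most uncovered statements
    until the reduced suite covers everything the full suite does."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

def tarantula(stmt, coverage, failing):
    """Suspiciousness of a statement: fail-coverage ratio relative
    to total coverage ratio (Tarantula formula)."""
    passed = set(coverage) - failing
    ef = sum(stmt in coverage[t] for t in failing) / max(len(failing), 1)
    ep = sum(stmt in coverage[t] for t in passed) / max(len(passed), 1)
    return ef / (ef + ep) if ef + ep else 0.0

print(greedy_reduce(coverage),
      {s: round(tarantula(s, coverage, failing), 2) for s in range(1, 6)})
```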
Cation-π Interactions between Quaternary Ammonium Ions and Amino Acid Aromatic Groups in Aqueous Solution. | Cation-π interactions play important roles in the stabilization of protein structures and protein-ligand complexes. They contribute to the binding of quaternary ammonium ligands (mainly RNH3+ and RN(CH3)3+) to various protein receptors and are likely involved in the blockage of potassium channels by tetramethylammonium (TMA+) and tetraethylammonium (TEA+). Polarizable molecular models are calibrated for NH4+, TMA+, and TEA+ interacting with benzene, toluene, 4-methylphenol, and 3-methylindole (representing aromatic amino acid side chains) based on the ab initio MP2(full)/6-311++G(d,p) properties of the complexes. Whereas the gas-phase affinity of the ions with a given aromatic follows the trend NH4+ > TMA+ > TEA+, molecular dynamics simulations using the polarizable models show a reverse trend in water, likely due to a contribution from the hydrophobic effect. This reversed trend follows the solubility of aromatic hydrocarbons in quaternary ammonium salt solutions, which suggests a role for cation-π interactions in the salting-in of aromatic compounds in solution. Simulations in water show that the complexes possess binding free energies ranging from -1.3 to -3.3 kcal/mol (compared to gas-phase binding energies between -8.5 and -25.0 kcal/mol). Interestingly, whereas the most stable complexes involve TEA+ (the largest ion), the most stable solvent-separated complexes involve TMA+ (the intermediate-size ion). |
Embodied spatial cognition: Biological and artificial systems | In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a long-term memory stage. Spatial long-term memory is modeled along the lines of the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and long-term memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper. |
Patient priorities in osteoarthritis and comorbid conditions: a secondary analysis of qualitative data. | OBJECTIVE
A lack of agreement between clinician and patient priorities can impact the clinician-patient relationship, treatment concordance, and potential health outcomes. Studies have suggested that patients with osteoarthritis (OA) may prioritize comorbidities over their OA, but as yet no explicit systematic exploration of OA patients' priorities in relation to comorbidities exists. This study aims to explore how patients prioritize their OA among their conditions, which factors underlie this prioritization, and whether and why these priorities change over time.
METHODS
A secondary analysis of qualitative data was conducted utilizing 4 existing data sets collated from the 3 research centers involved. Purposive sampling provided a sample of 30 participants who all had OA and comorbidities. The research team collectively coded and analyzed the data thematically.
RESULTS
Three groups of patients emerged from the analysis. The 2 smaller groups had stable priorities (where OA was or was not prioritized) and illustrated the importance of factors, such as personal social context and the specific nature of the comorbid conditions. The third and largest group reported priorities that shifted over time. Shifting appeared to be influenced by the participants' perceptions of control and/or interactions with clinical professionals, and could have important consequences for self-management behavior.
CONCLUSION
The various factors underlying patients' priorities among their conditions, and the fluctuating nature of these priorities, highlight the importance of regular assessments during clinician-patient consultations to allow better communication and treatment planning, and ultimately optimize patient outcomes. |
Marshall-Olkin generalized exponential distribution | Marshall and Olkin (1997, "A new method for adding a parameter to a family of distributions with applications to the exponential and Weibull families", Biometrika, 84, 641–652) introduced a new way of incorporating a parameter to expand a family of distributions. In this paper we adopt the Marshall-Olkin approach to introduce an extra shape parameter to the two-parameter generalized exponential distribution. It is observed that the new three-parameter distribution is very flexible. The probability density function can be either decreasing or unimodal. The hazard function of the proposed model can have all four major shapes, namely increasing, decreasing, bathtub or inverted bathtub. Different properties of the proposed distribution have been established. The new family of distributions is analytically quite tractable, and it can also be used quite effectively to analyze censored data. The maximum likelihood method is used to compute the estimators of the unknown parameters. Two data sets have been analyzed, and the results are quite satisfactory. |
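For concreteness, the construction the abstract describes can be written out as follows, using F for the generalized exponential baseline and an assumed symbol θ > 0 for the added Marshall-Olkin parameter (setting θ = 1 recovers the baseline):

```latex
% Generalized exponential baseline (shape \alpha, scale \lambda):
F(x) = \left(1 - e^{-\lambda x}\right)^{\alpha}, \qquad x > 0 .

% Marshall-Olkin transformation with extra parameter \theta > 0,
% applied to the survival function \bar{F} = 1 - F:
\bar{G}(x) = \frac{\theta\,\bar{F}(x)}{1 - (1-\theta)\,\bar{F}(x)}
\;\;\Longrightarrow\;\;
G(x) = \frac{\left(1 - e^{-\lambda x}\right)^{\alpha}}
            {1 - (1-\theta)\left[1 - \left(1 - e^{-\lambda x}\right)^{\alpha}\right]} .
```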
Fast and Robust Archetypal Analysis for Representation Learning | We revisit a pioneering unsupervised learning technique called archetypal analysis [5], which is related to successful data analysis methods such as sparse coding [18] and non-negative matrix factorization [19]. Since it was proposed, archetypal analysis has not gained much popularity, even though it produces more interpretable models than other alternatives. Because no efficient implementation has ever been made publicly available, its application to important scientific problems may have been severely limited. Our goal is to bring archetypal analysis back into favour. We propose a fast optimization scheme using an active-set strategy, and provide an efficient open-source implementation interfaced with Matlab, R, and Python. Then, we demonstrate the usefulness of archetypal analysis for computer vision tasks, such as codebook learning, signal classification, and large image collection visualization. |
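A hedged sketch of archetypal analysis itself, solved here by alternating projected gradient rather than the paper's active-set strategy: data X is approximated as A @ Z with archetypes Z = B @ X, where the rows of A and B are constrained to the probability simplex:

```python
import numpy as np

# Didactic archetypal analysis: minimize ||A @ (B @ X) - X||^2 with
# rows of A (n x k) and B (k x n) on the simplex. A stand-in for the
# paper's active-set solver; sizes and learning rate are illustrative.

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1) - 1.0
    ind = np.arange(1, v.shape[1] + 1)
    rho = (u - css / ind > 0).sum(axis=1)
    theta = css[np.arange(v.shape[0]), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0.0)

rng = np.random.default_rng(0)
X = rng.random((100, 5))
n, d, k = *X.shape, 3
A = project_simplex(rng.random((n, k)))
B = project_simplex(rng.random((k, n)))

lr = 1e-3
for _ in range(500):
    Z = B @ X                                    # current archetypes
    R = A @ Z - X                                # reconstruction residual
    A = project_simplex(A - lr * R @ Z.T)        # gradient wrt A: R Z^T
    R = A @ (B @ X) - X
    B = project_simplex(B - lr * A.T @ R @ X.T)  # gradient wrt B: A^T R X^T

print(np.linalg.norm(A @ B @ X - X))
```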
Mixed Reality: A Survey | This chapter presents an overview of the Mixed Reality (MR) paradigm, which proposes to overlay our real-world environment with digital, computer-generated objects. It presents example applications and outlines limitations and solutions for their technical implementation. In MR systems, users perceive both the physical environment around them and digital elements presented through, for example, the use of semitransparent displays. By its very nature, MR is a highly interdisciplinary field engaging signal processing, computer vision, computer graphics, user interfaces, human factors, wearable computing, mobile computing, information visualization, and the design of displays and sensors. This chapter presents potential MR applications, technical challenges in realizing MR systems, as well as issues related to usability and collaboration in MR. It separately presents a section offering a selection of MR projects which have either been partly or fully undertaken at Swiss universities and rounds off with a section on current challenges and trends. |
Left-insular cortex lesions perturb cardiac autonomic tone in humans | The insular cortex is involved in cardiac regulation. The left insula is predominantly responsible for parasympathetic cardiovascular effects. Damage to this area could shift cardiovascular balance towards increased basal sympathetic tone (a proarrhythmic condition) and contribute to the excess cardiac mortality following stroke. Acute left insular stroke increased basal cardiac sympathetic tone and was associated with a decrease in randomness of heart rate variability. In addition, phase relationships between heart rate and blood pressure were disturbed, implying a disruption of oscillators involved in cardiovascular control. The insula appears to be involved in human heart rate regulation and damage to it may encourage a pro-arrhythmic state. |
A human study of patch maintainability | Identifying and fixing defects is a crucial and expensive part of the software lifecycle. Measuring the quality of bug-fixing patches is a difficult task that affects both functional correctness and the future maintainability of the code base. Recent research interest in automatic patch generation makes a systematic understanding of patch maintainability and understandability even more critical.
We present a human study involving over 150 participants, 32 real-world defects, and 40 distinct patches. In the study, humans perform tasks that demonstrate their understanding of the control flow, state, and maintainability aspects of code patches. As a baseline we use both human-written patches that were later reverted and also patches that have stood the test of time to ground our results. To address any potential lack of readability with machine-generated patches, we propose a system wherein such patches are augmented with synthesized, human-readable documentation that summarizes their effects and context. Our results show that machine-generated patches are slightly less maintainable than human-written ones, but that trend reverses when machine patches are augmented with our synthesized documentation. Finally, we examine the relationship between code features (such as the ratio of variable uses to assignments) with participants' abilities to complete the study tasks and thus explain a portion of the broad concept of patch quality. |
Non-contact ACL injuries in female athletes: an International Olympic Committee current concepts statement. | The incidence of anterior cruciate ligament (ACL) injury remains high in young athletes. Because female athletes have a much higher incidence of ACL injuries in sports such as basketball and team handball than male athletes, the IOC Medical Commission invited a multidisciplinary group of ACL expert clinicians and scientists to (1) review current evidence including data from the new Scandinavian ACL registries; (2) critically evaluate high-quality studies of injury mechanics; (3) consider the key elements of successful prevention programmes; (4) summarise clinical management including surgery and conservative management; and (5) identify areas for further research. Risk factors for female athletes suffering ACL injury include: (1) being in the preovulatory phase of the menstrual cycle compared with the postovulatory phase; (2) having decreased intercondylar notch width on plain radiography; and (3) developing increased knee abduction moment (a valgus intersegmental torque) during impact on landing. Well-designed injury prevention programmes reduce the risk of ACL for athletes, particularly women. These programmes attempt to alter dynamic loading of the tibiofemoral joint through neuromuscular and proprioceptive training. They emphasise proper landing and cutting techniques. This includes landing softly on the forefoot and rolling back to the rearfoot, engaging knee and hip flexion and, where possible, landing on two feet. Players are trained to avoid excessive dynamic valgus of the knee and to focus on the "knee over toe position" when cutting. |
The effect of cupric ion on the reaction inactivation of ascorbic acid oxidase. | 1. The reaction inactivation of functioning ascorbic acid oxidase has been found to be greatly increased in the presence of free cupric ion, whereas the incubation of nonfunctioning enzyme with Cu++ results in no apparent inactivation. 2. Other bivalent metal ions (Fe++, Co++, Ni++, Zn++) have been found to produce no significant retardation of the enzyme's action. 3. It has been demonstrated that the increased enzyme inactivation in the presence of Cu++ cannot be attributed to the formation of H2O2 resulting from cupric ion catalysis of the aerobic oxidation of ascorbic acid. 4. The significance of the Cu++ effect in respect to the possible role of —SH group participation in the enzyme's function is briefly discussed. |
Evaluation of Fuzzy K-Means And K-Means Clustering Algorithms In Intrusion Detection Systems | With the growth of Internet technology, there is a need to develop strategies to maintain the security of systems. One of the most effective techniques is the Intrusion Detection System (IDS). An IDS provides a further layer of security in a computerized system, detecting and dealing with intrusions that pass through the firewall, antivirus and other security devices. Intrusion detection techniques are divided into two groups: supervised learning and unsupervised learning. Clustering, which is commonly used to detect possible attacks, is one of the branches of unsupervised learning. Fuzzy sets play an important role in reducing spurious alarms in intrusion detection, where data quality is uncertain. This paper evaluates the fuzzy k-means and k-means algorithms for recognizing intrusions, both of which use the clustering method. |
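A minimal sketch of the fuzzy k-means (fuzzy c-means) updates being compared against plain k-means: each point receives a soft membership in every cluster rather than a hard label. The fuzzifier m = 2 and the random 2-D data are illustrative choices:

```python
import numpy as np

# Fuzzy c-means: alternate between weighted centroid updates and the
# standard soft-membership update. Data and parameters are illustrative.

rng = np.random.default_rng(0)
X = rng.random((200, 2))
k, m = 3, 2.0
U = rng.random((200, k))
U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point

for _ in range(50):
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    ratio = dist[:, :, None] / dist[:, None, :]          # d_ij / d_ic
    U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)   # standard FCM update

hard_labels = U.argmax(axis=1)           # k-means-style assignment, if needed
print(centers.round(2))
```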
pubmed.mineR: An R package with text-mining algorithms to analyse PubMed abstracts | The PubMed literature database is a valuable source of information for scientific research. It is rich in biomedical literature, with more than 24 million citations. Data-mining of such voluminous literature is a challenging task. Although several text-mining algorithms have been developed in recent years with a focus on data visualization, they have limitations such as speed, rigidity and lack of open-source availability. We have developed an R package, pubmed.mineR, wherein we have combined the advantages of existing algorithms, overcome their limitations, and offered user flexibility and links with other packages in Bioconductor and the Comprehensive R Archive Network (CRAN) in order to expand the user's capabilities for executing multifaceted approaches. Three case studies are presented, namely, ‘Evolving role of diabetes educators’, ‘Cancer risk assessment’ and ‘Dynamic concepts on disease and comorbidity’, to illustrate the use of pubmed.mineR. The package generally runs fast, with small elapsed times on regular workstations, even for large corpus sizes and compute-intensive functions. pubmed.mineR is available at http://cran.r-project.org/web/packages/pubmed.mineR . |
An RF scanning receiver based on photonics for electronic warfare applications | We present the design and simulative results of an innovative RF receiver based on photonic techniques that quickly scans the RF band of interest in warfare applications, avoiding spectrum channelization and thus reducing the receiver SWaP. The scheme is based on a stable mode-locked laser and a specifically designed tunable optical filter, and it simultaneously samples, filters, and down-converts the detected spectrum. The analysis confirms the scheme's capability for detecting signals beyond 18 GHz, with instantaneous bandwidth >1 GHz and dynamic range >40 dB. The scheme's implementation with integrated photonics technologies will ensure fast scanning (filter repositioning in <100 ns), high environmental stability, and reduced SWaP, meeting the increasingly stringent requirements of electronic warfare applications. |
Electroacupuncture for control of myeloablative chemotherapy-induced emesis: A randomized controlled trial. | CONTEXT
High-dose chemotherapy poses considerable challenges to emesis management. Although prior studies suggest that acupuncture may reduce nausea and emesis, it is unclear whether such benefit comes from the nonspecific effects of attention and clinician-patient interaction.
OBJECTIVE
To compare the effectiveness of electroacupuncture vs minimal needling and mock electrical stimulation or antiemetic medications alone in controlling emesis among patients undergoing a highly emetogenic chemotherapy regimen.
DESIGN
Three-arm, parallel-group, randomized controlled trial conducted from March 1996 to December 1997, with a 5-day study period and a 9-day follow-up.
SETTING
Oncology center at a university medical center.
PATIENTS
One hundred four women (mean age, 46 years) with high-risk breast cancer.
INTERVENTIONS
Patients were randomly assigned to receive low-frequency electroacupuncture at classic antiemetic acupuncture points once daily for 5 days (n = 37); minimal needling at control points with mock electrostimulation on the same schedule (n = 33); or no adjunct needling (n = 34). All patients received concurrent triple antiemetic pharmacotherapy and high-dose chemotherapy (cyclophosphamide, cisplatin, and carmustine).
MAIN OUTCOME MEASURES
Total number of emesis episodes occurring during the 5-day study period and the proportion of emesis-free days, compared among the 3 groups.
RESULTS
The number of emesis episodes occurring during the 5 days was lower for patients receiving electroacupuncture compared with those receiving minimal needling or pharmacotherapy alone (median number of episodes, 5, 10, and 15, respectively; P<.001). The electroacupuncture group had fewer episodes of emesis than the minimal needling group (P<.001), whereas the minimal needling group had fewer episodes of emesis than the antiemetic pharmacotherapy alone group (P =.01). The differences among groups were not significant during the 9-day follow-up period (P =.18).
CONCLUSIONS
In this study of patients with breast cancer receiving high-dose chemotherapy, adjunct electroacupuncture was more effective in controlling emesis than minimal needling or antiemetic pharmacotherapy alone, although the observed effect had limited duration. JAMA. 2000;284:2755-2761. |
Simulating makeup through physics-based manipulation of intrinsic image layers | We present a method for simulating makeup in a face image. To generate realistic results without detailed geometric and reflectance measurements of the user, we propose to separate the image into intrinsic image layers and alter them according to proposed adaptations of physically-based reflectance models. Through this layer manipulation, the measured properties of cosmetic products are applied while preserving the appearance characteristics and lighting conditions of the target face. This approach is demonstrated on various forms of cosmetics including foundation, blush, lipstick, and eye shadow. Experimental results exhibit a close approximation to ground truth images, without artifacts such as transferred personal features and lighting effects that degrade the results of image-based makeup transfer methods. |
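A hedged sketch of the layer-manipulation idea: given an intrinsic decomposition image = albedo x shading, blend the cosmetic color into the albedo only and recompose, so the target face's lighting survives. The alpha-blend reflectance model and all values below are simplifying assumptions, not the paper's physically-based models:

```python
import numpy as np

# Toy intrinsic-layer makeup edit: change reflectance (albedo) only,
# keep shading, recompose. Decomposition is assumed given.

rng = np.random.default_rng(0)
albedo = rng.random((4, 4, 3))            # assumed intrinsic decomposition
shading = rng.random((4, 4, 1))
image = albedo * shading                  # intrinsic image model

lipstick = np.array([0.62, 0.11, 0.18])   # cosmetic albedo color (illustrative)
alpha = 0.6                               # product coverage/opacity
mask = np.zeros((4, 4, 1)); mask[2:, 1:3] = 1.0   # region where makeup applies

new_albedo = (1 - alpha * mask) * albedo + (alpha * mask) * lipstick
made_up = new_albedo * shading            # recompose: lighting unchanged
print(made_up.shape)
```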
DeepCPU: Serving RNN-based Deep Learning Models 10x Faster | Recurrent neural networks (RNNs) are an important class of deep learning (DL) models. Existing DL frameworks have unsatisfying performance for online serving: many RNN models suffer from long serving latency and high cost, preventing their deployment in production. This work characterizes RNN performance and identifies low data reuse as a root cause. We develop novel techniques and an efficient search strategy to squeeze more data reuse out of this intrinsically challenging workload. We build DeepCPU, a fast serving library on CPUs, to integrate these optimizations for efficient RNN computation. Our evaluation on various RNN models shows that DeepCPU improves latency and efficiency by an order of magnitude on CPUs compared with existing DL frameworks such as TensorFlow. It also empowers CPUs to beat GPUs on RNN serving. In production services of Microsoft, DeepCPU transforms many models from non-shippable (due to latency SLA violation) to shippable (well-fitting latency requirements) and saves millions of dollars of infrastructure costs. |
Effects of amino acids and glucagon on renal hemodynamics in type 1 diabetes. | Increased dietary protein and circulating amino acids raise glomerular filtration rate (GFR) and pressure. In diabetes, this glomerular hyperfiltration response is augmented. The purpose of this study was to determine whether glucagon mediates the augmented GFR response to amino acids in diabetes and whether the responses to amino acids and glucagon depend on prostaglandins. Patients with type 1 diabetes mellitus (n = 12) and normal control subjects (n = 12) were studied in a series of six experiments, each on different occasions. Baseline GFR was not significantly increased, but filtration fraction was higher in diabetes. In response to amino acid infusion, GFR increased more and filtration fraction was greater among those with diabetes. Their augmented GFR response to amino acids was not inhibited by octreotide or indomethacin. Participants with diabetes also had enhanced GFR and renal plasma flow responses to glucagon infusion, both of which were inhibited by indomethacin. Glomerular hyperfiltration responses induced by amino acids or glucagon occur by divergent pathways in diabetes; only the response to glucagon is prostaglandin dependent. |
A prescription fraud detection model | Prescription fraud is a major problem that causes substantial monetary loss in health care systems. We aimed to develop a model for detecting cases of prescription fraud and test it on real-world data from a large multi-center medical prescription database. Conventionally, prescription fraud detection is conducted on random samples by human experts. However, the samples might be misleading and manual detection is costly. We propose a novel distance-based data-mining approach for assessing the fraud risk of prescriptions with respect to cross-features. Final tests were conducted on an adult cardiac surgery database. The results obtained from the experiments reveal that the proposed model works considerably well, with a true positive rate of 77.4% and a false positive rate of 6% for fraudulent medical prescriptions. The proposed model has potential advantages including on-line risk prediction for prescription fraud, off-line analysis of high-risk prescriptions by human experts, and a self-learning ability through regular updates of the integrative data sets. We conclude that incorporating such a system in health authorities, social security agencies and insurance companies would improve the efficiency of internal review to ensure compliance with the law, and radically decrease human-expert auditing costs. |
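A hedged sketch of what a distance-based risk score can look like: a prescription is scored by its distance to the nearest historical prescriptions in an encoded cross-feature space, so unusual combinations stand out. The encoding and the k-nearest-mean rule are illustrative assumptions:

```python
import numpy as np

# Distance-based risk scoring: a prescription far from everything
# historically co-prescribed gets a high score. Data is illustrative.

rng = np.random.default_rng(0)
history = rng.normal(0, 1, size=(500, 6))     # past prescriptions, encoded
new_rx = np.array([[0.1, -0.2, 0.0, 0.3, 0.1, 0.0],    # ordinary-looking
                   [4.0,  3.5, -4.2, 5.0, 3.9, -3.7]]) # outlying

def risk_score(x, history, k=10):
    """Mean distance to the k nearest historical prescriptions;
    large values flag unusual cross-feature combinations."""
    d = np.linalg.norm(history - x, axis=1)
    return float(np.sort(d)[:k].mean())

for rx in new_rx:
    print(round(risk_score(rx, history), 2))
```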
Dual head clustering scheme in wireless sensor networks | Energy efficient data aggregation is one of the key research areas of wireless sensor networks (WSNs). Many clustering techniques have been proposed for topology maintenance and routing in these networks. In addition, these techniques are also beneficial in prolonging the network lifetime. Clustering protocols proposed in the existing literature use a single Cluster Head (CH) for a group of nodes (cluster). In these protocols, the CH performs a number of activities, such as data gathering, data aggregation and data forwarding. As a result, the CH depletes its energy quickly as compared to its member nodes. Hence, re-clustering is required frequently, which consumes considerable energy. This paper proposes an energy efficient Dual Head Clustering Scheme (DHCS) for WSNs. DHCS selects two different nodes within the cluster for cluster management and aggregation, namely the Cluster Head (CH) and the Aggregator Head (AH), respectively. Simulation results show that DHCS outperforms conventional clustering protocols in terms of energy conservation, network lifetime and network latency. |
Extreme Network Compression via Filter Group Approximation | In this paper we propose a novel decomposition method based on filter group approximation, which can significantly reduce the redundancy of deep convolutional neural networks (CNNs) while maintaining the majority of feature representation. Unlike other low-rank decomposition algorithms which operate on the spatial or channel dimension of filters, our proposed method mainly focuses on exploiting the filter group structure for each layer. For several commonly used CNN models, including VGG and ResNet, our method can reduce over 80% of floating-point operations (FLOPs) with less accuracy drop than state-of-the-art methods on various image classification datasets. Besides, experiments demonstrate that our method is conducive to alleviating degeneracy of the compressed network, which hurts the convergence and performance of the network. |
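A hedged sketch of the general decomposition idea: replace a dense KxK convolution with a grouped KxK convolution (filters act within channel groups) followed by a 1x1 convolution that mixes information across groups. The group count below is an illustrative choice; the paper selects the structure per layer:

```python
import torch
import torch.nn as nn

# Dense conv vs. grouped-conv + pointwise-conv approximation.
# Group count and shapes are illustrative.

dense = nn.Conv2d(64, 64, kernel_size=3, padding=1)          # original layer
approx = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8),   # per-group filters
    nn.Conv2d(64, 64, kernel_size=1),                        # cross-group mixing
)

x = torch.rand(1, 64, 32, 32)
params = lambda m: sum(p.numel() for p in m.parameters())
print(dense(x).shape, approx(x).shape)        # identical output shapes
print(params(dense), params(approx))          # grouped variant is far smaller
```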
Rule-Based Forecasting: Using Judgment in Time-Series Extrapolation | Rule-Based Forecasting (RBF) is an expert system that uses judgment to develop and apply rules for combining extrapolations. The judgment comes from two sources, forecasting expertise and domain knowledge. Forecasting expertise is based on more than a half century of research. Domain knowledge is obtained in a structured way; one example of domain knowledge is managers' expectations about trends, which we call "causal forces." Time series are described in terms of 28 conditions, which are used to assign weights to extrapolations. Empirical results on multiple sets of time series show that RBF produces more accurate forecasts than those from traditional extrapolation methods or equal-weights combined extrapolations. RBF is most useful when it is based on good domain knowledge, the domain knowledge is important, the series is well behaved (such that patterns can be identified), there is a strong trend in the data, and the forecast horizon is long. Under ideal conditions, the errors for RBF's forecasts were one-third less than those for equal-weights combining. When these conditions are absent, RBF neither improves nor harms forecast accuracy. Some of RBF's rules can be used with traditional extrapolation procedures. In a series of studies, rules based on causal forces improved the selection of forecasting methods, the structuring of time series, and the assessment of prediction intervals. |
Intrinsic cancer subtypes-next steps into personalized medicine | Recent technological advances have significantly improved our understanding of tumor biology by means of high-throughput mutation and transcriptome analyses. The application of genomics has revealed the mutational landscape and the specific deregulated pathways in different tumor types. At a transcriptional level, multiple gene expression signatures have been developed to identify biologically distinct subgroups of tumors. By supervised analysis, several prognostic signatures have been generated, some of them being commercially available. However, an unsupervised approach is required to discover a priori unknown molecular subtypes, the so-called intrinsic subtypes. Moreover, an integrative analysis of the molecular events associated with tumor biology has been translated into a better tumor classification. This molecular characterization confers new opportunities for therapeutic strategies in the management of cancer patients. However, the applicability of these new molecular classifications is limited because of several issues such as technological validation and cost. Further comparison with well-established clinical and pathological features is expected to accelerate clinical translation. In this review, we will focus on the data reported on molecular classification in the most common tumor types such as breast, colorectal and lung carcinoma, with special emphasis on recent data regarding tumor intrinsic subtypes. Likewise, we will review the potential applicability of these new classifications in the clinical routine. |
Attack Graph Based Evaluation of Network Security | Promising directions in evaluating network security include simulating possible malefactor actions, building representations of these actions as attack graphs (trees, nets), subsequently checking various properties of these graphs, and determining security metrics which can suggest possible ways to increase the security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor actions, construction of attack graphs and computation of different security metrics. The approach is intended for use both at the design and exploitation stages of computer networks. The implemented software system is described, and examples of experiments for analysis of network security level are considered. |
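A minimal sketch of one metric computable on such a graph: with states as nodes and malefactor actions as weighted edges, the cheapest path from the attacker's entry point to a critical asset summarizes exposure. The toy topology and weights are illustrative:

```python
import networkx as nx

# Attack graph as a weighted digraph; the minimal attack cost to the
# critical asset is one simple security metric. Topology is illustrative.

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("internet", "web_server", 1.0),     # exploit a public service
    ("web_server", "app_server", 2.0),   # pivot via stolen credentials
    ("internet", "vpn", 3.0),            # brute-force the VPN
    ("vpn", "app_server", 1.0),
    ("app_server", "database", 2.0),     # escalate to the asset
])

path = nx.shortest_path(G, "internet", "database", weight="weight")
cost = nx.shortest_path_length(G, "internet", "database", weight="weight")
print(path, cost)   # hardening should raise this minimal attack cost
```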
The effect of off-time and annealing on the magnetic behavior of CoxSn1−x alloy nanowires | CoxSn1−x alloy nanowires were ac-pulse electrodeposited into a highly ordered alumina template at the same electrolyte concentration. The effect of off-time between pulses (0, 20, 50, 100 and 200 ms) on the magnetic properties and microstructure of the CoxSn1−x nanowires was investigated. Using a single electrodeposition bath, the Sn content increased in the range of 3–55 at.% with increasing off-time. Improvement of the hcp-Co (0 0 2) crystalline orientation with increasing off-time was the main source of the increased coercivity of the as-deposited alloy nanowires. Segregation of magnetic grains by the Sn element improved the coercivity of the annealed samples. The presence of non-magnetic Sn prevented the oxidation of magnetic grains, so the saturation magnetization of the annealed samples was unchanged. |
Mobile device fingerprinting considered harmful for risk-based authentication | In this paper, we present a critical assessment of the use of device fingerprinting for risk-based authentication in a state-of-practice identity and access management system. Risk-based authentication automatically elevates the level of authentication whenever a particular risk threshold is exceeded. Contemporary identity and access management systems frequently leverage browser-based device fingerprints to recognize trusted devices of a certain individual. We analyzed the variability and the predictability of mobile device fingerprints. Our research shows that particularly for mobile devices the fingerprints carry a lot of similarity, even across models and brands, making them less reliable for risk assessment and step-up authentication. |
Serum uric acid in multiple sclerosis: further evidences of reduced levels. | |
Modeling the learning progressions of computational thinking of primary grade students | We introduce the Progression of Early Computational Thinking (PECT) Model, a framework for understanding and assessing computational thinking in the primary grades (Grades 1 to 6). The model synthesizes measurable evidence from student work with broader, more abstract coding design patterns, which are then mapped onto computational thinking concepts.
We present the results of a pilot-test study of the PECT Model in order to demonstrate its potential efficacy in detecting both differences in computational thinking among students of various ages as well as any clear overall progressions in increasing computational sophistication. Results of this sort are vital for establishing research-based and age-appropriate curricula for students in the primary grades, i.e., developing non-trivial, challenging but not overly daunting lesson plans that utilize the cognitive development stage of each grade level most effectively. |
Energy production from biomass (Part 1): Overview of biomass. | The use of renewable energy sources is becoming increasingly necessary, if we are to achieve the changes required to address the impacts of global warming. Biomass is the most common form of renewable energy, widely used in the third world but until recently, less so in the Western world. Latterly much attention has been focused on identifying suitable biomass species, which can provide high-energy outputs, to replace conventional fossil fuel energy sources. The type of biomass required is largely determined by the energy conversion process and the form in which the energy is required. In the first of three papers, the background to biomass production (in a European climate) and plant properties is examined. In the second paper, energy conversion technologies are reviewed, with emphasis on the production of a gaseous fuel to supplement the gas derived from the landfilling of organic wastes (landfill gas) and used in gas engines to generate electricity. The potential of a restored landfill site to act as a biomass source, providing fuel to supplement landfill gas-fuelled power stations, is examined, together with a comparison of the economics of power production from purpose-grown biomass versus waste-biomass. The third paper considers particular gasification technologies and their potential for biomass gasification. |
Sleep and delinquency: does the amount of sleep matter? | Sleep, a key indicator of health, has been linked to a variety of indicators of well-being such that people who get an adequate amount generally experience greater well-being. Further, a lack of sleep has been linked to a wide range of negative developmental outcomes, yet sleep has been largely overlooked among researchers interested in adolescent delinquency. The purpose of this study was to explore the relationship between hours of sleep and delinquent behavior among adolescents by using data from Wave 1 of the National Longitudinal Study of Adolescent Health (n = 14,382; 50.2% female, 63.5% white). A series of negative binomial regressions showed that youth who typically sleep seven or fewer hours per night reported significantly more property delinquency than youth who sleep the recommended 8-10 h. Further, youth who reported sleeping 5 or fewer hours per night reported significantly more violent delinquency than youth who reported sleeping the recommended number of hours per night. The findings suggest that sleep is an important, and overlooked, dimension of delinquent behavior and studies that focus on adolescent health should further investigate the effects of insufficient sleep. Finally, the authors recommend that sleep and other relevant health behaviors be considered in the context of more comprehensive approaches to delinquency prevention and intervention. |
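A hedged sketch of the analysis named above: a negative binomial regression of delinquency counts on a sleep indicator. The simulated data and the single dummy predictor are illustrative stand-ins for the Add Health variables:

```python
import numpy as np
import statsmodels.api as sm

# Negative binomial regression of a delinquency count on a short-sleep
# dummy. Simulated data; a positive slope mimics the reported pattern.

rng = np.random.default_rng(0)
short_sleep = rng.integers(0, 2, size=500)              # 1 = <= 7 hours/night
counts = rng.negative_binomial(2, 0.5, size=500) + short_sleep

X = sm.add_constant(short_sleep.astype(float))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)    # positive slope -> more delinquency with short sleep
```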
Imperforate hymen and urinary retention in a newborn girl. | Urinary retention is relatively rare in infants, especially in girls. Imperforate hymen is the most frequent congenital malformation of the female genital tract and is usually asymptomatic until puberty. Mucocolpos presenting as an abdominal mass in the neonatal period is extremely rare. We report a case of a 20-day-old newborn girl with acute urinary retention due to an isolated imperforate hymen and mucocolpos.
Safety and efficacy of selective retina therapy (SRT) for the treatment of diabetic macular edema in Korean patients | Selective retina therapy (SRT) stimulates retinal pigment epithelium (RPE) cell migration and proliferation into irradiated areas. The objective of this study was to evaluate the efficacy and safety of SRT in Korean patients with clinically significant diabetic macular edema (DME) in a prospective, non-randomized, interventional case series. Twenty-three eyes of 21 patients with clinically significant DME were treated with SRT and followed for 6 months. Patients underwent an evaluation of best corrected visual acuity (BCVA) in Early Treatment Diabetic Retinopathy Study (ETDRS) letters. Microperimetry was employed to measure macular sensitivity within the central 10° field, and the central macular thickness (CMT) and maximum macular thickness (MMT) were measured. An improvement in BCVA of one to two ETDRS lines was observed in 41.2% of patients and an improvement of greater than two lines in 29.4%. Although there was no significant change in CMT (P > 0.05), MMT decreased from 465.8 ± 87.4 μm to 434.3 ± 83.9 μm (P = 0.006), and mean macular sensitivity increased from 20.8 ± 3.4 dB to 22.5 ± 3.5 dB (P = 0.02). The gains in BCVA and the improvement in macular sensitivity suggest that SRT may be an effective and safe treatment modality in Korean patients with clinically significant DME.
It Takes Two to Tango: Deleted Stack Overflow Question Prediction with Text and Meta Features | Stack Overflow is a popular community-based Q&A website that caters to the technical needs of software developers. As of February 2015, Stack Overflow had more than 3.9M registered users, 8.8M questions, and 41M comments. Stack Overflow provides explicit and detailed guidelines on how to post questions, but some questions are still of very poor quality. Such questions are deleted by experienced community members and moderators. Deleted questions increase maintenance cost and have an adverse impact on the user experience, so predicting deleted questions is an important task. In this study, we propose a two-stage hybrid approach, DelPredictor, which combines text processing and classification techniques to predict deleted questions. In the first stage, DelPredictor converts text in the title, body, and tag fields of questions into numerical textual features via text processing and classification techniques. In the second stage, it extracts meta features that can be categorized as profile, community, content, and syntactic features. Next, it learns and combines two independent classifiers built on the textual and meta features. We evaluate DelPredictor on 5 years (2008-2013) of deleted questions from Stack Overflow. Our experimental results show that DelPredictor improves the F1-score over baseline prediction, a prior approach [12], and a text-based approach by 29.50%, 9.34%, and 28.11%, respectively.
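The abstract does not specify DelPredictor's exact features or learners, so the sketch below only illustrates the general two-stage idea under assumed choices: TF-IDF text features, a handful of invented meta features, and simple averaging of two classifiers' predicted probabilities.

```python
# Illustrative two-stage sketch in the spirit of DelPredictor: one
# classifier on textual features, one on meta features, combined by
# averaging predicted deletion probabilities. Feature choices and
# learners are assumptions, not the paper's actual configuration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Toy data: question text, meta features [reputation, n_tags, body_len],
# and whether the question was eventually deleted.
texts = ["how do i loop", "plz send code urgent", "segfault in qsort",
         "do my homework", "idiomatic rust error handling"]
meta = np.array([[120, 2, 300], [1, 1, 40], [950, 3, 500],
                 [2, 1, 25], [400, 2, 700]], dtype=float)
deleted = np.array([0, 1, 0, 1, 0])

# Stage 1: text classifier on TF-IDF features.
vec = TfidfVectorizer()
text_clf = LogisticRegression().fit(vec.fit_transform(texts), deleted)

# Stage 2: meta-feature classifier.
meta_clf = RandomForestClassifier(n_estimators=100, random_state=0)
meta_clf.fit(meta, deleted)

def predict_deleted(text, meta_row):
    """Score-level fusion of the two independent classifiers."""
    p_text = text_clf.predict_proba(vec.transform([text]))[0, 1]
    p_meta = meta_clf.predict_proba(meta_row.reshape(1, -1))[0, 1]
    return (p_text + p_meta) / 2

print(predict_deleted("plz urgent code", np.array([1.0, 1.0, 30.0])))
```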
An efficient chaotic water cycle algorithm for optimization tasks | The water cycle algorithm (WCA) is a recent population-based meta-heuristic inspired by the idealized hydrological cycle observed in nature. The conventional WCA has demonstrated superior performance compared to other well-established techniques in solving both constrained and unconstrained problems. As with other meta-heuristics, however, premature convergence to local optima may still occur on some optimization tasks. Mirroring the chaotic behavior present in the real water cycle, this article incorporates chaotic patterns into the stochastic processes of WCA to improve the performance of the conventional algorithm and to mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (39 meta-heuristics in total) are implemented, and the best signal is selected as the most appropriate chaotic modification of WCA. Second, the chaotic algorithm is applied to various benchmark problems from the specialized literature and to the training of neural networks. The comparative statistical results clearly show that the premature convergence problem is relieved significantly. Chaotic WCA with a sinusoidal map and chaotic-enhanced operators not only exploits high-quality solutions efficiently but also outperforms the standard WCA optimizer and the other investigated algorithms.
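The abstract singles out the sinusoidal map as the best-performing chaotic signal. A common form of that map uses a = 2.3 and x0 = 0.7; the WCA-style update below is a simplified stand-in for the paper's full algorithm, shown only to illustrate how a chaotic value replaces a uniform random draw:

```python
# Sketch: a sinusoidal chaotic map used in place of uniform random
# numbers inside a meta-heuristic's stochastic update. The WCA update
# shown is a simplified stand-in, not the paper's full algorithm.
import math

def sinusoidal_map(x0=0.7, a=2.3):
    """Yield chaotic values in (0, 1): x_{k+1} = a * x_k^2 * sin(pi * x_k)."""
    x = x0
    while True:
        x = a * x * x * math.sin(math.pi * x)
        yield x

chaos = sinusoidal_map()

def move_toward(stream, sea, c=2.0):
    """WCA-style step moving a stream toward the sea, with the usual
    uniform rand() replaced by the next chaotic value."""
    r = next(chaos)  # chaotic number instead of random.random()
    return [s + c * r * (b - s) for s, b in zip(stream, sea)]

print(move_toward([1.0, -2.0], [0.0, 0.0]))
```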
The top-ten in journal impact factor manipulation | A considerable part of the scientific community is, at least to some degree, involved in the “impact factor game”. Editors strive to increase their journals' impact factor (IF) in order to gain influence in the fields of basic and applied research, and scientists seek to profit from the “added value” of publishing in top-IF journals. In this article we point out the most common “tricks” for engineering and manipulating the IF undertaken by a portion of professionals in the scientific publishing industry. They attempt to increase the numerator or decrease the denominator of the IF equation by taking advantage of certain design flaws and disadvantages of the IF that permit a degree of artificial and arbitrary inflation. Some of these practices, if not scientifically unethical, are at least questionable and should be abandoned. Editors and publishers should strive for quality through fair and thoughtful selection of papers forwarded for peer review and editorial comments that enhance the quality and scientific accuracy of a manuscript.
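For reference, the standard two-year impact factor that these tactics target is the ratio

```latex
\mathrm{IF}_{y} \;=\;
\frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
     {\text{number of citable items published in years } y-1 \text{ and } y-2}
```

so inflating citation counts in the numerator (e.g., coercive self-citation) or keeping items out of the "citable" denominator (e.g., classifying them as editorial material) both push the ratio upward.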
Conductive polymers: towards a smart biomaterial for tissue engineering. | Developing stimulus-responsive biomaterials with easy-to-tailor properties is a highly desired goal of the tissue engineering community. A novel type of electroactive biomaterial, the conductive polymer, promises to become one such material. Conductive polymers are already used in fuel cells, computer displays and microsurgical tools, and are now finding applications in the field of biomaterials. These versatile polymers can be synthesised alone, as hydrogels, combined into composites or electrospun into microfibres. They can be created to be biocompatible and biodegradable. Their physical properties can easily be optimised for a specific application by binding biologically important molecules into the polymer using one of the many available methods of functionalisation. Their conductive nature allows cells or tissue cultured upon them to be stimulated, the polymers' own physical properties to be influenced post-synthesis, and the drugs bound in them to be released, all through the application of an electrical signal. It is thus little wonder that these polymers are becoming very important materials for biosensors, neural implants, drug delivery devices and tissue engineering scaffolds. Focusing mainly on polypyrrole, polyaniline and poly(3,4-ethylenedioxythiophene), we review conductive polymers from the perspective of tissue engineering. The basic properties of conductive polymers, their chemical and electrochemical synthesis, the phenomena underlying their conductivity and the ways to tailor their properties (functionalisation, composites, etc.) are discussed.
Multiresolution splatting for indirect illumination | Global illumination provides a visual richness not achievable with the direct illumination models used by most interactive applications. To generate global effects, numerous approximations attempt to reduce global illumination costs to levels feasible in interactive contexts. One such approximation, reflective shadow maps, samples a shadow map to identify secondary light sources whose contributions are splatted into eye-space. This splatting introduces significant overdraw that is usually reduced by artificially shrinking each splat's radius of influence. This paper introduces a new, multiresolution approach for interactively splatting indirect illumination. Instead of reducing GPU fill rate by reducing splat size, we reduce fill rate by rendering splats into a multiresolution buffer. This takes advantage of the low-frequency nature of diffuse and glossy indirect lighting, allowing rendering of indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities. Because this multiresolution rendering occurs on a per-splat basis, we can significantly reduce fill rate without arbitrarily clipping splat contributions below a given threshold; those regions are simply rendered at a coarse resolution.
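As a rough CPU-side sketch of the idea (the actual technique runs on the GPU with several buffer levels and interpolation; the two-level version below, with an invented falloff kernel, only illustrates how splats near discontinuities take the full-resolution path while smooth regions take the cheap coarse path):

```python
# Toy two-level illustration of multiresolution splatting: splats whose
# footprint crosses an illumination discontinuity are rendered at full
# resolution; the rest accumulate into a quarter-size buffer and are
# upsampled. Falloff kernel and two-level pyramid are simplifications.
import numpy as np

def splat_multires(splats, disc_mask):
    """splats: list of (cx, cy, radius, power); disc_mask: bool (H, W),
    H and W assumed even."""
    H, W = disc_mask.shape
    fine = np.zeros((H, W))
    coarse = np.zeros((H // 2, W // 2))
    ys, xs = np.mgrid[0:H, 0:W]
    cys, cxs = np.mgrid[0:H:2, 0:W:2]  # coarse-pixel sample positions

    for cx, cy, r, power in splats:
        y0, y1 = max(0, cy - r), min(H, cy + r + 1)
        x0, x1 = max(0, cx - r), min(W, cx + r + 1)
        if disc_mask[y0:y1, x0:x1].any():
            d2 = (ys - cy) ** 2 + (xs - cx) ** 2     # full-res falloff
            fine += np.where(d2 <= r * r, power / (1.0 + d2), 0.0)
        else:
            d2 = (cys - cy) ** 2 + (cxs - cx) ** 2   # quarter the fill cost
            coarse += np.where(d2 <= r * r, power / (1.0 + d2), 0.0)

    # Upsample the coarse buffer and composite with the fine buffer.
    return fine + np.kron(coarse, np.ones((2, 2)))[:H, :W]

mask = np.zeros((64, 64), dtype=bool)
mask[:, 32] = True                                   # a depth/normal edge
img = splat_multires([(10, 10, 6, 1.0), (30, 32, 6, 1.0)], mask)
```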
Thermal analysis of a segmented stator winding design | This paper presents a thermal analysis of a segmented stator winding design. As thermal performance is one of the main factors limiting a machine's output capability, a thermal test on a complete prototype machine is an essential part of the design process. For the segmented stator winding design, however, a test-informed thermal analysis of a single stator tooth can be performed prior to the manufacture of the full machine. This approach allows a rapid and inexpensive assessment of the thermal performance of the complete machine and early identification of any design modifications needed. The research has been applied to the design of a highly efficient and compact permanent magnet (PM) traction motor. A thermal model of a single tooth was developed and supported by tests to identify key heat transfer coefficients. A number of winding assemblies were compared, and the most promising was selected for the final motor prototype. The results from the approach are compared with thermal test results from the complete machine.
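The abstract does not detail the single-tooth model; a common approach for this kind of analysis is a lumped-parameter thermal network. The sketch below uses invented node names, conductances, and losses (winding to tooth to housing to ambient), not the paper's measured data.

```python
# Minimal lumped-parameter thermal network for a single stator tooth,
# of the kind used to pre-assess winding designs. Node layout and all
# numerical values are illustrative, not the paper's measured data.
import numpy as np

# Nodes: 0 = winding, 1 = tooth iron, 2 = housing. Ambient is the
# reference node (0 K rise). Conductances in W/K.
g_wt, g_th, g_ha = 2.0, 5.0, 8.0  # winding-tooth, tooth-housing, housing-ambient

# Conductance matrix G (diagonal: sum of conductances at each node;
# off-diagonal: negative conductance between node pairs) and loss vector P.
G = np.array([[ g_wt,       -g_wt,        0.0 ],
              [-g_wt, g_wt + g_th,       -g_th],
              [  0.0,       -g_th, g_th + g_ha]])
P = np.array([60.0, 10.0, 0.0])   # copper loss in winding, iron loss in tooth (W)

T_rise = np.linalg.solve(G, P)    # steady-state temperature rise over ambient
for name, t in zip(["winding", "tooth", "housing"], T_rise):
    print(f"{name}: {t:.1f} K above ambient")
```

With these numbers the full 70 W must cross the housing-ambient conductance, giving 8.75 K at the housing, 22.75 K at the tooth, and 52.75 K at the winding, which is the kind of hot-spot estimate such a model delivers before any prototype exists.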