title | abstract |
---|---|
A Focus Theory of Normative Conduct: Recycling the Concept of Norms to Reduce Littering in Public Places | Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove of) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss' littering decisions to change only in accord with the dictates of the then more salient type of norm. |
Is the patient's baseline inhaled steroid dose a factor for choosing the budesonide/formoterol maintenance and reliever therapy regimen? | OBJECTIVE
Baseline inhaled corticosteroid (ICS) dose may be a factor for prescribers to consider when they select a budesonide/formoterol maintenance and reliever therapy regimen for symptomatic asthmatics.
METHODS
A 6-month randomized study compared two maintenance doses of budesonide/formoterol 160/4.5 µg, 1 × 2 and 2 × 2, plus as needed, in 8424 asthma patients with symptoms when treated with ICS ± an inhaled long-acting β(2)-agonist (LABA). In the total study population, 1339 (17%) were high-dose ICS (HD) users (≥ 1600 µg/day budesonide). This HD stratum was compared with the rest of the study population, divided into low-dose (LD; ≤ 400 µg/day) and medium-dose strata (MD; 401-1599 µg/day), with regard to severe asthma exacerbations and mean changes in five-item Asthma Control Questionnaire (ACQ(5)) scores from baseline.
RESULTS
In all three strata there were fewer exacerbations in the 2 × 2 treatment groups (yearly rates 0.268, 0.172 and 0.094) than in the 1 × 2 treatment groups (yearly rates 0.232, 0.138 and 0.764). In no stratum was the difference between the treatment groups statistically significant. There was no statistically significant difference in time to the first severe exacerbation between the treatments 2 × 2 and 1 × 2 in the HD group (hazard ratio 0.944, p = 0.75). The adjusted mean changes in ACQ(5) scores in the HD, MD and LD strata were -0.89, -0.61 and -0.65, respectively, with 1 × 2 treatment and -0.90, -0.74 and -0.76, respectively, with 2 × 2 treatment. In the MD and LD strata, the difference between doses was significant in favour of 2 × 2 (MD p < 0.0001; LD p = 0.004), but not in the HD stratum (p = 0.870). No difference in serious adverse events was seen.
CONCLUSION
Compared with the LD and MD strata, the HD stratum patients had more exacerbations and a shorter time to first exacerbation. However, there were no differences in response between the 1 × 2 and 2 × 2 groups in any of the strata. This indicates that patients using budesonide/formoterol maintenance and reliever therapy, irrespective of baseline ICS dose, can be switched to 1 × 2 with its lower steroid load. ACQ(5) scores improved more in the HD stratum than in the MD and LD strata indicating, among other things, that HD patients were not overtreated at baseline. |
The Nature of Creativity | Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, the propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. Finally, it draws some conclusions. |
Eating when bored: revision of the emotional eating scale with a focus on boredom. | OBJECTIVE
The current study explored whether eating when bored is a distinct construct from other negative emotions by revising the emotional eating scale (EES) to include a separate boredom factor. Additionally, the relative endorsement of eating when bored compared to eating in response to other negative emotions was examined.
METHOD
A convenience sample of 139 undergraduates completed open-ended questions regarding their behaviors when experiencing different levels of emotions. Participants were then given the 25-item EES with 6 additional items designed to measure boredom.
RESULTS
On the open-ended items, participants more often reported eating in response to boredom than the other emotions. Exploratory factor analysis showed that boredom is a separate construct from other negative emotions. Additionally, the most frequently endorsed item on the EES was "eating when bored".
CONCLUSIONS
These results suggest that boredom is an important construct, and that it should be considered a separate dimension of emotional eating. |
AMS Without 4-Wise Independence on Product Domains | 1 University of California Los Angeles. Supported in part by NSF grants 0716835, 0716389, 0830803, 0916574 and Lockheed Martin Corporation. E-mail address: [email protected] URL: http://www.cs.ucla.edu/~vova 2 Harvard School of Engineering and Applied Sciences. Supported by US-Israel BSF grant 2006060 and NSF grant CNS-0831289. E-mail address: [email protected] URL: http://people.seas.harvard.edu/~kmchung/ 3 Harvard School of Engineering and Applied Sciences. Supported in part by NSF grant CNS-0721491. The work was finished during an internship in Microsoft Research Asia. E-mail address: [email protected] URL: http://people.seas.harvard.edu/~zliu/ 4 Harvard School of Engineering and Applied Sciences. Supported in part by NSF grant CNS-0721491 and research grants from Yahoo!, Google, and Cisco. E-mail address: [email protected] URL: http://www.eecs.harvard.edu/~michaelm/ 5 University of California Los Angeles. Supported in part by an IBM Faculty Award, Lockheed-Martin Corporation Research Award, Xerox Innovation Group Award, the Okawa Foundation Award, Intel, Teradata, NSF grants 0716835, 0716389, 0830803, 0916574 and a U.C. MICRO grant. E-mail address: [email protected] URL: http://www.cs.ucla.edu/~rafail |
Document Representation and Query Expansion Models for Blog Recommendation | We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy. |
Walking in a Cube: Novel Metaphors for Safely Navigating Large Virtual Environments in Restricted Real Workspaces | Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, both due to the screens (limited translation) and the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques that have three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking and finally, provide a more enjoyable and ecological interaction paradigm compared to traditional controller-based approaches. We notably introduce the "Virtual Companion", which uses a small bird to guide the user through VEs larger than the physical space. We evaluate the three new techniques through a user study with travel-to-target and path following tasks. The study provides insight into the relative strengths of each new technique for the three aforementioned goals. Specifically, if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking is more important, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; last, fun and ecological criteria would favor the Virtual Companion. |
Educational data mining with Python and Apache Spark: a hands-on tutorial | An enormous amount of educational data has been accumulated through Massive Open Online Courses (MOOCs), as well as commercial and non-commercial learning platforms. This is in addition to the educational data released by the US government since 2012 to facilitate disruption in education by making data freely available. The high volume, variety and velocity of the collected data necessitate the use of big data tools and storage systems such as distributed databases for storage and Apache Spark for analysis.
This tutorial will introduce researchers and faculty to real-world applications involving data mining and predictive analytics in the learning sciences. In addition, the tutorial will introduce the statistics required to validate and accurately report results. Topics will cover how big data is being used to transform education. Specifically, we will demonstrate how exploratory data analysis, data mining, predictive analytics, machine learning, and visualization techniques are being applied to educational big data to improve learning and scale insights derived from millions of students' records.
The tutorial will be held over a half day and will be hands-on with pre-posted material. Due to the interdisciplinary nature of the work, the tutorial appeals to researchers from a wide range of backgrounds including big data, predictive analytics, learning sciences, educational data mining, and, in general, those interested in how big data analytics can transform learning. As a prerequisite, attendees are required to have familiarity with at least one programming language. |
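A minimal PySpark sketch of the kind of workflow such a tutorial covers (not the tutorial's actual material): it loads a hypothetical student-records file, computes a simple aggregate, and fits a dropout classifier. The file name, column names, and label are invented placeholders.

```python
# Hedged sketch: hypothetical MOOC export with one row per student.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("edm-tutorial-sketch").getOrCreate()

# Load hypothetical student records (path and columns are placeholders).
df = spark.read.csv("student_records.csv", header=True, inferSchema=True)

# Simple exploratory aggregate: average quiz score per course.
df.groupBy("course_id").avg("quiz_score").show()

# Assemble features and fit a logistic regression predicting dropout (0/1).
features = VectorAssembler(
    inputCols=["num_video_views", "num_forum_posts", "quiz_score"],
    outputCol="features")
train = features.transform(df)
model = LogisticRegression(labelCol="dropped_out", featuresCol="features").fit(train)
print("AUC on training data:", model.summary.areaUnderROC)
```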
The Animacy Continuum in the Human Ventral Vision Pathway | Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate–inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise. |
Fast Supervised Discrete Hashing | Learning-based hashing algorithms are “hot topics” because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called “fast supervised discrete hashing” (FSDH) based on “supervised discrete hashing” (SDH). Regressing the training examples (or hash code) to the corresponding class labels is widely used in ordinary least squares regression. Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem - discrete optimization for binary hash codes - solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single rather than iterative hash code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH for solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12 times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 database, and FSDH is about 151 times faster than FastHash when the number of hashing bits is 64 on the MNIST database. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods. |
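A minimal sketch of the label-to-code regression idea described above, on synthetic matrices; it shows only a ridge-style closed form for regressing class labels to hash codes, not the full FSDH optimization (binary-code updates, bit balancing, and the remaining sub-problems are omitted).

```python
# Sketch only: synthetic labels and codes; not the authors' full algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, c, L = 1000, 10, 64                      # samples, classes, hash bits
Y = np.eye(c)[rng.integers(0, c, n)]        # one-hot class labels, n x c
B = np.sign(rng.standard_normal((n, L)))    # current binary codes, n x L

# Regress class labels to hash codes: min_W ||B - Y W||^2 + lam ||W||^2.
# The closed form only involves a c x c inverse, which is why it is cheap.
lam = 1.0
W = np.linalg.solve(Y.T @ Y + lam * np.eye(c), Y.T @ B)   # c x L
print("reconstruction error:", np.linalg.norm(B - Y @ W))
```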
Leader-follower formation control using artificial potential functions: A kinematic approach | This paper presents a novel formation control technique for a group of differentially driven wheeled mobile robots employing artificial potential field based navigation and a leader-follower formation control scheme. In the proposed method, the leader robot of the group determines its path of navigation by an artificial potential field, and the other robots in the group follow the leader, maintaining a particular formation by employing the (l - ψ) control. As the leader robot navigates by the artificial potential field, it can easily avoid collisions with obstacles and can follow an optimal path while reaching the goal position. The follower robots adapt their formation by suitably controlling the desired separation distance and the bearing angle. Thus, the original formation can be regained even if the formation is temporarily lost due to passage through a narrow opening or path. Therefore, the overall formation control scheme results in a robust and adaptive formation control for a group of autonomous differentially driven wheeled mobile robots. The effectiveness of the proposed formation control technique has been verified in simulation. |
Data Selection Strategies for Multi-Domain Sentiment Analysis | Domain adaptation is important in sentiment analysis as sentiment-indicating words vary between domains. Recently, multi-domain adaptation has become more pervasive, but existing approaches train on all available source domains including dissimilar ones. However, the selection of appropriate training data is as important as the choice of algorithm. We undertake – to our knowledge for the first time – an extensive study of domain similarity metrics in the context of sentiment analysis and propose novel representations, metrics, and a new scope for data selection. We evaluate the proposed methods on two large-scale multi-domain adaptation settings on tweets and reviews and demonstrate that they consistently outperform strong random and balanced baselines, while our proposed selection strategy outperforms instance-level selection and yields the best score on a large reviews corpus. All experiments are available at url_redacted1 |
Big behavioral data: psychology, ethology and the foundations of neuroscience | Behavior is a unifying organismal process where genes, neural function, anatomy and environment converge and interrelate. Here we review the current state and discuss the future effect of accelerating advances in technology for behavioral studies, focusing on rodents as an example. We frame our perspective in three dimensions: the degree of experimental constraint, dimensionality of data and level of description. We argue that 'big behavioral data' presents challenges proportionate to its promise and describe how these challenges might be met through opportunities afforded by the two rival conceptual legacies of twentieth century behavioral science, ethology and psychology. We conclude that, although 'more is not necessarily better', copious, quantitative and open behavioral data has the potential to transform and unify these two disciplines and to solidify the foundations of others, including neuroscience, but only if the development of new theoretical frameworks and improved experimental designs matches the technological progress. |
Intuitive visualization of Pareto Frontier for multi-objective optimization in n-dimensional performance space | A visualization methodology is presented in which a Pareto Frontier can be visualized in an intuitive and straightforward manner for an n-dimensional performance space. Based on this visualization, it is possible to quickly identify ‘good’ regions of the performance and optimal design spaces for a multi-objective optimization application, regardless of space complexity. Visualizing Pareto solutions for more than three objectives has long been a significant challenge to the multi-objective optimization community. The Hyper-space Diagonal Counting (HSDC) method described here enables the lossless visualization to be implemented. The proposed method requires no dimension fixing. In this paper, we demonstrate the usefulness of visualizing n-f space (i.e. for more than three objective functions in a multi-objective optimization problem). The visualization is shown to aid in the final decision of what potential optimal design point should be chosen amongst all possible Pareto solutions. |
Principal Component Analysis by $L_{p}$-Norm Maximization | This paper proposes several principal component analysis (PCA) methods based on Lp-norm optimization techniques. In doing so, the objective function is defined using the Lp-norm with an arbitrary p value, and the gradient of the objective function is computed on the basis of the fact that the number of training samples is finite. In the first part, an easier problem of extracting only one feature is dealt with. In this case, principal components are searched for either by a gradient ascent method or by a Lagrangian multiplier method. When more than one feature is needed, features can be extracted one by one greedily, based on the proposed method. Second, a more difficult problem is tackled that simultaneously extracts more than one feature. The proposed methods are shown to find a local optimal solution. In addition, they are easy to implement without significantly increasing computational complexity. Finally, the proposed methods are applied to several datasets with different values of p and their performances are compared with those of conventional PCA methods. |
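A minimal sketch, under assumed settings (step size, iteration count, p = 1.5), of extracting a single Lp-norm principal component by projected gradient ascent; it illustrates the general single-feature approach described in the abstract rather than the paper's exact algorithm.

```python
# Maximize sum_i |x_i^T w|^p subject to ||w||_2 = 1 by projected gradient ascent.
# Step size and stopping rule are illustrative assumptions, not from the paper.
import numpy as np

def lp_pca_one_component(X, p=1.5, lr=1e-3, iters=500):
    d = X.shape[1]
    w = np.random.default_rng(0).standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = X @ w                                         # projections, shape (n,)
        grad = X.T @ (p * np.abs(s) ** (p - 1) * np.sign(s))
        w = w + lr * grad
        w /= np.linalg.norm(w)                            # project back to unit sphere
    return w

X = np.random.default_rng(1).standard_normal((200, 5))
X -= X.mean(axis=0)                                       # center, as in ordinary PCA
print(lp_pca_one_component(X, p=1.5))
```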
WUW - wear Ur world: a wearable gestural interface | Information is traditionally confined to paper or digitally to a screen. In this paper, we introduce WUW, a wearable gestural interface, which attempts to bring information out into the tangible world. By using a tiny projector and a camera mounted on a hat or coupled in a pendant-like wearable device, WUW sees what the user sees and visually augments surfaces or physical objects the user is interacting with. WUW projects information onto surfaces, walls, and physical objects around us, and lets the user interact with the projected information through natural hand gestures, arm movements or interaction with the object itself. |
A Collision-Mitigation Cuckoo Hashing Scheme for Large-Scale Storage Systems | With the rapid growth of the amount of information, cloud computing servers need to process and analyze large amounts of high-dimensional and unstructured data timely and accurately. This usually requires many query operations. Due to their simplicity and ease of use, cuckoo hashing schemes have been widely used in real-world cloud-related applications. However, due to potential hash collisions, cuckoo hashing suffers from endless loops and high insertion latency, and even a high risk of reconstruction of the entire hash table. In order to address these problems, we propose a cost-efficient cuckoo hashing scheme, called MinCounter. The idea behind MinCounter is to alleviate the occurrence of endless loops in data insertion by selecting unbusy kicking-out routes. MinCounter selects the “cold” (infrequently accessed), rather than random, buckets to handle hash collisions. We further improve the concurrency of the MinCounter scheme to pursue higher performance and adapt to concurrent applications. MinCounter has the salient features of offering efficient insertion and query services and delivering high performance of cloud servers, as well as enhancing the experiences for cloud users. We have implemented MinCounter in a large-scale cloud testbed and examined the performance by using three real-world traces. Extensive experimental results demonstrate the efficacy and efficiency of MinCounter. |
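A toy sketch of the core idea: when both candidate buckets are occupied, displace the item in the bucket whose kick-out counter is smallest (the "cold" bucket). The table size, hash functions, and single-slot buckets are illustrative assumptions, not the paper's implementation.

```python
# Toy cuckoo table with two hash functions and per-bucket kick counters.
import hashlib

NUM_BUCKETS = 8
MAX_KICKS = 16

def h(key, seed):
    return int(hashlib.md5(f"{seed}:{key}".encode()).hexdigest(), 16) % NUM_BUCKETS

table = [None] * NUM_BUCKETS       # stored keys
counter = [0] * NUM_BUCKETS        # per-bucket kick counters

def insert(key):
    for _ in range(MAX_KICKS):
        candidates = [h(key, 0), h(key, 1)]
        empty = [b for b in candidates if table[b] is None]
        if empty:
            table[empty[0]] = key
            return True
        # No empty candidate: kick out from the "coldest" candidate bucket.
        victim = min(candidates, key=lambda b: counter[b])
        counter[victim] += 1
        key, table[victim] = table[victim], key
    return False   # give up; a real implementation would trigger a rehash

for k in ["a", "b", "c", "d", "e"]:
    print(k, insert(k))
```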
Relative Age Effect in UEFA Championship Soccer Players | The Relative Age Effect (RAE) refers to the breakdown of athletes by both age grouping and date of birth. In the past 20 years the existence of this effect has been shown, with higher or smaller impact, in multiple sports, including soccer. The purpose of this study was to identify the existence of RAE in European soccer players. The sample included 841 elite soccer players who were participants in the UEFA European Soccer Championship in different categories. The professional category (n = 368), U-19 (n = 144) and U-17 (n = 145) were in 2012, and U-21 was in 2011 (n = 184). The Kolmogorov-Smirnov test and the Levene test recommended the use of nonparametric statistics. The results obtained by the chi-square test, the Kruskal-Wallis test and Cohen's effect sizes revealed the existence of RAE (χ(2) = 17.829, p < 0.001; d = 0.30), with the size of the effect depending on the category or qualifying round achieved by the national team, and with significant differences observed by category. Therefore, RAE is present in elite soccer and could be considered a factor that influences the performance of the national teams tested. RAE was not evident in the professional teams analysed; however, it was present in the three lower categories analysed (youth categories), with its influence being greater on younger age categories (U-17). |
Ultra low power capless LDO with dynamic biasing of derivative feedback | In this paper, a low power, output-capacitor-free, low-dropout regulator (LDO) is proposed with a new dynamically biased, multiloop feedback strategy. Initially, a theoretical macromodel is presented based on the analogy between the capless LDO and the mechanical non-linear harmonic oscillator, shedding new light on the non-linear amplification, dynamic and adaptive biasing techniques used in the multiloop feedback LDO control in the large-signal context. To implement some of the unexplored abilities of this model, the new capless LDO topology is proposed. The output class AB stage of the error amplifier and the non-linear derivative current amplifier of the LDO feedback loop ensure dynamically extended closed-loop bandwidth gain and dynamic damping enhancement for fast load and line LDO transients. Using the new dynamic biasing of the derivative loop, an improvement is obtained in the derivative sensing of fast output voltage variations, enabling a significant enhancement in the transient response of the capless LDO. The proposed LDO, designed for a maximum current of 50 mA in UMC RF 1P8M 0.13 µm technology, requires a quiescent current of only 4.1 µA and presents excellent transient response when compared to the state of the art. |
Deep image aesthetics classification using inception modules and fine-tuning connected layer | In this paper we investigate the image aesthetics classification problem, aka automatically classifying an image into low or high aesthetic quality, which is quite a challenging problem beyond image recognition. Deep convolutional neural network (DCNN) methods have recently shown promising results for image aesthetics assessment. Recently, the powerful Inception module was proposed, which shows very high performance in object classification. However, the Inception module has not been taken into consideration for the image aesthetics assessment problem. In this paper, we propose a novel DCNN structure codenamed ILGNet for image aesthetics classification, which introduces the Inception module and connects intermediate Local layers to the Global layer for the output. Besides, we use a pre-trained image classification CNN called GoogLeNet on the ImageNet dataset and fine-tune our connected local and global layers on the large-scale aesthetics assessment AVA dataset [1]. The experimental results show that the proposed ILGNet outperforms the state-of-the-art results in image aesthetics assessment on the AVA benchmark. |
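For readers unfamiliar with the building block, a generic Inception-style module is sketched below in PyTorch, with parallel 1x1, 3x3, 5x5 and pooled branches concatenated along the channel dimension; the channel sizes are arbitrary placeholders, and this is not the ILGNet architecture itself.

```python
# Generic Inception-style block for illustration only.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                nn.Conv2d(16, 24, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, kernel_size=1),
                                nn.Conv2d(8, 12, kernel_size=5, padding=2))
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, 12, kernel_size=1))

    def forward(self, x):
        # Parallel branches with different receptive fields, concatenated on channels.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

x = torch.randn(1, 32, 56, 56)
print(InceptionBlock(32)(x).shape)   # torch.Size([1, 64, 56, 56])
```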
Long-term memory, sleep, and the spacing effect. | Many studies have shown that memory is enhanced when study sessions are spaced apart rather than massed. This spacing effect has been shown to have a lasting benefit to long-term memory when the study phase session follows the encoding session by 24 hours. Using a spacing paradigm we examined the impact of sleep and spacing gaps on long-term declarative memory for Swahili-English word pairs by including four spacing delay gaps (massed, 12 hours same-day, 12 hours overnight, and 24 hours). Results showed that a 12-hour spacing gap that includes sleep promotes long-term memory retention similar to the 24-hour gap. The findings support the importance of sleep to the long-term benefit of the spacing effect. |
NON-UNIFORM RANDOM VARIATE GENERATION | This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
1. The main paradigms
The purpose of this chapter is to review the main methods for generating random variables, vectors and processes. Classical workhorses such as the inversion method, the rejection method and table methods are reviewed in section 1. In section 2, we discuss the expected time complexity of various algorithms, and give a few examples of the design of generators that are uniformly fast over entire families of distributions. In section 3, we develop a few universal generators, such as generators for all log concave distributions on the real line. Section 4 deals with random variate generation when distributions are indirectly specified, e.g., via Fourier coefficients, characteristic functions, the moments, the moment generating function, distributional identities, infinite series or Kolmogorov measures. Random processes are briefly touched upon in section 5. Finally, the latest developments in Markov chain methods are discussed in section 6. Some of this work grew from Devroye (1986a), and we are carefully documenting work that was done since 1986. More recent references can be found in the book by Hörmann, Leydold and Derflinger (2004). Non-uniform random variate generation is concerned with the generation of random variables with certain distributions. Such random variables are often discrete, taking values in a countable set, or absolutely continuous, and thus described by a density. The methods used for generating them depend upon the computational model one is working with, and upon the demands on the part of the output. For example, in a RAM (random access memory) model, one accepts that real numbers can be stored and operated upon (compared, added, multiplied, and so forth) in one time unit. Furthermore, this model assumes that a source capable of producing an i.i.d. (independent identically distributed) sequence of uniform [0, 1] random variables is available. This model is of course unrealistic, but designing random variate generators based on it has several advantages: first of all, it allows one to disconnect the theory of non-uniform random variate generation from that of uniform random variate generation, and secondly, it permits one to plan for the future, as more powerful computers will be developed that permit ever better approximations of the model. Algorithms designed under finite approximation limitations will have to be redesigned when the next generation of computers arrives. For the generation of discrete or integer-valued random variables, which includes the vast area of the generation of random combinatorial structures, one can adhere to a clean model, the pure bit model, in which each bit operation takes one time unit, and storage can be reported in terms of bits. Typically, one now assumes that an i.i.d. sequence of independent perfect bits is available.
In this model, an elegant information-theoretic theory can be derived. For example, Knuth and Yao (1976) showed that to generate a random integer X described by the probability distribution P{X = n} = p_n, n ≥ 1, any method must use an expected number of bits greater than the binary entropy of the distribution, ∑_{n ≥ 1} p_n log_2(1/p_n). |
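Two of the classical paradigms named above, inversion and rejection, can be illustrated in a few lines of Python; the target distributions (an exponential and the density 6x(1-x) on [0, 1]) are textbook examples chosen for brevity, not drawn from the survey.

```python
# Minimal illustrations of the inversion and rejection paradigms.
import math
import random

def exponential_by_inversion(lam):
    # Inversion: F(x) = 1 - exp(-lam*x), so X = -log(1-U)/lam with U ~ Uniform(0,1).
    u = random.random()
    return -math.log(1.0 - u) / lam

def beta22_by_rejection():
    # Rejection for f(x) = 6x(1-x) on [0,1] with a Uniform(0,1) proposal.
    # The envelope constant is M = sup f = 1.5, so acceptance probability is 1/M.
    while True:
        x = random.random()
        if random.random() * 1.5 <= 6.0 * x * (1.0 - x):
            return x

print(exponential_by_inversion(2.0), beta22_by_rejection())
```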
SHREC'16: Partial Matching of Deformable Shapes | Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited focus in the computer vision and graphics communities. With this benchmark, we explore and thoroughly investigate the robustness of existing matching methods in this challenging task. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved for each method – making this benchmark the biggest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset, the adopted evaluation measures, and shows thorough comparisons among all competing methods. |
Data-driven soft sensor development based on deep learning technique | In industrial process control, some product qualities and key variables are always difficult to measure online due to technical or economic limitations. As an effective solution, data-driven soft sensors provide stable and reliable online estimation of these variables based on historical measurements of easy-to-measure process variables. Deep learning, as a novel training strategy for deep neural networks, has recently become a popular data-driven approach in the area of machine learning. In the present study, the deep learning technique is employed to build soft sensors and applied to an industrial case to estimate the heavy diesel 95% cut point of a crude distillation unit (CDU). The comparison of modeling results demonstrates that the deep learning technique is especially suitable for soft sensor modeling because of the following advantages over traditional methods. First, with a complex multi-layer structure, the deep neural network is able to contain richer information and yield improved representation ability compared with traditional data-driven models. Second, deep neural networks are established as latent variable models that help to describe highly correlated process variables. Third, deep learning is semi-supervised, so all available process data can be utilized. Fourth, the deep learning technique is particularly efficient in dealing with massive data in practice. |
Identifying spatial relations in images using convolutional neural networks | Traditional approaches to building a large scale knowledge graph have usually relied on extracting information (entities, their properties, and relations between them) from unstructured text (e.g. DBpedia). Recent advances in Convolutional Neural Networks (CNN) allow us to shift our focus to learning entities and relations from images, as they build robust models that require little or no pre-processing of the images. In this paper, we present an approach to identify and extract spatial relations (e.g., The girl is standing behind the table) from images using CNNs. Our research addresses two specific challenges: providing insight into how spatial relations are learned by the network and which parts of the image are used to predict these relations. We use the pre-trained network VGGNet to extract features from an image and train a Multi-layer Perceptron (MLP) on a set of synthetic images and the SUN09 dataset to extract spatial relations. The MLP predicts spatial relations without a bounding box around the objects or the space in the image depicting the relation. To understand how the spatial relations are represented in the network, a heatmap is overlaid on the image to show the regions that are deemed important by the network. Also, we analyze the MLP to show the relationship between the activation of consistent groups of nodes and the prediction of a spatial relation. We show how the loss of these groups affects the network's ability to identify relations. |
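A hedged sketch of the general pipeline described above (frozen VGG16 features feeding a small MLP that predicts a relation label); the relation classes, the random input tensor, and the untrained MLP are placeholders rather than the paper's setup.

```python
# Sketch only: pretrained VGG16 as a frozen feature extractor, small MLP on top.
import torch
import torch.nn as nn
from torchvision import models

RELATIONS = ["above", "below", "left_of", "right_of", "behind", "in_front_of"]

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.eval()
feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())  # 25088-dim

mlp = nn.Sequential(
    nn.Linear(25088, 512), nn.ReLU(),
    nn.Linear(512, len(RELATIONS)))

img = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
with torch.no_grad():
    feats = feature_extractor(img)
logits = mlp(feats)                         # an untrained head, for shape checking only
print(RELATIONS[logits.argmax(dim=1).item()])
```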
Autonomic Computing: Applications of Self-Healing Systems | Self-management systems are the main objective of Autonomic Computing (AC), and they are needed to increase a running system's reliability, stability, and performance. This field needs to investigate several issues related to complex systems, such as system self-awareness (knowing when and where an error state occurs), knowledge for system stabilization, problem analysis, and healing plans with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing, which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in a real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, a self-healing system should have the ability to modify its own behavior in response to changes within the environment. A recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation. |
Intravascular ultrasound guidance improves angiographic and clinical outcome of stent implantation for long coronary artery stenoses: final results of a randomized comparison with angiographic guidance (TULIP Study). | BACKGROUND
Long coronary lesions treated with stents have a poor outcome. This study compared the 6-month outcome of stent implantation for long lesions in patients randomized to intravascular ultrasound (IVUS; n=73) or angiographic guidance (n=71).
METHODS AND RESULTS
Stenoses >20 mm in length and a reference diameter that permitted a stent diameter > or =3 mm were eligible. Primary end points were 6-month minimal lumen diameter (MLD) and the combined end point of death, myocardial infarction, and target-lesion revascularization (TLR). Baseline clinical and angiographic data were comparable in both groups. At 6 months, MLD in the IVUS group (1.82+/-0.53 mm) was larger than in the angiography group (1.51+/-0.71 mm; P=0.042). TLR and combined end-point rates at 6 months were 4% (n=3) and 6% (n=4) in the IVUS group and 14% (n=10) and 20% (n=14) in the angiography group, respectively (P=0.037 for TLR and P=0.01 for combined events). Restenosis (>50% diameter stenosis) was found in 23% of the IVUS group and 45% of the angiography group (P=0.008). At 12 months, TLR and the combined end point occurred in 10% (n=7) and 12% (n=9) of the IVUS group and 23% (n=17) and 27% (n=19) of the angiography group (P=0.018 and P=0.026), respectively.
CONCLUSIONS
Angiographic and clinical outcome up to 12 months after long stent placement guided by IVUS is superior to guidance by angiography. |
A novel clinical risk prediction model for sudden cardiac death in hypertrophic cardiomyopathy (HCM risk-SCD). | AIMS
Hypertrophic cardiomyopathy (HCM) is a leading cause of sudden cardiac death (SCD) in young adults. Current risk algorithms provide only a crude estimate of risk and fail to account for the different effect size of individual risk factors. The aim of this study was to develop and validate a new SCD risk prediction model that provides individualized risk estimates.
METHODS AND RESULTS
The prognostic model was derived from a retrospective, multi-centre longitudinal cohort study. The model was developed from the entire data set using the Cox proportional hazards model and internally validated using bootstrapping. The cohort consisted of 3675 consecutive patients from six centres. During a follow-up period of 24 313 patient-years (median 5.7 years), 198 patients (5%) died suddenly or had an appropriate implantable cardioverter defibrillator (ICD) shock. Of eight pre-specified predictors, age, maximal left ventricular wall thickness, left atrial diameter, left ventricular outflow tract gradient, family history of SCD, non-sustained ventricular tachycardia, and unexplained syncope were associated with SCD/appropriate ICD shock at the 15% significance level. These predictors were included in the final model to estimate individual probabilities of SCD at 5 years. The calibration slope was 0.91 (95% CI: 0.74, 1.08), C-index was 0.70 (95% CI: 0.68, 0.72), and D-statistic was 1.07 (95% CI: 0.81, 1.32). For every 16 ICDs implanted in patients with ≥4% 5-year SCD risk, potentially 1 patient will be saved from SCD at 5 years. A second model with the data set split into independent development and validation cohorts had very similar estimates of coefficients and performance when externally validated.
CONCLUSION
This is the first validated SCD risk prediction model for patients with HCM and provides accurate individualized estimates for the probability of SCD using readily collected clinical parameters. |
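The general form of a Cox-model-based absolute risk estimate of this kind is shown below; the coefficients and baseline survival are estimated in the paper and are not reproduced here.

```latex
% Generic Cox-based 5-year absolute risk; \beta_i and S_0(5) come from the
% fitted model and are not reproduced here.
\[
  P\bigl(\text{SCD at 5 years}\bigr)
    \;=\; 1 - S_0(5)^{\exp\!\left(\sum_i \beta_i x_i\right)},
\]
where the $x_i$ are the patient's values of the retained predictors (age,
maximal left ventricular wall thickness, left atrial diameter, LVOT gradient,
family history of SCD, non-sustained ventricular tachycardia, unexplained
syncope).
```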
Improvements in health-related quality of life following a group intervention for coping with AIDS-bereavement among HIV-infected men and women | Background: AIDS-related bereavement is a severe life stressor that may be particularly distressing to persons themselves infected with HIV. Increasing evidence suggests that psychological health is associated with disease progression, HIV-related symptoms, and mortality. Purpose: This study assessed change in health-related quality of life among HIV+ persons following a group intervention for coping with AIDS-related loss. Methods: The sample included 235 HIV+ men and women of diverse ethnicities and sexual orientations who had experienced an AIDS-related loss within the previous 2 years. Participants were randomly assigned to a 12-week cognitive-behavioral bereavement coping group intervention or offered individual psychotherapy upon request. Quality of life was assessed at baseline and 2 weeks after the intervention. Results: Participants in the group intervention demonstrated improvements in general health-related and HIV-specific quality of life, while those in the comparison remained the same or deteriorated. Effect sizes indicated that the majority of change occurred in women. Conclusion: This bereavement group aimed at improving coping with grief also had a positive impact on health-related quality of life among HIV+ men and women, and suggests that cognitive-behavioral interventions may have a broad impact on both emotional and physical health. |
Parametrization of quintessence and its potential | We develop a theoretical method of constructing the quintessence potential directly from the effective equation of state function w(z), which describes the properties of the dark energy. We apply our method to four parametrizations of equation of state parameter and discuss the general features of the resulting potentials. In particular, it is shown that the constructed quintessence potentials are all in the form of a runaway type. |
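For a canonical scalar field, the standard relations that underlie this kind of reconstruction are the following (usual notation assumed; the specific parametrizations of w(z) studied in the paper are not repeated here):

```latex
% Canonical scalar-field relations used in w(z)-to-potential reconstructions.
\[
  \rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad
  p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi), \qquad
  w = \frac{p_\phi}{\rho_\phi},
\]
so that
\[
  \dot\phi^2 = \bigl(1 + w(z)\bigr)\rho_\phi(z), \qquad
  V(\phi) = \tfrac{1}{2}\bigl(1 - w(z)\bigr)\rho_\phi(z), \qquad
  \rho_\phi(z) = \rho_{\phi,0}
    \exp\!\left[\,3\int_0^z \frac{1 + w(z')}{1 + z'}\,dz'\right].
\]
```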
Test-enhanced learning: taking memory tests improves long-term retention. | Taking a memory test not only assesses what one knows, but also enhances later retention, a phenomenon known as the testing effect. We studied this effect with educationally relevant materials and investigated whether testing facilitates learning only because tests offer an opportunity to restudy material. In two experiments, students studied prose passages and took one or three immediate free-recall tests, without feedback, or restudied the material the same number of times as the students who received tests. Students then took a final retention test 5 min, 2 days, or 1 week later. When the final test was given after 5 min, repeated studying improved recall relative to repeated testing. However, on the delayed tests, prior testing produced substantially greater retention than studying, even though repeated studying increased students' confidence in their ability to remember the material. Testing is a powerful means of improving learning, not just assessing it. |
Number of hospital beds in 2030: projection with national French case-mix data | Methods: First, recent changes in hospitalization rates (HR), day-case ratios (DCR) and lengths of stay (LOS) were studied by comparing case-mix data from 1998 and 2004 for acute care patients. To accurately assess the effects of the changes, five age groups (<15, 15–64, 65–74, 75–84 and over 84) and 41 diagnosis groups were constructed. Then, three different projections, including population projections for 2030, were developed. |
Learning movement primitive libraries through probabilistic segmentation | Movement primitives are a well established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected, mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations. |
A Load-Power Adaptive Dual Pulse Modulated Current Phasor-Controlled ZVS High-Frequency Resonant Inverter for Induction Heating Applications | A new prototype of an efficiency-improved zero voltage soft-switching (ZVS) high-frequency resonant (HF-R) inverter for induction heating (IH) applications is presented in this paper. By adopting the dual pulse modulation mode (DPMM), which incorporates a submode power regulation scheme such as pulse density modulation, pulse frequency modulation, or asymmetrical pulse width modulation into the main mode of resonant current phase angle difference (θ) control, the IH load power can be widely regulated under the condition of ZVS, while significantly improving the efficiency at low output power settings. The essential performances of the output power regulation and ZVS operations with the DPMM schemes are demonstrated in an experiment based on a 1 kW-60 kHz laboratory prototype of the ZVS HF-R inverter. The validity of each DPMM scheme is compared and evaluated from a practical point of view. |
Towards Security Risk-Oriented Misuse Cases | Security has turned out to be a necessity of information systems (ISs) and information per se. Nevertheless, existing practices report on numerous cases when security aspects were considered only at the end of the development process, thus missing the systematic security analysis. Misuse case diagrams help identify security concerns at early stages of the IS development. Despite this fundamental advantage, misuse cases tend to be rather imprecise; they do not comply with security risk management strategies, and, thus, could lead to misinterpretation of the security-related concepts. Such limitations could potentially result in poor security solutions. This paper applies a systematic approach to understand how misuse case diagrams could help model organisational assets, potential risks, and security countermeasures to mitigate these risks. The contribution helps understand how misuse cases could deal with security risk management and support reasoning for security requirements and their implementation in the software system. |
Data-driven demand forecasting method for fused magnesium furnaces | The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than the Peak Demand. However, the power cutoff destroys the heat balance and reduces the quality and yield of the product. The composition change of magnesite in FMFs will occasionally cause a demand spike, in which demand suddenly increases above the limit and then drops back below it. As a result, demand spikes cause power cutoffs. In order to avoid a power cutoff at the moment of a demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of FMF demand and, using the power data, presents a data-driven demand forecasting method. The method consists of a PACF-based decision module for the number of input variables of the forecasting model, an RBF neural network (RBFNN) based power variation rate forecasting model, and a demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method. |
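A hedged Python sketch of the flavor of such a pipeline, on synthetic data: partial autocorrelation picks the lagged power inputs, and an RBF-style model (k-means centers with a ridge readout) plays the role of the forecaster. This is an illustration only, not the authors' RBFNN or their plant data.

```python
# Illustration only: lag selection via PACF, then an RBF-feature regressor.
import numpy as np
from statsmodels.tsa.stattools import pacf
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
power = np.cumsum(rng.standard_normal(2000)) + 100.0   # stand-in for power data

# Keep lags whose partial autocorrelation lies outside an approximate 95% band.
p = pacf(power, nlags=20)
band = 1.96 / np.sqrt(len(power))
lags = [k for k in range(1, 21) if abs(p[k]) > band] or [1]

X = np.column_stack([power[max(lags) - k:-k] for k in lags])
y = power[max(lags):]

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
dists = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
width = dists.mean()
Phi = np.exp(-dists ** 2 / (2 * width ** 2))            # Gaussian RBF features
model = Ridge(alpha=1.0).fit(Phi, y)
print("lags used:", lags, "in-sample R^2:", round(model.score(Phi, y), 3))
```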
Motivating participation and improving quality of contribution in ubiquitous crowdsourcing | Ubiquitous crowdsourcing, or the crowdsourcing of tasks in settings beyond the desktop, is attracting interest due to the increasing maturity of mobile and ubiquitous technology, such as smartphones and public displays. In this paper we attempt to address a fundamental challenge in ubiquitous crowdsourcing: if people can contribute to crowdsourcing anytime and anyplace, why would they choose to do so? We highlight the role of motivation in ubiquitous crowdsourcing, and its effect on participation and performance. Through a series of field studies we empirically validate various motivational approaches in the context of ubiquitous crowdsourcing, and assess the comparable advantages of ubiquitous technologies’ affordances. We show that through motivation ubiquitous crowdsourcing becomes comparable to online crowdsourcing in terms of participation and task performance, and that through motivation we can elicit better quality contributions and increased participation from workers. We also show that ubiquitous technologies’ contextual capabilities can increase participation through increasing workers’ intrinsic motivation, and that the in-situ nature of ubiquitous technologies can increase both participation and engagement of workers. Combined, our findings provide empirically validated recommendations on the design and implementation of ubiquitous crowdsourcing. |
Cannabidiol and (-)Delta9-tetrahydrocannabinol are neuroprotective antioxidants. | The neuroprotective actions of cannabidiol and other cannabinoids were examined in rat cortical neuron cultures exposed to toxic levels of the excitatory neurotransmitter glutamate. Glutamate toxicity was reduced by both cannabidiol, a nonpsychoactive constituent of marijuana, and the psychotropic cannabinoid (-)Delta9-tetrahydrocannabinol (THC). Cannabinoids protected equally well against neurotoxicity mediated by N-methyl-D-aspartate receptors, 2-amino-3-(4-butyl-3-hydroxyisoxazol-5-yl)propionic acid receptors, or kainate receptors. N-methyl-D-aspartate receptor-induced toxicity has been shown to be calcium dependent; this study demonstrates that 2-amino-3-(4-butyl-3-hydroxyisoxazol-5-yl)propionic acid/kainate receptor-type neurotoxicity is also calcium-dependent, partly mediated by voltage sensitive calcium channels. The neuroprotection observed with cannabidiol and THC was unaffected by cannabinoid receptor antagonist, indicating it to be cannabinoid receptor independent. Previous studies have shown that glutamate toxicity may be prevented by antioxidants. Cannabidiol, THC and several synthetic cannabinoids all were demonstrated to be antioxidants by cyclic voltametry. Cannabidiol and THC also were shown to prevent hydroperoxide-induced oxidative damage as well as or better than other antioxidants in a chemical (Fenton reaction) system and neuronal cultures. Cannabidiol was more protective against glutamate neurotoxicity than either ascorbate or alpha-tocopherol, indicating it to be a potent antioxidant. These data also suggest that the naturally occurring, nonpsychotropic cannabinoid, cannabidiol, may be a potentially useful therapeutic agent for the treatment of oxidative neurological disorders such as cerebral ischemia. |
Low dose nesiritide and the preservation of renal function in patients with renal dysfunction undergoing cardiopulmonary-bypass surgery: a double-blind placebo-controlled pilot study. | BACKGROUND
Renal insufficiency is associated with increased morbidity and mortality after cardiopulmonary bypass cardiac surgery. B-type natriuretic peptide is a cardiac hormone that enhances glomerular filtration rate and inhibits aldosterone. Cystatin has been shown to be a better endogenous marker of renal function than creatinine.
METHODS AND RESULTS
We performed a double-blinded placebo-controlled proof of concept pilot study in patients (n=40) with renal insufficiency preoperatively (defined as an estimated creatinine clearance of <60 mL/min determined by the Cockroft-Gault formula), undergoing cardiopulmonary bypass cardiac surgery. Patients were randomized to placebo (n=20) or i.v. low dose nesiritide (n=20; 0.005 microg/Kg/min) for 24 hours started after the induction of anesthesia and before cardiopulmonary bypass. Patients in the nesiritide group had an increase of plasma B-type natriuretic peptide and its second messenger cGMP with a decrease in plasma cystatin levels at the end of the 24-hour infusion. These changes were not observed in the placebo group. There was a significant activation of aldosterone in the placebo group at the end of the 24-hour infusion, but not in the nesiritide group. At 48 and 72 hours, there was a decrease in estimated creatinine clearance and an increase in plasma cystatin as compared with end of the 24-hour infusion in the placebo group. In contrast, renal function was preserved in the nesiritide group with no significant change in estimated creatinine clearance and a trend for plasma cystatin to increase as compared with end of the 24-hour infusion.
CONCLUSION
This proof of concept pilot study supports the conclusion that perioperative administration of low dose nesiritide is biologically active and decreases plasma cystatin in patients with renal insufficiency undergoing cardiopulmonary bypass cardiac surgery. Further studies are warranted to determine whether these physiological observations can be translated into improved clinical outcomes. |
Implementing an OpenFlow-based distributed firewall | SDN is an emerging technology which is going to drive next generation networks. Many companies and organizations have started using SDN applications. It gives network administrators flexibility in implementing their networks. But at the same time, it brings new security issues. To secure SDN networks, we need a strong firewall application. Some firewall applications already exist, but they suffer from certain shortcomings. One of the main drawbacks of existing firewall solutions is that they suffer from a single point of failure due to their centralized nature and the overloading of rules in a single device. Another drawback of existing firewalls is that they are mostly layer 2 firewalls. In this paper, we implement a distributed firewall where every OpenFlow switch in a network can act as a firewall. In addition, this firewall is capable of handling TCP, UDP and ICMP traffic. We have tested this firewall using the Mininet emulator installed in Ubuntu 14.04 Linux under the VirtualBox virtualization solution. We use the Python-based POX controller. This work is an extension of our earlier work on programmable firewalls. |
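A minimal POX component in the spirit of the paper is sketched below: when a switch connects, it installs a flow rule with no actions, which drops matching packets. The blocked protocol and port are arbitrary examples, not the authors' rule set.

```python
# Minimal POX firewall sketch; a flow_mod with no actions drops matching traffic.
# Run with: ./pox.py simple_firewall
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_ConnectionUp(event):
    msg = of.ofp_flow_mod()
    msg.priority = 100
    msg.match.dl_type = 0x0800      # IPv4
    msg.match.nw_proto = 6          # TCP
    msg.match.tp_dst = 8080         # example blocked destination port
    event.connection.send(msg)      # no actions => drop matching packets
    log.info("Drop rule installed on switch %s", event.dpid)

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```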
Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis. | Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments-prediction error-is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. |
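For concreteness, the textbook prediction-error update that such models instantiate is sketched below; the meta-analyzed studies differ in details such as the learning-rate fitting procedure and the exact model form (temporal-difference versus Rescorla-Wagner variants).

```python
# Simple prediction-error model: the error updates the expected value of a cue.
def update_value(V, stimulus, reward, alpha=0.1):
    """One trial of a prediction-error value update."""
    delta = reward - V[stimulus]        # prediction error (actual - expected)
    V[stimulus] += alpha * delta        # expected value moves toward the outcome
    return delta

V = {"cue_A": 0.0}
for r in [1, 1, 0, 1]:                   # a short sequence of rewards
    print(round(update_value(V, "cue_A", r), 3), round(V["cue_A"], 3))
```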
Improving the analysis of esophageal acid exposure by a new parameter: area under H+ | OBJECTIVES:We aimed to compare the data provided by 24-h continuous esophageal pH monitoring in a group of patients with gastroesophageal reflux disease (GERD) to those from a group of healthy volunteers using both conventional parameters and calculated area under the curve of hydrogen ion activity (AUH+), a new value that describes the true acid exposure, through both duration and depth of acidity changes.METHODS:Thirty healthy controls and 60 patients with GERD (30 symptomatic patients without endoscopic esophagitis or nonerosive GERD and 30 symptomatic patients with Savary I–IV endoscopic esophagitis or erosive GERD) were enrolled in a study based on 24-h pH monitoring to compare reference values by means of receiver operating characteristic (ROC) discriminant analysis.RESULTS:The best ROC cutoff value for nonerosive GERD patients was AUH+ = 103.7 (mmol/L) × min with sensitivity of 76.7% and specificity of 93.3%. The best ROC cutoff value for erosive GERD patients was AUH+ = 114.1 (mmol/L) × min with sensitivity of 100% and specificity of 96.7%. These cutoff values increase the sensitivity by 16.7% for nonerosive GERD patients and 10% for erosive GERD patients when compared to a common parameter such as the percentage of total time pH is <4 with a limit of 4.2%.CONCLUSIONS:AUH+ is a valid quantitative parameter to measure 24-h esophageal acid exposure. It may be a reliable and significant clinical aid because it is a more sensitive test in discriminating negative or positive adult patients with or without esophagitis who are submitted to 24-h esophageal pH monitoring. |
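One plausible way to compute an area-under-[H+] figure from a 24-h pH trace is sketched below, in (mmol/L) × min; the paper's exact operational definition (sampling rate, baseline handling, thresholds) may differ, and the trace here is synthetic.

```python
# Illustrative AUH+ computation from a synthetic one-sample-per-minute pH trace.
import numpy as np

minutes = np.arange(0, 24 * 60)                     # one sample per minute
ph = 6.0 + 0.5 * np.sin(minutes / 120.0)            # synthetic pH trace
ph[300:330] = 2.5                                   # a 30-min reflux episode

h_mmol_per_l = 10.0 ** (3.0 - ph)   # [H+] in mmol/L (10^-pH mol/L times 1000)
auh = h_mmol_per_l.sum() * 1.0      # dt = 1 min, so the sum is in (mmol/L) x min
print(f"AUH+ = {auh:.1f} (mmol/L) x min")
```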
Implantation of a new Vagus Nerve Stimulation (VNS) Therapy® generator, AspireSR®: considerations and recommendations during implantation and replacement surgery—comparison to a traditional system | The most widely used neuro-stimulation treatment for drug-resistant epilepsy is Vagus Nerve Stimulation (VNS) Therapy®. Ictal tachycardia can be an indicator of a seizure and, if monitored, can be used to trigger an additional on-demand stimulation, which may positively influence seizure severity or duration. A new VNS Therapy generator model, AspireSR®, was introduced and approved for CE Mark in February 2014. As an enhancement over former models, the AspireSR incorporates a cardiac-based seizure-detection (CBSD) algorithm that can detect ictal tachycardia and automatically trigger a defined auto-stimulation. To evaluate differences in preoperative, intraoperative and postoperative handling, we compared the AspireSR to a conventional generator model (Demipulse®). Between February and September 2014, seven patients with drug-resistant epilepsy and ictal tachycardia were implanted with an AspireSR. Between November 2013 and September 2014, seven patients were implanted with a Demipulse and served as control group. Operation time, skin incision length and position, and complications were recorded. Handling of the new device was critically evaluated. The intraoperative handling was comparable and did not lead to a significant increase in operation time. In our 14 operations, we had no significant short-term complications. Due to its larger size, patients with the AspireSR had significantly larger skin incisions. For optimal heart rate detection, the AspireSR had to be placed significantly more medial in the décolleté area than the Demipulse. The preoperative testing is a unique addition to the implantation procedure of the AspireSR; it may present minor difficulties, for which we provide several recommendations and tips. The price of the device is higher than for all other models. The new AspireSR generator offers a unique technical improvement over the previous Demipulse. Whether the highly interesting CBSD feature will provide an additional benefit for the patients and will justify the additional costs cannot be answered in the short term. The preoperative handling is straightforward, provided that certain recommendations are taken into consideration. The intraoperative handling is equivalent to that of former models, except for the placement of the generator, which might cause cosmetic issues and has to be discussed with the patient carefully. We recommend considering the AspireSR in patients with documented ictal tachycardia to provide a substantial number of patients for later seizure outcome analysis. |
New insights and perspectives on the natural gradient method | Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of approximate 2nd-order optimization method, where the Fisher information matrix used to compute the natural gradient direction can be viewed as an approximation of the Hessian. This perspective turns out to have significant implications for how to design a practical and robust version of the method. Among our various other contributions is a thorough analysis of the convergence speed of natural gradient descent and more general stochastic methods, a critical examination of the oft-used “empirical” approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by the method, which we show still holds for certain other choices of the curvature matrix, but notably not the Hessian. |
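For orientation, the natural gradient update discussed above can be written in the standard form (generic notation, not taken verbatim from the paper):

```latex
% Standard definitions, not notation copied from the paper: F is the Fisher information
% matrix of the model distribution and h is the objective being minimized.
\[
F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[\nabla_\theta \log p_\theta(x)\,\nabla_\theta \log p_\theta(x)^{\top}\right],
\qquad
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta h(\theta_t),
\]
% so the natural gradient direction F^{-1} \nabla h behaves like an approximate
% second-order step, with F standing in for the Hessian.
```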
Homicidal hanging. | Homicide by hanging is an extremely rare incident [1]. Very few cases have been reported in which a person is rendered senseless and then hanged to simulate suicidal death, although there are many cases in which a homicide victim has been hung up afterwards. We report a case of homicidal hanging of a young Sikh individual found hanging in a well. The forensic autopsy made it evident that the victim had first been given alcohol mixed with pesticides and then hanged by his turban from a well. The rare combination of lynching (homicidal hanging) and organophosphorus pesticide poisoning as a means of homicide is discussed in this paper. |
Multi‐institutional retrospective study of mucoepidermoid carcinoma treated with carbon‐ion radiotherapy | This study aimed to evaluate the clinical outcomes of patients with mucoepidermoid carcinomas in the head and neck treated with carbon-ion radiotherapy. Data from 26 patients who underwent carbon-ion radiotherapy in four facilities were analyzed in this multi-institutional retrospective study: the Japan Carbon-ion Radiation Oncology Study Group. The median follow-up time was 34 months. One patient experienced local recurrence, and the 3-year local control rate was 95%. One patient developed lymph node recurrence and five developed distant metastases. The 3-year progression-free survival rate was 73%. Five patients died, two of mucoepidermoid carcinoma and three of intercurrent disease. The 3-year overall survival rate was 89%. Acute mucositis and dermatitis of grade 3 or higher were experienced by 19% and 8% of patients, respectively; these improved with conservative therapy. Late mucositis and osteonecrosis of jaw were observed in 12% and 23% of patients, respectively. The 3-year cumulative rate of any late adverse event of grade 3 or higher was 14%. None of the patients died of the acute or late adverse events. Carbon-ion radiotherapy was efficacious and safe for treating mucoepidermoid carcinoma in this multi-institutional retrospective study (registration no. UMIN000024473). We are currently undertaking a prospective multicenter study. |
Pier Paolo Pasolini: Cinema as Heresy | The major Italian filmmaker Pier Paolo Pasolini was also a poet, novelist, essayist, and iconoclastic political commentator. Naomi Greene reveals to English-speaking readers the diverse talents that made him one of the most controversial European intellectuals of the postwar era, at the center of political and cultural debates still vital to our time. Greene presents Pasolini's films to the English-speaking world in full detail and in a rich critical context, using them to trace the evolution of his ideas and the details of his troubled personal life from 1950, when he settled in Rome, to 1975, the year of his brutal murder, apparently at the hands of a young male prostitute. "In her concise and sympathetic book, Greene intelligently explicates the political and social context within which Pasolini became both a leading figure and a significant heretic. He was an atheist who directed one of the few genuinely profound biblical films in the cinema, a communist who severely criticized many of the radical movements of modern Italy. Though he publicly acknowledged his homosexuality, he privately referred to it as his "sickness." As the book well documents, Pasolini was not a rebel but rather an authentic heretic who worked in contradiction to both his medium and milieu."--Choice |
Deep Neural Network Capacity | In recent years, deep neural networks have shown powerful discrimination ability in many computer vision applications. However, the capacity of a deep neural network architecture is still a mystery to researchers. Intuitively, a network with larger capacity can store more information and improve the discrimination ability of the model, but the number of learnable parameters is not a reliable estimate of the capacity of a deep neural network. Because of overfitting, directly increasing the number of hidden nodes or hidden layers has already been shown to be insufficient to effectively increase the network's discrimination ability. In this paper, we propose a novel measurement, named “total valid bits”, to evaluate the capacity of deep neural networks and to explore how to quantitatively understand deep learning and the insights behind its strong performance. Specifically, our scheme for retrieving the total valid bits incorporates techniques in both the training phase and the inference phase. In network training, we design decimal weight regularization and 8-bit forward quantization to obtain integer-oriented network representations. Moreover, we develop an adaptive-bitwidth and non-uniform quantization strategy in the inference phase to find the network capacity, i.e., the total valid bits. By allowing zero bitwidth, our adaptive-bitwidth quantization can perform model reduction and valid-bit finding simultaneously. In our extensive experiments, we first demonstrate that total valid bits is a good indicator of neural network capacity. We also analyze how the network architecture and advanced training affect the network capacity. |
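As a rough illustration of the 8-bit forward quantization step mentioned above (the paper's decimal weight regularization, non-uniform quantization and valid-bit accounting are more involved and are not reproduced here), a symmetric uniform quantizer might look like this:

```python
# Hypothetical illustration of a symmetric uniform quantizer; the paper's decimal weight
# regularization, non-uniform quantization and valid-bit accounting are not reproduced here.
import numpy as np

def quantize(w, bits=8):
    """Quantize an array to `bits` bits and return the dequantized approximation."""
    if bits == 0:
        return np.zeros_like(w)  # zero bitwidth: the tensor carries no information
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.random.randn(256)
print(float(np.mean((w - quantize(w, bits=8)) ** 2)))  # small reconstruction error at 8 bits
```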
Manifold-Learning-Based Feature Extraction for Classification of Hyperspectral Data: A Review of Advances in Manifold Learning | Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. HSI data often lie on sparse, nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques. In this article, we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data. However, limitations and opportunities remain both for research and applications. Although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target, nonlinearities are not always exhibited in the data, particularly at lower spatial resolutions, so users should always evaluate the inherent nonlinearity in the data. Manifold learning is data driven, and as such, results are strongly dependent on the characteristics of the data, and one method will not consistently provide the best results. Nonlinear manifold-learning methods require parameter tuning, although experimental results are typically stable over a range of values, and have higher computational overhead than linear methods, which is particularly relevant for large-scale remote sensing data sets. Opportunities for advancing manifold learning also exist for analysis of hyperspectral and multisource remotely sensed data. Manifolds are assumed to be inherently smooth, an assumption that some data sets may violate, and data often contain classes whose spectra are distinctly different, resulting in multiple manifolds or submanifolds that cannot be readily integrated with a single manifold representation. Developing appropriate characterizations that exploit the unique characteristics of these submanifolds for a particular data set is an open research problem for which hierarchical manifold structures appear to have merit. To date, most work in manifold learning has focused on feature extraction from single images, assuming stationarity across the scene. Research is also needed in joint exploitation of global and local embedding methods in dynamic, multitemporal environments and integration with semisupervised and active learning. |
Validation of a human cell based high-throughput genotoxicity assay 'Anthem's Genotoxicity screen' using ECVAM recommended lists of genotoxic and non-genotoxic chemicals. | A novel high throughput-enabled human cell based screen, Anthem's Genotoxicity screen, was developed to achieve higher specificity for predicting in vivo genotoxins by an in vitro method. The assay employs engineered human colon carcinoma cell line; HCT116 cells that are stably engineered with three promoter-reporter cassettes such that an increased reporter activity reflects the activation of associated signaling events in a human cell. The current study focuses on the evaluation of sensitivity and specificity of Anthem's Genotoxicity screen using 62 compounds recommended by the European Centre for the Validation of Alternative Methods (ECVAM). The concordance of Anthem's Genotoxicity screen with in vivo tests was 95.5% with sensitivity of 95.2% and specificity of 95.7%. Thus Anthem's Genotoxicity screen, a high-throughput mechanism based genotox indicator test can be employed by a variety of industries for rapid screening and early detection of potential genotoxins. |
Advanced electric powertrain technology: ADEPT platform overview | Design of high performance, low cost and clean propulsion systems requires multiple disciplines such as physics, mathematics, electrical engineering, mechanical engineering and specialisms like control engineering and safety. This paper details the program of EU FP7 Multi-ITN project ADvanced Electric Powertrain Technology (ADEPT) and presents a review of current research achievements of the recruited fellows. The ADEPT programme will provide a virtual research lab community from labs of European universities and industries. After finishing the ADEPT project, the know-how and expertise will also be opened to other research organizations or industry beyond those involved in the project. Special attention in this paper is given to ADEPT Virtual Library - collection of component level, sub-system level and system level components produced and maintained by the ADEPT Consortium. Functional Mock-up Interface (FMI) standard for model exchange and simulation is described and its connection with the ADEPT project is established. |
Kineograph: taking the pulse of a fast-changing and connected world | Kineograph is a distributed system that takes a stream of incoming data to construct a continuously changing graph, which captures the relationships that exist in the data feed. As a computing platform, Kineograph further supports graph-mining algorithms to extract timely insights from the fast-changing graph structure. To accommodate graph-mining algorithms that assume a static underlying graph, Kineograph creates a series of consistent snapshots, using a novel and efficient epoch commit protocol. To keep up with continuous updates on the graph, Kineograph includes an incremental graph-computation engine. We have developed three applications on top of Kineograph to analyze Twitter data: user ranking, approximate shortest paths, and controversial topic detection. For these applications, Kineograph takes a live Twitter data feed and maintains a graph of edges between all users and hashtags. Our evaluation shows that with 40 machines processing 100K tweets per second, Kineograph is able to continuously compute global properties, such as user ranks, with less than 2.5-minute timeliness guarantees. This rate of traffic is more than 10 times the reported peak rate of Twitter as of October 2011. |
Predicting crime using Twitter and kernel density estimation | Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitter-driven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message content, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals. |
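The kernel-density-estimation baseline referred to above can be sketched as follows; the coordinates, sample counts and evaluation grid are invented for illustration, and a real system would use historical incident records:

```python
# Illustrative sketch of the kernel-density-estimation baseline; the coordinates, sample
# counts and grid are invented, and a real system would use historical incident records.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
past_crimes = rng.normal(loc=[[-78.48, 38.03]], scale=0.01, size=(500, 2))  # (lon, lat) stand-ins

kde = gaussian_kde(past_crimes.T)  # fit a 2-D density over past incident locations

# Evaluate predicted "threat" over a grid of candidate cells; high-density cells rank first.
lon, lat = np.meshgrid(np.linspace(-78.52, -78.44, 50), np.linspace(38.00, 38.06, 50))
density = kde(np.vstack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
print(density.max(), density.min())
```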
Combining supervised and unsupervised learning for zero-day malware detection | Malware is one of the most damaging security threats facing the Internet today. Despite the burgeoning literature, accurate detection of malware remains an elusive and challenging endeavor due to the increasing usage of payload encryption and sophisticated obfuscation methods. Also, the large variety of malware classes coupled with their rapid proliferation and polymorphic capabilities and imperfections of real-world data (noise, missing values, etc) continue to hinder the use of more sophisticated detection algorithms. This paper presents a novel machine learning based framework to detect known and newly emerging malware at a high precision using layer 3 and layer 4 network traffic features. The framework leverages the accuracy of supervised classification in detecting known classes with the adaptability of unsupervised learning in detecting new classes. It also introduces a tree-based feature transformation to overcome issues due to imperfections of the data and to construct more informative features for the malware detection task. We demonstrate the effectiveness of the framework using real network data from a large Internet service provider. |
Drugs in Development for Influenza | The emergence and global spread of the 2009 pandemic H1N1 influenza virus reminds us that we are limited in the strategies available to control influenza infection. Vaccines are the best option for the prophylaxis and control of a pandemic; however, the lag time between virus identification and vaccine distribution exceeds 6 months and concerns regarding vaccine safety are a growing issue leading to vaccination refusal. In the short-term, antiviral therapy is vital to control the spread of influenza. However, we are currently limited to four licensed anti-influenza drugs: the neuraminidase inhibitors oseltamivir and zanamivir, and the M2 ion-channel inhibitors amantadine and rimantadine. The value of neuraminidase inhibitors was clearly established during the initial phases of the 2009 pandemic when vaccines were not available, i.e. stockpiles of antivirals are valuable. Unfortunately, as drug-resistant variants continue to emerge naturally and through selective pressure applied by use of antiviral drugs, the efficacy of these drugs declines. Because we cannot predict the strain of influenza virus that will cause the next epidemic or pandemic, it is important that we develop novel anti-influenza drugs with broad reactivity against all strains and subtypes, and consider moving to multiple drug therapy in the future. In this article we review the experimental data on investigational antiviral agents undergoing clinical trials (parenteral zanamivir and peramivir, long-acting neuraminidase inhibitors and the polymerase inhibitor favipiravir [T-705]) and experimental antiviral agents that target either the virus (the haemagglutinin inhibitor cyanovirin-N and thiazolides) or the host (fusion protein inhibitors [DAS181], cyclo-oxygenase-2 inhibitors and peroxisome proliferator-activated receptor agonists). |
Inhaled Technosphere Insulin Versus Inhaled Technosphere Placebo in Insulin-Naïve Subjects With Type 2 Diabetes Inadequately Controlled on Oral Antidiabetes Agents. | OBJECTIVE
To investigate the efficacy and safety of prandial Technosphere inhaled insulin (TI), an inhaled insulin with a distinct time action profile, in insulin-naïve type 2 diabetes (T2D) inadequately controlled on oral antidiabetes agents (OADs).
RESEARCH DESIGN AND METHODS
Subjects with T2D with HbA1c levels ≥7.5% (58.5 mmol/mol) and ≤10.0% (86.0 mmol/mol) on metformin alone or two or more OADs were randomized to add-on prandial TI (n = 177) or prandial Technosphere inhaled placebo (TP) (n = 176) to their OAD regimen in this double-blind, placebo-controlled trial. Primary end point was change in HbA1c at 24 weeks.
RESULTS
TI significantly reduced HbA1c by -0.8% (-9.0 mmol/mol) from a baseline of 8.3% (66.8 mmol/mol) compared with TP -0.4% (-4.6 mmol/mol) (treatment difference -0.4% [95% CI -0.57, -0.23]; P < 0.0001). More TI-treated subjects achieved an HbA1c ≤7.0% (53.0 mmol/mol) (38% vs. 19%; P = 0.0005). Mean fasting plasma glucose was similarly reduced in both groups. Postprandial hyperglycemia, based on 7-point glucose profiles, was effectively controlled by TI. Mean weight change was 0.5 kg for TI and -1.1 kg for the TP group (P < 0.0001). Mild, transient dry cough was the most common adverse event, occurring similarly in both groups (TI, 23.7%; TP, 19.9%) and led to discontinuation in only 1.1% of TI-treated and 3.4% of TP-treated subjects. There was a small decline in forced expiratory volume in 1 s in both groups, with a slightly larger decline in the group receiving TI (TI, -0.13 L; TP, -0.04 L). The difference resolved after treatment discontinuation.
CONCLUSIONS
Prandial TI added to one or more OADs in inadequately controlled T2D is an effective treatment option. Mild, transient dry cough was the most common adverse event. |
Emotion on the Road - Necessity, Acceptance, and Feasibility of Affective Computing in the Car | Besides reduction of energy consumption, which implies alternate actuation and light construction, the main research domain in automobile development in the near future is dominated by driver assistance and natural driver-car communication. The ability of a car to understand natural speech and provide a human-like driver assistance system can be expected to be a factor decisive for market success on par with automatic driving systems. Emotional factors and affective states are thereby crucial for enhanced safety and comfort. This paper gives an extensive literature overview on work related to influence of emotions on driving safety and comfort, automatic recognition, control of emotions, and improvement of in-car interfaces by affect sensitive technology. Various use-case scenarios are outlined as possible applications for emotion-oriented technology in the vehicle. The possible acceptance of such future technology by drivers is assessed in a Wizard-Of-Oz user study, and feasibility of automatically recognising various driver states is demonstrated by an example system for monitoring driver attentiveness. Thereby an accuracy of 91.3% is reported for classifying in real-time whether the driver is attentive or distracted. |
A new approach for robot motion planning using rapidly-exploring Randomized Trees | In the last few years, car-like robots have become increasingly important. Thus, motion planning algorithms for this kind of problem are needed more than ever. Unfortunately, this problem is computationally difficult, so probabilistic approaches like Probabilistic Roadmaps or Rapidly-exploring Randomized Trees are often used in this context. This paper introduces a new concept for robot motion planning, especially for car-like robots, based on Rapidly-exploring Randomized Trees. In contrast to the conventional method, the presented approach uses a pre-computed auxiliary path to improve the distribution of random states. The main contribution of this approach is the significantly increased quality of the computed path. A proof-of-concept implementation evaluates the quality and performance of the proposed concept. |
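A minimal sketch of the idea of biasing RRT samples toward a pre-computed auxiliary path is given below; the Gaussian sampling around waypoints, the bias ratio and the holonomic extend step are assumptions for illustration and omit the car-like kinematic constraints the paper targets:

```python
# Sketch under simplifying assumptions: holonomic extend step, Gaussian sampling around a
# pre-computed auxiliary path, fixed bias ratio. The paper's car-like constraints are omitted.
import math
import random

def rrt_plan(start, goal, auxiliary_path, collision_free, iters=2000,
             step=0.5, bias=0.5, bounds=((0.0, 10.0), (0.0, 10.0))):
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        if random.random() < bias:
            wx, wy = random.choice(auxiliary_path)          # sample near the auxiliary path
            sample = (wx + random.gauss(0, 0.5), wy + random.gauss(0, 0.5))
        else:
            sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        nearest = min(tree, key=lambda n: (n[0] - sample[0]) ** 2 + (n[1] - sample[1]) ** 2)
        d = math.hypot(sample[0] - nearest[0], sample[1] - nearest[1]) or 1e-9
        s = min(step, d)
        new = (nearest[0] + s * (sample[0] - nearest[0]) / d,
               nearest[1] + s * (sample[1] - nearest[1]) / d)
        if collision_free(nearest, new):
            tree[new] = nearest
            if math.hypot(new[0] - goal[0], new[1] - goal[1]) < step:
                tree[goal] = new
                return tree  # goal connected; the path can be read off via parent links
    return tree

# Example: obstacle-free space with an auxiliary path along the diagonal.
aux = [(float(i), float(i)) for i in range(11)]
tree = rrt_plan((0.0, 0.0), (9.0, 9.0), aux, lambda a, b: True)
print((9.0, 9.0) in tree)
```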
Foster b-trees | Foster B-trees are a new variant of B-trees that combines advantages of prior B-tree variants optimized for many-core processors and modern memory hierarchies with flash storage and nonvolatile memory. Specific goals include: (i) minimal concurrency control requirements for the data structure, (ii) efficient migration of nodes to new storage locations, and (iii) support for continuous and comprehensive self-testing. Like B-link-trees, Foster B-trees optimize latching without imposing restrictions or specific designs on transactional locking, for example, key range locking. Like write-optimized B-trees, and unlike B-link-trees, Foster B-trees enable large writes on RAID and flash devices as well as wear leveling and efficient defragmentation. Finally, they support continuous and inexpensive yet comprehensive verification of all invariants, including all cross-node invariants of the B-tree structure. An implementation and a performance evaluation show that the Foster B-tree supports high concurrency and high update rates without compromising consistency, correctness, or read performance. |
Cost-based modeling for fraud and intrusion detection: results from the JAM project | In this paper we describe the results achieved using the JAM distributed data mining system for the real world problem of fraud detection in financial information systems. For this domain we provide clear evidence that state-of-the-art commercial fraud detection systems can be substantially improved in stopping losses due to fraud by combining multiple models of fraudulent transaction shared among banks. We demonstrate that the traditional statistical metrics used to train and evaluate the performance of learning systems, (i.e. statistical accuracy or ROC analysis) are misleading and perhaps inappropriate for this application. Cost-based metrics are more relevant in certain domains, and defining such metrics poses significant and interesting research questions both in evaluating systems and alternative models, and in formalizing the problems to which one may wish to apply data mining technologies. This paper also demonstrates how the techniques developed for fraud detection can be generalized and applied to the important area of Intrusion Detection in networked information systems. We report the outcome of recent evaluations of our system applied to tcpdump network intrusion data specifically with respect to statistical accuracy. This work involved building additional components of JAM that we have come to call, MADAM ID (Mining Audit Data for Automated Models for Intrusion Detection). However, taking the next step to define cost-based models for intrusion detection poses interesting new research questions. We describe our initial ideas about how to evaluate intrusion detection systems using cost models learned during our work |
Detection of phishing URLs using machine learning techniques | Phishing costs Internet users billions of dollars per year. It refers to luring techniques used by identity thieves to fish for personal information in a pond of unsuspecting Internet users. Phishers use spoofed e-mail, phishing software to steal personal information and financial account details such as usernames and passwords. This paper deals with methods for detecting phishing Web sites by analyzing various features of benign and phishing URLs by Machine learning techniques. We discuss the methods used for detection of phishing Web sites based on lexical features, host properties and page importance properties. We consider various data mining algorithms for evaluation of the features in order to get a better understanding of the structure of URLs that spread phishing. The fine-tuned parameters are useful in selecting the apt machine learning algorithm for separating the phishing sites from benign sites. |
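A toy sketch of lexical-feature-based detection in the spirit of the methods surveyed above; the feature list, the two-URL training set and the random-forest choice are illustrative assumptions, not the paper's evaluated configuration:

```python
# Toy sketch: lexical features only, a two-URL training set and a random forest, all of
# which are illustrative choices rather than the configuration evaluated in the paper.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def lexical_features(url):
    host = urlparse(url).netloc
    return [
        len(url),                          # overall URL length
        len(host),                         # hostname length
        host.count('.'),                   # subdomain depth
        url.count('-') + url.count('@'),   # suspicious punctuation
        sum(c.isdigit() for c in url),     # digit count
        int(any(t in url.lower() for t in ('login', 'verify', 'secure', 'update'))),
    ]

urls = ["http://paypal.com.secure-login.example.ru/verify", "https://www.wikipedia.org/"]
labels = [1, 0]  # 1 = phishing, 0 = benign (made-up labels)
clf = RandomForestClassifier(n_estimators=50).fit([lexical_features(u) for u in urls], labels)
print(clf.predict([lexical_features("https://accounts.google.com/")]))
```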
An Architecture to Support the Collection of Big Data in the Internet of Things | The Internet of Things (IoT) relies on physical objects interconnected with each other, creating a mesh of devices producing information. In this context, sensors surround our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture goes from the physical dimension of sensors to the storage of data in a cloud-based system. It supports the Big Data research effort, as its instantiation supports users in collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that support end-users. |
Hardy inequalities in Orlicz spaces | We establish a sharp extension, in the framework of Orlicz spaces, of the (n-dimensional) Hardy inequality, involving functions defined on a domain G, their gradients and the distance function from the boundary of G. |
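For context, the classical L^p form of the inequality that the paper extends to Orlicz norms is, under suitable regularity assumptions on G:

```latex
% Classical L^p Hardy inequality with the distance function, stated for orientation only;
% the paper's result replaces the L^p norms by Orlicz norms and sharpens the statement.
\[
\int_G \left( \frac{|u(x)|}{\operatorname{dist}(x,\partial G)} \right)^{p} dx
\;\le\; C \int_G |\nabla u(x)|^{p}\, dx ,
\qquad u \in C_0^{\infty}(G),\ 1 < p < \infty ,
\]
% valid under suitable regularity assumptions on the domain G (e.g., G bounded and Lipschitz).
```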
Nutrition knowledge, and use and understanding of nutrition information on food labels among consumers in the UK | Based on in-store observations in three major UK retailers, in-store interviews (2019) and questionnaires filled out at home and returned (921), use of nutrition information on food labels and its understanding were investigated. Respondents' nutrition knowledge was also measured, using a comprehensive instrument covering knowledge of expert recommendations, nutrient content in different food products, and calorie content in different food products. Across six product categories, 27% of shoppers were found to have looked at nutrition information on the label, with guideline daily amount (GDA) labels and the nutrition grid/table as the main sources consulted. Respondents' understanding of major front-of-pack nutrition labels was measured using a variety of tasks dealing with conceptual understanding, substantial understanding and health inferences. Understanding was high, with up to 87.5% of respondents being able to identify the healthiest product in a set of three. Differences between level of understanding and level of usage are explained by different causal mechanisms. Regression analysis showed that usage is mainly related to interest in healthy eating, whereas understanding of nutrition information on food labels is mainly related to nutrition knowledge. Both are in turn affected by demographic variables, but in different ways. |
Enhancing social collaborative filtering through the application of non-negative matrix factorization and exponential random graph models | Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea. |
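A minimal sketch of the soft-clustering step described above, factorizing a toy FoaF adjacency matrix with scikit-learn's NMF; the graph, the number of clusters and the row normalization are assumptions, and the exponential-random-graph refinement described in the abstract is omitted:

```python
# Minimal sketch: factorize a toy FoaF adjacency matrix with scikit-learn's NMF to get soft
# cluster memberships. The graph, the number of clusters and the row normalization are
# assumptions; the ERGM-based refinement described in the paper is omitted.
import numpy as np
from sklearn.decomposition import NMF

A = np.array([[0, 1, 1, 0, 0],   # adjacency matrix of one user's friend-of-a-friend graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

W = NMF(n_components=2, init="nndsvda", max_iter=500).fit_transform(A)
memberships = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # row i: soft membership of user i
print(np.round(memberships, 2))
```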
Cardiotocographic Diagnosis of Fetal Health based on Multiclass Morphologic Pattern Predictions using Deep Learning Classification | Medical complications of pregnancy and pregnancy-related deaths continue to remain a major global challenge today. Internationally, about 830 maternal deaths occur every day due to pregnancy-related or childbirth-related complications. In fact, almost 99% of all maternal deaths occur in developing countries. In this research, an alternative and enhanced artificial intelligence approach is proposed for cardiotocographic diagnosis of fetal assessment based on multiclass morphologic pattern predictions, including 10 target classes with imbalanced samples, using deep learning classification models. The developed model is used to distinguish and classify the presence or absence of multiclass morphologic patterns for outcome predictions of complications during pregnancy. The testing results showed that the developed deep neural network model achieved an accuracy of 88.02%, a recall of 84.30%, a precision of 85.01%, and an F-score of 0.8508 on average. Thus, the developed model can provide highly accurate and consistent diagnoses for fetal assessment regarding complications during pregnancy, thereby preventing and/or reducing the fetal mortality rate as well as the maternal mortality rate during and following pregnancy and childbirth, especially in low-resource settings and developing countries. Keywords: activation function; deep learning; deep neural network; dropout; ensemble learning; multiclass; regularization; cardiotocography; complications during pregnancy; fetal heart rate |
The Representation and Matching of Pictorial Structures | The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of "goodness" of matching or detection. |
Science Policies and the New International Order | The new international political order has eroded the former bipolarity that was the mark of the postwar international system. Several cleavages within the Western and Eastern blocs, together with changes in the economic and political status of developing countries, have provided a much more complex system. The new context led to the disappearance of the former division of scientific and technological labor based on the dominance of the United States and the Soviet Union. Attempts to build some kind of "scientific internationalism" as an extension of the new international economic and political orders aspired to does not seem to have much future, as witnessed by the recent Vienna UN Conference on Science and Technology for Development. Several different national science policies seem, however, possible, and are briefly discussed, including policies of technological feats, alternative technologies, and comparative advantages. |
Pre-Processing Steps for Segmentation of Retinal Blood Vessels | Segmentation of blood vessels in retinal images is an important part of retinal image analysis for the diagnosis and treatment of eye diseases in large screening systems. In this paper, we addressed the problem of background and noise extraction from retinal images. Blood vessels usually have a central light reflex and poor local contrast, hence the results yielded by blood vessel segmentation algorithms are not satisfactory. We used different preprocessing steps, which include central light reflex removal, background homogenization and vessel enhancement, to make the retinal image noise-free for post-processing. We used mean and Gaussian filtering along with the Top-Hat transformation for noise extraction. The preprocessing steps were applied to 40 retinal images of the publicly available DRIVE database. Results show that darker retinal structures such as blood vessels, the fovea, and any microaneurysms or hemorrhages are enhanced compared with the original retinal image, while brighter structures such as the optic disc and any exudates are removed. The presented technique will improve automatic fundus image analysis and will also be very useful to eye specialists in their visual examination of the retina. |
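A rough OpenCV sketch of such a preprocessing chain is shown below (green channel, background homogenization by subtracting a smoothed background estimate, and a black top-hat for vessel enhancement); the file name and kernel sizes are illustrative guesses rather than the paper's exact settings:

```python
# Rough sketch with assumed file name and kernel sizes; the paper's exact filter parameters
# and its central-light-reflex removal step are not reproduced here.
import cv2
import numpy as np

img = cv2.imread("retina.png")            # e.g. a DRIVE image (illustrative path)
green = img[:, :, 1]                      # vessels show the best contrast in the green channel

# Background homogenization: subtract a large-scale background estimate.
background = cv2.GaussianBlur(cv2.medianBlur(green, 25), (0, 0), sigmaX=15)
diff = green.astype(np.int16) - background.astype(np.int16)
homogenized = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Vessel enhancement: a black top-hat highlights dark elongated structures such as vessels.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
vessels_enhanced = cv2.morphologyEx(homogenized, cv2.MORPH_BLACKHAT, kernel)
cv2.imwrite("vessels_enhanced.png", vessels_enhanced)
```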
PATHWAYS TO 40%-EFFICIENT CONCENTRATOR PHOTOVOLTAICS | Multijunction solar cells for terrestrial concentrator applications have reached the point at which the next set of technology improvements are likely to push cell efficiencies over 40%. This paper discusses the semiconductor device research paths being investigated with the aim of reaching this efficiency milestone. Lattice-matched (LM) GaInP/ GaInAs/ Ge 3-junction cells have achieved the highest independently confirmed efficiency for a photovoltaic device, at 39.0% at 236 suns, 25°C under the standard AM1.5D, low-AOD terrestrial spectrum. Lattice-mismatched, or metamorphic (MM), materials offer still higher potential efficiencies, if the crystal quality can be maintained. Theoretical efficiencies well over 50% are possible for a MM GaInP/ 1.17-eV GaInAs/ Ge 3-junction cell limited by radiative recombination at 500 suns. The bandgap – open circuit voltage offset, (Eg/q) – Voc, is used as a valuable theoretical and experimental tool to characterize multijunction cells with subcell bandgaps ranging from 0.7 to 2.1 eV. Experimental results are presented for prototype 6-junction AlGaInP/ GaInP/ AlGaInAs/ GaInAs/ GaInNAs/ Ge cells employing an active ~1.1-eV dilute nitride GaInNAs subcell, with active-area efficiency greater than 23% and over 5.3 V open-circuit voltage under the 1-sun AM0 space spectrum. Such cell designs have theoretical efficiencies under the terrestrial spectrum at 500 suns concentration exceeding 55% efficiency, even for lattice-matched designs. Through a combination of device structure advances under investigation in research groups around the world, the goal of a practical 40%-efficient photovoltaic cell is near. |
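The bandgap-to-open-circuit-voltage offset referred to above is, in generic notation (not copied from the paper):

```latex
% The bandgap-to-open-circuit-voltage offset, in generic notation
% (E_g: subcell bandgap, q: elementary charge, V_oc: open-circuit voltage):
\[
W_{oc} \;=\; \frac{E_g}{q} \;-\; V_{oc},
\]
% which stays roughly constant across subcell bandgaps for high-quality junctions and so
% serves as a convenient figure of merit when comparing subcells of different materials.
```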
Late cardiomyopathy in childhood acute myeloid leukemia survivors: a study from the L.E.A. program. | Late cardiomyopathy in childhood acute myeloid leukemia survivors: a study from the LEA program. Vincent Barlogis, Pascal Auquier, Yves Bertrand, Pascal Chastagner, Dominique Plantaz, Maryline Poiree, Justyna Kanold, Julie Berbis, Claire Oudin, Camille Vercasson, et al. |
Effect of preoperative abstinence on poor postoperative outcome in alcohol misusers: randomised controlled trial. | OBJECTIVE
To evaluate the influence of preoperative abstinence on postoperative outcome in alcohol misusers with no symptoms who were drinking the equivalent of at least 60 g ethanol/day.
DESIGN
Randomised controlled trial.
SETTING
Copenhagen, Denmark.
SUBJECTS
42 alcoholic patients without liver disease admitted for elective colorectal surgery.
INTERVENTIONS
Withdrawal from alcohol consumption for 1 month before operation (disulfiram controlled) compared with continuous drinking.
MAIN OUTCOME MEASURES
Postoperative complications requiring treatment within the first month after surgery. Perioperative immunosuppression measured by delayed type hypersensitivity; myocardial ischaemia and arrhythmias measured by Holter tape recording; episodes of hypoxaemia measured by pulse oximetry. Responses to stress during the operation were assessed by heart rate, blood pressure, serum concentration of cortisol, and plasma concentrations of glucose, interleukin 6, and catecholamines.
RESULTS
The intervention group developed significantly fewer postoperative complications than the continuous drinkers (31% v 74%, P=0.02). Delayed type hypersensitivity responses were better in the intervention group before (37 mm2 v 12 mm2, P=0.04), but not after surgery (3 mm2 v 3 mm2). Development of postoperative myocardial ischaemia (23% v 85%) and arrhythmias (33% v 86%) on the second postoperative day as well as nightly hypoxaemic episodes (4 v 18 on the second postoperative night) occurred significantly less often in the intervention group. Surgical stress responses were lower in the intervention group (P≤0.05).
CONCLUSIONS
One month of preoperative abstinence reduces postoperative morbidity in alcohol abusers. The mechanism is probably reduced preclinical organ dysfunction and reduction of the exaggerated response to surgical stress. |
Experimental Study of an Optimal-Control- Based Framework for Trajectory Planning, Threat Assessment, and Semi-Autonomous Control of Passenger Vehicles in Hazard Avoidance Scenarios | This paper describes the design of an optimal-control-based active safety framework that performs trajectory planning, threat assessment, and semiautonomous control of passenger vehicles in hazard avoidance scenarios. The vehicle navigation problem is formulated as a constrained optimal control problem with constraints bounding a navigable region of the road surface. A model predictive controller iteratively plans an optimal vehicle trajectory through the constrained corridor. Metrics from this “best-case” scenario establish the minimum threat posed to the vehicle given its current state. Based on this threat assessment, the level of controller intervention required to prevent departure from the navigable corridor is calculated and driver/controller inputs are scaled accordingly. This approach minimizes controller intervention while ensuring that the vehicle does not depart from a navigable corridor of travel. It also allows for multiple actuation modes, diverse trajectory-planning objectives, and varying levels of autonomy. Experimental results are presented here to demonstrate the framework’s semiautonomous performance in hazard avoidance scenarios. |
Accelerating LINPACK with MPI-OpenCL on Clusters of Multi-GPU Nodes | OpenCL is an open standard for writing parallel applications for heterogeneous computing systems. Since its usage is restricted to a single operating system instance, programmers need to use a mix of OpenCL and MPI to program a heterogeneous cluster. In this paper, we introduce an MPI-OpenCL implementation of the LINPACK benchmark for a cluster with multi-GPU nodes. The LINPACK benchmark is one of the most widely used benchmark applications for evaluating high performance computing systems. Our implementation is based on High Performance LINPACK (HPL) and uses the blocked LU decomposition algorithm. We show that optimizations aimed at reducing CPU overhead are necessary to overcome the performance gap between the CPUs and the multiple GPUs. Our LINPACK implementation achieves 93.69 Tflops (46 percent of the theoretical peak) on the target cluster with 49 nodes, each node containing two eight-core CPUs and four GPUs. |
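For readers unfamiliar with the blocked LU decomposition that HPL-style implementations are built around, a compact NumPy sketch is given below; it omits the partial pivoting, look-ahead and inter-node communication a real HPL/LINPACK code requires, and the block size is an arbitrary choice:

```python
# Compact NumPy sketch of right-looking blocked LU, without the pivoting, look-ahead and
# inter-node communication a real HPL/LINPACK implementation needs; block size is arbitrary.
import numpy as np

def blocked_lu(A, nb=64):
    """In-place blocked LU (no pivoting); returns A holding L (unit lower) and U."""
    n = A.shape[0]
    for k in range(0, n, nb):
        b = min(nb, n - k)
        # 1. Unblocked factorization of the panel A[k:n, k:k+b].
        for j in range(k, k + b):
            A[j + 1:n, j] /= A[j, j]
            A[j + 1:n, j + 1:k + b] -= np.outer(A[j + 1:n, j], A[j, j + 1:k + b])
        if k + b < n:
            # 2. Triangular solve for the U12 block.
            L11 = np.tril(A[k:k + b, k:k + b], -1) + np.eye(b)
            A[k:k + b, k + b:n] = np.linalg.solve(L11, A[k:k + b, k + b:n])
            # 3. Trailing-matrix update: the large GEMM that dominates and maps well to GPUs.
            A[k + b:n, k + b:n] -= A[k + b:n, k:k + b] @ A[k:k + b, k + b:n]
    return A

M = np.random.rand(256, 256) + 256 * np.eye(256)   # diagonally dominant, safe without pivoting
LU = blocked_lu(M.copy())
L, U = np.tril(LU, -1) + np.eye(256), np.triu(LU)
print(np.allclose(L @ U, M))
```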
A Hybrid Approach for Credit Card Fraud Detection using Rough Set and Decision Tree Technique | To make the business accessible to a large number of customers worldwide, many companies small and big have put up their presence on the internet. Online businesses gave birth to e-commerce platforms which in turn use digital modes of transaction such as credit-card, debit card etc. This kind of digital transaction attracted millions of users to transact on the internet. Along came the risk of online credit card frauds. |
Coupled dynamics of voltage and calcium in paced cardiac cells. | We investigate numerically and analytically the coupled dynamics of transmembrane voltage and intracellular calcium cycling in paced cardiac cells using a detailed physiological model, and its reduction to a three-dimensional discrete map. The results provide a theoretical framework to interpret various experimentally observed modes of instability ranging from electromechanically concordant and discordant alternans to quasiperiodic oscillations of voltage and calcium. |
Issues in the credit risk modeling of retail markets | We survey the most recent BIS proposals for the credit risk measurement of retail credits in capital regulations. We also describe the recent trend away from relationship lending toward transactional lending in the small business loan arena. These trends create the opportunity to adopt more analytical, data-based approaches to credit risk measurement. We survey proprietary credit scoring models (such as Fair Isaac), as well as options-theoretic structural models (such as KMV and Moody’s RiskCalc), and reduced-form models (such as Credit Risk Plus). These models allow lenders and regulators to develop techniques that rely on portfolio aggregation to measure retail credit risk exposure. JEL classification: G21; G28 |
Structured Learning for Taxonomy Induction with Belief Propagation | We present a structured learning approach to inducing hypernym taxonomies using a probabilistic graphical model formulation. Our model incorporates heterogeneous relational evidence about both hypernymy and siblinghood, captured by semantic features based on patterns and statistics from Web n-grams and Wikipedia abstracts. For efficient inference over taxonomy structures, we use loopy belief propagation along with a directed spanning tree algorithm for the core hypernymy factor. To train the system, we extract sub-structures of WordNet and discriminatively learn to reproduce them, using adaptive subgradient stochastic optimization. On the task of reproducing sub-hierarchies of WordNet, our approach achieves a 51% error reduction over a chance baseline, including a 15% error reduction due to the non-hypernym-factored sibling features. On a comparison setup, we find up to 29% relative error reduction over previous work on ancestor F1. |
Automatic detection of acute myeloid leukemia from microscopic blood smear image | Cancer of the blood-forming tissues is called leukemia. This disease hinders the body's ability to fight infection. Leukemia can be categorized into many types; Acute Lymphoblastic Leukemia (ALL) and Acute Myeloid Leukemia (AML) are the two main types. AML affects the growth of blood cells and the bone marrow, and an accumulation of myeloid blasts in the bone marrow is one of its main characteristics. In this research, a novel method is analyzed to detect the presence of AML. The paper proposes a technique that automatically detects and segments the nucleus from white blood cells (WBCs) in microscopic blood smear images. Segmentation and clustering are done using the K-means algorithm, while classification is done using a Support Vector Machine (SVM) with feature reduction. |
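A condensed sketch of such a pipeline (K-means colour clustering to isolate the nucleus, simple shape features, then an SVM) is given below; the feature choice, number of clusters and toy training rows are illustrative assumptions, not the paper's exact setup:

```python
# Condensed sketch: K-means colour clustering to isolate the nucleus, simple shape features,
# then an SVM. Feature choice, k and the toy training rows are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def nucleus_mask(bgr_image, k=3):
    """Cluster pixel colours and keep the darkest cluster (typically the stained nucleus)."""
    pixels = bgr_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    darkest = np.argmin([pixels[labels == c].mean() for c in range(k)])
    return (labels == darkest).reshape(bgr_image.shape[:2])

def shape_features(mask):
    area = float(mask.sum())
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    perimeter = sum(cv2.arcLength(c, True) for c in contours)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    return [area, perimeter, circularity]

# In practice X, y come from shape_features(nucleus_mask(img)) over labelled smear images.
X = [[5200.0, 300.0, 0.72], [2100.0, 190.0, 0.65]]   # made-up feature rows
y = [1, 0]                                           # 1 = AML-positive, 0 = negative
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[4800.0, 280.0, 0.77]]))
```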
The European Heart Failure Self-care Behaviour scale revised into a nine-item scale (EHFScB-9): a reliable and valid international instrument. | AIMS
Improved self-care is the goal of many heart failure (HF) management programmes. The 12-item European Heart Failure Self-Care Behaviour Scale (EHFScB scale) was developed and tested to measure patient self-care behaviours. It is now available in 14 languages. The aim of this study was to further determine reliability and validity of the EHFScB scale.
METHODS AND RESULTS
Data from 2592 HF patients (mean age 73 years, 63% male) from six countries were analysed. Internal consistency was determined by Cronbach's alpha. Validity was established by (1) interviews with HF experts and with HF patients; (2) item analysis; (3) confirmatory factor analysis; and (4) analysing the relationship between the EHFScB scale and scales measuring quality of life and adherence. Internal consistency of the 12-item scale was 0.77 (0.71-0.85). After factor analyses and critical evaluation of both psychometric properties and content of separate items, a nine-item version was further evaluated. The reliability estimates for the total nine-item scale (EHFScB-9) was satisfactory (0.80) and Cronbach's alpha varied between 0.68 and 0.87 in the different countries. One reliable subscale was defined (consulting behaviour) with a Cronbach's alpha of 0.85. The EHFScB-9 measures a different construct than quality of life (r = 0.18) and adherence (r = 0.37).
CONCLUSION
The 12-item EHFScB scale was revised into the nine-item EHFScB-9, which can be used as an internally consistent and valid instrument to measure HF-related self-care behaviour. |
Mining infrequent patterns in data stream | In recent years, research has focused on mining infrequent patterns rather than frequent patterns. Mining infrequent patterns plays a vital role in detecting abnormal events. In this paper, an algorithm named Infrequent Pattern Miner for Data Streams (IPM-DS) is proposed for mining nonzero infrequent patterns from data streams. The proposed algorithm adopts an FP-growth based approach for generating all infrequent patterns. The proposed algorithm (IPM-DS) is evaluated using a health data set collected from wearable physiological sensors that measure vital parameters such as Heart Rate (HR), Breathing Rate (BR), Oxygen Saturation (SPO2) and Blood Pressure (BP), and also with two publicly available data sets, E. coli and Wine, from the UCI repository. The experimental results show that the proposed algorithm generates all possible infrequent patterns in less time. |
Advances and perspectives of on-orbit geometric calibration for high-resolution optical satellites | On-orbit geometric calibration is a critical and essential step to guarantee the high geometric positioning accuracy of high-resolution optical satellite imagery. In this paper, we first review and summarize the on-orbit geometric calibration methods for high-resolution optical satellites and then analyze their advantages and disadvantages. Finally, we present our perspective on on-board geometric calibration that can be implemented automatically in real time. From this overview of geometric calibration developed over the past decades, two conclusions can be drawn: (1) the current on-orbit geometric calibration technology based on ground control points (GCPs) is mature and can largely improve the geometric positioning accuracy of satellite imagery; (2) a new, innovative on-board geometric calibration method for real-time improvement of satellite imagery positioning accuracy is needed. In the end, this paper presents our technical framework for an on-board geometric calibration system. |
My Country and My People and Sydney Opera House: The missing link | It is known that Jorn Utzon (1918–2008), the principal architect for the Sydney Opera House project (1957–66), had a lifetime obsession for Chinese art and architecture. However, previous studies did not explore the relationship between Utzon and his venerated Chinese writer Lin Yutang (1895–1976). How Utzon represented the ideas and ideals he received from Lin Yutang's conceptualization of Chinese art and architecture in My Country and My People (1935) has not been systematically documented. To this end, this article examines the role of Lin Yutang's work in Utzon's architectural career generally and the architect's Sydney Opera House design in particular. It argues that My Country and My People nurtured young Utzon's own architectural philosophy, as reflected in his early manifestoes and design projects. Eventually, Lin Yutang's Chinese aesthetics encapsulated in calligraphy, painting and architecture helped Utzon to initiate, articulate and further communicate the design principles of his Sydney Opera House, as well as several other important architectural works before and after. Although Utzon never fully realized his Opera House due to forced resignation in 1966, the inspiration from Lin Yutang vividly remains in Utzon's yet to be finished masterpiece. |
Towards 3D object recognition via classification of arbitrary object tracks | Object recognition is a critical next step for autonomous robots, but a solution to the problem has remained elusive. Prior 3D-sensor-based work largely classifies individual point cloud segments or uses class-specific trackers. In this paper, we take the approach of classifying the tracks of all visible objects. Our new track classification method, based on a mathematically principled method of combining log odds estimators, is fast enough for real time use, is non-specific to object class, and performs well (98.5% accuracy) on the task of classifying correctly-tracked, well-segmented objects into car, pedestrian, bicyclist, and background classes. We evaluate the classifier's performance using the Stanford Track Collection, a new dataset of about 1.3 million labeled point clouds in about 14,000 tracks recorded from an autonomous vehicle research platform. This dataset, which we make publicly available, contains tracks extracted from about one hour of 360-degree, 10Hz depth information recorded both while driving on busy campus streets and parked at busy intersections. |
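One standard way to combine per-frame log-odds estimates into a track-level decision, under a naive-Bayes independence assumption (a generic form, not necessarily the paper's exact normalization), is:

```latex
% Generic naive-Bayes combination of per-frame log odds into a track-level log odds; the
% paper's estimator is in this spirit but its exact form is not reproduced here.
\[
\log\frac{P(c \mid z_{1:T})}{P(\neg c \mid z_{1:T})}
= \log\frac{P(c)}{P(\neg c)}
+ \sum_{t=1}^{T}\left[ \log\frac{P(c \mid z_t)}{P(\neg c \mid z_t)} - \log\frac{P(c)}{P(\neg c)} \right],
\]
% where z_t is the segment observed at frame t and c is the candidate class (car, pedestrian, ...).
```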
A Clinical Study of Pharyngolaryngectomy with Total Esophagectomy: Postoperative Complications, Countermeasures, and Prognoses. | OBJECTIVE
Patients with advanced hypopharyngeal or cervical esophageal cancer have a comparatively high risk of also developing thoracic esophageal cancer. Pharyngolaryngectomy with total esophagectomy is highly invasive, and few reports about it exist. We examined the postoperative complications and respective countermeasures and prognoses of patients who underwent pharyngolaryngectomy with total esophagectomy.
STUDY DESIGN
Case series with chart review.
SETTING
Department of Head and Neck Oncology, Cancer Institute Hospital of Japanese Foundation for Cancer Research, Japan.
SUBJECTS AND METHODS
We examined the postoperative complications and respective countermeasures and prognoses of 40 patients who underwent pharyngolaryngectomy with total esophagectomy in our hospital.
RESULTS
Postoperative complications were observed in 23 patients (57.5%) and consisted of 8 groups: tracheal region necrosis in 5 patients; neck abscess formation/wound infection in 5; fistula in 4; tracheostomy suture leakage in 2; ileus in 2; lymphorrhea in 2; pulmonary complications in 2; and other complications, including hemothorax, tracheoinnominate artery fistula, temporary cardiac arrest due to intraoperative mediastinum operation, methicillin-resistant Staphylococcus aureus enteritis, and sepsis, in 1 patient each. A lethal complication (brachiocephalic vein hemorrhage due to tracheostomy suture leakage and hemorrhagic shock due to tracheoinnominate artery fistula) occurred in 2 (5%) patients. The crude 5-year survival rate was 48.6%.
CONCLUSIONS
Serious postoperative complications were related to tracheostomaplasty. Although pharyngolaryngectomy with total esophagectomy is highly invasive, we believe that our outlined treatment method is the most appropriate for cases of advanced hypopharyngeal or cervical esophageal cancer that also requires concurrent surgery for esophageal cancer. |
HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment | We introduce HyperLex—a data set and evaluation resource that quantifies the extent of the semantic category membership, that is, type-of relation, also known as hyponymy–hypernymy or lexical entailment (LE) relation between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems. |
Learning Feature Representations with K-Means | Many algorithms are available to learn deep hierarchies of features from unlabeled data, especially images. In many cases, these algorithms involve multi-layered networks of features (e.g., neural networks) that are sometimes tricky to train and tune and are difficult to scale up to many machines effectively. Recently, it has been found that K-means clustering can be used as a fast alternative training method. The main advantage of this approach is that it is very fast and easily implemented at large scale. On the other hand, employing this method in practice is not completely trivial: K-means has several limitations, and care must be taken to combine the right ingredients to get the system to work well. This chapter will summarize recent results and technical tricks that are needed to make effective use of K-means clustering for learning large-scale representations of images. We will also connect these results to other well-known algorithms to make clear when K-means can be most useful and convey intuitions about its behavior that are useful for debugging and engineering new systems. |
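A compressed sketch of the patch-based pipeline the chapter describes is shown below: whiten small patches, learn a dictionary with plain K-means, then encode with the "triangle" soft assignment. The patch size, number of centroids, whitening regularizer and random stand-in data are placeholders:

```python
# Compressed sketch of the patch-based pipeline; patch size, k, the whitening regularizer and
# the random stand-in data are placeholders for real image patches.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.standard_normal((10000, 36))                  # stand-in for 6x6 image patches

# Per-patch normalization followed by (approximate) ZCA whitening.
patches -= patches.mean(axis=1, keepdims=True)
patches /= patches.std(axis=1, keepdims=True) + 1e-8
vals, vecs = np.linalg.eigh(np.cov(patches, rowvar=False))
patches = patches @ (vecs @ np.diag(1.0 / np.sqrt(vals + 0.1)) @ vecs.T)

km = KMeans(n_clusters=100, n_init=3).fit(patches)          # the K-means "dictionary"

def triangle_features(x):
    """Soft assignment: f_k = max(0, mean distance - distance to centroid k)."""
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    return np.maximum(0.0, d.mean() - d)

print(triangle_features(patches[0]).shape)                   # (100,) feature vector per patch
```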
Approximation in mechanism design | This talk surveys three challenge areas for mechanism design and describes the role approximation plays in resolving them. Challenge 1: optimal mechanisms are parameterized by knowledge of the distribution of agents' private types. Challenge 2: optimal mechanisms require precise distributional information. Challenge 3: in multi-dimensional settings economic analysis has failed to characterize optimal mechanisms. The theory of approximation is well suited to address these challenges. While the optimal mechanism may be parameterized by the distribution of agents' private types, there may be a single mechanism that approximates the optimal mechanism for any distribution. While the optimal mechanism may require precise distributional assumptions, there may be an approximately optimal mechanism that depends only on natural characteristics of the distribution. While the multi-dimensional optimal mechanism may resist precise economic characterization, there may be a simple description of approximately optimal mechanisms. Finally, these approximately optimal mechanisms, because of their simplicity and tractability, may be much more likely to arise in practice, thus making the theory of approximately optimal mechanisms more descriptive than that of (precisely) optimal mechanisms. The talk will cover positive resolutions to these challenges with emphasis on basic techniques, relevance to practice, and future research directions. |
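One way to make the first claim precise, offered here only as a hedged formalization and not as the talk's own definition, is the prior-independent approximation factor:

```latex
% Hedged formalization (not from the talk itself): a mechanism M is a
% prior-independent \beta-approximation if its expected revenue is within a
% factor \beta of the optimal, distribution-tailored mechanism for every F.
\[
  \inf_{F}\;
  \frac{\mathbb{E}_{v \sim F}\left[\mathrm{Rev}_{M}(v)\right]}
       {\mathbb{E}_{v \sim F}\left[\mathrm{Rev}_{\mathrm{OPT}_F}(v)\right]}
  \;\ge\; \frac{1}{\beta}.
\]
```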
ARmatika: 3D game for arithmetic learning with Augmented Reality technology | Learning mathematics is one of the most important factors in determining a learner's future. However, mathematics is often perceived as complicated and is disliked by many learners. An application that uses appropriate technology to create attractive visualization effects is therefore needed to draw more attention from learners. Applying Augmented Reality technology within a digital game is one such effort to create better visualization effects. In addition, the system is connected to a leaderboard web service in order to improve learning motivation through a competitive process. The implementation of Augmented Reality is shown to improve students' learning motivation; moreover, the Augmented Reality features of this game are highly preferred by students. |
Resource Allocation in Multiuser OFDMA System: Feasibility and Optimization Study | This paper studies the resource allocation problem in a downlink multiuser Orthogonal Frequency Division Multiple Access (OFDMA) system. The problem is formulated as a sum-rate (cell throughput) maximization problem with strict data rate constraints imposed by the users and a peak power constraint at the base station. We discuss the feasibility of the problem and propose two algorithms for when the feasibility conditions are met. In the first algorithm, subcarrier and power allocation decisions are made by jointly considering all the constraints. In the second algorithm, subcarrier allocation decisions are based on the data rate constraints, and the transmit power constraint is then enforced on the allocated subcarrier set. The second algorithm, although suboptimal, has almost identical performance to the first while offering a huge reduction in complexity. |
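To give a feel for the second algorithm's structure, the sketch below (a construction under stated assumptions, not the paper's algorithm) assigns each subcarrier greedily to a user that has not yet met its rate demand and then spreads the power budget evenly over the allocation; the channel gains, rate demands, and equal power split are all illustrative.

```python
# Toy greedy subcarrier allocation with per-user rate demands (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_users, n_sub, P_total, noise = 3, 16, 1.0, 1e-3
gains = rng.exponential(1.0, size=(n_users, n_sub))   # stand-in channel gains |h|^2
rate_req = np.array([4.0, 6.0, 5.0])                  # per-user rate constraints (bits/symbol)

alloc = {k: [] for k in range(n_users)}
rates = np.zeros(n_users)
p_sub = P_total / n_sub                               # equal power split; water-filling would refine this
for n in range(n_sub):
    needy = [k for k in range(n_users) if rates[k] < rate_req[k]]
    cand = needy if needy else list(range(n_users))   # once all demands are met, maximize sum-rate
    k = max(cand, key=lambda u: gains[u, n])          # user with the best channel on subcarrier n
    alloc[k].append(n)
    rates[k] += np.log2(1.0 + p_sub * gains[k, n] / noise)

print("per-user rates:", np.round(rates, 2), "sum-rate:", round(float(rates.sum()), 2))
```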
The Classic: Congenital Club Foot: The Results of Treatment | This Classic article is a reprint of the original work by Ignacio V. Ponseti and Eugene N. Smoley, Congenital Club Foot: The Results of Treatment. An accompanying biographical sketch on Ignacio V. Ponseti, MD, is available at DOI 10.1007/s11999-009-0719-8 and a second Classic article is available at DOI 10.1007/s11999-009-0721-1. This article is © 1963 by the Journal of Bone and Joint Surgery, Inc., and is reprinted with permission from Ponseti IV, Smoley EN. Congenital Club Foot: The Results of Treatment. J Bone Joint Surg Am. 1963;45:261–344. © The Association of Bone and Joint Surgeons 2009. Richard A. Brand MD & Clinical Orthopaedics and Related Research, 1600 Spruce Street, Philadelphia, PA 19103, USA e-mail: [email protected] Since 1948, a uniform system of treatment has been applied to all cases of congenital club foot on the Orthopedic Service of the State University of Iowa. Our aim has been to obtain a supple, well corrected foot in the shortest possible time. An end-result study of severe club-foot deformities in otherwise normal children treated initially in this department from 1948 to 1956, with a follow-up period from five to twelve years, is here presented. Three hundred and twenty-two patients with club-foot deformity were treated during this period. The following were not included in this study: One hundred and forty-nine patients had been originally treated in other clinics and were referred to us for further correction. Ten patients had arthrogryposis; four had a complete or partial absence of the tibia; and eighteen had a myelomeningocele. The sacrum was absent in two and congenital constriction was present in the legs above the malleoli in two patients. In forty-six patients, the foot deformity was mild and was corrected by simple manipulations or the application of one to three plaster casts. Of the remaining ninety-one otherwise normal children with severe untreated club-foot deformities, twenty-four were lost to follow-up, usually at the end of the initial treatment. We were able to evaluate the results of treatment in only sixty-seven patients with a total of ninety-four club feet. All these deformities were severe, although many variations in the degree of rigidity of the feet were present. The age of the patient at the onset of treatment ranged from one week to six months, and the average age was one month. Of the sixty-seven patients studied, ten were female and fifty-seven were male. The deformity was, therefore, almost six times as prevalent in male as in female children. Forty patients had only one foot deformed (60 per cent) and twenty-seven patients had both feet deformed (40 per cent). In the patients with unilateral involvement, the right foot was deformed in eighteen and the left foot in twenty-two cases. Anteroposterior and lateral roentgenograms and photographs of the feet of all patients were made at the onset of treatment and again at the time of the final |
Modeling and Reasoning about Preferences with CP-nets | Structured modeling of ceteris paribus qualitative preference statements, grounded in the notions of preferential independence, has tremendous potential to provide effective and efficient tools for modeling and reasoning about decision maker preferences in real-life applications. Within the AI and philosophical logic communities, past work on qualitative preference representation has tended to emphasize the potential of modeling ceteris paribus preferences. However, the work of Boutilier, Brafman, Hoos, and Poole on CP-nets was the first serious attempt to exploit preferential independence for compactly and efficiently representing such preferential statements. In this work we discuss various decision theoretic tools for preference representation, and examine work done in artificial intelligence and philosophical logic in an effort to understand how to treat, model, and reason about ceteris paribus preferential statements. We perform an extensive analysis of CP-nets, a somewhat unique graphical representation of qualitative conditional preferential statements. The first part of this work is dedicated to a computational analysis of various preferential queries in CP-nets. In particular, we analyze dominance queries (testing whether an outcome is preferred to another outcome) in binary-valued CP-nets. We investigate the concrete complexity of the algorithm by Boutilier et al. for answering dominance queries in binary-valued, tree-structured CP-nets, and show that the time complexity of this problem is quadratic in the number of variables. Extending the class of tractable dominance queries, we design a polynomial-time algorithm for answering dominance queries in binary-valued, polytree CP-nets. We show that answering dominance queries is NP-complete for directed-path singly connected and for bounded-connectivity CP-nets. In particular, these complexity results on dominance queries allow us to identify various computational properties of constraint-based outcome optimization in CP-nets. In addition, we con- |
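Dominance testing of the kind analyzed above can be pictured as a search for a chain of improving flips. The toy example below is an illustration, not an algorithm from the dissertation, and brute-force search like this is only practical for very small binary CP-nets; the network, its conditional preference tables, and the queried outcomes are made up.

```python
# Toy brute-force dominance test in a tiny binary CP-net via improving flips.
from collections import deque

# CP-net over A, B, C: A unconditional; B depends on A; C depends on B.
# Each table maps a parent assignment to the preferred value of the variable.
cpn = {
    "A": ((), {(): 1}),                      # value 1 of A preferred unconditionally
    "B": (("A",), {(1,): 1, (0,): 0}),       # prefer B to match A
    "C": (("B",), {(1,): 0, (0,): 1}),       # prefer C to differ from B
}
order = list(cpn)

def improving_flips(outcome):
    for i, var in enumerate(order):
        parents, cpt = cpn[var]
        ctx = tuple(outcome[order.index(p)] for p in parents)
        preferred = cpt[ctx]
        if outcome[i] != preferred:
            yield outcome[:i] + (preferred,) + outcome[i + 1:]

def dominates(better, worse):
    seen, queue = {worse}, deque([worse])
    while queue:
        o = queue.popleft()
        if o == better:
            return True
        for nxt in improving_flips(o):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(dominates((1, 1, 0), (0, 0, 0)))   # True: improving flips on A then B reach (1, 1, 0)
```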
Enhanced cosmetic outcome with running horizontal mattress sutures. | BACKGROUND
Cutaneous sutures should provide good wound eversion, firm closure, and cosmetically elegant results. Simple running sutures are commonly employed in cutaneous surgery but may not always be effective in achieving wound eversion.
OBJECTIVE
We compared the cosmetic results of simple running nonabsorbable sutures with running horizontal mattress sutures in primary closures of facial defects.
METHODS
Fifty-five patients with facial Mohs surgery defects appropriate for primary multilayer repair were randomized into one of two arms. Either the superior or the inferior half of the wound was closed with a running horizontal mattress suture. The other half of the wound was closed with a traditional simple running suture. At 1 week, 6 weeks, and 6 months, the cosmetically superior half of the wound, if any, was blindly determined by the investigators.
RESULTS
The running horizontal mattress suture was significantly more cosmetically pleasing than the simple running suture. Forty-seven patients completed the study. At the 6-month follow-up, 25 patients did better with the horizontal suture and 5 did worse, and with 17 patients, there was no clinically perceptible difference. The 6-week scores predicted the outcome at 6 months, but the 1-week scores did not.
CONCLUSIONS
In primary closures of the face, the running horizontal mattress suture is a cosmetically elegant alternative to a traditional running cutaneous suture. The final scar appears smoother and flatter than those produced by traditional simple running sutures. |
A self-organizing neural network architecture for learning human-object interactions | The visual recognition of transitive actions comprising human-object interactions is a key component for artificial systems operating in natural environments. This challenging task requires jointly the recognition of articulated body actions as well as the extraction of semantic elements from the scene such as the identity of the manipulated objects. In this paper, we present a self-organizing neural network for the recognition of human-object interactions from RGB-D videos. Our model consists of a hierarchy of Grow-When-Required (GWR) networks that learn prototypical representations of body motion patterns and objects, accounting for the development of action-object mappings in an unsupervised fashion. We report experimental results on a dataset of daily activities collected for the purpose of this study as well as on a publicly available benchmark dataset. In line with neurophysiological studies, our self-organizing architecture exhibits higher neural activation for congruent action-object pairs learned during training sessions than for synthetically created incongruent ones. We show that our unsupervised model achieves competitive classification results on the benchmark dataset with respect to strictly supervised approaches. |
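The core Grow-When-Required step, stripped of the paper's hierarchy, habituation counters, and topology edges, can be sketched as follows; this is a generic GWR-style update written for illustration only, with the activity threshold, learning rate, and data chosen arbitrarily.

```python
# Condensed GWR-style sketch (not the paper's architecture): find the best-matching
# unit and insert a new prototype whenever its activity for the input is too low.
import numpy as np

class TinyGWR:
    def __init__(self, dim, activity_thr=0.6, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.random((2, dim))        # start with two random prototypes
        self.activity_thr = activity_thr
        self.lr = lr

    def step(self, x):
        d = np.linalg.norm(self.W - x, axis=1)
        b = int(np.argmin(d))                # best-matching unit
        activity = np.exp(-d[b])
        if activity < self.activity_thr:     # input poorly represented: grow the network
            self.W = np.vstack([self.W, (self.W[b] + x) / 2.0])
        else:                                # otherwise adapt the winner towards the input
            self.W[b] += self.lr * (x - self.W[b])
        return b

net = TinyGWR(dim=3)
for x in np.random.default_rng(1).random((200, 3)):
    net.step(x)
print("prototypes learned:", len(net.W))
```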
Intraexaminer repeatability and agreement in amplitude of accommodation measurements | Clinical measurement of the amplitude of accommodation (AA) provides an indication of maximum accommodative ability. To determine whether there has been a significant change in the AA, it is important to have a good idea of the repeatability of the measurement method used. The aim of the present study was to compare AA measurements made using three different subjective clinical methods: the push-up, push-down, and minus lens techniques. These methods differ in terms of the apparent size of the target, the end point used, or the components of the accommodation response stimulated. Our working hypothesis was that these methods are likely to show different degrees of repeatability such that they should not be used interchangeably. The AA of the right eye was measured on two separate occasions in 61 visually normal subjects of mean age 19.7 years (range 18 to 32). The repeatability of the tests and agreement between them were estimated by the Bland and Altman method. We determined the mean difference (MD) and the 95% limits of agreement for the repeatability study (COR) and for the agreement study (COA). The COR for the push-up, push-down, and minus lens techniques were ±4.76, ±4.00, and ±2.52D, respectively. Higher values of AA were obtained using the push-up procedure compared to the push-down and minus lens methods. The push-down method also yielded a larger mean AA than the minus lens method. MDs between the three methods were high in clinical terms, always over 1.75D, and the COA were wide, at least ±4.50D. The widest agreement interval was observed when we compared AA measurements made using minus lenses and the push-up method (±5.65D). The minus lens method exhibited the best repeatability, the smallest MD (−0.08D), and the smallest COR. Agreement between the three techniques was poor. |
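The quantities reported above (MD and the 95% limits) come straight from the Bland and Altman method. A small sketch with invented measurements, not the study's data, shows the arithmetic.

```python
# Bland-Altman quantities: mean difference and 95% limits (1.96 x SD of paired differences).
import numpy as np

def bland_altman(measure_a, measure_b):
    diffs = np.asarray(measure_a) - np.asarray(measure_b)
    md = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)     # half-width of the 95% limits of agreement
    return md, (md - half_width, md + half_width)

push_up    = [12.0, 10.5, 11.0, 13.5, 9.0]    # amplitude of accommodation (D), made up
minus_lens = [9.5, 9.0, 10.0, 11.0, 8.5]
md, limits = bland_altman(push_up, minus_lens)
print(f"MD = {md:.2f} D, 95% limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f}) D")
```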
SHOG - Spherical HOG Descriptors for Rotation Invariant 3D Object Detection | We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transform. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable, trainable voting scheme that forms the filter. We demonstrate the effectiveness of such dense spherical 3D descriptors in an exemplary detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance. |
An Investigation of Recurrent Neural Architectures for Drug Name Recognition | Drug name recognition (DNR) is an essential step in the Pharmacovigilance (PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical texts and classify them into predefined categories. State-of-the-art DNR approaches heavily rely on hand-crafted features and domain-specific resources which are difficult to collect and tune. For this reason, this paper investigates the effectiveness of contemporary recurrent neural architectures (the Elman and Jordan networks and a bidirectional LSTM with CRF decoding) at performing DNR straight from the text. The experimental results achieved on the authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional LSTM-CRF ranks close to highly dedicated, hand-crafted systems. |
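A generic skeleton of the bidirectional LSTM tagger (a sketch, not the paper's implementation) is shown below; a CRF layer for structured decoding would normally sit on top of the per-token emission scores, and the vocabulary size, tag set, and dimensions are placeholders.

```python
# Generic BiLSTM sequence tagger producing per-token emission scores for, e.g., BIO drug tags.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=50, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)   # emission scores; a CRF would decode these jointly

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)

model = BiLSTMTagger(vocab_size=1000, n_tags=5)
scores = model(torch.randint(0, 1000, (2, 12)))     # batch of 2 sentences, 12 tokens each
print(scores.shape)                                 # torch.Size([2, 12, 5])
```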
Genome-wide association study identifies variants at CLU and PICALM associated with Alzheimer's disease | We undertook a two-stage genome-wide association study of Alzheimer's disease involving over 16,000 individuals. In stage 1 (3,941 cases and 7,848 controls), we replicated the established association with the APOE locus (most significant SNP: rs2075650, p= 1.8×10−157) and observed genome-wide significant association with SNPs at two novel loci: rs11136000 in the CLU or APOJ gene (p= 1.4×10−9) and rs3851179, a SNP 5′ to the PICALM gene (p= 1.9×10−8). Both novel associations were supported in stage 2 (2,023 cases and 2,340 controls), producing compelling evidence for association with AD in the combined dataset (rs11136000: p= 8.5×10−10, odds ratio= 0.86; rs3851179: p= 1.3×10−9, odds ratio= 0.86). We also observed more variants associated at p< 1×10−5 than expected by chance (p=7.5×10−6), including polymorphisms at the BIN1, DAB1 and CR1 loci. Alzheimer's disease (AD) is the most common form of dementia, is highly heritable (heritability of up to 76%) but genetically complex1. Neuropathologically, the disease is characterized by extracellular senile plaques containing β-amyloid (Aβ) and intracellular neurofibrillary tangles containing hyperphosphorylated τ protein1. Four genes have been definitively implicated in its etiology. Mutations of the amyloid precursor protein (APP) gene and the presenilin 1 and 2 genes (PSEN1, PSEN2) cause rare, Mendelian forms of the disease usually with an early-onset. However, in the more common form of AD, only apolipoprotein E (APOE) has been established unequivocally as a susceptibility gene1. Aiming to identify novel AD loci, several genome-wide association studies (GWAS) have been conducted prior to the present study. All have identified strong evidence for association to APOE, but less convincing evidence implicating other genes2-9. This outcome is consistent with the majority of findings from GWAS of other common phenotypes, where susceptibility alleles typically have effect sizes with odds ratios (OR) of 1.5 or less, rather than of the magnitude for APOE (OR~3). Detecting such modest effects requires much larger samples than those that have been applied in the GWAS of AD to date10, which have all included fewer than 1,100 cases. Based upon the hypothesis that risk alleles for AD are likely to confer ORs in the range seen in other common diseases, we undertook a more powerful GWAS than has been undertaken to date. We established a collaborative consortium from Europe and the USA from which we were able to draw upon a combined sample of up to 19,000 subjects (before quality control) and conducted a two-stage study. In Stage 1, 14,639 subjects were genotyped on Illumina platforms. 5,715 samples were genotyped for the present study using the Illumina 610-quadchip; genotypes for the remaining subjects were either made available to us from population control datasets or through collaboration and were genotyped on the Illumina HumanHap550 or the HumanHap300 BeadChips. Prior to association analysis, all samples and genotypes underwent stringent quality control, which resulted in the elimination of 53,383 autosomal SNPs and 2,850 subjects. Thus, in Stage 1, we tested 529,205 autosomal SNPs for association in up to 11,789 subjects (3,941 AD cases, 7,848 controls of which 2,078 were elderly screened controls, see Supplementary Table 1). The genomic control inflation factor (λ)11 was 1.037, suggesting little evidence for residual stratification. 
In addition to the known association with the APOE locus, GWA analysis identified two novel loci at a genome-wide level of significance (see Fig. 1). Table 1 shows SNPs which were genome-wide significant (GWS) in stage 1; 13 GWS SNPs map within or close to the APOE locus on chromosome 19 (p values from 3×10−8 to 2×10−157) and the top 5 are shown in Table 1 (see Supplementary Table 2 for the complete list). The other two SNPs represent novel associations. One of the novel SNPs (rs11136000) is located within an intron of clusterin (CLU, also known as APOJ) on chromosome 8 (p= 1.4×10−9, OR= 0.840); the other SNP (rs3851179) is 88.5 kb 5′ to PICALM on chromosome 11 (p= 1.9×10−8, OR= 0.849). Note that there was no significant difference in allele frequencies between elderly, screened controls and population controls for these two SNPs. In stage 2, the two novel GWS SNPs were genotyped in an independent sample comprising 2,023 AD cases and 2,340 age-matched, cognitively screened controls (Supplementary Table 3). Both were independently associated in this sample (rs11136000: one-tailed p= 0.017, OR= 0.905; rs3851179: one-tailed p= 0.014, OR= 0.897). Meta-analysis of the stage 1 and 2 datasets also produced highly significant evidence of association (rs11136000: p= 8.5×10−10, OR= 0.861 and rs3851179: p= 1.3×10−9, OR= 0.859, two-tailed, Table 1) for the CLU and PICALM loci, respectively. We sought further evidence from the Translational Genomics Research Institute (TGEN) study9 and the Li et al. study8, two publicly available AD GWAS datasets, but neither of the novel GWS SNPs had been genotyped or could be imputed. As secondary analyses, we tested each novel finding for interaction with APOE status and for association with age at onset. No significant interactions of the novel SNPs with APOE status were observed influencing AD risk (rs11136000×APOE-ε4 interaction p= 0.674; rs3851179×APOE-ε4 interaction p= 0.735). Although we observed significant effects of the GWS SNPs on age at onset, these were limited to SNPs at the APOE locus (data not shown). In a preliminary attempt to attribute the source of the association to a functional variant, we used publicly available data to identify additional SNPs at each locus that were correlated through linkage disequilibrium (LD) with either novel GWS SNP or that might plausibly have functional effects (see Supplementary Table 4). A synonymous SNP (rs7982) in the CLU gene was in strong LD (r2= 0.95 in our extension sample) with the GWS SNP and showed a similar level of evidence for association with AD in the whole sample (meta-p= 8×10−10; stage 1 genotypes were imputed). This SNP is in exon 5 of the CLU gene, which codes for part of the beta chain of the protein and may influence a predicted exon splicing enhancer. We note that Tycko and colleagues12 previously published a negative association study on the CLU gene, analyzing 4 SNPs in an AD case-control sample of African-American, Hispanic and Caucasian/non-Hispanic individuals. Although they identified rs7982 through mutation screening (referred to as VB in their study), the SNP was not tested for association in their sample. The 4 SNPs that were analyzed were rare in Caucasians (minor allele frequency <2%) and there was very limited power to detect association in their Caucasian sub-sample of 53 AD cases and 43 controls.
As these 4 SNPs were not genotyped in the present GWAS, there is no overlap between the two studies. Several potentially functional SNPs were identified at the PICALM locus. Of these, two showed good evidence for association: rs561655, which is within a putative transcription factor binding site, and rs592297, which is a synonymous SNP in exon 5 of the gene that may influence a predicted exon splicing enhancer. However, neither of these SNPs showed the strength of evidence for association observed for rs3851179, the GWS SNP at the PICALM locus (i.e. rs561655: meta-p= 1×10−7 and rs592297: meta-p= 2×10−7). A number of SNPs in LD with rs3851179 and showing moderate evidence of association in the GWAS (p< 1×10−4) were also followed up, most notably rs541458. This SNP is 8 kb 5′ to the PICALM gene and was directly genotyped in the extension sample, the TGEN study and the Li et al. study, with p< 0.05 in each. Following meta-analysis, this is one of our most significant SNPs (meta-p= 8×10−10) and is also supported by the study of Amouyel et al. published in this issue (p= 3×10−3). Further genetic analyses will be required to characterize the true nature of the associations observed at these loci, which is beyond the scope of this paper. We also tested whether the number of significant associations observed in the GWAS exceeded what would be expected by chance. Having removed SNPs within the APOE, CLU and PICALM loci (see Methods), we focused on those which showed most evidence for association (p< 1×10−5). Approximately 13 independent signals were observed; fewer than 4 would be expected by chance (p= 7.5×10−6). Table 2 shows the loci implicated and provides strong evidence for association with the complement receptor 1 (CR1) gene, particularly in the context of the results of the Amouyel et al. study published in this issue. Also noteworthy are the bridging integrator 1 (BIN1) gene, which produces a protein involved in synaptic vesicle endocytosis13, and the disabled homolog 1 (DAB1) gene, whose product is involved with tyrosine phosphorylation and microtubule function in neurons14. These data thus provide strong evidence that there are several genes associated with AD which remain to be identified. We have also tested over 100 variants highlighted by previous AD GWA studies for association in our sample (see Supplementary note for full discussion). These are summarized in Supplementary Table 5. Until now, the APOE-ε4 allele was the only consistently replicated genetic risk factor for AD. It is therefore intriguing that we find compelling evidence for association with CLU, a gene that encodes another major brain apolipoprotein15, suggesting that susceptibility genes are not randomly distributed through functional pathways. The predominant form of clusterin is a secreted heterodimeric glycoprotein of 75-80 kDa. The single-copy gene spans about 16 kb on chromosome 8p21- |
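For orientation, the per-SNP odds ratios and the genomic-control inflation factor quoted above are simple summary statistics. The sketch below uses toy allele counts and simulated test statistics, not the study's genotype data, to show how such numbers are computed.

```python
# Toy per-SNP allelic odds ratio with a 95% CI, plus the genomic-control lambda.
import numpy as np
from scipy.stats import chi2

# minor/major allele counts in cases and controls for one illustrative SNP
a, b = 900, 3100     # cases: minor, major
c, d = 2200, 5800    # controls: minor, major
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)            # standard error of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

# genomic control: lambda = median observed 1-df chi-square / null median (~0.456);
# values near 1 suggest little population stratification.
stats = np.random.default_rng(0).chisquare(df=1, size=500_000)   # stand-in for per-SNP statistics
lam = np.median(stats) / chi2.ppf(0.5, df=1)
print(f"lambda = {lam:.3f}")
```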
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) | In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. |
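The Dice scores used above to quantify rater and algorithm agreement are straightforward to compute; the following sketch uses synthetic masks rather than BRATS data.

```python
# Dice overlap between a predicted segmentation and a reference mask (synthetic example).
import numpy as np

def dice(seg, truth):
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    denom = seg.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7
seg = np.logical_xor(truth, rng.random(truth.shape) > 0.95)   # a slightly perturbed prediction
print(f"Dice = {dice(seg, truth):.3f}")
```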
A randomized, multicenter, single-blinded trial comparing paclitaxel-coated balloon angioplasty with plain balloon angioplasty in drug-eluting stent restenosis: the PEPCAD-DES study. | OBJECTIVES
This study sought to define the impact of paclitaxel-coated balloon angioplasty for treatment of drug-eluting stent restenosis compared with uncoated balloon angioplasty alone.
BACKGROUND
Drug-coated balloon angioplasty is associated with favorable results for treatment of bare-metal stent restenosis.
METHODS
In this prospective, single-blind, multicenter, randomized trial, the authors randomly assigned 110 patients with drug-eluting stent restenoses located in a native coronary artery to paclitaxel-coated balloon angioplasty or uncoated balloon angioplasty. Dual antiplatelet therapy was prescribed for 6 months. Angiographic follow-up was scheduled at 6 months. The primary endpoint was late lumen loss. The secondary clinical endpoint was a composite of cardiac death, myocardial infarction attributed to the target vessel, or target lesion revascularization.
RESULTS
There was no difference in patient baseline characteristics or procedural results. Angiographic follow-up rate was 91%. Treatment with paclitaxel-coated balloon was superior to balloon angioplasty alone with a late loss of 0.43 ± 0.61 mm versus 1.03 ± 0.77 mm (p < 0.001), respectively. Restenosis rate was significantly reduced from 58.1% to 17.2% (p < 0.001), and the composite clinical endpoint was significantly reduced from 50.0% to 16.7% (p < 0.001), respectively.
CONCLUSIONS
Paclitaxel-coated balloon angioplasty is superior to balloon angioplasty alone for treatment of drug-eluting stent restenosis. (PEPCAD DES-Treatment of DES-In-Stent Restenosis With SeQuent® Please Paclitaxel Eluting PTCA Catheter [PEPCAD-DES]; NCT00998439). |