Naltrexone for the treatment of alcoholism: a meta-analysis of randomized controlled trials.
Many trials of naltrexone have been carried out in alcohol-dependent patients. This paper aims to systematically review its benefits, adverse effects, and discontinuation of treatment. We assessed and extracted the data of double-blind, randomized controlled trials (RCTs) comparing naltrexone with placebo or other treatment in people with alcoholism. Two primary outcomes were subjects who relapsed (including heavy drinking) and those who returned to drinking. Secondary outcomes were time to first drink, drinking days, number of standard drinks for a defined period, and craving. All outcomes were reported for the short, medium, and long term. Five common adverse effects and dropout rates in short-term treatment were also examined. A total of 2861 subjects in 24 RCTs presented in 32 papers were included. For short-term treatment, naltrexone significantly decreased relapses [relative risk (RR) 0.64, 95% confidence interval (CI) 0.51-0.82], but not return to drinking (RR 0.91, 95% CI 0.81-1.02). Short-term treatment with naltrexone significantly increased nausea, dizziness, and fatigue in comparison to placebo [RRs (95% CIs) 2.14 (1.61-2.83), 2.09 (1.28-3.39), and 1.35 (1.04-1.75)]. Naltrexone administration did not significantly diminish short-term discontinuation of treatment (RR 0.85, 95% CI 0.70-1.01). Naltrexone should be accepted as a short-term treatment for alcoholism. As yet, we do not know the appropriate duration of treatment continuation in an alcohol-dependent patient who responds to short-term naltrexone administration. To ensure that real-world treatment is as effective as the research findings, a form of psychosocial therapy should be given concomitantly to all alcohol-dependent patients receiving naltrexone.
Diagnostic Classifiers Revealing how Neural Networks Process Hierarchical Structure
We investigate how neural networks can be used for hierarchical, compositional semantics. To this end, we define the simple but nontrivial artificial task of processing nested arithmetic expressions and study whether different types of neural networks can learn to add and subtract. We find that recursive neural networks can implement a generalising solution, and we visualise the intermediate steps: projection, summation and squashing. We also show that gated recurrent neural networks, which process the expressions incrementally, perform surprisingly well on this task: they learn to predict the outcome of the arithmetic expressions with reasonable accuracy, although performance deteriorates with increasing length. To analyse what strategy the recurrent network applies, visualisation techniques are less insightful. Therefore, we develop an approach where we formulate and test hypotheses on what strategies these networks might be following. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ’diagnostic classifiers’ to test those predictions. Our results indicate the networks follow a strategy similar to our hypothesised ’incremental strategy’.
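As a minimal, self-contained sketch of the diagnostic-classifier idea (illustrative only, not the authors' code): export the recurrent network's hidden state at every time step, compute the value a hypothesised strategy predicts at that step (e.g., the running result under the incremental strategy), and fit a simple probe to see how well that value can be read off the hidden states. The GRU states and targets below are random placeholders standing in for values exported from a trained network.

```python
# Diagnostic-classifier sketch: probe whether a hypothesised quantity (here,
# the running result of the "incremental strategy") is linearly decodable
# from an RNN's hidden states. The states below are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, hidden_dim = 5000, 15              # one row per (expression, time step)
H = rng.normal(size=(n_samples, hidden_dim))  # hidden states (placeholder for GRU states)
y = rng.integers(-10, 10, size=n_samples)     # hypothesised running result at each step

H_tr, H_te, y_tr, y_te = train_test_split(H, y, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(H_tr, y_tr)      # the diagnostic model (here a regressor)
print("held-out R^2 of the probe:", probe.score(H_te, y_te))
# A high score would indicate the hypothesised quantity is linearly decodable
# from the hidden states; with random placeholder states it stays near zero.
```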
Noise and Cooling in Electronics Packages
The noise produced by cooling air passing through electronics packages arises from two sources. One source is the noise of the air-moving fan, of either an axial or centrifugal type. This noise may have both tonal and random components and is strongly dependent on the way that the fan is placed in the unit and on where its operating point lies on the fan's operating curve. Often, fans are the dominant noise sources. The flow of air produces random noise due to the turbulence generated throughout the unit. Because the turbulent airflow is also responsible for heat transfer between the components and the air stream, we can regard this part of the noise as the irreducible noise due to cooling. If fan noise were eliminated, this part of the noise would remain. There is a relation, therefore, between the irreducible noise and the cooling of the unit. But the fan noise must also be considered. The relation between total airflow-related noise and cooling requirements is developed in this paper for the irreducible noise.
Research on Combining Scrum with CMMI in Small and Medium Organizations
The agile method Scrum can effectively resolve many of the problems encountered when Capability Maturity Model Integration (CMMI) is implemented in small and medium software development organizations, but some specific needs remain hard to satisfy. Taking the characteristics of small and medium organizations into account, this paper analyzes in depth the feasibility of combining Scrum with CMMI. The analysis is useful for organizations building a new project management framework based on both CMMI and Scrum practices.
Do You Feel What I Feel? Social Aspects of Emotions in Twitter Conversations
We present a computational framework for understanding the social aspects of emotions in Twitter conversations. Using unannotated data and semi-supervised machine learning, we look at emotional transitions, emotional influences among the conversation partners, and patterns in the overall emotional exchanges. We find that conversational partners usually express the same emotion, which we term emotion accommodation, but when they do not, one of the conversational partners tends to respond with a positive emotion. We also show that tweets containing sympathy, apology, and complaint are significant emotion influencers. We verify the emotion classification component of our framework against a human-annotated corpus.
The respiratory health effects of nitrogen dioxide in children with asthma.
There is growing evidence that asthma symptoms can be aggravated, or events triggered, by exposure to indoor nitrogen dioxide (NO2) emitted from unflued gas heating. The impact of NO2 on the respiratory health of children with asthma was explored as a secondary analysis of a randomised community trial involving 409 households during the winter period in 2006 (June to September). Geometric mean indoor NO2 levels were 11.4 μg·m−3, while outdoor NO2 levels were 7.4 μg·m−3. Higher indoor NO2 levels (per logged unit increase) were associated with greater daily reports of lower (mean ratio 1.14, 95% CI 1.12-1.16) and upper respiratory tract symptoms (mean ratio 1.03, 95% CI 1.00-1.05), more frequent cough and wheeze, and more frequent reliever use during the day, but had no effect on preventer use. Higher indoor NO2 levels (per logged unit increase) were associated with a decrease in morning (-17.25 mL, 95% CI -27.63 to -6.68) and evening (-13.21 mL, 95% CI -26.03 to -0.38) forced expiratory volume in 1 s readings. Outdoor NO2 was not associated with respiratory tract symptoms, asthma symptoms, medication use or lung function measurements. These findings indicate that reducing NO2 exposure indoors is important in improving the respiratory health of children with asthma.
Effects of Galla chinensis extracts on UVB-irradiated MMP-1 production in hairless mice
Galla chinensis (GAC) is a natural traditional Chinese medicine that has been widely used in folk medicine. Although GAC compounds (mainly gallic acid and methyl gallate) possess strong antiviral, antibacterial, anticancer, and antioxidant activities, there is no report regarding topical or oral administration of GAC compounds on UVB irradiation-induced photoaging in hairless mice (SKH: HR-1). In the present study, we examined cell viability, intracellular reactive oxygen species (ROS), matrix metalloproteinase-1 (MMP-1), and interleukin-6 (IL-6) in skin fibroblasts and keratinocytes induced by UVB in vitro. We also studied skin damage by measuring skin thickness, elasticity, wrinkling and levels of protein MMP-1, elastin, procollagen type I, and transforming growth factor-β1 (TGF-β1) in hairless mouse skin chronically irradiated by UVB in vivo. GAC treatment significantly prevented skin photoaging by reducing the levels of ROS, MMP-1, and IL-6 and promoting production of elastin, procollagen type I, and TGF-β1. According to the results of H&E staining and Masson’s trichrome staining, GAC reduced skin thickness and wrinkle formation while it increased skin elasticity. The effects of GAC on UVB-induced skin photoaging may be due to suppressed MMP-1 expression. These findings could be referenced for the development of new agents that target UVB-induced photoaging.
Effects of renal sympathetic denervation on exercise blood pressure, heart rate, and capacity in patients with resistant hypertension.
Renal denervation reduces office blood pressure in patients with resistant hypertension. This study investigated the effects of renal denervation on blood pressure, heart rate, and chronotropic index at rest, during exercise, and at recovery in 60 patients (renal denervation group=50, control group=10) with resistant hypertension using a standardized bicycle exercise test protocol performed 6 and 12 months after renal denervation. After renal denervation, blood pressure at rest was reduced from 158±3/90±2 to 141±3/84±4 mm Hg (P<0.001 for systolic blood pressure/P=0.007 for diastolic blood pressure) after 6 months and 139±3/83±4 mm Hg (P<0.001/P=0.022) after 12 months. Exercise blood pressure tended to be lower at all stages of exercise at 6- and 12-month follow-up in patients undergoing renal denervation, although reaching statistical significance only at mild-to-moderate exercise levels (75-100 W). At recovery after 1 minute, blood pressure decreased from 201±4/95±2 to 177±4/88±2 (P<0.001/P=0.066) and 188±6/86±2 mm Hg (P=0.059/P=0.01) after 6 and 12 months, respectively. Heart rate was reduced after renal denervation from 71±3 bpm at rest, 128±5 bpm at maximum workload, and 96±5 bpm at recovery after 1 minute to 66±2 (P<0.001), 115±5 (P=0.107), and 89±3 bpm (P=0.008) after 6 months and to 69±3 (P=0.092), 122±7 (P=0.01), and 93±4 bpm (P=0.032) after 12 months. Mean exercise time increased from 6.59±0.33 to 8.4±0.32 (P<0.001) and 9.0±0.41 minutes (P=0.008), and mean workload increased from 93±2 to 100±2 (P<0.001) and 101±3 W (P=0.007) at 6- and 12-month follow-up, respectively. No changes were observed in the control group. In conclusion, renal denervation reduced blood pressure and heart rate during exercise, improved mean workload, and increased exercise time without impairing chronotropic competence.
Genetic predisposition to clinical manifestations in familial adenomatous polyposis with special reference to duodenal lesions
OBJECTIVE: In familial adenomatous polyposis (FAP), genetic predisposition for duodenal adenomatosis has not been investigated precisely. The aim of this study was to investigate the correlation between adenomatous polyposis coli (APC) gene mutation and duodenal adenomatosis in FAP. METHODS: APC gene mutation was determined by means of a protein truncation test in 34 patients from 25 families with FAP. The prevalence and grade of duodenal adenomatosis were compared among the proximal mutation group (exons 1–9), the distal mutation group (exons 10–15), and the undetermined group. The correlation between the course of duodenal adenomatosis and APC gene mutation was retrospectively investigated in 19 patients. RESULTS: The prevalence of duodenal adenomatosis was lower in the proximal mutation group (44%) than in the distal mutation (100%) and undetermined (83%) groups. In patients with positive duodenal adenomatosis, the endoscopic grade did not differ among the groups. The endoscopic grade increased in two of four patients in the proximal mutation group (50%), in three of 10 patients in the distal mutation group (30%), and in two of five patients in the undetermined group (40%). CONCLUSIONS: Truncating APC gene mutation proximal to exon 9 may contribute to the less frequent development of duodenal adenomatosis in FAP, but severity and progression of duodenal adenomatosis do not seem to be determined by APC gene mutation alone.
U.S. west coast revisited: An aeromagnetic perspective
A new compilation of magnetic data for the western conterminous United States and offshore areas provides significant information about crustal units and structures in the region. Features shown on the compilation include a magnetic quiet zone along the coast and two lineaments inland. The magnetic quiet zone correlates with the accretionary prism at the western edge of the North American plate and overlies subducted ocean crust; abrupt termination of ocean-floor magnetic anomalies at, or a short distance east of, the toe of the accretionary prism is an inferred effect of subduction-induced low-temperature metamorphism of the ocean crust. The Puget Lowlands-San Joaquin lineament is an alignment of high-intensity magnetic anomalies that in the south, and possibly also in the north, are caused by bodies of mafic-ultramafic rocks accreted to North America during the Mesozoic and Tertiary. The lineup of the highs and the inferred lineup of the causative bodies may reflect fundamental structures that control Mesozoic and Tertiary evolution of the continental margin. The Mojave Desert lineament, a distinctive chain of short-wavelength magnetic anomalies in southern California, coincides partly with a zone of Mesozoic intrusions and the Cenozoic San Andreas fault system, but is likely to be older than both in origin and may reflect a Mesozoic or older crustal discontinuity.
Impact of the unknown communication channel on automatic speech recognition: a review
This review article summarizes the main difficulties encountered in Automatic Speech Recognition (ASR) when the type of communication channel is not known. This problem is crucial for the development of successful applications in promising domains such as computer telephony and cars. The main technical problems encountered are due to the speaker and the task (e.g. speaking style, Lombard reflex, vocal tract geometry), the use of microphones with different characteristics, the variable quality of the support channels (e.g. telephone channels are noisy and have different characteristics), reverberation and echoes, the variable distance and direction to the microphone introduced by hands-free recognition, and the ambient noise which distorts the input speech signals. This overview characterizes and emphasizes these problems and highlights some promising directions for future research. Finally, it presents an attempt to characterize the sensitivity of a phoneme recognizer as a function of the source of channel distortion, using the TIMIT database and several of its variants (NTIMIT, CTIMIT, FFMTIMIT).
Gas Turbine Performance.
Industrial gas turbines show performance characteristics that distinctly depend on ambient and operating conditions. They are not only influenced by site elevation, ambient temperature, and relative humidity, but also by the speed of the driven equipment, the fuel, and the load conditions. Proper application of gas turbines requires consideration of these factors. This tutorial explains these characteristics based on the performance of the engine compressor, the combustor and the turbine section, and certain control strategies. It introduces fundamental concepts that help to understand the flow of energy between the components. Additionally, methods are introduced that allow the use of data for trending and comparison purposes. The impact of component degradation on individual component performance, as well as overall engine performance is discussed, together with strategies to reduce the impact of degradation.
Intimate partner violence in late life: a review of the empirical literature.
This integrated review of the empirical literature synthesizes a decade of scientific research across scholarly and professional publications addressing intimate partner violence (IPV) in late life. Deriving insights through a qualitative coding scheme and detailed analysis of 57 empirical sources, we discuss the theoretical frameworks, conceptual themes, and methodological approaches that cut across the literature. Based on these findings, we identify future research directions for improved understanding of late-life IPV as well as implications for policy development and refined community interventions.
Beating the adaptive bandit with high probability
We provide a principled way of proving Õ(√T) high-probability guarantees for partial-information (bandit) problems over arbitrary convex decision sets. First, we prove a regret guarantee for the full-information problem in terms of “local” norms, both for entropy and self-concordant barrier regularization, unifying these methods. Given one such algorithm as a black box, we can convert a bandit problem into a full-information problem using a sampling scheme. The main result states that a high-probability Õ(√T) bound holds whenever the black-box, the sampling scheme, and the estimates of missing information satisfy a number of conditions, which are relatively easy to check. At the heart of the method is a construction of linear upper bounds on confidence intervals. As applications of the main result, we provide the first known efficient algorithm for the sphere with an Õ(√T) high-probability bound. We also derive the result for the n-simplex, improving the O(√nT log(nT)) bound of Auer et al. [3] by replacing the log T term with log log T and closing the gap to the lower bound of Ω(√nT). While Õ(√T) high-probability bounds should hold for general decision sets through our main result, the construction of linear upper bounds depends on the particular geometry of the set; we believe that the sphere example already exhibits the necessary ingredients. The guarantees we obtain hold for adaptive adversaries (unlike the in-expectation results of [1]), and the algorithms are efficient, given that the linear upper bounds on confidence can be computed.
Water Quality Measurement and Control from Remote Station for Pisiculture Using NI myRIO
A wireless sensor network is designed and implemented to automatically and remotely monitor and control the water quality parameters, such as temperature, turbidity, pH, and dissolved oxygen, required for pisiculture. The measurements of the system, which is at the base station, are communicated to the remote farmer through the built-in Wi-Fi of NI myRIO and displayed on the user's computer screen, from where the farmer can monitor and change the settings of the system. An audio output is also produced from the computer's speakers. An SMS alert is sent to the farmer's mobile via a GSM modem for variations in the monitored parameters of the system at the base station. The programming of the entire system is done using LabVIEW software and NI myRIO, which has an inbuilt processor, FPGA, Wi-Fi, and web servicing capabilities. The web publishing tool of LabVIEW allows internet access to the front panel of the system to any remote user.
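The system itself is implemented in LabVIEW on NI myRIO; the Python sketch below only mirrors the threshold-check logic that decides when an alert should be raised. The safe ranges and the sample reading are hypothetical values chosen for illustration.

```python
# Illustrative threshold-alert logic for the monitored parameters. The real
# system is built in LabVIEW on NI myRIO; thresholds and the sample reading
# here are hypothetical.
SAFE_RANGES = {                     # assumed limits for pisiculture
    "temperature_C": (24.0, 30.0),
    "turbidity_NTU": (0.0, 50.0),
    "pH": (6.5, 8.5),
    "dissolved_oxygen_mg_per_L": (5.0, 12.0),
}

def check_reading(reading: dict) -> list[str]:
    """Return alert messages for every parameter outside its safe range."""
    alerts = []
    for name, value in reading.items():
        low, high = SAFE_RANGES[name]
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

if __name__ == "__main__":
    sample = {"temperature_C": 31.2, "turbidity_NTU": 12.0,
              "pH": 7.1, "dissolved_oxygen_mg_per_L": 4.2}
    for msg in check_reading(sample):
        print("ALERT (would be dispatched via GSM SMS):", msg)
```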
Randomized Kinodynamic Motion Planning with Moving Obstacles
This paper presents a novel randomized motion planner for robots that must achieve a specified goal under kinematic and/or dynamic motion constraints while avoiding collision with moving obstacles with known trajectories. The planner encodes the motion constraints on the robot with a control system and samples the robot’s state time space by picking control inputs at random and integrating its equations of motion. The result is a probabilistic roadmap of sampled state time points, called milestones, connected by short admissible trajectories. The planner does not precompute the roadmap; instead, for each planning query, it generates a new roadmap to connect an initial and a goal state time point. The paper presents a detailed analysis of the planner’s convergence rate. It shows that, if the state time space satisfies a geometric property called expansiveness, then a slightly idealized version of our implemented planner is guaranteed to find a trajectory when one exists, with probability quickly converging to 1, as the number of of milestones increases. Our planner was tested extensively not only in simulated environments, but also on a real robot. In the latter case, a vision module estimates obstacle motions just before planning starts. The planner is then allocated a small, fixed amount of time to compute a trajectory. If a change in the expected motion of the obstacles is detected while the robot executes the planned trajectory, the planner recomputes a trajectory on the fly. Experiments on the real robot led to several extensions of the planner in order to deal with time delays and uncertainties that are inherent to an integrated robotic system interacting with the physical world.
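The following sketch illustrates the milestone-expansion step described above, under strong simplifying assumptions: a one-dimensional double integrator stands in for the robot's equations of motion, and a trivial bound check stands in for the moving-obstacle collision test of the actual planner.

```python
# Sketch of one milestone-expansion step of a kinodynamic planner: pick an
# existing milestone, sample a control at random, integrate the equations of
# motion, and keep the new state-time point if the short trajectory is valid.
import random

def integrate(state, u, dt=0.1, steps=5):
    """Euler-integrate x'' = u; state = (x, v, t)."""
    x, v, t = state
    for _ in range(steps):
        x, v, t = x + v * dt, v + u * dt, t + dt
    return (x, v, t)

def collision_free(state):
    x, _, _ = state
    return -10.0 <= x <= 10.0          # placeholder for the obstacle test

milestones = [(0.0, 0.0, 0.0)]          # start state-time point
edges = []
random.seed(0)
for _ in range(200):
    parent = random.choice(milestones)  # in practice, sampling is biased
    u = random.uniform(-1.0, 1.0)       # random admissible control input
    child = integrate(parent, u)
    if collision_free(child):
        milestones.append(child)
        edges.append((parent, child, u))
print(len(milestones), "milestones in the roadmap")
```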
Efficient Spatio-Temporal Data Association Using Multidimensional Assignment in Multi-Camera Multi-Target Tracking
This paper proposes a novel multi-target tracking method which solves a data association problem using images from multiple cameras. With multiple cameras, two combinatorial problems must be solved at the same time: spatial data association between cameras and temporal data association between frames. The spatio-temporal data association problem is a well-known NP-hard problem even for a small number of cameras or frames (more than three). Current existing methods [1, 2] simplify the spatio-temporal data association problem through the assumption of a simple motion model (shortest path) and 3D location estimation. However, the complexity grows exponentially with the number of cameras. In this work, the spatio-temporal data association problem is formulated as a multidimensional assignment problem (MDA). To achieve a fast, efficient, and easily implementable algorithm, we solve the MDA problem iteratively by solving a sequence of bipartite matching problems using random splitting and merging operations. Hence, the proposed algorithm can be considered as a guided random search to find the global optimum through repeated random local searches (bipartite matchings). In addition, we design a new cost function considering 3D reconstruction accuracy, motion smoothness, visibility from cameras, starting/ending at entrance and exit zones, and false positives. Our approach reconstructs 3D trajectories that represent people’s movement as 3D cylinders whose locations are estimated considering all adjacent frames (see Figure 2). The observations during frames 1, ..., T and at cameras 1, ..., K form a KT-partite (hyper)graph G = (V, E) = (I11 ∪ ... ∪ IKT, E), where vertices V are partitioned into K×T different independent sets I11, ..., IKT and each hyperedge in E contains at least one vertex in each partite set. The trajectory hypothesis set T can be defined as the set of all hyperedges E. We represent each trajectory hypothesis Tn ∈ T as a matrix whose entry in the k-th row and t-th column corresponds to an observation index at the t-th frame of the k-th camera. The problem of finding a set of disjoint trajectory hypotheses with a minimum sum of costs can be formulated as the MDA problem, which is equivalent to the problem of minimizing the sum of costs of hyperedges containing one element per partite set in the hypergraph G. With binary decision variables xTn ∈ {0,1} deciding whether the trajectory Tn is in the association hypothesis H, and cost function c : T→R, the objective function and disjointness constraints are given by
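A standard form of this objective and its disjointness constraints, consistent with the notation above (the authors' exact expression is not reproduced in this abstract), is:

```latex
% Conventional multidimensional-assignment statement consistent with the
% notation above; not necessarily the authors' exact formulation.
\min_{x}\; \sum_{T_n \in \mathcal{T}} c(T_n)\, x_{T_n}
\quad \text{s.t.} \quad
\sum_{T_n \ni i} x_{T_n} = 1 \;\; \forall\, i \in I_{kt},\; k = 1,\dots,K,\; t = 1,\dots,T,
\qquad x_{T_n} \in \{0,1\}.
```

Each constraint forces every observation i in every partite set I_kt to be covered by exactly one selected trajectory hypothesis, which is the disjointness requirement described in the text.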
Very-low-profile, stand-alone, tri-band WLAN antenna design for laptop-tablet computer with complete metal-backed cover
A very-low-profile, tri-band, planar inverted-F antenna (PIFA) with a height of only 3.7 mm for wireless local area network (WLAN) applications in the 2.4 GHz (2400∼2484 MHz), 5.2 GHz (5150∼5350 MHz), and 5.8 GHz (5725∼5825 MHz) bands for a laptop-tablet computer with a metal back cover is presented. The antenna was made of a flat plate of size 18.7 mm × 32.5 mm and bent into a compact structure with the dimensions 3.7 mm × 7 mm × 32.5 mm. The PIFA comprised a radiating top patch, a feeding/shorting portion, and an antenna ground plane. The top patch was placed horizontally above the ground plane, with the feeding/shorting portion connecting perpendicularly therebetween. The antenna was designed so that the radiating patch consisted of three branches, each of which controlled the corresponding resonant mode in the 2.4/5.2/5.8 GHz bands.
Design technology research for the nineties: more of the same?
The author examines the question of the need for further CAD research. A number of observations on future electronic products as well as the design of future systems are provided. Research and development topics in design technology for system design are summarized, and research and development organization in future design technology is reported.
RFID-based supply chain partner authentication and key agreement
The growing use of RFID in supply chains brings along an indisputable added value from the business perspective, but raises a number of new interesting security challenges. One of them is the authentication of two participants of the supply chain that have possessed the same tagged item, but that have otherwise never communicated before. The situation is even more complex if we imagine that participants to the supply chain may be business competitors. We present a novel cryptographic scheme that solves this problem. In our solution, users exchange tags over the cycle of a supply chain and, if two entities have possessed the same tag, they agree on a secret common key they can use to protect their exchange of business sensitive information. No rogue user can be successful in a malicious authentication, because it would either be traceable or it would imply the loss of a secret key, which provides a strong incentive to keep the tag authentication information secret and protects the integrity of the supply chain. We provide game-based security proofs of our claims, without relying on the random oracle model.
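As a generic illustration of the underlying idea only (the paper's actual protocol, with its traceability and key-loss incentives, is more involved): two partners that have both read the same secret from a tag can derive a common key by keying a MAC with that secret over both party identifiers. The identifiers and the HMAC-SHA256 construction below are assumptions of this sketch.

```python
# Generic illustration (not the paper's scheme): two supply-chain partners
# that have each read the same secret stored on a tag derive a common key by
# MACing both party identifiers under that shared secret.
import hashlib, hmac

def derive_partner_key(tag_secret: bytes, party_a: bytes, party_b: bytes) -> bytes:
    """Both parties compute the same key if they hold the same tag secret."""
    info = b"supply-chain-auth|" + b"|".join(sorted([party_a, party_b]))
    return hmac.new(tag_secret, info, hashlib.sha256).digest()

tag_secret = b"secret-read-from-tag-123"        # placeholder tag secret
k_a = derive_partner_key(tag_secret, b"manufacturer", b"retailer")
k_b = derive_partner_key(tag_secret, b"retailer", b"manufacturer")
assert k_a == k_b                               # same tag secret -> same agreed key
print(k_a.hex())
```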
The neural basis of the central executive system of working memory
WORKING memory refers to a system for temporary storage and manipulation of information in the brain, a function critical for a wide range of cognitive operations. It has been proposed that working memory includes a central executive system (CES) to control attention and information flow to and from verbal and spatial short-term memory buffers [1]. Although the prefrontal cortex is activated during both verbal and spatial passive working memory tasks [2–8], the brain regions involved in the CES component of working memory have not been identified. We have used functional magnetic resonance imaging (fMRI) to examine brain activation during the concurrent performance of two tasks, which is expected to engage the CES. Activation of the prefrontal cortex was observed when both tasks are performed together, but not when they are performed separately. These results support the view that the prefrontal cortex is involved in human working memory.
The Measurement of Student Engagement : A Comparative Analysis of Various Methods and Student Self-report Instruments
One of the challenges with research on student engagement is the large variation in the measurement of this construct, which has made it challenging to compare findings across studies. This chapter contributes to our understanding of the measurement of student engagement in three ways. First, we describe strengths and limitations of different methods for assessing student engagement (i.e., self-report measures, experience sampling techniques, teacher ratings, interviews, and observations). Second, we compare and contrast 11 self-report survey measures of student engagement that have been used in prior research. Across these 11 measures, we describe what is measured (scale name and items), use of measure, samples, and the extent of reliability and validity information available on each measure. Finally, we outline limitations with current approaches to measurement and promising future directions. Researchers, educators, and policymakers are increasingly focused on student engagement as the key to addressing problems of low achievement, high levels of student boredom, alienation, and high dropout rates (Fredricks, Blumenfeld, & Paris, 2004). Students become more disengaged as they progress from elementary to middle school, with some estimates that 25–40% of youth are showing signs of disengagement (i.e., uninvolved, apathetic, not trying very hard, and not paying attention) (Steinberg, Brown, & Dornbush, 1996; Yazzie-Mintz, 2007). The consequences of disengagement for middle and high school youth from disadvantaged backgrounds are especially severe; these youth are less likely to graduate from high school and face limited employment prospects, increasing their risk for poverty, poorer health, and involvement in the criminal justice system (National Research Council and the Institute of Medicine, 2004). Although there is growing interest in student engagement, there has been considerable variation in how this construct has been conceptualized over time (Appleton, Christenson, & Furlong, 2008; Fredricks et al., 2004; Jimerson, Campos, & Grief, 2003). Scholars have used a broad range of terms including student engagement, school engagement, student engagement in school, academic engagement, engagement in class, and engagement in schoolwork. In addition, there has been variation in the number of subcomponents of engagement included in different conceptualizations. Some scholars have proposed a two-dimensional model of engagement which includes behavior (e.g., participation, effort, and positive conduct) and emotion (e.g., interest, belonging, value, and positive emotions) (Finn, 1989; Marks, 2000; Skinner, Kindermann, & Furrer, 2009b). More recently, others have outlined a three-component model of engagement that includes behavior, emotion, and a cognitive dimension (i.e., self-regulation, investment in learning, and strategy use) (e.g., Archaumbault, 2009; Fredricks et al., 2004; Jimerson et al., 2003; Wigfield et al., 2008).
Finally, Christenson and her colleagues (Appleton, Christenson, Kim, & Reschly, 2006; Reschly & Christenson, 2006) conceptualized engagement as having four dimensions: academic, behavioral, cognitive, and psychological (subsequently referred to as affective) engagement. In this model, aspects of behavior are separated into two different components: academics, which includes time on task, credits earned, and homework completion, and behavior, which includes attendance, class participation, and extracurricular participation. One commonality across the myriad of conceptualizations is that engagement is multidimensional. However, further theoretical and empirical work is needed to determine the extent to which these different dimensions are unique constructs and whether a three- or four-component model more accurately describes the construct of student engagement. Even when scholars have similar conceptualizations of engagement, there has been considerable variability in the content of items used in instruments. This has made it challenging to compare findings from different studies. This chapter expands on our understanding of the measurement of student engagement in three ways. First, the strengths and limitations of different methods for assessing student engagement are described. Second, 11 self-report survey measures of student engagement that have been used in prior research are compared and contrasted on several dimensions (i.e., what is measured, purposes and uses, samples, and psychometric properties). Finally, we discuss limitations with current approaches to measurement. What is Student Engagement? We define student engagement as a meta-construct that includes behavioral, emotional, and cognitive engagement (Fredricks et al., 2004). Although there are large individual bodies of literature on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what makes engagement unique is its potential as a multidimensional or “meta”-construct that includes these three dimensions. Behavioral engagement draws on the idea of participation and includes involvement in academic, social, or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out (Connell & Wellborn, 1991; Finn, 1989). Other scholars define behavioral engagement in terms of positive conduct, such as following the rules, adhering to classroom norms, and the absence of disruptive behavior such as skipping school or getting into trouble (Finn, Pannozzo, & Voelkl, 1995; Finn & Rock, 1997). Emotional engagement focuses on the extent of positive (and negative) reactions to teachers, classmates, academics, or school. Others conceptualize emotional engagement as identification with the school, which includes belonging, or a feeling of being important to the school, and valuing, or an appreciation of success in school-related outcomes (Finn, 1989; Voelkl, 1997). Positive emotional engagement is presumed to create student ties to the institution and influence their willingness to do the work (Connell & Wellborn, 1991; Finn, 1989). Finally, cognitive engagement is defined as a student's level of investment in learning. It includes being thoughtful, strategic, and willing to exert the necessary effort for comprehension of complex ideas or mastery of difficult skills (Corno & Mandinach, 1983; Fredricks et al., 2004; Meece, Blumenfeld, & Hoyle, 1988).
An important question is how engagement differs from motivation. Although the terms are used interchangeably by some, they are different and the distinctions between them are important. Motivation refers to the underlying reasons for a given behavior and can be conceptualized in terms of the direction, intensity, quality, and persistence of one's energies (Maehr & Meyer, 1997). A proliferation of motivational constructs (e.g., intrinsic motivation, goal theory, and expectancy-value models) have been developed to answer two broad questions: “Can I do this task?” and “Do I want to do this task and why?” (Eccles, Wigfield, & Schiefele, 1998). One commonality across these different motivational constructs is an emphasis on individual differences and underlying psychological processes. In contrast, engagement tends to be thought of in terms of action, or the behavioral, emotional, and cognitive manifestations of motivation (Skinner, Kindermann, Connell, & Wellborn, 2009a). An additional difference is that engagement reflects an individual's interaction with context (Fredricks et al., 2004; Russell, Ainsley, & Frydenberg, 2005). In other words, an individual is engaged in something (i.e., task, activity, and relationship), and their engagement cannot be separated from their environment. This means that engagement is malleable and is responsive to variations in the context that schools can target in interventions (Fredricks et al., 2004; Newmann, Wehlage, & Lamborn, 1992). The self-system model of motivational development (Connell, 1990; Connell & Wellborn, 1991; Deci & Ryan, 1985) provides one theoretical model for studying motivation and engagement. This model is based on the assumption that individuals have three fundamental motivational needs: autonomy, competence, and relatedness. If schools provide children with opportunities to meet these three needs, students will be more engaged. Students' need for relatedness is more likely to occur in classrooms where teachers and peers create a caring and supportive environment; their need for autonomy is met when they feel like they have a choice and when they are motivated by internal rather than external factors; and their need for competence is met when they experience the classroom as optimal in structure and feel like they can achieve desired ends (Fredricks et al., 2004). In contrast, if students experience schools as uncaring, coercive, and unfair, they will become disengaged or disaffected (Skinner et al., 2009a, 2009b). This model assumes that motivation is a necessary but not sufficient precursor to engagement (Appleton et al., 2008; Connell & Wellborn, 1991). Methods for Studying Engagement
Negotiating the role of the professional nurse: The pedagogy of simulation: a grounded theory study.
Simulation is the mainstay of laboratory education in the health sciences, yet there is a void of pedagogy, the art and science of teaching. Nursing faculty do not have adequate evidence-based resources related to how students learn through simulation. The research questions that were addressed were as follows: (a) How do students learn using simulation? (b) What is the process of learning with simulations from the students' perspective? (c) What faculty teaching styles promote learning? and (d) How can faculty support students during simulation? Grounded theory methodology was used to explore how senior baccalaureate nursing students learn using simulation. Twenty-six students participated in this research study. Sixteen nursing students who completed two semesters of simulation courses volunteered for in-depth audio-taped interviews. In addition, there were two focus groups with five senior students in each group who validated findings and identified faculty teaching styles and supportive interventions. Negotiating the Role of the Professional Nurse was the core category, which included the following phases: (I) feeling like an imposter, (II) trial and error, (III) taking it seriously, (IV) transference of skills and knowledge, and (V) professionalization. Faculty traits and teaching strategies for teaching with simulation were also identified. A conceptual model of the socialization process was developed to assist faculty in understanding the ways students learn with simulation and ways to facilitate their development. These findings provide a midrange theory for the pedagogy of simulation and will help faculty gain insight and assimilate these findings into teaching-learning strategies.
Indications for repair of full-thickness rotator cuff tears.
Rotator cuff repair surgery for full-thickness tears is common and accepted in orthopaedics today. Given that a significant number of people have asymptomatic rotator cuff tears, the indications for surgery are, however, somewhat unclear. Multiple factors such as duration of symptoms, acuity and size of the tear, patient age, and others require consideration and can influence the decision to perform surgery. This article reviews these variables and the indications for surgery to repair full-thickness rotator cuff tears.
Perianal abscess.
Security in Automotive Bus Systems
This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.
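One common building block behind such authentication and manipulation-prevention mechanisms is a truncated message authentication code carried with each frame; the sketch below illustrates the idea in Python. The shared key, the 4-byte truncation, and the counter handling are assumptions for illustration, and key distribution and counter synchronisation are out of scope.

```python
# Illustration of one building block for authenticated bus messages: a
# truncated HMAC over the payload plus a freshness counter. The key, counter
# handling, and truncation length are assumptions of this sketch.
import hashlib, hmac

def protect(key: bytes, counter: int, payload: bytes) -> bytes:
    msg = counter.to_bytes(4, "big") + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()[:4]   # truncated MAC
    return msg + tag

def verify(key: bytes, frame: bytes) -> bool:
    msg, tag = frame[:-4], frame[-4:]
    return hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()[:4])

key = bytes(16)                      # placeholder shared ECU key
frame = protect(key, counter=7, payload=b"\x01\x42")
print(verify(key, frame))            # True; flipping any bit makes it False
```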
Cancer genetic risk assessment and referral patterns in primary care.
PURPOSE This study was undertaken to describe cancer risk assessment practices among primary care providers (PCPs). METHODS An electronic survey was sent to PCPs affiliated with a single insurance carrier. Demographic and practice characteristics associated with cancer genetic risk assessment and testing activities were described. Latent class analysis supported by likelihood ratio tests was used to define PCP profiles with respect to the level of engagement in genetic risk assessment and referral activity based on demographic and practice characteristics. RESULTS 860 physicians responded to the survey (39% family practice, 29% internal medicine, 22% obstetrics/gynecology (OB/GYN), 10% other). Most respondents (83%) reported that they routinely assess hereditary cancer risk; however, only 33% reported that they take a full, three-generation pedigree for risk assessment. OB/GYN specialty, female gender, and physician access to a genetic counselor were independent predictors of referral to cancer genetics specialists. Three profiles of PCPs, based upon referral practice and extent of involvement in genetics evaluation, were defined. CONCLUSION Profiles of physician characteristics associated with varying levels of engagement with cancer genetic risk assessment and testing can be identified. These profiles may ultimately be useful in targeting decision support tools and services.
Examining technology acceptance by school teachers: a longitudinal study
The role of information technology (IT) in education has significantly increased, but resistance to technology by public school teachers worldwide remains high. This study examined public school teachers’ technology acceptance decision-making by using a research model that is based on key findings from relevant prior research and important characteristics of the targeted user acceptance phenomenon. The model was longitudinally tested using responses from more than 130 teachers attending an intensive 4-week training program on Microsoft PowerPoint, a common but important classroom presentation technology. In addition to identifying key acceptance determinants, we examined plausible changes in acceptance drivers over the course of the training, including their influence patterns and magnitudes. Overall, our model showed a reasonably good fit with the data and exhibited satisfactory explanatory power, based on the responses collected from training commencement and completion. Our findings suggest a highly prominent and significant core influence path from job relevance to perceived usefulness and then technology acceptance. Analysis of data collected at the beginning and the end of the training supports most of our hypotheses and sheds light on plausible changes in their influences over time. Specifically, teachers appear to consider a rich set of factors in initial acceptance but concentrate on fundamental determinants (e.g. perceived usefulness and perceived ease of use) in their continued acceptance.
Weak supervision for detecting object classes from activities
Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labelled videos, hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labelled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. We show that object models trained in this fashion achieve between 86% and 92% of the performance of their fully supervised counterparts on three challenging RGB and RGB-D datasets.
Measurement of the Effect of Physical Exercise on the Concentration of Individuals with ADHD
Attention Deficit Hyperactivity Disorder (ADHD) mainly affects the academic performance of children and adolescents. In addition to bringing physical and mental health benefits, physical activity has been used to prevent and improve ADHD comorbidities; however, its effectiveness has not been quantified. In this study, the effect of physical activity on children's attention was measured using a computer game. Intense physical activity was promoted by a relay race, which requires a 5-min run without a rest interval. The proposed physical stimulus was performed with 28 volunteers: 14 with ADHD (GE-EF) and 14 without ADHD symptoms (GC-EF). After 5 min of rest, these volunteers accessed the computer game to accomplish the tasks in the shortest time possible. The computer game was also accessed by another 28 volunteers: 14 with ADHD (GE) and 14 without these symptoms (GC). The response time to solve the tasks that require attention was recorded. The results of the four groups were analyzed using D'Agostino statistical tests of normality, Kruskal-Wallis analyses of variance and post-hoc Dunn tests. The group of volunteers with ADHD who performed exercise (GE-EF) showed improved performance for the tasks that require attention, with a difference of 30.52% compared with the volunteers with ADHD who did not perform the exercise (GE). The GE-EF group showed similar performance (2.5% difference) to the volunteers in the GC group, who have no ADHD symptoms and did not exercise. This study shows that intense exercise can improve the attention of children with ADHD and may help their school performance.
Searching an Encrypted Cloud Meets Blockchain: A Decentralized, Reliable and Fair Realization
Enabling search directly over encrypted data is a desirable technique to allow users to effectively utilize encrypted data outsourced to a remote server like cloud service provider. So far, most existing solutions focus on an honest-but-curious server, while security designs against a malicious server have not drawn enough attention. It is not until recently that a few works address the issue of verifiable designs that enable the data owner to verify the integrity of search results. Unfortunately, these verification mechanisms are highly dependent on the specific encrypted search index structures, and fail to support complex queries. There is a lack of a general verification mechanism that can be applied to all search schemes. Moreover, no effective countermeasures (e.g., punishing the cheater) are available when an unfaithful server is detected. In this work, we explore the potential of smart contract in Ethereum, an emerging blockchain-based decentralized technology that provides a new paradigm for trusted and transparent computing. By replacing the central server with a carefully-designed smart contract, we construct a decentralized privacy-preserving search scheme where the data owner can receive correct search results with assurance and without worrying about potential wrongdoings of a malicious server. To better support practical applications, we introduce fairness to our scheme by designing a new smart contract for a financially-fair search construction, in which every participant (especially in the multiuser setting) is treated equally and incentivized to conform to correct computations. In this way, an honest party can always gain what he deserves while a malicious one gets nothing. Finally, we implement a prototype of our construction and deploy it to a locally simulated network and an official Ethereum test network, respectively. The extensive experiments and evaluations demonstrate the practicability of our decentralized search scheme over encrypted data.
A Prototype Navigation System for Guiding Blind People Indoors using NXT Mindstorms
People with visual impairment face enormous difficulties in terms of their mobility, as they do not have enough information about their location and orientation with respect to traffic and obstacles on their route. Visually impaired people can navigate unknown areas by relying on the assistance of canes, other people, or specially trained guide dogs. The traditional aids of guide dogs and the long cane only help users avoid obstacles; they do not tell users where they are. The research presented in this paper introduces a mobile assistant navigation prototype to locate and direct blind people indoors. Since most of the existing navigation systems developed so far for blind people employ a complex conjunction of positioning systems, video cameras, and location-based and image processing algorithms, we designed an affordable, low-cost prototype navigation system for orienting and tracking the position of blind people in complex environments. The prototype system is based on inertial navigation, and experiments have been performed on the NXT Mindstorms platform.
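For illustration, a minimal two-dimensional dead-reckoning update of the kind an inertial navigation approach builds on is sketched below; real IMU processing (bias correction, drift compensation, sensor fusion) is omitted and the sample readings are invented.

```python
# Minimal 2D dead reckoning: integrate speed along the current heading to
# track position. Sample headings/speeds are made-up illustration values.
import math

def dead_reckon(x, y, heading_deg, speed_m_s, dt_s):
    """Advance (x, y) by one time step given heading and speed."""
    heading = math.radians(heading_deg)
    return (x + speed_m_s * dt_s * math.cos(heading),
            y + speed_m_s * dt_s * math.sin(heading))

x, y = 0.0, 0.0
for heading_deg, speed, dt in [(0, 0.5, 1.0), (0, 0.5, 1.0), (90, 0.5, 2.0)]:
    x, y = dead_reckon(x, y, heading_deg, speed, dt)
print(f"estimated position: ({x:.2f} m, {y:.2f} m)")   # (1.00 m, 1.00 m)
```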
Characterization, Modeling, and Application of 10-kV SiC MOSFET
Ten-kilovolt SiC MOSFETs are currently under development by a number of organizations in the United States, with the aim of enabling their applications in high-voltage high-frequency power conversions. The aim of this paper is to obtain the key device characteristics of SiC MOSFETs so that their realistic application prospect can be provided. In particular, the emphasis is on obtaining their losses in various operation conditions from the extensive characterization study and a proposed behavioral SPICE model. Using the validated MOSFET SPICE model, a 20-kHz 370-W dc/dc boost converter based on a 10-kV 4H-SiC DMOSFET and diodes is designed and experimentally demonstrated. In the steady state of the boost converter, the total power loss in the 15.45-mm2 SiC MOSFET is 23.6 W for the input power of 428 W. The characterization study of the experimental SiC MOSFET and the experiment of the SiC MOSFET-based boost converter indicate that the turn-on losses of SiC MOSFETs are the dominant factors in determining their maximum operation frequency in hard-switched circuits with conventional thermal management. Replacing a 10-kV SiC PiN diode with a 10-kV SiC JBS diode as a boost diode and using a small external gate resistor, the turn-on loss of the SiC MOSFET can be reduced, and the 10-kV 5-A SiC MOSFET-based boost converter is predicted to be capable of a 20-kHz operation with a 5-kV dc output voltage and a 1.25-kW output power by the PSpice simulation with the MOSFET model. The low losses and fast switching speed of 10-kV SiC MOSFETs shown in the characterization study and the preliminary demonstration of the boost converter make them attractive in high-frequency high-voltage power-conversion applications.
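A back-of-envelope check of the reported steady-state figures, using only the numbers quoted above (other converter losses are ignored, so this only bounds the MOSFET's share):

```python
# Quick arithmetic on the reported boost-converter operating point: 23.6 W of
# MOSFET loss against 428 W of input power. Diode, magnetic, and control
# losses are ignored, so this bounds only the device's contribution.
input_power_w = 428.0
mosfet_loss_w = 23.6
loss_fraction = mosfet_loss_w / input_power_w
print(f"MOSFET loss share of input power: {loss_fraction:.1%}")        # ~5.5%
print(f"Efficiency ceiling from this loss alone: {1 - loss_fraction:.1%}")
```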
Experimental Studies of Symbolic Shortest-Path Algorithms
Graphs can be represented symbolically by the Ordered Binary Decision Diagram (OBDD) of their characteristic function. To solve problems in such implicitly given graphs, specialized symbolic algorithms are needed which are restricted to the use of functional operations offered by the OBDD data structure. In this paper, two symbolic algorithms for the single-source shortest-path problem with nonnegative integral edge weights are presented, which represent symbolic versions of Dijkstra's algorithm and the Bellman-Ford algorithm. They execute O(N · log(NB)) and O(NM · log(NB)) OBDD operations, respectively, to obtain the shortest paths in a graph with N nodes, M edges, and maximum edge weight B. Despite the larger worst-case bound, the symbolic Bellman-Ford approach is expected to behave much better on structured graphs because it is able to handle updates of node distances effectively in parallel. Hence, both algorithms have been studied in experiments on random, grid, and threshold graphs with different weight functions. These studies support the assumption that the Dijkstra approach is efficient with respect to space usage, while the Bellman-Ford approach is dominant with respect to runtime.
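For reference, the explicit (non-symbolic) Bellman-Ford relaxation is sketched below; the symbolic variant studied in the paper performs the same distance updates, but for many nodes at once through OBDD operations rather than this per-edge loop.

```python
# Explicit Bellman-Ford for reference: repeatedly relax every edge until the
# distance labels settle (at most N-1 rounds for N nodes).
def bellman_ford(n_nodes, edges, source):
    """edges: list of (u, v, weight) with nonnegative integral weights."""
    INF = float("inf")
    dist = [INF] * n_nodes
    dist[source] = 0
    for _ in range(n_nodes - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                  # early exit once distances are stable
            break
    return dist

edges = [(0, 1, 2), (1, 2, 3), (0, 2, 7), (2, 3, 1)]
print(bellman_ford(4, edges, source=0))  # [0, 2, 5, 6]
```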
Suicide and suicidal behavior.
Suicidal behavior is a leading cause of injury and death worldwide. Information about the epidemiology of such behavior is important for policy-making and prevention. The authors reviewed government data on suicide and suicidal behavior and conducted a systematic review of studies on the epidemiology of suicide published from 1997 to 2007. The authors' aims were to examine the prevalence of, trends in, and risk and protective factors for suicidal behavior in the United States and cross-nationally. The data revealed significant cross-national variability in the prevalence of suicidal behavior but consistency in age of onset, transition probabilities, and key risk factors. Suicide is more prevalent among men, whereas nonfatal suicidal behaviors are more prevalent among women and persons who are young, are unmarried, or have a psychiatric disorder. Despite an increase in the treatment of suicidal persons over the past decade, incidence rates of suicidal behavior have remained largely unchanged. Most epidemiologic research on suicidal behavior has focused on patterns and correlates of prevalence. The next generation of studies must examine synergistic effects among modifiable risk and protective factors. New studies must incorporate recent advances in survey methods and clinical assessment. Results should be used in ongoing efforts to decrease the significant loss of life caused by suicidal behavior.
Crowdturfers, Campaigns, and Social Media: Tracking and Revealing Crowdsourced Manipulation of Social Media
Crowdturfing has recently been identified as a sinister counterpart to the enormous positive opportunities of crowdsourcing. Crowdturfers leverage human-powered crowdsourcing platforms to spread malicious URLs in social media, form “astroturf” campaigns, and manipulate search engines, ultimately degrading the quality of online information and threatening the usefulness of these systems. In this paper we present a framework for “pulling back the curtain” on crowdturfers to reveal their underlying ecosystem. Concretely, we analyze the types of malicious tasks and the properties of requesters and workers in crowdsourcing sites such as Microworkers.com, ShortTask.com and Rapidworkers.com, and link these tasks (and their associated workers) on crowdsourcing sites to social media, by monitoring the activities of social media participants. Based on this linkage, we identify the relationship structure connecting these workers in social media, which can reveal the implicit power structure of crowdturfers identified on crowdsourcing sites. We identify three classes of crowdturfers – professional workers, casual workers, and middlemen – and we develop statistical user models to automatically differentiate these workers and regular social media users.
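The abstract does not specify the statistical user models; as a purely illustrative sketch, a multiclass logistic regression over profile and activity features could separate the four account types. The feature set, the classifier choice, and the synthetic data below are all assumptions.

```python
# Generic multiclass user model in the spirit of the study: classify accounts
# as professional workers, casual workers, middlemen, or regular users from
# profile/activity features. Features, model, and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([                        # hypothetical features
    rng.poisson(30, n),                      # tweets per week
    rng.uniform(0, 1, n),                    # fraction of tweets with URLs
    rng.poisson(200, n),                     # follower count
    rng.uniform(0, 1, n),                    # scaled follower/following ratio
])
y = rng.integers(0, 4, n)                    # 0-3: the four account classes (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~chance on random labels
```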
Hand rehabilitation after stroke using a wearable, high DOF, spring powered exoskeleton
Stroke patients often have inappropriate finger flexor activation and finger extensor weakness, which makes it difficult to open their affected hand for functional grasp. The goal was to develop a passive, lightweight, wearable device to enable improved hand function during performance of activities of daily living. The device, HandSOME II, assists with opening the patient's hand using 11 elastic actuators that apply extension torques to finger and thumb joints. Device design and initial testing are described. A novel mechanical design applies forces orthogonal to the finger segments despite the fact that not all of the device DOFs are aligned with the human joint DOFs. In initial testing with seven stroke subjects with impaired hand function, use of HandSOME II significantly increased maximum extension angles and range of motion in all of the index finger joints (P<0.05). HandSOME II allows performance of all the grip patterns used in daily activities and can be used as part of home-based therapy programs.
A Comparison of the Effects of Stabilization Exercises Plus Manual Therapy to Those of Stabilization Exercises Alone in Patients With Nonspecific Mechanical Neck Pain: A Randomized Clinical Trial.
STUDY DESIGN Randomized clinical trial. BACKGROUND Little is known about the efficacy of providing manual therapy in addition to cervical and scapulothoracic stabilization exercises in people with mechanical neck pain (MNP). OBJECTIVES To compare the effects of stabilization exercises plus manual therapy to those of stabilization exercises alone on disability, pain, range of motion (ROM), and quality of life in patients with MNP. METHODS One hundred two patients with MNP (18-65 years of age) were recruited and randomly allocated into 2 groups: stabilization exercise without (n = 51) and with (n = 51) manual therapy. The program was carried out 3 days per week for 4 weeks. The Neck Disability Index, visual analog pain scale, digital algometry of pressure pain threshold, goniometric measurements, and Medical Outcomes Study 36-Item Short-Form Health Survey were used to assess participants at baseline and after 4 weeks. RESULTS Improvements in Neck Disability Index score, night pain, rotation ROM, and the Medical Outcomes Study 36-Item Short-Form Health Survey score were greater in the group that received stabilization exercise with manual therapy compared to the group that only received stabilization exercise. Between-group differences (95% confidence interval) were 2.2 (0.1, 4.3) points for the Neck Disability Index, 1.1 (0.0, 2.3) cm for pain at night measured on the visual analog scale, -4.3° (-8.1°, -0.5°) and -5.0° (-8.2°, -1.7°) for right and left rotation ROM, respectively, and -2.9 (-5.4, -0.4) points and -3.1 (-6.2, 0.0) points for the Medical Outcomes Study 36-Item Short-Form Health Survey physical and mental components, respectively. Changes in resting and activity pain, pressure pain threshold, and cervical extension or lateral flexion ROM did not differ significantly between the groups. Pressure pain threshold increased only in those who received stabilization exercise with manual therapy (P<.05). CONCLUSION The results of this study suggest that stabilization exercises with manual therapy may be superior to stabilization exercises alone for improving disability, pain intensity at night, cervical rotation motion, and quality of life in patients with MNP. LEVEL OF EVIDENCE Therapy, level 1b.
Effect of DHA supplementation in a very low-calorie ketogenic diet in the treatment of obesity: a randomized clinical trial
A commercially available VLCK diet supplemented with DHA was tested against an isocaloric VLCK diet without DHA. The main purpose of this study was to compare the effect of DHA supplementation on classic cardiovascular risk factors, adipokine levels, and inflammation-resolving eicosanoids. A total of 29 obese patients were randomized into two groups: a group supplemented with DHA (n = 14) (PnK-DHA group) versus a group with an isocaloric diet free of supplementation (n = 15) (control group). The follow-up period was 6 months. The average weight loss after 6 months of treatment was 20.36 ± 5.02 kg in the control group and 19.74 ± 5.10 kg in the PnK-DHA group, with no statistically significant difference between the groups. The VLCK diets induced a significant change in some of the biological parameters, such as insulin, HOMA-IR, triglycerides, LDL cholesterol, C-reactive protein, resistin, TNF alpha, and leptin. Following DHA supplementation, the DHA-derived oxylipins were significantly increased in the intervention group. The ratio of proresolution/proinflammatory lipid markers was increased in plasma of the intervention group over the entire study. Similarly, the mean ratios of AA/EPA and AA/DHA in erythrocyte membranes were dramatically reduced in the PnK-DHA group, and the anti-inflammatory fatty acid index (AIFAI) was consistently increased after the DHA treatment (p < 0.05). The present study demonstrated that a very low-calorie ketogenic diet supplemented with DHA was significantly superior in its anti-inflammatory effect, without statistically significant differences in weight loss or metabolic improvement.
Mining questions asked by web developers
Modern web applications consist of a significant amount of client-side code, written in JavaScript, HTML, and CSS. In this paper, we present a study of common challenges and misconceptions among web developers, by mining related questions asked on Stack Overflow. We use unsupervised learning to categorize the mined questions and define a ranking algorithm to rank all the Stack Overflow questions based on their importance. We analyze the top 50 questions qualitatively. The results indicate that (1) the overall share of web development related discussions is increasing among developers, (2) browser related discussions are prevalent; however, this share is decreasing with time, (3) form validation and other DOM related discussions have been discussed consistently over time, (4) web related discussions are becoming more prevalent in mobile development, and (5) developers face implementation issues with new HTML5 features such as Canvas. We examine the implications of the results on the development, research, and standardization communities.
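As a rough illustration of the unsupervised categorization step described above (not the authors' actual pipeline), the following Python sketch clusters a handful of made-up question titles using TF-IDF features and k-means; the question strings and the number of clusters are invented for the example.

```python
# Illustrative sketch: cluster Stack Overflow question titles with TF-IDF + k-means,
# a common unsupervised categorization step (the paper's exact method may differ).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "How do I validate a form with JavaScript?",
    "Canvas drawImage not rendering in Chrome",
    "CSS flexbox centering not working in IE",
    "How to parse JSON in JavaScript?",
]  # in practice: thousands of mined question titles

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(questions)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, q in zip(km.labels_, questions):
    print(label, q)
```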
MPI Motion Simulator: Development and Analysis of a Novel Motion Simulator
This paper discusses the technical issues involved in adapting a KUKA Robocoaster for use as a real-time motion simulator. Within this context, the paper addresses the physical modifications and the software control structure that were needed to obtain a flexible and safe experimental setup. It also addresses the delays and transfer function of the system. The paper is divided into two sections. The first section describes the control and safety structures of the MPI Motion Simulator. The second section shows measurements of latencies and frequency responses of the motion simulator. The results show that the frequency responses of the MPI Motion Simulator compare favorably with those of high-end Stewart platforms, and therefore demonstrate the suitability of robot-based motion simulators for flight simulation.
The enzyme as drug: application of enzymes as pharmaceuticals.
Enzymes as drugs have two important features that distinguish them from all other types of drugs. First, enzymes often bind and act on their targets with great affinity and specificity. Second, enzymes are catalytic and convert multiple target molecules to the desired products. These two features make enzymes specific and potent drugs that can accomplish therapeutic biochemistry in the body that small molecules cannot. These characteristics have resulted in the development of many enzyme drugs for a wide range of disorders.
The association of family social support, depression, anxiety and self-efficacy with specific hypertension self-care behaviours in Chinese local community
This study aimed to test the role of family social support, depression, anxiety and self-efficacy on specific self-care behaviours. In a local community health center, 318 patients with hypertension completed a questionnaire assessing self-care, family social support, depression, anxiety and self-efficacy in 2012. Each self-care behaviour was separately analyzed with logistic regression models. The mean score of perceived family social support for hypertension treatment was 20.91 (maximum=60). Adult children were identified as the primary support source. Approximately 22.3% and 15.4% of participants reported symptoms of anxiety and depression, respectively. Participants had moderately positive levels of confidence performing self-care (42.1±13.3 out of 60). After adjusting for demographic and health variables, a 10-unit increase in family social support increased the odds ratio (OR) of taking medication by 1.39 (95% confidence interval (CI) 1.03–1.87) and increased the OR for measuring blood pressure (BP) regularly by 1.33 (95% CI 1.02–1.74). Depression and anxiety were not associated with any self-care behaviours. A 10-unit increase in self-efficacy increased the adjusted OR for performing physical exercise to 1.25 (95% CI 1.04–1.49). In conclusion, family social support was positively associated with medication adherence and regular BP measurement. Strategies to improve family social support should be developed for hypertension control, yet further prospective studies are needed to understand the effects of family social support, depression, anxiety and self-efficacy on self-care behaviours.
Dispersion relations at finite temperature and density for nucleons and pions
We calculate the nucleonic and pionic dispersion relations at finite temperature T and non-vanishing chemical potentials $(\mu_f)$ in the context of an effective chiral theory that describes the strong and electromagnetic interactions for nucleons and pions. The dispersion relations are calculated in the broken chiral symmetry phase, where the nucleons are massive and pions are taken as massless. The calculation is performed at lowest order in the energy expansion, working in the framework of the real time formalism of thermal field theory in the Feynman gauge. These one-loop dispersion relations are obtained at leading order with respect to T and $\mu_f$. We also evaluate the effective masses of the quasi-nucleon and quasi-pion excitations in thermal and chemical conditions as the ones of a neutron star.
Designing good algorithms for MapReduce and beyond
As MapReduce/Hadoop grows in importance, we find more exotic applications being written this way. Not every program written for this platform performs as well as we might wish. There are several reasons why a MapReduce program can underperform expectations. One is the need to balance the communication cost of transporting data from the mappers to the reducers against the computation done at the mappers and reducers themselves. A second important issue is selecting the number of rounds of MapReduce. A third issue is that of skew. If wall-clock time is important, then using many different reduce-keys and many compute nodes may minimize the time to finish the job. Yet if the data is uncooperative, and no provision is made to distribute the data evenly, much of the work is done by a single node.
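To make the skew issue above concrete, here is a small plain-Python simulation (not tied to any particular Hadoop API) of the common "key salting" workaround: a hot key is spread over several reduce tasks in a first round, and the partial sums are combined in a second round. The key names and salt count are invented for illustration.

```python
# Minimal illustration of reduce-side skew and a two-round "key salting" fix,
# simulated in plain Python rather than an actual MapReduce framework.
import random
from collections import defaultdict

records = [("hot", 1)] * 10000 + [("rare", 1)] * 10   # one key dominates
NUM_SALTS = 8

# Round 1: append a random salt to each key, so the hot key's values are
# spread over several reduce tasks instead of landing on a single node.
partial = defaultdict(int)
for key, value in records:
    salted = (key, random.randrange(NUM_SALTS))
    partial[salted] += value          # per-"reducer" partial sum

# Round 2: strip the salt and combine the partial sums.
totals = defaultdict(int)
for (key, _salt), value in partial.items():
    totals[key] += value

print(dict(totals))   # {'hot': 10000, 'rare': 10}
```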
Document Structure
We argue the case for abstract document structure as a separate descriptive level in the analysis and generation of written texts. The purpose of this representation is to mediate between the message of a text (i.e., its discourse structure) and its physical presentation (i.e., its organization into graphical constituents like sections, paragraphs, sentences, bulleted lists, figures, and footnotes). Abstract document structure can be seen as an extension of Nunberg's text-grammar; it is also closely related to logical markup in languages like HTML and LaTeX. We show that by using this intermediate representation, several subtasks in language generation and language understanding can be defined more cleanly.
Privacy Risk Assessment on Email Tracking
Today's online marketing industry has widely employed email tracking techniques, such as embedding a tiny tracking pixel, to track email opens of potential customers and measure marketing effectiveness. However, email tracking could allow miscreants to collect metadata information associated with email reading without user awareness and then leverage the information for stealthy surveillance, which has raised serious privacy concerns. In this paper, we present an in-depth and comprehensive study on the privacy implications of email tracking. First, we develop an email tracking system and perform real-world tracking on hundreds of solicited crowdsourcing participants. We estimate the amount of privacy-sensitive information available from email reading, assess privacy risks of information leakage, and demonstrate how easy it is to launch a long-term targeted surveillance attack in real scenarios by simply sending an email with tracking capability. Second, we investigate the prevalence of email tracking through a large-scale measurement, which includes more than 44,000 email samples obtained over a period of seven years. Third, we conduct a user study to understand users' perception of privacy infringement caused by email tracking. Finally, we evaluate existing countermeasures against email tracking and propose guidelines for developing more comprehensive and fine-grained prevention solutions.
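For readers unfamiliar with the mechanism being studied, the following is a minimal, hypothetical sketch of a tracking-pixel endpoint (it is not the authors' system): a small web service returns a 1x1 GIF and logs the request metadata that becomes observable when the email is opened. The route, token scheme, and tracker.example URL are assumptions made only for the example.

```python
# Hypothetical sketch of the tracking-pixel mechanism studied in the paper:
# an endpoint that serves a 1x1 GIF and logs metadata (time, IP, User-Agent).
import datetime
from io import BytesIO

from flask import Flask, Response, request
from PIL import Image

app = Flask(__name__)

# Build a 1x1 GIF once at startup.
_buf = BytesIO()
Image.new("RGB", (1, 1)).save(_buf, format="GIF")
PIXEL = _buf.getvalue()

@app.route("/pixel/<token>.gif")
def pixel(token):
    # The token identifies which sent email was opened.
    print(datetime.datetime.utcnow(), token,
          request.remote_addr, request.headers.get("User-Agent"))
    return Response(PIXEL, mimetype="image/gif")

# The email body would embed something like:
#   <img src="https://tracker.example/pixel/<token>.gif">
```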
Multiple sclerosis that is progressive from the time of onset: clinical characteristics and progression of disability.
OBJECTIVE To use the new consensus definitions of primary progressive multiple sclerosis (PPMS) and progressive relapsing multiple sclerosis (PRMS) to report the demographic, clinical, and natural history characteristics of multiple sclerosis (MS) that is progressive from the time of onset. DESIGN Retrospective study by database/chart review and telephone interview. SETTING Multiple sclerosis clinic at a university teaching hospital. PATIENTS Eighty-three patients (prevalence, 6.9%) with PPMS and 12 patients (prevalence, 1.0%) with PRMS were studied. RESULTS Fifty-nine percent of the patients with PPMS (n=49) and 67% of the patients with PRMS (n=8) were women. Mean +/- SD ages at the time of onset were 41.2 +/- 10.5 and 38.0 +/- 7.3 years, respectively; mean disease duration was 14.2 +/- 8.8 and 12.2 +/- 6.5 years, respectively. The initial symptoms involved leg weakness in 94% of the patients with PPMS (n = 78) and 100% of the patients with PRMS (n= 12). For the PPMS cohort, a syndrome consistent with isolated myelopathy was found in 36% of patients (n = 30) and arm weakness without leg weakness did not occur. Mean +/- SEM time of progression to a score of 6.0 on the Expanded Disability Status Scale was 10.2 +/- 1.0 years for patients with PPMS and 10.9 +/- 2.6 years for patients with PRMS. CONCLUSIONS The clinical characteristics and disability progression of these MS subtypes were indistinguishable, with the exception of 1 or 2 relapses in patients with PRMS that occurred 8 months to 9 years after the onset of symptoms. We see little reason to consider PPMS and PRMS separate clinical entities; however, whether they can be better distinguished by radiological, histopathological, or immunological markers of disease activity remains unknown.
Points-to analysis for JavaScript
JavaScript is widely used by web developers, and the complexity of JavaScript programs has increased considerably in recent years. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript determines the set of objects to which a reference variable or an object property may point, and serves as a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed. JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.
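As background, here is a deliberately simplified, flow-insensitive points-to analysis over a toy statement language (illustration only; the paper targets full JavaScript, including dynamic property addition and method updates). The statement encoding and variable names are invented for the example.

```python
# Toy flow-insensitive points-to analysis: iterate simple transfer rules for
# allocations, copies, field stores, and field loads until a fixpoint.
from collections import defaultdict

def points_to(stmts):
    pts = defaultdict(set)    # variable -> abstract objects it may point to
    heap = defaultdict(set)   # (abstract object, field) -> abstract objects
    changed = True
    while changed:            # iterate the transfer rules to a fixpoint
        changed = False

        def add(target, objs):
            nonlocal changed
            before = len(target)
            target |= objs
            changed = changed or len(target) != before

        for op, *args in stmts:
            if op == "new":        # x = new Obj()   args = (x, object_label)
                add(pts[args[0]], {args[1]})
            elif op == "copy":     # x = y           args = (x, y)
                add(pts[args[0]], pts[args[1]])
            elif op == "store":    # x.f = y         args = (x, f, y)
                for o in list(pts[args[0]]):
                    add(heap[(o, args[1])], pts[args[2]])
            elif op == "load":     # x = y.f         args = (x, y, f)
                for o in list(pts[args[1]]):
                    add(pts[args[0]], heap[(o, args[2])])
    return pts

stmts = [("new", "a", "o1"), ("copy", "b", "a"), ("new", "c", "o2"),
         ("store", "b", "f", "c"), ("load", "d", "a", "f")]
print(points_to(stmts)["d"])   # {'o2'}: a and b alias o1, so d = a.f sees the stored o2
```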
Automatic Generation of Kinematic Models for the Conversion of Human Motion Capture Data into Humanoid Robot Motion
Human motion capture is a promising technique for the generation of humanoid robot motions. To convert human motion into humanoid robot motion, we need to relate the humanoid robot kinematics to the kinematics of a human performer. In this paper we propose an automatic approach for scaling of humanoid robot kinematic parameters to the kinematic parameters of a human performer. The kinematic model is constructed directly from the motion capture data without any manual measurements. We discuss the use of the resulting kinematic model for the generation of humanoid robot motions based on the observed human motions. The results of the proposed technique on real human motion capture data are presented.
Deep Convolution Neural Networks for Twitter Sentiment Analysis
Twitter sentiment analysis technology provides methods to survey public emotion about events or products. Most current research focuses on obtaining sentiment features by analyzing lexical and syntactic features, which are expressed explicitly through sentiment words, emoticons, exclamation marks, and so on. In this paper, we introduce a word embedding method obtained by unsupervised learning on large Twitter corpora; this method exploits latent contextual semantic relationships and co-occurrence statistics between words in tweets. These word embeddings are combined with n-gram features and word sentiment polarity score features to form a sentiment feature set for tweets. The feature set is fed into a deep convolutional neural network for training and predicting sentiment classification labels. We experimentally compare the performance of our model with a baseline word n-grams model on five Twitter data sets; the results indicate that our model performs better on accuracy and F1-measure for Twitter sentiment classification.
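A minimal PyTorch sketch of the general architecture described above (not the authors' exact network): embedded tokens pass through a 1-D convolution with max pooling over time, and hand-crafted features such as n-gram and polarity scores are concatenated before the classifier. The vocabulary size, dimensions, and feature counts are placeholders.

```python
# Schematic CNN for tweet sentiment: word embeddings + 1-D convolution,
# with auxiliary hand-crafted features appended before the final layer.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64,
                 kernel_size=3, n_aux_features=10, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # would be initialized from pretrained vectors
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        self.fc = nn.Linear(n_filters + n_aux_features, n_classes)

    def forward(self, token_ids, aux_features):
        x = self.embed(token_ids)            # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                # (batch, emb_dim, seq_len) for Conv1d
        x = torch.relu(self.conv(x))         # (batch, n_filters, seq_len)
        x = x.max(dim=2).values              # global max pooling over time
        x = torch.cat([x, aux_features], 1)  # append n-gram / polarity features
        return self.fc(x)

model = TweetCNN()
logits = model(torch.randint(0, 5000, (8, 30)), torch.randn(8, 10))
print(logits.shape)   # torch.Size([8, 2])
```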
Algorithmic approaches to protein-protein interaction site prediction
Interaction sites on protein surfaces mediate virtually all biological activities, and their identification holds promise for disease treatment and drug design. Novel algorithmic approaches for the prediction of these sites have been produced at a rapid rate, and the field has seen significant advancement over the past decade. However, the most current methods have not yet been reviewed in a systematic and comprehensive fashion. Herein, we describe the intricacies of the biological theory, datasets, and features required for modern protein-protein interaction site (PPIS) prediction, and present an integrative analysis of the state-of-the-art algorithms and their performance. First, the major sources of data used by predictors are reviewed, including training sets, evaluation sets, and methods for their procurement. Then, the features employed and their importance in the biological characterization of PPISs are explored. This is followed by a discussion of the methodologies adopted in contemporary prediction programs, as well as their relative performance on the datasets most recently used for evaluation. In addition, the potential utility that PPIS identification holds for rational drug design, hotspot prediction, and computational molecular docking is described. Finally, an analysis of the most promising areas for future development of the field is presented.
FPGA Based Parallel Architectures for Normalized Cross-Correlation
As a similarity measure, normalized cross-correlation has found application in a broad range of image processing. A dedicated hardware implementation of normalized cross-correlation is crucial for the requirements of real-time high-speed tasks such as automatic target matching, recognition and tracking. Two efficient parallel architectures for real-time implementation of normalized cross-correlation using field programmable gate array (FPGA) are proposed in this paper. In these architectures, several novel efficient approaches are proposed to reduce logic resource usage and computation time. These two architectures can be applied in different situations according to the practical available resource of the FPGA chip used. Function and timing simulation with Quartus II 8.0 and practical experiment in target recognition using image matching have shown that these architectures can effectively improve the speed performance of the practical target recognition system.
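For reference, the software definition of the normalized cross-correlation score that such hardware architectures accelerate can be written in a few lines of numpy; the random test patches below are only for demonstration.

```python
# Reference (software) normalized cross-correlation of a template against an
# equal-size image patch; the score lies in [-1, 1].
import numpy as np

def ncc(patch, template):
    f = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return float((f * t).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
template = rng.random((16, 16))
print(ncc(template, template))               # 1.0 for a perfect match
print(ncc(rng.random((16, 16)), template))   # near 0 for an unrelated patch
```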
The rkd-Tree : An Improved kd-Tree for Fast n-Closest Point Queries in Large Point Sets
The kd-tree is used in various applications, such as photon simulation with photon maps, or normal estimation in point sets for reconstruction, in order to perform fast n-closest neighbour searches in huge, static data sets of point sets of arbitrary dimensions. In a number of cases, where lower dimensional point sets are embedded in higher dimensional spaces, it has been shown that the vantage point tree (vp-tree) can significantly outperform the kd-tree. In this paper we introduce the rkd-tree, a modified version of the kd-tree that applies ideas from the vp-tree to the kd-tree. This improved kd-tree version is shown to outperform both the kd-tree and the vp-tree in a number of artificial and real test-cases.
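For context, the baseline operation the rkd-tree is designed to speed up is an n-closest point query on a static point set; the sketch below shows that query with SciPy's standard kd-tree, using a random 3-D point set as a stand-in for, e.g., a photon map.

```python
# Baseline n-closest query on a static point set with a conventional kd-tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 3))       # static 3-D point set (e.g. a photon map)
tree = cKDTree(points)                  # built once

query = rng.random(3)
dists, idx = tree.query(query, k=8)     # 8 closest neighbours
print(idx, dists)
```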
Facial expression recognition based on local region specific features and support vector machines
Facial expressions are one of the most powerful, natural and immediate means for human beings to communicate their emotions and intentions. Recognition of facial expressions has many applications including human-computer interaction, cognitive science, human emotion analysis, and personality development. In this paper, we propose a new method for the recognition of facial expressions from a single image frame that uses a combination of appearance and geometric features with support vector machine classification. In general, appearance features for the recognition of facial expressions are computed by dividing the face region into a regular grid (holistic representation). In this paper, however, we extract region-specific appearance features by dividing the whole face region into domain-specific local regions. Geometric features are also extracted from corresponding domain-specific regions. In addition, important local regions are determined by using an incremental search approach, which results in a reduction of feature dimension and an improvement in recognition accuracy. The results of facial expression recognition using features from domain-specific regions are also compared with the results obtained using the holistic representation. The performance of the proposed facial expression recognition system has been validated on the publicly available extended Cohn-Kanade (CK+) facial expression data set.
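A schematic sketch of the classification stage (not the authors' feature extractors): per-region appearance features and geometric features are concatenated and fed to a multi-class SVM. The feature dimensions and the dummy data below are invented; in practice they would come from the local face regions described above.

```python
# Concatenate region-wise appearance features with geometric features and
# classify expressions with an RBF-kernel SVM (dummy data for illustration).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_regions, feats_per_region, n_geometric = 200, 6, 59, 20
appearance = rng.random((n_samples, n_regions * feats_per_region))  # e.g. texture codes per region
geometric = rng.random((n_samples, n_geometric))                    # e.g. landmark distances/angles
X = np.hstack([appearance, geometric])
y = rng.integers(0, 7, n_samples)        # 7 basic expression classes as in CK+

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X[:150], y[:150])
print("accuracy:", clf.score(X[150:], y[150:]))
```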
Anatomic characteristics and clinical implications of angiographic coronary thrombus: insights from a patient-level pooled analysis of SYNTAX, RESOLUTE, and LEADERS Trials.
BACKGROUND The distribution of thrombus-containing lesions (TCLs) in an all-comer population admitted with a heterogeneous clinical presentation (stable, unstable angina, or an acute coronary syndrome) and treated with percutaneous coronary intervention is yet unclear, and the long-term prognostic implications are still disputed. This study sought to assess the distribution and prognostic implications of coronary thrombus, detected by coronary angiography, in a population recruited in all-comer percutaneous coronary intervention trials. METHODS AND RESULTS Patient-level data from 3 contemporary coronary stent trials were pooled by an independent academic research organization (Cardialysis, Rotterdam, the Netherlands). Clinical outcomes in terms of major adverse cardiac events (major adverse cardiac events, a composite of death, myocardial infarction, and repeat revascularization), death, myocardial infarction, and repeated revascularization were compared between patients with and without angiographic TCL. Preprocedural TCL was present in 257 patients (5.8%) and absent in 4193 (94.2%) patients. At 3-year follow-up, there was no difference for major adverse cardiac events (25.3 versus 25.4%; P=0.683); all-cause death (7.4 versus 6.8%; P=0.683); myocardial infarction (5.8 versus 6.0%; P=0.962), and any revascularizations (17.5 versus 17.7%; P=0.822) between patients with and without TCL. The comparison of outcomes in groups weighting the myocardium jeopardized by TCL also did not show a significant difference. TCL were seen more often in the first 2 segments of the right (43.6%) and left anterior descending (36.8%) coronary arteries. The association of TCL and bifurcation lesions was present in 40.1% of the prespecified segments. CONCLUSIONS TCL involved mainly the proximal coronary segments and did not have any effect on clinical outcomes. A more detailed thrombus burden quantification is required to investigate its prognostic implications. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00114972, NCT01443104, NCT00617084.
Fucosylated surfactant protein-D is a biomarker candidate for the development of chronic obstructive pulmonary disease.
UNLABELLED We previously reported that knockout mice for α1,6-fucosyltransferase (Fut8), which catalyzes the biosynthesis of core-fucose in N-glycans, develop emphysema and that Fut8 heterozygous knockout mice are more sensitive to cigarette smoke-induced emphysema than wild-type mice. Moreover, a lower FUT8 activity was found to be associated with a faster decline in lung function among chronic obstructive pulmonary disease (COPD) patients. These results led us to hypothesize that core-fucosylation levels in a glycoprotein could be used as a biomarker for COPD. We focused on a lung-specific glycoprotein, surfactant protein D (SP-D), which plays a role in immune responses and is present in the distal airways, alveoli, and blood circulation. The results of a glycomic analysis reported herein demonstrate the presence of a core-fucose in an N-glycan on enriched SP-D from pooled human sera. We developed an antibody-lectin enzyme immunoassay (EIA) for assessing fucosylation (core-fucose and α1,3/4 fucose) in COPD patients. The results indicate that fucosylation levels in serum SP-D are significantly higher in COPD patients than in non-COPD smokers. The severity of emphysema was positively associated with fucosylation levels in serum SP-D in smokers. Our findings suggest that increased fucosylation levels in serum SP-D are associated with the development of COPD. BIOLOGICAL SIGNIFICANCE It has been proposed that serum SP-D concentrations are predictive of COPD pathogenesis, but distinguishing between COPD patients and healthy individuals to establish a clear cut-off value is difficult because smoking status highly affects circulating SP-D levels. Herein, we focused on N-glycosylation in SP-D and examined whether or not N-glycosylation patterns in SP-D are associated with the pathogenesis of COPD. We performed an N-glycomic analysis of human serum SP-D and the results show that a core-fucose is present in its N-glycan. We also found that the N-glycosylation in serum SP-D was indeed altered in COPD, that is, fucosylation levels including core-fucosylation are significantly increased in COPD patients compared with non-COPD smokers. The severity of emphysema was positively associated with fucosylation levels in serum SP-D in smokers. Our findings shed new light on the discovery and/or development of a useful biomarker based on glycosylation changes for diagnosing COPD. This article is part of a Special Issue entitled: HUPO 2014.
Automatic Generation of Data-Oriented Exploits
As defense solutions against control-flow hijacking attacks gain wide deployment, control-oriented exploits from memory errors become difficult. As an alternative, attacks targeting non-control data do not require diverting the application’s control flow during an attack. Although it is known that such data-oriented attacks can mount significant damage, no systematic methods to automatically construct them from memory errors have been developed. In this work, we develop a new technique called data-flow stitching, which systematically finds ways to join data flows in the program to generate data-oriented exploits. We build a prototype embodying our technique in a tool called FLOWSTITCH that works directly on Windows and Linux binaries. In our experiments, we find that FLOWSTITCH automatically constructs 16 previously unknown and three known data-oriented attacks from eight real-world vulnerable programs. All the automatically-crafted exploits respect fine-grained CFI and DEP constraints, and 10 out of the 19 exploits work with standard ASLR defenses enabled. The constructed exploits can cause significant damage, such as disclosure of sensitive information (e.g., passwords and encryption keys) and escalation of privilege.
Oil and tocopherol content and composition of pumpkin seed oil in 12 cultivars.
Twelve pumpkin cultivars (Cucurbita maxima D.), cultivated in Iowa, were studied for their seed oil content, fatty acid composition, and tocopherol content. Oil content ranged from 10.9 to 30.9%. Total unsaturated fatty acid content ranged from 73.1 to 80.5%. The predominant fatty acids present were linoleic, oleic, palmitic, and stearic. Significant differences were observed among the cultivars for stearic, oleic, linoleic, and gadoleic acid content of oil. Low linolenic acid levels were observed (<1%). The tocopherol content of the oils ranged from 27.1 to 75.1 microg/g of oil for alpha-tocopherol, from 74.9 to 492.8 microg/g for gamma-tocopherol, and from 35.3 to 1109.7 microg/g for delta-tocopherol. The study showed potential for pumpkin seed oil from all 12 cultivars to have high oxidative stability that would be suitable for food and industrial applications, as well as high unsaturation and tocopherol content that could potentially improve the nutrition of human diets.
Active project based learning pedagogies: Learning hardware, software design and wireless sensor instrumentation
"Chalk and Talk" is an ineffective pedagogy for engineering education, but it is still widely used in various parts of the world. In recent years, Project Based Learning (PBL) has emerged as a modern pedagogy in which a project, based on certain problems, is given to students. The teacher carefully breaks down this project into driving questions and involves students in group-based or individual activities to answer and solve these questions. This paper discusses a four-stage process for implementing PBL pedagogy to teach electronics hardware, software, and wireless sensor instrumentation to a group of electronics technology undergraduate students. The further division of the project into different tasks/driving questions is also discussed. Observations based on students' performance showed that they take more interest in their work and engage more in individual or group-based learning to solve specific problems.
Reader-Aware Multi-Document Summarization via Sparse Coding
We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS). Specifically, a set of reader comments associated with the news reports are also collected. The generated summaries from the reports for the event should be salient according to not only the reports but also the reader comments. To tackle this RA-MDS problem, we propose a sparse-coding-based method that is able to calculate the salience of the text units by jointly considering news reports and reader comments. Another reader-aware characteristic of our framework is to improve linguistic quality via entity rewriting. The rewriting consideration is jointly assessed together with other summarization requirements under a unified optimization model. To support the generation of compressive summaries via optimization, we explore a finer syntactic unit, namely, noun/verb phrase. In this work, we also generate a data set for conducting RA-MDS. Extensive experiments on this data set and some classical data sets demonstrate the effectiveness of our proposed approach.
Quantification of the response of circulating epithelial cells to neoadjuvant treatment for breast cancer: a new tool for therapy monitoring
In adjuvant treatment for breast cancer there is no tool available with which to measure the efficacy of the therapy. In contrast, in neoadjuvant therapy reduction in tumour size is used as an indicator of the sensitivity of tumour cells to the agents applied. If circulating epithelial (tumour) cells can be shown to react to therapy in the same way as the primary tumour, then this response may be exploited to monitor the effect of therapy in the adjuvant setting. We used MAINTRAC® analysis to monitor the reduction in circulating epithelial cells during the first three to four cycles of neoadjuvant therapy in 30 breast cancer patients. MAINTRAC® analysis revealed a patient-specific response. Comparison of this response with the decline in size of the primary tumour showed that the reduction in number of circulating epithelial cells accurately predicted final tumour reduction at surgery if the entire neoadjuvant regimen consisted of chemotherapy. However, the response of the circulating tumour cells was unable to predict the response to additional antibody therapy. The response of circulating epithelial cells faithfully reflects the response of the whole tumour to adjuvant therapy, indicating that these cells may be considered part of the tumour and can be used for therapy monitoring.
Acronym Disambiguation Using Word Embedding
According to the website AcronymFinder.com, one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site to date, and each acronym has 12.5 definitions on average. Identifying what exactly an acronym means in a given context is a very important research topic for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding represents words in a continuous and multidimensional vector space, so that the semantic similarity between words can be computed easily as a vector distance. We evaluate the models on the MSH dataset and the ScienceWISE dataset, and both models outperform the state-of-the-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.
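A toy illustration of the core idea, assuming word embeddings are already available: the context and each candidate definition are embedded as averaged word vectors, and the definition with the highest cosine similarity to the context is chosen. The tiny hand-made vectors and the second expansion below merely stand in for embeddings learned from a large corpus and for real acronym senses.

```python
# Toy acronym disambiguation: pick the candidate definition whose averaged
# word vector is most similar to the averaged context vector.
import numpy as np

word_vecs = {                     # stand-in for learned word embeddings
    "magnetic": np.array([0.9, 0.1, 0.0]), "resonance": np.array([0.8, 0.2, 0.1]),
    "scan": np.array([0.7, 0.1, 0.2]), "rights": np.array([0.0, 0.9, 0.1]),
    "management": np.array([0.1, 0.8, 0.3]), "doctor": np.array([0.8, 0.0, 0.1]),
    "ordered": np.array([0.3, 0.3, 0.3]), "an": np.array([0.1, 0.1, 0.1]),
}

def embed(words):
    vecs = [word_vecs[w] for w in words if w in word_vecs]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

context = "the doctor ordered an mri scan".split()
definitions = {  # made-up candidate expansions of "MRI"
    "magnetic resonance imaging": "magnetic resonance scan".split(),
    "moral rights index": "rights management".split(),
}

ctx = embed(context)
best = max(definitions, key=lambda d: cosine(ctx, embed(definitions[d])))
print(best)   # expected: 'magnetic resonance imaging'
```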
SDF-1/CXCR4 promotes epithelial-mesenchymal transition and progression of colorectal cancer by activation of the Wnt/β-catenin signaling pathway.
Stromal cell-derived factor 1 (SDF-1) and its receptor, CXCR4, play an important role in angiogenesis and are associated with tumor progression. This study aimed to investigate the role of SDF-1/CXCR4-mediated epithelial-mesenchymal transition (EMT) and the progression of colorectal cancer (CRC) as well as the underlying mechanisms. The data showed that expression of CXCR4 and β-catenin mRNA and protein was significantly higher in CRC tissues than in distant normal tissues. CXCR4 expression was associated with β-catenin expression in CRC tissues, whereas high CXCR4 expression was strongly associated with low E-cadherin, high N-cadherin, and high vimentin expression, suggesting a cross talk between the SDF-1/CXCR4 axis and Wnt/β-catenin signaling pathway in CRC. In vitro, SDF-1 induced CXCR4-positive colorectal cancer cell invasion and EMT by activation of the Wnt/β-catenin signaling pathway. In contrast, SDF-1/CXCR4 axis activation-induced colorectal cancer invasion and EMT was effectively inhibited by the Wnt signaling pathway inhibitor Dickkopf-1. In conclusion, CXCR4-promoted CRC progression and EMT were regulated by the Wnt/β-catenin signaling pathway. Thus, targeting of the SDF-1/CXCR4 axis could have clinical applications in suppressing CRC progression.
Learning to Estimate 3D Human Pose and Shape from a Single Color Image
This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably from only 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of 3D shape from a single color image.
Diversity Regularized Spatiotemporal Attention for Video-Based Person Re-identification
Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.
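A small PyTorch sketch of the diversity-regularization idea, under the assumption that each frame yields K spatial attention maps over N image regions: the overlap between the normalized maps is penalized so that different attention models focus on different body parts. The exact penalty used in the paper may differ; a Gram-matrix-style term is shown for illustration.

```python
# Diversity penalty for multiple spatial attention maps: push the pairwise
# overlap (off-diagonal Gram entries) of the normalized maps toward zero.
import torch

def diversity_penalty(attn):
    """attn: (batch, K, N), rows are K attention distributions over N regions."""
    attn = attn / (attn.norm(dim=2, keepdim=True) + 1e-8)   # L2-normalize each map
    gram = attn @ attn.transpose(1, 2)                      # (batch, K, K) pairwise overlap
    eye = torch.eye(gram.size(1), device=attn.device)
    return ((gram - eye) ** 2).sum(dim=(1, 2)).mean()

attn = torch.softmax(torch.randn(4, 6, 49), dim=2)   # 6 attention models over a 7x7 grid
loss_div = diversity_penalty(attn)
print(loss_div)    # would be added to the re-identification loss with a small weight
```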
How do APIs evolve? A story of refactoring
Frameworks and libraries change their APIs. Migrating an application to the new API is tedious and disrupts the development process. Although some tools and ideas have been proposed to solve the evolution of APIs, most updates are done manually. To better understand the requirements for migration tools, we studied the API changes of four frameworks and one library. We discovered that the changes that break existing applications are not random, but tend to fall into particular categories. Over 80% of these changes are refactorings. This suggests that refactoring-based migration tools should be used to update applications.
Change Distilling:Tree Differencing for Fine-Grained Source Code Change Extraction
A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evaluation.
Learning the Number of Neurons in Deep Networks
Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
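A minimal PyTorch sketch of the group-sparsity idea, assuming one group per neuron (its row of incoming weights): an L2,1-style penalty added to the task loss drives whole rows toward zero so those neurons can be pruned after training. The layer sizes, penalty weight, and pruning threshold are placeholders, and the paper's exact weighting and schedule may differ.

```python
# Group lasso on neurons: one group = the incoming weight row of one neuron,
# so the penalty can zero out entire neurons rather than individual weights.
import torch
import torch.nn as nn

layer = nn.Linear(256, 128)              # 128 candidate neurons

def group_sparsity(linear):
    # Sum over neurons of the L2 norm of each neuron's incoming weights (L2,1 norm).
    return linear.weight.norm(dim=1).sum()

x = torch.randn(32, 256)
task_loss = layer(x).pow(2).mean()                  # placeholder for the real task loss
loss = task_loss + 1e-3 * group_sparsity(layer)     # 1e-3 is an illustrative weight
loss.backward()

# After training, neurons whose weight-row norm is ~0 can be removed:
alive = layer.weight.norm(dim=1) > 1e-3
print(int(alive.sum()), "of", layer.out_features, "neurons kept")
```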
Exact PDF equations and closure approximations for advective-reactive transport
Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations, and it provides an upper bound of predictive uncertainty.
Recommender Systems and their Security Concerns
Instead of simply using two-dimensional User × Item features, advanced recommender systems rely on additional dimensions (e.g. time, location, social network) in order to provide better recommendation services. In the first part of this paper, we survey a variety of dimension features and show how they are integrated into the recommendation process. As service providers collect more and more personal information, great privacy concerns are raised among the public. On the other side, service providers can also suffer from attacks launched by malicious users who want to bias the recommendations. In the second part of this paper, we survey attacks from and against recommender service providers, and existing solutions.
The synthesis and rendering of eroded fractal terrains
In standard fractal terrain models based on fractional Brownian motion the statistical character of the surface is, by design, the same everywhere. A new approach to the synthesis of fractal terrain height fields is presented which, in contrast to previous techniques, features locally independent control of the frequencies composing the surface, and thus local control of fractal dimension and other statistical characteristics. The new technique, termed noise synthesis, is intermediate in difficulty of implementation between simple stochastic subdivision and Fourier filtering or generalized stochastic subdivision, and does not suffer the drawbacks of creases or periodicity. Varying the local crossover scale of fractal character or the fractal dimension with altitude or other functions yields more realistic first approximations to eroded landscapes. A simple physical erosion model is then suggested which simulates hydraulic and thermal erosion processes to create global stream/valley networks and talus slopes. Finally, an efficient ray tracing algorithm for general height fields, of which most fractal terrains are a subset, is presented.
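An illustrative numpy sketch of the locally controlled noise-synthesis idea (a simplification, not the paper's exact formulation): a heightfield is built by summing octaves of interpolated lattice noise, with the per-octave gain governed by a roughness parameter that varies across the terrain instead of a single global fractal dimension.

```python
# Octave-summed lattice noise with a spatially varying roughness exponent,
# giving smooth terrain on one side of the map and rough terrain on the other.
import numpy as np

def value_noise(size, cells, rng):
    """Bilinearly interpolated random-lattice noise on a size x size grid, roughly in [-0.5, 0.5]."""
    lattice = rng.random((cells + 1, cells + 1)) - 0.5
    xs = np.linspace(0, cells, size, endpoint=False)
    i, f = xs.astype(int), xs - xs.astype(int)
    top = lattice[np.ix_(i, i)] * (1 - f) + lattice[np.ix_(i, i + 1)] * f       # blend in x
    bot = lattice[np.ix_(i + 1, i)] * (1 - f) + lattice[np.ix_(i + 1, i + 1)] * f
    return top * (1 - f[:, None]) + bot * f[:, None]                             # blend in y

def heightfield(size=256, octaves=6, seed=0):
    rng = np.random.default_rng(seed)
    # Roughness (Hurst-like exponent) varies across the map: smooth on the left,
    # rough on the right.
    H = np.tile(np.linspace(0.9, 0.3, size), (size, 1))
    height = np.zeros((size, size))
    for o in range(octaves):
        gain = 2.0 ** (-H * o)                     # locally varying per-octave gain
        height += gain * value_noise(size, 4 * 2 ** o, rng)
    return height

hf = heightfield()
print(hf.shape, float(hf.min()), float(hf.max()))
```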
Horizon-split ambient occlusion
Ambient occlusion is a technique that computes the amount of light reaching a point on a diffuse surface based on its directly visible occluders. It gives perceptual clues of depth, curvature, and spatial proximity and thus is important for realistic rendering. Traditionally, ambient occlusion is calculated by integrating the visibility function over the normal-oriented hemisphere around any given surface point. In this paper we show this hemisphere can be partitioned into two regions by a horizon line defined by the surface in a local neighborhood of such point. We introduce an image-space algorithm for finding an approximation of this horizon and, furthermore, we provide an analytical closed form solution for the occlusion below the horizon, while the rest of the occlusion is computed by sampling based on a distribution to improve the convergence. The proposed ambient occlusion algorithm operates on the depth buffer of the scene being rendered and the associated per-pixel normal buffer. It can be implemented on graphics hardware in a pixel shader, independently of the scene geometry. We introduce heuristics to reduce artifacts due to the incompleteness of the input data and we include parameters to make the algorithm easy to customize for quality or performance purposes. We show that our technique can render high-quality ambient occlusion at interactive frame rates on current GPUs.
DenseNet: Implementing Efficient ConvNet Descriptor Pyramids
Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.
Self-efficacy: I. Sources of Self-efficacy Beliefs; II. Efficacy-Mediated Processes; III. Adaptive Benefits of Optimistic Self-beliefs of Efficacy; IV. Development and Exercise of Self-efficacy over the Lifespan
Glossary Affective Processes: Processes regulating emotional states and elicitation of emotional reactions. Cognitive Processes: Thinking processes involved in the acquisition, organization and use of information. Motivation: Activation to action. Level of motivation is reflected in choice of courses of action, and in the intensity and persistence of effort. Perceived Self-Efficacy: People's beliefs about their capabilities to produce effects. Self-Regulation: Exercise of influence over one's own motivation, thought processes, emotional states and patterns of behavior. Perceived self-efficacy is defined as people's beliefs about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives. Self-efficacy beliefs determine how people feel, think, motivate themselves and behave. Such beliefs produce these diverse effects through four major processes. They include cognitive, motivational, affective and selection processes. A strong sense of efficacy enhances human accomplishment and personal well-being in many ways. People with high assurance in their capabilities approach difficult tasks as challenges to be mastered rather than as threats to be avoided. Such an efficacious outlook fosters intrinsic interest and deep engrossment in activities. They set themselves challenging goals and maintain strong commitment to them. They heighten and sustain their efforts in the face of failure. They quickly recover their sense of efficacy after failures or setbacks. They attribute failure to insufficient effort or deficient knowledge and skills which are acquirable. They approach threatening situations with assurance that they can exercise control over them. Such an efficacious outlook produces personal accomplishments, reduces stress and lowers vulnerability to depression. In contrast, people who doubt their capabilities shy away from difficult tasks which they view as personal threats. They have low aspirations and weak commitment to the goals they choose to pursue. When faced with difficult tasks, they dwell on their personal deficiencies, on the obstacles they will encounter, and all kinds of adverse outcomes rather than concentrate on how to perform successfully. They slacken their efforts and give up quickly in the face of difficulties. They are slow to recover their sense of efficacy following failure or setbacks. Because they view insufficient performance as deficient aptitude it does not require much failure for them to lose faith in their capabilities. They fall easy victim to stress and depression.
Dynamic source routing in ad hoc wireless networks
An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.
Adoptive immunotherapy for cancer or viruses.
Adoptive immunotherapy, or the infusion of lymphocytes, is a promising approach for the treatment of cancer and certain chronic viral infections. The application of the principles of synthetic biology to enhance T cell function has resulted in substantial increases in clinical efficacy. The primary challenge to the field is to identify tumor-specific targets to avoid off-tumor, on-target toxicity. Given recent advances in efficacy in numerous pilot trials, the next steps in clinical development will require multicenter trials to establish adoptive immunotherapy as a mainstream technology.
A survey on Colombian agriculture during the 1990s
This survey reviews some of the key developments in Colombian agriculture during the 1990s. While economic reform and macro policy appear to largely determine the evolution of the sector throughout most of the decade, the impact of sectoral policy is not that clear. The long-run significance of changes brought about in the structure of agricultural production, trade balance, and social conditions in rural areas is unclear. Whether they are the product of a transitional period between two macro and sectoral policy perspectives, of a temporarily distorted set of incentives, or a combination of the two is an open question. Hopefully, a set of interrogations may arise that help improve our understanding of Colombian agriculture.
On composition of a federated web search result page: using online users to provide pairwise preference for heterogeneous verticals
Modern web search engines are federated --- a user query is sent to numerous specialized search engines, called verticals, like web (text documents), News, Image, Video, etc., and the results returned by these engines are then aggregated, composed into a search result page (SERP), and presented to the user. For a specific query, multiple verticals could be relevant, which makes the placement of these vertical results within blocks of textual web results challenging: how do we represent, assess, and compare the relevance of these heterogeneous entities? In this paper we present a machine-learning framework for SERP composition in the presence of multiple relevant verticals. First, instead of using the traditional label generation method of human judgment guidelines and trained judges, we use a randomized online auditioning system that allows us to evaluate triples of the form <query, web block, vertical>. We use a pairwise click preference to evaluate whether the web block or the vertical block had better user engagement. Next, we use a hinged feature vector that contains features from the web block to create a common reference frame and augment it with features representing the specific vertical judged by the user. A gradient boosted decision tree is then learned from the training data. For the final composition of the SERP, we place a vertical result at a slot if the score is higher than a computed threshold. The thresholds are algorithmically determined to guarantee specific coverage for verticals at each slot. We use correlation of clicks as our offline metric and show that the click-preference target has better correlation than models based on human judgments. Furthermore, on online tests for the News and Image verticals we show higher user engagement for both head and tail queries.
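A rough sketch of the learning step under stated assumptions: each audited triple becomes a training row whose hinged feature vector concatenates shared web-block features with vertical-specific features, labelled by the pairwise click preference; a boosted model scores the vertical, which is slotted only when the score clears a per-slot threshold. The feature blocks, dataset, and threshold below are invented for illustration, and scikit-learn's generic gradient boosting stands in for the production learner.

```python
# Illustrative pipeline: hinged features -> gradient boosted scorer -> slot threshold.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
web_block_feats = rng.random((n, 8))     # common reference frame (web block)
vertical_feats = rng.random((n, 12))     # vertical-specific signals (e.g. News/Image)
X = np.hstack([web_block_feats, vertical_feats])
y = rng.integers(0, 2, n)                # 1 = users preferred the vertical block

model = GradientBoostingClassifier().fit(X[:1500], y[:1500])
scores = model.predict_proba(X[1500:])[:, 1]

SLOT_THRESHOLD = 0.6                     # chosen offline to hit a target vertical coverage
place_vertical = scores > SLOT_THRESHOLD
print(place_vertical[:10])
```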
Credit Scoring, Statistical Techniques and Evaluation Criteria: A Review of the Literature
Credit scoring has been regarded as a core appraisal tool of different institutions during the last few decades, and has been widely investigated in different areas, such as finance and accounting. Different scoring techniques are being used in areas of classification and prediction, where statistical techniques have conventionally been used. Both sophisticated and traditional techniques, as well as performance evaluation criteria, are investigated in the literature. The principal aim of this paper is to carry out a comprehensive review of 214 articles/books/theses that involve credit scoring applications in various areas, but primarily in finance and banking. This paper also aims to investigate how credit scoring has developed in importance, and to identify the key determinants in the construction of a scoring model, by means of a widespread review of different statistical techniques and performance evaluation criteria. Our review of the literature revealed that there is no overall best statistical technique for building scoring models; a technique that is best for all circumstances does not yet exist. Also, the applications of the scoring methodologies have been widely extended to include different areas, and this subsequently can help decision makers, particularly in banking, to predict their clients' behaviour. Finally, this paper also suggests a number of directions for future research.
Subject Content-Based Intelligent Cropping of Digital Photos
Image cropping is one of the most important operations performed to enhance photographs. The direct benefit is improved image composition in terms of better subject location, better subject magnification, and reduced background clutter. Indirectly, cropping can also help image and subject rendering because unwanted distractions are eliminated before image enhancement. We present a robust and efficient solution to this challenging problem for consumer snapshot photos. First, a main subject detection algorithm is employed to produce a belief map probabilistically indicating the subject content. Facilitated by the use of an integral image, a globally optimum cropping window is located to maximize the subject content within the cropping window while satisfying multiple constraints. Extensive evaluation using multiple judges has shown its overwhelming advantage over a commercially used cropping scheme.
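The integral-image step mentioned above can be illustrated with a short sketch: once the belief map is turned into a summed-area table, the subject content inside any candidate cropping window becomes a constant-time lookup, so a globally optimal window can be found by exhaustive search. The toy belief map and window size are placeholders, and the additional constraints used in the paper are omitted.

```python
import numpy as np

def integral_image(belief):
    """Summed-area table with a zero row/column prepended for easy indexing."""
    return np.pad(belief, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def window_sum(ii, top, left, h, w):
    """Total belief inside the h x w window whose top-left corner is (top, left)."""
    return ii[top + h, left + w] - ii[top, left + w] - ii[top + h, left] + ii[top, left]

def best_crop(belief, h, w):
    """Exhaustively score every h x w window and return the one with maximal subject content."""
    ii = integral_image(belief)
    H, W = belief.shape
    best, best_pos = -np.inf, None
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            s = window_sum(ii, top, left, h, w)
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos, best

# Toy belief map with a bright "main subject" region.
belief = np.zeros((60, 80))
belief[20:40, 50:70] = 1.0
print(best_crop(belief, 30, 40))   # returns a window covering the subject region
```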
Intellectual Curiosity and the Scientific Revolution - A Global Perspective
Huff, Toby E., Intellectual Curiosity and the Scientific Revolution - A Global Perspective. Cambridge University Press, 2010. The surprising rise of Europe that began in the 17th century quickly eclipsed the glories of China, India, or the Middle East at a time that nobody would have predicted. Why did this happen despite the population and wealth advantages of those other three civilizations? Huff says: "On the road to modernity, we are accustomed to identifying the Industrial Revolution of the 18th century as a great landmark. The present inquiry will lead us to consider whether that great transformation could have taken place without the scientific revolution and, above all, Newton's Principia Mathematica and the related developments in astronomy and the science of mechanics that occurred uniquely in Western Europe. "It may be more than coincidence that the absence of those developments in other regions of the world had something to do with the economic and political stagnation that persisted outside Europe (and Europe overseas) all the way to the mid-twentieth century. Such are some of the questions that need to be examined in an age of apparent instant thought and communication that has everyone wired." Huff notes that the 17th century in Europe was the great divide that separated Western European development from the rest of the world for the next 3-1/2 centuries. The flow of discoveries included astronomy, optics, science of motion, math, and new physics. The Newtonian synthesis created an integrated celestial and terrestrial physics within the framework of universal gravitation. There were advances in hydraulics and pneumatics, medicine, microscopy, and human and animal anatomy, as well as big steps toward the discovery of electricity. Why did this only happen in the West, and not in China, India, or the Muslim Middle East, all of which were much more prosperous than the Europe of the time? Cultures are not uniformly alike. The shape of a culture is affected by its geography, its religions, and its social practices. Europe had accumulated an enormous amount of intellectual capital absent everywhere else. It began in the 12th and 13th centuries in philosophy, law, institution building, and education. The scientific revolution then flowered in the Enlightenment of the 18th century as an ongoing process. The key to all of this was the educational system in Europe - the universities that had no counterparts elsewhere. The entire European worldview was different, held together by a very different concept of law and legal structure, a system constantly in reform and renewal. European law had a tradition of considering the rights of many participants: citizens, professionals, and nobles. This legal tradition does not develop in authoritarian civilizations. The rise of European literacy in the 16th and 17th centuries was a major part of that success. Note also the newspaper revolution in England and then in Europe - but not in China or the Muslim world until the 19th century, and even then, not anywhere near as free in expression as in the Western world. The telescope of 1608 that started a whole train of scientific discoveries was taken by travelers to China, India, and Ottoman Turkey - but while used in these cultures as a toy, it was never improved and never spurred new inventions. While the new geography spurred by the 16th century discoveries sent Europeans to explore and visit other cultures, the other three civilizations did not exhibit a like curiosity.
What Europe had developed by the 16th century was a whole system of law derived from Roman and Medieval Christian culture that included merchant law, contracts for international commerce, and even a system of petitions for redress from citizens to their monarchs. And then add to this the Protestant Reformation with its work ethic - the idea that work was a good thing in itself, not just a necessary evil, and it is apparent why Europe and Europe abroad flourished. …
The first cosmetic treatise of history. A female point of view.
The Schola Medica Salernitana was an early medieval medical school in the south Italian city of Salerno and the most important native source of medical knowledge in Europe at the time. The school achieved its splendour between the 10th and 13th centuries, during the final decades of the Longobard kingdom. In the school, women were involved in medical learning as both teachers and students. Among these women was Trotula de Ruggiero (11th century), a teacher whose main interest was to alleviate the suffering of women. She was the author of many medical works, the most notable being De Passionibus Mulierum Curandarum (about women's diseases), also known as Trotula Major. Another important work she wrote was De Ornatu Mulierum (about women's cosmetics), also known as Trotula Minor, in which she teaches women to conserve and improve their beauty and treat skin diseases through a series of precepts, advice and natural remedies. She gives lessons on make-up and suggests ways to smooth wrinkles, remove puffiness from the face and eyes, remove unwanted body hair, lighten the skin, hide blemishes and freckles, clean the teeth and freshen the breath, dye the hair, wax, and treat chapped lips and gums.
The T-Wing: A VTOL UAV for Defense and Civilian Applications
This paper describes progress made on the T-Wing tail-sitter UAV programme currently being undertaken via a collaborative research agreement between Sonacom Pty Ltd and the University of Sydney. This vehicle is being developed in response to a perceived requirement for a more flexible surveillance and remote sensing platform than is currently available. Missions for such a platform include coastal surveillance, defence intelligence gathering and environmental monitoring. The use of an unmanned air-vehicle (UAV) with a vertical takeoff and landing (VTOL) capability that can still enjoy efficient horizontal flight promises significant advantages over other vehicles for such missions. One immediate advantage is the potential to operate from small patrol craft and frigates equipped with helipads. In this role, such a vehicle could be used for maritime surveillance; sonobuoy or other store deployment; communication relay; convoy protection; and support for ground and helicopter operations. The programme currently being undertaken involves building a 50-lb fully autonomous VTOL tail-sitter UAV to demonstrate successful operation near the ground in windy conditions and to perform the transition maneuvers between vertical and horizontal flight. This will then allow the development of a full-size prototype vehicle (the "Mirli") to be undertaken as a prelude to commercial production. The Need for a Tail-Sitter UAV - Defence Applications. Although conflicts over the last 20 years have demonstrated the importance of UAV systems in facilitating real-time intelligence gathering, it is clear that most current systems still do not possess the operational flexibility that is desired by force commanders. One of the reasons for this is that most UAVs have adopted relatively conventional aircraft configurations. This leads directly to operational limitations because it necessitates either take-off and landing from large fixed runways or the use of specialized launch and recovery methods such as catapults, rockets, nets, parachutes and airbags. One potential solution to these operational difficulties is a tail-sitter VTOL UAV. Such a vehicle has few operational requirements other than a small clear area for take-off and landing. While other VTOL concepts share this operational advantage over conventional vehicles, the tail-sitter has some other unique benefits. In comparison to helicopters, a tail-sitter vehicle does not suffer the same performance penalties in terms of dash-speed, range and endurance because it spends the majority of its mission in a more efficient airplane flight mode. The only other VTOL concepts that combine vertical and horizontal flight are the tilt-rotor and tilt-wing; however, both involve significant extra mechanical complexity in comparison to the tail-sitter vehicle, which has fixed wings and nacelles. A further simplification can be made in comparison to other VTOL designs by the use of prop-wash over wing- and fin-mounted control surfaces to effect control during vertical flight, thus obviating the need for cyclic rotor control. For naval forces, a tail-sitter VTOL UAV has enormous potential as an aircraft that can be deployed from small ships and used for long-range reconnaissance and surveillance; over-the-horizon detection of low-flying missiles and aircraft; deployment of remote acoustic sensors; and as a platform for aerial support and communications.
The vehicle could also be used in anti-submarine activities and anti-surface operations and is ideal for battlefield monitoring over both sea and land. The obvious benefit in comparison to a conventional UAV is the operational flexibility provided by the vertical launch and recovery of the vehicle. The US Navy and Marine Corps, who anticipate spending approximately US$350m on their VTUAV program, have clearly recognized this fact (Figure 1: a typical naval UAV mission, monitoring acoustic sensors). For ground-based forces, a tail-sitter vehicle is also attractive because it allows UAV systems to be quickly deployed from small cleared areas with a minimum of support equipment. This makes the UAVs less vulnerable to attacks on fixed bases, without the need to set up catapult launchers or recovery nets. It is envisaged that ground forces would mainly use small VTOL UAVs as reconnaissance and communication relay platforms. Civilian Applications. Besides the defence requirements, there are also many civilian applications for which a VTOL UAV is admirably suited. Coastal surveillance to protect national borders from illegal immigrants and illicit drugs is clearly an area where such vehicles could be used. The VTOL characteristics in this role are an advantage, as they allow such vehicles to be based in remote areas without the fixed infrastructure of airstrips, or to be operated from small coastal patrol vessels. Further applications are also to be found in mineral exploration and environmental monitoring in remote locations. While conventional vehicles could of course accomplish such tasks, their effectiveness may be limited if forced to operate from bases a long way from the area of interest. Tail-Sitters: A Historical Perspective. Although tail-sitter vehicles have been investigated over the last 50 years as a means to combine the operational advantages of vertical flight enjoyed by helicopters with the better horizontal flight attributes of conventional airplanes, no successful tail-sitter vehicles have ever been produced. One of the primary reasons for this is that tail-sitters such as the Convair XF-Y1 and Lockheed XF-V1 experimental vehicles of the 1950s (Figure 2: Convair XF-Y1 and Lockheed XF-V1 tail-sitter aircraft) proved to be very difficult to pilot during vertical flight and the transition maneuvers. With the advent of modern computing technology and improvements in sensor reliability, capability and cost, it is now possible to overcome these piloting disadvantages by transitioning the concept to that of an unmanned vehicle. With the pilot replaced by modern control systems, it should be possible to realise the original promise of the tail-sitter configuration. The tail-sitter aircraft considered in this paper differs substantially from its earlier counterparts and is most similar in configuration to the Boeing Heliwing vehicle of the early 1990s. This vehicle had a 1450-lb maximum takeoff weight (MTOW) with a 200-lb payload, 5-hour endurance and 180 kts maximum speed, and used twin rotors powered by a single 240 SHP turbine engine. A picture of the Heliwing is shown in Figure 3 (the Boeing Heliwing vehicle).
BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains
We present BrainNet which, to our knowledge, is the first multi-person non-invasive direct brain-to-brain interface for collaborative problem solving. The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows three human subjects to collaborate and solve a task using direct brain-to-brain communication. Two of the three subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex. The Receiver integrates the information received from the two Senders and makes a decision using an EEG interface about either turning the block or keeping it in the same position. A second round of the game provides an additional chance for the Senders to evaluate the Receiver’s decision and send feedback to the Receiver’s brain, and for the Receiver to rectify a possible incorrect decision made in the first round. We evaluated the performance of BrainNet in terms of (1) Group-level performance during the game; (2) True/False positive rates of subjects’ decisions; (3) Mutual information between subjects. Five groups, each with three human subjects, successfully used BrainNet to perform the Tetris task, with an average accuracy of 81.25%. Furthermore, by varying the information reliability of the Senders by artificially injecting noise into one Sender’s signal, we investigated how the Receiver learns to integrate noisy signals in order to make a correct decision. We found that Receivers are able to learn which Sender is more reliable based solely on the information transmitted to their brains. Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a “social network” of connected brains.
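One of the reported evaluation metrics is the mutual information between subjects; a brief sketch of how such a metric could be computed from per-trial binary decisions follows (the trial data and variable names are illustrative, not the study's recordings).

```python
import numpy as np
from collections import Counter

def mutual_information_bits(x, y):
    """Empirical mutual information (in bits) between two discrete sequences,
    e.g. a Sender's intended decision and the Receiver's executed decision."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# Toy example: the Receiver follows the Sender on 9 of 10 trials (1 = rotate, 0 = keep).
sender   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
receiver = [1, 0, 1, 1, 0, 0, 0, 0, 1, 1]
print(round(mutual_information_bits(sender, receiver), 3))
```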
Microwave synthesis process for ZSM-11 molecular sieve
The invention relates to a microwave synthesis process for a ZSM-11 molecular sieve. The process uses an aluminum source, an alkali source, a silicon source, tetrabutylammonium bromide and de-ionized water as the raw materials and uses microwave radiation as the heating mode. The reaction mixture is crystallized for 1 to 8 hours under microwave radiation and autogenous pressure at 130 to 180 °C, the solid is separated from the mother liquor, and the product is then washed with de-ionized water until the pH value is 8 to 9, yielding the raw powder of the ZSM-11 molecular sieve. The process replaces conventional hydrothermal heating with microwave radiation heating and yields a ZSM-11 molecular sieve with high crystallinity, a pure crystal phase and a grain size that is controllable over a wide range; by pre-placing seed crystals, it markedly accelerates nucleation and crystal growth and greatly shortens the crystallization time.
Slow progress in changing the school food environment: nationally representative results from public and private elementary schools.
BACKGROUND Children spend much of their day in school, and authorities have called for improvements in the school food environment. However, it is not known whether changes have occurred since the federal wellness policy mandate took effect in 2006-2007. OBJECTIVE We examined whether the school food environment in public and private elementary schools changed over time and examined variations by school type and geographic division. DESIGN AND PARTICIPANTS Survey data were gathered from respondents at nationally representative samples of elementary schools during the 2006-2007 and 2009-2010 school years (respectively, 578 and 680 public schools, and 259 and 313 private schools). MAIN OUTCOME MEASURES Topics assessed included competitive foods, school meals, and other food-related practices (eg, school gardens and nutrition education). A 16-item food environment summary score was computed, with possible scores ranging from 0 (least healthy) to 100 (healthiest). ANALYSES Multivariate regression models were used to examine changes over time in the total school food environment score and component items, and variations by US census division. RESULTS Many practices improved, such as participation in school gardens or farm-to-school programs, and availability of whole grains and only lower-fat milks in lunches. Although the school food environment score increased significantly, the magnitude of change was small; as of 2009-2010 the average score was 53.5 for public schools (vs 50.1 in 2006-2007) and 42.2 for private schools (vs 37.2 in 2006-2007). Scores were higher in public schools than in private schools (P<0.001), but did not differ by race/ethnicity or school size. For public schools, scores were higher in the Pacific and West South Central divisions compared with the national average. CONCLUSIONS Changes in the school food environment have been minimal, with much room remaining for improvement. Additional policy changes may be needed to speed the pace of improvement.
Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto
Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas. Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimize delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. Applying MARL approaches to the ATSC problem is associated with a few challenges as agents typically react to changes in the environment at the individual level, but the overall behavior of all agents may not be optimal. This paper presents the development and evaluation of a novel system of multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC). MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show unprecedented reduction in the average intersection delay ranging from 27% in mode 1 to 39% in mode 2 at the network level and travel-time savings of 15% in mode 1 and 26% in mode 2, along the busiest routes in Downtown Toronto.
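As a rough illustration of the reinforcement-learning machinery that a system like this builds on, here is a minimal tabular Q-learning agent for a single intersection, corresponding to the independent mode; the state encoding, reward, and parameters are placeholders rather than MARLIN-ATSC's actual design.

```python
import random
from collections import defaultdict

class IntersectionAgent:
    """Tabular Q-learning for one signalized intersection.

    State  : discretized queue lengths on the approaches (a tuple).
    Action : index of the signal phase to serve next.
    Reward : negative delay proxy (e.g. minus the total queued vehicles),
             so that maximizing return minimizes delay.
    """
    def __init__(self, n_phases, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_phases)
        self.n_phases, self.alpha, self.gamma, self.epsilon = n_phases, alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:                                   # explore
            return random.randrange(self.n_phases)
        return max(range(self.n_phases), key=lambda a: self.q[state][a])     # exploit

    def learn(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

# Toy usage against a fictitious simulator interface (not part of the paper).
agent = IntersectionAgent(n_phases=4)
state = (3, 1, 0, 2)                 # queued vehicles per approach
action = agent.act(state)
reward = -sum(state)                 # negative total queue as a delay proxy
next_state = (2, 1, 1, 1)
agent.learn(state, action, reward, next_state)
```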
[Vagal nerve stimulation for refractory epilepsy in children].
Epilepsy is defined as recurrent seizures that are not the result of fever or acute cerebral insult. It is very common in all age groups. In the majority of cases, satisfactory control is achieved, allowing a normal life. However, in some cases the disease is resistant to a variety of medications. In these cases, attempts are made to decrease the number of seizures using other methods, such as a ketogenic diet or neurosurgical interventions. Recently, a new treatment modality, vagal nerve stimulation, was introduced, particularly for cases that are resistant to medications and are not candidates for neurosurgical intervention.
Microstructure evolution of TC4 titanium alloy/316L stainless steel dissimilar joint vacuum-brazed with Ti-Zr-Cu amorphous filler metal
TC4 titanium alloy was vacuum-brazed to 316L stainless steel (SS) with Ti-Zr-Cu amorphous filler metal. The effect of brazing time and temperature on the interfacial microstructure and mechanical properties of joints was investigated. Electron probe micro-analyzer (EPMA) and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS) were used to study the joint microstructure; meanwhile, the reaction phases on fracture surfaces were identified by X-ray diffraction (XRD). The results show that all joints had similar interfacial microstructure of TC4 titanium substrate/Widmanstätten/β-Ti + Ti2Cu/(α-Ti + λ-Cu2TiZr) + Ti2Cu/Ti-Fe-Cu/TiFe/(Fe, Cr)2Ti/σ-phase + Fess/316L stainless steel substrate. Three reaction layers TiFe/(Fe, Cr)2Ti/σ-phase + Fess formed close to 316L stainless steel substrate and could benefit the mechanical properties of joints. The maximum shear strength of 65 MPa was obtained at 950 °C for 10 min. During shear test, cracks initiated at the interface of Ti-Cu-Fe layer/TiFe layer and propagated along the brazed seam/316L interface with a large amount of cleavage facets existing on the fracture surface.
Beyond Independence: Probabilistic Models for Query Approximation on Binary Transaction Data
We investigate the problem of generating fast approximate answers to queries posed to large sparse binary data sets. We focus in particular on probabilistic model-based approaches to this problem and develop a number of techniques that are significantly more accurate than a baseline independence model. In particular, we introduce two techniques for building probabilistic models from frequent itemsets: the itemset maximum entropy method, and the itemset inclusion-exclusion model. In the maximum entropy method we treat itemsets as constraints on the distribution of the query variables and use the maximum entropy principle to build a joint probability model for the query attributes online. In the inclusion-exclusion model itemsets and their frequencies are stored in a data structure called an ADtree that supports an efficient implementation of the inclusion-exclusion principle in order to answer the query. We empirically compare these two itemset-based models to direct querying of the original data, querying of samples of the original data, as well as other probabilistic models such as the independence model, the Chow-Liu tree model, and the Bernoulli mixture model. These models are able to handle high-dimensionality (hundreds or thousands of attributes), whereas most other work on this topic has focused on relatively low-dimensional OLAP problems. Experimental results on both simulated and real-world transaction data sets illustrate various fundamental tradeoffs between approximation error, model complexity, and the online time required to compute a query answer.
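To make the contrast concrete, here is a small sketch comparing the baseline independence model with the inclusion-exclusion identity that the itemset-based models exploit: the frequency of a conjunctive query with negated attributes expands into a signed sum of all-positive itemset frequencies. The toy data are synthetic and no ADtree is used, so this is only an illustration of the principle, not the paper's implementation.

```python
import itertools
import numpy as np

# Toy binary transaction data with correlated attributes 0 and 1.
rng = np.random.default_rng(1)
a = (rng.random(5000) < 0.4).astype(int)
b = np.where(rng.random(5000) < 0.7, a, (rng.random(5000) < 0.3).astype(int))
c = (rng.random(5000) < 0.5).astype(int)
data = np.column_stack([a, b, c])

def itemset_freq(items):
    """Empirical frequency of all listed attributes being 1 (an all-positive itemset)."""
    if not items:
        return 1.0
    return float(data[:, list(items)].all(axis=1).mean())

def independence_estimate(pos, neg):
    """Baseline: treat all query attributes as independent."""
    p = 1.0
    for i in pos:
        p *= itemset_freq([i])
    for i in neg:
        p *= 1.0 - itemset_freq([i])
    return p

def inclusion_exclusion(pos, neg):
    """P(pos all 1, neg all 0), expanded over subsets of the negated attributes."""
    total = 0.0
    for k in range(len(neg) + 1):
        for subset in itertools.combinations(neg, k):
            total += (-1) ** k * itemset_freq(list(pos) + list(subset))
    return total

# Query: attributes 0 and 1 present, attribute 2 absent.
exact = float((data[:, [0, 1]].all(axis=1) & (data[:, 2] == 0)).mean())
print(exact, independence_estimate([0, 1], [2]), inclusion_exclusion([0, 1], [2]))
```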
Image-Based Proxy Accumulation for Real-Time Soft Global Illumination
We present a new, general, and real-time technique for soft global illumination in low-frequency environmental lighting. It accumulates over relatively few spherical proxies which approximate the light blocking and re-radiating effect of dynamic geometry. Soft shadows are computed by accumulating log visibility vectors for each sphere proxy as seen by each receiver point. Inter-reflections are computed by accumulating vectors representing the proxy's unshadowed radiance when illuminated by the environment. Both vectors capture low-frequency directional dependence using the spherical harmonic basis. We also present a new proxy accumulation strategy that splats each proxy to receiver pixels in image space to collect its shadowing and indirect lighting contribution. Our soft GI rendering pipeline unifies direct and indirect soft effects with a simple accumulation strategy that maps entirely to the GPU and outperforms previous vertex-based methods.
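A heavily simplified sketch of the accumulation idea described above follows: per-proxy log-visibility vectors in a low-order spherical harmonic basis are summed at a receiver point and exponentiated once before being integrated against the environment lighting. The per-sphere log-visibility term and the SH exponential below are crude stand-ins, not the tabulated values or approximations used in the paper.

```python
import numpy as np

N_COEFFS = 9  # 3 SH bands (l = 0..2)

def log_visibility_sh(receiver, sphere_center, sphere_radius):
    """Placeholder for the log-visibility vector of one sphere proxy as seen from
    `receiver`: here we only attenuate the DC term by the sphere's subtended solid
    angle, as a crude isotropic stand-in for a tabulated directional lookup."""
    d = np.linalg.norm(np.asarray(sphere_center) - np.asarray(receiver))
    ratio = sphere_radius / max(d, sphere_radius)
    blocked_fraction = 0.5 * (1.0 - np.sqrt(max(0.0, 1.0 - ratio ** 2)))
    v = np.zeros(N_COEFFS)
    v[0] = np.log(max(1e-4, 1.0 - blocked_fraction))
    return v

def sh_exp_approx(f):
    """Very rough SH exponential: exponentiate the DC term and scale the higher
    bands by it (a stand-in for the more accurate approximation used in
    log-space accumulation methods)."""
    dc = np.exp(f[0])
    out = f * dc
    out[0] = dc
    return out

def shadowed_irradiance(receiver, spheres, light_sh):
    """Accumulate log-visibility over all sphere proxies, exponentiate once,
    then integrate against the low-frequency environment light (SH dot product)."""
    log_v = sum(log_visibility_sh(receiver, c, r) for c, r in spheres)
    visibility = sh_exp_approx(log_v)
    return float(visibility @ light_sh)

# Toy scene: two blocker spheres under a constant (DC-only) environment light.
light_sh = np.zeros(N_COEFFS); light_sh[0] = 1.0
spheres = [((0.0, 1.0, 0.0), 0.4), ((1.0, 1.5, 0.0), 0.3)]
print(shadowed_irradiance((0.0, 0.0, 0.0), spheres, light_sh))
```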
Atypical and Conventional Antipsychotic Drugs in Treatment-Naive First-Episode Schizophrenia: A 52-Week Randomized Trial of Clozapine Vs Chlorpromazine
The purported advantages of second-generation or ‘atypical’ antipsychotics relative to first-generation antipsychotics have not been examined in patients with a first episode of schizophrenia. This flexible-dose study examined efficacy and safety in a randomized, double-blind, 52-week trial, comparing chlorpromazine (CPZ) and clozapine (CLZ) in treatment-naive patients experiencing their first episode of schizophrenia. In all, 160 inpatients with first-episode schizophrenia or schizophreniform disorder were randomized to CPZ or CLZ and followed for 52 weeks or until dropout. The primary efficacy measures were time to first remission and the proportion of time remaining in remission. The analysis was supplemented by comparisons on a profile of clinical symptoms and side effects. Of these first-episode patients, 80% achieved remission within 1 year (79% CPZ, 81% CLZ). The Kaplan–Meier estimated median time to first remission was 8 weeks for CLZ vs 12 weeks for CPZ (χ2(1)=5.56, p=0.02). Both the rate of first achieving remission and the odds of being in remission during the trial were almost doubled for the CLZ group in comparison with the CPZ group. At 12 weeks, CLZ was superior on many rating scale measures of symptom severity while CPZ was not superior on any. These symptom differences remained significant when controlling for EPS differences. By 52 weeks many of the symptom differences between groups were no longer significant. Generally, CLZ produced fewer side effects than CPZ, particularly extrapyramidal side effects. There was no significant difference between treatments in weight change or glucose metabolism. For each prior year of untreated psychosis, there was a 15% decrease in the odds of achieving remission (OR=0.85; CI 0.75–0.95). A high proportion of first-episode patients remitted within 1 year. We detected no difference in the proportion of first-episode patients receiving CLZ or CPZ that achieved remission. However, first-episode patients receiving CLZ remitted significantly faster and remained in remission longer than subjects receiving CPZ. While the CLZ group showed significantly less symptomatology on some measures and fewer side effects at 12 weeks, the two treatment groups seemed to converge by 1 year. Longer duration of untreated psychosis was associated with lower odds of achieving remission.
A comparison of alcohol-induced and independent depression in alcoholics with histories of suicide attempts.
OBJECTIVE Alcohol-dependent men and women are at high risk for two types of major depressive episodes and for suicide attempts. The aim of this study is to compare the characteristics of two groups: (1) alcohol-dependent subjects with histories of suicide attempts and independent mood disorders and (2) a similar population of alcoholics with histories of self-harm but who have only experienced alcohol-induced depressions. METHOD As part of the Collaborative Study on the Genetics of Alcoholism (COGA), semistructured detailed interviews were administered to 371 alcohol-dependent individuals (62% women) with histories of suicide attempts and major mood disorders. Of the total, 145 (39.1%) had ever had an independent depressive episode and 226 (60.9%) had experienced only alcohol-induced depressions. Information was obtained about socioeconomic characteristics, suicidal behavior, independent and induced psychiatric conditions, and aspects of alcohol dependence. RESULTS Univariate and multivariate comparisons revealed that alcohol-dependent individuals with a history of suicide attempts and independent depression had a higher number of suicide attempts, were less likely to have been drinking during their most severe attempt, and were more likely to have an independent panic disorder. Univariate analyses indicated that these subjects reported a less severe history of alcohol dependence. CONCLUSIONS The results indicate that a distinction between independent and alcohol-induced mood disorders in alcoholics with a history of suicide attempts may be useful.
Prevalence of erosive tooth wear and associated risk factors in 2-7-year-old German kindergarten children.
OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on the O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%), followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%), involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n=93) or first molars (n=139) showed signs of erosion. Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear was frequently seen in the primary dentition. As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended.
Beyond g : Putting multiple intelligences theory to the test
We investigated Gardner's “Theory of Multiple Intelligences” in a sample of 200 adults. For each of the hypothesized eight “intelligence” domains–Linguistic, Logical/Mathematical, Spatial, Interpersonal, Intrapersonal, Musical, Bodily-Kinesthetic, Naturalistic–we selected two tests based on Gardner's description of its content. Factor analysis revealed a large g factor having substantial loadings for tests assessing purely cognitive abilities–Linguistic, Logical/Mathematical, Spatial, Naturalistic, Interpersonal–but lower loadings for tests of other abilities, especially Bodily-Kinesthetic. Within most domains, the two tests showed some (weak) non-g associations, thus providing modest support for the coherence of those domains, which resemble the group factors of hierarchical models of intelligence. Results support previous findings that highly diverse tests of purely cognitive abilities share strong loadings on a factor of general intelligence, and that abilities involving sensory, motor, or personality influences are less strongly g-loaded.
Arty Portfolios: Manifesting Artistic Work in Interaction Design Research
As artist researchers in HCI, we experience a lack of appropriate ways to describe and communicate artistic work and its value for, and influence on, interaction design research practice. We introduce the arty portfolio approach to support artistic research within HCI. Arty portfolios value reflection, articulation and communication of artistic work in HCI. Presenting our own work and its relation to our research, we demonstrate and reflect on the mutual influences between our artistic inquiries and interaction design practice. We explore the potential of arty portfolios to strengthen artistic inquiry in the context of HCI.
Modeling the Clickstream : Implications for Web-Based Advertising Efforts
Advertising revenues have become a critical element in the business plans of most commercial Web sites. Despite extensive research on advertising in traditional media, managers and researchers face considerable uncertainty about its role in the online environment. The benefits offered by the medium notwithstanding, the lack of models to measure and predict advertising performance is a major deterrent to acceptance of the Web by mainstream advertisers. Presently, existing media models based on aggregate vehicle and advertising exposure are being adapted, but these underutilize the unique characteristics of the medium. What is required are methods that measure how consumers interact with advertising stimuli in ad-supported Web sites, going beyond mere counts of individual "eyeballs" attending to media. Given the enormous potential of this dynamic medium, academic investigation is sorely needed before cost implications and skepticism endanger the ability of the medium to generate and maintain advertiser support. This paper addresses the need of advertisers and publishers to understand and predict how consumers interact with advertising stimuli placed at Web sites. We do so by developing a framework to formally model the commercial “clickstream” at an advertiser-supported Web site with mandatory visitor registration. Consumers visiting such Web sites are conceptualized as deriving utility from navigating through editorial and advertising content subject to time constraints. The clickstream represents a new source of consumer response data detailing the content and banner ads that consumers click on during the online navigation process. To our knowledge, this paper is the first to model the clickstream from an actual commercial Web site. Clickstream data allow us to investigate how consumers respond to advertising over time at an individual level. Such modeling is not possible in broadcast media because the data do not exist. Our results contrast dramatically with those typically found in traditional broadcast media. First, the effect of repeated exposures to banner ads is U-shaped. This is in contrast with the inverted U-shaped response found in broadcast media. Second, the differential effect of each successive ad exposure is initially negative but non-linear, and becomes positive at higher levels of passive ad exposure. Third, the negative response to repeated banner ad exposures increases for consumers who visit the site more frequently. Fourth, in contrast to findings in traditional media, the effect of exposure to competing ads is either insignificant or positive. However, carryover effects of past advertising exposures are similar to those proposed in broadcast media. Finally, heterogeneity across consumers in the cumulative effects of advertising exposure and involvement, captured by cumulative click behavior across visits and click behavior during the visit, was found to contribute significantly to differences in click response. This has implications for dynamic ad placement based on the past history of consumer exposure and interaction with advertising at the Web site. Response parameters of consumer visitor segments can be used by media buyers and advertisers to understand the benefits consumers seek from the advertising vehicle, and thus guide advertising media placement decisions. In sum, our modeling effort offers an important first look into how advertisers can use the Web medium to maximize their desired advertising outcomes.
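The U-shaped exposure response described above can be illustrated with a simple logit click model containing linear and quadratic exposure terms; the coefficients below are invented for illustration and are not estimates from the paper.

```python
import numpy as np

def click_probability(exposures, b0=-3.0, b1=-0.15, b2=0.01):
    """Logit click-through model with a quadratic exposure term.

    With b1 < 0 and b2 > 0 the response first declines and then rises with
    repeated passive exposures, i.e. a U shape (coefficients are illustrative)."""
    logit = b0 + b1 * exposures + b2 * exposures ** 2
    return 1.0 / (1.0 + np.exp(-logit))

exposures = np.arange(0, 31)
p = click_probability(exposures)
# Probability dips around the vertex of the quadratic (7-8 exposures) and then rises.
print(np.round(p[[0, 5, 10, 15, 20, 25, 30]], 4))
```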
Expression and purification of E. coli BirA biotin ligase for in vitro biotinylation.
The extremely tight binding between biotin and avidin or streptavidin makes labeling proteins with biotin a useful tool for many applications. BirA is the Escherichia coli biotin ligase that site-specifically biotinylates a lysine side chain within a 15-amino acid acceptor peptide (also known as Avi-tag). As a complementary approach to in vivo biotinylation of Avi-tag-bearing proteins, we developed a protocol for producing recombinant BirA ligase for in vitro biotinylation. The target protein was expressed as both thioredoxin and MBP fusions, and was released from the corresponding fusion by TEV protease. The liberated ligase was separated from its carrier using a HisTrap HP column. We obtained 24.7 and 27.6 mg of BirA ligase per liter of culture from the thioredoxin and MBP fusion constructs, respectively. The recombinant enzyme was shown to be highly active in catalyzing in vitro biotinylation. The described protocol provides an effective means for making BirA ligase that can be used for biotinylation of different Avi-tag-bearing substrates.
Behavioural interventions to prevent HIV infection: rapid evolution, increasing rigour, moderate success.
Behavioural interventions aim to alter behaviours that make individuals more vulnerable to becoming infected or infecting others with HIV. Research in this field has developed rapidly in recent years. Increased rigour in the design and conduct of evaluations and moderate successes in bringing about behaviour change in target populations are the key achievements so far. This paper reflects on these developments, addresses recent innovations and highlights likely areas for future work. Discussion focuses on maximising the potential effectiveness of new interventions, on methodological issues relating to evaluation, and on the implementation of interventions into practice. The paper concludes that there is evidence that interventions deemed effective under evaluation conditions can be implemented in HIV prevention services, and that this is the next major challenge. The immediate goal should be consolidation of the learning that has occurred, particularly efforts to maintain theoretical and evaluative rigour whilst encouraging increased collaborative partnerships between researchers, service providers and affected communities.