Application of Causal Inference to Genomic Analysis: Advances in Methodology
The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the genetic variants identified by GWAS can explain only a small proportion of the heritability of complex diseases, and a large fraction of genetic variants remains hidden. Association analysis has limited power to unravel the mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component of the discovery of disease mechanisms. This paper reviews the major platforms of genomic analysis used in the past and discusses the prospects of causal inference as a general framework for genomic analysis. In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with a binary trait (disease status), and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with a binary trait (disease status). In this paper, we review algorithmic information theory as a general framework for causal discovery and recent developments in statistical methods for causal inference on discrete data, and we discuss the possibility of extending the association analysis of discrete variables with disease to causal analysis of discrete variables and disease.
Structural Neighborhood Based Classification of Nodes in a Network
Classification of entities based on the underlying network structure is an important problem. Networks encountered in practice are sparse and have many missing and noisy links. Statistical learning techniques have been used for intra-network classification; however, they typically exploit only the local neighborhood and so may not perform well. In this paper, we propose a novel structural neighborhood-based classifier that learns using random walks. To classify a node, we take a random walk from the node and make a decision based on how the nodes in the respective k-th level neighborhoods are labeled. We observe that random walks of short length are helpful for classification, whereas emphasizing longer random walks may cause the underlying Markov chain to converge to a stationary distribution. Considering this, we take a lazy random walk based approach with a variable termination probability for each node, based on the node's structural properties, including its degree. Our experimental study on real-world datasets demonstrates the superiority of the proposed approach over existing state-of-the-art approaches.
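A minimal sketch of this lazy-walk voting idea, assuming an adjacency-list graph and a simple degree-based termination heuristic (the variable names and the exact stopping rule are illustrative, not the paper's formulation):

import random
from collections import Counter

def classify_node(adj, labels, start, walks=200, max_len=5, laziness=0.5):
    """Classify `start` by lazy random walks over adjacency dict `adj`.

    adj:    {node: [neighbors]}
    labels: {node: label} for labelled nodes only
    The walk stops early with a probability that grows as node degree shrinks,
    a simple stand-in for a structure-dependent termination probability.
    """
    votes = Counter()
    max_deg = max(len(v) for v in adj.values())
    for _ in range(walks):
        node = start
        for step in range(1, max_len + 1):
            # lazy step: stay put with probability `laziness`
            if random.random() >= laziness:
                node = random.choice(adj[node])
            if node in labels and node != start:
                # shorter walks get larger weight
                votes[labels[node]] += 1.0 / step
            # degree-based termination: low-degree nodes stop the walk sooner
            stop_p = 1.0 - len(adj[node]) / max_deg
            if random.random() < stop_p:
                break
    return votes.most_common(1)[0][0] if votes else None

Shorter walks dominate the vote through the 1/step weighting, echoing the observation that short walks carry most of the classification signal.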
Landing dynamic analysis for landing leg of lunar lander using Abaqus/Explicit
One of the major tasks in the design and optimization of a new landing gear system for a lunar lander is to accurately determine the loads and energy absorption capability during the landing event. This paper describes a new approach to landing impact dynamic analysis using the nonlinear finite element method. Abaqus/Explicit, part of the Abaqus suite of finite element analysis software, is selected to simulate the landing event for its excellent nonlinear, transient dynamics capabilities. The aluminum honeycomb shock absorber is modeled with a crushable foam plasticity material model, while the lunar soil is modeled with a Drucker-Prager/Cap material model. Simulation results, including the load at the connector between the structure and the landing gear, the acceleration response of the structure, and the energy dissipated by the shock absorber and the lunar soil, are given and discussed. The results show that the performance of the landing gear meets the design requirements.
On Learning Vector-Valued Functions
In this letter, we provide a study of learning in a Hilbert space of vector-valued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type.
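For orientation, results of this kind take the following representer-style form (generic notation, assumed here rather than quoted from the letter): given data $(x_i, y_i)_{i=1}^{m}$ with $y_i \in \mathcal{Y}$ and an operator-valued reproducing kernel $K$, the minimal norm interpolant is
$$ f^{*}(x) \;=\; \sum_{j=1}^{m} K(x, x_j)\, c_j, \qquad c_j \in \mathcal{Y}, $$
with coefficients determined by the linear system $\sum_{j=1}^{m} K(x_i, x_j)\, c_j = y_i$ for $i = 1, \dots, m$; regularized variants such as multiple-output regularization networks replace exact interpolation by a penalized least-squares problem over the same span.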
Adherence and well-being in overweight and obese patients referred to an exercise on prescription scheme: a self-determination theory perspective.
Objectives: Based on Self-Determination Theory [SDT; Deci & Ryan, 1985. Intrinsic motivation and self-determination in human behavior. New York: Plenum Press], this study examined differences in perceived autonomy support, psychological need satisfaction, self-determined motivation, exercise behaviour, exercise-related cognitions and general well-being, between overweight/obese individuals who demonstrated greater adherence to an exercise on prescription programme and those who adhered less. In addition, this study explored the motivational sequence embedded in SDT by testing autonomy support as a predictor of psychological need satisfaction, autonomy support and psychological need satisfaction as predictors of the motivational regulations, and autonomy support, psychological need satisfaction and the motivational regulations as predictors of behavioural, cognitive and well-being outcomes. Method: Before commencing, at 1 month, and upon terminating a 3-month exercise on prescription programme, overweight/obese individuals (N = 49; M Body Mass Index = 38.75) completed a multisection questionnaire tapping all aforementioned variables. Participants' adherence to the scheme was assessed using attendance records. Results: Multilevel regression analyses revealed that, at the end of the exercise prescription, those individuals who adhered more reported more self-efficacy to overcome barriers to exercise than those who adhered less. In addition, those individuals who showed greater adherence demonstrated an increase in relatedness need satisfaction over time. For the whole sample, need satisfaction predicted self-determined motivation.
Real-time dynamic wrinkling of coarse animated cloth
Dynamic folds and wrinkles are an important visual cue for creating believably dressed characters in virtual environments. Adding these fine details to real-time cloth visualization is challenging, as the low-quality cloth used for real-time applications often has no reference shape, an extremely low triangle count, and poor temporal and spatial coherence. We introduce a novel real-time method for adding dynamic, believable wrinkles to such coarse cloth animation. We trace spatially and temporally coherent wrinkle paths, overcoming the inaccuracies and noise in low-end cloth animation, by employing a two-stage stretch tensor estimation process. We first employ a graph-cut segmentation technique to extract spatially and temporally reliable surface motion patterns, detecting consistent compressing, stable, and stretching patches. We then use the detected motion patterns to compute a per-triangle temporally adaptive reference shape and a stretch tensor based on it. We use this tensor to dynamically generate new wrinkle geometry on the coarse cloth mesh by taking advantage of the GPU tessellation unit. Our algorithm produces plausible fine wrinkles on real-world data sets at real-time frame rates, and is suitable for the current generation of consoles and PC graphics cards.
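A minimal NumPy sketch of the per-triangle stretch (Green strain) computation that such a pipeline builds on, given a reference triangle and a deformed triangle as 3D vertex positions; the function names are illustrative, and the temporally adaptive reference shape described above is not reproduced here:

import numpy as np

def triangle_stretch_tensor(ref_verts, cur_verts):
    """2x2 Green strain tensor of one triangle.

    ref_verts, cur_verts: (3, 3) arrays of vertex positions (reference / deformed).
    """
    # 2D edge matrix expressed in the reference triangle's own plane
    e1, e2 = ref_verts[1] - ref_verts[0], ref_verts[2] - ref_verts[0]
    u = e1 / np.linalg.norm(e1)
    n = np.cross(e1, e2)
    v = np.cross(n / np.linalg.norm(n), u)
    Dm = np.column_stack(([e1 @ u, e1 @ v], [e2 @ u, e2 @ v]))

    # 3x2 edge matrix of the deformed triangle
    Ds = np.column_stack((cur_verts[1] - cur_verts[0], cur_verts[2] - cur_verts[0]))

    F = Ds @ np.linalg.inv(Dm)          # deformation gradient (3x2)
    return 0.5 * (F.T @ F - np.eye(2))  # Green strain / stretch tensor (2x2)

Eigenvalues of the returned tensor above zero indicate local stretching directions and below zero compression, which is the kind of signal the motion-pattern segmentation works from.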
A control-oriented model for piston trajectory-based HCCI combustion
Previously, the authors have proposed the concept of piston trajectory-based homogeneous charge compression ignition (HCCI) combustion control enabled by a free piston engine and shown the effects of variable piston trajectories on the start of combustion timing, heat loss amount and indicated output work. In order to realize this new control in practical applications, a control-oriented model with reduced chemical kinetics has to be developed. In this paper, such a model is presented and is compared to two existing models: a simplified model using a global reaction and a complex model including detailed chemical reaction mechanisms. A cycle separation method is employed in the proposed model to significantly reduce the computational time and guarantee the prediction accuracy simultaneously. A feedback controller is also implemented on the control-oriented model to control the HCCI combustion phasing by varying the trajectories. The simulation results show that the combustion phasing can be adjusted as desired, which demonstrates the effectiveness of the piston trajectory-based combustion control.
Default Recovery Rates and LGD in Credit Risk Modeling and Practice
Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.
Deep Reinforcement Learning for Soft Robotic Applications: Brief Overview with Impending Challenges
The increasing trend of studying the innate softness of robotic structures and combining it with the benefits of extensive developments in the field of embodied intelligence has led to the emergence of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement learning algorithms with the physical advantages of a soft bio-inspired structure points to the promising prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve an assigned task. For soft robotic structures possessing countless degrees of freedom, it is often not easy (sometimes not even possible) to formulate the mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task at hand; hence, we resort to imitation learning techniques, owing to the ease of manually performing tasks such as manipulation that the agent can then mimic. Deploying current imitation learning algorithms on soft robotic systems has been observed to provide satisfactory results, but challenges remain. This review article therefore presents an overview of such algorithms, along with instances of their application to real-world scenarios yielding state-of-the-art results, followed by brief descriptions of various nascent branches of DRL research that may become centers of future research in this field of interest.
KALMAN FILTERING IN SEMI-ACTIVE SUSPENSION CONTROL
This paper focuses on estimation of the vertical velocity of the vehicle chassis and the relative velocity between the chassis and the wheel. These velocities are important variables in semi-active suspension control. A model-based estimator is proposed, including a Kalman filter and a non-linear model of the damper. Inputs to the estimator are signals from wheel displacement sensors and from accelerometers placed on the chassis. In addition, the control signal is used as an input to the estimator. The Kalman filter is analyzed in the frequency domain and compared with a conventional filter solution based on differentiation of the displacement signal and integration of the acceleration signal.
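A minimal generic discrete-time Kalman filter sketch of the kind such an estimator is built around, here fusing a chassis accelerometer (treated as a known input) with a displacement measurement to estimate velocity; the state-space model below is a simple double integrator with illustrative noise covariances, not the paper's full suspension and damper model:

import numpy as np

dt = 0.001                      # sample time [s]
F = np.array([[1.0, dt],        # state: [displacement, velocity]
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],    # measured acceleration enters as known input
              [dt]])
H = np.array([[1.0, 0.0]])      # we measure displacement only
Q = np.diag([1e-8, 1e-4])       # process noise covariance (tuning parameters)
R = np.array([[1e-6]])          # displacement measurement noise covariance

def kalman_step(x, P, accel, z):
    # predict with the accelerometer signal as known input
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # update with the displacement measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = kalman_step(x, P, accel=0.2, z=0.001)   # x[1, 0] is the estimated velocity

In the paper's setting the state-space model would instead come from the suspension and damper dynamics, but the predict/update structure is the same.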
Literary Objects: Flaubert
Literary Objects: Flaubert explores the concept of the commodity as seen through the writings of Gustave Flaubert. A selection of furniture, paintings, prints, sculpture, and other objets d'art, together with selected passages from Flaubert's writings, refers to the overlapping worlds of the French middle class: the domestic interior, the political arena, the imagined Orient, and the historical past. The book seeks both to evoke and to illuminate Flaubert's literary concerns in new and powerful visual terms, while providing a new context for 19th-century French artworks.
ReBudget: Trading Off Efficiency vs. Fairness in Market-Based Multicore Resource Allocation via Runtime Budget Reassignment
Efficiently allocating shared resources in computer systems is critical to optimizing execution. Recently, a number of market-based solutions have been proposed to attack this problem. Some of them provide provable theoretical bounds to efficiency and/or fairness losses under market equilibrium. However, they are limited to markets with potentially important constraints, such as enforcing equal budget for all players, or curve-fitting players' utility into a specific function type. Moreover, they do not generally provide an intuitive "knob" to control efficiency vs. fairness. In this paper, we introduce two new metrics, Market Utility Range (MUR) and Market Budget Range (MBR), through which we provide for the first time theoretical bounds on efficiency and fairness of market equilibria under arbitrary budget assignments. We leverage this result and propose ReBudget, an iterative budget re-assignment algorithm that can be used to control efficiency vs. fairness at run-time. We apply our algorithm to a multi-resource allocation problem in multicore chips. Our evaluation using detailed execution-driven simulations shows that our budget re-assignment technique is intuitive, effective, and efficient.
Optimizing the Hadoop MapReduce Framework with high-performance storage devices
Solid-state drives (SSDs) are an attractive alternative to hard disk drives (HDDs) to accelerate the Hadoop MapReduce Framework. However, the SSD characteristics and today’s Hadoop framework exhibit mismatches that impede indiscriminate SSD integration. This paper explores how to optimize a Hadoop MapReduce Framework with SSDs in terms of performance, cost, and energy consumption. It identifies extensible best practices that can exploit SSD benefits within Hadoop when combined with high network bandwidth and increased parallel storage access. Our Terasort benchmark results demonstrate that Hadoop currently does not sufficiently exploit SSD throughput. Hence, using faster SSDs in Hadoop does not enhance its performance. We show that SSDs presently deliver significant efficiency when storing intermediate Hadoop data, leaving HDDs for Hadoop Distributed File System (HDFS). The proposed configuration is optimized with the JVM reuse option and frequent heartbeat interval option. Moreover, we examined the performance of a state-of-the-art non-volatile memory express interface SSD within the Hadoop MapReduce Framework. While HDFS read and write throughput increases with high-performance SSDs, achieving complete system performance improvement requires carefully balancing CPU, network, and storage resource capabilities at a system level.
A Framework for Clustering Massive Text and Categorical Data Streams
Many applications such as news group filtering, text crawling, and document organization require real time clustering and segmentation of text data records. The categorical data stream clustering problem also has a number of applications to the problems of customer segmentation and real time trend analysis. We will present an online approach for clustering massive text and categorical data streams with the use of a statistical summarization methodology. We present results illustrating the effectiveness of the technique.
Validity of the school setting interview for students with special educational needs in regular high school – a Rasch analysis
BACKGROUND Participation in education is a vital component of adolescents' everyday life and a determinant of health and future opportunities in adult life. The School Setting Interview (SSI) is an instrument that assesses student-environment fit and reflects potential needs for adjustments to enhance students' participation in school activities. The aim of the study was to investigate the psychometric properties of the SSI for students with special educational needs in regular high school. METHODS A sample of 509 students with special educational needs was assessed with the SSI. The polytomous unrestricted Rasch model was used to analyze the psychometric properties of the SSI regarding targeting, model fit, differential item functioning (DIF), response category functioning and unidimensionality. RESULTS The SSI generally confirmed fit to the assumptions of the Rasch model. Reliability was acceptable (0.73) and the SSI scale was able to separate students into three different levels of student-environment fit. DIF by gender was detected in the item "Remember things", and DIF by diagnosis (students with or without a diagnosis) was detected in the item "Homework". All items had disordered thresholds. The SSI demonstrated unidimensionality and no response dependence was present among items. CONCLUSION The results suggest that the SSI is valid for use among students with special educational needs in order to provide and evaluate environmental adjustments. However, the items with the detected DIF and the SSI rating scale with its disordered thresholds need to be further scrutinized.
Adherence to protein restriction in patients with type 2 diabetes mellitus: a randomized trial
Objective: To describe the extent to which diet counselling can decrease protein intake, and to identify predictors of adherence. Design: (1) Randomized trial; (2) observational longitudinal study. Subjects: (1) 125 type 2 diabetic patients in primary care; (2) 59 patients in the experimental group. Intervention: For a period of 12 months, dieticians provided guidance on protein restriction (experimental group, n=59) or the usual dietary advice (control group, n=66). Outcome measures: Adherence was estimated primarily from urinary urea excretion (UUE), but also from food-frequency questionnaires (FFQ). Results: After 6 months protein intake was, according to the UUE and the FFQ, respectively, 8 g/day (95% CI −2, 13) (8%) and 15 g/day (95% CI 9, 22) (16%) lower in the experimental than in the control group. After 12 months these differences were smaller. Linear regression analysis indicated that protein restriction was greater in patients who were well satisfied with their pre-existing diet (r=0.32, b per 1/10 = 3.6 (1, 6) g), in patients who were less overweight (r=0.32, b per kg·m−2 = 1.1 (0.2, 2.0) g), and in patients living alone (r=0.22, b=7.7 (−2, 17) g). These combined factors explained only 11% of the variation in adherence. Adherence was not predicted by the number of barriers reported by the patients or by coinciding changes in diet satisfaction. Conclusions: The diet counselling resulted in only a very moderate degree of protein restriction. Predictors of adherence could be identified, but only a few, and their predictive power was limited.
Autonomous stair climbing for mobile tracked robot
In this paper, we consider the problem of overcoming a flight of stairs in an uncertain environment with an autonomous tracked robot. A complete autonomous stair climbing module is developed to handle the tasks involved in stair climbing. A divide-and-conquer approach is adopted in which the stair climbing challenge is divided into several individual tasks: detecting the stairs in an uncertain environment, intelligent climbing control to overcome the stairs, detecting the end of the stairs, and the subsequent landing procedure to prevent damage to the onboard devices. A fuzzy controller is developed, and the experimental results obtained clearly illustrate the effectiveness of the proposed approach.
Hierarchical Structure of Microgrids Control System
Advanced control strategies are vital components for the realization of microgrids. This paper reviews the status of hierarchical control strategies applied to microgrids and discusses future trends. The hierarchical control structure consists of primary, secondary, and tertiary levels, and is a versatile tool for managing the steady-state and dynamic performance of microgrids while incorporating economic aspects. Various control approaches are compared and their respective advantages are highlighted. In addition, the coordination among the different control hierarchies is discussed.
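For orientation, the primary level in such hierarchies is commonly implemented with the textbook droop characteristics below (shown here as a generic illustration rather than a formulation taken from the paper), which let parallel inverters share active and reactive power without communication:
$$ f = f^{*} - k_{P}\,(P - P^{*}), \qquad V = V^{*} - k_{Q}\,(Q - Q^{*}), $$
where $f^{*}$ and $V^{*}$ are the nominal frequency and voltage, $P^{*}$ and $Q^{*}$ the power set points, and $k_{P}$, $k_{Q}$ the droop gains; the secondary level then removes the resulting frequency and voltage deviations, and the tertiary level sets the power references from economic considerations.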
Deferred voxel shading for real-time global illumination
Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. The approach captures two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping; the structure is updated in real time on the GPU, which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates the voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that the developed approach is capable of computing the global illumination of complex scenes at interactive rates.
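A minimal sketch of the cone-tracing step such methods rely on, marching a cone through a prefiltered (mipmapped) voxel grid and accumulating radiance and opacity front to back; the sampling function, data layout, and toy scene here are placeholders, not the paper's GPU implementation:

import numpy as np

def trace_cone(sample_voxel, origin, direction, aperture,
               max_dist=10.0, start_dist=0.1):
    """Accumulate (radiance, occlusion) along one cone.

    sample_voxel(pos, lod) -> (rgb, alpha): lookup in the voxel mipmap at
    level `lod` (placeholder for the real data structure).
    `aperture` is tan(half-angle) of the cone.
    """
    radiance = np.zeros(3)
    occlusion = 0.0
    dist = start_dist
    while dist < max_dist and occlusion < 0.99:
        diameter = max(2.0 * aperture * dist, 1e-3)
        lod = np.log2(diameter)            # wider cone -> coarser mip level
        rgb, alpha = sample_voxel(origin + dist * direction, lod)
        # front-to-back compositing
        radiance += (1.0 - occlusion) * alpha * rgb
        occlusion += (1.0 - occlusion) * alpha
        dist += 0.5 * diameter             # step grows with the cone footprint
    return radiance, occlusion

# toy sampler: a single semi-transparent reddish blob around the origin
def toy_sampler(pos, lod):
    alpha = float(np.exp(-np.dot(pos, pos)))
    return np.array([1.0, 0.2, 0.2]) * alpha, 0.3 * alpha

print(trace_cone(toy_sampler, np.array([0.0, 0.0, -3.0]),
                 np.array([0.0, 0.0, 1.0]), aperture=0.2))

On the GPU the same loop runs per fragment or per tessellated vertex, with the mip level selecting progressively coarser prefiltered voxels as the cone widens.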
TiNb2O7 nanoparticles assembled into hierarchical microspheres as high-rate capability and long-cycle-life anode materials for lithium ion batteries.
As a competitor for Li4Ti5O12 with a higher capacity and extreme safety, monoclinic TiNb2O7 has been considered as a promising anode material for next-generation high power lithium ion batteries. However, TiNb2O7 suffers from low electronic conductivity and ionic conductivity, which restricts the electrochemical kinetics. Herein, a facile and advanced architecture design of hierarchical TiNb2O7 microspheres is successfully developed for large-scale preparation without any surfactant assistance. To the best of our knowledge, this is the first report on the one step solvothermal synthesis of TiNb2O7 microspheres with micro- and nano-scale composite structures. When evaluated as an anode material for lithium ion batteries, the electrode exhibits excellent high rate capacities and ultra-long cyclability, such as 258 mA h g−1 at 1 C, 175 mA h g−1 at 5 C, and 138 mA h g−1 at 10 C, extending to more than 500 cycles.
Quintessence Cosmology and Varying α
If the reported measurements of the time variation of the fine structure constant from observations of distant QSOs are correct, combined with the Oklo limit, they would strongly constrain the class of the quintessence potential. If these results prove valid, future satellite experiments (STEP) should measure the induced violation of the weak equivalence principle. Future cosmological observations of nearby (z < 0.5) absorption systems would make it clear whether the variation is significant or not.
Additional effects of aPDT on nonsurgical periodontal treatment with doxycycline in type II diabetes: a randomized, controlled clinical trial
The association of doxycycline and periodontal treatment in non-controlled diabetes mellitus (DM) has shown positive results on clinical and metabolic parameters. Antimicrobial photodynamic therapy (aPDT) is a local and painless antimicrobial treatment that can be applied in periodontal treatment without systemic risks. The aim of this study was to evaluate the potential improvement offered by aPDT in clinical and metabolic outcomes in patients with type 2 diabetes mellitus in conjunction with nonsurgical periodontal treatment plus doxycycline. Thirty patients with type 2 diabetes and a diagnosis of chronic periodontitis were treated with scaling and root planing (SRP; N = 15) or SRP plus phenothiazine chloride photosensitizer-induced aPDT (SRP + aPDT, N = 15). Patients of both groups took doxycycline (100 mg/day) for 2 weeks, and plaque index, bleeding on probing (BOP), probing pocket depth (PPD), suppuration (S), clinical attachment level (CAL), and glycated hemoglobin levels (HbA1c) were measured at baseline and 3 months after therapy. An improvement in clinical parameters such as PPD, CAL, S, and BOP between groups was observed but without statistical significance (p > 0.05). Intragroup analysis showed a significant reduction of HbA1c (8.5 ± 0.9 to 7.5 ± 0.1, p < 0.01) in the SRP + aPDT group. The differences in HbA1c between baseline and 3 months were greater for SRP + aPDT (11.4 %) than for SRP (10 %) (0.87 ± 0.9 and 0.4 ± 0.84, respectively; p < 0.05). A single application of aPDT as an adjunct to periodontal treatment did not show additional benefits in the clinical parameters but resulted in a slightly greater decrease in HbA1c.
Toward Practical Privacy-Preserving Analytics for IoT and Cloud-Based Healthcare Systems
Modern healthcare systems now rely on advanced computing methods and technologies, such as Internet of Things (IoT) devices and clouds, to collect and analyze personal health data at an unprecedented scale and depth. Patients, doctors, healthcare providers, and researchers depend on analytical models derived from such data sources to remotely monitor patients, diagnose diseases early, and find personalized treatments and medications. However, without appropriate privacy protection, conducting such data analytics can turn into a privacy nightmare. In this article, we present the research challenges in developing practical privacy-preserving analytics for healthcare information systems. The study is based on kHealth, a personalized digital healthcare information system that is being developed and tested for disease monitoring. We analyze the data and analytics requirements of the involved parties, identify the privacy assets, analyze existing privacy substrates, and discuss the potential tradeoffs among privacy, efficiency, and model quality.
Generalised spatial modulation with multiple active transmit antennas
We propose a new generalised spatial modulation (GSM) technique, which can be considered a generalisation of the recently proposed spatial modulation (SM) technique; SM can be seen as a special case of GSM with only one active transmit antenna. In contrast to SM, GSM uses the indices of multiple transmit antennas to map information bits, and is thus able to achieve substantially increased spectral efficiency. Furthermore, because all the active antennas transmit the same information, selecting multiple active transmit antennas enables GSM to harvest significant transmit diversity gains in comparison to SM, while inter-channel interference (ICI) is completely avoided. We present a theoretical analysis, based on order statistics, of the symbol error rate (SER) performance of GSM. The analytical results are in close agreement with our simulation results. The bit error rate (BER) performance of GSM and SM is simulated and compared, which demonstrates the superiority of GSM. Moreover, GSM systems with different transmit and receive antenna configurations are studied. Our results suggest that using fewer transmit antennas with a higher modulation order leads to better BER performance.
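A minimal sketch of the GSM mapping idea with illustrative parameters (4 transmit antennas, 2 active, QPSK): some information bits select which antenna combination is active, and the remaining bits select the symbol transmitted identically from the active antennas. This is a generic illustration, not the paper's exact system model:

from itertools import combinations
import numpy as np

NT, NA = 4, 2                                   # total / active transmit antennas
combos = list(combinations(range(NT), NA))      # 6 combos -> floor(log2(6)) = 2 index bits
n_combo_bits = int(np.floor(np.log2(len(combos))))
qpsk = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def gsm_map(bits):
    """Map n_combo_bits + 2 bits to a length-NT complex transmit vector."""
    idx = int("".join(map(str, bits[:n_combo_bits])), 2)
    symbol = qpsk[tuple(bits[n_combo_bits:n_combo_bits + 2])] / np.sqrt(2)
    x = np.zeros(NT, dtype=complex)
    x[list(combos[idx])] = symbol               # same symbol on all active antennas
    return x

print(gsm_map([1, 0, 0, 1]))                    # antennas {0, 3} active, symbol (-1+1j)/sqrt(2)

With NT = 4 and NA = 2 there are six antenna combinations, of which four carry the two index bits; larger antenna counts raise the share of bits carried by the antenna indices.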
Recursion-Closed Algebraic Theories
A class of algebraic theories called "recursion-closed," which generalize the rational theories studied by J. A. Goguen, J. W. Thatcher, E. G. Wagner and J. B. Wright [in "Proceedings, 17th IEEE Symposium on Foundations of Computer Science, Houston, Texas, October 1976," pp. 147–158; in "Mathematical Foundations of Computer Science, 1978," Lecture Notes in Computer Science, Vol. 64, Springer-Verlag, New York/Berlin, 1978; "Free Continuous Theories," Technical Report RC 6906, IBM T. J. Watson Research Center, Yorktown Heights, N.Y., December 1977; "Notes on Algebraic Fundamentals for Theoretical Computer Science," IBM Technical Report, 1979], is investigated. This work is motivated by the problem of providing the semantics of arbitrary polyadic recursion schemes in the framework of algebraic theories. It is suggested by Goguen et al. ("Proceedings, 17th IEEE Symposium") that the semantics of arbitrary polyadic recursion schemes can be handled using algebraic theories. The results show that this is indeed the case, but that "rational theories" are insufficient and that it is necessary to introduce a new class of "recursion-closed" algebraic theories. This new class of algebraic theories is defined and studied, and "free recursion-closed algebraic theories" are proved to exist.
A Sense of Shock: The Impact of Impressionism on Modern British and Irish Writing
A Sense of Shock: The Impact of Impressionism on Modern British and Irish Writing. Adam Parkes (Oxford: Oxford UP, 2011) xviii + 284 pp. At the Violet Hour: Modernism and Violence in England and Ireland. Sarah Cole (Oxford: Oxford UP, 2012) xiv + 377pp. Violence pervades late-nineteenth and early-twentieth century Europe, so much so that we would be hard-pressed to find an arena of political or aesthetic life unmarked by its presence. In its horrifying material forms, anarchist bombings stunned people in metropolitan capitals, imperial violence scarred colonial subjects, and mass warfare took lives at the front and in civilian bombings during the Spanish Civil War and two World Wars. Critics have long seen the connection between these historical events and writing explicitly about war, or in selected avant-garde literary movements--like Vorticism and Futurism--that placed rhetorical violence at the center of their calls to BLAST cultural enemies or to find in art beautiful ideas which kill. But a far broader range of literature in the period attempted to come to terms with a culture defined through shock, trauma, and violence; this literature imagined the ways private life and aesthetic form engage with violence. Sarah Cole's At the Violet Hour: Modernism and Violence in England and Ireland and Adam Parkes's A Sense of Shock: The Impact of Impressionism on Modern British and Irish Writing--both virtuosic in their scope and in their close readings--trace the pervasiveness of violence and shock in the period. They frame their inquiries differently: Cole uses violence as a thematic frame for reading literary forms and Parkes takes a particular literary form, impressionism, as the context within which the shocks of perception register. But in doing so both explore the way form enacts, contains, and ultimately theorizes the relation between literature and violence. Cole's At the Violet Hour asks how modernism imagines the capacities of literary form in direct response to the violence of its historical moment. Shadowed in large part by the presence in our own time of what Cole evocatively terms disenchanted violence--those dead or injured bodies that remain flesh, refusing symbolic or redemptive meaning--At the Violet Hour investigates the ways modernist forms can render death and violence meaningful or refuse to do so. In contrast, Parkes seeks the figure of violence within a more consistently aesthetic history. He traces the way fictional and visual impressionism embedded the impact of the outer world and the traumas of the psyche in a model of subjectivity as ruptured and dissolving. Both books read across different disciplines of writing, from revolutionary tracts to art criticism to poetry, seeing violence as a problem of aesthetic form as well as bombs. Cole's magisterial study takes "violence in modernism [to be] so deeply embedded as to function almost as the literary itself' (26). She examines the work of a wide spectrum of writers and expands our sense of the links between modernism and violence well beyond those figures best known to have glamorized violence. Violence here is not only a historical force that modernist literature encounters; it also charges literary figuration so fully as to become entangled with symbolic potency and the possibility of meaning-making itself. 
At the Violet Hour uses a key scene from Joyce's Portrait of the Artist as a Young Man as a paradigm of the way modernism suggests violence allows literature to look inward at the private and bodily as well as expand outward toward the representative or allegorical. Cole analyzes the moment where the prefect of studies pandies young Stephen Dedalus, and shows how this scene not only offers an imaginary origin of language--Stephen's elemental cry--that moves outward into obtrusive literary patterning, but prefigures the way violence lets texts point simultaneously inward toward the body, "forc[ing] the imagination back to the moment of injury," and outward toward the symbolic and abstract, into an allegory of hierarchical power (10). …
Eigen-Distortions of Hierarchical Representations
We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with largest and smallest eigenvalues, corresponding to the model-predicted most- and least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity. We find that the early layers of VGG16, a deep neural network optimized for object recognition, provide a better match to human perception than later layers, and a better match than a 4-stage convolutional neural network (CNN) trained on a database of human ratings of distorted image quality. On the other hand, we find that simple models of early visual processing, incorporating one or more stages of local gain control, trained on the same database of distortion ratings, provide substantially better predictions of human sensitivity than either the CNN, or any combination of layers of VGG16. Human capabilities for recognizing complex visual patterns are believed to arise through a cascade of transformations, implemented by neurons in successive stages in the visual system. Several recent studies have suggested that representations of deep convolutional neural networks trained for object recognition can predict activity in areas of the primate ventral visual stream better than models constructed explicitly for that purpose (Yamins et al. [2014], Khaligh-Razavi and Kriegeskorte [2014]). These results have inspired exploration of deep networks trained on object recognition as models of human perception, explicitly employing their representations as perceptual distortion metrics or loss functions (Hénaff and Simoncelli [2016], Johnson et al. [2016], Dosovitskiy and Brox [2016]). On the other hand, several other studies have used synthesis techniques to generate images that indicate a profound mismatch between the sensitivity of these networks and that of human observers. Specifically, Szegedy et al. [2013] constructed image distortions, imperceptible to humans, that cause their networks to grossly misclassify objects. Similarly, Nguyen and Clune [2015] optimized randomly initialized images to achieve reliable recognition by a network, but found that the resulting 'fooling images' were uninterpretable by human viewers. Simpler networks, designed for texture classification and constrained to mimic the early visual system, do not exhibit such failures (Portilla and Simoncelli [2000]). These results have prompted efforts to understand why generalization failures of this type are so consistent across deep network architectures, and to develop more robust training methods to defend networks against attacks designed to exploit these weaknesses (Goodfellow et al. [2014]).
From the perspective of modeling human perception, these synthesis failures suggest that representational spaces within deep neural networks deviate significantly from those of humans, and that methods for comparing representational similarity, based on fixed object classes and discrete sampling of the representational space, are insufficient to expose these deviations. If we are going to use such networks as models for human perception, we need better methods of comparing model representations to human vision. Recent work has taken the first step in this direction, by analyzing deep networks' robustness to visual distortions on classification tasks, as well as the similarity of classification errors that humans and deep networks make in the presence of the same kind of distortion (Dodge and Karam [2017]). Here, we aim to accomplish something in the same spirit, but rather than testing on a set of hand-selected examples, we develop a model-constrained synthesis method for generating targeted test stimuli that can be used to compare the layer-wise representational sensitivity of a model to human perceptual sensitivity. Utilizing Fisher information, we isolate the model-predicted most and least noticeable changes to an image. We test these predictions by determining how well human observers can discriminate these same changes. We apply this method to six layers of VGG16 (Simonyan and Zisserman [2015]), a deep convolutional neural network (CNN) trained to classify objects. We also apply the method to several models explicitly trained to predict human sensitivity to image distortions, including a 4-stage generic CNN, an optimally-weighted version of VGG16, and a family of highly-structured models explicitly constructed to mimic the physiology of the early human visual system. Example images from the paper, as well as additional examples, are available at http://www.cns.nyu.edu/~lcv/eigendistortions/.
1 Predicting discrimination thresholds
Suppose we have a model for human visual representation, defined by conditional density $p(\vec{r}|\vec{x})$, where $\vec{x}$ is an $N$-dimensional vector containing the image pixels, and $\vec{r}$ is an $M$-dimensional random vector representing responses internal to the visual system (e.g., firing rates of a population of neurons). If the image is modified by the addition of a distortion vector, $\vec{x} + \alpha\hat{u}$, where $\hat{u}$ is a unit vector and scalar $\alpha$ controls the amplitude of distortion, the model can be used to predict the threshold at which the distorted image can be reliably distinguished from the original image. Specifically, one can express a lower bound on the discrimination threshold in direction $\hat{u}$ for any observer or model that bases its judgments on $\vec{r}$ (Seriès et al. [2009]):
$$T(\hat{u}; \vec{x}) \;\ge\; \beta \sqrt{\hat{u}^{T} J^{-1}[\vec{x}]\, \hat{u}} \qquad (1)$$
where $\beta$ is a scale factor that depends on the noise amplitude of the internal representation (as well as experimental conditions, when measuring discrimination thresholds of human observers), and $J[\vec{x}]$ is the Fisher information matrix (FIM; Fisher [1925]), a second-order expansion of the log likelihood:
$$J[\vec{x}] \;=\; \mathbb{E}_{\vec{r}|\vec{x}}\!\left[\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)^{T}\right] \qquad (2)$$
Here, we restrict ourselves to models that can be expressed as a deterministic (and differentiable) mapping from the input pixels to mean output response vector, $f(\vec{x})$, with additive white Gaussian noise in the response space. The log likelihood in this case reduces to a quadratic form:
$$\log p(\vec{r}|\vec{x}) \;=\; -\tfrac{1}{2}\,[\vec{r} - f(\vec{x})]^{T}[\vec{r} - f(\vec{x})] + \mathrm{const.}$$
Substituting this into Eq. (2) gives:
$$J[\vec{x}] \;=\; \frac{\partial f}{\partial \vec{x}}^{T} \frac{\partial f}{\partial \vec{x}}$$
Thus, for these models, the Fisher information matrix induces a locally adaptive Euclidean metric on the space of images, as specified by the Jacobian matrix, $\partial f / \partial \vec{x}$.
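A minimal sketch of how the extremal eigen-distortions could be computed for a small differentiable model under the Gaussian-noise assumption above; the toy model, input size, and finite-difference Jacobian are placeholders, since the paper works with much larger networks and does not form the Fisher matrix explicitly:

import numpy as np

def numerical_jacobian(f, x, eps=1e-4):
    """Finite-difference Jacobian of f: R^N -> R^M at x (feasible only for small N)."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f0) / eps
    return J

def eigen_distortions(f, x):
    """Most- and least-noticeable unit perturbations predicted by the model."""
    Jf = numerical_jacobian(f, x)
    fim = Jf.T @ Jf                      # Fisher information, J = (df/dx)^T (df/dx)
    w, V = np.linalg.eigh(fim)           # eigenvalues in ascending order
    return V[:, -1], V[:, 0]             # largest / smallest eigenvalue directions

# toy "model": fixed random linear stage followed by a pointwise nonlinearity
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 16))
model = lambda x: np.tanh(W @ x)

x = rng.standard_normal(16) * 0.1
most, least = eigen_distortions(model, x)
print(most.shape, least.shape)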
Reception and elaboration of the contemporary political rhetoric: the case of the last Raboni's poetry
In the last phase of his poetry, Giovanni Raboni was concerned with the issue of rhetoric and the question of the contemporary language of Italian politics. In Ultimi versi, published posthumously in 2006 by Garzanti, the author makes satirical use of the syntagms, phraseology and verbose expressions typical of Berlusconi's propaganda, introducing these elements into his verses with an intent of denunciation. This peculiar operation can be illustrated by comparing the poems of Ultimi versi with the political compositions of Cadenza d'inganno, another collection published in 1975, as the two present precise similarities. Contextualising the posthumous collection within Raboni's work makes it possible to analyse this peculiar phenomenon: the poet, by means of his poetry, re-uses the language created by Berlusconi's politics. This study is based on the close examination of the autograph manuscripts, the preparatory writings for Ultimi versi, and the drafts written in a notebook held by the private "Valduga" archive in Milan. These materials show how Raboni unveiled this specific political language with a mocking, denunciatory intent, and how the rhetoric of power became part of one of the most important poetical works of the 20th century, albeit overturned in a perspective of estrangement.
Patient-related quality assurance with different combinations of treatment planning systems, techniques, and machines
This project compares the different patient-related quality assurance systems for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) techniques currently used in the central Germany area with an independent measuring system. The participating institutions generated 21 treatment plans with different combinations of treatment planning systems (TPS) and linear accelerators (LINAC) for the QUASIMODO (Quality ASsurance of Intensity MODulated radiation Oncology) patient model. The plans were delivered to the ArcCHECK measuring system (Sun Nuclear Corporation, Melbourne, FL, USA). The dose distributions were analyzed using the corresponding software, and a point dose was measured at the isocenter with an ionization chamber. According to the generally used criteria of a 10 % threshold, 3 % difference, and 3 mm distance, the majority of the plans investigated showed a gamma index exceeding 95 %. Only one plan did not fulfill the criteria, and three of the plans did not comply with the commonly accepted tolerance level of ±3 % in the point dose measurement. Using only one of the two examined methods for patient-related quality assurance is not sufficiently conclusive in all cases.
High-resolution magnetic resonance-guided posterior femoral cutaneous nerve blocks
To assess the feasibility, technical success, and effectiveness of high-resolution magnetic resonance (MR)-guided posterior femoral cutaneous nerve (PFCN) blocks, a retrospective analysis of 12 posterior femoral cutaneous nerve blocks in 8 patients [6 (75 %) female, 2 (25 %) male; mean age, 47 years; range, 42–84 years] with chronic perineal pain suggesting PFCN neuropathy was performed. Procedures were performed with a clinical wide-bore 1.5-T MR imaging system. High-resolution MR imaging was utilized for visualization and targeting of the PFCN. Commercially available, MR-compatible 20-G needles were used for drug delivery. Variables assessed were technical success (defined as injectant surrounding the targeted PFCN on post-intervention MR images), effectiveness (defined as post-interventional regional anesthesia of the target area innervated downstream of the posterior femoral cutaneous nerve block), rate of complications, and length of procedure time. MR-guided PFCN injections were technically successful in 12/12 cases (100 %) with uniform perineural distribution of the injectant. All blocks were effective and resulted in post-interventional regional anesthesia of the expected areas (12/12, 100 %). No complications occurred during the procedure or during follow-up. The average total procedure time was 45 min (range, 30–70 min). Our initial results demonstrate that this technique of selective MR-guided PFCN blocks is feasible and suggest high technical success and effectiveness. Larger studies are needed to confirm our initial results.
Feedforward friction compensation of Bowden-cable transmission via loop routing
Friction along a Bowden-cable transmission degrades control performance unless it is properly compensated. Friction is produced when the Bowden cable bends, and it changes as the bending angle changes with the relative position of the actuator and the end-effector. This study proposes a method, termed loop routing, to compensate for friction along the Bowden cable. Loop routing involves making a one-round loop along the sheath that continuously maintains the sheath's bending angle at 2π regardless of the end-effector's position in 2-D space. This minimizes the change in the sheath's bending angle as the end-effector translates in a 3-D workspace, which minimizes the friction change and enables feedforward friction compensation of the Bowden cable without employing a sensor. An experiment in open-loop tension control of the Bowden cable was conducted to evaluate the performance of the proposed method. Results show that the output tension follows the reference tension well, with an RMS error of 4.3% and a peak error of 13.3% of the maximum reference.
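A sketch of the feedforward idea using the standard capstan (exponential) friction model for cable-sheath transmission; the capstan model and the friction coefficient are assumptions for illustration rather than the paper's stated model. With loop routing the total wrap angle stays near 2π, so a fixed compensation gain can be applied:

import math

MU = 0.1                  # assumed cable-sheath friction coefficient
WRAP_ANGLE = 2 * math.pi  # total bending angle kept constant by loop routing

def commanded_tension(desired_output_tension, pulling=True):
    """Feedforward-compensated actuator-side tension (capstan model).

    When the cable is pulled toward the end-effector, friction attenuates
    tension by exp(-mu*theta), so the actuator side must over-tension by
    exp(+mu*theta); when paying out, the sign of the loss flips.
    """
    if pulling:
        return desired_output_tension * math.exp(MU * WRAP_ANGLE)
    return desired_output_tension * math.exp(-MU * WRAP_ANGLE)

print(commanded_tension(10.0))   # actuator tension needed for 10 N at the output

Without loop routing, the wrap angle (and hence the required compensation gain) would change with posture, which is exactly the variation the routing removes.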
Effectiveness of domestic wastewater treatment using microbial fuel cells at ambient and mesophilic temperatures.
Domestic wastewater treatment was examined under two different temperatures (23 ± 3 °C and 30 ± 1 °C) and flow modes (fed-batch and continuous) using single-chamber air-cathode microbial fuel cells (MFCs). Temperature was an important parameter for treatment efficiency and power generation. The highest power density of 422 mW/m² (12.8 W/m³) was achieved under continuous flow and mesophilic conditions, at an organic loading rate of 54 g COD/L-d, achieving 25.8% COD removal. Energy recovery was found to depend significantly on the operational conditions (flow mode, temperature, organic loading rate, and HRT) as well as the reactor architecture. The results demonstrate that the main advantages of using temperature-phased, in-series MFC configurations for domestic wastewater treatment are power savings, low solids production, and higher treatment efficiency.
Linguistically Motivated Large-Scale NLP with C&C and Boxer
The statistical modelling of language, together with advances in wide-coverage grammar development, has led to high levels of robustness and efficiency in NLP systems and made linguistically motivated large-scale language processing a possibility (Matsuzaki et al., 2007; Kaplan et al., 2004). This paper describes an NLP system which is based on syntactic and semantic formalisms from theoretical linguistics, and which we have used to analyse the entire Gigaword corpus (1 billion words) in less than 5 days using only 18 processors. This combination of detail and speed of analysis represents a breakthrough in NLP technology. The system is built around a wide-coverage Combinatory Categorial Grammar (CCG) parser (Clark and Curran, 2004b). The parser not only recovers the local dependencies output by treebank parsers such as Collins (2003), but also the long-range dependencies inherent in constructions such as extraction and coordination. CCG is a lexicalized grammar formalism, so that each word in a sentence is assigned an elementary syntactic structure, in CCG's case a lexical category expressing subcategorisation information. Statistical tagging techniques can assign lexical categories with high accuracy and low ambiguity (Curran et al., 2006). The combination of finite-state supertagging and highly engineered C++ leads to a parser which can analyse up to 30 sentences per second on standard hardware (Clark and Curran, 2004a). The C&C tools also contain a number of Maximum Entropy taggers, including the CCG supertagger, a POS tagger (Curran and Clark, 2003a), chunker, and named entity recogniser (Curran and Clark, 2003b). The taggers are highly efficient, with processing speeds of over 100,000 words per second. Finally, the various components, including the morphological analyser morpha (Minnen et al., 2001), are combined into a single program. The output from this program — a CCG derivation, POS tags, lemmas, and named entity tags — is used by the module Boxer (Bos, 2005) to produce interpretable structure in the form of Discourse Representation Structures (DRSs).
Fuzzy association rule mining approaches for enhancing prediction performance
This paper presents an investigation into two fuzzy association rule mining models for enhancing prediction performance. The first model (the FCM-Apriori model) integrates Fuzzy C-Means (FCM) and the Apriori approach for road traffic performance prediction. FCM is used to define the membership functions of fuzzy sets and the Apriori approach is employed to identify the Fuzzy Association Rules (FARs). The proposed model extracts knowledge from a database for a Fuzzy Inference System (FIS) that can be used in prediction of a future value. The knowledge extraction process and the performance of the model are demonstrated through two case studies of road traffic data sets with different sizes. The experimental results show the merits and capability of the proposed knowledge discovery (KD) model in FAR-based knowledge extraction. The second model (the FCM-MSapriori model) integrates FCM and a Multiple Support Apriori (MSapriori) approach to extract the FARs. These FARs provide the knowledge base to be utilized within the FIS for prediction evaluation. Experimental results show that the FCM-MSapriori model predicted future values effectively and outperformed the FCM-Apriori model and other models reported in the literature.
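A toy sketch of the first model's front end under simplifying assumptions (synthetic data, illustrative cluster counts, the min t-norm as fuzzy support, and only 2-item sets), meant to show the shape of the computation rather than the paper's exact procedure:

import numpy as np

def fcm_memberships(x, n_clusters=3, m=2.0, iters=100, seed=0):
    """1-D fuzzy c-means: returns (centers, membership matrix of shape [n, c])."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# toy data: two traffic attributes, fuzzified independently
rng = np.random.default_rng(1)
speed = rng.uniform(20, 120, 500)
flow = rng.uniform(100, 2000, 500)
_, u_speed = fcm_memberships(speed)
_, u_flow = fcm_memberships(flow)

# fuzzy support of the 2-item set {speed in cluster i, flow in cluster j}
# (min t-norm averaged over records), kept if it exceeds a minimum support
min_support = 0.1
for i in range(3):
    for j in range(3):
        support = np.minimum(u_speed[:, i], u_flow[:, j]).mean()
        if support >= min_support:
            print(f"speed~c{i} & flow~c{j}: support={support:.2f}")

Frequent fuzzy itemsets found this way are then turned into if-then rules whose membership functions feed the fuzzy inference system used for prediction.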
Piecewise Linear Neural Network verification: A comparative study
The success of Deep Learning and its potential use in many important safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models for behaving as black boxes, and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these approaches test their algorithms without comparison with other approaches. As a result, the pros and cons of the different algorithms are not well understood. Motivated by the need to accelerate progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming and Satisfiability Modulo Theories, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of previously released test cases, that can be used to compare existing methods. Our analysis not only allows a comparison to be made between different strategies; the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to develop and evaluate promising approaches for making progress on this important topic.
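For context, Mixed Integer Programming approaches of the kind compared here typically rely on the textbook big-M encoding of a ReLU unit $y = \max(0, x)$ with known pre-activation bounds $l \le x \le u$ (with $l < 0 < u$); this is the standard formulation rather than a specific contribution of the paper:
$$ y \ge x, \qquad y \ge 0, \qquad y \le x - l\,(1 - a), \qquad y \le u\,a, \qquad a \in \{0, 1\}, $$
where $a = 1$ forces $y = x$ (active phase) and $a = 0$ forces $y = 0$ (inactive phase). Tighter bounds $l$ and $u$ make the continuous relaxation stronger, and how such bounds are obtained and exploited is one of the main axes along which the compared methods differ.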
Semantic Web Services-A Survey
Semantic Web Services refers to the technology in which the meaning of information and of web services is defined, enabling the web to understand and satisfy the requests of people and machines. The idea is to have data on the web defined and linked in such a way that it can be used by machines not just for display purposes, but for automation, integration, and reuse of data across various applications. The semantic approach was raised to overcome limitations of Web services, such as the fact that an average WWW search examines only about 25% of potentially relevant sites and returns a lot of unwanted information, that information on the web is not suitable for software agents, and the continual doubling of the web's size. Semantic Web Services are built on top of Web Services, extended with rich semantic representations along with capabilities for automatic reasoning developed in the field of artificial intelligence. This survey attempts to give an overview of the underlying concepts and technologies, along with the categorization, selection, and discovery of services based on semantics.
Binder-free Co(OH)2 nanoflake–ITO nanowire heterostructured electrodes for electrochemical energy storage with improved high-rate capabilities
We present the fabrication of binder-free Co(OH)2 nanoflake–ITO nanowire heterostructured electrodes via a combination of chemical vapor deposition and electrodeposition methods for electrochemical energy storage applications. Detailed studies showed that the specific capacitance retention capabilities of these hybrid electrodes were greatly enhanced in comparison to electrodes without nanowire augmentation. The improvement was further verified by our statistical studies of electrodes with loading masses in the range of 0–500 mg cm−2. The highly conductive ITO nanowires can serve as direct electron paths during the charge/discharge process, facilitating the full utilization of electroactive materials. These rigid oxide nanowire supports enable facile and uniform surface coating and are expected to be more stable than previous composite electrodes based on carbon nanotubes. This study provides a promising architecture for binder-free electrochemical capacitors with excellent capacitance retention capabilities.
Gas-inducible transgene expression in mammalian cells and mice
We describe the design and detailed characterization of a gas-inducible transgene control system functional in different mammalian cells, mice and prototype biopharmaceutical manufacturing. The acetaldehyde-inducible AlcR-PalcA transactivator-promoter interaction of the Aspergillus nidulans ethanol-catabolizing regulon was engineered for gas-adjustable transgene expression in mammalian cells. Fungal AlcR retained its transactivation characteristics in a variety of mammalian cell lines and reversibly adjusted transgene transcription from chimeric mammalian promoters (PAIR) containing PalcA-derived operators in a gaseous acetaldehyde-dependent manner. Mice implanted with microencapsulated cells engineered for acetaldehyde-inducible regulation (AIR) of the human glycoprotein secreted placental alkaline phosphatase showed adjustable serum phosphatase levels after exposure to different gaseous acetaldehyde concentrations. AIR-controlled interferon-β production in transgenic CHO-K1-derived serum-free suspension cultures could be modulated by fine-tuning inflow and outflow of acetaldehyde-containing gas during standard bioreactor operation. AIR technology could serve as a tool for therapeutic transgene dosing as well as biopharmaceutical manufacturing.
Face Recognition Based on Curvefaces
A new method called curvefaces is presented for face recognition, based on the curvelet transform. Curvelets are a recent multiscale geometric analysis tool. In contrast to the wavelet transform, the curvelet transform takes edges as its basic representation elements and is anisotropic with strong directionality. It is a multiresolution, band-pass, directional function analysis method that represents image edges and curved singularities more efficiently, yielding a sparser representation than the wavelet and ridgelet transforms. In face recognition, curvelet coefficients can better represent the main features of faces. A support vector machine (SVM) is then used to classify the images. SVM is based on statistical learning theory, is especially suited to small sample sets, and can achieve high recognition rates; a multi-class SVM is employed in this paper. Simulations show that the proposed method outperforms the wavelet-based method.
HaarHOG: Improving the HOG Descriptor for Image Classification
The Histograms of Oriented Gradients (HOG) descriptor represents shape information by storing the local gradients in an image. The Haar wavelet transform is a simple yet powerful technique that can separately enhance the horizontal and vertical local features in an image. In this paper, we enhance the HOG descriptor by subjecting the image to the Haar wavelet transform and then computing HOG from the result in a manner that enriches the shape information encoded in the descriptor. First, we define the novel HaarHOG descriptor for grayscale images and extend this idea for color images. Second, we compare the image recognition performance of the HaarHOG descriptor with the traditional HOG descriptor in four different color spaces and grayscale. Finally, we compare the image classification performance of the HaarHOG descriptor with some popular descriptors used by other researchers on four grand challenge datasets.
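As a rough illustration of the general idea (not the authors' exact pipeline), one could apply a single-level Haar wavelet transform and compute HOG on the resulting subbands; the sketch below assumes the pywt and scikit-image packages and an already-loaded grayscale image.

    import numpy as np
    import pywt
    from skimage.feature import hog

    def haar_hog(gray_image):
        """Concatenate HOG descriptors computed on Haar-wavelet subbands.

        A minimal sketch of a HaarHOG-style descriptor: the single-level
        Haar transform separates horizontal, vertical, and diagonal detail,
        and HOG is computed on each subband plus the approximation.
        """
        cA, (cH, cV, cD) = pywt.dwt2(gray_image, 'haar')
        descriptors = []
        for band in (cA, cH, cV, cD):
            # Normalize each subband to [0, 1] before computing HOG.
            band = (band - band.min()) / (np.ptp(band) + 1e-8)
            descriptors.append(hog(band, orientations=9,
                                   pixels_per_cell=(8, 8),
                                   cells_per_block=(2, 2)))
        return np.concatenate(descriptors)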
Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling
In this paper, we propose a new Bayesian model for fully unsupervised word segmentation and an efficient blocked Gibbs sampler combined with dynamic programming for inference. Our model is a nested hierarchical Pitman-Yor language model, where a Pitman-Yor spelling model is embedded in the word model. We confirmed that it significantly outperforms previously reported results on both phonetic transcripts and standard datasets for Chinese and Japanese word segmentation. Our model can also be viewed as a way to construct an accurate word n-gram language model directly from the characters of an arbitrary language, without any “word” indications.
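For intuition, hierarchical Pitman-Yor models of this kind are built on a Chinese-restaurant-style predictive probability; the sketch below shows only that standard formula (with illustrative variable names), not the authors' full nested sampler.

    def pitman_yor_predictive(word, customer_counts, table_counts,
                              discount, concentration, base_prob):
        """Predictive probability of `word` under a Pitman-Yor process.

        customer_counts[w]: number of customers (tokens) assigned to w
        table_counts[w]:    number of tables serving w
        base_prob:          callable giving the base-measure probability of w
        """
        c_w = customer_counts.get(word, 0)
        t_w = table_counts.get(word, 0)
        c_total = sum(customer_counts.values())
        t_total = sum(table_counts.values())
        denom = concentration + c_total
        # Probability of joining an existing table serving `word`.
        existing = max(c_w - discount * t_w, 0.0) / denom
        # Probability of opening a new table, backed off to the base measure.
        new_table = (concentration + discount * t_total) / denom * base_prob(word)
        return existing + new_table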
Discovering Commonsense Entailment Rules Implicit in Sentences
Reasoning about ordinary human situations and activities requires the availability of diverse types of knowledge, including expectations about the probable results of actions and the lexical entailments for many predicates. We describe initial work to acquire such a collection of conditional (if–then) knowledge by exploiting presuppositional discourse patterns (such as ones involving ‘but’, ‘yet’, and ‘hoping to’) and abstracting the matched material into general rules.
Compact UWB Bandpass Filter With Ultra Narrow Notched Band
A compact ultra-wideband (UWB) bandpass filter with an ultra narrow notched band is proposed using a hybrid microstrip and coplanar waveguide (CPW) structure. The CPW detached-mode resonator (DMR), composed of a quarter-wavelength (λ/4) nonuniform CPW resonator with a short stub and a λ/4 single-mode CPW resonator (SMCR), can allocate three split frequencies at the lower end, middle, and higher end of the UWB passband. The conventional broadside-coupled microstrip/CPW structure is introduced to enhance the bandwidth around the split frequencies, which leads to good UWB operation. To avoid interference from signals such as WLAN, a λ/4 meander slot-line structure embedded in the DMR is employed to obtain the notched band inside the UWB passband. The design is then verified by experiment. Good passband and stopband performances are achieved. Specifically, the fabricated filter has a 10 dB notched fractional bandwidth (FBW) of 2.06% at the notched center frequency of 5.80 GHz.
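As a quick check of the quoted figures, under the usual definition of fractional bandwidth the absolute notch width is $\mathrm{BW} = \mathrm{FBW}\cdot f_0 \approx 0.0206 \times 5.80\ \mathrm{GHz} \approx 0.12\ \mathrm{GHz}$, i.e. the 10 dB notch is roughly 120 MHz wide.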
Pragmatic randomized trial evaluating the clinical and economic effectiveness of acupuncture for chronic low back pain.
In a randomized controlled trial plus a nonrandomized cohort, the authors investigated the effectiveness and costs of acupuncture in addition to routine care in the treatment of chronic low back pain and assessed whether the effects of acupuncture differed in randomized and nonrandomized patients. In 2001, German patients with chronic low back pain were allocated to an acupuncture group or a no-acupuncture control group. Persons who did not consent to randomization were included in a nonrandomized acupuncture group. All patients were allowed to receive routine medical care in addition to study treatment. Back function (Hannover Functional Ability Questionnaire), pain, and quality of life were assessed at baseline and after 3 and 6 months, and cost-effectiveness was analyzed. Of 11,630 patients (mean age = 52.9 years (standard deviation, 13.7); 59% female), 1,549 were randomized to the acupuncture group and 1,544 to the control group; 8,537 were included in the nonrandomized acupuncture group. At 3 months, back function improved by 12.1 (standard error (SE), 0.4) to 74.5 (SE, 0.4) points in the acupuncture group and by 2.7 (SE, 0.4) to 65.1 (SE, 0.4) points among controls (difference = 9.4 points (95% confidence interval: 8.3, 10.5); p < 0.001). Nonrandomized patients had more severe symptoms at baseline and showed improvements in back function similar to those seen in randomized patients. The incremental cost-effectiveness ratio was €10,526 per quality-adjusted life year. Acupuncture plus routine care was associated with marked clinical improvements in these patients and was relatively cost-effective.
Light-Exoskeleton and Data-Glove integration for enhancing virtual reality applications
This contribution discusses the integration of two advanced technologies, the PERCRO Light-Exoskeleton and the PERCRO Data-Glove, with the main pursuit of enhancing human grasping skills inside virtual environments. The mastering of human motor skills for picking up, lifting, or manipulating objects during grasping tasks can be effectively enhanced by including an arm exoskeleton, which provides arm haptic-guidance positioning (HGPo) in spatial coordinates, and a data-glove for practicing hand dexterity. The paper combines theoretical concepts and practical developments, pointing out the feasibility of robust and accurate hand/arm haptic-robotic guidance positioning within virtual reality environments. Such applications demand high precision, robustness, repeatability, and hand/arm dexterity; these properties are covered by the proposed system.
Compressive Spectrum Sensing for Cognitive Radio Networks
This dissertation addresses compressive spectrum sensing for cognitive radio networks. Its front matter and opening chapters are organized as follows: Abstract; Résumé; Acknowledgement; Table of Contents; List of Figures; List of Tables; Notation; List of Abbreviations. Chapter I, Introduction: spectrum management and cognitive radio, the cognitive radio cycle, compressive sensing, dissertation objectives, dissertation contributions, list of publications, and dissertation organization. Chapter II, Spectrum Sensing: the spectrum sensing model; spectrum sensing techniques (energy detection, autocorrelation-based detection, Euclidean-distance-based detection, wavelet-based sensing, matched filter detection, and evaluation metrics); and a conclusion. Chapter III, Compressive Sensing.
Centrality and network flow
Centrality measures, or at least popular interpretations of these measures, make implicit assumptions about the manner in which traffic flows through a network. For example, some measures count only geodesic paths, apparently assuming that whatever flows through the network only moves along the shortest possible paths. This paper lays out a typology of network flows based on two dimensions of variation, namely the kinds of trajectories that traffic may follow (geodesics, paths, trails, or walks) and the method of spread (broadcast, serial replication, or transfer). Measures of centrality are then matched to the kinds of flows that they are appropriate for. Simulations are used to examine the relationship between type of flow and the differential importance of nodes with respect to key measurements such as speed of reception of traffic and frequency of receiving traffic. It is shown that the off-the-shelf formulas for centrality measures are fully applicable only for the specific flow processes they are designed for, and that when they are applied to other flow processes they get the “wrong” answer. It is noted that the most commonly used centrality measures are not appropriate for most of the flows we are routinely interested in. A key claim made in this paper is that centrality measures can be regarded as generating expected values for certain kinds of node outcomes (such as speed and frequency of reception) given implicit models of how traffic flows, and that this provides a new and useful way of thinking about centrality.
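To make the distinction concrete, different off-the-shelf centrality measures can disagree even on a small graph; the sketch below (assuming the networkx package and a toy edge list, not data from the paper) computes several common measures so they can be compared against the intended flow process.

    import networkx as nx

    # A small illustrative graph (toy edge list).
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (1, 3), (4, 5)])

    measures = {
        "degree": nx.degree_centrality(G),           # often read as immediate, parallel spread to neighbours
        "closeness": nx.closeness_centrality(G),     # often read as time-until-arrival along geodesics
        "betweenness": nx.betweenness_centrality(G), # often read as brokerage of traffic on geodesics
        "eigenvector": nx.eigenvector_centrality(G), # often read as walk-based, replicated influence
    }

    for name, values in measures.items():
        top = max(values, key=values.get)
        print(f"{name:>12}: most central node = {top}")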
Precision agriculture using remote monitoring systems in Brazil
Soil and nutrient depletion from intensive use of land is a critical issue for food production. An understanding of whether the soil is adequately treated with appropriate crop management practices, in real time during production cycles, could prevent soil erosion and the overuse of natural or artificial resources to keep the soil healthy and suitable for planting. Precision agriculture traditionally uses expensive techniques to monitor the health of soil and crops, including images from satellites and airplanes. Recently, several studies have used drones and a multitude of sensors connected to farm machinery to observe and measure the health of soil and crops during planting and harvesting. This paper describes a real-time, in-situ agricultural internet of things (IoT) device designed to monitor the state of the soil and the environment. The device was designed to be compatible with open hardware and is composed of temperature and humidity sensors (for soil and environment), sensors for soil electrical conductivity and luminosity, a Global Positioning System (GPS) receiver, and a ZigBee radio for data communication. The field trial involved soil testing and measurements of the local climate in Sao Paulo, Brazil. The measurements of soil temperature, humidity, and conductivity are used to monitor soil conditions, while the local climate data could support decisions about irrigation and other activities related to crop health. On-going research includes methods to reduce energy consumption and increase the number of sensors. Future applications include using the IoT device to detect fire in crops, a common problem in sugar cane fields, and integrating it with irrigation management systems to improve water usage.
Test Anxiety and Academic Performance among Undergraduates: The Moderating Role of Achievement Motivation.
This study investigated the moderating role of achievement motivation in the relationship between test anxiety and academic performance. Three hundred and ninety-three participants (192 males and 201 females), selected from a public university in Ondo State, Nigeria using a purposive sampling technique, took part in the study. They responded to measures of test anxiety and achievement motivation. Three hypotheses were tested using moderated hierarchical multiple regression analysis. Results showed that test anxiety had a negative impact on academic performance (β = -.23; p < .05). Achievement motivation had a positive impact on academic performance (β = .38; p < .05). Also, achievement motivation significantly moderated the relationship between test anxiety and academic performance (β = .10; p < .01). These findings suggest that university management should design appropriate psycho-educational interventions that would enhance students' achievement motivation.
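For readers unfamiliar with the technique, moderation is typically tested by adding a product (interaction) term in a hierarchical regression; the sketch below uses synthetic data and the statsmodels formula API purely for illustration, with the abstract's coefficients reused only to simulate a plausible outcome.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 393  # same sample size as the study, but the data here are synthetic
    anxiety = rng.normal(size=n)
    motivation = rng.normal(size=n)
    # Simulated outcome: anxiety hurts, motivation helps, and motivation buffers anxiety.
    performance = (-0.23 * anxiety + 0.38 * motivation
                   + 0.10 * anxiety * motivation + rng.normal(size=n))
    df = pd.DataFrame({"performance": performance,
                       "anxiety": anxiety,
                       "motivation": motivation})

    # Step 1: main effects only; Step 2: add the interaction (moderation) term.
    step1 = smf.ols("performance ~ anxiety + motivation", data=df).fit()
    step2 = smf.ols("performance ~ anxiety * motivation", data=df).fit()
    print(step2.params["anxiety:motivation"], step2.pvalues["anxiety:motivation"])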
A Statistical Model of Human Pose and Body Shape
Generation and animation of realistic humans is an essential part of many projects in today's media industry; the games and special-effects industries in particular depend heavily on realistic human animation. In this work, a unified model that describes both human pose and body shape is introduced, which allows us to accurately model muscle deformations not only as a function of pose but also dependent on the physique of the subject. Coupled with the model's ability to generate arbitrary human body shapes, it greatly simplifies the generation of highly realistic character animations. A learning-based approach is trained on approximately 550 full-body 3D laser scans taken of 114 subjects. Scan registration is performed using a non-rigid deformation technique. Then, a rotation-invariant encoding of the acquired exemplars permits the computation of a statistical model that simultaneously encodes pose and body shape. Finally, morphing or generating meshes according to several constraints simultaneously can be achieved by training semantically meaningful regressors.
Increasing individual upper alpha power by neurofeedback improves cognitive performance in human subjects.
The hypothesis was tested of whether neurofeedback training (NFT)--applied in order to increase upper alpha but decrease theta power--is capable of increasing cognitive performance. A mental rotation task was performed before and after upper alpha and theta NFT. Only those subjects who were able to increase their upper alpha power (responders) performed better on mental rotations after NFT. Training success (extent of NFT-induced increase in upper alpha power) was positively correlated with the improvement in cognitive performance. Furthermore, the EEG of NFT responders showed a significant increase in reference upper alpha power (i.e. in a time interval preceding mental rotation). This is in line with studies showing that increased upper alpha power in a prestimulus (reference) interval is related to good cognitive performance.
Multilingual Hierarchical Attention Networks for Document Classification
Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.
Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion
Semantic scene understanding of unstructured environments is a highly challenging task for robots operating in the real world. Deep Convolutional Neural Network architectures define the state of the art in various segmentation tasks. So far, researchers have focused on segmentation with RGB data. In this paper, we study the use of multispectral and multimodal images for semantic segmentation and develop fusion architectures that learn from RGB, Near-InfraRed channels, and depth data. We introduce a first-of-its-kind multispectral segmentation benchmark that contains 15,000 images and 366 pixel-wise ground truth annotations of unstructured forest environments. We identify new data augmentation strategies that enable training of very deep models using relatively small datasets. We show that our UpNet architecture exceeds the state of the art both qualitatively and quantitatively on our benchmark. In addition, we present experimental results for segmentation under challenging real-world conditions. Benchmark and demo are publicly available at http://deepscene.cs.uni-freiburg.de.
Quality models: Role and value in software engineering
Software quality is the totality of features and characteristics of a product or a service that bear on its ability to satisfy given needs. Poor quality of a software product in sensitive systems may lead to loss of human life, permanent injury, mission failure, or financial loss, so the quality of the project should be maintained at an appropriate level. To maintain quality, there are different quality models. A high-quality product is one which has a number of quality factors associated with it; these could be described in the requirements specification, or they could be cultured, in that they are normally associated with the artifact through familiarity of use and through the shared experience of users. In this paper, we discuss the main quality models: McCall's quality model, Boehm's quality model, Dromey's quality model, and the FURPS quality model. We compare these models and identify the key differences between them.
Pipsqueak: Lean Lambdas with Large Libraries
Microservices are usually fast to deploy because each microservice is small, and thus each can be installed and started quickly. Unfortunately, lean microservices that depend on large libraries will start slowly and harm elasticity. In this paper, we explore the challenges of lean microservices that rely on large libraries in the context of Python packages and the OpenLambda serverless computing platform. We analyze the package types and compressibility of libraries distributed via the Python Package Index and propose PipBench, a new tool for evaluating package support. We also propose Pipsqueak, a package-aware compute platform based on OpenLambda.
Postural aberrations in low back pain.
The purpose of this study was to measure and describe postural aberrations in chronic and acute low back pain in search of predictors of low back pain. The sample included 59 subjects recruited to the following three groups: chronic, acute, or no low back pain. Diagnoses included disc disease, mechanical back pain, and osteoarthritis. Lumbar lordosis, thoracic kyphosis, head position, shoulder position, shoulder height, pelvic tilt, and leg length were measured using a photographic technique. In standing, chronic pain patients exhibited an increased lumbar lordosis compared with controls (p < .05). Acute patients had an increased thoracic kyphosis and a forward head position compared with controls (p < .05). In sitting, acute patients had an increased thoracic kyphosis compared with controls (p < .05). These postural parameters identified discrete postural profiles but had moderate value as predictors of low back pain. Therefore other unidentified factors are also important in the prediction of low back pain.
First-line sequential high-dose VIP chemotherapy with autologous transplantation for patients with primary mediastinal nonseminomatous germ cell tumours: a prospective trial
To determine the efficacy of first-line sequential high-dose VIP chemotherapy (HD-VIP) in patients with primary mediastinal nonseminomatous germ cell tumours (GCT), 28 patients were enrolled in a German multicentre trial. HD-VIP chemotherapy consisted of 3–4 cycles of dose-intensive etoposide and ifosfamide plus cisplatin, repeated every 22 days, with each cycle followed by autologous peripheral blood stem cell transplantation plus granulocyte colony-stimulating factor (G-CSF) support. One cycle of standard-dose VIP was applied to harvest peripheral blood stem cells. Ten patients had mediastinal involvement as the only manifestation (36%); 18 of 28 patients had additional metastatic sites, such as lung (n=17; 61%), liver (n=7; 25%), bone (n=5; 18%), lymph nodes (n=3; 11%) and CNS (n=3; 11%). Median follow-up was 43 months (range, 7–113) for all patients and 52 months (range, 22–113) for surviving patients. Nineteen of 28 patients obtained a disease-free status, 11 with HD-VIP alone and eight with adjunctive surgery. In addition, one of the four patients with marker-negative partial remission after HD-VIP without resection of residual masses is currently alive. Two patients developed recurrence of GCT or teratoma. Two patients have died due to an associated haematologic disorder. The 2-year progression-free survival and overall survival rates are 64 and 68%, respectively. This report represents a subgroup analysis of 28 patients with mediastinal nonseminoma within the German first-line study for ‘poor prognosis’ GCT. Compared with data from an international database analysis including 253 patients with mediastinal nonseminoma treated with conventional chemotherapy, the results may indicate that HD-VIP yields an approximately 15% survival improvement.
Taming Information-Stealing Smartphone Applications (on Android)
Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.
Fast CAD and optimization of waveguide components and aperture antennas by hybrid MM/FE/MoM/FD methods-state-of-the-art and recent advances
This paper presents an overview of the state-of-the-art of hybrid mode-matching (MM)/finite-element (FE)/method-of-moments (MoM)/finite-difference (FD) techniques applied for the rigorous, fast computer-aided design and optimization of waveguide components, combline filters, and coupled horns, as well as of slot arrays, and describes some recent advances. Related aspects involve the inclusion of coaxial and dielectric structures for related filters, the extension to multiports at cross-coupled filters, the rigorous consideration of outer and inner mutual coupling effects at coupled horn and slot arrays, the application of the multilevel fast multipole algorithm for the more efficient MoM calculation part of horns and horn clusters, and the utilization of the MoM for the design of arbitrarily shaped three-dimensional waveguide elements. The described hybrid techniques advantageously combine the efficiency of the MM method with the flexibility of the FE, MoM, and FD methods. Topical application examples demonstrate the versatility of the hybrid techniques; their accuracy is verified by available measurements.
Application development of virtual metrology in semiconductor industry
Daily wafer fabrication in a semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. These metrology operations require many metrology tools, which increase the fab's investment, and they also increase the cycle time of the wafer process. Metrology operations add no value to the wafer itself, only quality assurance. This article presents a new method, denoted virtual metrology (VM), that utilizes sensor data collected from 300 mm fab tools to forecast the quality data of wafers and tools. The proposed method defines the key steps to establish a VM control model based on neural networks and to develop and deploy applications following the SEMI EDA (equipment data acquisition) standards.
Scalable Mutual Information Estimation Using Dependence Graphs
The Mutual Information (MI) is an often used measure of dependency between two random variables, utilized in information theory, statistics, and machine learning. Recently several MI estimators have been proposed that can achieve parametric MSE convergence rate. However, most of the previously proposed estimators have high computational complexity of at least $O(N^2)$. We propose a unified method for empirical non-parametric estimation of a general MI function between random vectors in $\mathbb{R}^d$ based on $N$ i.i.d. samples. The reduced-complexity MI estimator, called the ensemble dependency graph estimator (EDGE), combines randomized locality-sensitive hashing (LSH), dependency graphs, and ensemble bias-reduction methods. We prove that EDGE achieves optimal computational complexity $O(N)$ and can achieve the optimal parametric MSE rate of $O(1/N)$ if the density is $d$ times differentiable. To the best of our knowledge, EDGE is the first non-parametric MI estimator that can achieve parametric MSE rates with linear time complexity. We illustrate the utility of EDGE for the analysis of the information plane (IP) in deep learning. Using EDGE we shed light on the controversy over whether the compression property of the information bottleneck (IB) in fact holds for ReLU and other rectification functions in deep neural networks (DNNs).
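EDGE itself relies on LSH-based dependency graphs and ensemble bias correction; as a much simpler point of reference (and explicitly not the EDGE estimator), a naive plug-in MI estimate for two scalar samples can be computed from a joint histogram, as sketched below with numpy only.

    import numpy as np

    def histogram_mi(x, y, bins=32):
        """Naive plug-in estimate of I(X;Y) in nats from a 2D histogram.

        This is only a baseline for intuition; it is biased and scales poorly
        with dimension, which is precisely what estimators like EDGE address.
        """
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nonzero = pxy > 0
        return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

    # Example: correlated Gaussians, where the true MI is -0.5 * log(1 - rho^2).
    rng = np.random.default_rng(1)
    rho = 0.8
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20000)
    print(histogram_mi(z[:, 0], z[:, 1]))  # roughly 0.51 nats for rho = 0.8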
Bracketing programme constructs
Amused by the levity of [3] and annoyed by the attitude of [2], I should like to make a serious contribution to the do...od controversy. In the realm of expressions of the traditional arithmetic kind, the use of brackets and parentheses to clarify and disambiguate generally goes unchallenged. Certainly for expressions of limited complexity the method usually works for the human reader, whilst for a machine, free of problems of the psychology of perception, greater depths of nesting present no theoretical difficulty. As Lisp and Algol-68 have shewn, the same method may be applied to bracketing other programme constructs, yet the majority of human readers tend to find this lacking in clarity. Lisp provides no alternative, and has accordingly been much abused and reviled. Algol-68 permits the use of symmetric keyword pairs, such as comment...tnemmoc: probably the one example of a symmetry which is not beautiful. The origin of the controversy lies in the realisation that the method pioneered by Algol-60 and adopted by many other languages since, namely the use of begin...end pairs, suffers from the fact that one END looks pretty much the same as any other; and when a number of ENDs occur in direct succession - a commonly observed phenomenon - it is difficult for a human to match BEGINs with ENDs and see how the programme is really structured. Even the technique of "prettyprinting" does not solve the problem: for when a compiler complains "Missing END", the programmer still has to determine which construct has been incorrectly terminated.
The Genia Event Extraction Shared Task, 2013 Edition - Overview
The Genia Event Extraction task is organized for the third time, in BioNLP Shared Task 2013. Toward knowledge based construction, the task is modified in a number of points. As the final results, it received 12 submissions, among which 2 were withdrawn from the final report. This paper presents the task setting, data sets, and the final results with discussion for possible future directions.
Vision-based and marker-less surgical tool detection and tracking: a review of the literature
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome challenges coming from deported eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based and marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of data-sets used for developing and testing detection algorithms, (2) in-depth comparison of surgical tool detection methods from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of validation techniques employed to obtain detection performance results and establish comparison between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords "surgical tool detection", "surgical tool tracking", "surgical instrument detection" and "surgical instrument tracking", limiting results to the year range 2000–2015. Our study shows that despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.
The HybrEx Model for Confidentiality and Privacy in Cloud Computing
This paper proposes a new execution model for confidentiality and privacy in cloud computing, called the HybrEx (Hybrid Execution) model. The HybrEx model provides a seamless way for an organization to utilize their own infrastructure for sensitive, private data and computation, while integrating public clouds for nonsensitive, public data and computation. We outline how to realize this model in one specific execution environment, MapReduce over Bigtable.
Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
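Representational similarity analysis of this kind boils down to correlating pairwise-dissimilarity structure between a model space and a brain region; a minimal, generic sketch (synthetic data, scipy/numpy only, not the authors' pipeline) is given below.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rsa_correlation(model_features, brain_patterns):
        """Spearman correlation between two representational dissimilarity matrices.

        model_features: (n_items, d_model) feature vectors from an image/text model
        brain_patterns: (n_items, d_voxels) fMRI activity patterns for the same items
        """
        model_rdm = pdist(model_features, metric='correlation')  # condensed upper triangle
        brain_rdm = pdist(brain_patterns, metric='correlation')
        rho, p = spearmanr(model_rdm, brain_rdm)
        return rho, p

    # Synthetic illustration with 10 items.
    rng = np.random.default_rng(0)
    items = rng.normal(size=(10, 50))
    model = items + 0.1 * rng.normal(size=(10, 50))   # model space resembling the "brain"
    brain = items @ rng.normal(size=(50, 200))        # linear mixing into 200 "voxels"
    print(rsa_correlation(model, brain))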
Access Pattern disclosure on Searchable Encryption: Ramification, Attack and Mitigation
The advent of cloud computing has ushered in an era of mass data storage in remote servers. Remote data storage offers reduced data management overhead for data owners in a cost-effective manner. Sensitive documents, however, need to be stored in encrypted format due to security concerns, and encrypted storage makes it difficult to search the stored documents. This poses a major barrier towards selective retrieval of encrypted documents from remote servers. Various protocols have been proposed for keyword search over encrypted data to address this issue. Most of the available protocols leak data access patterns for efficiency reasons. Although oblivious-RAM-based protocols can be used to hide data access patterns, such protocols are computationally intensive and do not scale well for real-world datasets. In this paper, we introduce a novel attack that exploits data access pattern leakage to disclose a significant amount of sensitive information using a modicum of prior knowledge. Our empirical analysis with a real-world dataset shows that the proposed attack is able to disclose sensitive information with very high accuracy. Additionally, we propose a simple technique to mitigate the risk of the proposed attack at the expense of a slight increase in computational resources and communication cost. Furthermore, our proposed mitigation technique is generic enough to be used in conjunction with any searchable encryption scheme that reveals data access patterns.
End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient
Learning a goal-oriented dialog policy is generally performed offline with supervised learning algorithms or online with reinforcement learning (RL). Additionally, as companies accumulate massive quantities of dialog transcripts between customers and trained human agents, encoder-decoder methods have gained popularity as agent utterances can be directly treated as supervision without the need for utterance-level annotations. However, one potential drawback of such approaches is that they myopically generate the next agent utterance without regard for dialog-level considerations. To resolve this concern, this paper describes an offline RL method for learning from unannotated corpora that can optimize a goal-oriented policy at both the utterance and dialog level. We introduce a novel reward function and use both on-policy and off-policy policy gradient to learn a policy offline without requiring online user interaction or an explicit state space definition.
The Laplace-Jaynes approach to induction
An approach to induction is presented, based on the idea of analysing the context of a given problem into `circumstances'. This approach, fully Bayesian in form and meaning, provides a complement or in some cases an alternative to that based on de Finetti's representation theorem and on the notion of infinite exchangeability. In particular, it gives an alternative interpretation of those formulae that apparently involve `unknown probabilities' or `propensities'. Various advantages and applications of the presented approach are discussed, especially in comparison to that based on exchangeability. Generalisations are also discussed.
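As background, the classical Laplace rule of succession illustrates the flavour of formulae whose interpretation is at stake in such discussions: after observing $s$ successes in $n$ exchangeable trials, with a uniform prior, the predictive probability of a further success is $P(\text{success} \mid s, n) = \frac{s+1}{n+2}$, a statement about the next observation rather than about any 'unknown propensity'.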
A survey of Community Question Answering
With the advent of numerous community forums, tasks associated with the same have gained importance in the recent past. With the influx of new questions every day on these forums, the issues of identifying methods to find answers to said questions, or even trying to detect duplicate questions, are of practical importance and are challenging in their own right. This paper aims at surveying some of the aforementioned issues, and methods proposed for tackling the same.
Generic Black-Box End-to-End Attack Against RNNs and Other API Call Based Malware Classifiers
Deep neural networks (DNNs) are used to solve complex classification problems for which other machine learning classifiers, such as SVMs, fall short. Recurrent neural networks (RNNs) have been used for tasks that involve sequential inputs, such as speech to text. In the cyber security domain, RNNs based on API calls have been used effectively to classify previously un-encountered malware. In this paper, we present a black-box attack against RNNs, focusing on finding adversarial API call sequences that would be misclassified by an RNN without affecting the malware functionality. We also show that this attack is effective against many classifiers, due to the transferability principle between RNN variants, feed-forward DNNs, and traditional machine learning classifiers such as SVMs. Finally, we implemented GADGET, a software framework that converts any malware binary into a binary undetected by an API-call-based malware classifier, using the proposed attack, without access to the malware source code. We conclude by discussing possible defense mechanisms and countermeasures against the attack.
A capacitive-loaded level shift circuit for improving the noise immunity of high voltage gate drive IC
A high-voltage gate drive IC achieving high dVS/dt noise immunity up to 85 V/ns and an allowable negative VS swing down to −12 V at a 15 V supply voltage is proposed for the first time. These robust features are due to the presented capacitive-loaded level shift circuit used in the gate driver. Measured and simulated results verify the electrical characteristics of the designed gate driver, which is implemented in a 0.5 µm 600 V Bipolar-CMOS-DMOS (BCD) technology.
Spotting Suspicious Behaviors in Multimodal Data: A General Metric and Algorithms
Many commercial products and academic research activities are embracing behavior analysis as a technique for improving detection of attacks of many sorts, from retweet boosting and hashtag hijacking to link advertising. Traditional approaches focus on detecting dense blocks in the adjacency matrix of graph data and, more recently, in the tensors of multimodal data. No method gives a principled way to score the suspiciousness of dense blocks with different numbers of modes and rank them to draw human attention accordingly. In this paper, we first give a list of axioms that any metric of suspiciousness should satisfy; we propose an intuitive, principled metric that satisfies the axioms and is fast to compute; moreover, we propose CrossSpot, an algorithm to spot dense blocks that are worth inspecting, typically indicating fraud or some other noteworthy deviation from the usual, and sort them in order of importance (“suspiciousness”). Finally, we apply CrossSpot to real data, where it improves the F1 score over previous techniques by 68 percent and finds suspicious behavioral patterns in social datasets spanning 0.3 billion posts.
Efficient Far-Field Radio Frequency Energy Harvesting for Passively Powered Sensor Networks
An RF-DC power conversion system is designed to efficiently convert far-field RF energy to DC voltages at very low received power and voltages. Passive rectifier circuits are designed in a 0.25 µm CMOS technology using floating-gate transistors as rectifying diodes. The 36-stage rectifier can rectify input voltages as low as 50 mV with a voltage gain of 6.4 and operates with received power as low as 5.5 µW (−22.6 dBm). Optimized for far field, the circuit operates at a distance of 44 m from a 4 W EIRP source. The high voltage range achieved at low load current makes it ideal for use in passively powered sensor networks.
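As a check of the quoted sensitivity, converting the received power to dBm gives $P_{\mathrm{dBm}} = 10\log_{10}\!\left(\frac{5.5\ \mu\mathrm{W}}{1\ \mathrm{mW}}\right) \approx -22.6\ \mathrm{dBm}$, which is why the figure is stated as −22.6 dBm rather than +22.6 dBm.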
G-CNN: An Iterative Grid Based Object Detector
We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.
Analysis of the Simplification of Numerical Expressions in Spanish through an Empirical Study
In this paper we present the results of an empirical study carried out on a parallel corpus of original and manually simplified texts in Spanish and a subsequent survey, with the aim of targeting simplification operations concerning numerical expressions. For the purpose of the study, a “numerical expression” is understood as any phrase expressing quantity possibly modified with a numerical hedge, such as almost a quarter. Data is analyzed both in context and in isolation, and attention is paid to the difference the target reader makes to simplification. Our future work aims at computational implementation of the transformation rules extracted so far.
Dense Pixel Matching between Unrectified and Distorted Images using Dynamic Programming
In this paper, a novel framework for dense pixel matching based on dynamic programming is introduced. Unlike most techniques proposed in the literature, our approach assumes neither known camera geometry nor the availability of rectified images. Under such conditions, the matching task cannot be reduced to finding correspondences between a pair of scanlines. We propose to extend existing dynamic programming methodologies to a larger dimensional space by using a 3D scoring matrix so that correspondences between a line and a whole image can be calculated. After assessing our framework on a standard evaluation dataset of rectified stereo images, experiments are conducted on unrectified and non-linearly distorted images. Results validate our new approach and reveal the versatility of our algorithm.
A survey of DHT security techniques
Peer-to-peer networks based on distributed hash tables (DHTs) have received considerable attention ever since their introduction in 2001. Unfortunately, DHT-based systems have been shown to be notoriously difficult to protect against security attacks. Various reports have been published that discuss or classify general security issues, but so far a comprehensive survey describing the various proposed defenses has been lacking. In this article, we present an overview of techniques reported in the literature for making DHT-based systems resistant to the three most important attacks that can be launched by malicious nodes participating in the DHT: (1) the Sybil attack, (2) the Eclipse attack, and (3) routing and storage attacks. We review the advantages and disadvantages of the proposed solutions and, in doing so, confirm how difficult it is to secure DHT-based systems in an adversarial environment.
Prognostic significance of angiogenic growth factor serum levels in patients with acute coronary syndromes.
BACKGROUND In patients with acute coronary syndromes, compensatory processes are initiated, including angiogenesis and endothelial regeneration of ruptured or eroded plaques. Angiogenic growth factors like vascular endothelial growth factor (VEGF), hepatocyte growth factor (HGF), and basic fibroblast growth factor (bFGF) are upregulated during ischemia. However, it is unknown whether their serum levels are related to clinical outcome. METHODS AND RESULTS We measured VEGF, HGF, and bFGF levels in 1090 patients with acute coronary syndromes. Angiographic evaluation was performed at baseline, and death and nonfatal myocardial infarctions were recorded during 6-month follow-up. HGF and VEGF, but not bFGF, were significantly and independently associated with the patients' outcome. Patients with elevated VEGF serum levels suffered from adverse outcome (adjusted hazard ratio, 2.50 [1.52 to 4.82]; P=0.002). VEGF elevation was associated with evidence of ischemia and was a significant predictor of the effect of glycoprotein IIb/IIIa inhibition. In contrast, patients with high HGF levels had a significantly lower event rate compared with patients with low HGF levels (adjusted hazard ratio, 0.33 [0.21 to 0.51]; P<0.001). HGF levels did not correlate with evidence of ischemia and did not predict the effect of abciximab. Intriguingly, however, HGF levels significantly correlated with angiographically visible collateralization of the target vessel (22.4% versus 10.5%; P<0.001). CONCLUSIONS The angiogenic growth factors VEGF and HGF are independent predictors of the patients' prognosis in acute coronary syndromes. Whereas VEGF elevation correlated with the evidence of myocardial ischemia and indicated an adverse outcome, HGF elevation was independent of ischemia and associated with improved collateralization as well as a favorable prognosis.
Semantics Centric Solutions for Application and Data Portability in Cloud Computing
Cloud computing has become one of the key considerations both in academia and industry. Cheap, seemingly unlimited computing resources that can be allocated almost instantaneously and pay-as-you-go pricing schemes are some of the reasons for the success of Cloud computing. The Cloud computing landscape, however, is plagued by many issues hindering adoption. One such issue is vendor lock-in, forcing the Cloud users to adhere to one service provider in terms of data and application logic. Semantic Web has been an important research area that has seen significant attention from both academic and industrial researchers. One key property of Semantic Web is the notion of interoperability and portability through high level models. Significant work has been done in the areas of data modeling, matching, and transformations. The issues the Cloud computing community is facing now with respect to portability of data and application logic are exactly the same issue the Semantic Web community has been trying to address for some time. In this paper we present an outline of the use of well established semantic technologies to overcome the vendor lock-in issues in Cloud computing. We present a semantics-centric programming paradigm to create portable Cloud applications and discuss MobiCloud, our early attempt to implement the proposed approach.
Using Factor Analysis to Generate Clusters of Agile Practices (A Guide for Agile Process Improvement)
In this paper, factor analysis is applied to a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each associated with a list of practices. These factors, with their associated practices, can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated. The significant correlations suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, and that communication with the customer was not very popular, as it correlated negatively with governance and with iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly, success rate correlated negatively with traditional analysis methods such as Gantt charts and detailed requirements specifications.
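For illustration only, extracting such factors from a respondents-by-practices rating matrix can be done with an off-the-shelf factor analysis routine; the sketch below assumes a pandas DataFrame of survey ratings (one column per agile practice) and uses scikit-learn, which is not necessarily the tool used in the study.

    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    def extract_practice_factors(ratings: pd.DataFrame, n_factors: int = 15,
                                 loading_cutoff: float = 0.4):
        """Group survey items (agile practices) by their strongest factor loadings.

        ratings: respondents x practices matrix of effectiveness scores (illustrative).
        """
        fa = FactorAnalysis(n_components=n_factors, random_state=0)
        fa.fit(ratings.values)
        loadings = pd.DataFrame(fa.components_.T,
                                index=ratings.columns,
                                columns=[f"factor_{i+1}" for i in range(n_factors)])
        clusters = {}
        for practice, row in loadings.iterrows():
            best = row.abs().idxmax()
            if abs(row[best]) >= loading_cutoff:
                clusters.setdefault(best, []).append(practice)
        return loadings, clusters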
Pathological burst fracture in the cervical spine with negative red flags: a case report.
OBJECTIVE To report on a case of a pathological burst fracture in the cervical spine where typical core red flag tests failed to identify a significant lesion, and to remind chiropractors to be vigilant in the recognition of subtle signs and symptoms of disease processes. CLINICAL FEATURES A 61-year-old man presented to a chiropractic clinic with neck pain that began earlier that morning. After a physical exam that was relatively unremarkable, imaging identified a burst fracture in the cervical spine. INTERVENTION & OUTCOMES The patient was sent by ambulance to the hospital where he was diagnosed with multiple myeloma. No medical intervention was performed on the fracture. SUMMARY The patient's initial physical examination was largely unremarkable, with an absence of clinical red flags. The screening tools were non-diagnostic. Pain with traction and the sudden onset of symptoms prompted further investigation with plain film imaging of the cervical spine. This identified a pathological burst fracture in the C4 vertebrae.
Converted-wave seismic exploration: Applications
Converted seismic waves (P-to-S on reflection) are being increasingly used to explore for subsurface targets. Rapid advancements in multicomponent acquisition methods and processing techniques have led to numerous applications for P-S images. Uses that have arisen include sand/shale differentiation, carbonate identification, definition of interfaces with low P-wave contrast, anisotropy analysis, imaging through gas zones, shallow high-resolution imaging, and reservoir monitoring. Marine converted-wave analysis using 4-C recordings (a three-component geophone plus a hydrophone) has generated some remarkable images.
Exploring Gamification Techniques and Applications for Sustainable Tourism
Tourism is perceived as an appropriate solution for pursuing sustainable economic growth due to its main characteristics. In the context of sustainable tourism, gamification can act as an interface between tourists (clients), organisations (companies, NGOs, public institutions) and community, an interface built in a responsible and ethical way. The main objective of this study is to identify gamification techniques and applications used by organisations in the hospitality and tourism industry to improve their sustainable activities. The first part of the paper examines the relationship between gamification and sustainability, highlighting the links between these two concepts. The second part identifies success stories of gamification applied in hospitality and tourism and reviews gamification benefits by analysing the relationship between tourism organisations and three main tourism stakeholders: tourists, tourism employees and local community. The analysis is made in connection with the main pillars of sustainability: economic, social and environmental. This study is positioning the role of gamification in the tourism and hospitality industry and further, into the larger context of sustainable development.
Noise2Noise: Learning Image Restoration without Clean Data
We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at, and sometimes exceeding, that of training with clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.
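The key trick is simply to replace the clean training target with a second, independently corrupted observation of the same signal; under squared-error loss the minimizer is unchanged in expectation. The sketch below is a minimal, hypothetical PyTorch-style illustration (tiny CNN, synthetic Gaussian noise), not the paper's networks or data.

    import torch
    import torch.nn as nn

    # Tiny denoising CNN; the architecture is illustrative only.
    model = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def noisy(clean, sigma=0.1):
        return clean + sigma * torch.randn_like(clean)

    for step in range(100):
        clean = torch.rand(8, 1, 64, 64)              # stand-in for a batch of images
        inputs, targets = noisy(clean), noisy(clean)  # two independent corruptions
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)        # no clean target is ever used
        loss.backward()
        optimizer.step()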
Molecular Hypergraph Grammar with its Application to Molecular Optimization
This paper is concerned with a molecular optimization framework using variational autoencoders (VAEs). In this paradigm, a VAE allows us to convert a molecular graph into/from its latent continuous vector, and therefore the molecular optimization problem can be solved by continuous optimization techniques. One of the longstanding issues in this area is that it is difficult to always generate valid molecules. The very recent work called the junction tree variational autoencoder (JT-VAE) successfully addressed this issue by generating a molecule fragment-by-fragment. While it achieves state-of-the-art performance, it requires several neural networks to be trained, which predict which atoms are used to connect fragments and the stereochemistry of each bond. In this paper, we present a molecular hypergraph grammar variational autoencoder (MHG-VAE), which uses a single VAE to address the issue. Our idea is to develop a novel graph grammar for molecular graphs called molecular hypergraph grammar (MHG), which can specify the connections between fragments and the stereochemistry on behalf of neural networks. This capability allows us to address the issue using only a single VAE. We empirically demonstrate the effectiveness of MHG-VAE over existing methods.
3D Object Localisation from Multi-View Image Detections
In this work we present a novel approach to recover an object's 3D position and occupancy in a generic scene using only 2D object detections from multiple view images. The method reformulates the problem as the estimation of a quadric (ellipsoid) in 3D given a set of 2D ellipses fitted to the object detection bounding boxes in multiple views. We show that a closed-form solution exists in the dual space using a minimum of three views, while a solution with two views is possible through the use of non-linear optimisation and constraints on the size of the object shape. In order to make the solution robust to inaccurate bounding boxes, a likely occurrence in object detection methods, we introduce a data preconditioning technique and a non-linear refinement of the closed-form solution based on implicit subspace constraints. Results on synthetic tests and on different real datasets, involving challenging scenarios, demonstrate the applicability and potential of our method in several realistic settings.
Authorship Attribution in Bengali Language
We describe authorship attribution of Bengali literary text. Our contributions include a new corpus of 3,000 passages written by three Bengali authors, an end-to-end system for authorship classification based on character n-grams, feature selection for authorship attribution, feature ranking and analysis, and a learning curve to assess the relationship between the amount of training data and test accuracy. We achieve state-of-the-art results on a held-out dataset, thus indicating that lexical n-gram features are unarguably the best discriminators for authorship attribution of Bengali literary text.
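A minimal sketch of a character n-gram authorship classifier in the spirit of the system described above; the toy passages, labels, and the choice of a linear SVM over TF-IDF character n-grams are assumptions for illustration.

```python
# Illustrative character n-gram authorship classifier (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

passages = ["passage text by author A ...", "another passage by author B ...",
            "a third passage by author C ...", "more text from author A ..."]
authors = ["A", "B", "C", "A"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LinearSVC(),
)
clf.fit(passages, authors)
print(clf.predict(["an unseen passage whose author we want to attribute"]))
```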
A trip into the countryside: an experience design for explorative car cruises
In-car navigation systems are designed with effectiveness and efficiency (e.g., guiding accuracy) in mind. However, finding a way and discovering new places could also be framed as an adventurous, stimulating experience for the driver and passengers. Inspired by Gaver and Martin's (2000) notion of "ambiguity and detour" and Hassenzahl's (2010) Experience Design, we built ExplorationRide, an in-car navigation system to foster exploration. An empirical in situ exploration demonstrated the system's ability to create an exploration experience, marked by a relaxed atmosphere, a loss of sense of time, excitement about new places and an intensified relationship with the landscape.
Preparation and characterization of high quality diamond films by DC arc plasma jet CVD method
Under optimal conditions, free-standing high quality diamond films were prepared by the DC arc plasma jet CVD method at a growth rate of 7-10 μm/h. Surface and cross-section morphologies of the diamond films were observed by SEM. A Raman spectrometer was used to characterize the quality of the diamond films. The IR transmittivity measured by an IR spectrometer is close to the theoretical value of about 71% in the far infrared band. The thermal conductivity measured by photothermal deflection exceeds 18 W/cm·K. 〈110〉 is the preferential growth orientation of the films, as detected by X-ray diffraction. The extremely high temperature of the DC arc plasma jet produces supersaturated atomic hydrogen, which plays an important role in the deposition of high quality diamond films.
A Broadband Dual-Polarized Dual-OAM-Mode Antenna Array for OAM Communication
The generation of multimode orbital angular momentum (OAM) carrying beams has attracted increasing attention. A broadband dual-polarized dual-OAM-mode uniform circular array is proposed in this letter. The proposed antenna array, which consists of a broadband dual-polarized bow-tie dipole array and a broadband phase-shifting feeding network, can be used to generate OAM mode −1 and OAM mode +1 beams from 2.1 to 2.7 GHz (a bandwidth of 25%) for each of two polarizations. Four orthogonal channels can be provided by the proposed antenna array. A 2.5-m broadband OAM link is built. The measured crosstalk between the mode-matched channels and the mode-mismatched channels is less than −12 dB at 2.1, 2.4, and 2.7 GHz. Four different data streams are transmitted simultaneously by the proposed array with a bit error rate of less than 4.2×10^−3 at 2.1, 2.4, and 2.7 GHz.
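The phase-shifting feeding network can be understood through the standard phase progression for OAM on a uniform circular array: element n of an N-element ring is fed with phase l·2πn/N for mode l. The sketch below computes these phases for modes +1 and −1; the 8-element count is an illustrative assumption, not the array in the letter.

```python
# Feed phases for OAM modes +1 and -1 on a uniform circular array (illustrative).
import numpy as np

N = 8                                   # number of elements on the circle (assumed)
n = np.arange(N)
for mode in (+1, -1):
    phases_deg = np.degrees(mode * 2 * np.pi * n / N) % 360
    print(f"OAM mode {mode:+d}: element phases (deg) = {np.round(phases_deg, 1)}")
```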
Removing gamification from an enterprise SNS
Gamification, the use of game mechanics in non-gaming applications, has been applied to various systems to encourage desired user behaviors. In this paper, we examine patterns of user activity in an enterprise social network service after the removal of a points-based incentive system. Our results reveal that the removal of the incentive scheme did reduce overall participation via contribution within the SNS. We also describe the strategies used by point leaders and observe that users geographically distant from headquarters tended to comment on profiles outside of their home country. Finally, we describe the implications of the removal of extrinsic rewards, such as points and badges, on social software systems, particularly those deployed within an enterprise.
BM3D filter in salt-and-pepper noise removal
There has been significant recent progress in filtering of salt-and-pepper noise in digital images. However, almost all recent schemes for filtering this type of noise do not take into account the shape of objects (in particular edges) in images. We apply the block-matching and 3D filtering (BM3D) scheme to refine the output of decision-based/adaptive median techniques. The obtained results are excellent, surpassing the current state-of-the-art by about 2 dB for both grayscale and color images.
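A rough sketch of the two-stage idea, assuming a simple decision-based median pass that replaces only pixels detected as salt-and-pepper, followed by an optional BM3D refinement via the third-party bm3d package (its availability and API are assumptions, so the call is guarded).

```python
# Illustrative two-stage salt-and-pepper removal: decision-based median, then BM3D.
import numpy as np
from scipy.ndimage import median_filter

def remove_salt_and_pepper(img_uint8, window=3):
    img = img_uint8.astype(np.float32)
    corrupted = (img_uint8 == 0) | (img_uint8 == 255)     # noise detection
    med = median_filter(img, size=window)
    out = np.where(corrupted, med, img)                    # replace only noisy pixels
    return out.astype(np.uint8)

noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy input
restored = remove_salt_and_pepper(noisy)

try:
    import bm3d                                            # optional refinement stage (assumed package)
    refined = bm3d.bm3d(restored.astype(np.float32) / 255.0, sigma_psd=0.05)
except ImportError:
    refined = restored
```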
Reading Text in the Wild with Convolutional Neural Networks
In this work we present an end-to-end system for text spotting—localising and recognising text in natural scene images—and text-based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character-classifier-based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.
Underestimation of myelodysplastic syndrome incidence by cancer registries: Results from a population-based data linkage study.
BACKGROUND Myelodysplastic syndromes (MDS) appear to be underreported to cancer registries, with important implications for cancer and transfusion support service planning and delivery. Two population-based databases were linked to estimate MDS incidence more accurately. METHODS Data from the statewide Victorian Cancer Registry (VCR) and Victorian Admitted Episode Dataset (VAED, capturing all inpatient admissions), in Australia, were linked. Incidence rates were calculated based on VCR reported cases and using additional MDS cases identified in VAED. Differences between reported and nonreported cases were assessed. A multivariate capture-recapture method was used to estimate missed cases. RESULTS Between 2003 and 2010, 2692 cases were reported to VCR and an additional 1562 cases were identified in VAED. Annual incidence rate for those aged 65 years and older based on VCR was 44 per 100,000 (95% confidence interval [CI] = 43-45 per 100,000) and 68 per 100,000 (95% CI = 67-70 per 100,000) using both data sets. Cases not reported to VCR were more likely to have had previous malignancies recorded in VAED (23% versus 19%, P = .003) and to require red cell transfusion (59% versus 54%, P = .003). Using the multivariate model, an estimated 1292 cases were missed by both data sources: the re-estimate was 5546 (95% CI = 5438-5655) MDS cases, with an annual incidence in those aged 65 or older of 103 per 100,000 (95% CI = 100-106). CONCLUSIONS This study reports a higher incidence of MDS using 2 data sources from a large and well-defined population than reported using cancer registry notifications alone.
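The estimation idea can be illustrated with the simplest two-source capture-recapture formula (the Chapman estimator); the study itself used a multivariate model, and the counts below are illustrative placeholders rather than the study's data.

```python
# Two-source capture-recapture (Chapman estimator), illustrative only.
def chapman_estimate(n_registry, n_admissions, n_both):
    """Estimate total cases from two incomplete, overlapping sources."""
    n_hat = ((n_registry + 1) * (n_admissions + 1)) / (n_both + 1) - 1
    missed_by_both = n_hat - (n_registry + n_admissions - n_both)
    return n_hat, missed_by_both

# Placeholder counts: cases in the registry, in the admissions dataset, and in both.
n_hat, missed = chapman_estimate(n_registry=2000, n_admissions=1800, n_both=1200)
print(round(n_hat), round(missed))
```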
Thermodynamics and biological properties of the aqueous solutions of new glucocationic surfactants.
Thermodynamic properties of aqueous solutions of newly synthesized compounds, namely, N-[2-(beta-D-glucopyranosyl)ethyl]-N,N-dimethyl-N-alkylammonium bromides with hydrophobic tails of 12 (C12DGCB) and 16 (C16DGCB) carbon atoms, determined as a function of concentration by means of direct methods, are reported here. Dilution enthalpies, densities, and sound velocities were measured at 298 K, allowing for the determination of apparent and partial molar enthalpies, volumes, and compressibilities. Changes in thermodynamic quantities upon micellization were derived using a pseudophase-transition approach. From a comparison with the corresponding acetylated compounds N-[2-(2,3,4,6-tetra-O-acetyl-beta-D-glucopyranosyl)ethyl]-N,N-dimethyl-N-dodecylammonium bromide (C12AGCB) and N-[2-(2,3,4,6-tetra-O-acetyl-beta-D-glucopyranosyl)ethyl]-N,N-dimethyl-N-hexadecylammonium bromide (C16AGCB), the role played in the micellization process by the acetylated glycosyl moiety was inferred: it enhances the hydrophobic character of the molecule and lowers the change in enthalpy of micelle formation by about 1.5 kJ mol(-1). By comparing the volume of C12DGCB with those of DEDAB and DTAB, the volumes taken up by the (beta-D-glucopyranosyl)ethyl and beta-D-glucopyranosyl groups were found to be 133 and 99 cm3 mol(-1), respectively. Regarding the interaction with DPPC membranes, it seems that the sugar moiety of the hexadecyl deacetylated compound gives rise to hydrogen bonds with the oxygen atoms of the lipid phosphates, shifting the phase transition of DPPC from a bilayer gel to a bilayer liquid crystal to lower temperatures. C16AGCB induces significantly greater changes than C16DGCB in the structure of liposomes, suggesting the formation of domains. The interaction is strongly enhanced by the presence of water. Neither compound interacts strongly with DNA or compacts it, as shown by EMSA assays and AFM images. Only C16AGCB is able to deliver a small amount of DNA into cells when coformulated with DOPE, as shown by the transient transfection assay. This might be related to the ability of C16AGCB to form surfactant-rich domains in the lipid structure.
Color Correction of Underwater Images for Aquatic Robot Inspection
In this paper, we consider the problem of color restoration using statistical priors. This is applied to color recovery for underwater images, using an energy minimization formulation. Underwater images present a challenge when trying to correct the blue-green monochrome look to bring out the color we know marine life has. For aquatic robot tasks, the quality of the images is crucial and needed in real-time. Our method enhances the color of the images by using a Markov Random Field (MRF) to represent the relationship between color-depleted and color images. The parameters of the MRF model are learned from the training data and then the most probable color assignment for each pixel in the given color-depleted image is inferred by using belief propagation (BP). This allows the system to adapt the color restoration algorithm to the current environmental conditions and also to the task requirements. Experimental results on a variety of underwater scenes demonstrate the feasibility of our method.
Clinical trial of ghrelin synthesis administration for upper GI surgery.
Loss of appetite and weight following gastrectomy or esophagectomy is one of the major problems affecting postoperative QoL. Ghrelin, mainly secreted from the stomach, is related to appetite, weight gain, and positive energy balance. Levels of this hormone have been shown to remain low for a long time after upper GI surgery. The efficacy of synthetic ghrelin administration for postoperative weight loss was investigated in a clinical trial to develop a new strategy for weight gain. In addition to this treatment for appetite and weight loss, we focused on the anti-inflammatory role of ghrelin. For the purpose of controlling the postoperative cytokine storm after esophagectomy, this hormone was introduced in a clinical trial. Finally, ghrelin replacement therapy during chemotherapy in patients with esophageal cancer is also presented. Our clinical trials and their results are presented in this chapter.
Sentiment analysis of Twitter data within big data distributed environment for stock prediction
This paper covers the design, implementation and evaluation of a system that may be used to predict future stock prices based on analysis of data from social media services. The authors took advantage of large datasets available from the Twitter microblogging platform and widely available stock market records. Data was collected over three months and processed for further analysis. Machine learning was employed to conduct sentiment classification of data coming from social networks in order to estimate future stock prices. Calculations were performed in a distributed environment according to the MapReduce programming model. An evaluation and discussion of the prediction results for different time intervals and input datasets, which demonstrate the efficiency of the chosen approach, is also presented.
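A minimal single-machine sketch of the sentiment-classification step; the toy tweets, labels, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions, and the system described above ran the equivalent computation as MapReduce jobs over much larger data.

```python
# Illustrative tweet sentiment classifier feeding a daily sentiment score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["great quarterly results, very bullish", "terrible guidance, selling everything",
          "earnings beat expectations", "stock is tanking after the report"]
labels = [1, 0, 1, 0]   # 1 = positive sentiment, 0 = negative (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)

# Daily sentiment score = fraction of positive tweets; this is the kind of
# aggregate signal that would be fed into a price-prediction model.
new_tweets = ["optimistic about next quarter", "weak demand, expect a drop"]
daily_sentiment = clf.predict(new_tweets).mean()
print(daily_sentiment)
```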
Association between treatment or usual care region and hospitalization for fall-related traumatic brain injury in the Connecticut Collaboration for Fall Prevention.
OBJECTIVES To evaluate the association between the treatment region (TR) or usual care region (UCR) of the Connecticut Collaboration for Fall Prevention (CCFP), a clinical intervention for prevention of falls, and the rate of hospitalization for fall-related traumatic brain injury (FR-TBI) in persons aged 70 and older and to describe the Medicare charges for FR-TBI hospitalizations. DESIGN Using a quasi-experimental design, rates of hospitalization for FR-TBI were recorded over an 8-year period (2000-2007) in two distinct geographic regions (TR and UCR) chosen for their similarity in characteristics associated with occurrence of falls. SETTING Two geographical regions in Connecticut. PARTICIPANTS More than 200,000 persons aged 70 and older. INTERVENTION Clinicians in the TR translated research protocols from the Yale Frailty and Injuries: Cooperative Studies of Intervention Techniques, a successful fall-prevention randomized clinical trial, into discipline- and site-specific fall-prevention procedures for integration into their clinical practices. MEASUREMENTS Rate of hospitalization for FR-TBI in persons aged 70 and older. RESULTS Connecticut Collaboration for Fall Prevention's TR exhibited lower rates of hospitalization for FR-TBI than the UCR (risk ratio = 0.84, 95% credible interval = 0.72-0.99). CONCLUSION The significantly lower rate of hospitalization for FR-TBI in CCFP's TR suggests that the engagement of practicing clinicians in the implementation of evidence-based fall-prevention practices may reduce hospitalizations for FR-TBI.