Protective immunity of E. coli-synthesized NS1 protein of Japanese encephalitis virus
Recombinant Japanese encephalitis virus (JEV) NS1 proteins delivered by DNA vaccines and recombinant viruses have been demonstrated to protect mice against lethal JEV challenge. The NS1 region of West Nile virus expressed in E. coli is recognized by protective monoclonal antibodies and, in this study, we compare the immunogenicity and protective immunity of the E. coli-synthesized NS1 protein with those of another protective immunogen, the envelope domain III (ED3). Pre-challenge, detectable titers of JEV-specific neutralizing antibody were found in mice immunized with the E. coli-synthesized ED3 protein (PRNT50 = 1:28) and with the attenuated JEV strain T1P1 (PRNT50 = 1:53), but neutralizing antibodies were undetectable in mice immunized with the E. coli-synthesized NS1 protein (PRNT50 < 1:10). However, the survival rate of the NS1-immunized mice against the JEV challenge was 87.5% (7/8), a significantly higher level of protection than that of the ED3-immunized mice, 62.5% (5/8) (P = 0.041). In addition, the E. coli-synthesized NS1 protein induced a significant increase in anti-NS1 IgG1 antibodies, resulting in an ELISA titer of 100,1000 in the immunized sera before lethal JEV challenge. Surviving mice challenged with the virulent JEV strain Beijing-1 showed a ten-fold or greater rise in IgG1 and IgG2b titers of anti-NS1 antibodies, implying that Th2 cell activation might be predominantly responsible for the antibody responses and the protection of mice.
Online content-aware video condensation
Explosive growth of surveillance video data presents formidable challenges to its browsing, retrieval and storage. Video synopsis, an innovation proposed by Peleg and his colleagues, aims at fast browsing by condensing a video into a short synopsis while keeping the activities captured by the camera. However, current techniques are offline methods that require all video data to be available before processing, and they are expensive in both time and space. In this paper, we propose an online and efficient solution, together with its supporting algorithms, to overcome these problems. The method adopts an online content-aware approach in a step-wise manner, making it applicable to endless video at lower computational cost. Moreover, we propose a novel tracking method, called sticky tracking, to achieve high-quality visualization. The system achieves faster-than-real-time speed with a multi-core CPU implementation. The advantages are demonstrated by extensive experiments on a wide variety of videos. The proposed solution and algorithms could be integrated with surveillance cameras and could change the way surveillance videos are recorded.
Reliability and validity of a tool to measure the severity of tongue thrust in children: the Tongue Thrust Rating Scale.
This study aimed to develop a scale, the Tongue Thrust Rating Scale (TTRS), which categorises tongue thrust in children in terms of its severity during swallowing, and to investigate its validity and reliability. The study describes the developmental phase of the TTRS and presents its content and criterion-based validity and its interobserver and intra-observer reliability. For content validation, seven experts assessed the steps in the scale over two Delphi rounds. Two physical therapists evaluated videos of 50 children with cerebral palsy (mean age, 57·9 ± 16·8 months), using the TTRS to test criterion-based validity and interobserver and intra-observer reliability. The Karaduman Chewing Performance Scale (KCPS) and the Drooling Severity and Frequency Scale (DSFS) were used for criterion-based validity. All the TTRS steps were deemed necessary. The content validity index was 0·857. A very strong positive correlation was found between two examinations by one physical therapist, indicating intra-observer reliability (r = 0·938, P < 0·001). A very strong positive correlation was also found between the TTRS scores of the two physical therapists, indicating interobserver reliability (r = 0·892, P < 0·001). There was also a strong positive correlation between the TTRS and the KCPS (r = 0·724, P < 0·001) and a very strong positive correlation between the TTRS scores and the DSFS (r = 0·822 and r = 0·755; P < 0·001). These results demonstrate the criterion-based validity of the TTRS. The TTRS is a valid, reliable and clinically easy-to-use functional instrument for documenting the severity of tongue thrust in children.
Role of Connectivity in Citizen Centered E-Governance in Myanmar: Learning from Indian Experience
Globally, e-Governance has been maturing rapidly for rendering citizen-centered services. Many nations have recognized the need to make e-Governance part of their national agenda and development goals. Despite the growing acceptance of e-Governance as a service and infrastructure, many nations still face the challenge of the digital divide. These challenges, to varying degrees, are especially prevalent in developing nations. However, nations still in the planning stage have the opportunity to learn from others and adopt a benchmarked process for e-Governance. Many successful e-Governance plans call for supporting telecommunication infrastructure at the national level, with due importance given to connectivity in the access layer of the service delivery architecture. In this paper, the case of Myanmar, which is in the early stage of adopting an e-Governance plan and is now investing in the deployment of its connectivity plan, is discussed. India, a neighboring country with a similar demographic, social and cultural environment and with experience in managing the e-Governance project life cycle, provides a useful reference for Myanmar's e-Governance project.
Attitudes and actions of asthma patients on regular maintenance therapy: the INSPIRE study
BACKGROUND This study examined the attitudes and actions of 3415 physician-recruited adults aged ≥16 years with asthma in eleven countries who were prescribed regular maintenance therapy with inhaled corticosteroids or inhaled corticosteroids plus long-acting beta2-agonists. METHODS Structured interviews were conducted to assess medication use, asthma control, and patients' ability to recognise and self-manage worsening asthma. RESULTS Despite being prescribed regular maintenance therapy, 74% of patients used short-acting beta2-agonists daily and 51% were classified by the Asthma Control Questionnaire as having uncontrolled asthma. Even patients with well-controlled asthma reported an average of 6 worsenings/year. The mean period from the onset to the peak symptoms of a worsening was 5.1 days. Although most patients recognised the early signs of worsenings, the most common response was to increase short-acting beta2-agonist use; inhaled corticosteroids were increased to a lesser extent at the peak of a worsening. CONCLUSION Previous studies of this nature have also reported considerable patient morbidity, but in those studies approximately three-quarters of patients were not receiving regular maintenance therapy and not all had a physician-confirmed diagnosis of asthma. This study shows that patients with asthma receiving regular maintenance therapy still have high levels of inadequately controlled asthma. The study also shows that patients recognise deteriorating asthma control and adjust their medication during episodes of worsening. However, they often adjust treatment in an inappropriate manner, which represents a window of missed opportunity.
Personalized news recommendation based on click behavior
Online news reading has become very popular as the web provides access to news articles from millions of sources around the world. A key challenge for news websites is to help users find the articles that are interesting to them. In this paper, we present our research on developing a personalized news recommendation system for Google News. For users who are logged in and have explicitly enabled web history, the recommendation system builds profiles of users' news interests based on their past click behavior. To understand how users' news interests change over time, we first conducted a large-scale analysis of anonymized Google News users' click logs. Based on the log analysis, we developed a Bayesian framework for predicting users' current news interests from the activities of that particular user and the news trends demonstrated in the activity of all users. We combine the content-based recommendation mechanism, which uses learned user profiles, with an existing collaborative filtering mechanism to generate personalized news recommendations. The hybrid recommender system was deployed in Google News. Experiments on the live traffic of the Google News website demonstrated that the hybrid method improves the quality of news recommendation and increases traffic to the site.
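The abstract describes blending a content-based score from a click-derived interest profile with a collaborative-filtering score, but does not give the exact combination rule. The following Python sketch is a hypothetical illustration of such a hybrid: all names (build_profile, hybrid_score) and the mixing weight alpha are assumptions, not Google News internals.

    # Hypothetical sketch of a hybrid news recommender: content-based score from a
    # click-derived interest profile, blended with a collaborative-filtering score.
    # Names and the blending rule are illustrative, not from the paper.

    from collections import Counter

    def build_profile(clicked_categories):
        """Interest profile = normalized frequency of categories in past clicks."""
        counts = Counter(clicked_categories)
        total = sum(counts.values())
        return {cat: n / total for cat, n in counts.items()}

    def content_score(profile, article_categories):
        """Probability mass the user's profile puts on the article's categories."""
        return sum(profile.get(cat, 0.0) for cat in article_categories)

    def hybrid_score(profile, article_categories, cf_score, alpha=0.5):
        """Blend content-based and collaborative-filtering scores (alpha is assumed)."""
        return alpha * content_score(profile, article_categories) + (1 - alpha) * cf_score

    if __name__ == "__main__":
        profile = build_profile(["sports", "sports", "technology", "politics"])
        candidates = {
            "a1": (["technology"], 0.40),        # (categories, CF score from a co-click model)
            "a2": (["politics", "world"], 0.70),
            "a3": (["sports"], 0.20),
        }
        ranked = sorted(candidates.items(),
                        key=lambda kv: hybrid_score(profile, kv[1][0], kv[1][1]),
                        reverse=True)
        for article_id, (cats, cf) in ranked:
            print(article_id, round(hybrid_score(profile, cats, cf), 3))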
Nursing care of the small animal neurological patient.
Nursing care of long-term recumbent small animals, with emphasis on the neurological patient, is described. Principles of general nursing care, particularly nutritional support and the prevention and treatment of urinary complications, are of major concern in any weak or recumbent patient. The estimation of nutritional requirements and adjustments, information on South African commercial liquid diets and practical rehabilitation are described.
Knowledge and Trust in E-consumers' Online Shopping Behavior
Consumer trust is a critical enabler of the success of online retailing, and knowledge is one important factor influencing the level of trust. However, there is no consensus on the relationship between knowledge and trust. Some studies argued for a negative relationship between knowledge and trust while others argued for a positive one. This study discusses the relationship between knowledge, trust in online shopping, and the intention to shop online. The results revealed that knowledge is positively associated with trust and online shopping activities. In other words, people who know more about online shopping trust it more and shop online more. Online retailing practice should make the public knowledgeable about online transaction security mechanisms to build users' trust in online shopping.
The semantics of poetry: A distributional reading
Poetry is rarely a focus of linguistic investigation. This is far from surprising, as poetic language, especially in modern and contemporary literature, seems to defy the general rules of syntax and semantics. This paper assumes, however, that linguistic theories should ideally be able to account for creative uses of language, down to their most difficult incarnations. It proposes that at the semantic level, what distinguishes poetry from other uses of language may be its ability to trace conceptual patterns which do not belong to everyday discourse but are latent in our shared language structure. Distributional semantics provides a theoretical and experimental basis for this exploration. First, the notion of a specific ‘semantics of poetry’ is discussed, with some help from literary criticism and philosophy. Then, distributionalism is introduced as a theory supporting the notion that the meaning of poetry comes from the meaning of ordinary language. In the second part of the paper, experimental results are provided showing that a) distributional representations can model the link between ordinary and poetic language, b) a distributional model can experimentally distinguish between poetic and randomised textual output, regardless of the complexity of the poetry involved, c) there is a stable, but not immediately transparent, layer of meaning in poetry, which can be captured distributionally, across different levels of poetic complexity.
A dynamic model for integrating simple web spam classification techniques
Over the last few years, Internet spam content has spread enormously across web sites, mainly due to the emergence of new web technologies oriented towards the online sharing of resources and information. In this situation, both academia and industry have shown their concern to accurately detect and effectively control web spam, resulting in a good number of anti-spam techniques currently available. However, the successful integration of different algorithms for web spam classification is still a challenge. In this context, the present study introduces WSF2, a novel web spam filtering framework specifically designed to take advantage of multiple classification schemes and algorithms. In detail, our approach encodes the life cycle of a case-based reasoning system, being able to use appropriate knowledge and dynamically adjust different parameters to ensure continuous improvement in filtering precision over time. In order to correctly evaluate the effectiveness of the dynamic model, we designed a set of experiments involving a publicly available corpus, as well as different simple well-known classifiers and ensemble approaches. The results revealed that WSF2 performed well, being able to take advantage of each classifier and to achieve better performance than other alternatives. WSF2 is an open-source project licensed under the terms of the LGPL, publicly available at https://sourceforge.net/
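WSF2 combines multiple spam classifiers and adjusts parameters over time within a case-based-reasoning life cycle; the exact update rules are not given in the abstract. The sketch below shows a generic weighted-voting ensemble whose per-classifier weights are nudged after each labeled case, which is one simple way such dynamic adjustment can work. The classifiers, weights, and learning rate are assumptions, not WSF2's actual mechanism.

    # Generic weighted-voting ensemble with online weight adjustment, as a rough
    # analogue of a filtering framework that adapts classifier influence over time.

    class DynamicEnsemble:
        def __init__(self, classifiers, lr=0.1):
            self.classifiers = classifiers          # callables: page -> 0 (ham) or 1 (spam)
            self.weights = [1.0] * len(classifiers) # start with equal influence
            self.lr = lr

        def predict(self, page):
            votes = [clf(page) for clf in self.classifiers]
            score = sum(w * v for w, v in zip(self.weights, votes)) / sum(self.weights)
            return 1 if score >= 0.5 else 0

        def update(self, page, true_label):
            # Reward classifiers that agreed with the ground truth, penalize the rest.
            for i, clf in enumerate(self.classifiers):
                correct = clf(page) == true_label
                self.weights[i] = max(0.05, self.weights[i] + (self.lr if correct else -self.lr))

    # Toy classifiers keyed on hypothetical page features.
    has_many_links = lambda page: 1 if page["link_ratio"] > 0.6 else 0
    keyword_stuffing = lambda page: 1 if page["keyword_density"] > 0.3 else 0

    ensemble = DynamicEnsemble([has_many_links, keyword_stuffing])
    page = {"link_ratio": 0.8, "keyword_density": 0.1}
    print(ensemble.predict(page))
    ensemble.update(page, true_label=1)
    print(ensemble.weights)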
A holistic approach for quality assurance and advanced decision making for academic institutions using the balanced scorecard technique
Quality assurance in higher education has been established officially, through the institution of appropriate procedures and the appointment of supervising bodies. After an initial period of experimental application of evaluation activities, the need to optimize processes and exploit the recorded observations has arisen; the obvious aim is the provision of educational processes in accordance with quality standards. Within this framework, the Balanced Scorecard technique assists in converting the organizational scope into measurable indexes and processes. In order to be applied to the educational domain, however, this procedure demands an appropriate adjustment of the model to educational needs. Accordingly, this work describes an attempt to develop the required infrastructure in processes, information systems and regulations for a technological institution, in order to ensure quality in relation to the goals agreed with the Hellenic Quality Assurance and Accreditation Agency and to support the administration of a higher education institution in taking decisions in a contemporary and continuously improving manner.
A Survey of Visualization Systems for Network Security
Security visualization is a very young term. It expresses the idea that common visualization techniques have been designed for use cases that do not support security-related data, demanding novel techniques fine-tuned for the purpose of thorough analysis. A significant amount of work has been published in this area, but little work has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent work in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.
Location management in cellular mobile computing systems with dynamic hierarchical location databases
An important issue in the design of a mobile computing system is how to manage the location information of mobile clients. In existing commercial cellular mobile computing systems, a two-tier architecture is adopted (Mouly and Pautet, 1992). However, the two-tier architecture is not scalable. In the literature (Pitoura and Samaras, 2001; Pitoura and Fudos, 1998), a hierarchical database structure has been proposed in which the location information of mobile clients within a cell is managed by the location database responsible for that cell. The location databases of different cells are organized into a tree-like structure to facilitate the search for mobile clients. Although this architecture can distribute the update and search workload amongst the location databases in the system, location update overheads can be very expensive when client mobility is high. In this paper, we study how to generate location updates under the distance-based method for systems using hierarchical location databases. A cost-based method is proposed for calculating the optimal distance threshold with the objective of minimizing the total location management cost. Furthermore, under the existing hierarchical location database scheme, the tree structure of the location databases is static. It cannot adapt to changes in the mobility patterns of mobile clients, which affects the total location management cost of the system. In the second part of the paper, we present a re-organization strategy to re-structure the hierarchical tree of location databases according to the mobility patterns of the clients, with the objective of minimizing the location management cost. Extensive simulation experiments have been performed to investigate the re-organization strategy when our location update generation method is applied.
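The abstract mentions a cost-based method that picks the distance threshold minimizing total location management cost, trading update cost against search cost. The paper's actual cost model is not reproduced here; the sketch below uses a deliberately simplified toy model, in which updates become rarer as the threshold d grows while paging cost grows with d, just to illustrate how such a threshold can be chosen numerically. All cost functions and parameters are assumptions.

    # Toy cost model for choosing a distance-based location-update threshold.
    # Update frequency is taken as proportional to mobility / d, and search
    # (paging) cost as proportional to the area within distance d; these are
    # illustrative assumptions, not the paper's model.

    def total_cost(d, mobility=50.0, update_cost=1.0, search_cost=0.02, call_rate=2.0):
        expected_updates = mobility / d                       # fewer updates with a larger threshold
        expected_search = call_rate * search_cost * d * d     # paging over ~d^2 cells per call
        return update_cost * expected_updates + expected_search

    def best_threshold(candidates):
        return min(candidates, key=total_cost)

    if __name__ == "__main__":
        candidates = range(1, 31)
        d_star = best_threshold(candidates)
        print("optimal threshold:", d_star, "cost:", round(total_cost(d_star), 2))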
Featureless Visual Processing for SLAM in Changing Outdoor Environments
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which brings with it the challenge, in outdoor environments, of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
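The approach relies on matching low-resolution, appearance-normalized images rather than visual features. As a rough illustration of that idea (not the RatSLAM pipeline itself), the sketch below downsamples grayscale frames by block averaging, normalizes them, and compares them with a mean absolute difference; the resolution and normalization choices are assumptions.

    # Minimal sketch of featureless low-resolution image comparison for place
    # recognition, in the spirit of lightweight visual SLAM front-ends.
    # The resolution (32x24) and normalization scheme are illustrative assumptions.

    import numpy as np

    def downsample(gray, out_h=24, out_w=32):
        """Block-average a 2D grayscale array to a small fixed resolution."""
        h, w = gray.shape
        ys = np.arange(out_h + 1) * h // out_h
        xs = np.arange(out_w + 1) * w // out_w
        out = np.empty((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
        return out

    def normalize(img):
        """Zero-mean, unit-variance normalization to reduce illumination effects."""
        return (img - img.mean()) / (img.std() + 1e-6)

    def image_difference(a, b):
        """Mean absolute difference between two normalized low-resolution images."""
        return np.abs(normalize(downsample(a)) - normalize(downsample(b))).mean()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scene = rng.random((240, 320))
        same_place = scene * 0.6 + 0.2          # same view under a global brightness change
        other_place = rng.random((240, 320))
        print("same place:", round(image_difference(scene, same_place), 3))
        print("other place:", round(image_difference(scene, other_place), 3))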
Aligning where to see and what to tell: image caption with region-based attention and scene factorization
Recent progress on the automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image caption system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience, where attention shifting among the visual regions imposes a thread of visual ordering. This alignment characterizes the flow of “abstract meaning”, encoding what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system against published results on several popular datasets. We show that using either region-based attention or scene-specific contexts improves over systems without those components. Furthermore, combining these two modeling ingredients attains state-of-the-art performance.
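At each word-generation step, region-based attention forms a context vector as an attention-weighted sum of region features. The following numpy sketch shows that computation in isolation; the dot-product scoring function and the dimensions are assumptions rather than the paper's exact architecture.

    # Stand-alone sketch of soft attention over image-region features: score each
    # region against the current decoder state, softmax the scores, and take the
    # weighted sum as the visual context for the next word. Shapes are illustrative.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend(region_features, decoder_state):
        """region_features: (num_regions, d); decoder_state: (d,) -> (context, weights)."""
        scores = region_features @ decoder_state        # dot-product relevance scores
        weights = softmax(scores)                       # attention distribution over regions
        context = weights @ region_features             # weighted sum of region features
        return context, weights

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        regions = rng.normal(size=(6, 8))   # 6 regions, 8-dimensional features
        state = rng.normal(size=8)          # current language-model hidden state
        context, weights = attend(regions, state)
        print("attention weights:", np.round(weights, 3))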
Reduction in catheter-associated urinary tract infections by bundling interventions.
OBJECTIVE Urinary tract infections (UTIs) are the most common type of hospital-acquired infection, and most are associated with indwelling urinary catheters, that is, catheter-associated UTIs (CAUTIs). Our goal was to reduce the CAUTI rate. DESIGN SETTING INTERVENTIONS We retrospectively examined the feasibility and cost-effectiveness of a bundle of four evidence-based interventions, and its effect upon the incidence rate (IR) of CAUTIs, in a community hospital. The first intervention was the exclusive use of silver alloy catheters in the hospital's acute care areas. The second intervention was a securing device to limit the movement of the catheter after insertion. The third intervention was repositioning of the catheter tubing if it was found to be touching the floor. The fourth intervention was removal of the indwelling urinary catheter on postoperative Day 1 or 2 for most surgical patients. MAIN OUTCOME MEASURE Rates of CAUTI per 1000 catheter days were estimated and compared using generalized estimating equations Poisson regression analysis. RESULTS During the study period, 33 of the 2228 patients were diagnosed with a CAUTI. The CAUTI IR for the pre-intervention period was 5.2/1000 catheter days. For the 7 months following the implementation of the fourth intervention, the IR was 1.5/1000 catheter days, a significant reduction relative to the pre-intervention period (P = 0.03). The annualized projection for the cost of implementing this bundle of four interventions is $23,924. CONCLUSION A bundle of four evidence-based interventions reduced the incidence of CAUTIs in a community hospital. It is relatively simple, appears to be cost-effective and might be sustainable and adaptable by other hospitals.
Font and Background Color Independent Text
We propose a novel method for the binarization of color documents whereby the foreground text is output as black and the background as white, regardless of the polarity of the foreground-background shades. The method employs an edge-based connected component approach and automatically determines a threshold for each component. It has several advantages over existing binarization methods. Firstly, it can handle documents with multi-colored text on different background shades. Secondly, the method is applicable to documents having text of widely varying sizes, which are usually not handled by local binarization methods. Thirdly, the method automatically computes both the threshold for binarization and the logic for inverting the output from the image data, and does not require any input parameter. The proposed method has been applied to a broad range of target document types and environments and is found to have good adaptability.
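The method thresholds each connected component individually and decides the output polarity from the image itself. The sketch below is a heavily simplified, hypothetical variant: components come from a crude intensity mask rather than edges, each gets its own threshold from its local mean, and polarity is chosen by comparing the component to its surrounding border. It only illustrates per-component thresholding with automatic inversion, not the paper's edge-based algorithm.

    # Simplified per-component binarization with automatic polarity: each component
    # is thresholded with its own local statistic, and whichever side is "text"
    # (judged against the surrounding border) is written as black on white.
    # Illustrative stand-in, not the edge-based method of the paper.

    import numpy as np
    from scipy import ndimage

    def binarize(gray):
        out = np.full(gray.shape, 255, dtype=np.uint8)        # white background
        rough = np.abs(gray - gray.mean()) > gray.std()       # crude mask of candidate components
        labels, n = ndimage.label(rough)
        for k in range(1, n + 1):
            comp = labels == k
            border = ndimage.binary_dilation(comp, iterations=2) & ~comp
            if border.sum() == 0:
                continue
            thresh = gray[comp].mean()                        # per-component threshold
            dark_text = thresh < gray[border].mean()          # polarity from the surroundings
            text_pixels = comp & ((gray <= thresh) if dark_text else (gray >= thresh))
            out[text_pixels] = 0                              # text rendered as black
        return out

    if __name__ == "__main__":
        light_bg = np.full((40, 80), 200.0)
        light_bg[15:25, 20:60] = 30.0          # dark text on a light background
        dark_bg = 255.0 - light_bg             # the same text with inverted polarity
        print(binarize(light_bg)[20, 40], binarize(dark_bg)[20, 40])   # both print 0 (black text)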
Geochemistry of Mississippian tuffs from the Ouachita Mountains, and implications for the tectonics of the Ouachita orogen, Oklahoma and Arkansas
The Ouachita orogeny was the result of plate convergence at the southern margin of the North American continent, although the nature of the converging southern plate and the direction of subduction remain uncertain. The presence of areally extensive tuff layers interbedded with shale in the Mississippian Stanley Group of the Ouachita Mountains, Oklahoma and Arkansas, provides the potential to define the tectonic environment of volcanism from the geochemistry of the tuffs and thereby delimit the subduction configuration. The tuffs contain relic primary magmatic quartz, plagioclase, and alkali feldspar and range from crystal- to vitric-rich. Mineralogical sorting and diagenetic effects have caused chemical variability within individual tuff units, but overall the tuffs have retained their primary igneous geochemical characteristics. The Beavers Bend and Hatton tuffs are geochemically very similar and are more evolved (higher SiO2, Rb, Th, REE; lower Sr, Ba) than the Mud Creek tuff. The stratigraphically equivalent Sabine Rhyolite is geochemically distinct from the tuffs, having less fractionated rare earth element (REE) patterns and different trace element ratios. Both the Ouachita tuffs and Sabine Rhyolite have the geochemical characteristics of subduction-related magmas (for example, strong depletion of Nb and Ta relative to other incompatible trace elements). Consideration of trace element systematics in the tuffs compared to those of modern high-silica volcanic rocks from different subduction-related tectonic settings suggests a continental arc origin and implies southward subduction beneath a southern continent during Carboniferous ocean basin closure.
Brief report: development of the adolescent empathy and systemizing quotients.
Adolescent versions of the Empathy Quotient (EQ) and Systemizing Quotient (SQ) were developed and administered to n = 1,030 parents of typically developing adolescents, aged 12-16 years. Both measures showed good test-retest reliability and high internal consistency. Girls scored significantly higher on the EQ, and boys scored significantly higher on the SQ. A sample of adolescents with Autism Spectrum Conditions (ASC) (n = 213) scored significantly lower on the EQ, and significantly higher on the SQ, compared to typical boys. Similar patterns of sex differences and cognitive brain types are observed in children, adolescents and adults, suggesting from cross-sectional studies that the behaviours measured by age-appropriate versions of the EQ and SQ are stable across time. Longitudinal studies would be useful to test this stability in the future. Finally, relative to typical sex differences, individuals with ASC, regardless of age, on average exhibit a 'hyper-masculinized' profile.
HIV viraemia and mother-to-child transmission risk after antiretroviral therapy initiation in pregnancy in Cape Town, South Africa.
OBJECTIVES Maternal HIV viral load (VL) drives mother-to-child HIV transmission (MTCT) risk but there are few data from sub-Saharan Africa, where most MTCT occurs. We investigated VL changes during pregnancy and MTCT following antiretroviral therapy (ART) initiation in Cape Town, South Africa. METHODS We conducted a prospective study of HIV-infected women initiating ART within routine antenatal services in a primary care setting. VL measurements were taken before ART initiation and up to three more times within 7 days postpartum. Analyses examined VL changes over time, viral suppression (VS) at delivery, and early MTCT based on polymerase chain reaction (PCR) testing up to 8 weeks of age. RESULTS A total of 620 ART-eligible HIV-infected pregnant women initiated ART, with 2425 VL measurements by delivery (median gestation at initiation, 20 weeks; median pre-ART VL, 4.0 log10 HIV-1 RNA copies/mL; median time on ART before delivery, 118 days). At delivery, 91% and 73% of women had VL ≤ 1000 and ≤ 50 copies/mL, respectively. VS was strongly predicted by time on therapy and pre-ART VL. The risk of early MTCT was strongly associated with delivery VL, with risks of 0.25, 2.0 and 8.5% among women with VL < 50, 50-1000 and > 1000 copies/mL at delivery, respectively (P < 0.001). CONCLUSIONS High rates of VS at delivery and low rates of MTCT can be achieved in a routine care setting in sub-Saharan Africa, indicating the effectiveness of currently recommended ART regimens. Women initiating ART late in pregnancy and with high VL appear substantially less likely to achieve VS and require targeted research and programmatic attention.
Security analysis on consumer and industrial IoT devices
The fast development of the Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand for smart devices that are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the time-to-market pressure of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and to promote the development of low-cost IoT security methods, in this paper we use both consumer and industrial IoT devices as examples from which the security of hardware, software, and networks is analyzed and backdoors are identified. A detailed security analysis procedure is elaborated for a home automation system and a smart meter, showing that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods are also discussed to help IoT manufacturers secure their products.
Application of adaptive design methodology in development of a long-acting glucagon-like peptide-1 analog (dulaglutide): statistical design and simulations.
BACKGROUND Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. METHODS To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. A Bayesian theoretical framework is used to adaptively randomize patients in stage 1 among 7 dula doses and, at the decision point, either to stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned to the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. RESULTS Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for the adaptive design). CONCLUSIONS This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm, including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design.
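The trial adaptively updates randomization probabilities from accumulating data, but its actual algorithm (with the clinical utility index and decision rules) is not reproduced in the abstract. The sketch below shows a generic Bayesian response-adaptive scheme in which each dose's randomization probability is proportional to the Monte Carlo posterior probability that it is the best dose, using a simple normal-normal model. All priors, the response model, and the "best = largest mean" criterion are assumptions, not the trial's rules.

    # Generic Bayesian response-adaptive randomization sketch (illustrative only).

    import numpy as np

    def posterior_params(y, prior_mean=0.0, prior_var=4.0, obs_var=1.0):
        """Posterior mean/variance for a normal mean with known observation variance."""
        n = len(y)
        if n == 0:
            return prior_mean, prior_var
        post_var = 1.0 / (1.0 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + np.sum(y) / obs_var)
        return post_mean, post_var

    def randomization_probs(responses_by_dose, draws=5000, rng=None):
        rng = rng or np.random.default_rng(0)
        samples = []
        for y in responses_by_dose:
            m, v = posterior_params(np.asarray(y))
            samples.append(rng.normal(m, np.sqrt(v), size=draws))
        samples = np.stack(samples)                       # (num_doses, draws)
        best_counts = np.bincount(samples.argmax(axis=0), minlength=len(responses_by_dose))
        return best_counts / draws

    if __name__ == "__main__":
        # Hypothetical early data from three dose arms (larger response = better).
        data = [[0.2, 0.5], [1.1, 0.9, 1.4], [0.7]]
        print(np.round(randomization_probs(data), 3))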
Effect of deltoid tension and humeral version in reverse total shoulder arthroplasty: a biomechanical study.
BACKGROUND No clear recommendations exist regarding optimal humeral component version and deltoid tension in reverse total shoulder arthroplasty (TSA). MATERIALS AND METHODS A biomechanical shoulder simulator tested humeral versions (0°, 10°, 20° retroversion) and implant thicknesses (-3, 0, +3 mm from baseline) after reverse TSA in human cadavers. Abduction and external rotation ranges of motion as well as abduction and dislocation forces were quantified for native arms and arms implanted with 9 combinations of humeral version and implant thickness. RESULTS Resting abduction angles increased significantly (up to 30°) after reverse TSA compared with native shoulders. With constant posterior cuff loads, native arms externally rotated 20°, whereas no external rotation occurred in implanted arms (20° net internal rotation). Humeral version did not affect rotational range of motion but did alter resting abduction. Abduction forces decreased 30% vs native shoulders but did not change when version or implant thickness was altered. Humeral center of rotation was shifted 17 mm medially and 12 mm inferiorly after implantation. The force required for lateral dislocation was 60% less than anterior and was not affected by implant thickness or version. CONCLUSION Reverse TSA reduced abduction forces compared with native shoulders and resulted in limited external rotation and abduction ranges of motion. Because abduction force was reduced for all implants, the choice of humeral version and implant thickness should focus on range of motion. Lateral dislocation forces were less than anterior forces; thus, levering and inferior/posterior impingement may be a more probable basis for dislocation (laterally) than anteriorly directed forces.
Evaluation measures for hierarchical classification: a unified view and novel approaches
Hierarchical classification addresses the problem of classifying items into a hierarchy of classes. An important issue in hierarchical classification is the evaluation of different classification algorithms, an issue which is complicated by the hierarchical relations among the classes. Several evaluation measures have been proposed for hierarchical classification using the hierarchy in different ways without however providing a unified view of the problem. This paper studies the problem of evaluation in hierarchical classification by analysing and abstracting the key components of the existing performance measures. It also proposes two alternative generic views of hierarchical evaluation and introduces two corresponding novel measures. The proposed measures, along with the state-of-the-art ones, are empirically tested on three large datasets from the domain of text classification. The empirical results illustrate the undesirable behaviour of existing approaches and how the proposed methods overcome most of these problems across a range of cases.
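One common way to account for the hierarchy during evaluation is to expand both predicted and true label sets with all of their ancestors and compute set-based precision, recall, and F1 on the expanded sets. The sketch below implements that standard measure family as background for the discussion; it is not necessarily one of the two novel measures proposed in this paper.

    # Hierarchical precision/recall/F1 via ancestor expansion: predicted and true
    # label sets are augmented with all their ancestors in the class hierarchy
    # before computing set overlap. Standard measure family, shown for illustration.

    def ancestors(label, parent):
        """All ancestors of a label under a parent map (root labels map to None)."""
        result = set()
        while parent.get(label) is not None:
            label = parent[label]
            result.add(label)
        return result

    def expand(labels, parent):
        out = set(labels)
        for lab in labels:
            out |= ancestors(lab, parent)
        return out

    def hierarchical_prf(predicted, true, parent):
        p_hat, t_hat = expand(predicted, parent), expand(true, parent)
        overlap = len(p_hat & t_hat)
        precision = overlap / len(p_hat) if p_hat else 0.0
        recall = overlap / len(t_hat) if t_hat else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    if __name__ == "__main__":
        # Tiny hierarchy: root -> {science, sports}; science -> {physics, biology}.
        parent = {"science": None, "sports": None, "physics": "science", "biology": "science"}
        print(hierarchical_prf({"physics"}, {"biology"}, parent))   # partial credit via "science"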
Dimensional Synthesis of Evanescent-Mode Ridge Waveguide Bandpass Filters
This paper introduces a method that gives the dimensions of inline evanescent-mode ridge waveguide bandpass filters. Evanescent-mode couplings are evaluated individually, without optimization of the entire filter. This is obtained through an improved network model of the evanescent-mode coupling, together with novel analytical formulas to correct the resonators' slope parameters. Unlike prior works based on full-wave optimization of the overall structure, this method is fast and leads to accurate bandwidth results. Several filter examples are included to support the design method. A prototype filter has been manufactured and the RF measurements are in good agreement with theory.
Development and psychometric evaluation of a brief version of the hyperventilation questionnaire: the HVQ-B.
The fear of arousal sensations characterizes some anxiety disorders and is a core feature of an established risk factor for anxiety and related disorders (i.e. anxiety sensitivity; Taylor, 1999). Anxiety sensitivity (AS) refers to a fear of anxiety-related bodily sensations stemming from beliefs that these have catastrophic consequences. Interoceptive exposure (IE; repeated exposure to feared arousal sensations) has been shown to decrease AS. The 33-item Hyperventilation Questionnaire (HVQ; Rapee & Medoro, 1994) measures state levels of cognitive, affective, and somatic responses to IE and arousal induction exercises more generally. The aim of the present set of studies was to develop and evaluate a brief version of the HVQ, the HVQ-B, in order to facilitate its use in research and clinical settings. In Study 1, three existing data sets that used the long version of the HVQ were combined to select the items to be retained for the HVQ-B. In Study 2, the 18-item HVQ-B was administered and its psychometric properties were evaluated. In Study 3, a confirmatory factor analysis (CFA) was performed on the 18 items of the HVQ-B. The HVQ-B demonstrated excellent psychometric properties, and accounted for most of the variance of the questionnaire's longer version. CFA indicated a reasonably good fit of the three-factor measurement model. Finally, the HVQ-B was able to distinguish between responses to arousal induction exercises by high versus low AS participants. The HVQ-B is a useful tool to assess cognitive, affective, and somatic responsivity to arousal sensations in both research and practice.
Ground leakage current suppression in a 50 kW 5-level T-type transformerless PV inverter
In this paper, ground leakage current suppression in a 50 kW 5-level T-type transformerless PV inverter is presented. Compared to a 3-level T-type PV inverter, this topology allows a simple modulation method such as carrier-based (CB) PWM to be used to suppress the leakage current without the penalties of the traditional methods. Phase disposition (PD) and phase opposition disposition (POD) based CB PWM methods have been applied to this 5-level topology. The spectrum analysis demonstrates that PD and POD generate the same phase voltage spectrum. In addition, the common-mode (CM) voltage of the 5-level T-type PV inverter has been derived and the CM choke has been designed. The value of the CM choke is reduced by 73% compared to that of a 3-level T-type PV inverter. Simulation and experimental verification are provided.
Efficacy and safety of omeprazole in Japanese patients with nonerosive reflux disease
There is increasing awareness of nonerosive reflux disease (NERD) as a disease requiring treatment in Japan. This randomized, double-blind, placebo-controlled, parallel-group study was conducted to investigate the efficacy and safety of omeprazole 10 mg and 20 mg once daily in Japanese patients with NERD. Patients with heartburn for at least 2 days a week during the month before entry into the study and no endoscopic signs of a mucosal break (grade M or N according to Hoshihara’s modification of the Los Angeles classification) were randomly assigned to one of three groups (omeprazole 10 mg or 20 mg, or placebo) once daily for 4 weeks. Overall, 355 patients were enrolled, of whom 284 were randomly assigned to one of the three groups (omeprazole 10 mg, n = 96; omeprazole 20 mg, n = 93; placebo, n = 95). The rate of complete resolution of heartburn in week 4 was significantly higher in patients treated with omeprazole 10 mg [32.3%, 95% confidence interval (CI), 22.9%–41.6%] or 20 mg (25.8%, 95% CI, 16.9%–34.7%) than in the placebo group (12.0%, 95% CI, 5.3%–18.6%). No significant difference between the two omeprazole groups was observed. The rate of complete resolution of heartburn by omeprazole was similar between patients with grade M and those with grade N esophagus. Omeprazole also increased the rate of sufficient relief from heartburn. Omeprazole was well tolerated. Omeprazole 10 mg or 20 mg once daily is effective and well tolerated in patients with NERD regardless of their endoscopic classification.
La cavalcade des rois guerriers: les effigies équestres des souvenirs de Naples au Quattrocento
The cavalcade of the warrior kings. The equestrian effigies of the kings of Naples in the Quattrocento. Ladislas Anjou Duras, Alphonse V, Ferrante I and Alphonse II of Aragon were all Neapolitan sovereigns who distinguished themselves on the field of battle, and all were the subjects of equestrian effigies. An original synthesis of the problem of equestrian portraiture in Naples in the 15th century, this article highlights, through their equestrian effigies, the distinctive characteristics shared by these conquering kings who personally went to battle at the head of their armies. The effigies share numerous formal, iconographical and stylistic characteristics, in particular a double influence, both chivalrous and imperial, in a unique combination of International Gothic and Antique styles. Nevertheless, a proper understanding of these images, which form veritable tools of political communication, also brings out their differences and their singularity. Such instruments of legitimization and propaganda are also based on the play of repetition and the effects inherent in the serial process, as well as on the system of connections between these monarchs, in addition to the ties to their illustrious predecessors. The article thus highlights the particular type of official portrait represented by the equestrian effigy in the context of Naples in the Quattrocento, favored notably by a local equestrian tradition, and an important part of the military and artistic politics of the warrior kings.
Developmental changes in P1 and N1 central auditory responses elicited by consonant-vowel syllables.
Normal maturation and functioning of the central auditory system affects the development of speech perception and oral language capabilities. This study examined maturation of central auditory pathways as reflected by age-related changes in the P1/N1 components of the auditory evoked potential (AEP). A synthesized consonant-vowel syllable (ba) was used to elicit cortical AEPs in 86 normal children ranging in age from 6 to 15 years and ten normal adults. Distinct age-related changes were observed in the morphology of the AEP waveform. The adult response consists of a prominent negativity (N1) at about 100 ms, preceded by a smaller P1 component at about 50 ms. In contrast, the child response is characterized by a large P1 response at about 100 ms. This wave decreases significantly in latency and amplitude up to about 20 years of age. In children, P1 is followed by a broad negativity at about 200 ms which we term N1b. Many subjects (especially older children) also show an earlier negativity (N1a). Both N1a and N1b latencies decrease significantly with age. Amplitudes of N1a and N1b do not show significant age-related changes. All children have the N1b; however, the frequency of occurrence of N1a increases with age. Data indicate that the child P1 develops systematically into the adult response; however, the relationship of N1a and N1b to the adult N1 is unclear. These results indicate that maturational changes in the central auditory system are complex and extend well into the second decade of life.
Validity and reliability of the Internalized Stigma of Smoking Inventory: An exploration of shame, isolation, and discrimination in smokers with mental health diagnoses.
BACKGROUND AND OBJECTIVES De-normalization of smoking as a public health strategy may create shame and isolation in vulnerable groups unable to quit. To examine the nature and impact of smoking stigma, we developed the Internalized Stigma of Smoking Inventory (ISSI), tested its validity and reliability, and explored factors that may contribute to smoking stigma. METHODS We evaluated the ISSI in a sample of smokers with mental health diagnoses (N = 956), using exploratory and confirmatory factor analysis, and assessed construct validity. RESULTS Results reduced the ISSI to eight items with three subscales: smoking self-stigma related to shame, felt stigma related to social isolation, and discrimination experiences. Discrimination was the most commonly endorsed of the three subscales. A multivariate generalized linear model predicted 21-30% of the variance in the smoking stigma subscales. Self-stigma was greatest among those intending to quit; felt stigma was highest among those experiencing stigma in other domains, namely ethnicity and mental illness-based; and smoking-related discrimination was highest among women, Caucasians, and those with more education. DISCUSSION AND CONCLUSION Smoking stigma may compound stigma experiences in other areas. Aspects of smoking stigma in the domains of shame, isolation, and discrimination were related to modeled stigma responses, particularly readiness to quit and cigarette addiction, and were found to be more salient for groups where tobacco use is least prevalent. SCIENTIFIC SIGNIFICANCE The ISSI measure is useful for quantifying smoking-related stigma in multiple domains.
Abnormal crowd behavior detection based on social attribute-aware force model
In this paper, a novel social attribute-aware force model is presented for abnormal crowd pattern detection in video sequences. We take the social characteristics of crowd behaviors into account in order to improve the effectiveness of simulating the interaction behaviors of the crowd. A quick unsupervised method is proposed to estimate the scene scale. Both a social disorder attribute and a congestion attribute are introduced to describe realistic social behaviors using a statistical context feature. Through this semantic attribute-aware enhancement, we obtain an improved model on the basis of social force. We validate our method on publicly available datasets for abnormality detection, and the experimental results show promising performance compared with other state-of-the-art methods.
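Social-force-based crowd analysis typically estimates, for each tracked point, an interaction force as the gap between the observed acceleration and the acceleration expected from relaxing toward a desired velocity. The sketch below computes that basic quantity for particle trajectories; the relaxation time, unit mass, and the use of a crowd-average velocity as the desired velocity are assumptions, and the paper's social attribute-aware extensions are not modeled.

    # Basic interaction-force estimate in the spirit of social-force crowd models:
    # F_int = (observed acceleration) - (desired velocity - current velocity) / tau,
    # with unit mass. The desired velocity is approximated by the crowd-average
    # velocity and tau is an assumed relaxation time.

    import numpy as np

    def interaction_forces(positions, tau=0.5, dt=1.0):
        """positions: (T, N, 2) array of N particle positions over T frames."""
        velocities = np.diff(positions, axis=0) / dt             # (T-1, N, 2)
        accelerations = np.diff(velocities, axis=0) / dt         # (T-2, N, 2)
        v = velocities[1:]                                       # align with accelerations
        v_desired = v.mean(axis=1, keepdims=True)                # crowd-average velocity
        personal = (v_desired - v) / tau                         # relaxation toward desired motion
        return accelerations - personal                          # residual = interaction force

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        base = np.cumsum(np.ones((10, 5, 2)) * 0.5, axis=0)      # 5 particles drifting together
        base[:, 0] += np.cumsum(rng.normal(0, 0.8, size=(10, 2)), axis=0)  # one erratic particle
        forces = interaction_forces(base)
        print(np.round(np.linalg.norm(forces, axis=2).mean(axis=0), 2))    # largest for particle 0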
Ossifying fibroma of the jaws: a clinicopathological case series study.
The aim of this study was to assess the clinical, radiographic and microscopic features of a case series of ossifying fibroma (OF) of the jaws. For the study, all cases with OF diagnosis from the files of the Oral Pathology Laboratory, University of Ribeirão Preto, Ribeirão Preto, SP, Brazil, were reviewed. Clinical data were obtained from the patient files and the radiographic features were evaluated in each case. All cases were reviewed microscopically to confirm the diagnosis. Eight cases were identified, 5 in females and 3 in males. The mean age of the patients was 33.7 years and most lesions (7 cases) occurred in the mandible. Radiographically, all lesions appeared as unilocular images and most of them (5 cases) were of mixed type. The mean size of the tumor was 3.1 cm and 3 cases caused displacement of the involved teeth. Microscopically, all cases showed several bone-like mineralized areas, immersed in the cellular connective tissue. From the 8 cases, 5 underwent surgical excision and 1 patient refused treatment. In the remaining 2 cases, this information was not available. In conclusion, OF occurs more commonly in women in the fourth decade of life, frequently as a mixed radiographic image in the mandible. Coherent differential diagnoses are important to guide the most adequate clinical approach. A correlation between clinical, imaginological and histopathological features is the key to establish the correct diagnosis.
Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping
Modern 3D laser-range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in the local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory allows us to correct measurements between scan poses. We evaluate our approach in multiple experiments, showing qualitative results. Furthermore, we quantify the map quality with an entropy-based measure.
Telling More Than We Can Know : Verbal Reports on Mental Processes
Evidence is reviewed which suggests that there may be little or no direct introspective access to higher order cognitive processes. Subjects are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.
Association between angiotensin-converting-enzyme gene polymorphism and failure of renoprotective therapy
BACKGROUND Polymorphism in the gene for angiotensin-converting enzyme (ACE), especially the DD genotype, is associated with risk for cardiovascular disease. Glomerulosclerosis has similarities to atherosclerosis, and we looked at ACE gene polymorphism in patients with kidney disease who were in a trial of long-term therapy with an ACE inhibitor or a beta-blocker. METHODS 81 patients with non-diabetic renal disease had been entered into a randomised comparison of oral atenolol or enalapril to prevent progressive decline in renal function. The dose was titrated to a goal diastolic blood pressure of 10 mm Hg below baseline and/or below 95 mm Hg. The mean (SE) age was 50 (1) years, and the group included 49 men. Their renal function had been monitored over 3-4 years. We assessed their ACE genotype with PCR. FINDINGS 27 patients had the II genotype, 37 were ID, and 17 were DD. 11 patients were lost to follow-up over 1-3 years. The decline of glomerular filtration rate over the years was significantly steeper in the DD group than in the ID and II groups (p = 0.02; means -3.79, -1.37, and -1.12 mL/min per year, respectively). The DD patients treated with enalapril had as poor a course as the DD patients treated with atenolol. Neither drug lowered the degree of proteinuria in the DD group. INTERPRETATION Our data show that patients with the DD genotype are resistant to commonly advocated renoprotective therapy.
Hepatitis B and immigrants: a SIMIT multicenter cross-sectional study
The continuing migration of individuals from geographic areas with high/medium endemicity has brought new chronic hepatitis B virus (HBV) carriers to Italy. The magnitude of this phenomenon and the clinical/virological features of HBsAg-positive migrants remain poorly defined. Our aims were to evaluate the proportion of HBsAg-positive immigrants enrolled in this multicenter Società Italiana di Malattie Infettive e Tropicali (SIMIT) cross-sectional study and to compare the characteristics of chronic hepatitis B infection in migrants with those of Italian carriers. From February 1 to July 31, 2008, anonymous data were obtained from all HBsAg-positive patients aged ≥18 years observed at 74 Italian centers of infectious diseases. Of the 3,760 HBsAg-positive subjects enrolled, 932 (24.8 %) were immigrants, with a prevalent distribution in central and northern Italy. The areas of origin were: Far East (37.1 %), Eastern Europe (35.4 %), Sub-Saharan Africa (17.5 %), North Africa (5.5 %), and 4.5 % from various other sites. Compared to Italian carriers, migrants were significantly younger (median age 34 vs. 52 years), predominantly female (57.5 vs. 31 %), and more often at first observation (incident cases 34.2 vs. 13.3 %). HBeAg-positives were more frequent among migrants (27.5 vs. 14 %). Genotype D, found in 87.8 % of Italian carriers, was present in only 40 % of migrants, who were more frequently inactive HBV carriers, with a lower prevalence of chronic hepatitis, cirrhosis, and hepatocellular carcinoma (HCC). Only 27.1 % of migrants received antiviral treatment compared to 50.3 % of Italians. Twenty-five percent of all HBV carriers examined at Italian centers were immigrants, with demographic, serological, and virological characteristics that differed from those of natives, and they appeared to have inferior access to treatment.
Direction-Aware Spatial Context Features for Shadow Detection
Shadow detection is a fundamental and challenging task, since it requires an understanding of global image semantics and there are various backgrounds around shadows. This paper presents a novel network for shadow detection by analyzing image context in a direction-aware manner. To achieve this, we first formulate a direction-aware attention mechanism in a spatial recurrent neural network (RNN) by introducing attention weights when aggregating spatial context features in the RNN. By learning these weights through training, we can recover direction-aware spatial context (DSC) for detecting shadows. This design is developed into the DSC module and embedded in a CNN to learn DSC features at different levels. Moreover, a weighted cross entropy loss is designed to make the training more effective. We employ two common shadow detection benchmark datasets and perform various experiments to evaluate our network. Experimental results show that our network outperforms state-of-the-art methods, achieving 97% accuracy and a 38% reduction in balance error rate.
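The DSC module aggregates spatial context along different directions in a spatial RNN and weights each direction with learned attention before combining. As a rough, framework-free illustration of that aggregation (not the trained network), the sketch below accumulates features along four directions with simple cumulative averaging and merges them with a softmax over per-direction attention scores; the aggregation rule and weights are assumptions.

    # Toy direction-aware context aggregation: accumulate features along four
    # directions (left, right, up, down) and blend the directional context maps
    # with softmax attention weights. Cumulative averaging stands in for the
    # recurrent aggregation of a trained spatial RNN; weights are assumed, not learned.

    import numpy as np

    def cumulative_mean(x, axis, reverse=False):
        if reverse:
            x = np.flip(x, axis=axis)
        csum = np.cumsum(x, axis=axis)
        counts = np.arange(1, x.shape[axis] + 1).reshape(
            [-1 if a == axis else 1 for a in range(x.ndim)])
        out = csum / counts
        return np.flip(out, axis=axis) if reverse else out

    def direction_aware_context(features, attention_scores):
        """features: (H, W) map; attention_scores: 4 raw scores for L, R, U, D."""
        directional = np.stack([
            cumulative_mean(features, axis=1),                 # left-to-right
            cumulative_mean(features, axis=1, reverse=True),   # right-to-left
            cumulative_mean(features, axis=0),                 # top-to-bottom
            cumulative_mean(features, axis=0, reverse=True),   # bottom-to-top
        ])
        w = np.exp(attention_scores - np.max(attention_scores))
        w = w / w.sum()
        return np.tensordot(w, directional, axes=1)            # weighted sum of context maps

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fmap = rng.random((4, 6))
        context = direction_aware_context(fmap, attention_scores=np.array([2.0, 0.5, 0.5, 0.5]))
        print(context.shape)   # (4, 6): same spatial size, direction-weighted context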
Multi-dimensional Topic Modeling with Determinantal Point Processes
Probabilistic topics models such as Latent Dirichlet Allocation (LDA) provide a useful and elegant tool for discovering hidden structure within large data sets of discrete data, such as corpuses of text. However, LDA implicitly discovers topics along only a single dimension. Recent research on multi-dimensional topic modeling aims to devise techniques that can discover multiple groups of topics, where each group models some different dimension, or aspect, of the data. For example, applying a multi-dimensional topic modeling algorithm to a text corpus could result in three dimensions that turn out to represent “subject,” “sentiment,” and “political ideology.” In this work, we present a new multi-dimensional topic model that uses a determinantal point process prior to encourage different groups of topics to model different dimensions of the data. Determinantal point processes are probabilistic models of repulsive phenomena which originated in statistical physics but have recently seen interest from the machine learning community. Though topic models are usually applied to text, we motivate our method with the problem of discovering “tastes” (topics) in a data set of recipes that have been rated by users of a cooking web site. We present both an unsupervised algorithm, which discovers dimensions and their tastes automatically, and a semisupervised algorithm, which allows the modeler to steer the tastes (topics) towards ones that will be semantically meaningful, by “seeding” each taste with a small number of representative recipes (words). Our results on the recipe data set are mixed, but we are hopeful that the general technique presented here might very well prove useful to multi-dimensional topic modeling in other domains.
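In an L-ensemble determinantal point process, the probability of selecting a subset S of items is proportional to det(L_S), the determinant of the kernel restricted to S, which is what makes similar items unlikely to co-occur. The sketch below builds a simple quality-times-similarity kernel and compares the unnormalized probabilities of a diverse subset and a redundant one; the kernel and item vectors are made up for illustration and are unrelated to the recipe data in the paper.

    # L-ensemble determinantal point process: P(S) is proportional to det(L_S).
    # The kernel uses the common quality * similarity construction,
    # L[i, j] = q_i * dot(phi_i, phi_j) * q_j, with made-up item vectors.

    import numpy as np

    def dpp_kernel(features, quality):
        phi = features / np.linalg.norm(features, axis=1, keepdims=True)
        S = phi @ phi.T                       # cosine similarity between items
        q = np.asarray(quality)
        return q[:, None] * S * q[None, :]

    def subset_score(L, subset):
        idx = np.array(subset)
        return np.linalg.det(L[np.ix_(idx, idx)])   # unnormalized P(subset)

    if __name__ == "__main__":
        # Items 0 and 1 are nearly identical; item 2 points in a different direction.
        features = np.array([[1.0, 0.0], [0.98, 0.05], [0.1, 1.0]])
        L = dpp_kernel(features, quality=[1.0, 1.0, 1.0])
        print("redundant pair {0,1}:", round(subset_score(L, [0, 1]), 4))
        print("diverse pair {0,2}:", round(subset_score(L, [0, 2]), 4))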
Design for variety : developing standardized and modularized product platform architectures
Developing a robust product platform architecture brings an important competitive advantage to a company. The major benefits are reduced design effort and time-to-market for future generations of the product. This paper describes a step-by-step method that aids companies in developing such product platform architectures. Using the concept of specification "flows" within a product development project, the design for variety (DFV) method develops two indices to measure a product's architecture. The first index is the generational variety index (GVI), a measure of the amount of redesign effort required for future designs of the product. The second index is the coupling index (CI), a measure of the coupling among the product components. The design team uses these two indices to develop a decoupled architecture that requires less design effort for follow-on products. This paper describes the DFV method and uses a water cooler example to illustrate it.
Flight Dynamics-Based Recovery of a UAV Trajectory Using Ground Cameras
We propose a new method to estimate the 6-DoF trajectory of a flying object, such as a quadrotor UAV, within a 3D airspace monitored using multiple fixed ground cameras. It is based on a new structure-from-motion formulation for the 3D reconstruction of a single moving point with known motion dynamics. Our main contribution is a new bundle adjustment procedure which, in addition to optimizing the camera poses, regularizes the point trajectory using a prior based on motion dynamics (specifically, flight dynamics). Furthermore, we can infer the underlying control input sent to the UAV's autopilot that determined its flight trajectory. Our method requires neither perfect single-view tracking nor appearance matching across views. For robustness, we allow the tracker to generate multiple detections per frame in each video. The true detections and the data association across videos are estimated using robust multi-view triangulation and subsequently refined in our bundle adjustment formulation. Quantitative evaluation on simulated data and experiments on real videos from indoor and outdoor scenes show that our technique is superior to existing methods.
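The bundle adjustment adds a motion-dynamics prior on the trajectory in addition to the usual measurement terms. As a greatly reduced illustration of that idea (no cameras, no pose optimization, no data association), the sketch below fits a 3D trajectory to noisy point observations with scipy's least-squares solver, combining per-frame observation residuals with constant-acceleration smoothness residuals whose weight plays the role of a dynamics prior; the weights and the dynamics model are assumptions.

    # Reduced illustration of trajectory estimation with a motion-dynamics prior:
    # fit a 3D point trajectory to noisy observations while penalizing deviation
    # from constant-acceleration motion (second differences).

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(x, observations, dynamics_weight):
        traj = x.reshape(-1, 3)
        obs_res = (traj - observations).ravel()                         # stay close to measurements
        dyn_res = dynamics_weight * np.diff(traj, n=2, axis=0).ravel()  # smooth acceleration
        return np.concatenate([obs_res, dyn_res])

    def fit_trajectory(observations, dynamics_weight=5.0):
        x0 = observations.ravel()
        result = least_squares(residuals, x0, args=(observations, dynamics_weight))
        return result.x.reshape(-1, 3)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 30)[:, None]
        true_traj = np.hstack([t, t ** 2, 0.5 * t])               # smooth flight path
        noisy_obs = true_traj + rng.normal(0, 0.05, true_traj.shape)
        est = fit_trajectory(noisy_obs)
        print("raw obs error:", round(np.abs(noisy_obs - true_traj).mean(), 4))
        print("fit error:", round(np.abs(est - true_traj).mean(), 4))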
The Contemporary Theory of Metaphor
Inferences as metaphorical spatial inferences. Spatial inferences are characterized by the topological structure of image-schemas. We have seen cases such as CATEGORIES ARE CONTAINERS and LINEAR SCALES ARE PATHS where image-schema structure is preserved by metaphor and where abstract inferences about categories and linear scales are metaphorical versions of spatial inferences about containers and paths. The Invariance Principle hypothesizes that image-schema structure is always preserved by metaphor. The Invariance Principle raises the possibility that a great many, if not all, abstract inferences are actually metaphorical versions of spatial inferences that are inherent in the topological structure of image-schemas. What I will do now is turn to other cases of basic, but abstract, concepts to see what evidence there is for the claim that such concepts are fundamentally characterized by metaphor.
Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence
Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually apply pretreatments to decrease noise and extract time domain or frequency domain features from raw time series sensor data, and then use classifiers to make a diagnosis. However, these conventional approaches depend on expert knowledge for feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN). The model can directly recognize raw time series sensor data without feature selection and signal processing, and it also takes advantage of the temporal coherence of the data. First, raw time series training data collected by sensors are used to train the DNN until its cost function reaches a minimum; second, test data are used to evaluate the classification accuracy of the DNN on local time series data. Finally, fault diagnosis considering temporal coherence with the preceding time series data is implemented. Experimental results show that the classification accuracy for bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
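As a rough sketch of the "raw input, no hand-crafted features" idea (the paper's network architecture, temporal-coherence step, and bearing data are not reproduced), the following trains a plain multilayer perceptron directly on synthetic raw vibration windows; every signal parameter below is made up.

```python
# Illustrative only: classify raw vibration windows with a plain MLP, skipping
# hand-crafted time/frequency features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_windows(freq, label, n=200, length=256, fs=1000.0):
    t = np.arange(length) / fs
    X = np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=(n, length))
    return X, np.full(n, label)

# Two synthetic "fault types" distinguished only by their dominant frequency.
X0, y0 = make_windows(50.0, 0)
X1, y1 = make_windows(120.0, 1)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
print("test accuracy on synthetic windows:", clf.score(Xte, yte))
```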
Interpreting TF-IDF term weights as making relevance decisions
A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity −log p(r̄ | t ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by r̄) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.
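For readers who want the baseline the paper reinterprets, here is a minimal classic TF-IDF scorer (tf × log(N/df)); it is only the conventional weighting scheme, not the probabilistic relevance-decision model derived in the paper.

```python
# Classic TF-IDF scoring of one document against a query, for illustration only.
import math
from collections import Counter

def tf_idf_score(query_terms, doc_terms, corpus):
    N = len(corpus)
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)     # document frequency of term t
        if df == 0 or tf[t] == 0:
            continue
        idf = math.log(N / df)                    # inverse document frequency
        score += tf[t] * idf                      # classic tf * idf weight
    return score

corpus = [{"retrieval", "model", "tfidf"}, {"cooking", "recipes"}, {"model", "speech"}]
doc = ["retrieval", "model", "model", "tfidf"]
print(tf_idf_score(["tfidf", "model"], doc, corpus))
```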
Software requirement optimization using a multiobjective swarm intelligence evolutionary algorithm
The selection of the new requirements which should be included in the development of the release of a software product is an important issue for software companies. This problem is known in the literature as the Next Release Problem (NRP). It is an NP-hard problem which simultaneously addresses two apparently contradictory objectives: the total cost of including the selected requirements in the next release of the software package, and the overall satisfaction of a set of customers who have different opinions about the priorities which should be given to the requirements, and also have different levels of importance within the company. Moreover, in the case of managing real instances of the problem, the proposed solutions have to satisfy certain interaction constraints which arise among some requirements. In this paper, the NRP is formulated as a multiobjective optimization problem with two objectives (cost and satisfaction) and three constraints (types of interactions). A multiobjective swarm intelligence metaheuristic is proposed to solve two real instances generated from data provided by experts. Analysis of the results showed that the proposed algorithm can efficiently generate high quality solutions. These were evaluated by comparing them with different proposals (in terms of multiobjective metrics). The results generated by the present approach surpass those generated in other relevant work in the literature (e.g. our technique can obtain an HV of over 60% for the most complex dataset managed, while the other approaches published cannot obtain an HV of more than 40% for the same dataset).
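To make the bi-objective formulation concrete, here is a tiny, hypothetical NRP encoding with a Pareto-dominance check; the requirement costs, customer satisfaction scores, and weights are invented, and no metaheuristic (swarm-based or otherwise) is shown.

```python
# Sketch of the bi-objective NRP encoding: a solution is a subset of candidate
# requirements, scored by total cost and weighted customer satisfaction.
costs = [10, 4, 7, 3, 9]                       # cost of each candidate requirement
value = [[3, 0, 2, 1, 4],                      # satisfaction of customer 0 per requirement
         [1, 5, 0, 2, 2]]                      # satisfaction of customer 1 per requirement
weight = [0.7, 0.3]                            # relative importance of each customer

def evaluate(selected):
    cost = sum(costs[i] for i in selected)
    sat = sum(w * sum(v[i] for i in selected) for w, v in zip(weight, value))
    return cost, sat

def dominates(a, b):
    """a dominates b if it costs no more, satisfies no less, and differs in at least one."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

s1, s2 = evaluate({0, 3}), evaluate({1, 4})
print(s1, s2, dominates(s1, s2), dominates(s2, s1))
```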
A decision support system for demand forecasting with artificial neural networks and neuro-fuzzy models: A comparative analysis
An organization has to make the right decisions in time depending on demand information to enhance its commercial competitive advantage in a constantly fluctuating business environment. Therefore, estimating the demand quantity for the next period appears to be crucial. This work presents a comparative forecasting methodology regarding uncertain customer demands in a multi-level supply chain (SC) structure via neural techniques. The objective of the paper is to propose a new forecasting mechanism which is modeled by artificial intelligence approaches, comparing both artificial neural networks and adaptive network-based fuzzy inference system techniques to manage fuzzy demand with incomplete information. The effectiveness of the proposed approach to the demand forecasting issue is demonstrated using real-world data from a company which is active in the durable consumer goods industry in Istanbul, Turkey.
Tunable Bandpass Filter With Independently Controllable Dual Passbands
This paper presents a two-pole dual-band tunable bandpass filter (BPF) with independently controllable dual passbands based on a novel tunable dual-mode resonator. This resonator principally comprises a λ/2 resonator and two varactor diodes. One varactor is placed at the center of the resonator to determine the dominant even-mode resonant frequency; the other is installed between the two ends of the resonator to control the dominant odd-mode resonant frequency. These two distinct odd- and even-mode resonances can be independently generated, and they are used to realize the two separated passbands as desired. A detailed discussion is carried out to provide a set of closed-form design equations for determination of all of the elements involved in this tunable filter, inclusive of capacitively loaded quarter-wavelength or λ/2 resonators, external quality factor, and coupling coefficient. Finally, a prototype tunable dual-band filter is fabricated and measured. Measured and simulated results are found to be in good agreement with each other. The results show that the first passband can be tuned in a frequency range from 0.77 to 1.00 GHz with a 3-dB fractional bandwidth of 20.3%-24.7%, whereas the second passband varies from 1.57 to 2.00 GHz with a 3-dB absolute bandwidth of 120 ± 8 MHz.
Parkinson's Disease Classification using Neural Network and Feature Selection
In this study, a Multi-Layer Perceptron (MLP) with the back-propagation learning algorithm is used for the effective diagnosis of Parkinson's disease (PD), a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movement in the body. A feature selection algorithm is combined with biomedical test values to diagnose Parkinson's disease. Clinical diagnosis is done mostly by the doctor's expertise and experience, but cases of wrong diagnosis and treatment are still reported, and patients are asked to take a number of tests for diagnosis. In many cases, not all of the tests contribute towards effective diagnosis of the disease. Our work classifies the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in classification. We use information gain to determine which attributes to keep, reducing the number of measurements that need to be taken from patients. An artificial neural network is then used to classify the diagnosis of patients. The twenty-two attributes are reduced to sixteen. The accuracy on the training data set is 82.051% and on the validation data set is 83.333%. Keywords: data mining, classification, Parkinson's disease, artificial neural networks, feature selection, information gain.
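A hedged sklearn sketch of the pipeline described above: rank attributes, keep 16 of 22, and train a small MLP. It substitutes mutual information for the information-gain ranking and random data for the Parkinson's voice measurements, so the printed number is meaningless beyond illustrating the mechanics.

```python
# Feature selection followed by an MLP classifier, on synthetic stand-in data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))                 # 22 attributes, as in the study
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=195) > 0).astype(int)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=16),    # keep 16 of the 22 attributes
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```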
Cloud Federation
This paper suggests a definition of the term Cloud Federation, a concept of service aggregation characterized by interoperability features, which addresses the economic problems of vendor lock-in and provider integration. Furthermore, it approaches challenges like performance and disaster-recovery through methods such as co-location and geographic distribution. The concept of Cloud Federation enables further reduction of costs due to partial outsourcing to more cost-efficient regions, may satisfy security requirements through techniques like fragmentation and provides new prospects in terms of legal aspects. Based on this concept, we discuss a reference architecture that enables new service models by horizontal and vertical integration. The definition along with the reference architecture serves as a common vocabulary for discussions and suggests a template for creating value-added software solutions.
TouchIn: Sightless two-factor authentication on multi-touch mobile devices
Mobile authentication is indispensable for preventing unauthorized access to multi-touch mobile devices. Existing mobile authentication techniques are often cumbersome to use and also vulnerable to shoulder-surfing and smudge attacks. This paper focuses on designing, implementing, and evaluating TouchIn, a two-factor authentication system on multi-touch mobile devices. TouchIn works by letting a user draw on the touchscreen with one or multiple fingers to unlock his mobile device, and the user is authenticated based on the geometric properties of his drawn curves as well as his behavioral and physiological characteristics. TouchIn allows the user to draw on arbitrary regions on the touchscreen without looking at it. This nice sightless feature makes TouchIn very easy to use and also robust to shoulder-surfing and smudge attacks. Comprehensive experiments on Android devices confirm the high security and usability of TouchIn.
A framework to enable the semantic inferencing and querying of multimedia content
Cultural institutions, broadcasting companies, academic, scientific and defence organisations are producing vast quantities of digital multimedia content. With this growth in audiovisual material comes the need for standardised representations encapsulating the rich semantic meaning required to enable the automatic filtering, machine processing, interpretation and assimilation of multimedia resources. Although significant progress has been made in recent years on automatic segmentation and low-level feature recognition for multimedia, generating high-level descriptions remains difficult and manual creation is expensive. Within this paper we describe the application of semantic web technologies to enable the generation of high-level, domain-specific, semantic descriptions of multimedia content from low-level, automatically-extracted features. By applying the knowledge reasoning capabilities provided by ontologies and inferencing rules to large multimedia data sets generated by scientific research communities, we hope to expedite solutions to the complex scientific problems they face.
Low power and area efficient Wallace tree multiplier using carry select adder with binary to excess-1 converter
Multipliers are major blocks in most digital and high-performance systems such as microprocessors, signal processing circuits, and FIR filters. In the present scenario, fast multipliers with low power consumption are leading in performance. The Wallace tree multiplier with a carry select adder (CSLA) is one of the fastest multipliers but utilizes more area. To improve the performance of this multiplier, the CSLA is modified with a binary to excess-1 converter (BEC), which not only reduces the area at the gate level but also reduces power consumption. Area and power calculations for the Wallace tree multiplier using a CSLA with BEC show good results compared to the regular Wallace tree multiplier.
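To illustrate the BEC idea at the bit level (in Python rather than HDL, and not tied to the authors' design), the sketch below precomputes the carry-in-0 sum with a ripple adder, derives the carry-in-1 sum by adding one through a binary-to-excess-1 converter, and then selects between them.

```python
# Bit-level illustration of a carry-select stage that uses a BEC instead of a
# second adder for the carry-in = 1 case. Bit lists are LSB first.
def ripple_add(a, b, cin=0):
    """Returns (sum_bits, carry_out) for a + b + cin."""
    out, carry = [], cin
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)
        carry = (x & y) | (carry & (x ^ y))
    return out, carry

def bec(bits):
    """Binary-to-excess-1 converter: returns bits + 1 (same width) and the overflow carry."""
    out, carry = [], 1
    for b in bits:
        out.append(b ^ carry)
        carry = b & carry
    return out, carry

def carry_select_block(a, b, cin):
    s0, c0 = ripple_add(a, b, 0)       # precomputed sum for carry-in 0
    s1, c1 = bec(s0)                   # sum for carry-in 1, derived cheaply via BEC
    return (s1, c0 | c1) if cin else (s0, c0)

print(carry_select_block([1, 0, 1, 1], [1, 1, 0, 0], cin=1))   # 13 + 3 + 1 = 17
```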
Casinos and Economic Growth
Casino gambling is a popular form of entertainment and is purported to have positive effects on host economies. The industry surely affects local labor markets and tax revenues. However, there has been little evidence on the effects of casino gambling on state economic growth. This paper examines that relationship using Granger-causality analysis modified for use with panel data. Our results indicate that there is no Granger-causal relationship between real casino revenues and real per capita income at the state level. The results are based on annual data from 1991 to 2005. These findings contradict an earlier study that found that casino revenues Granger-cause economic growth, using quarterly data from 1991 to 1996. Possible explanations for the differences in short- and long-run effects are discussed.
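For readers unfamiliar with the mechanics, a plain two-series Granger test (without the paper's panel-data modification) can be run with statsmodels as below; the series are synthetic, so the output says nothing about actual casino revenues or incomes.

```python
# Plain Granger-causality test on two synthetic series with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
casino_rev = np.cumsum(rng.normal(size=n))                    # synthetic revenue series
income = 0.1 * np.roll(casino_rev, 1) + rng.normal(size=n)    # income with a lagged link

# Column order matters: the test asks whether the SECOND column Granger-causes the first.
data = pd.DataFrame({"income": income, "casino_rev": casino_rev})
results = grangercausalitytests(data[["income", "casino_rev"]], maxlag=2)
print("p-value (F test, lag 1):", results[1][0]["ssr_ftest"][1])
```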
The incidence of myocardial injury following post-operative Goal Directed Therapy
BACKGROUND Studies suggest that Goal Directed Therapy (GDT) results in improved outcome following major surgery. However, there is concern that pre-emptive use of inotropic therapy may lead to an increased incidence of myocardial ischaemia and infarction. METHODS Post hoc analysis of data collected prospectively during a randomised controlled trial of the effects of post-operative GDT in high-risk general surgical patients. Serum troponin T concentrations were measured at baseline and on day 1 and day 2 following surgery. Continuous ECG monitoring was performed during the eight-hour intervention period. Patients were followed up for predefined cardiac complications. A univariate analysis was performed to identify any associations between potential risk factors for myocardial injury and elevated troponin T concentrations. RESULTS GDT was associated with fewer complications and a reduced duration of hospital stay. Troponin T concentrations above 0.01 µg/l were identified in eight patients in the GDT group and six in the control group. Values increased above 0.05 µg/l in four patients in the GDT group and two patients in the control group. There were no overall differences in the incidence of elevated troponin T concentrations. The incidence of cardiovascular complications was also similar. None of the patients in whom troponin T concentrations were elevated developed ECG changes indicating myocardial ischaemia during the intervention period. The only factor to be associated with elevated troponin T concentrations following surgery was end-stage renal failure. CONCLUSION The use of post-operative GDT does not result in an increased incidence of myocardial injury.
The forgotten crisis – a critical discourse analysis of the news coverage of the Central African Republic
The Central African Republic has partly been portrayed in the media as a forgotten crisis, but also as something entirely natural given how things are assumed to be in Africa. Society holds perceptions about Africa which have been reproduced and reconstructed in the selected articles. The discourses that appear most prominently in the articles concern: us and them, trust in experts, social representations (especially emotional roots), ideological inequities, and power relations between different parties.
Defining social networking sites and measuring their use: How narcissists differ in their use of Facebook and Twitter
As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post more on Facebook than on Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use.
Spam, Damn Spam, and Statistics: Using Statistical Analysis to Locate Spam Web Pages
The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call "web spam", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index. We propose that some spam web pages can be identified through statistical analysis: certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam. This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.
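A toy version of the outlier idea, with an invented per-page property and an arbitrary z-score threshold (the paper's actual properties, distributions, and thresholds are not reproduced):

```python
# Flag pages whose property value sits far out in the corpus-wide distribution.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-page property, e.g. the fraction of page content changing week to week.
normal_pages = rng.beta(2, 8, size=5000)
spam_pages = rng.beta(9, 1, size=50)       # machine-generated pages evolve much faster
prop = np.concatenate([normal_pages, spam_pages])

z = (prop - prop.mean()) / prop.std()      # standardise against the whole corpus
flagged = np.where(z > 3.0)[0]             # simple z-score outlier rule
print(len(flagged), "pages flagged out of", len(prop))
```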
Injectable, cellular-scale optoelectronics with applications for wireless optogenetics.
Successful integration of advanced semiconductor devices with biological systems will accelerate basic scientific discoveries and their translation into clinical technologies. In neuroscience generally, and in optogenetics in particular, the ability to insert light sources, detectors, sensors, and other components into precise locations of the deep brain yields versatile and important capabilities. Here, we introduce an injectable class of cellular-scale optoelectronics that offers such features, with examples of unmatched operational modes in optogenetics, including completely wireless and programmed complex behavioral control over freely moving animals. The ability of these ultrathin, mechanically compliant, biocompatible devices to afford minimally invasive operation in the soft tissues of the mammalian brain foreshadows applications in other organ systems, with potential for broad utility in biomedical science and engineering.
Intensity modulated radiation therapy versus volumetric intensity modulated arc therapy
The advanced developments in external beam radiation therapy (EBRT) over the past few decades have improved dose conformity to the target while minimizing dose to the surrounding organs at risk (OAR). Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are two commonly used EBRT techniques to treat cancer. In sliding window (SW) or dynamic IMRT, each radiation beam is modulated by continuously moving multileaf collimators (MLC), whereas in step-and-shoot (SS) or static IMRT, the MLC divide each radiation beam into a set of smaller segments of differing MLC shape, and the radiation beam is switched off between the segments. The modulation of beam intensity within each treatment field leads to construction of conformal dose distributions around the target volume. However, the delivery of a modulated IMRT plan takes longer than the delivery of a nonmodulated three-dimensional (3D) plan due to increased number of monitor units (MU). In contrast, the VMAT can decrease the treatment delivery time as VMAT has more beam entry angles, which likely contributes to the lower number of MU needed compared with the IMRT plan. In the VMAT, one or multiple arcs are used for the treatment, and the delivery technique allows the simultaneous variation in gantry rotation speed, dose rate, and MLC leaf positions. Recently, there has been increased interest in treating cancer using VMAT. Several authors have done the treatment planning studies comparing IMRT versus VMAT for different tumour sites, but the findings from one study are conflicting with those of another study in some cases. For example, current literature comparing VMAT and IMRT for a lung tumour shows that both techniques could provide comparable target coverage and dose conformity. However, the OAR results in the case of lung tumour are contradictory among different studies. Rao et al. showed that the relative volume of normal lung receiving 20 Gy (V20) was higher in the VMAT plans than in the IMRT plans. In contrast, Verbakel et al. showed that the VMAT and IMRT plans achieved comparable V20 of normal lung. The planning studies of prostate cancer have produced inconsistent results too. Yoo et al. reported lower doses to the OAR in the IMRT plans than in the VMAT plans, but Ost et al. showed that VMAT was better at reducing rectal dose compared to IMRT. Furthermore, the planning techniques within the VMAT have shown inconsistent results as well. For prostate cancer, in comparison to the single-arc technique (SA), Sze et al. reported that the double-arc technique (DA) produced higher bladder dose, whereas Yoo et al. showed that the DA produced lower doses to the bladder. Guckenberger et al. showed that the DA yielded higher rectal dose, whereas Sze et al. reported lower rectal doses with the DA when compared to the SA. The inconsistency in the results among different planning studies may have been due to difference in selection of beam parameters, dose calculation algorithm, plan optimization technique, and delivery technique of the treatment machine. In comparison to the VMAT plan with one arc, the VMAT plan with multiple arcs has more control points that give higher degree of freedom for possible MLC positions. This could result in higher degree of modulation and better plan quality, especially for a complex-shaped target volume. However, a higher degree of modulation generally increases the planning time due to longer plan optimization and dose calculation processes. 
Thus, treatment planning personnel may be required to make a compromise between planning time and plan quality depending on the physician requirements and available planning resources. The dosimetric results of the OAR can also be affected by the design of the treatment machine as dose to the OAR is dependent on
Knowledge of breast cancer and practice of breast self examination among female senior secondary school students in Abuja, Nigeria.
INTRODUCTION Breast cancer is a public health problem that is increasing throughout the world especially in developing countries. The study was aimed at assessing the knowledge of breast cancer and practice of breast self examination (BSE) among female senior secondary school students in the municipal council area of Abuja, Nigeria. METHODS This descriptive cross sectional study was carried out among female senior secondary school students from selected schools in the municipal area council of Abuja. The tool for data collection was a structured self administered questionnaire. Data were analysed using SPSS version 16.0. RESULTS Two hundred and eighty-seven students participated in the study. Their mean age was 16.5 +/- 1.4 years. A greater proportion of respondents 163 (56.8%) had poor knowledge of breast cancer while 217 (75.6%) had poor knowledge of BSE. Only 114 (39.7%) of the respondents knew that being a female was a risk factor for breast cancer and the least known risk factors were obesity and aging. The major source of information for breast cancer and BSE among the respondents was the mass media. Only 29 (10.1%) of respondents had practiced BSE. Knowledge of BSE was significantly associated with BSE practice. CONCLUSION This study revealed that female secondary school students have poor knowledge of breast cancer. A good proportion of them knew that BSE could be used as a screening method for breast cancer but only few had practiced BSE. There is need for adequate health education on breast cancer and BSE among adolescent females in Nigeria.
Water vulnerability assessment in karst environments: a new method of defining protection areas using a multi-attribute approach and GIS tools (EPIK method)
Groundwater resources from karst aquifers play a major role in the water supply in karst areas in the world, such as in Switzerland. Defining groundwater protection zones in karst environment is frequently not founded on a solid hydrogeological basis. Protection zones are often inadequate and as a result they may be ineffective. In order to improve this situation, the Federal Office for Environment, Forests and Landscape with the Swiss National Hydrological and Geological Survey contracted the Centre of Hydrogeology of the Neuchâtel University to develop a new groundwater protection-zones strategy in karst environment. This approach is based on the vulnerability mapping of the catchment areas of water supplies provided by springs or boreholes. Vulnerability is here defined as the intrinsic geological and hydrogeological characteristics which determine the sensitivity of groundwater to contamination by human activities. The EPIK method is a multi-attribute method for vulnerability mapping which takes into consideration the specific hydrogeological behaviour of karst aquifers. EPIK is based on a conceptual model of karst hydrological systems, which suggests considering four karst aquifer attributes: (1) Epikarst, (2) Protective cover, (3) Infiltration conditions and (4) Karst network development. Each of these four attributes is subdivided into classes which are mapped over the whole water catchment. The attributes and their classes are then weighted. Attribute maps are overlain in order to obtain a final vulnerability map. From the vulnerability map, the groundwater protection zones are defined precisely. This method was applied at several sites in Switzerland where agriculture contamination problems have frequently occurred. These applications resulted in recommend new boundaries for the karst water supplies protection-zones.
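As a schematic illustration of the multi-attribute overlay described above (the weights, class ranges, and zone cut-offs below are illustrative, not the published EPIK parameters), the numpy sketch combines four classed attribute rasters into a protection factor and bins it into zones.

```python
# Weighted overlay of four classed attribute rasters into a vulnerability/protection map.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                                    # toy catchment grid
E = rng.integers(1, 4, size=shape)                    # epikarst class (1 = most vulnerable)
P = rng.integers(1, 5, size=shape)                    # protective cover class
I = rng.integers(1, 5, size=shape)                    # infiltration conditions class
K = rng.integers(1, 4, size=shape)                    # karst network development class

alpha, beta, gamma, delta = 3, 1, 3, 2                # illustrative attribute weights
F = alpha * E + beta * P + gamma * I + delta * K      # protection factor per cell (low = vulnerable)

# Bin the factor into protection zones S1-S3 (cut-offs made up for the sketch).
zones = np.digitize(F, bins=[12, 20])                 # 0 -> S1 (strictest), 1 -> S2, 2 -> S3
print("cells per zone:", np.bincount(zones.ravel()))
```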
A prospective cohort study of surgical treatment for back pain with degenerated discs; study protocol
BACKGROUND The diagnosis of discogenic back pain often leads to spinal fusion surgery and may partly explain the recent rapid increase in lumbar fusion operations in the United States. Little is known about how patients undergoing lumbar fusion compare in preoperative physical and psychological function to patients who have degenerative discs, but receive only non-surgical care. METHODS Our group is implementing a multi-center prospective cohort study to compare patients with presumed discogenic pain who undergo lumbar fusion with those who have non-surgical care. We identify patients with predominant low back pain lasting at least six months, one- or two-level disc degeneration confirmed by imaging, and a normal neurological exam. Patients are classified as surgical or non-surgical based on the treatment they receive during the six months following study enrollment. RESULTS Three hundred patients with discogenic low back pain will be followed in a prospective cohort study for two years. The primary outcome measure is the Modified Roland-Morris Disability Questionnaire at 24 months. We also evaluate several other dimensions of outcome, including pain, functional status, psychological distress, general well-being, and role disability. CONCLUSION The primary aim of this prospective cohort study is to better define the outcomes of lumbar fusion for discogenic back pain as it is practiced in the United States. We additionally aim to identify characteristics that result in better patient selection for surgery. Potential predictors include demographics, work and disability compensation status, initial symptom severity and duration, imaging results, functional status, and psychological distress.
Direct Ray Tracing of Displacement Mapped Triangles
We present an algorithm for ray tracing displacement maps that requires no additional storage over the base model. Displacement maps are rarely used in ray tracing due to the cost associated with storing and intersecting the displaced geometry. This is unfortunate because displacement maps allow the addition of large amounts of geometric complexity into models. Our method works for models composed of triangles with normals at the vertices. In addition, we discuss a special purpose displacement that creates a smooth surface that interpolates the triangle vertices and normals of a mesh. The combination allows relatively coarse models to be displacement mapped and ray traced effectively.
On the Effect of the Internet on International Trade
The Internet stimulates trade. Using a gravity equation of trade among 56 countries, we find no evidence of an effect of the Internet on total trade flows in 1995 and only weak evidence of an effect in 1996. However, we find an increasing and significant impact from 1997 to 1999. Specifically, our results imply that a 10 percent increase in the relative number of web hosts in one country would have led to about 1 percent greater trade in 1998 and 1999. Surprisingly, we find that the effect of the Internet on trade has been stronger for poor countries than for rich countries, and that there is little evidence that the Internet has reduced the impact of distance on trade. The evidence is consistent with a model in which the Internet creates a global exchange for goods, thereby reducing market-specific sunk costs of exporting.
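A hedged sketch of what such a gravity regression looks like in code, fitted on synthetic country-pair data rather than the paper's 56-country panel; the coefficients and variable names are placeholders.

```python
# Gravity equation with an Internet (web hosts) term, estimated by OLS on fake data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500                                                    # synthetic country pairs
df = pd.DataFrame({
    "log_gdp_i": rng.normal(10, 1, n),
    "log_gdp_j": rng.normal(10, 1, n),
    "log_dist": rng.normal(7, 0.5, n),
    "log_hosts": rng.normal(0, 1, n),                      # relative number of web hosts
})
df["log_trade"] = (0.9 * df.log_gdp_i + 0.8 * df.log_gdp_j
                   - 1.1 * df.log_dist + 0.1 * df.log_hosts
                   + rng.normal(0, 0.5, n))

model = smf.ols("log_trade ~ log_gdp_i + log_gdp_j + log_dist + log_hosts", data=df).fit()
print(model.params["log_hosts"])    # elasticity of trade with respect to web hosts
```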
A Bayesian approach to binocular stereopsis
We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or “worlds,” where each world considers an additional complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry-i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.
Primary chemotherapy with gemcitabine as prolonged infusion, non-pegylated liposomal doxorubicin and docetaxel in patients with early breast cancer: final results of a phase II trial.
BACKGROUND Combinations of anthracyclines, taxanes and gemcitabine have shown high activity in breast cancer. This trial was designed to evaluate a modified combination regimen as primary chemotherapy. Non-pegylated liposomal doxorubicin (NPLD) was used instead of conventional doxorubicin to improve cardiac safety. Gemcitabine was given 72 h after NPLD and docetaxel as a prolonged infusion over 4 h in order to optimize synergistic effects and accumulation of active metabolites. PATIENTS AND METHODS Forty-four patients with histologically confirmed stage II or III breast cancer were treated with NPLD (60 mg/m²) and docetaxel (75 mg/m²) on day 1 and gemcitabine as a 4-h infusion (350 mg/m²) on day 4. Treatment was repeated every 3 weeks for a maximum of six cycles. All patients prophylactically received recombinant granulocyte colony-stimulating factor. Patients with axillary lymph node involvement after primary chemotherapy received adjuvant treatment with cyclophosphamide, methotrexate and fluorouracil. RESULTS The clinical response rate was 80%, and complete remissions of the primary tumor occurred in 10 patients (25%). Breast conservation surgery was performed in 19 out of 20 patients (95%) with an initial tumor size of less than 3 cm and in 14 patients (70%) with a tumor size ≤3 cm. Seven patients had histologically confirmed complete responses, accounting for a pCR rate of 17.5%. Expression of Ki-67 was the most important predictive parameter for response, with a high breast pCR rate of 38.9% in patients with elevated Ki-67 expression. Although the predominant toxicity was myelosuppression with grade 3/4 neutropenia in 61% of patients, few neutropenic complications resulted. Non-hematological toxicity was generally moderate with grade 3 or 4 toxicity in 10.0% of cycles. The most common non-hematologic toxicities were nausea, vomiting, alopecia, mucositis, asthenia and elevation of liver enzymes. CONCLUSION The evaluated schedule provides a safe and highly effective combination treatment for patients with early breast cancer, which is suitable for phase III studies.
Guidelines for Handheld Mobile Device Interface Design
While there has been much successful work in developing rules to guide the design and implementation of interfaces for desktop machines and their applications, the design of mobile device interfaces is still relatively unexplored and unproven. This paper discusses the characteristics and limitations of current mobile device interfaces, especially compared to the desktop environment. Using existing interface guidelines as a starting point, a set of practical design guidelines for mobile device interfaces is proposed.
Exploiting Social Ties for Cooperative D2D Communications: A Mobile Social Networking Case
Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.
Automatic Feature Learning for MOOC Forum Thread Classification
Discussion thread classification plays an important role in Massive Open Online Course (MOOC) forums. Most existing methods in this field focus on extracting text features (e.g. key words) from the content of discussions using NLP methods. However, the diversity of languages used in MOOC forums limits the generalizability of these methods. To tackle this problem, in this paper we hand-design 23 language-independent features related to the structure, popularity and underlying social network of a thread. Furthermore, a hybrid model which combines a Gradient Boosting Decision Tree (GBDT) with Linear Regression (LR) (GBDT + LR) is employed to reduce the cost of manual feature learning for discussion thread classification. Experiments are carried out on datasets contributed by Coursera with nearly 100,000 discussion threads from 60 courses taught in 4 different languages. Results demonstrate that our method can significantly improve the performance of discussion thread classification. It is worth noting that the average AUC of our model is 0.832, outperforming the baseline by 15%.
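A hedged sklearn sketch of the GBDT-plus-linear-stage hybrid on synthetic 23-feature data (the paper's thread features and Coursera data are not included); it uses logistic regression for the second stage, which is the usual choice for this kind of hybrid, and should be swapped if the authors' LR is ordinary linear regression.

```python
# GBDT leaf indices become one-hot features for a logistic regression second stage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=23, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0).fit(Xtr, ytr)
enc = OneHotEncoder(handle_unknown="ignore")
leaves_tr = gbdt.apply(Xtr)[:, :, 0]            # leaf index of each sample in each tree
leaves_te = gbdt.apply(Xte)[:, :, 0]

lr = LogisticRegression(max_iter=1000).fit(enc.fit_transform(leaves_tr), ytr)
auc = roc_auc_score(yte, lr.predict_proba(enc.transform(leaves_te))[:, 1])
print("AUC of the hybrid on synthetic data:", round(auc, 3))
```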
Learning to Program = Learning to Construct Mechanisms and Explanations
Teaching effective problem-solving skills in the context of teaching programming necessitates a revised curriculum for introductory computer programming courses.
Lean Software Development
Lean development is a product development paradigm with an end-to-end focus on creating value for the customer, eliminating waste, optimizing value streams, empowering people, and continuously improving (see Figure 1). Lean thinking has penetrated many industries. It was first used in manufacturing, with clear goals to empower teams, reduce waste, optimize work streams, and above all keep market and customer needs as the primary decision driver. This IEEE Software special issue addresses lean software development as opposed to management or manufacturing theories. In that context, we sought to address some key questions: What design principles deliver value, and how are they introduced to best manage change?
Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large Vocabulary Speech Recognition
We present results that show it is possible to build a competitive, greatly simplified, large vocabulary continuous speech recognition system with whole words as acoustic units. We model the output vocabulary of about 100,000 words directly using deep bi-directional LSTM RNNs with CTC loss. The model is trained on 125,000 hours of semi-supervised acoustic training data, which enables us to alleviate the data sparsity problem for word models. We show that the CTC word models work very well as an end-to-end all-neural speech recognition model without the use of traditional context-dependent sub-word phone units that require a pronunciation lexicon, and without any language model removing the need to decode. We demonstrate that the CTC word models perform better than a strong, more complex, state-of-the-art baseline with sub-word units.
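A minimal PyTorch sketch of the ingredients named above, a bidirectional LSTM producing per-frame scores over a word vocabulary plus a blank, trained with CTC loss; the vocabulary size, feature dimension, layer sizes, and data are placeholders, nothing like the 100,000-word, 125,000-hour setup described in the paper.

```python
# Bidirectional LSTM acoustic-to-word model with CTC loss, toy dimensions only.
import torch
import torch.nn as nn

vocab_size, feat_dim, hidden = 1000, 80, 128          # tiny stand-ins

class WordCTC(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size + 1)   # +1 for the CTC blank

    def forward(self, x):                                  # x: (batch, frames, feat_dim)
        h, _ = self.lstm(x)
        return self.out(h).log_softmax(dim=-1)             # (batch, frames, vocab+1)

model = WordCTC()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(4, 200, feat_dim)                          # 4 utterances, 200 frames each
targets = torch.randint(1, vocab_size + 1, (4, 10))        # 10 word labels per utterance
log_probs = model(x).transpose(0, 1)                       # CTC wants (frames, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 10, dtype=torch.long))
loss.backward()
print(float(loss))
```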
Recent Advances in Neural Program Synthesis
In recent years, deep learning has made tremendous progress in a number of fields that were previously out of reach for artificial intelligence. The successes on these problems have led researchers to consider the possibilities for intelligent systems to tackle a problem that humans have only recently considered themselves: program synthesis. This challenge is unlike others such as object recognition and speech translation, since its abstract nature and demand for rigor make it difficult even for human minds to attempt. While it is still far from being solved or even competitive with most existing methods, neural program synthesis is a rapidly growing discipline which holds great promise if completely realized. In this paper, we start by exploring the problem statement and challenges of program synthesis. Then, we examine the fascinating evolution of program induction models, along with how they have succeeded, failed and been reimagined since. Finally, we conclude with a contrastive look at program synthesis and future research recommendations for the field.
Infants' preferences for toys, colors, and shapes: sex differences and similarities.
Girls and boys differ in their preferences for toys such as dolls and trucks. These sex differences are present in infants, are seen in non-human primates, and relate, in part, to prenatal androgen exposure. This evidence of inborn influences on sex-typed toy preferences has led to suggestions that object features, such as the color or the shape of toys, may be of intrinsically different interest to males and females. We used a preferential looking task to examine preferences for different toys, colors, and shapes in 120 infants, ages 12, 18, or 24 months. Girls looked at dolls significantly more than boys did and boys looked at cars significantly more than girls did, irrespective of color, particularly when brightness was controlled. These outcomes did not vary with age. There were no significant sex differences in infants' preferences for different colors or shapes. Instead, both girls and boys preferred reddish colors over blue and rounded over angular shapes. These findings augment prior evidence of sex-typed toy preferences in infants, but suggest that color and shape do not determine these sex differences. In fact, the direction of influence could be the opposite. Girls may learn to prefer pink, for instance, because the toys that they enjoy playing with are often colored pink. Regarding within sex differences, as opposed to differences between boys and girls, both boys and girls preferred dolls to cars at age 12-months. The preference of young boys for dolls over cars suggests that older boys' avoidance of dolls may be acquired. Similarly, the sex similarities in infants' preferences for colors and shapes suggest that any subsequent sex differences in these preferences may arise from socialization or cognitive gender development rather than inborn factors.
Debiasing the mind through meditation: mindfulness and the sunk-cost bias.
In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.
Verification of Embolic Channel Causing Blindness Following Filler Injection
Ocular complications following cosmetic filler injections are serious situations. This study provided scientific evidence that filler in the facial and the superficial temporal arteries could enter into the orbits and the globes on both sides. We demonstrated the existence of an embolic channel connecting the arterial system of the face to the ophthalmic artery. After the removal of the ocular contents from both eyes, liquid dye was injected into the cannulated channel of the superficial temporal artery in six soft embalmed cadavers and different color dye was injected into the facial artery on both sides successively. The interior sclera was monitored for dye oozing from retrograde ophthalmic perfusion. Among all 12 globes, dye injections from the 12 superficial temporal arteries entered ipsilateral globes in three and the contralateral globe in two arteries. Dye from the facial artery was infused into five ipsilateral globes and in three contralateral globes. Dye injections of two facial arteries in the same cadaver resulted in bilateral globe staining but those of the superficial temporal arteries did not. Direct communications between the same and different arteries of the four cannulated arteries were evidenced by dye dripping from the cannulating needle hubs in 14 of 24 injected arteries. Compression of the orbital rim at the superior nasal corner retarded ocular infusion in 11 of 14 arterial injections. Under some specific conditions favoring embolism, persistent interarterial anastomoses between the face and the eye allowed filler emboli to flow into the globe causing ocular complications.
The Role of Actively Open-Minded Thinking in Information Acquisition, Accuracy, and Calibration
Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.
Virtual reality job interview training in adults with autism spectrum disorder.
The feasibility and efficacy of virtual reality job interview training (VR-JIT) was assessed in a single-blinded randomized controlled trial. Adults with autism spectrum disorder were randomized to VR-JIT (n = 16) or treatment-as-usual (TAU) (n = 10) groups. VR-JIT consisted of simulated job interviews with a virtual character and didactic training. Participants attended 90% of laboratory-based training sessions, found VR-JIT easy to use and enjoyable, and they felt prepared for future interviews. VR-JIT participants had greater improvement during live standardized job interview role-play performances than TAU participants (p = 0.046). A similar pattern was observed for self-reported self-confidence at a trend level (p = 0.060). VR-JIT simulation performance scores increased over time (R² = 0.83). Results indicate preliminary support for the feasibility and efficacy of VR-JIT, which can be administered using computer software or via the internet.
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Audio-Visual Biometrics
Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complementary information to the audio information, and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into everyday life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area.
Risk prediction models for hospital readmission: a systematic review.
CONTEXT Predicting hospital readmission risk is of great interest to identify which patients would benefit most from care transition interventions, as well as to risk-adjust readmission rates for the purposes of hospital comparison. OBJECTIVE To summarize validated readmission risk prediction models, describe their performance, and assess suitability for clinical or administrative use. DATA SOURCES AND STUDY SELECTION The databases of MEDLINE, CINAHL, and the Cochrane Library were searched from inception through March 2011, the EMBASE database was searched through August 2011, and hand searches were performed of the retrieved reference lists. Dual review was conducted to identify studies published in the English language of prediction models tested with medical patients in both derivation and validation cohorts. DATA EXTRACTION Data were extracted on the population, setting, sample size, follow-up interval, readmission rate, model discrimination and calibration, type of data used, and timing of data collection. DATA SYNTHESIS Of 7843 citations reviewed, 30 studies of 26 unique models met the inclusion criteria. The most common outcome used was 30-day readmission; only 1 model specifically addressed preventable readmissions. Fourteen models that relied on retrospective administrative data could be potentially used to risk-adjust readmission rates for hospital comparison; of these, 9 were tested in large US populations and had poor discriminative ability (c statistic range: 0.55-0.65). Seven models could potentially be used to identify high-risk patients for intervention early during a hospitalization (c statistic range: 0.56-0.72), and 5 could be used at hospital discharge (c statistic range: 0.68-0.83). Six studies compared different models in the same population and 2 of these found that functional and social variables improved model discrimination. Although most models incorporated variables for medical comorbidity and use of prior medical services, few examined variables associated with overall health and function, illness severity, or social determinants of health. CONCLUSIONS Most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly. Although in certain settings such models may prove useful, efforts to improve their performance are needed as use becomes more widespread.
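The c statistics quoted above are areas under the ROC curve for predicted readmission risk; for readers who want to see the computation, a minimal example on made-up predictions:

```python
# The c statistic of a risk model is the ROC AUC of its predicted probabilities.
from sklearn.metrics import roc_auc_score

readmitted = [0, 0, 1, 0, 1, 1, 0, 1]                      # observed 30-day readmissions
risk_score = [0.1, 0.3, 0.4, 0.2, 0.8, 0.6, 0.5, 0.7]      # model-predicted risk
print("c statistic:", roc_auc_score(readmitted, risk_score))
```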
Swarm Intelligence in Optimization
Optimization techniques inspired by swarm intelligence have become increasingly popular during the last decade. They are characterized by a decentralized way of working that mimics the behavior of swarms of social insects, flocks of birds, or schools of fish. The advantage of these approaches over traditional techniques is their robustness and flexibility. These properties make swarm intelligence a successful design paradigm for algorithms that deal with increasingly complex problems. In this chapter we focus on two of the most successful examples of optimization techniques inspired by swarm intelligence: ant colony optimization and particle swarm optimization. Ant colony optimization was introduced as a technique for combinatorial optimization in the early 1990s. The inspiring source of ant colony optimization is the foraging behavior of real ant colonies. In addition, particle swarm optimization was introduced for continuous optimization in the mid-1990s, inspired by bird flocking.
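A bare-bones particle swarm optimizer, included only to make the update rule concrete; the inertia and acceleration coefficients are common textbook defaults and the objective is a toy sphere function, not any benchmark from the chapter.

```python
# Minimal PSO: velocities are pulled toward each particle's best and the swarm's best.
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))
print(pso(sphere))            # should approach the optimum at the origin
```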
Ave Maria, FL: A Mixed Methods Material Culture Analysis
Introduction. This article provides an explanation of how the spatial distribution of material culture contributes to the construction of identity in a place-based, domain-centered community of practice. Understanding the spatial distribution of the material culture within a landscape leads to a more nuanced and robust understanding of the cultural meanings and values than can be gained simply from the analysis of the objects themselves. Geographic Information Systems (GIS) is used in this study as a platform to represent the landscape of the community and to conduct quantitative spatial analysis of the material culture in that landscape. The community that serves as a case study in this research is Ave Maria, Florida. The domain focus of Ave Maria is the day-to-day practice of conservative Catholicism. The landscape of this new, urban-style town contains many material cultural traces such as the Oratory (Figure 1), a church structure that is the tallest feature in the landscape, visible throughout the development. It is a focal point that dramatically represents the community's domain focus. Barron Collier Companies, a real estate developer in Florida, and Tom Monaghan, a billionaire conservative Catholic philanthropist, were the two principal developers of Ave Maria. Barron Collier owned extensive tracts of interior South Florida land and wanted to develop it. Tom Monaghan developed a conservative Catholic college that he wanted to expand into a university. The town they designed, Ave Maria, was expected to convert 5,000 acres of interior South Florida agricultural land into a new urban community with 20,000 residents and 5,500 university students (Turner 2010, 36). Planning and permitting for the community began in 2001. Construction of Ave Maria started in 2005, and in 2007 the town opened (Figure 2). In 2010, when fieldwork for this study was conducted, Ave Maria encompassed several residential neighborhoods. A town center featured a number of shops, restaurants, offices, residential condominiums, and the centrally located Oratory. A nascent industrial park was located in the development, as were several urban recreational areas including parks, tennis courts, and a golf course. Adjacent to the town center was the private conservative Catholic university, which served as a cornerstone for community development (Figure 3). It is difficult to overstate the degree of homogeneity that exists in Ave Maria. The 2010 Decennial Census indicated Ave Maria had a total population of 1,279 persons; 78 percent of the population were adults and 91 percent were white. This stands in contrast to the neighboring unincorporated town of Immokalee, which contained 24,154 persons, 66 percent of whom were adults and 44 percent white. The purpose of this article is to explore how the spatial distribution of material culture contributes to the construction of identity in a place-based, domain-centered community of practice. The article is particularly concerned with several ways quantitative GIS can contribute to a broader mixed-methods analysis of spatial distribution. It begins with a brief overview of the portions of the history of material cultural analysis in geography that I have found particularly meaningful. Many other readings of this tradition exist that would rightly emphasize the work of other geographers or other aspects of the work of those whom I mention.
This is followed by an overview of portions of the community of practice literature that inform my work generally, and that I find particularly useful for understanding Ave Maria. I then discuss two quantitative analytical techniques that are implemented through a GIS and the use of surveys. The results and analysis of the fieldwork data suggests the presence of a religious district in the town center that emphasizes the domain focus of the community and provides support for the community's day-to-day practices.Geographies of Material Cultural LandscapesStudying material culture constructed at the scales of geographic space in order to critically understand the values, ideas, attitudes, and assumptions of those who build, maintain, and occupy the landscape has become common in geography. …
Automatic fuzzy ontology generation for semantic Web
Ontology is an effective conceptualism commonly used for the semantic Web. Fuzzy logic can be incorporated into ontology to represent uncertainty information. Typically, fuzzy ontology is generated from a predefined concept hierarchy. However, constructing a concept hierarchy for a certain domain can be a difficult and tedious task. To tackle this problem, this paper proposes FOGA (fuzzy ontology generation framework) for the automatic generation of fuzzy ontology on uncertainty information. The FOGA framework comprises the following components: fuzzy formal concept analysis, concept hierarchy generation, and fuzzy ontology generation. We also discuss approximate reasoning for incremental enrichment of the ontology with new incoming data. Finally, a fuzzy-based technique for integrating other database attributes into the ontology is proposed.
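As a rough illustration of the fuzzy formal concept analysis ingredient, the sketch below derives crisp (extent, intent) pairs from an alpha-cut of a tiny fuzzy object-attribute context; the toy data, the threshold, and the cut-based simplification are assumptions made for illustration and do not reproduce the FOGA algorithm itself.

```python
# Illustrative sketch: deriving crisp formal concepts from an alpha-cut of a
# small fuzzy object-attribute context (data and method are assumptions,
# not the FOGA algorithm itself).
docs = ["d1", "d2", "d3"]
terms = ["cluster", "fuzzy", "ontology"]
membership = {                      # fuzzy context: degree of each term in each doc
    "d1": {"cluster": 0.9, "fuzzy": 0.2, "ontology": 0.7},
    "d2": {"cluster": 0.8, "fuzzy": 0.9, "ontology": 0.6},
    "d3": {"cluster": 0.1, "fuzzy": 0.8, "ontology": 0.9},
}

def alpha_cut(alpha):
    """Return the crisp context {doc: set(terms)} at threshold alpha."""
    return {d: {t for t in terms if membership[d][t] >= alpha} for d in docs}

def intent(doc_set, context):
    """Attributes shared by all objects in doc_set."""
    sets = [context[d] for d in doc_set] or [set(terms)]
    return set.intersection(*sets)

def extent(term_set, context):
    """Objects having all attributes in term_set."""
    return {d for d in docs if term_set <= context[d]}

ctx = alpha_cut(0.6)
for term_subset in [set(), {"ontology"}, {"fuzzy", "ontology"}]:
    e = extent(term_subset, ctx)
    print(sorted(e), sorted(intent(e, ctx)))   # (extent, intent) pairs
```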
Distribution Free Prediction Bands
We study distribution free, nonparametric prediction bands with a special focus on their finite sample behavior. First, we investigate and develop different notions of finite sample coverage guarantees. Then we give a new prediction band estimator by combining the idea of “conformal prediction” (Vovk et al., 2009) with nonparametric conditional density estimation. The proposed estimator, called COPS (Conformal Optimized Prediction Set), always has a finite sample guarantee in a stronger sense than the original conformal prediction estimator. Under regularity conditions the estimator converges to an oracle band at a minimax optimal rate. A fast approximation algorithm and a data-driven method for selecting the bandwidth are developed. The method is illustrated first on simulated data; an application then shows that the proposed method gives desirable prediction intervals in an automatic way, compared to classical linear regression modeling.
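For readers unfamiliar with the conformal idea the estimator builds on, here is a generic split-conformal sketch for regression intervals; it is not the COPS estimator, and the linear fit, the absolute-residual conformity score, and the synthetic data are all assumptions made for illustration.

```python
import numpy as np

def split_conformal_interval(x_train, y_train, x_test, alpha=0.1, seed=0):
    """Split-conformal prediction intervals around a simple 1-D linear fit.

    Generic split-conformal sketch, not the COPS estimator from the paper;
    the linear model and Gaussian noise below are assumptions.
    """
    rng = np.random.default_rng(seed)
    n = len(x_train)
    idx = rng.permutation(n)
    fit_idx, cal_idx = idx[: n // 2], idx[n // 2:]
    # Fit a line on one half of the data.
    coef = np.polyfit(x_train[fit_idx], y_train[fit_idx], deg=1)
    predict = lambda x: np.polyval(coef, x)
    # Calibrate on the other half: conformity scores are absolute residuals.
    scores = np.abs(y_train[cal_idx] - predict(x_train[cal_idx]))
    k = int(np.ceil((len(cal_idx) + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, len(scores)) - 1]
    mu = predict(x_test)
    return mu - q, mu + q

x = np.random.default_rng(1).uniform(0, 10, 200)
y = 2.0 * x + np.random.default_rng(2).normal(0, 1, 200)
band_lo, band_hi = split_conformal_interval(x, y, np.array([5.0]))
print(band_lo, band_hi)   # roughly 90% coverage band around the fitted line at x = 5
```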
Typical vasovagal syncope as a “defense mechanism” for the heart by contrasting sympathetic overactivity
Many observations suggest that typical (emotional or orthostatic) vasovagal syncope (VVS) is not a disease, but rather a manifestation of a non-pathological trait. Some authors have hypothesized this type of syncope as a “defense mechanism” for the organism and a few theories have been postulated. Under the human violent conflicts theory, the VVS evolved during the Paleolithic era only in the human lineage. In this evolutionary period, a predominant cause of death was wounding by a sharp object. This theory could explain the occurrence of emotional VVS, but not of the orthostatic one. The clot production theory suggests that the vasovagal reflex is a defense mechanism against hemorrhage in mammals. This theory could explain orthostatic VVS, but not emotional VVS. The brain self-preservation theory is mainly based on the observation that during tilt testing a decrease in cerebral blood flow often precedes the drop in blood pressure and heart rate. The faint causes the body to take on a gravitationally neutral position, and thereby provides a better chance of restoring brain blood supply. However, a decrease in cerebral blood flow has not been demonstrated during negative emotions, which trigger emotional VVS. Under the heart defense theory, the vasovagal reflex seems to be a protective mechanism against sympathetic overactivity and the heart is the most vulnerable organ during this condition. This appears to be the only unifying theory able to explain the occurrence of the vasovagal reflex and its associated selective advantage, during both orthostatic and emotional stress.
Transmembrane helix prediction using amino acid property features and latent semantic analysis
Prediction of transmembrane (TM) helices by statistical methods suffers from lack of sufficient training data. Current best methods use hundreds or even thousands of free parameters in their models which are tuned to fit the little data available for training. Further, they are often restricted to the generally accepted topology "cytoplasmic-transmembrane-extracellular" and cannot adapt to membrane proteins that do not conform to this topology. Recent crystal structures of channel proteins have revealed novel architectures showing that the above topology may not be as universal as previously believed. Thus, there is a need for methods that can better predict TM helices even in novel topologies and families. Here, we describe a new method "TMpro" to predict TM helices with high accuracy. To avoid overfitting to existing topologies, we have collapsed cytoplasmic and extracellular labels to a single state, non-TM. TMpro is a binary classifier which predicts TM or non-TM using multiple amino acid properties (charge, polarity, aromaticity, size and electronic properties) as features. The features are extracted from sequence information by applying the framework used for latent semantic analysis of text documents and are input to neural networks that learn the distinction between TM and non-TM segments. The model uses only 25 free parameters. In benchmark analysis TMpro achieves 95% segment F-score corresponding to 50% reduction in error rate compared to the best methods not requiring an evolutionary profile of a protein to be known. Performance is also improved when applied to more recent and larger high resolution datasets PDBTM and MPtopo. TMpro predictions in membrane proteins with unusual or disputed TM structure (K+ channel, aquaporin and HIV envelope glycoprotein) are discussed. TMpro uses very few free parameters in modeling TM segments as opposed to the very large number of free parameters used in state-of-the-art membrane prediction methods, yet achieves very high segment accuracies. This is highly advantageous considering that high resolution transmembrane information is available only for very few proteins. The greatest impact of TMpro is therefore expected in the prediction of TM segments in proteins with novel topologies. Further, the paper introduces a novel method of extracting features from protein sequence, namely that of latent semantic analysis model. The success of this approach in the current context suggests that it can find potential applications in other sequence-based analysis problems. http://linzer.blm.cs.cmu.edu/tmpro/ and http://flan.blm.cs.cmu.edu/tmpro/
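The sketch below illustrates the general flavor of window-averaged amino acid property features of the kind described above; the property groupings, the window length, and the example sequence are simplified assumptions, not the exact TMpro feature set, and in the full method such features would be fed to a neural network classifier.

```python
# Sketch of window-based amino acid property features in the spirit of the
# approach described above (property groupings and window size are simplified
# assumptions, not the exact TMpro feature set).
CHARGED  = set("DEKRH")
POLAR    = set("STNQCYW")
AROMATIC = set("FWY")
SMALL    = set("AGSCT")

def property_vector(residue):
    """Binary indicator vector: does the residue belong to each property group?"""
    return [int(residue in group) for group in (CHARGED, POLAR, AROMATIC, SMALL)]

def window_features(sequence, window=15):
    """Average property vectors over a sliding window centred on each residue."""
    half = window // 2
    feats = []
    for i in range(len(sequence)):
        segment = sequence[max(0, i - half): i + half + 1]
        vectors = [property_vector(r) for r in segment]
        feats.append([sum(col) / len(segment) for col in zip(*vectors)])
    return feats

seq = "MKTLLVLAVLFLTGSQARHFWQQDEPPQSPWDRVKDLATVYVDVLKDSGRDYVSQF"  # toy sequence
for position, feat in list(enumerate(window_features(seq)))[:3]:
    print(position, [round(v, 2) for v in feat])
```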
Heat Advisory: Protecting Health on a Warming Planet
The 2016 John S. Curran Lecture, Oct 13: Assessing the Quality of Perinatal Care: Advances and Barriers. Scott Lorch, MD, Associate Professor of Pediatrics at the Perelman School of Medicine at the University of Pennsylvania, and Senior Fellow of the Leonard Davis Institute of Health Economics at the University of Pennsylvania. Please join us for a breakfast reception in the MacInnes Auditorium foyer following the Curran Lecture.
On line PD measurements and diagnosis on power transformers
On-line partial discharge (PD) measurements provide information on the dielectric integrity of the high-voltage insulation of oil-insulated power transformers during service. These PD measurements are performed using a broadband antenna on an oil valve and UHF detection of PD signals. The sensitivity of this new detection technique is high, and immunity to electromagnetic (EM) interference and to corona discharges in air outside the transformer is very good. Dielectric faults can thus be detected at an early stage of development and are classified into four categories. Automatic pattern recognition techniques applied to phase-resolved pulse sequence (PRPS) data are very useful in the risk assessment of dielectric faults.
Fingerprint authentication using geometric features
Biometric-based authentication systems, particularly fingerprint authentication systems, play a vital role in identifying an individual. The existing fingerprint authentication systems depend on specific points known as minutiae for recognizing an individual. Designing a reliable automatic fingerprint authentication system is still very challenging, since not all fingerprint information is available. Further, the information obtained is not always accurate due to cuts, scars, sweat, distortion and various skin conditions. Moreover, the existing fingerprint authentication systems do not utilize other significant minutiae information, which could improve the accuracy. Various local feature detectors such as Difference-of-Gaussian, Hessian, Hessian-Laplace, Harris-Laplace, Multiscale Harris, and Multiscale Hessian have been extensively used for feature detection. However, these detectors have not been employed for detecting fingerprint image features. In this article, a versatile local feature fingerprint matching scheme is proposed. The local features are obtained by exploiting these local geometric detectors and the SIFT descriptor. This scheme considers local characteristic features of the fingerprint image, thus eliminating the issues caused by existing fingerprint feature based matching techniques. Computer simulations of the proposed algorithm on specific databases show significant improvements when compared to existing fingerprint matchers, such as the minutiae matcher, hierarchical matcher and graph-based matcher. Computer simulations conducted on the Neurotechnology database demonstrate a very low Equal Error Rate (EER) of 0.8%. The proposed system a) improves the accuracy of the fingerprint authentication system, b) works when the minutiae information is sparse, and c) produces satisfactory matching accuracy when minutiae information is unavailable. The proposed system can also be employed for partial fingerprint authentication.
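As a minimal illustration of local-feature fingerprint matching, the sketch below detects SIFT keypoints in two images and counts ratio-test matches; it uses only OpenCV's SIFT (not the full set of detectors named above), and the image paths and ratio threshold are placeholders.

```python
# Sketch of local-feature matching with OpenCV SIFT and a ratio test.
# Image paths are placeholders; the single-detector choice is an assumption
# for brevity, not the paper's full detector set.
import cv2

def match_score(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    if img_a is None or img_b is None:
        raise FileNotFoundError("fingerprint images not found")
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe-style ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# score = match_score("probe.png", "gallery.png")   # placeholder paths
# print(score)
```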
Sparse Bayesian Learning and the Relevance Vector Machine
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of 'nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-'Mercer' kernels). We detail the Bayesian framework and associated learning algorithm for the RVM, and give some illustrative examples of its application along with some comparative benchmarks. We offer some explanation for the exceptional degree of sparsity obtained, and discuss and demonstrate some of the advantageous features, and potential extensions, of Bayesian relevance learning.
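scikit-learn does not ship an RVM, but its ARDRegression applies a sparsity-inducing automatic relevance determination prior to a linear-in-parameters model, so a rough approximation of the RVM construction can be sketched by building a kernel basis by hand; the RBF width, the sinc target, and the coefficient cutoff below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-5, 5, 80))
y = np.sinc(x) + rng.normal(0, 0.1, x.size)          # noisy sinc target

# Linear-in-parameters model over an RBF basis centred at the training points,
# in the spirit of the relevance vector machine; ARD prunes most basis functions.
def rbf_design(x_query, centres, width=1.0):
    return np.exp(-((x_query[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

Phi = rbf_design(x, x)
model = ARDRegression().fit(Phi, y)
relevant = np.sum(np.abs(model.coef_) > 1e-3)        # count of retained basis functions
print(f"{relevant} of {x.size} basis functions retained")
print(model.predict(rbf_design(np.array([0.0]), x))) # posterior mean prediction at x = 0
```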
Kineticons: using iconographic motion in graphical user interface design
Icons in graphical user interfaces convey information in a mostly universal fashion that allows users to immediately interact with new applications, systems and devices. In this paper, we define Kineticons - an iconographic scheme based on motion. By motion, we mean geometric manipulations applied to a graphical element over time (e.g., scale, rotation, deformation). In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content or "pixel-space" of an element. Although kineticons are not new - indeed, they are seen in several popular systems - we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape, from something as small as a close button to something as large as a dialog box or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused for a variety of purposes. Part of our contribution is an initial kineticon vocabulary, which we evaluated in a 200-participant study. We conclude with a discussion of our results and design recommendations.
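A kineticon can be thought of as a time-parameterised geometric transform applied to an otherwise unchanged icon; the sketch below computes per-frame scale and translation for a hypothetical "bounce" motion, with the easing, amplitude, and frame rate all chosen arbitrarily for illustration rather than drawn from the paper's vocabulary.

```python
import math

# Illustrative sketch: a kineticon as a time-parameterised geometric transform
# applied to an unchanged icon bitmap. The "bounce" motion and its parameters
# are illustrative assumptions, not part of the paper's kineticon vocabulary.
def bounce_transform(t, period=1.0, amplitude=0.15):
    """Return (scale, rotation_deg, dx, dy) for time t in seconds."""
    phase = (t % period) / period
    squash = 1.0 + amplitude * math.sin(2 * math.pi * phase)   # periodic squash/stretch
    lift = -8.0 * abs(math.sin(math.pi * phase))               # vertical hop in pixels
    return squash, 0.0, 0.0, lift

for frame in range(5):
    t = frame / 30.0                                           # 30 fps timeline
    print(frame, bounce_transform(t))
```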
A Perceptual Colour Segmentation Algorithm
This paper presents a simple method for segmenting colour regions into categories like red, green, blue, and yellow. We are interested in studying how colour categories influence colour selection during scientific visualization. The ability to name individual colours is also important in other problem domains like real-time displays, user-interface design, and medical imaging systems. Our algorithm uses the Munsell and CIE LUV colour models to automatically segment a colour space like RGB or CIE XYZ into ten colour categories. Users are then asked to name a small number of representative colours from each category. This provides three important results: a measure of the perceptual overlap between neighbouring categories, a measure of a category’s strength, and a user-chosen name for each strong category. We evaluated our technique by segmenting known colour regions from the RGB, HSV, and CIE LUV colour models. The names we obtained were accurate, and the boundaries between different colour categories were well defined. We concluded our investigation by conducting an experiment to obtain user-chosen names and perceptual overlap for ten colour categories along the circumference of a colour wheel in CIE LUV.
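To illustrate the kind of categorical segmentation described above, the sketch below converts an RGB colour to CIE LUV with scikit-image and buckets it by hue angle in the u*-v* plane; the category boundaries are crude placeholders, not the Munsell-derived boundaries produced by the paper's algorithm.

```python
import numpy as np
from skimage import color

# Sketch: bucketing colours by hue angle in the CIE LUV u*-v* plane. The
# boundaries below are arbitrary placeholders, not the Munsell-derived
# boundaries produced by the paper's algorithm.
CATEGORIES = [(30, "red"), (90, "yellow"), (150, "green"), (210, "cyan"),
              (270, "blue"), (330, "purple"), (360, "red")]

def name_colour(rgb):
    """rgb is a triple of floats in [0, 1]."""
    _L, u, v = color.rgb2luv(np.array([[rgb]], dtype=float))[0, 0]
    hue = np.degrees(np.arctan2(v, u)) % 360
    for upper, name in CATEGORIES:
        if hue < upper:
            return name
    return "red"

print(name_colour((1.0, 0.0, 0.0)))   # expected: red
print(name_colour((0.0, 0.0, 1.0)))   # expected: blue
```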
A Simple Algorithm for Identifying Negated Findings and Diseases in Discharge Summaries
Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.
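The sketch below captures the spirit of the regular-expression approach: a few negation triggers, a pseudo-negation filter, and a bounded scope after each trigger. The phrase lists and the six-word scope are small illustrative samples, not the published NegEx phrase sets or scope rules.

```python
import re

# Stripped-down sketch in the spirit of the approach described above:
# negation triggers, a pseudo-negation filter, and a limited scope.
NEGATION_TRIGGERS = r"\b(no|denies|without|absence of|negative for)\b"
PSEUDO_NEGATIONS = r"\b(no increase|not certain if|not rule out)\b"
SCOPE = 6   # number of words after a trigger that the negation covers

def is_negated(sentence, finding):
    sentence = sentence.lower()
    finding = finding.lower()
    if re.search(PSEUDO_NEGATIONS, sentence):
        return False                      # phrase only looks like a negation
    for match in re.finditer(NEGATION_TRIGGERS, sentence):
        window = sentence[match.end():].split()[:SCOPE]
        if finding in " ".join(window):
            return True
    return False

print(is_negated("The patient denies chest pain or shortness of breath.", "chest pain"))  # True
print(is_negated("Chest pain present, no fever.", "chest pain"))                          # False
```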
Outcomes in patients with cardiogenic shock following percutaneous coronary intervention in the contemporary era: an analysis from the BCIS database (British Cardiovascular Intervention Society).
OBJECTIVES This study sought to determine mortality rates among cardiogenic shock (CGS) patients undergoing percutaneous coronary intervention (PCI) for acute coronary syndrome in the contemporary treatment era and to determine predictors of mortality. BACKGROUND It is unclear whether recent advances in pharmacological and interventional strategies have resulted in further improvements in short- and long-term mortality and which factors are associated with adverse outcomes in patients presenting with CGS and undergoing PCI in the setting of acute coronary syndrome. METHODS This study analyzed prospectively collected data for patients undergoing PCI in the setting of CGS as recorded in the BCIS (British Cardiovascular Intervention Society) PCI database. RESULTS In England and Wales, 6,489 patients underwent PCI for acute coronary syndrome in the setting of CGS. The mortality rates at 30 days, 90 days, and 1 year were 37.3%, 40.0%, and 44.3%, respectively. On multiple logistic regression analysis, age (for each 10-year increment of age: odds ratio [OR]: 1.59, 95% confidence interval [CI]: 1.51 to 1.68; p < 0.0001), diabetes mellitus (OR: 1.47, 95% CI: 1.28 to 1.70; p < 0.0001), history of renal disease (OR: 2.03, 95% CI: 1.63 to 2.53; p < 0.0001), need for artificial mechanical ventilation (OR: 2.56, 95% CI: 2.23 to 2.94; p < 0.0001), intra-aortic balloon pump use (OR: 1.57, 95% CI: 1.40 to 1.76; p < 0.0001), and need for left main stem PCI (OR: 1.90, 95% CI: 1.62 to 2.23; p < 0.0001) were associated with higher mortality at 1 year. CONCLUSIONS In this large U.K. cohort of patients undergoing PCI in the context of CGS, mortality remains high in spite of the use of contemporary PCI strategies. The highest mortality occurs early, and this time period may be a particular target of therapeutic intervention.
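For readers less familiar with how per-predictor odds ratios such as those above are obtained, the sketch below fits a multiple logistic regression on synthetic data and exponentiates the coefficients; the data and effect sizes are invented and bear no relation to the BCIS registry.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of deriving per-predictor odds ratios from a multiple
# logistic regression; all data below are synthetic.
rng = np.random.default_rng(0)
n = 5000
age_decades = rng.normal(6.5, 1.2, n)            # age expressed in 10-year units
diabetes = rng.binomial(1, 0.25, n)
logit = -4.0 + 0.45 * age_decades + 0.40 * diabetes
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age_decades, diabetes])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, death)   # near-unpenalised fit
odds_ratios = np.exp(model.coef_[0])                             # OR = exp(coefficient)
print(dict(zip(["age (per 10 years)", "diabetes"], odds_ratios.round(2))))
```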
Combination of two urinary biomarkers predicts acute kidney injury after adult cardiac surgery.
BACKGROUND Urinary L-type fatty acid-binding protein (L-FABP) has not been evaluated for adult post-cardiac surgery acute kidney injury (AKI) to date. This study was undertaken to evaluate a biomarker panel consisting of urinary L-FABP and N-acetyl-β-D-glucosaminidase (NAG), a more established urinary marker of kidney injury, for AKI diagnosis in adult post-cardiac surgery patients. METHODS This study prospectively evaluated 77 adult patients who underwent cardiac surgery at 2 general hospitals. Urinary L-FABP and NAG were measured before surgery, at intensive care unit arrival after surgery (0 hours), 4, and 12 hours after arrival. The AKI was diagnosed by the Acute Kidney Injury Network criteria. RESULTS Of 77 patients, 28 patients (36.4%) developed AKI after surgery. Urinary L-FABP and NAG were significantly increased. However, receiver operating characteristic (ROC) analysis revealed that the biomarkers' performance was statistically significant but limited for clinical translation (area under the curve of ROC [AUC-ROC] for L-FABP at 4 hours 0.72 and NAG 0.75). Urinary L-FABP showed high sensitivity and NAG detected AKI with high specificity. Therefore, we combined these 2 biomarkers, which revealed that this combination panel can detect AKI with higher accuracy than either biomarker measurement alone (AUC-ROC 0.81). Moreover, this biomarker panel improved AKI risk prediction significantly compared with predictions made using the clinical model alone. CONCLUSIONS When urinary L-FABP and NAG are combined, they can detect AKI adequately, even in a heterogeneous population of adult post-cardiac surgery AKI. Combining 2 markers with different sensitivity and specificity presents a reasonable strategy to improve the diagnostic performance of biomarkers.
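The sketch below illustrates the general strategy of combining two biomarkers into a single panel score and comparing AUCs; the synthetic measurements and the logistic-regression combination rule are assumptions for illustration, not the study's data or statistical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of combining two biomarkers into one panel score and comparing AUCs,
# analogous to the strategy described above; data are synthetic stand-ins.
rng = np.random.default_rng(42)
n = 300
aki = rng.binomial(1, 0.35, n)
marker_a = rng.normal(1.0 + 0.8 * aki, 0.9, n)   # sensitive but less specific marker
marker_b = rng.normal(0.5 + 0.6 * aki, 0.5, n)   # specific but less sensitive marker

X = np.column_stack([marker_a, marker_b])
panel = LogisticRegression().fit(X, aki).predict_proba(X)[:, 1]

print("AUC marker A:", round(roc_auc_score(aki, marker_a), 2))
print("AUC marker B:", round(roc_auc_score(aki, marker_b), 2))
print("AUC panel   :", round(roc_auc_score(aki, panel), 2))
```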
Performance analysis of precoding techniques for Massive MU-MIMO systems
Massive multi-user multiple-input multiple-output (MU-MIMO) systems can yield a considerable increase in capacity and an improvement in BER, meeting the requirement for higher data rates by using a large number of antennas at the transmitter and receiver. In multi-user MIMO, a multi-antenna transmitter communicates simultaneously with multiple receivers, but the system suffers from severe inter-user interference. By using precoding techniques in massive MU-MIMO, this inter-user interference is analyzed and can be substantially reduced. In precoding, transmit diversity is exploited by weighting the information streams so that the corrupting effects of the communication channel are reduced; for this purpose, channel state information (CSI) must be known at the transmitter. The analysis shows that, at an SNR of 6 dB, MMSE precoding achieves an error rate of 10^-3.65 compared to block diagonalization and 10^-3.62 compared to the Tomlinson-Harashima precoding method for massive MU-MIMO systems.
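A minimal sketch of linear MMSE (regularised zero-forcing) precoding for a multi-user downlink is given below; the antenna counts, Rayleigh channel model, BPSK symbols, and power normalisation are illustrative assumptions rather than the simulation settings of the paper.

```python
import numpy as np

# Minimal sketch of linear MMSE precoding for a multi-user MIMO downlink;
# dimensions and channel model are illustrative assumptions.
rng = np.random.default_rng(0)
n_tx, n_users = 64, 8                    # many base-station antennas, single-antenna users
snr_db = 6
noise_var = 10 ** (-snr_db / 10)

# i.i.d. Rayleigh channel: rows are user channels.
H = (rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

# MMSE precoder: W = H^H (H H^H + sigma^2 I)^-1, normalised to unit transmit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_users))
W /= np.linalg.norm(W)

s = (rng.integers(0, 2, n_users) * 2 - 1).astype(complex)     # BPSK symbols, one per user
x = W @ s                                                     # precoded transmit vector
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_users) + 1j * rng.normal(size=n_users))
y = H @ x + noise
print(np.sign(y.real) == np.sign(s.real))                     # per-user detection check
```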
Quantum computation in brain microtubules: decoherence and biological feasibility.
The Penrose-Hameroff orchestrated objective reduction (orch. OR) model assigns a cognitive role to quantum computations in microtubules within the neurons of the brain. Despite an apparently "warm, wet, and noisy" intracellular milieu, the proposal suggests that microtubules avoid environmental decoherence long enough to reach threshold for "self-collapse" (objective reduction) by a quantum gravity mechanism put forth by Penrose. The model has been criticized as regards the issue of environmental decoherence, and a recent report by Tegmark finds that microtubules can maintain quantum coherence for only 10(-13) s, far too short to be neurophysiologically relevant. Here, we critically examine the decoherence mechanisms likely to dominate in a biological setting and find that (1) Tegmark's commentary is not aimed at an existing model in the literature but rather at a hybrid that replaces the superposed protein conformations of the orch. OR theory with a soliton in superposition along the microtubule; (2) recalculation after correcting for differences between the model on which Tegmark bases his calculations and the orch. OR model (superposition separation, charge vs dipole, dielectric constant) lengthens the decoherence time to 10(-5)-10(-4) s; (3) decoherence times on this order invalidate the assumptions of the derivation and determine the approximation regime considered by Tegmark to be inappropriate to the orch. OR superposition; (4) Tegmark's formulation yields decoherence times that increase with temperature contrary to well-established physical intuitions and the observed behavior of quantum coherent states; (5) incoherent metabolic energy supplied to the collective dynamics ordering water in the vicinity of microtubules at a rate exceeding that of decoherence can counter decoherence effects (in the same way that lasers avoid decoherence at room temperature); (6) microtubules are surrounded by a Debye layer of counterions, which can screen thermal fluctuations, and by an actin gel that might enhance the ordering of water in bundles of microtubules, further increasing the decoherence-free zone by an order of magnitude and, if the dependence on the distance between environmental ion and superposed state is accurately reflected in Tegmark's calculation, extending decoherence times by three orders of magnitude; (7) topological quantum computation in microtubules may be error correcting, resistant to decoherence; and (8) the decohering effect of radiative scatterers on microtubule quantum states is negligible. These considerations bring microtubule decoherence into a regime in which quantum gravity could interact with neurophysiology.
Comparison of glucosamine sulfate and a polyherbal supplement for the relief of osteoarthritis of the knee: a randomized controlled trial [ISRCTN25438351]
BACKGROUND The efficacy and safety of a dietary supplement derived from South American botanicals were compared to glucosamine sulfate in osteoarthritis subjects in a Mumbai-based multi-center, randomized, double-blind study. METHODS Subjects (n = 95) were screened and randomized to receive glucosamine sulfate (n = 47, 1500 mg/day) or reparagen (n = 48, 1800 mg/day), a polyherbal consisting of 300 mg of vincaria (Uncaria guianensis) and 1500 mg of RNI 249 (Lepidium meyenii), administered orally, twice daily. The primary efficacy variable was the response rate based on a 20% improvement in WOMAC pain scores. Additional outcomes were WOMAC scores for pain, stiffness and function, and the visual analog scale (VAS) for pain, with assessments at 1, 2, 4, 6 and 8 weeks. Tolerability, investigator and subject global assessments and rescue medication consumption (paracetamol) were measured together with safety assessments including vital signs and laboratory-based assays. RESULTS Subject randomization was effective: age, gender and disease status distribution was similar in both groups. The response rates (20% reduction in WOMAC pain) were substantial for both glucosamine (89%) and reparagen (94%) and supported by investigator and subject assessments. Using related criteria, response rates to reparagen were favorable when compared to glucosamine. Compared to baseline, both treatments showed significant benefits in WOMAC and VAS outcomes within one week (P < 0.05), with a similar, progressive improvement over the course of the 8-week treatment protocol (45-62% reduction in WOMAC or VAS scores). Tolerability was excellent, no serious adverse events were noted and safety parameters were unchanged. Rescue medication use was significantly lower in the reparagen group (p < 0.01) at each assessment period. Serum IGF-1 levels were unaltered by treatments. CONCLUSION Both reparagen and glucosamine sulfate produced substantial improvements in pain, stiffness and function in subjects with osteoarthritis. Response rates were high and the safety profile was excellent, with significantly less rescue medication use with reparagen. Reparagen represents a new natural product alternative in the management of joint health. TRIAL REGISTRATION Current Controlled Trials ISRCTN25438351.
Neurologic features of Hutchinson-Gilford progeria syndrome after lonafarnib treatment.
OBJECTIVES The objective of this study was to retrospectively evaluate neurologic status pre- and posttreatment with the oral farnesyltransferase inhibitor lonafarnib in children with Hutchinson-Gilford progeria syndrome (HGPS), a rare, fatal disorder of segmental premature aging that results in early death by myocardial infarction or stroke. METHODS The primary outcome measure for intervention with lonafarnib was to assess increase over pretherapy in estimated annual rate of weight gain. In this study, neurologic signs and symptoms were compared pre- and posttreatment with lonafarnib. RESULTS Twenty-six participants were treated for a minimum of 2 years. Frequency of clinical strokes, headaches, and seizures was reduced from pretrial rates. Three patients with a history of frequent TIAs and average clinical stroke frequency of 1.75/year during the year before treatment experienced no new events during treatment. One patient with a history of stroke died due to large-vessel hemispheric stroke after 5 months on treatment. Headache prevalence and frequency were reduced. Four patients exhibited pretherapy seizures and no patients experienced recurrent or new-onset seizures. CONCLUSIONS This study provides preliminary evidence that lonafarnib therapy may improve neurologic status of children with HGPS. To address this question, we have incorporated prospective neuroimaging and neurologic assessments as measures in subsequent studies involving children with HGPS. CLASSIFICATION OF EVIDENCE This study provides Class IV evidence that lonafarnib 115-150 mg/m(2) for 24 to 29 months reduces the prevalence of stroke and TIA and the prevalence and frequency of headache over the treatment period.