title | abstract
---|---
Breast cancer detection using automated whole breast ultrasound and mammography in radiographically dense breasts | Mammography, the standard method of breast cancer screening, misses many cancers, especially in dense-breasted women. We compared the performance and diagnostic yield of mammography alone versus an automated whole breast ultrasound (AWBU) plus mammography in women with dense breasts and/or at elevated risk of breast cancer. AWBU screening was tested in 4,419 women having routine mammography (Trial Registration: ClinicalTrials.gov Identifier: NCT00649337). Cancers occurring during the study and subsequent 1-year follow-up were evaluated. Sensitivity, specificity and positive predictive value (PPV) of biopsy recommendation for mammography alone, AWBU and mammography with AWBU were calculated. Breast cancer detection doubled from 23 to 46 in 6,425 studies using AWBU with mammography, resulting in an increase in diagnostic yield from 3.6 per 1,000 with mammography alone to 7.2 per 1,000 by adding AWBU. PPV for biopsy based on mammography findings was 39.0% and for AWBU 38.4%. The number of detected invasive cancers 10 mm or less in size tripled from 7 to 21 when AWBU findings were added to mammography. AWBU resulted in significant cancer detection improvement compared with mammography alone. Additional detection and the smaller size of invasive cancers may justify this technology’s expense for women with dense breasts and/or at high risk for breast cancer. |
Demographics, injury characteristics and outcome of traumatic brain injuries in northern Sweden. | OBJECTIVES - To describe demographics, injury characteristics and outcome of traumatic brain injury (TBI) in northern Sweden over 10 years. MATERIAL AND METHODS - Data were retrospectively collected on those individuals (n = 332) in Norrbotten, northern Sweden, with a TBI who had been transferred for neurosurgical care from 1992 to 2001. RESULTS - A majority were older men with a mild TBI and an acute or chronic subdural hematoma following a fall. Younger individuals were fewer but more often had a severe TBI from a traffic accident. Most individuals received post-acute care and brain injury rehabilitation. A majority had a moderate or severe disability, but many were discharged back home with no major changes in their physical or social environment. CONCLUSIONS - Our data confirm the relationship between age, cause of injury, injury severity and outcome in relation to TBI and underscore the need for prevention as well as the importance of TBI as a cause of long-term disability. |
Culture-dependent and -independent methods to investigate the microbial ecology of Italian fermented sausages. | In this study, the microbial ecology of three naturally fermented sausages produced in northeast Italy was studied by culture-dependent and -independent methods. By plating analysis, the predominance of lactic acid bacteria populations was pointed out, as well as the importance of coagulase-negative cocci. In one fermentation, fecal enterococci also reached significant counts, highlighting their contribution to that particular transformation process. Yeast counts were higher than the detection limit (> 100 CFU/g) in only one fermented sausage. Analysis of the denaturing gradient gel electrophoresis (DGGE) patterns and sequencing of the bands allowed profiling of the microbial populations present in the sausages during fermentation. The bacterial ecology was mainly characterized by the stable presence of Lactobacillus curvatus and Lactobacillus sakei, but Lactobacillus paracasei was also repeatedly detected. An important piece of evidence was the presence of Lactococcus garvieae, which clearly contributed to two of the fermentations. Several species of Staphylococcus were also detected. Regarding other bacterial groups, Bacillus sp., Ruminococcus sp., and Macrococcus caseolyticus were also identified at the beginning of the transformations. In addition, yeast species belonging to Debaryomyces hansenii, several Candida species, and Williopsis saturnus were observed in the DGGE gels. Finally, cluster analysis of the bacterial and yeast DGGE profiles highlighted the uniqueness of the fermentation processes studied. |
A Practical Bayesian Framework for Backpropagation Networks | A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained. |
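To make the evidence framework concrete, here is a minimal sketch for the special case of a linear-in-parameters model with a Gaussian prior and Gaussian noise (the neural-network case in the paper works with a local Gaussian approximation around a weight optimum); function and variable names are illustrative, not from the paper:

```python
import numpy as np

def log_evidence(Phi, t, alpha, beta):
    """Log evidence for t ~ N(Phi w, 1/beta) with prior w ~ N(0, I/alpha).
    Returns the log marginal likelihood and the effective number of
    well-determined parameters gamma."""
    N, M = Phi.shape
    A = alpha * np.eye(M) + beta * Phi.T @ Phi          # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ t)            # posterior mean
    misfit = beta / 2 * np.sum((t - Phi @ m) ** 2) + alpha / 2 * m @ m
    log_ev = (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - misfit
              - np.linalg.slogdet(A)[1] / 2 - N / 2 * np.log(2 * np.pi))
    gamma = M - alpha * np.trace(np.linalg.inv(A))      # well-determined parameters
    return log_ev, gamma
```

Comparing log_ev across candidate models (architectures, regularizer strengths) realizes the Occam's razor effect: overcomplex models pay a complexity penalty through the determinant term.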
Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-Based Interventions | BACKGROUND
Although web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black-box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions to help explain and understand adherence.
OBJECTIVE
This study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention.
METHODS
We conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence.
RESULTS
We included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to the intervention. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p=.004), set-up (p<.001), updates (p<.001), frequency of interaction with a counselor (p<.001), the system (p=.003) and peers (p=.017), duration (F=6.068, p=.004), adherence (F=4.833, p=.010) and the number of primary task support elements (F=5.631, p=.005). Our final regression model explained 55% of the variance in adherence. In this model, an RCT study design (as opposed to an observational design), increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence.
CONCLUSIONS
Using intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere. |
Genetic malformations of cortical development | The malformations of the cerebral cortex represent a major cause of developmental disabilities, severe epilepsy and reproductive disadvantage. The advent of high-resolution MRI techniques has facilitated the in vivo identification of a large group of cortical malformation phenotypes. Several malformation syndromes caused by abnormal cortical development have been recognised and specific causative gene defects have been identified. Periventricular nodular heterotopia (PNH) is a malformation of neuronal migration in which a subset of neurons fails to migrate into the developing cerebral cortex. X-linked PNH is mainly seen in females and is often associated with focal epilepsy. FLNA mutations have been reported in all familial cases and in about 25% of sporadic patients. A rare recessive form of PNH due to ARFGEF2 gene mutations has also been reported in children with microcephaly, severe delay and early seizures. Lissencephaly-pachygyria and subcortical band heterotopia (SBH) are disorders of neuronal migration and represent a malformative spectrum resulting from mutations of either the LIS1 or DCX genes. LIS1 mutations cause a more severe malformation in the posterior brain regions. Most children have severe developmental delay and infantile spasms, but milder phenotypes are on record, including posterior SBH owing to mosaic mutations of LIS1. DCX mutations usually cause anteriorly predominant lissencephaly in males and SBH in female patients. Mutations of DCX have also been found in male patients with anterior SBH and in female relatives with normal brain magnetic resonance imaging. Autosomal recessive lissencephaly with cerebellar hypoplasia, accompanied by severe delay, hypotonia, and seizures, has been associated with mutations of the reelin (RELN) gene. X-linked lissencephaly with corpus callosum agenesis and ambiguous genitalia in genotypic males is associated with mutations of the ARX gene. Affected boys have severe delay and seizures with suppression-burst EEG. Early death is frequent. Carrier female patients can have isolated corpus callosum agenesis. Among several syndromes featuring polymicrogyria, bilateral perisylvian polymicrogyria shows genetic heterogeneity, including linkage to chromosome Xq28 in some pedigrees, autosomal dominant or recessive inheritance in others, and an association with chromosome 22q11.2 deletion in some patients. About 65% of patients have severe epilepsy. Recessive bilateral frontoparietal polymicrogyria has been associated with mutations of the GPR56 gene. Epilepsy is often present in patients with cortical malformations and tends to be severe, although its incidence and type vary in different malformations. It is estimated that up to 40% of children with drug-resistant epilepsy have a cortical malformation. However, the physiopathological mechanisms relating cortical malformations to epilepsy remain elusive. |
Pregel: a system for large-scale graph processing | Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs (in some cases billions of vertices and trillions of edges) poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program. |
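As a loose illustration of the vertex-centric style, the sketch below runs the classic maximum-value propagation example on a single machine; the real system distributes vertices over workers and adds combiners, checkpointing, and fault tolerance, none of which is modeled here:

```python
def pregel_max(graph, value):
    """graph: {v: [out-neighbors]}; value: {v: initial value}.
    Each superstep, active vertices read messages from the previous
    superstep, update their value, and message their out-neighbors."""
    inbox = {}
    for v in graph:                              # superstep 0: announce own value
        for u in graph[v]:
            inbox.setdefault(u, []).append(value[v])
    while inbox:                                 # halt when no messages in flight
        next_inbox = {}
        for v, msgs in inbox.items():
            m = max(msgs)
            if m > value[v]:                     # changed: stay active and re-send
                value[v] = m
                for u in graph[v]:
                    next_inbox.setdefault(u, []).append(m)
        inbox = next_inbox
    return value
```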
A committee of neural networks for traffic sign classification | We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%. We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance. |
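A committee in this sense is simply an average of the class posteriors produced by several independently trained nets before taking the argmax; a minimal sketch (array shapes are assumptions, not details from the paper):

```python
import numpy as np

def committee_predict(softmax_outputs):
    """softmax_outputs: list of (n_samples, n_classes) arrays, one per net.
    Averaging posteriors before the argmax reduces the impact of any
    single net's idiosyncratic errors."""
    avg = np.mean(np.stack(softmax_outputs), axis=0)
    return np.argmax(avg, axis=1)
```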
An Ethological Approach to Personality Development | A distinctive feature of the theory of attachment that we have jointly developed is that it is an ethological approach to personality development. We have had a long and happy partnership in pursuing this approach. In this article we wish to give a brief historical account of the initially separate but compatible approaches that eventually merged in the partnership, and how our contributions have intertwined in the course of developing an ethologically oriented theory of attachment and a body of research that has both stemmed from the theory and served to extend and elaborate it. |
Modeling and Control of a Small Autonomous Aircraft Having Two Tilting Rotors | This paper presents recent work concerning a small tiltrotor aircraft with a reduced number of rotors. The design consists of two propellers which can tilt laterally and longitudinally. A model of the full birotor dynamics is provided, and a controller based on the backstepping procedure is synthesized for the purposes of stabilization and trajectory tracking. The proposed control strategy has been tested in simulation. |
PGA: Using Graphs to Express and Automatically Reconcile Network Policies | Software Defined Networking (SDN) and cloud automation enable a large number of diverse parties (network operators, application admins, tenants/end-users) and control programs (SDN Apps, network services) to generate network policies independently and dynamically. Yet existing policy abstractions and frameworks do not support natural expression and automatic composition of high-level policies from diverse sources. We tackle the open problem of automatic, correct and fast composition of multiple independently specified network policies. We first develop a high-level Policy Graph Abstraction (PGA) that allows network policies to be expressed simply and independently, and leverage the graph structure to detect and resolve policy conflicts efficiently. Besides supporting ACL policies, PGA also models and composes service chaining policies, i.e., the sequence of middleboxes to be traversed, by merging multiple service chain requirements into conflict-free composed chains. Our system validation using a large enterprise network policy dataset demonstrates practical composition times even for very large inputs, with only sub-millisecond runtime latencies. |
Recognition of psychotherapy effectiveness: the APA resolution. | In August 2012, the American Psychological Association (APA) Council of Representatives voted overwhelmingly to adopt as APA policy a Resolution on the Recognition of Psychotherapy Effectiveness. This invited article traces the origins and intentions of that resolution and its protracted journey through the APA governance labyrinth. We summarize the planned dissemination and projected results of the resolution and identify several lessons learned through the entire process. |
Some questions in fuzzy metric spaces | The George and Veeramani fuzzy metric defined by M∗(x, y, t) = (min{x, y} + t)/(max{x, y} + t) on [0, ∞[ (the set of non-negative real numbers) has shown some advantages over classical metrics in the process of filtering images. In this paper we study this fuzzy metric, and other fuzzy metrics related to it, from a mathematical point of view. As a consequence of this study we introduce, throughout the paper, some questions relative to fuzzy metrics. Also, as another practical application, we show that this fuzzy metric is useful for measuring perceptual colour differences between colour samples. |
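For readers who want to experiment, the metric itself is a one-liner; the example values are illustrative only:

```python
def fuzzy_metric(x, y, t):
    """M*(x, y, t) = (min{x, y} + t) / (max{x, y} + t) on the non-negative
    reals; values near 1 mean x and y are close at scale t > 0."""
    return (min(x, y) + t) / (max(x, y) + t)

print(fuzzy_metric(100, 100, 1))   # 1.0: identical gray levels
print(fuzzy_metric(100, 110, 1))   # ~0.91: nearby gray levels
```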
Competitive Wage Cycles with Imperfect Output Market Competition | We consider a model of a sector in which the same set of oligopolistic firms faces a common labour supply constraint. The wage is given in the short run, adjusting competitively in the longer run. When the costs of job creation are low relative to the degree of output market power, there exists no wage that clears the labour market in the short run, and at some wages there are two equilibria, one with involuntary unemployment and one with unfilled vacancies. The competitive wage dynamics produce a cycle with persistent labour market disequilibrium and recurrent periods of involuntary unemployment. |
Anesthetic efficacy of metomidate and comparison of plasma cortisol responses to tricaine methanesulfonate, quinaldine and clove oil anesthetized channel catfish Ictalurus punctatus | The present experiments were designed to determine the efficacy of metomidate hydrochloride as an alternative anesthetic with potential cortisol-blocking properties for channel catfish Ictalurus punctatus. Channel catfish (75 g) were exposed to concentrations of metomidate ranging from 0.5 to 16 ppm for a period of 60 min. At 16 ppm metomidate, mortality occurred in 65% of the catfish. No mortalities were observed at concentrations of 8 ppm or less. The minimum concentration of metomidate producing desirable anesthetic properties was 6 ppm. At this concentration, acceptable induction and recovery times were observed in catfish ranging from 3 to 810 g average body weight. Plasma cortisol levels during metomidate anesthesia (6 ppm) were compared to fish anesthetized with tricaine methanesulfonate (100 ppm), quinaldine (30 ppm) and clove oil (100 ppm). Cortisol levels of catfish treated with metomidate and clove oil remained at baseline levels during 30 min of anesthesia (P > 0.05). Plasma cortisol levels of tricaine methanesulfonate and quinaldine anesthetized catfish peaked approximately eight- and fourfold higher (P < 0.05), respectively, than those of fish treated with metomidate. These results suggest that the physiological disturbance of channel catfish during routine handling procedures and stress-related research could be reduced through the use of metomidate as an anesthetic. |
State of the Art for the Biosorption Process—a Review | In recent years, the biosorption process has become an economic and eco-friendly alternative treatment technology in the water and wastewater industry. In this light, a number of biosorbents have been developed and are successfully employed for treating various pollutants, including metals, dyes, phenols, fluoride, and pharmaceuticals, in solutions (aqueous/oil). However, a few technical barriers still impede the commercialization of the biosorption process, and to overcome them there has been steadily growing interest in this research field, resulting in large numbers of publications and patents each year. This review reports the state of the art in biosorption research. We provide a compendium of know-how in laboratory methodology, mathematical modeling of equilibrium and kinetics, and identification of the biosorption mechanism. Various mathematical models of biosorption are discussed, covering the process in packed-bed column arrangements as well as by suspended biomass. Particular attention is paid to patents in biosorption and to pilot-scale systems. In addition, we outline future directions in biosorption research. |
THE KTH-TIPS database | This document provides a brief Users’ Guide to the KTH-TIPS image database (KTH is the abbreviation of our university, and TIPS stands for Textures under varying Illumination, Pose and Scale). The guide describes which materials are contained in the database (Section 2), how images were acquired (Section 3) and subsequently cropped to remove the background (Section 4), and we also discuss some non-ideal artifacts, like poor focus, in some pictures (Section 5). This document concludes by outlining how we intend to extend the database in the future (Section 6). |
ADHD comorbidity findings from the MTA study: comparing comorbid subgroups. | OBJECTIVES
Previous research has been inconclusive as to whether attention-deficit/hyperactivity disorder (ADHD), when comorbid with disruptive disorders (oppositional defiant disorder [ODD] or conduct disorder [CD]), with the internalizing disorders (anxiety and/or depression), or with both, should constitute separate clinical entities. Determination of the clinical significance of potential ADHD + internalizing disorder or ADHD + ODD/CD syndromes could yield better diagnostic decision-making, treatment planning, and treatment outcomes.
METHOD
Drawing upon cross-sectional and longitudinal information from 579 children (aged 7-9.9 years) with ADHD participating in the NIMH Collaborative Multisite Multimodal Treatment Study of Children With Attention-Deficit/Hyperactivity Disorder (MTA), investigators applied validational criteria to compare ADHD subjects with and without comorbid internalizing disorders and ODD/CD.
RESULTS
Substantial evidence of main effects of internalizing and externalizing comorbid disorders was found. Moderate evidence of an interaction of parent-reported anxiety and ODD/CD status was noted on response to treatment, indicating that children with ADHD and anxiety disorders (but no ODD/CD) were likely to respond equally well to the MTA behavioral and medication treatments. Children with ADHD-only or ADHD with ODD/CD (but without anxiety disorders) responded best to MTA medication treatments (with or without behavioral treatments), while children with multiple comorbid disorders (anxiety and ODD/CD) responded optimally to combined (medication and behavioral) treatments.
CONCLUSIONS
Findings indicate that three clinical profiles, namely ADHD co-occurring with internalizing disorders (principally parent-reported anxiety disorders) absent any concurrent disruptive disorder (ADHD + ANX); ADHD co-occurring with ODD/CD but no anxiety (ADHD + ODD/CD); and ADHD with both anxiety and ODD/CD (ADHD + ANX + ODD/CD), may be sufficiently distinct to warrant classification as ADHD subtypes different from "pure" ADHD with neither comorbidity. Future clinical, etiological, and genetics research should explore the merits of these three ADHD classification options. |
Exploring Customer Preferences with Probabilistic Topic Models | Customer preference learning and recommendation for (e-)commerce is a widely researched problem for which a number of different solutions have been proposed. In this study we propose and implement a novel approach to the problem of extracting and modelling user preferences in commerce using latent topic models. We explore the use of probabilistic topic models on transaction itemsets, considering both single one-time actions and customers' shopping history. We conclude that the extracted latent models not only provide insight into consumer behaviour but can also effectively support an item recommender system. |
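A rough sketch of the core idea, treating each transaction's itemset as a bag-of-items "document" and fitting a standard topic model (the baskets and the scikit-learn pipeline here are illustrative assumptions, not the paper's setup):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

baskets = [["milk", "bread", "butter"], ["beer", "chips"],
           ["milk", "cereal", "bread"], ["beer", "chips", "salsa"]]
docs = [" ".join(b) for b in baskets]            # one "document" per transaction

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)   # per-transaction topic mixtures = latent preferences
```

The inferred mixtures theta can then score unseen items for recommendation, for example by ranking items under each customer's dominant topics.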
Joint Beat and Downbeat Tracking with Recurrent Neural Networks | In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles. |
Endoscopic Subtotal Thyroidectomy: The Procedure of Choice for Graves’ disease? | The aim of this study was to evaluate the feasibility and outcomes of endoscopic subtotal thyroidectomy for Graves’ disease. From August 1998 to April 2008, a total of 100 patients with benign thyroid diseases underwent endoscopic thyroidectomy via the breast approach. Among these patients, 42 underwent subtotal thyroidectomy for Graves’ disease. The resection was successfully completed endoscopically in 41 patients (98%). Overall, the mean operating time, mean blood loss, and mean resected thyroid weight were 277 minutes, 76 ml, and 49.9 g, respectively. As the resected thyroid weight increased, the operating time was significantly prolonged and the blood loss significantly increased. Morbidities included one permanent and one temporary case of recurrent laryngeal nerve palsy with hypocalcemia. A hypertrophic scar was seen in the right breast medial margin in three men. Thyroid function was classified as euthyroidism, hypothyroidism, and recurrent hyperthyroidism in 5, 34, and 3 patients, respectively. At 92 months of median follow-up, two patients had modest operation-associated symptoms: one with swallowing discomfort and another with paresthesia in the anterior chest wall at the time of discharge. However, both patients’ symptoms disappeared within 36 months after surgery. Young women were highly satisfied, with an overall mean satisfaction rating of 9.3 points. Although the endoscopic approach may be relatively contraindicated for large thyroid glands, endoscopic subtotal thyroidectomy via the breast approach is a safe, feasible procedure with excellent cosmetic benefits, and it may be the procedure of choice in carefully selected patients with Graves’ disease. |
Perseverance in self-perception and social perception: biased attributional processes in the debriefing paradigm. | Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task, and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In Experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard "outcome" debriefing. "Process" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perseverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed. |
DualBlink: A Wearable Device to Continuously Detect, Track, and Actuate Blinking For Alleviating Dry Eyes and Computer Vision Syndrome | Increased visual attention, such as during computer use, leads to less blinking, which can cause dry eyes—the leading cause of computer vision syndrome. As people spend more time looking at screens on mobile and desktop devices, computer vision syndrome is becoming epidemic in today's population, leading to blurry vision, fatigue, and a reduced quality of life.
One way to alleviate dry eyes is increased blinking. In this paper, we present a series of glasses-mounted devices that track the wearer's blink rate and, when blinks are absent, trigger blinks through actuation: light flashes, physical taps, and small puffs of air near the eye. We conducted a user study to evaluate the effectiveness of our devices and found that air puff and physical tap actuations result in a 36% increase in participants' average blink rate. The air puff thereby struck the best compromise between effective blink actuation and low distraction ratings from participants. In a follow-up study, we found that high-intensity, short puffs near the eye were most effective in triggering blinks while receiving only low distraction and invasiveness ratings from participants. We conclude this paper with two miniaturized and self-contained DualBlink prototypes, one integrated into the frame of a pair of glasses and the other one as a clip-on for existing glasses. We believe that DualBlink can serve as an always-available and viable option to treat computer vision syndrome in the future. |
Question Asking During Tutoring | Whereas it is well documented that student question asking is infrequent in classroom environments, there is little research on questioning processes during tutoring. The present study investigated the questions asked in tutoring sessions on research methods (college students) and algebra (7th graders). Student questions were approximately 240 times as frequent in tutoring settings as in classroom settings, whereas tutor questions were only slightly more frequent than teacher questions. Questions were classified by (a) degree of specification, (b) content, and (c) question-generation mechanism to analyze their quality. Student achievement was positively correlated with the quality of student questions after students had some experience with tutoring, but the frequency of questions was not correlated with achievement. Students partially self-regulated their learning by identifying knowledge deficits and asking questions to repair them, but they need training to improve these skills. We identified some ways that tutors and teachers might improve their question-asking skills. |
Ask Me Any Rating: A Content-based Recommender System based on Recurrent Neural Networks | In this work we propose Ask Me Any Rating (AMAR), a novel content-based recommender system based on deep neural networks which is able to produce top-N recommendations leveraging user and item embeddings which are learnt from textual information describing the items. A comprehensive experimental evaluation conducted on state-of-the-art datasets showed a significant improvement over all the baselines taken into account. |
Privacy Games Along Location Traces: A Game-Theoretic Framework for Optimizing Location Privacy | The mainstream approach to protecting the privacy of mobile users in location-based services (LBSs) is to alter (e.g., perturb, hide, and so on) the users’ actual locations in order to reduce exposed sensitive information. In order to be effective, a location-privacy preserving mechanism must consider both the privacy and utility requirements of each user, as well as the user’s overall exposed locations (which contribute to the adversary’s background knowledge).
In this article, we propose a methodology that enables the design of optimal user-centric location obfuscation mechanisms respecting each individual user’s service quality requirements, while maximizing the expected error that the optimal adversary incurs in reconstructing the user’s actual trace. A key advantage of a user-centric mechanism is that it does not depend on third-party proxies or anonymizers; thus, it can be directly integrated in the mobile devices that users employ to access LBSs. Our methodology is based on the mutual optimization of user/adversary objectives (maximizing location privacy versus minimizing localization error) formalized as a Stackelberg Bayesian game. This formalization makes our solution robust against any location inference attack, that is, the adversary cannot decrease the user’s privacy by designing a better inference algorithm as long as the obfuscation mechanism is designed according to our privacy games.
We develop two linear programs that solve the location privacy game and output the optimal obfuscation strategy and its corresponding optimal inference attack. These linear programs are used to design location privacy-preserving mechanisms that consider the correlation between past, current, and future locations of the user, and thus can be tuned to protect different privacy objectives along the user's location trace. We illustrate the efficacy of the optimal location privacy-preserving mechanisms obtained with our approach against real location traces, showing their performance in protecting users' different location privacy objectives. |
Bi-directional AC-DC/DC-AC converter for power sharing of hybrid AC/DC systems | In this paper, some of the aspects related to the connectivity of DC microgrids to the main grid are investigated. A prototype system has been designed and implemented to address these aspects. The described system depends mainly on sustainable energy sources, so special care has been given to dealing with this kind of source while designing the different components of the system. Certain features had to be maintained in the system in order to assure efficient integration of different sources, such as efficient and reliable load-feeding capability and full controllability of voltage and power flow among the various buses in the system. Two different converters have been investigated. First, a fully controlled rectifier has been designed to tie the DC grid to the AC one. A vector-decoupling-controlled sinusoidal pulse width modulation (SPWM) technique has been used to allow the designed rectifier to maintain a constant output voltage while being able to control the active and reactive power drawn from the grid independently. Hence, this controlled rectifier acts as a voltage regulator for the DC microgrid and has a uni-directional power flow capability from the AC grid to the DC microgrid. Moreover, in order to allow bi-directional power flow, a bi-directional AC-DC/DC-AC converter has also been designed. This converter controls the active power transferred in either direction between the DC grid and the AC grid while operating at unity power factor. Both simulation and experimental results verify the validity of the proposed system. |
Dynamic Clustering of Streaming Short Documents | Clustering technology has found numerous applications in mining textual data. It was shown to enhance the performance of retrieval systems in various different ways, such as identifying different query aspects in search result diversification, improving smoothing in the context of language modeling, matching queries with documents in a latent topic space in ad-hoc retrieval, summarizing documents etc. The vast majority of clustering methods have been developed under the assumption of a static corpus of long (and hence textually rich) documents. Little attention has been given to streaming corpora of short text, which is the predominant type of data in Web 2.0 applications, such as social media, forums, and blogs. In this paper, we consider the problem of dynamically clustering a streaming corpus of short documents. The short length of documents makes the inference of the latent topic distribution challenging, while the temporal dynamics of streams allow topic distributions to change over time. To tackle these two challenges we propose a new dynamic clustering topic model - DCT - that enables tracking the time-varying distributions of topics over documents and words over topics. DCT models temporal dynamics by a short-term or long-term dependency model over sequential data, and overcomes the difficulty of handling short text by assigning a single topic to each short document and using the distributions inferred at a certain point in time as priors for the next inference, allowing the aggregation of information. At the same time, taking a Bayesian approach allows evidence obtained from new streaming documents to change the topic distribution. Our experimental results demonstrate that the proposed clustering algorithm outperforms state-of-the-art dynamic and non-dynamic clustering topic models in terms of perplexity and when integrated in a cluster-based query likelihood model it also outperforms state-of-the-art models in terms of retrieval quality. |
Analyzing Student Work Patterns Using Programming Exercise Data | Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems.
In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers. |
In a randomized, double-blind clinical trial, adjuvant atorvastatin improved symptoms of depression and blood lipid values in patients suffering from severe major depressive disorder. | BACKGROUND
The administration of statins seems to be a promising new avenue in the treatment of patients suffering from major depressive disorder (MDD), though patients suffering from severe MDD remain unstudied in this respect. The aim of the present study was therefore to investigate, in a randomized double-blind clinical trial, the influence of adjuvant atorvastatin on symptoms of depression in patients with MDD.
METHODS
A total of 60 patients suffering from MDD (mean age: 32.25 years; 53% males) received a standard medication of 40 mg/d citalopram. Next, patients were randomly assigned either to the atorvastatin group (20 mg/d) or to the placebo group. Blood lipid values were assessed at baseline and on completion of the study 12 weeks later. Experts rated depressive symptoms via Hamilton Depression Rating Scales (HDRS) at baseline and 3, 6 and 12 weeks later.
RESULTS
HDRS scores decreased over time; the significant Time by Group interaction showed that symptoms of depression decreased more in the atorvastatin than in the placebo group. Compared to the placebo group, in the atorvastatin group cholesterol, triglycerides, and low-density lipoprotein (LDL) significantly decreased, and high-density lipoprotein (HDL) significantly increased over time. HDRS scores and blood lipid values were generally not associated.
CONCLUSIONS
The pattern of results suggests that adjuvant atorvastatin favorably influences symptoms of depression among patients with severe MDD. Given that after 12 weeks of monotherapy and adjuvant atorvastatin patients were still moderately to severely depressed, more powerful treatment algorithms such as augmentation and change of medication are highly recommended. |
A bug mining tool to identify and analyze security bugs using Naive Bayes and TF-IDF | Bug reports play a vital role during software development; however, bug reports belong to different categories such as performance, usability, and security. This paper focuses on security bugs and presents a bug mining system for the identification of security and non-security bugs using term frequency-inverse document frequency (TF-IDF) weights and naïve Bayes. We performed experiments on bug report repositories of bug tracking systems such as Bugzilla and debugger. In the proposed approach we apply text mining methodology and TF-IDF to the existing historic bug report database, based on the bug's description, to predict the nature of the bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of incoming bugs depending on their category, i.e., whether a report is a security or a non-security bug report, using naïve Bayes. Our evaluation shows that the tool using TF-IDF gives better results than the naïve Bayes method alone. |
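The core classification step can be sketched with a standard TF-IDF plus multinomial naïve Bayes pipeline; the toy reports and labels below are invented for illustration and are not from the paper's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = ["buffer overflow in parser allows remote code execution",
           "button label misaligned on the settings page",
           "SQL injection via unsanitized search field",
           "page loads slowly when many tabs are open"]
labels = [1, 0, 1, 0]                   # 1 = security bug, 0 = non-security bug

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(reports, labels)
print(clf.predict(["possible XSS vulnerability in comment form"]))
```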
Error Analysis and Retention-Aware Error Management for NAND Flash Memory | With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. In our research, we experimentally measure, characterize, analyze, and model error patterns in nanoscale flash memories. Based on the understanding developed using real flash memory chips, we design techniques for more efficient and effective error management than traditionally used costly error correction codes. |
Optimal relay path selection and cooperative communication protocol for a swarm of UAVs | In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicles' positions. Assuming that the common channel condition between any two UAV nodes is line-of-sight (LOS), the time and energy consumption for data transmission on each path connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy. |
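A minimal sketch of the path-selection step (plain Bellman-Ford over a fully connected cluster with a caller-supplied per-hop cost estimate; the MAC-protocol integration is not modeled):

```python
import math

def best_relay_path(nodes, cost, src, head):
    """nodes: iterable of node ids; cost(u, v): estimated time/energy of a
    direct LOS hop. Returns the min-cost relay sequence src -> head."""
    dist = {v: math.inf for v in nodes}
    prev = {v: None for v in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):          # relax every edge |V|-1 times
        for u in nodes:
            for v in nodes:
                if u != v and dist[u] + cost(u, v) < dist[v]:
                    dist[v] = dist[u] + cost(u, v)
                    prev[v] = u
    path, v = [], head                       # walk predecessors back to src
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1], dist[head]
```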
Internet Addiction and Relationships with Insomnia, Anxiety, Depression, Stress and Self-Esteem in University Students: A Cross-Sectional Designed Study | BACKGROUND AND AIMS
Internet addiction (IA) could be a major concern in university medical students aiming to develop into health professionals. The implications of this addiction as well as its association with sleep, mood disorders and self-esteem can hinder their studies, impact their long-term career goals and have wide and detrimental consequences for society as a whole. The objectives of this study were to: 1) Assess potential IA in university medical students, as well as factors associated with it; 2) Assess the relationships between potential IA, insomnia, depression, anxiety, stress and self-esteem.
METHODS
Our study was a cross-sectional questionnaire-based survey conducted among 600 students of three faculties: medicine, dentistry and pharmacy at Saint-Joseph University. Four validated and reliable questionnaires were used: the Young Internet Addiction Test (YIAT), the Insomnia Severity Index (ISI), the Depression Anxiety Stress Scales (DASS 21), and the Rosenberg Self-Esteem Scale (RSES).
RESULTS
The average YIAT score was 30 ± 18.474; Potential IA prevalence rate was 16.8% (95% confidence interval: 13.81-19.79%) and it was significantly different between males and females (p-value = 0.003), with a higher prevalence in males (23.6% versus 13.9%). Significant correlations were found between potential IA and insomnia, stress, anxiety, depression and self-esteem (p-value < 0.001); ISI and DASS sub-scores were higher and self-esteem lower in students with potential IA.
CONCLUSIONS
Identifying students with potential IA is important because this addiction often coexists with other psychological problems. Therefore, interventions should include not only IA management but also associated psychosocial stressors such as insomnia, anxiety, depression, stress, and self-esteem. |
Grasses: systematics and evolution | Phylogeny, systematics and classification; morphology and anatomy; developmental and evolutionary processes; reproductive biology; biochemical diversity; macromolecular studies; biogeography. |
Effects of defects on dielectric breakdown phenomena and lifetime of polymeric insulation | The effects of defects on dielectric breakdown phenomena and the lifetime of PE insulation were investigated. Volatile impurities were observed by FT-IR spectroscopy, and the oxidation reaction was faster on a Cu open pan than on an Al one. From artificial impurities such as carbon fiber, nylon, and Cu and Al particles, electrical trees started and dielectric breakdown finally occurred. Space charge, formed by the injection of electrons from the electrode and trapped at impurities, acts as the main cause of dielectric breakdown in polymeric insulation. |
Extracting depth and matte using a color-filtered aperture | This paper presents a method for automatically extracting a scene depth map and the alpha matte of a foreground object by capturing a scene through RGB color filters placed in the camera lens aperture. By dividing the aperture into three regions through which only light in one of the RGB color bands can pass, we can acquire three shifted views of a scene in the RGB planes of an image in a single exposure. In other words, a captured image has depth-dependent color misalignment. We develop a color alignment measure to estimate disparities between the RGB planes for depth reconstruction. We also exploit color misalignment cues in our matting algorithm in order to disambiguate between the foreground and background regions even where their colors are similar. Based on the extracted depth and matte, the color misalignment in the captured image can be canceled, and various image editing operations can be applied to the reconstructed image, including novel view synthesis, postexposure refocusing, and composition over different backgrounds. |
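As a toy illustration of the disparity-search idea (not the paper's actual color alignment measure, which also works per-pixel): shift the R and B planes in opposite directions and keep the global shift that best re-aligns them with G:

```python
import numpy as np

def estimate_global_disparity(img, max_d=8):
    """img: HxWx3 float array whose R and B planes are shifted oppositely
    by an amount that grows with depth. Scores each candidate shift by
    correlation with the G plane (a crude stand-in for a proper
    color-alignment measure)."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    best_d, best_score = 0, -np.inf
    for d in range(-max_d, max_d + 1):
        Rs = np.roll(R, -d, axis=1)              # undo R's horizontal shift
        Bs = np.roll(B, d, axis=1)               # undo B's opposite shift
        score = (np.corrcoef(Rs.ravel(), G.ravel())[0, 1]
                 + np.corrcoef(Bs.ravel(), G.ravel())[0, 1])
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```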
Modelling bird communities/landscape patterns relationships in a rural area of South-Western France | The new trends in agricultural policy in Western Europe conduct to new management problems in maintaining and utilizing biological resources. In the South-Western France, the evolution of agricultural practices occurs in two opposite ways. On one hand, the intensification of agriculture leads to simplify the landscape by hedgerows removal, grasslands ploughing and drainage for corn cultivation. On the other hand, the decreasing numbers of cattle and sheep conduct the less fertile parts of the territory to evolve into fallow. These two processes are closely linked on a same territory and important interactions exist between intensive agricultural areas and semi-natural communities. To understand the importance of these interactions and their role in ecological stability of landscapes, we use passerine bird communities as an ecological indicator. We modelized the relationships between birds and landscape structure from 256 relevés. Each relevé includes a bird count point of 20 mn and a description of the landscape feature on the surrounding 6.25 ha. An ordination of the relevés along the main ecological gradients was realized using Correspondence Analysis. Then, these ordinations where related to the landscape structure with Stepwise and Multiple Regression Analysis. The rate of woody area, the hedgerow network complexity and the rate of fallow land are the main ecological gradients. We have used this model to measure the importance of the changes induced on landscape by a range of management practices differing in intensity. To achieve this aim we compare the displacement of 116 relevés along the ecological gradients between 1983 and 1988. The changes occurring both in bird composition and landscape structure reveal the ecological impacts of the different management practices (hedgerow removal, drainage, ploughing, decreasing grazing pressure). We examine the behaviour of ecological diversity of landscape units differing in structure and use. |
HRCT of interstitial lung disease (ILD): The basic ingredients of the alphabet soup | |
An Empirical Evaluation of True Online TD(λ) | The true online TD(λ) algorithm has recently been proposed (van Seijen and Sutton, 2014) as a universal replacement for the popular TD(λ) algorithm, in temporal-difference learning and reinforcement learning. True online TD(λ) has better theoretical properties than conventional TD(λ), and the expectation is that it also results in faster learning. In this paper, we put this hypothesis to the test. Specifically, we compare the performance of true online TD(λ) with that of TD(λ) on challenging examples, random Markov reward processes, and a real-world myoelectric prosthetic arm. We use linear function approximation with tabular, binary, and non-binary features. We assess the algorithms along three dimensions: computational cost, learning speed, and ease of use. Our results confirm the strength of true online TD(λ): 1) for sparse feature vectors, the computational overhead with respect to TD(λ) is minimal; for non-sparse features the computation time is at most twice that of TD(λ), 2) across all domains/representations the learning speed of true online TD(λ) is often better, but never worse, than that of TD(λ), and 3) true online TD(λ) is easier to use, because it does not require choosing between trace types, and it is generally more stable with respect to the step-size. Overall, our results suggest that true online TD(λ) should be the first choice when looking for an efficient, general-purpose TD method. |
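For reference, a minimal linear-function-approximation implementation of the true online TD(λ) update (the episode format below is an assumption for illustration; terminal states are encoded as all-zero feature vectors):

```python
import numpy as np

def true_online_td(episodes, n_features, alpha=0.01, gamma=0.99, lam=0.9):
    """True online TD(lambda) with linear function approximation.
    episodes: iterable of [(x, reward, x_next), ...] transitions,
    where x and x_next are feature vectors of length n_features."""
    w = np.zeros(n_features)
    for episode in episodes:
        e = np.zeros(n_features)                 # dutch eligibility trace
        v_old = 0.0
        for x, r, x_next in episode:
            v, v_next = w @ x, w @ x_next
            delta = r + gamma * v_next - v
            e = gamma * lam * e + x - alpha * gamma * lam * (e @ x) * x
            w += alpha * (delta + v - v_old) * e - alpha * (v - v_old) * x
            v_old = v_next
    return w
```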
Learning Deep ResNet Blocks Sequentially | We prove a multiclass boosting theory for ResNet architectures which simultaneously creates a new technique for multiclass boosting and provides a new algorithm for ResNet-style architectures. Our proposed training algorithm, BoostResNet, is particularly suitable for non-differentiable architectures. Our method only requires the relatively inexpensive sequential training of T "shallow ResNets". We prove that the training error decays exponentially with the depth T if the weak module classifiers that we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under the weak learning condition. A generalization error bound based on margin theory is proved and suggests that ResNet could be resistant to overfitting using a network with l1-norm-bounded weights. |
Specific and cross-over effects of foam rolling on ankle dorsiflexion range of motion. | BACKGROUND
Flexibility is an important physical quality. Self-myofascial release (SMFR) methods such as foam rolling (FR) increase flexibility acutely but how long such increases in range of motion (ROM) last is unclear. Static stretching (SS) also increases flexibility acutely and produces a cross-over effect to contralateral limbs. FR may also produce a cross-over effect to contralateral limbs but this has not yet been identified.
PURPOSE
To explore the potential cross-over effect of SMFR by investigating the effects of a FR treatment of 3 bouts of 30 seconds applied to the ipsilateral limb on changes in ipsilateral and contralateral ankle DF ROM, and to assess the time-course of those effects up to 20 minutes post-treatment.
METHODS
A within- and between-subject design was carried out in a convenience sample of 26 subjects, allocated into FR (n=13) and control (CON, n=13) groups. Ankle DF ROM was recorded at baseline with the in-line weight-bearing lunge test for both ipsilateral and contralateral legs and at 0, 5, 10, 15, 20 minutes following either a two-minute seated rest (CON) or 3 × 30 seconds of FR of the plantar flexors of the dominant leg (FR). Repeated measures ANOVA was used to examine differences in ankle DF ROM.
RESULTS
No significant between-group effect was seen following the intervention. However, a significant within-group effect (p<0.05) in the FR group was seen between baseline and all post-treatment time-points (0, 5, 10, 15 and 20 minutes). Significant within-group effects (p<0.05) were also seen in the ipsilateral leg between baseline and at all post-treatment time-points, and in the contralateral leg up to 10 minutes post-treatment, indicating the presence of a cross-over effect.
CONCLUSIONS
FR improves ankle DF ROM for at least 20 minutes in the ipsilateral limb and up to 10 minutes in the contralateral limb, indicating that FR produces a cross-over effect into the contralateral limb. The mechanism producing these cross-over effects is unclear but may involve increased stretch tolerance, as observed following SS.
LEVELS OF EVIDENCE
2c. |
Information coding in the olfactory system: Evidence for a stereotyped and highly organized epitope map in the olfactory bulb | In the mammalian olfactory system, information from approximately 1000 different odorant receptor types is organized in the nose into four spatial zones. Each zone is a mosaic of randomly distributed neurons expressing different receptor types. In these studies, we have obtained evidence that information highly distributed in the nose is transformed in the olfactory bulb of the brain into a highly organized spatial map. We find that specific odorant receptor gene probes hybridize in situ to small, and distinct, subsets of olfactory bulb glomeruli. The spatial and numerical characteristics of the patterns of hybridization that we observe with different receptor probes indicate that, in the olfactory bulb, olfactory information undergoes a remarkable organization into a fine, and perhaps stereotyped, spatial map. In our view, this map is in essence an epitope map, whose approximately 1000 distinct components are used in a multitude of different combinations to discriminate a vast array of different odors. |
Seroprevalence Study of Herpes Simplex Virus Type 2 among Pregnant Women in Germany Using a Type-Specific Enzyme Immunoassay | In a German seroepidemiological study to determine the proportion of pregnant women infected with herpes simplex virus type 2 (HSV-2) and at risk of transmitting the infection to the newborn during delivery, IgG antibodies to HSV-2 in 1999 sera collected from pregnant women in 1996–1997 were measured using an automated type-specific enzyme immunoassay (Cobas Core HSV-2 IgG EIA; Roche Diagnostics, Switzerland). The seroprevalence of HSV-2 was 8.9%, and control studies with a type-common HSV assay measuring antibodies to HSV-1 and HSV-2 revealed that 20.7% of pregnant women were seronegative for HSV antibodies and are therefore at risk of acquiring primary genital HSV infection of either type. |
An Empirical Study of Operating System Errors | We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previous studies that consider errors found by manual inspection of logs, testing, and surveys because static analysis is applied uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed. We found that device drivers have error rates up to three to seven times higher than the rest of the kernel. We found that the largest quartile of functions have error rates two to six times higher than the smallest quartile. We found that the newest quartile of files have error rates up to twice that of the oldest quartile, which provides evidence that code "hardens" over time. Finally, we found that bugs remain in the Linux kernel an average of 1.8 years before being fixed. |
Jitter Analysis and a Benchmarking Figure-of-Merit for Phase-Locked Loops | This brief analyzes the jitter as well as the power dissipation of phase-locked loops (PLLs). It aims at defining a benchmark figure-of-merit (FOM) that is compatible with the well-known FOM for oscillators but now extended to an entire PLL. The phase noise that is generated by the thermal noise in the oscillator and loop components is calculated. The power dissipation is estimated, focusing on the required dynamic power. The absolute PLL output jitter is calculated, and the optimum PLL bandwidth that gives minimum jitter is derived. It is shown that, with a steep enough input reference clock, this minimum jitter is independent of the reference frequency and output frequency for a given PLL power budget. Based on these insights, a benchmark FOM for PLL designs is proposed. |
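The benchmark FOM described in the abstract above trades RMS jitter against power in the same way the familiar oscillator FOM does. A minimal sketch of that computation, assuming the common jitter-power FOM form 10·log10((σ_t/1 s)² · (P/1 mW)); the brief's exact conventions are not reproduced here, and the helper name is invented:

```python
import math

def pll_fom(jitter_rms_s, power_w):
    """Hypothetical jitter-power FOM in dB: 10*log10((sigma_t/1s)^2 * (P/1mW)).

    Lower (more negative) values indicate a better jitter/power trade-off.
    """
    return 10 * math.log10((jitter_rms_s / 1.0) ** 2 * (power_w / 1e-3))

# e.g. 300 fs RMS absolute jitter at 10 mW dissipation
print(pll_fom(300e-15, 10e-3))  # about -240.5 dB
```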
Runtime Verification in Erlang by Using Contracts | During its lifetime, a program undergoes several changes that seek to improve or augment parts of its functionality. However, these modifications usually also introduce errors that affect the already-working code. There are several approaches and tools that help to spot and find the source of these errors. However, most of these errors could be avoided beforehand by using some of the knowledge that the programmers had when they were writing the code. This is the idea behind the design-by-contract approach, where users can define contracts that can be checked during runtime. In this paper, we apply the principles of this approach to Erlang, enabling, in this way, a runtime verification system in this language. We define two types of contracts. One of them can be used in any Erlang program, while the second type is intended to be used only in concurrent programs. We provide the details of the implementation of both types of contracts. Moreover, we provide an extensive explanation of each contract as well as examples of their usage. All the ideas presented in this paper have been implemented in a contract-based runtime verification system named EDBC. Its source code is available on GitHub as a free and open-source project. |
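EDBC expresses its contracts in Erlang; as a language-neutral illustration of the design-by-contract idea the abstract describes (pre- and postconditions checked at runtime), here is a hypothetical Python sketch. The decorator, function names and conditions are invented for exposition and are not EDBC's API:

```python
from functools import wraps

def contract(pre=None, post=None):
    """Hypothetical decorator sketching runtime-checked contracts."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition violated in {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition violated in {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x):
    return int(x ** 0.5)

print(isqrt(16))   # 4
# isqrt(-1) would raise AssertionError at runtime
```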
Advances in Prospect Theory: Cumulative Representation of Uncertainty | We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well as to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. 
To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of .99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. Additional phenomena that lie beyond the scope of the theory, and of its alternatives, are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix. |
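The value and weighting functions above have the parametric forms estimated in this paper: v(x) = x^α for gains and −λ(−x)^β for losses, with the inverse-S weighting w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ). A sketch of the cumulative valuation using the reported estimates (α = β = 0.88, λ = 2.25, γ = 0.61 for gains, 0.69 for losses); the example prospect is hypothetical:

```python
# Parameter estimates reported by Tversky & Kahneman (1992)
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69

def value(x):
    # S-shaped value function: concave for gains, convex and steeper for losses
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**BETA

def weight(p, gamma):
    # Inverse-S probability weighting: overweights small probabilities
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(outcomes, probs):
    """Cumulative prospect theory value of a risky prospect."""
    pairs = sorted(zip(outcomes, probs))
    gains = [(x, p) for x, p in pairs if x >= 0]
    losses = [(x, p) for x, p in pairs if x < 0]
    total, tail = 0.0, 0.0
    for x, p in reversed(gains):          # best outcome first
        # decision weight = difference of weighted decumulative probabilities
        total += (weight(tail + p, GAMMA_GAIN) - weight(tail, GAMMA_GAIN)) * value(x)
        tail += p
    tail = 0.0
    for x, p in losses:                   # worst outcome first, symmetric form
        total += (weight(tail + p, GAMMA_LOSS) - weight(tail, GAMMA_LOSS)) * value(x)
        tail += p
    return total

# Fourfold pattern: a small chance of a large gain is overweighted...
print(cpt_value([100, 0], [0.05, 0.95]))  # ~7.6
# ...and valued above the sure expected value of 5
print(cpt_value([5], [1.0]))              # ~4.1
```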
Expanding the Concept of Literacy | When I ask people to define, in one or two sentences, the word literacy—what literacy is and what it enables people to do—the answers I receive are quite similar. To most people, literacy means the ability to read and write, to understand information, and to express ideas both concretely and abstractly. The unstated assumption is that “to read and write” means to read and write text. Although media and computer literacy are occasionally mentioned in these definitions, media literacy is most often defined as the ability to understand how television and film manipulate viewers, and computer literacy is generally defined as the skills to use a computer to perform various tasks, such as accessing the Web. If I also ask people about the nature of language, I usually receive the response that language enables us to conceptualize ideas, to abstract information, and to receive and share knowledge. The underlying assumption, so accepted that it is never stated, is that language means words. Twenty-five years ago, a rather popular book was entitled Four Arguments for the Elimination of Television. Clearly, that vision of the world was not realized: television has not been eliminated, |
Integrating Bridge Structural Health Monitoring and Condition-Based Maintenance Management | The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on tracing the structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9.283 km in the Hong Kong Link Road (HKLR) of the Hong Kong–Zhuhai–Macao Bridge, whose construction commenced in mid-2012. The SHM&MMS places greater emphasis on durability monitoring of the reinforced concrete viaduct bridges in the marine environment and on integration of the SHM system with the bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when maintenance is actually necessary and how maintenance can be conducted cost-effectively. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges. |
Structural Attention Neural Networks for improved sentiment analysis | We introduce a tree-structured attention neural network for sentences and small phrases and apply it to the problem of sentiment classification. Our model expands current recursive models by incorporating structural information around a node of a syntactic tree using both bottom-up and top-down information propagation. Also, the model utilizes structural attention to identify the most salient representations during the construction of the syntactic tree. To our knowledge, the proposed models achieve state-of-the-art performance on the Stanford Sentiment Treebank dataset. |
Trunk muscle activity during lumbar stabilization exercises on both a stable and unstable surface. | STUDY DESIGN
Controlled laboratory study.
OBJECTIVES
To clarify whether differences in surface stability influence trunk muscle activity.
BACKGROUND
Lumbar stabilization exercises on unstable surfaces are performed widely. One perceived advantage in performing stabilization exercises on unstable surfaces is the potential for increased muscular demand. However, there is little evidence in the literature to help establish whether this assumption is correct.
METHODS
Nine healthy male subjects performed lumbar stabilization exercises. Pairs of intramuscular fine-wire or surface electrodes were used to record the electromyographic (EMG) signal amplitude of the rectus abdominis, the external obliques, the transversus abdominis, the erector spinae, and the lumbar multifidus. Five exercises were performed on the floor and on an unstable surface: elbow-toe, hand-knee, curl-up, side bridge, and back bridge. The EMG data were normalized as a percentage of the maximum voluntary contraction, and data for each exercise performed on the stable versus the unstable surface were compared using a Wilcoxon signed-rank test.
RESULTS
With the elbow-toe exercise, the activity level for all muscles was enhanced when performed on the unstable surface. When performing the hand-knee and side bridge exercises, the activity level of the more global muscles was enhanced when performed on an unstable surface. Performing the curl-up exercise on an unstable surface increased the activity of the external obliques but reduced transversus abdominis activation.
CONCLUSION
This study indicates that lumbar stabilization exercises on an unstable surface enhanced the activities of trunk muscles, except for the back bridge exercise. |
Effects of heat on meat proteins - Implications on structure and quality of meat products. | Globular and fibrous proteins are compared with regard to structural behaviour on heating, where the former expand and the latter contract. The meat protein composition and structure is briefly described. The behaviour of the different meat proteins on heating is discussed. Most of the sarcoplasmic proteins aggregate between 40 and 60°C, but for some of them coagulation can extend up to 90°C. For myofibrillar proteins in solution, unfolding starts at 30-32°C, followed by protein-protein association at 36-40°C and subsequent gelation at 45-50°C (conc. >0.5% by weight). At temperatures between 53 and 63°C collagen denaturation occurs, followed by collagen fibre shrinkage. If the collagen fibres are not stabilised by heat-resistant intermolecular bonds, they dissolve and form gelatine on further heating. The structural changes on cooking in whole meat and comminuted meat products, and the alterations in water-holding and texture of the meat product that they lead to, are then discussed. |
Watch me playing, I am a professional: a first study on video game live streaming | "Electronic-sport" (E-Sport) is now established as a new entertainment genre. More and more players enjoy streaming their games, which attracts even more viewers. In fact, in a recent social study, casual players were found to prefer watching professional gamers rather than playing the game themselves. Within this context, advertising provides a significant source of revenue to the professional players, the casters (displaying other people's games) and the game streaming platforms. For this paper, we crawled, for more than 100 days, the most popular of such specialized platforms: Twitch.tv. Thanks to these gigabytes of data, we propose a first characterization of a new Web community, and we show, among other results, that the number of viewers of a streaming session evolves in a predictable way, that audience peaks of a game are explainable, and that a Condorcet method can be used to sensibly rank the streamers by popularity. Last but not least, we hope that this paper will bring to light the study of E-Sport and its growing community. They indeed deserve the attention of industrial partners (for the large amount of money involved) and researchers (for interesting problems in social network dynamics, personalized recommendation, sentiment analysis, etc.). |
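The abstract above mentions ranking streamers with a Condorcet method without naming which one; the sketch below uses Copeland scoring (counting pairwise majorities) as one representative Condorcet-consistent rule. The ballots and streamer names are invented for illustration:

```python
from itertools import combinations

def condorcet_rank(ballots):
    """Rank items by pairwise-majority (Copeland) score.

    ballots: preference orders (best first); every ballot must rank all items.
    """
    items = set().union(*ballots)
    wins = {a: 0 for a in items}
    for a, b in combinations(items, 2):
        a_over_b = sum(1 for order in ballots
                       if order.index(a) < order.index(b))
        if a_over_b * 2 > len(ballots):    # a beats b in a majority of ballots
            wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            wins[b] += 1
    return sorted(items, key=wins.get, reverse=True)

# Daily viewer-count orderings could serve as "ballots" over streamers
ballots = [["s1", "s2", "s3"], ["s1", "s3", "s2"], ["s2", "s1", "s3"]]
print(condorcet_rank(ballots))  # ['s1', 's2', 's3']
```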
Malware Visualization for Fine-Grained Classification | Due to the rapid rise of automated tools, the number of malware variants has increased dramatically, which poses a tremendous threat to the security of the Internet. Recently, some methods for quick analysis of malware have been proposed, but these methods usually require a large computational overhead and cannot classify samples accurately for large-scale and complex malware data sets. Therefore, in this paper, we propose a new visualization method for characterizing malware globally and locally to achieve fast and effective fine-grained classification. We take a new approach to visualize malware as RGB-colored images and extract global features from the images. The gray-level co-occurrence matrix and color moments are selected to describe the global texture features and color features, respectively, which produces low-dimensional feature data that reduce the complexity of the training model. Moreover, a series of special byte sequences are extracted from the code sections and data sections of malware and are processed into feature vectors by Simhash as the local features. Finally, we merge the global features and local features to perform malware classification using random forest, K-nearest neighbor, and support vector machine classifiers. Experimental results show that our approach obtains the highest accuracy of 97.47% and the highest F-measure of 96.85% on 7,087 samples from 15 families. Color features and local features effectively assist the classification based on texture features and enhance the F-measure by 3.4% and 1%, respectively. Overall, the combination of global features and local features can realize fine-grained malware classification with low computational cost. |
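A minimal sketch of the global-feature stage described above (GLCM texture statistics plus per-channel color moments feeding a random forest), assuming scikit-image ≥ 0.19 for graycomatrix/graycoprops; the distances, angles and moment definitions used by the paper are not reproduced, and the function name is invented:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.ensemble import RandomForestClassifier

def global_features(rgb):
    """Texture (GLCM) + color-moment features from a malware RGB image."""
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")]
    # First three color moments per channel: mean, std, (cube-root) skewness
    chan = rgb.reshape(-1, 3).astype(float)
    mu, sigma = chan.mean(0), chan.std(0)
    skew = np.cbrt(((chan - mu) ** 3).mean(0))
    return np.concatenate([texture, mu, sigma, skew])

# X = np.stack([global_features(img) for img in images]); y = family labels
# RandomForestClassifier(n_estimators=200).fit(X, y)
```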
A high performance 180 nm generation logic technology | A 180 nm generation logic technology has been developed with high performance 140 nm L_GATE transistors, six layers of aluminum interconnects and low-ε SiOF dielectrics. The transistors are optimized for a reduced 1.3-1.5 V operation to provide high performance and low power. The interconnects feature high aspect ratio metal lines for low resistance and fluorine-doped SiO₂ inter-level dielectrics for reduced capacitance. 16 Mbit SRAMs with a 5.59 μm² 6-T cell size have been built on this technology as a yield and reliability test vehicle. |
Design and Implementation of Anti Spyware System using Design Patterns | Spyware is considered a serious threat to confidentiality, as it can cause loss of control over private data for computer users. This kind of threat might select some data and send it to a third party without the consent of the user. Spyware detection techniques have traditionally followed three approaches: signature-, behavior- and specification-based detection. These approaches have been successful in detecting known spyware, but they suffer from drawbacks such as the need to update the data describing system behavior in order to detect new or unknown spyware, and high false-positive or false-negative rates. Therefore, in this paper we introduce the design and implementation of a proposed anti-spyware system that uses design patterns for detecting and classifying spyware. The proposed approach is reusable and can adapt itself to new or unknown spyware. |
Social media interventions for diet and exercise behaviours: a systematic review and meta-analysis of randomised controlled trials | OBJECTIVES
To conduct a systematic review of randomised controlled trials (RCTs) examining the use of social media to promote healthy diet and exercise in the general population.
DATA SOURCES
MEDLINE, CENTRAL, ERIC, PubMed, CINAHL, Academic Search Complete, Alt Health Watch, Health Source, Communication and Mass Media Complete, Web of Knowledge and ProQuest Dissertation and Thesis (2000-2013).
STUDY ELIGIBILITY CRITERIA
RCTs of social media interventions promoting healthy diet and exercise behaviours in the general population were eligible. Interventions using social media, alone or as part of a complex intervention, were included.
STUDY APPRAISAL AND SYNTHESIS
Study quality was assessed using the Cochrane Risk of Bias Tool. We describe the studies according to the target populations, objectives and nature of interventions, outcomes examined, and results and conclusions. We extracted data on the primary and secondary outcomes examined in each study. Where the same outcome was assessed in at least three studies, we combined data in a meta-analysis.
RESULTS
22 studies were included. Participants were typically middle-aged Caucasian women of mid-to-high socioeconomic status. There were a variety of interventions, comparison groups and outcomes. All studies showed a decrease in programme usage throughout the intervention period. Overall, no significant differences were found for primary outcomes which varied across studies. Meta-analysis showed no significant differences in changes in physical activity (standardised mean difference (SMD) 0.13 (95% CI -0.04 to 0.30), 12 studies) and weight (SMD -0.00 (95% CI -0.19 to 0.19), 10 studies); however, pooled results from five studies showed a significant decrease in dietary fat consumption with social media (SMD -0.35 (95% CI -0.68 to -0.02)).
CONCLUSIONS
Social media may provide certain advantages for public health interventions; however, studies of social media interventions to date relating to healthy lifestyles tend to show low levels of participation and do not show significant differences between groups in key outcomes. |
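The meta-analysis above pools standardised mean differences with 95% CIs. As a sketch of how such pooling works, here is fixed-effect inverse-variance pooling with hypothetical per-study effects; the review's actual model choice (fixed vs. random effects) and data are not reproduced:

```python
import numpy as np

def pooled_smd(smds, ses):
    """Fixed-effect inverse-variance pooling of standardised mean differences."""
    smds, ses = np.asarray(smds, float), np.asarray(ses, float)
    w = 1.0 / ses**2                       # weight = inverse variance
    pooled = (w * smds).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study effects, e.g. for dietary fat consumption
print(pooled_smd([-0.5, -0.2, -0.4, -0.1, -0.3],
                 [0.2, 0.25, 0.3, 0.2, 0.25]))
```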
Protecting C++ Dynamic Dispatch Through VTable Interleaving | With new defenses against traditional control-flow attacks like stack buffer overflows, attackers are increasingly using more advanced mechanisms to take control of execution. One common such attack is vtable hijacking, in which the attacker exploits bugs in C++ programs to overwrite pointers to the virtual method tables (vtables) of objects. We present a novel defense against this attack. The key insight of our approach is a new way of laying out vtables in memory through careful ordering and interleaving. Although this layout is very different from a traditional layout, it is backwards compatible with the traditional way of performing dynamic dispatch. Most importantly, with this new layout, checking the validity of a vtable at runtime becomes an efficient range check, rather than a set membership test. Compared to prior approaches that provide similar guarantees, our approach does not use any profiling information, has lower performance overhead (about 1%) and has lower code bloat overhead (about 1.7%). |
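A toy illustration of the key insight above: an interleaved layout makes all vtables valid at a given call site contiguous and equally spaced, so validity becomes an aligned range check instead of a set-membership test. The addresses, class names and layout here are invented for exposition; the real defense operates on compiler-generated vtables in C++:

```python
class Hierarchy:
    """Toy model of an interleaved vtable region for one class hierarchy."""

    def __init__(self, vtables_in_preorder):
        # Interleaving places every vtable usable for a base class into one
        # contiguous, uniformly strided region of memory.
        self.base, self.stride = 0x1000, 8
        self.addr = {name: self.base + i * self.stride
                     for i, name in enumerate(vtables_in_preorder)}

    def is_valid_for(self, vptr, first, last):
        lo, hi = self.addr[first], self.addr[last]
        aligned = (vptr - self.base) % self.stride == 0
        return aligned and lo <= vptr <= hi   # O(1) range check, no set lookup

h = Hierarchy(["Shape", "Circle", "Square"])
print(h.is_valid_for(h.addr["Circle"], "Shape", "Square"))  # True
print(h.is_valid_for(0x0DEAD, "Shape", "Square"))           # False (hijacked)
```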
A high voltage battery charger with smooth charge mode transition in BCD process | A 60 V battery charger implemented using the TSMC 0.25 µm Bipolar-CMOS-DMOS (BCD) 60 V process is presented in this work. A novel transition method is proposed to ensure smooth transitions between constant current (CC) and constant voltage (CV) modes. Since the proposed approach requires only additional diodes, power consumption and chip area are conserved in comparison with the more sophisticated methods proposed in prior works. The charger sources a current of 50 mA in CC mode and has an efficiency of 75% ∼ 80% throughout the charging sequence. The supply voltage is kept 3 V higher than the battery voltage throughout the charging sequence to maintain charging efficiency. A thermal protection circuit is included in this design to prevent the charger from operating above its maximum allowed temperature. |
Performance Analysis of Engineering Students for Recruitment Using Classification Data Mining Techniques | Data mining is a powerful tool for academic intervention. Mining in an education environment is called educational data mining, which is concerned with developing new methods to discover knowledge from educational databases and can be used for decision making in educational systems. In our work, we collected students' data from an engineering institute containing information about their previous and current academic records, such as serial number, name, branch, 10th and 12th grade passing percentages, B.Tech passing percentage and final grade, and then applied different classification algorithms using data mining tools (WEKA) to analyse the students' academic performance for the training and placement department or company executives. This paper presents a comparative study of various classification data mining algorithms for the performance analysis of students' academic records and determines which algorithm is optimal for classifying students based on their final grade. This analysis also classifies the performance of students into Excellent, Good and Average categories. Keywords - Data Mining, Knowledge Discovery, Technical Education, Educational Data Mining, Classification, WEKA, Classifiers. |
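A sketch of the comparative-classification idea using scikit-learn rather than WEKA (DecisionTreeClassifier standing in for WEKA's J48); the records and grade labels below are fabricated for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier   # comparable to WEKA's J48
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical records: [10th %, 12th %, B.Tech %] -> final grade class
X = np.array([[85, 80, 78], [60, 55, 58], [72, 70, 74],
              [90, 88, 85], [55, 50, 52], [78, 75, 80]])
y = np.array(["Excellent", "Average", "Good",
              "Excellent", "Average", "Good"])

# Compare cross-validated accuracy across classifiers
for clf in (DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier(3)):
    acc = cross_val_score(clf, X, y, cv=2).mean()
    print(type(clf).__name__, round(acc, 2))
```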
HUmanoid Robotic Leg via pneumatic muscle actuators : implementation and control | In this article, a HUmanoid Robotic Leg (HURL) via the utilization of pneumatic muscle actuators (PMAs) is presented. PMAs are a pneumatic form of actuation possessing crucial attributes for the implementation of a design that mimics the motion characteristics of a human ankle. HURL acts as a feasibility study in the conceptual goal of developing a 10 degree-of-freedom (DoF) lower-limb humanoid for compliance and postural control, while serving as a knowledge basis for its future alternative use in prosthetic robotics. HURL’s design properties are described in detail, while its 2-DoF motion capabilities (dorsiflexion–plantar flexion, eversion–inversion) are experimentally evaluated via an advanced nonlinear PID-based control algorithm. |
Student Portfolios and the College Admissions Problem | We develop a decentralized Bayesian model of college admissions with two ranked colleges, heterogeneous students and two realistic match frictions: students find it costly to apply to college, and college evaluations of their applications are uncertain. Students thus face a portfolio choice problem in their application decision, while colleges choose admissions standards that act like market-clearing prices. Enrollment at each college is affected by the standards at the other college through student portfolio reallocation. In equilibrium, student-college sorting may fail: weaker students sometimes apply more aggressively, and the weaker college might impose higher standards. Applying our framework, we analyze affirmative action, showing how it induces minority applicants to construct their application portfolios as if they were majority students of higher caliber. |
The Impact of an Enhanced Interpreter Service Intervention on Hospital Costs and Patient Satisfaction | Many health care providers do not provide adequate language access services for their patients who are limited English-speaking because they view the costs of these services as prohibitive. However, little is known about the costs they might bear because of unaddressed language barriers or the costs of providing language access services. To investigate how language barriers and the provision of enhanced interpreter services impact the costs of a hospital stay. Prospective intervention study. Public hospital inpatient medicine service. Three hundred twenty-three adult inpatients: 124 Spanish-speakers whose physicians had access to the enhanced interpreter intervention, 99 Spanish-speakers whose physicians only had access to usual interpreter services, and 100 English-speakers matched to Spanish-speaking participants on age, gender, and admission firm. Patient satisfaction, hospital length of stay, number of inpatient consultations and radiology tests conducted in the hospital, adherence with follow-up appointments, use of emergency department (ED) services and hospitalizations in the 3 months after discharge, and the costs associated with provision of the intervention and any resulting change in health care utilization. The enhanced interpreter service intervention did not significantly impact any of the measured outcomes or their associated costs. The cost of the enhanced interpreter service was $234 per Spanish-speaking intervention patient and represented 1.5% of the average hospital cost. Having a Spanish-speaking attending physician significantly increased Spanish-speaking patient satisfaction with physician, overall hospital experience, and reduced ED visits, thereby reducing costs by $92 per Spanish-speaking patient over the study period. The enhanced interpreter service intervention did not significantly increase or decrease hospital costs. Physician–patient language concordance reduced return ED visit and costs. Health care providers need to examine all the cost implications of different language access services before they deem them too costly. |
International Congress on Engineering and Food (ICEF 11): An overview of encapsulation technologies for food applications | Encapsulation is a process to entrap active agents within a carrier material and is a useful tool to improve the delivery of bioactive molecules and living cells into foods. Materials used to design the protective shell of encapsulates must be food-grade, biodegradable and able to form a barrier between the internal phase and its surroundings. Among all materials, the most widely used for encapsulation in food applications are polysaccharides. Proteins and lipids are also appropriate for encapsulation. Spray drying is the most extensively applied encapsulation technique in the food industry because it is flexible, continuous and, more importantly, an economical operation. Most encapsulates are spray-dried; the rest are prepared by spray-chilling, freeze-drying, melt extrusion and melt injection. Molecular inclusion in cyclodextrins and liposomal vesicles are more expensive technologies and are therefore less exploited. There are a number of reasons to employ an encapsulation technology, and this paper reviews some of them. For example, this technology may provide barriers between sensitive bioactive materials and the environment, and thus allow taste and aroma differentiation, mask bad taste or smell, stabilize food ingredients or increase their bioavailability. One of the most important reasons for encapsulation of active ingredients is to provide improved stability in final products and during processing. Another benefit of encapsulation is less evaporation and degradation of volatile actives, such as aroma. Furthermore, encapsulation is used to mask unpleasant sensations during eating, such as the bitter taste and astringency of polyphenols. Another goal of encapsulation is to prevent reaction with other components in food products, such as oxygen or water. In addition, encapsulation may be used to immobilize cells or enzymes in food processing applications, such as fermentation and metabolite production processes. There is an increasing demand for suitable solutions that provide high productivity and, at the same time, an adequate quality of the final food products. This paper aims to provide a short overview of commonly used processes to encapsulate food actives. |
Seminal plasma biochemistry and spermatozoa characteristics of Atlantic cod (Gadus morhua L.) of wild and cultivated origin. | Our objectives were to compare spermatozoa activity, morphology, and seminal plasma (SP) biochemistry between wild and cultivated Atlantic cod (Gadus morhua). Swimming velocities of wild cod spermatozoa were significantly faster than those of cultivated males. Wild males had a significantly larger spermatozoa head area, perimeter, and length, while cultivated males had more circular heads. Total monounsaturated fatty acids and the ratio of n-3/n-6 were significantly higher in sperm from wild males, while total n-6 from cultivated males was significantly higher than the wild males. Significantly higher concentrations of the fatty acids C14:0, C16:1n-7, C18:4n-3, C20:1n-11, C20:1n-9, C20:4n-3, C22:1n-11, and C22:6n-3 were observed in wild males, while significantly higher concentrations of C18:2n-6, C20:2n-6, and C22:5n-3 occurred in cultivated males. Osmolality, protein concentration, lactate dehydrogenase and superoxide dismutase activity of SP of wild males were significantly higher than the cultivated males. Antioxidant capacity of SP was significantly higher in cultivated males, while pH and anti-trypsin did not differ between fish origins. Four bands of anti-trypsin activity and nine protein bands were detected in SP. Performing a discriminant function analysis, on morphology and fatty acid data showed significant discrimination between wild and cultivated fish. Results are relevant to breeding programs and aquaculture development. |
Contour Model-Based Hand-Gesture Recognition Using the Kinect Sensor | In RGB-D sensor-based pose estimation, training data collection is often a challenging task. In this paper, we propose a new hand motion capture procedure for establishing the real gesture data set. A 14-patch hand partition scheme is designed for color-based semiautomatic labeling. This method is integrated into a vision-based hand gesture recognition framework for developing desktop applications. We use the Kinect sensor to achieve more reliable and accurate tracking under unconstrained conditions. Moreover, a hand contour model is proposed to simplify the gesture matching process, which can reduce the computational complexity of gesture matching. This framework allows tracking hand gestures in 3-D space and matching gestures with simple contour model, and thus supports complex real-time interactions. The experimental evaluations and a real-world demo of hand gesture interaction demonstrate the effectiveness of this framework. |
The new Degree in Materials Engineering at the Technical University of Madrid (UPM) | The Technical University of Madrid (UPM) is pioneering the introduction in Spain of a new Degree in Materials Engineering, with a four-year duration, accessed directly from the baccalaureate-level studies. The materials engineers from the UPM will be prepared to meet the challenges not only in the field of structural materials, but also in functional materials and biomaterials. With the objective of enhancing student exchange programmes, the third year of studies will be taught entirely in English. |
Effects of long-term biventricular stimulation for resynchronization on echocardiographic measures of remodeling. | BACKGROUND
Long-term ventricular resynchronization therapy improves symptom status. Changes in left ventricular remodeling have not been adequately evaluated.
METHODS AND RESULTS
Fifty-three patients with systolic heart failure and bundle-branch block underwent implantation of biventricular stimulation (BVS) devices as part of a randomized trial. Echocardiograms were acquired at randomization and at 6-week intervals until completion of 12 weeks of continuous BVS. There were no changes in heart rate or QRS duration after 12 weeks of BVS. Serum norepinephrine values did not change with BVS. After 12 weeks of BVS, left atrial volume decreased. Left ventricular end-systolic and end-diastolic dimensions and left ventricular end-systolic volume also decreased after 12 weeks of BVS. Sphericity index did not change. Measures of systolic function, including left ventricular outflow tract and aortic velocity time integral and myocardial performance index, improved.
CONCLUSIONS
Long-term resynchronization therapy results in atrial and ventricular reverse remodeling and improved hemodynamics. |
Efficiency of parallel direct optimization. | Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. |
In vitro and in vivo Efficacy of New Blue Light Emitting Diode Phototherapy Compared to Conventional Halogen Quartz Phototherapy for Neonatal Jaundice | High intensity light emitting diodes (LEDs) are being studied as possible light sources for the phototherapy of neonatal jaundice, as they can emit high intensity light in a narrow wavelength band in the blue region of the visible light spectrum corresponding to the spectrum of maximal bilirubin absorption. We developed a prototype blue gallium nitride LED phototherapy unit with high intensity, and compared its efficacy to a commercially used halogen quartz phototherapy device by measuring both in vitro and in vivo bilirubin photodegradation. The prototype device, with two focused arrays, each with 500 blue LEDs, generated greater irradiance than the conventional device tested. The LED device showed a significantly higher efficacy of bilirubin photodegradation than conventional phototherapy in both an in vitro experiment using microhematocrit tubes (44±7% vs. 35±2%) and an in vivo experiment using Gunn rats (30±9% vs. 16±8%). We conclude that the high intensity blue LED device was much more effective than conventional phototherapy for both in vitro and in vivo bilirubin photodegradation. Further studies will be necessary to prove its clinical efficacy. |
Stochastic choice of basis functions in adaptive function approximation and the functional-link net | A theoretical justification for the random vector version of the functional-link (RVFL) net is presented in this paper, based on a general approach to adaptive function approximation. The approach consists of formulating a limit-integral representation of the function to be approximated and subsequently evaluating that integral with the Monte-Carlo method. Two main results are: (1) the RVFL is a universal approximator for continuous functions on bounded finite dimensional sets, and (2) the RVFL is an efficient universal approximator with the rate of approximation error convergence to zero of order O(C/√n), where n is the number of basis functions and C is independent of n. Similar results are also obtained for neural nets with hidden nodes implemented as products of univariate functions or radial basis functions. Some possible ways of enhancing the accuracy of multivariate function approximations are discussed. |
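The RVFL construction analyzed above fixes the hidden-layer parameters at random and learns only the linear output weights, so fitting reduces to least squares over n random basis functions plus direct input links. A minimal numpy sketch, with ridge regularisation added for numerical stability (an implementation choice, not part of the paper's analysis):

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=200, reg=1e-6, seed=0):
    """RVFL net: random, fixed hidden basis; linear readout by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))     # random, never trained
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)                          # random basis functions
    D = np.hstack([X, H])                           # direct links + hidden
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Approximating a smooth 1-D function
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X).ravel() + 0.1 * X.ravel() ** 2
W, b, beta = rvfl_fit(X, y)
print(np.abs(rvfl_predict(X, W, b, beta) - y).max())  # small residual
```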
A Region Ensemble for 3-D Face Recognition | In this paper, we introduce a new system for 3D face recognition based on the fusion of results from a committee of regions that have been independently matched. Experimental results demonstrate that using 28 small regions on the face allows for the highest level of 3D face recognition. Score-based fusion is performed on the individual region match scores, and experimental results show that the Borda count and consensus voting methods yield higher performance than the standard sum, product, and min fusion rules. In addition, results are reported that demonstrate the robustness of our algorithm by simulating large holes and artifacts in images. To our knowledge, no other work has been published that uses a large number of 3D face regions for high-performance face matching. Rank-one recognition rates of 97.2% and verification rates of 93.2% at a 0.1% false accept rate are reported and compared to other methods published on the Face Recognition Grand Challenge v2 data set. |
A Mixed Hierarchical Attention based Encoder-Decoder Approach for Standard Table Summarization | Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables, as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoder-decoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼30%) improvement over the current state-of-the-art. |
Calibrated alar base excision: A 20-year experience | Conflicting guidelines for excisions about the alar base led us to develop calibrated alar base excision, a modification of Weir's approach. In approximately 20% of 1,500 rhinoplasties this technique was utilized as a final step. Of these patients, 95% had lateral wall excess (“tall nostrils”), 2% had nostril floor excess (“wide nostrils”), 2% had a combination of these (“tall-wide nostrils”), and 1% had thick nostril rims. Lateral wall excess length is corrected by a truncated crescent excision of the lateral wall above the alar crease. Nasal floor excess is improved by an excision of the nasal sill. Combination noses (e.g., tall-wide) are approached with a combination alar base excision. Finally, noses with thick rims are improved with diamond excision. Closure of the excision is accomplished with fine simple external sutures. Electrocautery is unnecessary, and deep sutures are utilized only in wide noses. Few complications were noted. Benefits of this approach include straightforward surgical guidelines, a natural-appearing correction, avoidance of notching or obvious scarring, and a quick and simple procedure. |
Towards a Decentralized Magnetic Indoor Positioning System | Decentralized magnetic indoor localization is a sophisticated method for processing sampled magnetic data directly on a mobile station (MS), thereby decreasing or even avoiding the need for communication with the base station. In contrast to centrally-oriented positioning systems, which transmit raw data to a base station, decentralized indoor localization pushes application-level knowledge into the MS. A decentralized position solution thus has strong potential to increase energy efficiency and to prolong the lifetime of the MS. In this article, we present a complete architecture and an implementation for a decentralized positioning system. Furthermore, we introduce a technique for synchronizing the magnetic field observed on the MS with the artificially-generated magnetic field from the coils. Based on real-time clocks (RTCs) and a preemptive operating system, this method allows stand-alone control of the coils and a proper assignment of the measured magnetic fields on the MS. Stand-alone control and synchronization of the coils and the MS offer exceptional potential for implementing a positioning system without the need for wired or wireless communication, and enable the deployment of applications for rescue scenarios, such as the localization of miners or firefighters. |
B2B E-commerce: Frameworks for e-readiness assessment | This paper looks into the development of an e-readiness model for B2B e-commerce of Small and Medium Enterprises (SMEs). Drawing on existing research on e-readiness models of ICT (Information and Communications Technology), specifically B2B e-commerce technologies, this paper formulates a conceptual framework of B2B e-commerce assessment constructs together with a specific research approach towards the development of a more robust e-readiness model. The research presents a conceptual model and framework that highlight the key factors in B2B e-commerce implementation, index score formulations, focus group evaluations and recommended approaches to improve B2B e-commerce readiness. |
Radiofrequency ablation of advanced head and neck cancer. | OBJECTIVE
To determine if the application of radiofrequency ablation to advanced head and neck cancer (HNC) would result in local control of the tumor.
DESIGN
Radiofrequency ablation was applied to advanced head and neck malignant tumors in the participants of this nonrandomized controlled trial.
SETTING
Academic tertiary care medical center.
PARTICIPANTS
Twenty-one participants with recurrent and/or unresectable HNC who failed treatment with surgery, radiation, and/or chemotherapy were selected for the trial. Patients deemed appropriate for curative standard radiation or surgery were not accepted as participants.
INTERVENTION
Radiofrequency ablation was applied to head and neck tumors under general anesthesia and computed tomographic scan guidance.
MAIN OUTCOME MEASURES
The primary end point was local control. Computed tomographic scan tumor measurements were used to assess response by standard Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. Secondary outcome measures included survival and quality of life.
RESULTS
Eight of 13 participants had stable disease after intervention. Median survival was 127 days, and an improvement in University of Washington quality-of-life scores was noted. Adverse outcomes included 1 death due to carotid hemorrhage and 2 strokes.
CONCLUSION
Radiofrequency ablation is a palliative treatment alternative that shows promise in addressing the challenges of local control and quality of life in patients with incurable HNC who have failed standard curative treatment. |
Distributed Representations for Compositional Semantics | The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches—meaning distributed representations that exploit co-occurrence statistics of large corpora—have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work. |
Favorable and Unfavorable Characteristics of EFL Teachers Perceived by University Students of Thailand | Teachers play pivotal roles in EFL classrooms. Characteristics of EFL teachers may affect students' attitudes and motivation towards language learning. The effective/good characteristics of EFL teachers as perceived by students have been extensively investigated in previous research. However, the perceptions that students from different backgrounds hold of EFL teachers may vary in different learning settings. In addition, research on both favorable and unfavorable characteristics of EFL teachers is comparatively scarce. This study aimed to investigate the favorable and unfavorable characteristics of EFL teachers perceived by Thai university students. The data were collected from 60 students at Vongchavalitkul University. Open-ended questionnaires and semi-structured interviews were used as the main instruments for data collection. Useful information about EFL teachers' personal trait-related characteristics and classroom teaching-related characteristics emerged from the data. This information is very useful for EFL teachers to reflect on their personal characteristics and reconsider their classroom teaching, which may help them adjust and prepare their teaching to achieve better educational results. |
Genetic screening for a single common LRRK2 mutation in familial Parkinson's disease. | Mutations in the leucine-rich repeat kinase 2 (LRRK2) gene cause some forms of autosomal dominant Parkinson's disease. We measured the frequency of a novel mutation (Gly2019Ser) in familial Parkinson's disease by screening genomic DNA of patients and controls. Of 767 affected individuals from 358 multiplex families, 35 (5%) individuals were either heterozygous (34) or homozygous (one) for the mutation, and had typical clinical findings of idiopathic Parkinson's disease. Thus, our results suggest that a single LRRK2 mutation causes Parkinson's disease in 5% of individuals with familial disease. Screening for this mutation should be a component of genetic testing for Parkinson's disease. |
Building contingency planning for closed-loop supply chains with product recovery | Contingency planning is the first stage in developing a formal set of production planning and control activities for the reuse of products obtained via return flows in a closed-loop supply chain. The paper takes a contingency approach to explore the factors that impact production planning and control for closed-loop supply chains that incorporate product recovery. A series of three cases are presented, and a framework developed that shows the common activities required for all remanufacturing operations. To build on the similarities and illustrate and integrate the differences in closed-loop supply chains, Hayes and Wheelwright’s product–process matrix is used as a foundation to examine the three cases representing Remanufacture-to-Stock (RMTS), Reassemble-to-Order (RATO), and Remanufacture-to-Order (RMTO). These three cases offer end-points and an intermediate point for closed-loop supply operations. Since they represent different positions on the matrix, characteristics such as returns volume, timing, quality, product complexity, test and evaluation complexity, and remanufacturing complexity are explored. With a contingency theory for closed-loop supply chains that incorporate product recovery in place, past cases can now be reexamined and the potential for generalizability of the approach to similar types of other problems and applications can be assessed and determined. |
Saliency Detection via Absorbing Markov Chain | In this paper, we formulate saliency detection via absorbing Markov chain on an image graph model. We jointly consider the appearance divergence and spatial distribution of salient objects and the background. The virtual boundary nodes are chosen as the absorbing nodes in a Markov chain and the absorbed time from each transient node to boundary absorbing nodes is computed. The absorbed time of transient node measures its global similarity with all absorbing nodes, and thus salient objects can be consistently separated from the background when the absorbed time is used as a metric. Since the time from transient node to absorbing nodes relies on the weights on the path and their spatial distance, the background region on the center of image may be salient. We further exploit the equilibrium distribution in an ergodic Markov chain to reduce the absorbed time in the long-range smooth background regions. Extensive experiments on four benchmark datasets demonstrate robustness and efficiency of the proposed method against the state-of-the-art methods. |
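The absorbed time described above is the expected number of steps before a random walk started at a transient node reaches a boundary (absorbing) node; with the transition matrix partitioned so that Q is the transient-to-transient block, it is the row sum of the fundamental matrix N = (I − Q)⁻¹. A small numpy sketch on a toy chain (the paper's graph construction and edge weights are not reproduced):

```python
import numpy as np

def absorbed_time(P, transient):
    """Expected steps to absorption from each transient node.

    P: row-stochastic transition matrix over all graph nodes; absorbing
    (boundary) nodes have P[i, i] = 1. Saliency intuition: larger absorbed
    time = less similar to the boundary/background = more salient.
    """
    Q = P[np.ix_(transient, transient)]              # transient -> transient
    N = np.linalg.inv(np.eye(len(transient)) - Q)    # fundamental matrix
    return N @ np.ones(len(transient))               # row sums = absorbed time

# Tiny 4-node chain: nodes 0,1 transient; nodes 2,3 absorbing (boundary)
P = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.5, 0.0, 0.3],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(absorbed_time(P, [0, 1]))  # ~[4.21, 3.68]
```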
Overtime evaluation of the vascular HEALing process after everolimus-eluting stent implantation by optical coherence tomography. The HEAL-EES study. | PURPOSE
Second-generation drug-eluting stents (DES) have shown better safety and efficacy than first-generation DES due to an improved vascular healing process. This process has not so far been evaluated in vivo over time by optical coherence tomography (OCT). We sought to evaluate the vascular healing process after everolimus-eluting stent (EES) implantation at 6, 9 and 12 months by OCT.
METHODS
Thirty-six consecutive patients undergoing percutaneous coronary intervention with EES were randomized 1:1:1 to receive OCT imaging at 6-month (group A), 9-month (group B) or 12-month follow-up (group C). One patient from group C was excluded because of target lesion revascularization at 1 month, and 5 patients withdrew informed consent. Finally, 30 patients were analyzed.
RESULTS
Neointimal thickness did not differ among the 3 groups (group A: 99.50 [94.06-127.79] μm; group B: 107.26 [83.48-133.59] μm; group C: 127.67 [102.51-138.49] μm; p=0.736). Although the percentage of uncovered struts was numerically higher in group A than in the other groups (8.0% vs. 4.4% vs. 2.9%, respectively; p=0.180), the percentage of sections with a ratio of uncovered to total struts >30% was similarly low in all 3 groups (0.3% vs. 0.3% vs. 0%, respectively; p=1.000).
CONCLUSION
The healing process following EES implantation appears to be almost complete at 6-month follow-up. These data, which need to be confirmed in a larger study, may support the decision to shorten dual antiplatelet therapy.
Dynamics of Rhinoplasty | Nasal dynamics were studied in 87 patients undergoing rhinoplasty of one zone or two distant nasal zones. Statistical analysis of the results revealed that reduction of the nasion area, besides setting the soft tissue back, gave the appearance of increased intercanthal distance and lengthened the nose. Reduction of the nasal bridge resulted in a wider appearance on front view and a cephalically rotated tip on profile; augmentation of the bridge had the reverse effect. Cephalad tip rotation was achieved by resecting one of three areas: the cephalad portion of the lower lateral cartilages (affecting the rims more), the caudal septum (affecting the central portion more), and the caudal portion of the medial crura of the lower lateral cartilages (affecting the central portion only). Resection of the alar base not only narrowed the nostrils but also moved the alar rim caudally; furthermore, it reduced tip projection when a large alar base reduction was done. Reduction of the nasal spine increased the upper lip length on profile and reduced tip projection when a large reduction took place. Significant reduction in caudal nose projection resulted in widening of the alar base.
Reference-Conditioned Super-Resolution by Neural Texture Transfer | With recent advances in deep learning, we have witnessed great progress in single image super-resolution. However, due to the significant information loss of the image downscaling process, it has become extremely challenging to further advance the state-of-the-art, especially for large upscaling factors. This paper explores a new research direction in super-resolution, called reference-conditioned super-resolution, in which a reference image containing the desired high-resolution texture details is provided in addition to the low-resolution image. We focus on transferring the high-resolution texture from reference images to the super-resolution process without the constraint of content similarity between reference and target images, which is a key difference from previous example-based methods. Inspired by recent work on image stylization, we address the problem via neural texture transfer. We design an end-to-end trainable deep model which generates detail-enriched results by adaptively fusing the content from the low-resolution image with the texture patterns from the reference image. We create a benchmark dataset for the general research of reference-based super-resolution, which contains reference images paired with low-resolution inputs at varying degrees of similarity. Both objective and subjective evaluations demonstrate the great potential of using reference images as well as the superiority of our results over other state-of-the-art methods.
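As a rough illustration of the texture-matching step such methods rely on (not the paper's actual architecture), the PyTorch sketch below matches each low-resolution feature patch to its most similar reference patch by cosine similarity; the resulting index map would then drive the fusion of reference texture into the upscaling branch. Feature shapes, patch size, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def swap_texture(lr_feat, ref_feat, patch=3):
    """Patch-wise texture matching on feature maps (a minimal sketch).

    lr_feat, ref_feat: (1, C, H, W) features, e.g. from a pretrained VGG layer.
    Returns, for every LR location, the index of the most similar reference
    patch; a full model would rebuild a swapped feature map from these
    matches and fuse it with the content branch via a trainable decoder.
    """
    C = lr_feat.shape[1]
    # Reference patches as convolution kernels, L2-normalized per patch.
    kernels = F.unfold(ref_feat, patch).squeeze(0).T            # (L, C*p*p)
    kernels = F.normalize(kernels, dim=1).view(-1, C, patch, patch)

    # Correlation of every LR location with every reference patch...
    corr = F.conv2d(lr_feat, kernels, padding=patch // 2)       # (1, L, H, W)
    # ...divided by the local LR patch norm gives cosine similarity.
    ones = torch.ones(1, C, patch, patch)
    lr_norm = F.conv2d(lr_feat ** 2, ones, padding=patch // 2).sqrt()
    sim = corr / (lr_norm + 1e-8)
    return sim.argmax(dim=1)                                    # (1, H, W)

lr = torch.randn(1, 64, 32, 32)    # toy low-resolution feature map
ref = torch.randn(1, 64, 48, 48)   # toy reference feature map
print(swap_texture(lr, ref).shape)  # torch.Size([1, 32, 32])
```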
StyleCheck: An Automated Stylistic Analysis Tool | StyleCheck is a user-friendly tool with multiple functions designed to aid in the production of quality writing. Its features include stylistic analysis (on both document-wide and individual-sentence scales) and spelling and grammar checking, as well as generation of suggested replacements for all types of errors. In addition, StyleCheck includes the capability to identify the famous author (out of a limited corpus) whose style is most similar to the user's. The source code for StyleCheck is available online at: https://github.com/alexpwelton/StyleCheck (Dartmouth Computer Science Technical Report TR2014-754)
Appropriate Use Criteria for Coronary Revascularization and Trends in Utilization, Patient Selection, and Appropriateness of Percutaneous Coronary Intervention. | IMPORTANCE
Appropriate Use Criteria for Coronary Revascularization were developed to critically evaluate and improve patient selection for percutaneous coronary intervention (PCI). National trends in the appropriateness of PCI have not been examined.
OBJECTIVE
To examine trends in PCI utilization, patient selection, and procedural appropriateness following the introduction of Appropriate Use Criteria.
DESIGN, SETTING, AND PARTICIPANTS
Multicenter, longitudinal, cross-sectional analysis of patients undergoing PCI between July 1, 2009, and December 31, 2014, at hospitals continuously participating in the National Cardiovascular Data Registry CathPCI registry over the study period.
MAIN OUTCOMES AND MEASURES
Proportion of nonacute PCIs classified as inappropriate at the patient and hospital level using the 2012 Appropriate Use Criteria for Coronary Revascularization.
RESULTS
A total of 2.7 million PCI procedures from 766 hospitals were included. Annual PCI volume for acute indications was consistent over the study period (377,540 in 2010; 374,543 in 2014), but the volume of nonacute PCIs decreased from 89,704 in 2010 to 59,375 in 2014. Among patients undergoing nonacute PCI, there were significant increases in angina severity (Canadian Cardiovascular Society grade III/IV angina, 15.8% in 2010 and 38.4% in 2014), use of antianginal medications prior to PCI (at least 2 antianginal medications, 22.3% in 2010 and 35.1% in 2014), and high-risk findings on noninvasive testing (22.2% in 2010 and 33.2% in 2014) (P < .001 for all), but only modest increases in multivessel coronary artery disease (43.7% in 2010 and 47.5% in 2014, P < .001). The proportion of nonacute PCIs classified as inappropriate decreased from 26.2% (95% CI, 25.8%-26.6%) to 13.3% (95% CI, 13.1%-13.6%), and the absolute number of inappropriate PCIs decreased from 21,781 to 7,921. Hospital-level variation in the proportion of PCIs classified as inappropriate persisted over the study period (median, 12.6% [interquartile range, 5.9%-22.9%] in 2014).
CONCLUSIONS AND RELEVANCE
Since the publication of the Appropriate Use Criteria for Coronary Revascularization in 2009, there have been significant reductions in the volume of nonacute PCI. The proportion of nonacute PCIs classified as inappropriate has declined, although hospital-level variation in inappropriate PCI persists. |
Relative sub-image based features for leaf recognition using support vector machine | In this paper, we extract our proposed relative sub-image based (RSC) features from leaf images and use an SVM classifier to implement an automated leaf recognition system for plant leaf identification and classification. Automatic plant species identification and classification is helpful in biology, forestry, and agriculture for studying and discovering new plant species in botanical gardens, and is also used to recognize medicinal plants for preparing herbal medicines. Here, 300 features are extracted from each leaf of a 624-leaf dataset to classify 23 different plant species with an average accuracy of 95%. Compared with other approaches, our proposed algorithm has lower time complexity and is easy to implement, with higher accuracy.
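A minimal scikit-learn sketch of the classification stage only, with random placeholder data standing in for the paper's 300-dimensional leaf features (the RSC feature extraction itself is not reproduced here); kernel and hyperparameters are illustrative assumptions, so the printed accuracy on random data is meaningless.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 624 leaves, 300 features each, 23 species labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(624, 300))     # would be RSC features from leaf images
y = rng.integers(0, 23, size=624)   # would be the true species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2%}")
```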
Efficient deep learning for stereo matching with larger image patches | Stereo matching plays an important role in many applications, such as Advanced Driver Assistance Systems, 3D reconstruction, and navigation. However, it is still an open problem with many difficulties, the most common being occlusions, object boundaries, and low or repetitive textures. In this paper, we propose a method for the stereo matching problem. We propose an efficient convolutional neural network to measure how likely two patches are to match, and we use this similarity as their stereo matching cost. The cost is then refined by standard stereo methods, such as semi-global matching, subpixel interpolation, and median filtering. Our architecture uses large image patches, which makes the results more robust in texture-less or repetitive-texture areas. We evaluate our approach on the KITTI 2015 dataset, obtaining an error rate of 4.42% while needing only 0.8 seconds per image pair.
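A minimal PyTorch sketch of a siamese patch-matching network of the kind the abstract describes; the layer sizes, patch size, and scoring head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MatchNet(nn.Module):
    """Siamese CNN producing a matching score for two image patches.

    Shared convolutional branches embed each grayscale patch, and a small
    head scores how likely the two patches match; the score would serve
    as the stereo matching cost before SGM-style refinement.
    """
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(            # shared weights for both patches
            nn.Conv2d(1, 32, 5), nn.ReLU(),
            nn.Conv2d(32, 32, 5), nn.ReLU(),
            nn.Conv2d(32, 64, 5), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(              # scores the paired embeddings
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, left, right):
        f = torch.cat([self.branch(left), self.branch(right)], dim=1)
        return torch.sigmoid(self.head(f))      # 1 = likely match, 0 = non-match

net = MatchNet()
left = torch.randn(8, 1, 37, 37)                # batch of left-image patches
right = torch.randn(8, 1, 37, 37)               # candidate right-image patches
print(net(left, right).shape)                   # torch.Size([8, 1])
```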
UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision | Deep neural network (DNN) accelerators [1-3] have been proposed to accelerate deep learning algorithms, from face recognition to emotion recognition, in mobile or embedded environments [3]. However, most works accelerate only the convolutional layers (CLs) or fully-connected layers (FCLs), and other DNNs, such as those containing recurrent layers (RLs) (useful for emotion recognition), have not been supported in hardware. A combined CNN-RNN accelerator [1], separately optimizing the computation-dominant CLs and the memory-dominant RLs or FCLs, was reported to increase overall performance; however, the number of processing elements (PEs) for CLs and RLs was limited by their area, and consequently performance was suboptimal in scenarios requiring only CLs or only RLs. Although the PEs for RLs can be reconfigured into PEs for CLs, or vice versa, only a partial reconfiguration was possible, resulting in marginal performance improvement. Moreover, previous works [1-2] supported a limited set of weight bit-precisions, such as either 4b, 8b, or 16b. However, lower weight bit-precisions can achieve better throughput and higher energy efficiency, and the optimal bit-precision can vary according to different accuracy/performance requirements. Therefore, a unified DNN accelerator with fully-variable weight bit-precision is required for the energy-optimal operation of DNNs within a mobile environment.
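To make the precision/energy trade-off concrete, here is a small NumPy sketch of symmetric uniform weight quantization at a selectable width from 1b to 16b; it illustrates the accuracy cost that variable-precision hardware lets designers trade against throughput and energy, and is not the UNPU's actual arithmetic.

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    if bits == 1:
        return np.sign(w) * np.mean(np.abs(w))   # binary weights with a scale
    qmax = 2 ** (bits - 1) - 1                   # e.g. 7 for 4b, 127 for 8b
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)   # toy weight tensor
for b in (1, 4, 8, 16):
    err = np.abs(quantize_weights(w, b) - w).mean()
    print(f"{b:2d}b weights: mean abs quantization error = {err:.5f}")
```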
A Hard-Core Predicate for all One-Way Functions | A central tool in constructing pseudorandom generators, secure encryption functions, and in other areas are "hard-core" predicates b of functions (permutations) f, discovered in [Blum Micali 82]. Such b(x) cannot be efficiently guessed (substantially better than 50-50) given only f(x). Both b and f are computable in polynomial time.
[Yao 82] transforms any one-way function f into a more complicated one, f*, which has a hard-core predicate. The construction applies the original f to many small pieces of the input to f* just to get one "hard-core" bit. The security of this bit may be smaller than any constant positive power of the security of f. In fact, for inputs (to f*) of practical size, the pieces affected by f are so small that f can be inverted (and the "hard-core" bit computed) by exhaustive search.
In this paper we show that every one-way function, padded to the form f(p, x) = (p, g(x)), ‖p‖ = ‖x‖, has by itself a hard-core predicate of the same (within a polynomial) security. Namely, we prove a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of Boolean vectors p, x is a hard-core of every one-way function f(p, x) = (p, g(x)). The result extends to multiple (up to the logarithm of security) such bits and to any distribution on the x's for which f is hard to invert.
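For concreteness, the predicate in question is the GF(2) inner product of the Boolean vectors p and x, i.e. the parity of their bitwise AND; a minimal Python sketch (variable names and bit lengths are illustrative):

```python
import secrets

def hardcore_bit(p: int, x: int) -> int:
    """Goldreich-Levin hard-core bit: inner product of p and x over GF(2).

    <p, x> mod 2 is the XOR of the bits of x selected by p, i.e. the
    parity of the bitwise AND. Given only f(p, x) = (p, g(x)), guessing
    this bit noticeably better than 50-50 is as hard as inverting g.
    """
    return bin(p & x).count("1") % 2

n = 128
p = secrets.randbits(n)      # public random vector, same length as x
x = secrets.randbits(n)      # secret input to the one-way function g
print(hardcore_bit(p, x))    # the hard-core (pseudorandom) bit
```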
Structural plasticity and the evolution of antibody affinity and specificity. | The germline precursor to the ferrochelatase antibody 7G12 was found to bind the polyether jeffamine in addition to its cognate hapten N-methylmesoporphyrin. A comparison of the X-ray crystal structures of the ligand-free germline Fab and its complex with either hapten or jeffamine reveals that the germline antibody undergoes significant conformational changes upon the binding of these two structurally distinct ligands, which lead to increased antibody-ligand complementarity. The five somatic mutations introduced during affinity maturation lead to enhanced binding affinity for hapten and a loss in affinity for jeffamine. Moreover, a comparison of the crystal structures of the germline and affinity-matured antibodies reveals that somatic mutations not only fix the optimal binding site conformation for the hapten, but also introduce interactions that interfere with the binding of non-hapten molecules. The structural plasticity of this germline antibody and the structural effects of the somatic mutations that result in enhanced affinity and specificity for hapten likely represent general mechanisms used by the immune response, and perhaps primitive proteins, to evolve high affinity, selective receptors for so many distinct chemical structures. |
Gender differences in cardiovascular prophylaxis: Focus on antiplatelet treatment. | Cardiovascular disease (CVD) represents the leading cause of death worldwide and affects both sexes equally, although women develop disease at an older age than men. A body of clinical evidence has identified female sex as an independent factor for poor prognosis, with the rate of mortality and disability following an acute cardiovascular (CV) event being higher in women than in men. It has been argued that the different levels of platelet reactivity between the sexes may account for different responsiveness to antiplatelet therapy, with important implications for clinical outcomes. However, conclusive evidence supporting the concept of gender-dependent effectiveness of platelet inhibitors is lacking. On the contrary, sex-related dissimilarities have been evidenced in cardiovascular patients in terms of age at presentation; comorbidities such as obesity, diabetes, and renal disease; and a different pharmacological approach to, and effectiveness in, controlling classical cardiovascular risk factors such as hypertension, glucose profile, and lipid dysmetabolism. All these factors could place women at an increased level of cardiovascular risk compared with men and may contribute to an enhanced pro-thrombogenic profile. The purpose of this manuscript is to provide an overview of gender-related differences in cardiovascular treatment, in order to highlight the need to improve the pharmacological prophylaxis adopted in women through a more accurate evaluation of the overall cardiovascular risk profile, with consequent establishment of a more effective and targeted antithrombotic strategy that is not limited to the use of antiplatelet agents.
Detection of bovine herpesvirus 1 from an outbreak of infectious bovine rhinotracheitis | Clinical symptoms of the respiratory tract were observed in cattle after the introduction of pregnant heifers into the dairy herd. Sera and nasal swabs from all animals, and tissue samples from two dead animals, were tested for BHV1. Specific antibodies against BHV1 were found in the serum samples of 24 animals; only one sample gave a doubtful reaction in the gB ELISA. The virus was isolated only from nasal swabs and from the lungs of a 2-week-old calf; the remaining samples were negative in the virus isolation test. PCR with external primers detected the presence of BHV1 in 11 nasal swabs and in the lung and liver samples of the 2-week-old calf, and in nested PCR almost all tested samples were positive. Restriction enzyme analysis confirmed the specificity of the amplification. The results of laboratory diagnosis revealed that the introduction of newly purchased animals into the herd initiated the outbreak of disease caused by BHV1.
Modelling and multicriteria analysis of water saving scenarios for an irrigation district in the upper Yellow River Basin | Water saving in irrigation is a main issue in the Yellow River basin. This paper refers to a field and modelling study performed in the Huinong Irrigation District, a very large surface irrigation system in Ningxia, upper Yellow River basin, intended to assess water saving and improved water use. The decision support system SEDAM was purposely developed to evaluate alternative scenarios for improving farm and off-farm irrigation canal systems. It includes a demand and delivery simulation tool and adopts multicriteria analysis. Simulation is performed at various scales, starting at the distributor and then successively at the sub-branch, branch, and sector scales. It uses a database built from random generation of system characteristics at these scales, based on field surveys. Demand is built by interactively exploring the irrigation scheduling simulation model ISAREG and the surface irrigation models SRFR and SIRMOD, which were previously parameterized; the former is used to generate improved irrigation schedules and the latter to define improved basin irrigation scenarios. In addition, a simple paddy irrigation tool is used to simulate replacing the current deep flooding method with shallow water irrigation. Water delivery scenarios are built to match those of demand, including several improved procedures that aim at controlling runoff and seepage. Results indicate that progressively adopting farm and delivery system improvements leads to reduced canal seepage and runoff, which is essential to the effective functioning of the drainage system, in addition to controlling diversions into the Huinong canal. Water savings amount to more than 50% of actual water use. However, results referring to the economic criteria, particularly the farm gross margin, reveal that more stringent improvements have low impacts, i.e. the respective utilities increase little when scenarios require higher investments. The described application shows that adopting a DSS simulation model and multicriteria analysis is appropriate for assessing water use improvements in large irrigation systems, and that it is advantageous to analyze the related impacts by combining economic and environmental criteria. The importance of adopting improved delivery systems is also evidenced. © 2007 Elsevier B.V. All rights reserved.
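As a toy illustration of the multicriteria aggregation step such a DSS performs, the sketch below ranks scenarios by a weighted sum of normalized utilities; all scenario names, criteria, weights, and numbers are invented for illustration and do not reproduce SEDAM's actual method.

```python
def rank_scenarios(scenarios, weights):
    """Rank scenarios by additive weighted utility (a minimal sketch).

    scenarios: name -> {criterion: normalized utility in [0, 1]}
    weights:   criterion -> importance weight, summing to 1
    """
    score = lambda u: sum(weights[c] * u[c] for c in weights)
    return sorted(scenarios, key=lambda s: score(scenarios[s]), reverse=True)

# Hypothetical scenarios trading water saving against farm gross margin.
scenarios = {
    "current practice":   {"water_saving": 0.10, "farm_gross_margin": 0.80},
    "improved schedules": {"water_saving": 0.45, "farm_gross_margin": 0.75},
    "full modernization": {"water_saving": 0.60, "farm_gross_margin": 0.50},
}
weights = {"water_saving": 0.6, "farm_gross_margin": 0.4}
print(rank_scenarios(scenarios, weights))
# ['improved schedules', 'full modernization', 'current practice']
```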
Investigation of MPPT for PV applications by mathematical model | This paper discusses the PV system as a whole by emulating the load and the PV power supply under lab conditions. The discussion is based on results obtained for a monocrystalline silicon PV cell module; the main quantities of interest are the open-circuit voltage, the short-circuit current, and the maximum power. The lab work was performed under Standard Testing Conditions (STC) to ensure a fair comparison between different conditions. The paper confirms that as temperature increases, the open-circuit voltage and the power decrease, and that as insolation increases, the short-circuit current increases, which raises the maximum power. Per-unit values are used to investigate whether temperature or insolation has the bigger impact on Pmax; for these experimental conditions, insolation was found to dominate. The paper also builds a Simulink/MATLAB module to simulate PV cells under different environmental conditions and to help design future PV systems; the model is validated against the lab-emulated data to assess how closely the obtained results match real-life results. The paper further studies the integration of a DC-DC boost converter with an open-loop control circuit (fixed duty ratio). In addition, it discusses three approaches to Maximum Power Point Tracking (MPPT), namely P&O, RCC, and INC, and investigates and verifies the performance of each.
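To make the MPPT discussion concrete, here is a minimal Python sketch of the Perturb & Observe (P&O) rule mentioned in the abstract, driven by an invented toy power-voltage curve; the step size, curve, and numbers are illustrative only.

```python
def perturb_and_observe(v_meas, p_meas, v_prev, p_prev, step=0.5):
    """One iteration of the P&O MPPT rule.

    If the last perturbation increased power, keep moving the operating
    voltage in the same direction; otherwise reverse. Returns the next
    voltage reference for the converter's duty-cycle control.
    """
    dp = p_meas - p_prev
    dv = v_meas - v_prev
    if dp * dv > 0:          # power rose in the direction we moved
        return v_meas + step
    return v_meas - step     # power fell: step back the other way

def pv_power(v):
    """Toy PV power-voltage curve with a maximum near 17 V (illustrative)."""
    return max(0.0, -0.3 * (v - 17.0) ** 2 + 60.0)

v_prev, p_prev, v = 15.0, pv_power(15.0), 15.5
for _ in range(20):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(f"oscillating near v = {v:.1f} V, p = {pv_power(v):.1f} W")
```

Note the characteristic behavior: the operating point climbs toward the maximum power point and then oscillates around it by one step size, which is the well-known drawback of fixed-step P&O.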
Meta-Path Guided Embedding for Similarity Search in Large-Scale Heterogeneous Information Networks | Most real-world data can be modeled as heterogeneous information networks (HINs) consisting of vertices of multiple types and their relationships. Searching for similar vertices of the same type in large HINs, such as bibliographic networks and business-review networks, is a fundamental problem with broad applications. Although similarity search in HINs has been studied previously, most existing approaches neither explore the rich semantic information embedded in the network structures nor take the user's preference as guidance. In this paper, we re-examine similarity search in HINs and propose a novel embedding-based framework. It models vertices as low-dimensional vectors to explore network-structure-embedded similarity. To accommodate user preferences in defining similarity semantics, our proposed framework, ESim, accepts user-defined meta-paths as guidance to learn vertex vectors in a user-preferred embedding space. Moreover, an efficient and parallel sampling-based optimization algorithm has been developed to learn embeddings in large-scale HINs. Extensive experiments on real-world large-scale HINs demonstrate a significant improvement in the effectiveness of ESim over several state-of-the-art algorithms, as well as its scalability.
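A minimal sketch of the meta-path-guided sampling idea (not ESim's actual implementation): a random walk constrained so that each step lands on a node whose type matches the next slot of a user-supplied meta-path. The toy graph and names are assumptions; the sampled sequences would feed a skip-gram-style embedding learner.

```python
import random

def meta_path_walk(adj, node_type, start, meta_path, walk_len):
    """Random walk constrained to follow a meta-path.

    adj:       node -> list of neighbor nodes
    node_type: node -> one-letter type, e.g. 'A' (author), 'P' (paper)
    meta_path: symmetric type string such as 'APA'; the start node's type
               should match meta_path[0], and the pattern is repeated
               along the walk ('APA' -> A, P, A, P, A, ...).
    """
    walk, pos = [start], 0
    while len(walk) < walk_len:
        pos = pos + 1 if pos + 1 < len(meta_path) else 1  # wrap after the end
        want = meta_path[pos]
        nexts = [v for v in adj[walk[-1]] if node_type[v] == want]
        if not nexts:
            break                       # dead end: no neighbor of required type
        walk.append(random.choice(nexts))
    return walk

# Tiny bibliographic toy graph: authors a1 and a2 share paper p1.
adj = {"a1": ["p1"], "a2": ["p1", "p2"], "p1": ["a1", "a2"], "p2": ["a2"]}
node_type = {"a1": "A", "a2": "A", "p1": "P", "p2": "P"}
print(meta_path_walk(adj, node_type, "a1", "APA", walk_len=5))
```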
Microfinance programs and better health: prospects for sub-Saharan Africa. | Although social gradients in morbidity and mortality from scrofula, rickets, and scarlet fever were noticed in England as early as 1845, current understanding of the relationship between poverty and ill health is still evolving. A detailed examination of the social determinants of health is the current focus of a World Health Organization Commission, and a global agenda that addresses the overlapping vulnerabilities of poverty, social exclusion, and health recently has been articulated in the United Nations Millennium Development Goals (MDG) framework. Sub-Saharan Africa remains the area of the world at greatest risk of failing to meet any MDG targets. Some experts suggest that conditions of extreme deprivation characterizing much of the region create "poverty traps" that limit access to proven interventions and constrain potential gains in employment, income, food, shelter, and education, carrying dire immediate and longer-term health consequences. The interdependence of poverty, health, and development might seem obvious, but cross-sectoral experience on how and where to intervene remains limited. Microfinance programs are increasingly at the center of this nexus, and new ideas can extend their potential benefits. Microfinance institutions (MFIs) provide poor households with access to financial services, allowing them to borrow and save in reliable and convenient forms. The success of the microfinance sector has been impressive. Across a wide variety of models, reported loan repayment rates, even among the poorest clients, often exceed 95%. Global experience has demonstrated that MFIs can recover all or most of their administrative costs through interest rates and user fees. Thus, rapid growth and wide scale are possible, even when donor funds are limited. By the end of 2005, more than 3000 MFIs were reported to have been providing services to 113 million clients, 84% of whom were women. The 2006 Nobel Peace Prize to Muhammad Yunus and Grameen Bank was given in recognition that microfinance also promises to effect social change. Small loans used for income generation have the potential to reduce poverty directly, while simultaneously catalyzing wider benefits including better health. At the most basic level, higher and steadier incomes make it easier to put food on the table each day. When health problems emerge, access to reliable ways to borrow and save can make it easier to pay for medicines and clinic visits. Financial access also can help individuals cope with unemployment caused by illness and forestall their need to sell off valuable assets. Thus, interventions to improve financial access may complement interventions to improve health conditions. Opportunities also are emerging for MFIs to broaden their scope and benefits that as yet remain largely unrealized. Microfinance institutions operate in villages, slums, and neighborhoods in which the lack of financial access is just one of many deprivations. In creating neighborhood-based associations of women that meet regularly and focus on tools to improve livelihoods, many MFIs may have the potential to more directly address health-related concerns. Doing so will not make sense for every institution and population, and microfinance leaders rightly have been wary of weighing down institutions with added responsibilities. But evidence is mounting to suggest that combining financial and health interventions can be powerful. These understandings are evolving, and expectations need to be realistic.
Despite a tide of global optimism, beneficial effects of microfinance programs have not been witnessed in all contexts, and experimental evaluations, common in assessing health interventions, are virtually absent in the microfinance sector. Furthermore, although sub-Saharan Africa has the highest proportion of people living in extreme poverty, with more than 40% living on less than $1 per day, access to microfinance services remains extremely limited, extending to less than 10% of those who need it. In this Commentary, we discuss the global experience and available evidence on the potential for microfinance to contribute toward achieving MDG targets, with specific reference to health gains, and we examine the challenges and opportunities for expanding access to such services in sub-Saharan Africa.
Low-Power mm-Wave Components up to 104GHz in 90nm CMOS | A customized 90nm device layout yields an extrapolated fmax of 300GHz. The device is incorporated into a low-power 60GHz amplifier consuming 10.5mW, providing 12dB of gain, and an output P1dB of 4dBm. An experimental 3-stage 104GHz amplifier has a measured peak gain of 9.3dB. Finally, a Colpitts oscillator at 104GHz delivers up to -5dBm of output power while consuming 6mW. |