Sentiment Analysis using SVM: A Systematic Literature Review
The world has entered a new era, one defined by technology and digitalization. As the market has evolved at a staggering scale, organizations must exploit the advantages and opportunities it provides. With the advent of Web 2.0, and given the scalability and unbounded reach it offers, it is detrimental for an organization not to adopt the new techniques demanded by the competitive stakes this emerging virtual world has set. Modern, highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments about their services and products from micro-blogging sites. This type of analysis enables organizations to assess what consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of their products and services. This study presents a critical analysis of the literature from 2012 to 2017 on sentiment analysis using the support vector machine (SVM), one of the most widely used supervised machine learning techniques for text classification. This systematic review will help scholars and researchers analyze the latest work on sentiment analysis with SVM and provide a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); SLR; systematic literature review
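The kind of classifier the survey covers can be illustrated with a minimal linear SVM trained on a toy bag-of-words corpus. This is a sketch only: the example sentences and Pegasos-style subgradient training below are our own invention, whereas the surveyed papers typically use library implementations (e.g., LIBSVM) on real review datasets.

```python
# Minimal linear SVM for sentiment polarity: bag-of-words features
# plus Pegasos-style subgradient descent on the hinge loss.
# Toy data; illustrative sketch, not the method of any surveyed paper.

def bag_of_words(docs):
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for d in docs:
        v = [0.0] * len(vocab)
        for w in d.split():
            v[index[w]] += 1.0
        vecs.append(v)
    return vecs, index

def train_svm(X, y, lam=0.01, epochs=200):
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            # Subgradient step on the L2-regularized hinge loss
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

docs = ["good great product", "excellent service good",
        "bad terrible product", "awful bad service"]
labels = [1, 1, -1, -1]            # +1 = positive, -1 = negative
X, index = bag_of_words(docs)
w = train_svm(X, labels)
preds = [predict(w, x) for x in X]
print(preds)
```

On this linearly separable toy corpus the learned weights end up positive on the positive sentiment words and negative on the negative ones, so all four training documents are classified correctly.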
Device analyzer: large-scale mobile data collection
We collected usage information from 12,500 Android devices in the wild over the course of nearly 2 years. Our dataset contains 53 billion data points from 894 models of devices running 687 versions of Android. Processing the collected data presents a number of challenges ranging from scalability to consistency and privacy considerations. We present our system architecture for collection and analysis of this highly-distributed dataset, discuss how our system can reliably collect time-series data in the presence of unreliable timing information, and discuss issues and lessons learned that we believe apply to many other big data collection projects.
FastTree 2 – Approximately Maximum-Likelihood Trees for Large Alignments
BACKGROUND We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability. METHODOLOGY/PRINCIPAL FINDINGS Where FastTree 1 used nearest-neighbor interchanges (NNIs) and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs) and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation). Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings). Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100-1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory. CONCLUSIONS/SIGNIFICANCE FastTree 2 allows the inference of maximum-likelihood phylogenies for huge alignments. FastTree 2 is freely available at http://www.microbesonline.org/fasttree.
ADAMTS-12: A Multifaceted Metalloproteinase in Arthritis and Inflammation
ADAMTS-12 is a member of a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) family of proteases, which were known to play important roles in various biological and pathological processes, such as development, angiogenesis, inflammation, cancer, arthritis, and atherosclerosis. In this review, we briefly summarize the structural organization of ADAMTS-12; concentrate on the emerging role of ADAMTS-12 in several pathophysiological conditions, including intervertebral disc degeneration, tumorigenesis and angioinhibitory effects, pediatric stroke, gonad differentiation, trophoblast invasion, and genetic linkage to schizophrenia and asthma, with special focus on its role in arthritis and inflammation; and end with the perspective research of ADAMTS-12 and its potential as a promising diagnostic and therapeutic target in various kinds of diseases and conditions.
What Is Agile Software Development?
"Never do anything that is a waste of time – and be prepared to wage long, tedious wars over this principle," said Michael O'Connor, project manager at Trimble Navigation in Christchurch, New Zealand. This product group at Trimble is typical of the homegrown approach to agile software development methodologies. While interest in agile methodologies has blossomed in the past two years, its roots go back more than a decade. Teams using early versions of Scrum, Dynamic Systems Development Methodology (DSDM), and adaptive software development (ASD) were delivering successful projects in the early-to-mid-1990s. This article attempts to answer the question, "What constitutes agile software development?" Because of the breadth of agile approaches and the people who practice them, this is not as easy a question to answer as one might expect. I will try to answer this question by first focusing on the sweet-spot problem domain for agile approaches. Then I will delve into the three dimensions that I refer to as agile ecosystems: barely sufficient methodology, collaborative values, and chaordic perspective. Finally, I will examine several of these agile ecosystems. All problems are different and require different strategies. While battlefield commanders plan extensively, they realize that plans are just a beginning; probing enemy defenses (creating change) and responding to enemy actions (responding to change) are more important. Battlefield commanders succeed by defeating the enemy (the mission), not conforming to a plan. I cannot imagine a battlefield commander saying, "We lost the battle, but by golly, we were successful because we followed our plan to the letter." Battlefields are messy, turbulent, uncertain, and full of change. No battlefield commander would say, "If we just plan this battle long and hard enough, and put repeatable processes in place, we can eliminate change early in the battle and not have to deal with it later on."
A growing number of software projects operate in the equivalent of a battle zone – they are extreme projects. This is where agile approaches shine. Project teams operating in this zone attempt to utilize leading or bleeding-edge technologies, respond to erratic requirements changes, and deliver products quickly. Projects may have a relatively clear mission, but the specific requirements can be volatile and evolving as customers and development teams alike explore the unknown. These projects, which I call high-exploration-factor projects, do not succumb to rigorous, plan-driven methods. …
Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice
Electronic networks of practice are computer-mediated discussion forums focused on problems of practice that enable individuals to exchange advice and ideas with others based on common interests. However, why individuals help strangers in these electronic networks is not well understood: there is no immediate benefit to the contributor, and free-riders are able to acquire the same knowledge as everyone else. To understand this paradox, we apply theories of collective action to examine how individual motivations and social capital influence knowledge contribution in electronic networks. This study reports on the activities of one electronic network supporting a professional legal association. Using archival, network, survey, and content analysis data, we empirically test a model of knowledge contribution. We find that people contribute their knowledge when they perceive that it enhances their professional reputations, when they have the experience to share, and when they are structurally embedded in the network. Surprisingly, contributions occur without regard to expectations of reciprocity from others or high levels of commitment to the network. (V. Sambamurthy and Mani Subramani were the accepting senior editors for this paper.)
Favored subjects and psychosocial needs in music therapy in terminally ill cancer patients: a content analysis.
BACKGROUND Research has shown positive effects of music therapy on the physical and mental well-being of terminally ill patients. This study aimed to identify favored subjects and psychosocial needs of terminally ill cancer patients during music therapy and associated factors. METHODS Forty-one patients receiving specialized inpatient palliative care prospectively received a music therapy intervention consisting of at least two sessions (total number of sessions: 166; average per patient: 4; range: 2-10). Applied music therapy methods and content were not pre-determined. Therapeutic subjects and psychosocial needs addressed in music therapy sessions were identified from prospective semi-structured "field notes" using qualitative content analysis. Patient- and treatment-related characteristics as well as factors related to music and music therapy were assessed by questionnaire or retrieved from medical records. RESULTS Seven main categories of subjects were identified: "condition, treatment, further care", "coping with palliative situation", "emotions and feelings", "music and music therapy", "biography", "social environment", and "death, dying, and spiritual topics". Patients addressed an average of 4.7 different subjects (range, 1-7). Some subjects were associated with gender (p = .022) and the prior impact of music in patients' lives (p = .012). The number of subjects per session was lower when receptive music therapy methods were used (p = .040). Psychosocial needs were categorized into nine main dimensions: "relaxing and finding comfort", "communication and dialogue", "coping and activation of internal resources", "activity and vitality", "finding expression", "sense of self and reflection", "finding emotional response", "defocusing and diversion", and "structure and hold". Patients expressed an average of 4.9 psychosocial needs (range, 1-8). 
Needs were associated with age, parallel art therapy (p = .010), the role of music in the patient's life (p = .021), and the applied music therapy method (p = .012). CONCLUSION Seven main categories of therapeutically relevant subjects and nine dimensions of psychosocial needs could be identified when music therapy was delivered to terminally ill cancer patients. Results showed that patients with complex psychosocial situations addressed an average of five subjects and five needs, respectively. Some socio-demographic factors, the role of music in patients' lives, and the applied music therapy methods may be related to the kind and number of expressed subjects and needs.
Adaptive channel allocation spectrum etiquette for cognitive radio networks
In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. Alternatively, a no-regret learning implementation is proposed for both scenarios; it is shown to have performance similar to the potential game when cooperation is enforced, but with higher variability across users. The no-regret learning formulation is particularly useful for accommodating selfish users. Non-cooperative learning games have the advantage of very low overhead for information exchange in the network. We show that a cooperation-based spectrum sharing etiquette improves overall network performance at the expense of the increased overhead required for information exchange.
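The cooperative formulation can be sketched with a toy potential game. Here each user's utility is taken to be the negative count of co-channel users (a stand-in for the paper's interference-based utility, not its exact definition); sequential best responses then converge to a Nash equilibrium because the total number of interfering pairs is an exact potential that strictly decreases with every unilateral improvement.

```python
# Best-response dynamics for a toy channel-allocation potential game.
# Utility of a user = -(number of other users on its channel); the
# potential is the total number of co-channel (interfering) pairs.
import random

def interference(alloc, user, channel):
    return sum(1 for u, c in enumerate(alloc) if u != user and c == channel)

def potential(alloc):
    # Total interfering pairs (each pair counted once).
    return sum(interference(alloc, u, alloc[u]) for u in range(len(alloc))) // 2

def best_response_dynamics(n_users=6, n_channels=3, seed=1):
    rng = random.Random(seed)
    alloc = [rng.randrange(n_channels) for _ in range(n_users)]
    changed = True
    while changed:               # terminates: the potential strictly decreases
        changed = False
        for u in range(n_users):
            best = min(range(n_channels),
                       key=lambda c: interference(alloc, u, c))
            if interference(alloc, u, best) < interference(alloc, u, alloc[u]):
                alloc[u] = best
                changed = True
    return alloc

alloc = best_response_dynamics()
print(alloc, potential(alloc))
```

With six users and three channels the dynamics settle on a balanced allocation (two users per channel), from which no user can unilaterally reduce its interference.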
Left atrial remodeling and response to valsartan in the prevention of recurrent atrial fibrillation: the GISSI-AF echocardiographic substudy.
BACKGROUND Left atrial (LA) dilation precedes or appears early after the onset of atrial fibrillation (AF) and factors in perpetuating the arrhythmia. Angiotensin receptor blockers were proposed for reversing LA remodeling. We evaluated the effect of valsartan on LA remodeling in patients with a recent episode of AF and the effect of LA size on AF recurrence (AFr). METHODS AND RESULTS LA and left ventricular (LV) echocardiographic variables were measured at baseline and 6 and 12 months in 340 patients from GISSI-AF, a trial testing valsartan prevention of AFr. Reversal of remodeling was considered as a decrease in LA size over 12 months. Changes in patients with and without recurrence and the relationship to duration of AFr were analyzed. Patients were 68.4±8.8 years old, with history of hypertension (85.3%) and cardioversion in the previous 2 weeks (87.4%) or ≥2 AFr in the previous 6 months (40.4%). Baseline LA maximal volume (LAVmax) was severely increased (>40 mL/m(2)); LV dimensions and function were relatively normal. Over 12 months, 54.4% of patients had AFr. LAVmax was unchanged by rhythm, time, or randomized treatment. Higher baseline LAVmax and lower LA emptying fraction were linearly related to increasing AFr duration during follow-up. CONCLUSIONS GISSI-AF patients in sinus rhythm and history of AF showed severely increased LAVmax with mostly normal LV volume, mass, and systolic and diastolic function. Valsartan for 1 year did not reverse LA remodeling or prevent AFr. Half of the patients without AFr had severe LA dilation; therefore, mechanisms other than structural remodeling triggered recurrence.
Evidence Combination From an Evolutionary Game Theory Perspective
Dempster-Shafer evidence theory is a primary methodology for multisource information fusion because it is good at dealing with uncertain information. This theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper takes a biological and evolutionary perspective on the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multievidence system. Within the proposed ECR, we develop a Jaccard matrix game to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been mathematically proved as well.
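The replicator-dynamics ingredient can be sketched in a few lines. The payoff matrix below is hand-made for illustration (it is not the paper's actual Jaccard matrix): it encodes pairwise support between three propositions, and the population share of the best-supported proposition grows until it dominates.

```python
# Discrete replicator dynamics over propositions: each proposition's
# share grows in proportion to its fitness (support received from the
# current population), normalized by the average fitness.

def replicator_step(x, A):
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(xi * fi for xi, fi in zip(x, fitness))
    return [xi * fi / avg for xi, fi in zip(x, fitness)]

# Hypothetical symmetric support matrix; proposition 0 is the most
# strongly supported overall.
A = [[1.0, 0.8, 0.6],
     [0.8, 0.9, 0.3],
     [0.6, 0.3, 0.7]]

x = [1 / 3, 1 / 3, 1 / 3]          # start from a uniform population
for _ in range(200):
    x = replicator_step(x, A)
print([round(xi, 3) for xi in x])
```

Starting from a uniform distribution, the dynamics converge to the pure state supporting proposition 0, the strict Nash equilibrium of this payoff matrix.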
Multiple Event Detection and Recognition for Large-Scale Power Systems Through Cluster-Based Sparse Coding
Accurate event analysis in real time is of paramount importance for high-fidelity situational awareness, such that proper actions can take place before any isolated faults escalate to cascading blackouts. Existing approaches are limited to detecting only single or double events or a specified event type. Although some previous works can distinguish multiple events well in small-scale systems, their performance tends to degrade dramatically in large-scale systems. In this paper, we focus on multiple event detection, recognition, and temporal localization in large-scale power systems. We discover that there always exist "regions" where the reaction of all buses to a certain event within each region demonstrates a high degree of similarity, and that the boundary of the "regions" generally remains the same regardless of the type of event(s). We further verify that, within each region, the reaction to multiple events can be approximated as a linear combination of reactions to each constituent event. Based on these findings, we propose a novel method, referred to as cluster-based sparse coding (CSC), to extract all the underlying single events involved in a multievent scenario. Multiple events of three typical disturbances (i.e., generator trip, line trip, and load shedding) can be detected and recognized. Specifically, the CSC algorithm can effectively distinguish line trip events from oscillation, which has been a very challenging task for event analysis. Experimental results based on a simulated large-scale system model (i.e., NPCC) show that the proposed CSC algorithm achieves high detection and recognition rates with low false alarms.
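The key premise, that a multi-event reaction is approximately a linear combination of single-event reactions, can be demonstrated on a toy example. The "signatures" below are invented for illustration (real CSC learns a dictionary from PMU measurements); recovering the combination coefficients by least squares then reveals which constituent events are present.

```python
# Decompose an observed multi-event reaction as a linear combination
# of single-event signatures, then threshold the coefficients to
# detect the constituent events. Toy numbers, not real PMU data.

def solve3(G, b):
    # Gaussian elimination for a 3x3 linear system (well-conditioned here).
    M = [row[:] + [bi] for row, bi in zip(G, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for k in range(3):
            if k != i:
                M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
    return [M[i][3] for i in range(3)]

def decompose(D, y):
    # Least-squares coefficients via the normal equations D D^T a = D y.
    G = [[sum(di * dj for di, dj in zip(D[i], D[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(di * yi for di, yi in zip(D[i], y)) for i in range(3)]
    return solve3(G, b)

events = ["generator trip", "line trip", "load shedding"]
D = [[1, 2, 0, 0, 1, 0],    # hypothetical single-event signatures
     [0, 1, 2, 1, 0, 0],
     [0, 0, 1, 2, 0, 1]]
y = [1, 2, 0.5, 1, 1, 0.5]  # observed multi-event reaction
coef = decompose(D, y)
detected = [e for e, a in zip(events, coef) if a > 0.1]
print([round(a, 2) for a in coef], detected)
```

Here the observation was constructed as 1.0 x signature 0 + 0.5 x signature 2, and the decomposition recovers exactly those two events.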
Towards a unified architecture for in-RDBMS analytics
The increasing use of statistical data analysis in enterprise applications has created an arms race among database vendors to offer ever more sophisticated in-database analytics. One challenge in this race is that each new statistical technique must be implemented from scratch in the RDBMS, which leads to a lengthy and complex development process. We argue that the root cause of this overhead is the lack of a unified architecture for in-database analytics. Our main contribution in this work is to take a step towards such a unified architecture. A key benefit of our unified architecture is that performance optimizations for analytics techniques can be studied generically instead of in an ad hoc, per-technique fashion. In particular, our technical contributions are theoretical and empirical studies of two key factors that we found impact performance: the order in which data is stored, and the parallelization of computations on a single-node multicore RDBMS. We demonstrate the feasibility of our architecture by integrating several popular analytics techniques into two commercial and one open-source RDBMS. Our architecture requires changes to only a few dozen lines of code to integrate a new statistical technique. We then compare our approach with the native analytics tools offered by the commercial RDBMSes on various analytics tasks, and validate that our approach achieves competitive or higher performance, while still achieving the same quality.
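One way such a unified architecture can work is to cast many analytics techniques as incremental gradient steps folded over the stored rows, so the RDBMS only needs one user-defined-aggregate template (initialize / transition / finalize). The Python stand-in below sketches that pattern for least-squares regression; the class and method names are our own, not the paper's API.

```python
# A user-defined-aggregate-style template: the "transition" function is
# invoked once per row, taking one incremental gradient step, and the
# "final" function returns the learned model. Hypothetical sketch.

class IncrementalGradientAggregate:
    def __init__(self, dim, lr=0.05):
        self.w = [0.0] * dim     # model state carried across rows
        self.lr = lr

    def transition(self, x, y):
        # One gradient step on (x, y) for the squared loss.
        pred = sum(wi * xi for wi, xi in zip(self.w, x))
        g = pred - y
        self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]

    def final(self):
        return self.w

# "Table" of rows following y = 2 + 3*x, with a bias feature prepended.
rows = [([1.0, x], 2.0 + 3.0 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
agg = IncrementalGradientAggregate(dim=2)
for _ in range(300):             # multiple passes over the stored data
    for x, y in rows:
        agg.transition(x, y)
w = agg.final()
print([round(wi, 2) for wi in w])
```

Because the per-row work is just a transition function over an accumulator, the same template slots into an RDBMS aggregate interface, and questions like data ordering and parallel merging can be studied once for the whole family of techniques.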
Impulse buying: The role of affect, social influence, and subjective wellbeing
Purpose – The purpose of this research is to examine predictors of impulse buying. Although moderate levels of impulse buying can be pleasant and gratifying, recent theoretical work suggests that chronic, high frequency impulse buying has a compulsive element and can function as a form of escape from negative affective states, depression, and low self-esteem. Design/methodology/approach – The present research empirically tests a theoretical model of impulse buying by examining the associations between chronic impulse buying tendencies and subjective wellbeing, affect, susceptibility to interpersonal influence, and self-esteem. Findings – Results indicate that the cognitive facet of impulse buying, associated with a lack of planning in relation to purchase decisions, is negatively associated with subjective wellbeing. The affective facet of impulse buying, associated with feelings of excitement and an overpowering urge to buy, is linked to negative affect and susceptibility to interpersonal influence. Practical implications – Given the link to negative emotions and potentially harmful consequences, impulse buying may be viewed as problematic consumer behavior. Reductions in problematic impulse buying could be addressed through public policy or social marketing. Originality/value – This study validates and extends the Verplanken et al. model by examining the relationship between impulse buying and other psychological constructs (i.e. subjective wellbeing, positive and negative affect, social influence, and self-esteem).
Public-private partnerships and contract negotiations: an empirical study
Periprocedural intracardiac echocardiography for left atrial appendage closure: a dual-center experience.
OBJECTIVES This dual-center study sought to demonstrate the utility and safety of intracardiac echocardiography (ICE) in providing adequate imaging guidance as an alternative to transesophageal echocardiography (TEE) during Amplatzer Cardiac Plug device implantation. BACKGROUND Over 90% of intracardiac thrombi in atrial fibrillation originate from the left atrial appendage (LAA). Patients with contraindications to anticoagulation are potential candidates for LAA percutaneous occlusion. TEE is typically used to guide implantation. METHODS ICE-guided percutaneous LAA closure was performed in 121 patients to evaluate the following tasks typically achieved by TEE: assessment of the LAA dimension for device sizing; guidance of transseptal puncture; verification of the delivery sheath position; confirmation of location and stability of the device before and after release and continuous monitoring to detect procedural complications. In 51 consecutive patients, we compared the measurements obtained by ICE and fluoroscopy to choose the size of the device. RESULTS The device was successfully implanted in 117 patients, yielding a technical success rate of 96.7%. Procedural success was achieved in 113 cases (93.4%). Four major adverse events (3 cardiac tamponades and 1 in-hospital transient ischemic attack) occurred. There was significant correlation in the measurements for device sizing assessed by angiography and ICE (r = 0.94, p < 0.0001). CONCLUSIONS ICE imaging was able to perform the tasks typically provided by TEE during implantation of the Amplatzer Cardiac Plug device for LAA occlusion. Therefore, we provide evidence that the use of ICE offered accurate measurements of LAA dimension in order to select the correct device sizes.
Microplastics in the marine environment.
This review discusses the mechanisms of generation and potential impacts of microplastics in the ocean environment. Weathering degradation of plastics on the beaches results in their surface embrittlement and microcracking, yielding microparticles that are carried into water by wind or wave action. Unlike inorganic fines present in sea water, microplastics concentrate persistent organic pollutants (POPs) by partition. The relevant distribution coefficients for common POPs are several orders of magnitude in favour of the plastic medium. Consequently, the microparticles laden with high levels of POPs can be ingested by marine biota. Bioavailability and the efficiency of transfer of the ingested POPs across trophic levels are not known and the potential damage posed by these to the marine ecosystem has yet to be quantified and modelled. Given the increasing levels of plastic pollution of the oceans it is important to better understand the impact of microplastics in the ocean food web.
CONCEPTUALISATION AND OPERATIONALISATION OF RESISTANCE TO CHANGE
The present paper is meant to provide a systematic conceptualisation and operationalisation of the notion of resistance to change (RTC) that might serve as basis for an empirical investigation and analysis. Specifically, the main aim is to conceptualise the notion of RTC, linking it to a broader conceptual theoretical framework of reference. The objective is then to develop a working definition of RTC for use in the research. Once RTC is defined, the concept is operationalised through the identification of a set of ‘items’ that are supposed to measure the individual level of resistance to change. Building on the significant RTC literature devoted to the analysis of the manifestations of RTC, those ‘items’ exemplify a scheme of behaviourally distinct patterns through which individuals usually express their resistance to change. A long period of observation and two pilot studies in the investigated organisation (ENEL Ente Nazionale per l’Energia Elettrica) had the purpose of contextualising the construction of the specific questionnaire ‘items’, used to operationalise and measure the phenomenon amongst the sample of middle managers covered in the research. These issues are considered in the second half of the present paper along with a detailed analysis of the psychometric properties of the proposed scales of RTC and a discussion regarding the capability of the various scales to measure the level of resistance to change of individuals.
Health-related quality of life and self-rated health in patients with type 2 diabetes: Effects of group-based rehabilitation versus individual counselling
BACKGROUND Type 2 diabetes can seriously affect patients' health-related quality of life and their self-rated health. Most often, evaluation of diabetes interventions assesses effects on glycemic control with little consideration of quality of life. The aim of the current study was to examine the effectiveness of group-based rehabilitation versus individual counselling on health-related quality of life (HRQOL) and self-rated health in type 2 diabetes patients. METHODS We randomised 143 type 2 diabetes patients to either a six-month multidisciplinary group-based rehabilitation programme including patient education, supervised exercise and a cooking course, or a six-month individual counselling programme. HRQOL was measured by the Medical Outcomes Study Short Form 36-item Health Survey (SF-36) and self-rated health was measured by the Diabetes Symptom Checklist - Revised (DSC-R). RESULTS In both groups, the lowest estimated mean scores of the SF-36 questionnaire at baseline were "vitality" and "general health". There were no significant differences in the change of any item between the two groups after the six-month intervention period. However, the vitality score increased 5.2 points (p = 0.12) within the rehabilitation group and 5.6 points (p = 0.03) among individual counselling participants. In both groups, the highest estimated mean scores of the DSC-R questionnaire at baseline were "Fatigue" and "Hyperglycaemia". Hyperglycaemic and hypoglycaemic distress decreased significantly more after individual counselling than after group-based rehabilitation (difference -0.3 points, p = 0.04). No between-group differences occurred for any other items. However, fatigue distress decreased 0.40 points within the rehabilitation group (p = 0.01) and 0.34 points within the individual counselling group (p < 0.01). In the rehabilitation group, cardiovascular distress decreased 0.25 points (p = 0.01). 
CONCLUSIONS A group-based rehabilitation programme did not improve health-related quality of life and self-rated health more than an individual counselling programme. In fact, the individual counselling group experienced significant relief in hyper- and hypoglycaemic distress compared with the rehabilitation group. However, the positive findings on several items in both groups indicate that lifestyle intervention is an important part of the management of type 2 diabetes patients.
Composition and evolution of the oceanic crust
Abstract Laboratory measurements of compressional wave velocities in rocks are consistent with an oceanic crust composed of an assemblage of hornblende and plagioclase (layer 3) overlain by tholeiitic basalt (layer 2). Partial melting of mantle peridotite under mid-oceanic ridges marks the initial stages in the development of the oceanic crust. Tholeiitic magma, which escapes to the ocean floor, forms layer 2. Layer 3, composed of amphibolite and hornblende gabbro, originates beneath the ocean floor by crystallization of tholeiitic magma under hydrous conditions and subsequent metamorphism in the vicinity of ridge crests. Once formed, the oceanic crust is transported laterally by horizontally spreading upper mantle. Disposal of the oceanic crust is accomplished by downward movement in the vicinity of descending limbs of convection cells. Partial melting of the oceanic crust in regions of downward convection results in the formation of the calc-alkaline suite of rocks and eclogite.
Amprenavir and lopinavir pharmacokinetics following coadministration of amprenavir or fosamprenavir with lopinavir/ritonavir, with or without efavirenz.
BACKGROUND Amprenavir (APV), fosamprenavir (FPV), lopinavir (LPV), ritonavir (RTV) and efavirenz (EFV) are to varying degrees substrates, inducers and inhibitors of CYP3A4. Coadministration of these drugs might result in complex pharmacokinetic drug-drug interactions. METHODS Two prospective, open-label, non-randomized studies evaluated APV and LPV steady-state pharmacokinetics in HIV-infected patients on APV 750 mg twice daily + LPV/RTV 533/133 mg twice daily with EFV (n=7) or without EFV (n=12) + background nucleosides (Study 1) and after switching FPV 1,400 mg twice daily for APV (n=10) (Study 2). RESULTS In Study 1, the EFV and non-EFV groups did not differ in APV minimum plasma concentration (Cmin; 1.10 versus 1.06 microg/ml, P = 0.89), area under the concentration-time curve (AUC; 17.46 versus 24.34 microg x h/ml, P = 0.22) or maximum concentration (Cmax; 2.61 versus 4.33 microg/ml, P = 0.08); for LPV there was no difference in Cmin (median: 3.66 versus 6.18 microg/ml, P = 0.20), AUC (81.84 versus 93.75 microg x h/ml, P = 0.37) or Cmax (10.36 versus 10.93 microg/ml, P = 0.61). In Study 2, after switching from APV to FPV, APV Cmin increased by 58% (0.83 versus 1.30 microg/ml, P = 0.0001), AUC by 76% (19.41 versus 34.24 microg x h/ml, P = 0.0001), and Cmax by 75% (3.50 versus 6.14 microg/ml, P = 0.001). Compared with historical controls, LPV and RTV pharmacokinetics were not changed. All treatment regimens were well tolerated. Seven of eight completers (88%) maintained HIV-1 RNA <400 copies/ml 12 weeks after the switch (one was lost to follow-up). CONCLUSIONS EFV did not appear to significantly alter APV and LPV pharmacokinetic parameters in HIV-infected patients taking APV 750 mg twice daily + LPV/RTV 533/133 mg twice daily. Switching FPV 1,400 mg twice daily for APV 750 mg twice daily resulted in an increase in APV Cmin, AUC and Cmax without changing LPV or RTV pharmacokinetics or overall tolerability.
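The steady-state parameters the study compares (Cmin, Cmax, AUC) are standard noncompartmental quantities. Over one dosing interval they can be computed from concentration-time samples with the linear trapezoidal rule, as in the sketch below; the sample times and concentrations are invented for illustration, not the study's data.

```python
# Noncompartmental PK parameters over one dosing interval:
# Cmin and Cmax are the lowest/highest observed concentrations,
# AUC is the linear trapezoidal area under the curve.

def pk_parameters(times, conc):
    cmax = max(conc)
    cmin = min(conc)
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    return cmin, cmax, auc

times = [0, 1, 2, 4, 8, 12]            # hours post-dose (hypothetical)
conc = [1.1, 3.5, 4.2, 3.0, 1.8, 1.2]  # microg/ml (hypothetical)
cmin, cmax, auc = pk_parameters(times, conc)
print(cmin, cmax, round(auc, 2))       # AUC in microg x h/ml
```

For these made-up samples Cmin = 1.1 microg/ml, Cmax = 4.2 microg/ml, and the 0-12 h AUC is 28.95 microg x h/ml.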
Magnetic field generation in relativistic shocks
We present an analytical estimate for the magnetic field strength generated by the Weibel instability in ultra‐relativistic shocks in a hydrogen plasma. We find that the Weibel instability is, by itself, not capable of converting the kinetic energy of protons penetrating the shock front into magnetic field energy. Other (nonlinear) processes must determine the magnetic field strength in the wake of the shock.
Modulation of Immune Response by Organophosphate Pesticides: Mammals as Potential Model
Organophosphates (OPs) are among the most widely used pesticides. They primarily induce toxicity by inhibiting acetylcholinesterase (AChE) in the nerve terminals of the central and peripheral nervous systems, leading to a variety of short-term and chronic effects in non-target animals. In addition to acetylcholinesterase, OPs are known to be potent inhibitors of serine hydrolases, which are vital components of the immune system, and therefore influence immune functions. OPs induce several immunomodulatory effects in vertebrates by altering neutrophil function, macrophage production, antibody production, immunosuppression, interleukin production, and T cell proliferation. Immunotoxicity due to OP exposure is mediated through perturbation of the cholinergic response of lymphocytes, altered signal transduction, disruption of the granule exocytosis pathway, and impairment of the FasL/Fas pathway of natural killer cells and other immune-related cells. Apoptosis of lymphocytes or immune-related cells is promoted through mitochondrial pore formation and DNA fragmentation. In this review, an attempt has been made to document the immunomodulatory effects of organophosphate pesticides using mammals as a potential model, with additional information on the probable mechanisms of immunotoxicity induced by OPs.
Preventing Distributed Denial-of-Service Flooding Attacks With Dynamic Path Identifiers
In recent years, there has been increasing interest in using path identifiers (PIDs) as inter-domain routing objects. However, the PIDs used in existing approaches are static, which makes it easy for attackers to launch distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses PIDs negotiated between neighboring domains as inter-domain routing objects. In D-PID, the PID of an inter-domain path connecting two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate PIDs and how ongoing communications are maintained when PIDs change. We build a 42-node prototype comprising six domains to verify D-PID's feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.
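The abstract does not specify the negotiation protocol, but the core idea of a secret, dynamically changing path identifier can be sketched as follows. This is an illustrative sketch only, not the paper's wire format: two neighboring domains holding a negotiated secret derive a fresh PID per time epoch with a keyed hash, so an attacker who learns one PID cannot predict the next.

```python
import hashlib
import hmac

def derive_pid(shared_secret: bytes, epoch: int) -> str:
    # Keyed hash of the epoch number; only the two negotiating
    # domains holding the secret can compute the current PID.
    msg = epoch.to_bytes(8, "big")
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()[:16]

# Placeholder secret standing in for the outcome of the negotiation step.
secret = b"negotiated-between-AS1-and-AS2"
pid_now = derive_pid(secret, 1000)
pid_next = derive_pid(secret, 1001)
```

Both ends derive identical PIDs from the shared secret, and each epoch rollover invalidates the old identifier, which is what frustrates flooding attacks aimed at a fixed path.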
Impact of Employees Motivation on Organizational Effectiveness
The purpose of this paper is to identify the factors that affect employee motivation and to examine the relationship between organizational effectiveness and employee motivation. A model was designed based on the literature, linking factors of employee motivation with employee motivation and organizational effectiveness. Three hypotheses were built from the literature and the model, and were tested in the light of previous studies. The literature and various studies conclude that two factors, empowerment and recognition, have a positive effect on employee motivation: the more employees are empowered and recognized in an organization, the more their motivation to work is enhanced. There also exists a positive relationship between employee motivation and organizational effectiveness: the more employees are motivated to accomplish their tasks, the higher the organizational performance and success. The study focuses on the practice and observance of these two central factors, empowerment and employee recognition, for enhancing employee motivation, which leads to organizational effectiveness. Organizations should design their rules, policies and structures to give employees space to work well and to appreciate them for their task fulfillment and achievements. This will surely lead to organizational growth.
Comparison between on-label versus off-label use of drug-eluting coronary stents in clinical practice: results from the German DES.DE-Registry
Observational studies from the USA have demonstrated that off-label use of drug-eluting stents (DES) is common. Data on off-label use in Western Europe are limited. We analyzed the data of consecutive patients receiving DES prospectively enrolled in the multicenter German DES.DE registry (Deutsches Drug-Eluting Stent Register) between October 2005 and October 2006. Off-label use was defined in the presence of one of the following criteria: ST-elevation myocardial infarction, in-stent stenosis, chronic total occlusion, lesions in a bypass graft, in bifurcation or left main stem, stent length per lesion ≥32 mm, and vessel diameter <2.5 or >3.5 mm. Overall, 4,295 patients were included in this analysis and divided into two groups: 2,366 (55.1%) received DES for off-label and 1,929 (44.9%) for on-label indications. There were substantial differences in the rates of off-label use at the participating hospitals. Patients with off-label DES more often presented with high-risk features such as acute coronary syndrome, cardiogenic shock, congestive heart failure, and more complex coronary anatomy. Among hospital survivors, the incidence of the composite endpoint of death, myocardial infarction and stroke (MACCE) (9.2 vs. 7.4%, p < 0.05), and target vessel revascularization (TVR) (11.3 vs. 9.1%, p < 0.05) was increased in the off-label group at the 1-year follow-up. However, in the multivariate analysis off-label use was not linked with an elevated risk for MACCE (hazard ratio 0.86, 95% confidence interval 0.62–1.18) and TVR (hazard ratio 1.05, 95% confidence interval 0.78–1.42). In clinical practice, DES was very frequently used off-label. After adjustment for confounding variables, off-label use was not associated with an increase of adverse events.
Pattern Recognition Through Perceptually Important Points In Financial Time Series
Technical Analysis is a financial risk management practice that has been in use since the advent of the stock market, and Pattern Recognition is an indivisible part of it. There has been a lot of research into pattern recognition in time series, yet existing techniques lack dynamic extensibility: they do not provide any interface for including new patterns dynamically, which limits their operability to a particular domain. This research devises a new technique for domain-independent pattern recognition with sufficient speed and accuracy, enabling its use by critical Decision Support Systems for time series from different domains. The system emulates the human visual cognition process by implementing Perceptually Important Points Identification (PIPI). Perceptually Important Points (PIPs) represent the minimal set of data points necessary to form a pattern. For dynamic inclusion of patterns, a Pattern Definition Language (PDL) has been conceptualized for defining patterns in time series using a declarative programming paradigm. This also yields domain-independent pattern recognition without needing any modification.
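The standard PIP identification procedure (a sketch of the general technique, not necessarily the authors' exact variant) starts from the two endpoints of the series and repeatedly promotes the point farthest from the line joining its neighboring PIPs, until the desired number of points is reached:

```python
def pip_distance(x, y, x1, y1, x2, y2):
    # Vertical distance from point (x, y) to the line through (x1, y1)-(x2, y2).
    if x2 == x1:
        return abs(y - y1)
    slope = (y2 - y1) / (x2 - x1)
    return abs(y - (y1 + slope * (x - x1)))

def find_pips(series, k):
    """Return the indices of k perceptually important points.

    The first and last points are always kept; each iteration adds the
    point with the greatest distance to the segment between its
    surrounding PIPs.
    """
    pips = [0, len(series) - 1]
    while len(pips) < k:
        best_idx, best_dist = None, -1.0
        for left, right in zip(pips, pips[1:]):
            for i in range(left + 1, right):
                d = pip_distance(i, series[i],
                                 left, series[left],
                                 right, series[right])
                if d > best_dist:
                    best_idx, best_dist = i, d
        if best_idx is None:  # no interior points left to promote
            break
        pips.append(best_idx)
        pips.sort()
    return pips

prices = [1, 2, 5, 3, 4, 8, 6, 7, 2, 1]
print(find_pips(prices, 4))  # → [0, 4, 5, 9]
```

The peak at index 5 is promoted first, then the point that deviates most from the new segments, which matches the intuition that PIPs capture the visually salient turning points a chartist would mark.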
Data mining techniques for medical data: A review
Data mining is an important area of research and is pragmatically used in different domains such as finance, clinical research, education and healthcare. The scope of data mining has been thoroughly reviewed and surveyed by many researchers in the domain of healthcare, which is an active interdisciplinary area of research. In fact, the task of extracting knowledge from medical data is a challenging and complex endeavor. The main motive of this review paper is to survey data mining in the purview of healthcare. Moreover, the intertwining and interrelation of previous research are presented in a novel manner. Furthermore, the merits and demerits of frequently used data mining techniques in the domain of healthcare and medical data are compared, and the use of different data mining tasks in healthcare is discussed. An analytical view of the uniqueness of medical data in healthcare is also presented.
Multiband antenna for Bluetooth/ZigBee/Wi-Fi/WiMAX/WLAN/X-band applications: Partial ground with periodic structures and EBG
A compact microstrip antenna with multiband resonance features is described in this paper. A circular antenna with a partial ground is used as the primary antenna, which resonates over the WiMAX band (3.3–3.7 GHz). The primary antenna is then embedded with a mushroom-shaped EBG (Electromagnetic Band Gap) structure and a periodic DGS (Defected Ground Structure) to operate over the WLAN band (5.1–5.8 GHz) and the X-band (8–12 GHz). The proposed antenna covers the 2.4/3.5/5.5 GHz bands and the complete X-band, which also makes it suitable for ZigBee/Wi-Fi/Bluetooth applications. The antenna is etched on an FR-4 substrate with an overall dimension of 20×26×1.6 mm3. It has wide-ranging bandwidth with VSWR < 2, and the measured and simulated results are in favorable agreement.
The out-of-body experience: disturbed self-processing at the temporo-parietal junction.
Folk psychology postulates a spatial unity of self and body, a "real me" that resides in one's body and is the subject of experience. The spatial unity of self and body has been challenged by various philosophical considerations but also by several phenomena, perhaps most notoriously the "out-of-body experience" (OBE) during which one's visuo-spatial perspective and one's self are experienced to have departed from their habitual position within one's body. Here the authors marshal evidence from neurology, cognitive neuroscience, and neuroimaging that suggests that OBEs are related to a failure to integrate multisensory information from one's own body at the temporo-parietal junction (TPJ). It is argued that this multisensory disintegration at the TPJ leads to the disruption of several phenomenological and cognitive aspects of self-processing, causing illusory reduplication, illusory self-location, illusory perspective, and illusory agency that are experienced as an OBE.
Examining the influence of gender, education, social class and birth cohort on MMSE tracking over time: a population-based prospective cohort study
BACKGROUND Whilst many studies have analysed predictors of longitudinal cognitive decline, few have described their impact on population distributions of cognition by age cohort. The aim of this paper was to examine whether gender, education, social class and birth cohort affect how mean population cognition changes with age. METHODS The Medical Research Council Cognitive Function and Ageing Study (MRC CFAS) is a multi-centre population based longitudinal study of 13,004 individuals in England and Wales. Using ten years of follow-up data, mean Mini-mental State Examination (MMSE) scores were modelled by age and birth cohort adjusting for non-random drop-out. The model included terms to estimate cohort effects. Results are presented for five year age bands between 65-95 years. RESULTS At a population level, women show greater change in MMSE scores with age than men. Populations with lower education level and manual work also show similar effects. More recent birth cohorts have slightly higher scores. CONCLUSION Longitudinal data can allow examination of population patterns by gender, educational level, social class and cohort. Each of these major socio-demographic factors shows some effect on whole population change in MMSE with age.
Self-adaptive systems: A survey of current approaches, research challenges and applications
Self-adaptive software is capable of evaluating and changing its own behavior, whenever the evaluation shows that the software is not accomplishing what it was intended to do, or when better functionality or performance may be possible. The topic of system adaptivity has been widely studied since the mid-60s and, over the past decade, several application areas and technologies relating to self-adaptivity have assumed greater importance. In all these initiatives, software has become the common element that introduces self-adaptability. Thus, the investigation of systematic software engineering approaches is necessary, in order to develop self-adaptive systems that may ideally be applied across multiple domains. The main goal of this study is to review recent progress on self-adaptivity from the standpoint of computer sciences and cybernetics, based on the analysis of state-of-the-art approaches reported in the literature. This review provides an over-arching, integrated view of computer science and software engineering foundations. Moreover, various methods and techniques currently applied in the design of self-adaptive systems are analyzed, as well as some European research initiatives and projects. Finally, the main bottlenecks for the effective application of self-adaptive technology, as well as a set of key research issues on this topic, are precisely identified, in order to overcome current constraints on the effective application of self-adaptivity in its emerging areas of application.
Systems modeling and simulation applications for critical care medicine
Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.
On the Reactivity of Diamond-Like Semiconductor Surfaces
The adsorption and decomposition of molecular oxygen and water on the (100) surface of diamond-like crystals (Si and Ge) have been studied by the semiempirical quantum chemistry MNDO method in the framework of a cluster approach. The results show that both O2 and H2O adsorb both molecularly and dissociatively on the two kinds of crystal surfaces. The dissociative process is always favoured from a thermodynamic point of view, but involves an activation energy, in agreement with the experimental evidence.
Large Scale Image Deduplication
With the rise of the internet and personal digital cameras, it has become easy for researchers to obtain image data in mass quantity. With such large amounts of image data, it is impossible for humans to examine each image and ensure the quality of the dataset, so it is crucial to develop algorithms that can process large amounts of data. This paper focuses on a particular problem related to image datasets: image deduplication. We propose an efficient and scalable method to find near-duplicate images in an image collection. Our method includes three steps. First, extract compact features from each image. Second, use a fast clustering algorithm to reduce the number of candidate image matches; the clustering algorithm should be CPU/memory efficient and scale easily across multiple machines. We use the method proposed by [4], which uses map-reduce to implement approximate nearest neighbor search. Finally, apply a more accurate method to the clustered images to find duplicates within each group, using the method proposed by [9].
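The three-step pipeline can be illustrated in miniature. This sketch substitutes toy stand-ins for the cited methods: a simple average hash plays the role of the compact feature, bucketing by a hash prefix stands in for the map-reduce approximate-nearest-neighbor clustering of [4], and an exact Hamming-distance comparison within each bucket stands in for the accurate matching of [9].

```python
from collections import defaultdict

def average_hash(pixels):
    # Toy perceptual hash: threshold each grayscale value at the image mean.
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v >= mean else "0" for v in flat)

def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def find_near_duplicates(images, prefix=2, threshold=1):
    """Step 1: hash each image. Step 2: bucket by hash prefix (a crude
    stand-in for distributed approximate nearest neighbor). Step 3:
    compare hashes exactly within each bucket."""
    buckets = defaultdict(list)
    for name, px in images.items():
        h = average_hash(px)
        buckets[h[:prefix]].append((name, h))
    pairs = []
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                if hamming(group[i][1], group[j][1]) <= threshold:
                    pairs.append((group[i][0], group[j][0]))
    return pairs

images = {
    "a": [[10, 200], [200, 10]],   # original
    "b": [[12, 198], [201, 9]],    # near-duplicate of "a"
    "c": [[200, 10], [10, 200]],   # different image
}
print(find_near_duplicates(images))  # → [('a', 'b')]
```

Because "c" lands in a different bucket, it is never compared against "a" or "b"; at scale this bucketing is what keeps the quadratic comparison confined to small groups.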
Analyzing Payment Based Question and Answering Service
Community based question answering (CQA) services receive a large volume of questions today, and it is increasingly challenging to motivate domain experts to give timely answers. Recently, payment-based CQA services have explored new incentive models to engage real-world experts and celebrities by allowing them to set a price on their answers. In this paper, we perform a data-driven analysis of Fenda, a payment-based CQA service that has gained initial success with this incentive model. Using a large dataset of 220K paid questions (worth 1 million USD) over two months, we examine how monetary incentives affect different players in the system and their engagement over time. Our study reveals several key findings: while the monetary incentive enables quick answers from experts, it also drives certain users to aggressively game the system for profit. In addition, this incentive model turns the CQA service into a supplier-driven marketplace where users need to proactively adjust their prices as needed. We find that famous people are unwilling to lower their prices, which in turn hurts their income and engagement level over time. Based on our results, we discuss the implications for future payment-based CQA design.
Achieving Flexible and Self-Contained Data Protection in Cloud Computing
For enterprise systems running on public clouds in which the servers are outside the control domain of the enterprise, access control that was traditionally executed by reference monitors deployed on the system servers can no longer be trusted. Hence, a self-contained security scheme is regarded as an effective way for protecting outsourced data. However, building such a scheme that can implement the access control policy of the enterprise has become an important challenge. In this paper, we propose a self-contained data protection mechanism called RBAC-CPABE by integrating role-based access control (RBAC), which is widely employed in enterprise systems, with the ciphertext-policy attribute-based encryption (CP-ABE). First, we present a data-centric RBAC (DC-RBAC) model that supports the specification of fine-grained access policy for each data object to enhance RBAC’s access control capabilities. Then, we fuse DC-RBAC and CP-ABE by expressing DC-RBAC policies with the CP-ABE access tree and encrypt data using CP-ABE. Because CP-ABE enforces both access control and decryption, access authorization can be achieved by the data itself. A security analysis and experimental results indicate that RBAC-CPABE maintains the security and efficiency properties of the CP-ABE scheme on which it is based, but substantially improves the access control capability. Finally, we present an implemented framework for RBAC-CPABE to protect privacy and enforce access control for data stored in the cloud.
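The CP-ABE access tree that RBAC-CPABE builds its policies on can be sketched as a small evaluator. This is an illustrative sketch of access-tree *satisfaction* only (the cryptographic encryption and key machinery of CP-ABE is omitted), and the attribute names are hypothetical:

```python
def satisfies(node, attrs):
    """Evaluate a CP-ABE-style access tree against a set of attributes.

    Leaves are attribute strings; internal nodes are (threshold, children)
    pairs. A (len(children), children) gate acts as AND; (1, children)
    acts as OR, as in standard CP-ABE access trees.
    """
    if isinstance(node, str):
        return node in attrs
    k, children = node
    return sum(satisfies(c, attrs) for c in children) >= k

# Hypothetical policy in the spirit of DC-RBAC:
# role manager AND (finance department OR audit department).
policy = (2, ["role:manager", (1, ["dept:finance", "dept:audit"])])

print(satisfies(policy, {"role:manager", "dept:audit"}))  # → True
print(satisfies(policy, {"dept:finance"}))                # → False
```

In the full scheme, a user's decryption key embeds their attributes, so only users whose attributes satisfy the tree can decrypt; the data thus enforces its own access policy without a trusted reference monitor.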
A Study on the Development Strategy of Singapore's Logistics Industry
This paper focuses on the logistics industry in Singapore. The main object of this research is to analyze the competitiveness of Singapore's logistics industry in comparison with Korea's, and to suggest a strategic plan for the development of the logistics industry in Korea. The competitiveness of Singapore's logistics industry is at the top of the world, but under the rapidly changing environment of international logistics, it has recently faced challenges from other countries of East Asia. The Singapore government is attempting various logistics policies to break through these challenges. The suggestions proposed by this paper are expected to offer useful policy guidance for developing the logistics industry in Korea.
A Language Support for Cloud Elasticity Management
Elasticity is the intrinsic element that differentiates Cloud computing from traditional computing paradigm, since it allows service providers to rapidly adjust their needs for resources to absorb the demand and hence guarantee a minimum level of Quality of Service (QoS) that respects the Service Level Agreements (SLAs) previously defined with their clients. However, due to non-negligible resource initiation time, network fluctuations or unpredictable workload, it becomes hard to guarantee QoS levels and SLA violations may occur. This paper proposes a language support for Cloud elasticity management that relies on CSLA (Cloud Service Level Agreement). CSLA offers new features such as QoS/functionality degradation and an advanced penalty model that allow providers to finely express contracts so that services self-adaptation capabilities are improved and SLA violations minimized. The approach was evaluated with a real infrastructure and application test bed. Experimental results show that the use of CSLA makes Cloud services capable of absorbing more peaks and oscillations by trading-off the QoS levels and costs due to penalties.
Are effective teachers like good parents? Teaching styles and student adjustment in early adolescence.
This study examined the utility of parent socialization models for understanding teachers' influence on student adjustment in middle school. Teachers were assessed with respect to their modeling of motivation and to Baumrind's parenting dimensions of control, maturity demands, democratic communication, and nurturance. Student adjustment was defined in terms of their social and academic goals and interest in class, classroom behavior, and academic performance. Based on information from 452 sixth graders from two suburban middle schools, results of multiple regressions indicated that the five teaching dimensions explained significant amounts of variance in student motivation, social behavior, and achievement. High expectations (maturity demands) was a consistent positive predictor of students' goals and interests, and negative feedback (lack of nurturance) was the most consistent negative predictor of academic performance and social behavior. The role of motivation in mediating relations between teaching dimensions and social behavior and academic achievement also was examined; evidence for mediation was not found. Relations of teaching dimensions to student outcomes were the same for African American and European American students, and for boys and girls. The implications of parent socialization models for understanding effective teaching are discussed.
Study and prototyping of practically large-scale mmWave antenna systems for 5G cellular devices
This article discusses the challenges, benefits and approaches associated with realizing large-scale antenna arrays at mmWave frequency bands for future 5G cellular devices. Key design considerations are investigated to deduce a novel and practical phased array antenna solution operating at 28 GHz with near spherical coverage. The approach is further evolved into a first-of-a-kind cellular phone prototype equipped with mmWave 5G antenna arrays consisting of a total of 32 low-profile antenna elements. Indoor measurements are carried out using the presented prototype to characterize the proposed mmWave antenna system using 16-QAM modulated signals with a 27.925 GHz carrier frequency. The biological implications due to the absorbed electromagnetic waves when using mmWave cellular devices are studied and compared in detail with those of 3/4G cellular devices.
Folate Augmentation of Treatment – Evaluation for Depression (FolATED): protocol of a randomised controlled trial
BACKGROUND Clinical depression is common, debilitating and treatable; one in four people experience it during their lives. The majority of sufferers are treated in primary care and only half respond well to active treatment. Evidence suggests that folate may be a useful adjunct to antidepressant treatment: 1) patients with depression often have a functional folate deficiency; 2) the severity of such deficiency, indicated by elevated homocysteine, correlates with depression severity; 3) low folate is associated with poor antidepressant response; and 4) folate is required for the synthesis of neurotransmitters implicated in the pathogenesis and treatment of depression. METHODS/DESIGN The primary objective of this trial is to estimate the effect of folate augmentation in new or continuing treatment of depressive disorder in primary and secondary care. Secondary objectives are to evaluate the cost-effectiveness of folate augmentation of antidepressant treatment, investigate how the response to antidepressant treatment depends on genetic polymorphisms relevant to folate metabolism and antidepressant response, and explore whether baseline folate status can predict response to antidepressant treatment. Seven hundred and thirty patients will be recruited from North East Wales, North West Wales and Swansea. Patients with moderate to severe depression will be referred to the trial by their GP or Psychiatrist. If patients consent they will be assessed for eligibility and baseline measures will be undertaken. Blood samples will be taken to exclude patients with folate and B12 deficiency. Some of the blood taken will be used to measure homocysteine levels and for genetic analysis (with additional consent). Eligible participants will be randomised to receive 5 mg of folic acid or placebo. Patients with B12 deficiency or folate deficiency will be given appropriate treatment and will be monitored in the 'comprehensive cohort study'.
Assessments will be at screening, randomisation and 3 subsequent follow-ups. DISCUSSION If folic acid is shown to improve the efficacy of antidepressants, then it will provide a safe, simple and cheap way of improving the treatment of depression in primary and secondary care. TRIAL REGISTRATION Current controlled trials ISRCTN37558856.
Burst firing in sensory systems
Neurons that fire high-frequency bursts of spikes are found in various sensory systems. Although the functional implications of burst firing might differ from system to system, bursts are often thought to represent a distinct mode of neuronal signalling. The firing of bursts in response to sensory input relies on intrinsic cellular mechanisms that work with feedback from higher centres to control the discharge properties of these cells. Recent work sheds light on the information that is conveyed by bursts about sensory stimuli, on the cellular mechanisms that underlie bursting, and on how feedback can control the firing mode of burst-capable neurons, depending on the behavioural context. These results provide strong evidence that bursts have a distinct function in sensory information transmission.
[Assessment of the management of community-acquired pneumonia in adults outpatients].
BACKGROUND There is limited information about the effectiveness of the treatment of community-acquired pneumonia (CAP) in Chilean emergency rooms. AIM To assess the treatment of CAP in emergency rooms at the Viña del Mar Health Service in Chile. MATERIAL AND METHODS Prospective study of immunocompetent adult patients consulting for a CAP in emergency rooms. Those that required hospital admission were considered ineligible. The initial clinical and laboratory assessment, antimicrobial treatment and their condition after 30 days of follow up were recorded. RESULTS Three hundred eleven adult patients aged 57+/-22 years (152 males) were evaluated. Patients with class I CAP (40% of cases) were treated with Clarithromycin (71.8%) or Amoxicillin (26.6%) for 10 days. Patients with class II CAP (60%) were treated with Amoxicillin-clavulanate (80.7%) or Levofloxacin (18.2%) for 10 days. Three hundred eight patients (99%) were cured without need for hospital admission; three patients (1%) were subsequently hospitalized because of clinical failure of ambulatory treatment. Overall, three patients (1%) died; all deaths occurred during or immediately after hospitalization and were related to the severity of lung infection but not to the choice of antibiotic treatment. CONCLUSIONS The outpatient management of CAP by general practitioners working at emergency rooms was clinically effective, with low rates of hospital admission and mortality.
Information extraction on novel text using machine learning and rule-based system
A novel consists of around 30,000 to 50,000 words in total. It usually tells a story about entities and their relations to one another, such as Persons, Locations or Organizations. Apprehending that information normally requires reading the whole novel, which is a time-consuming task. This research proposes a solution: automatic extraction of entity relations by means of Information Extraction (IE) techniques. The technique is divided into two steps. First, all entities are retrieved from the input text using Named Entity Recognition (NER). Afterward, all relations are extracted by a Relation Extraction (RE) process. This research implements an IE system for both NER and RE that employs a supervised machine learning approach combined with a rule-based system. The main purpose of this research is to determine which machine learning features and algorithms are adequate to acquire the best result, and which rules are the most suitable for the characteristics of novels.
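The two-step NER-then-RE pipeline can be sketched with a purely rule-based toy (the paper combines this with supervised learning; the gazetteer entries and the single "lives in" rule below are hypothetical illustrations, not the authors' rule set):

```python
# Hypothetical gazetteer mapping surface forms to entity types.
GAZETTEER = {"Alice": "Person", "Bob": "Person", "London": "Location"}

def ner(tokens):
    # Step 1 (NER): label tokens found in the gazetteer.
    return {t: GAZETTEER[t] for t in tokens if t in GAZETTEER}

def extract_relations(sentence):
    # Step 2 (RE): one illustrative pattern, "X lives in Y",
    # accepted only when both X and Y were recognized as entities.
    tokens = sentence.split()
    entities = ner(tokens)
    relations = []
    for i in range(1, len(tokens) - 2):
        if tokens[i] == "lives" and tokens[i + 1] == "in":
            subj, obj = tokens[i - 1], tokens[i + 2]
            if subj in entities and obj in entities:
                relations.append((subj, "lives_in", obj))
    return relations

print(extract_relations("Alice lives in London"))  # → [('Alice', 'lives_in', 'London')]
```

A sentence whose subject is not in the gazetteer yields no relation, which shows why RE quality depends directly on NER coverage; in the full system a trained classifier replaces both the lookup and the hand-written pattern.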
Event-Driven Business Process Management and its Practical Application Taking the Example of DHL
The recently coined term «Event-Driven Business Process Management» (EDBPM) is a combination of two different disciplines: Business Process Management (BPM) and Complex Event Processing (CEP). The common understanding behind BPM is that each company's unique way of doing business is captured in its business processes; for this reason, business processes are today seen as the most valuable corporate asset. In the context of this article, BPM means a software platform which provides companies the ability to model, manage, and optimize these processes for significant gain. As an independent system, Complex Event Processing (CEP) is a parallel running platform that analyses and processes events. The BPM and CEP platforms communicate via events which are produced by the BPM workflow engine and by the IT services associated with the business process steps. In this paper we present a general framework for EDBPM as well as first use cases in the context of logistics and financial services.
Inference in Probabilistic Graphical Models by Graph Neural Networks
A fundamental computation for statistical inference and accurate decision-making is to compute the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops. Here we use Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves these inference tasks. We first show that the architecture of GNNs is well-matched to inference tasks. We then demonstrate the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs. Our message-passing algorithms generalize out of the training set to larger graphs and graphs with different structure.
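The classical sum-product belief propagation that the GNN is compared against can be seen in miniature on a loop-free graph, where it is exact. The sketch below (illustrative, with made-up potentials) computes the marginal of the middle variable in a three-node binary chain by message passing and checks it against brute-force enumeration:

```python
import itertools

# Pairwise potentials for a binary chain x0 - x1 - x2. The chain is
# loop-free, so sum-product belief propagation recovers exact marginals.
psi01 = [[2.0, 0.5], [0.5, 1.0]]  # psi(x0, x1)
psi12 = [[1.0, 0.2], [0.2, 1.0]]  # psi(x1, x2)

def brute_marginal_x1():
    # Exact marginal of x1 by summing over all joint configurations.
    p = [0.0, 0.0]
    for x0, x1, x2 in itertools.product((0, 1), repeat=3):
        p[x1] += psi01[x0][x1] * psi12[x1][x2]
    z = sum(p)
    return [v / z for v in p]

def bp_marginal_x1():
    # Sum-product messages into x1 from its two leaf neighbors,
    # multiplied together and normalized into a belief.
    m0 = [sum(psi01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]
    m2 = [sum(psi12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
    b = [m0[x1] * m2[x1] for x1 in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

print(bp_marginal_x1())  # → [0.625, 0.375], matching brute force
```

On graphs with loops this message update is no longer exact, which is precisely the regime where the paper's learned GNN messages are shown to outperform belief propagation.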
Mosapride, a 5HT-4 receptor agonist, improves insulin sensitivity and glycaemic control in patients with Type II diabetes mellitus
Aims/hypothesis. We investigated the potential role of mosapride, a 5HT-4 receptor agonist, in glycaemic control in Type II (non-insulin-dependent) diabetes mellitus patients without autonomic neuropathy. Methods. Thirty-four inpatients with Type II diabetes mellitus were randomly assigned to receive either mosapride (5 mg orally three times a day, n=17) or a placebo (n=17) for 1 week (first study). Changes in blood glucose and insulin were determined basally as well as after intravenous glucose loading. Insulin sensitivity was evaluated during hyperinsulinaemic-normoglycaemic-clamp studies and by measuring the number of and the autophosphorylation of insulin receptors on the erythrocytes of patients (n=9). Sixty-nine outpatients with Type II diabetes were similarly treated with mosapride or a placebo for 8 weeks (second study). Finally, tissue-specific expression of 5HT-4 receptors was examined by reverse transcriptase-polymerase chain reaction (RT-PCR). Results. Mosapride lowered fasting blood glucose and fructosamine concentrations (p<0.05) (first study). It significantly increased the number of (Mosapride 3323±518 vs 4481±786 [p<0.05], Control 4227±761 vs 3275±554 per 300 µl erythrocytes) and the tyrosine autophosphorylation (Mosapride 3178±444 vs 4043±651 [p<0.05], Control 3721±729 vs 3013±511 insulin receptor unit) of insulin receptors, as well as glucose utilisation (Mosapride 4.92±0.53 vs 5.88±0.72 [p<0.05], Control 4.74±0.65 vs 4.70±0.31 mg/kg·min). Mosapride treatment for 8 weeks significantly reduced fasting glucose (9.91±0.34 vs 8.51±0.34 mmol/l, p<0.05), insulin (53.2±4.62 vs 40.8±5.52 pmol/l, p<0.05) and HbA1c (8.61±0.20 vs 7.67±0.19%, p<0.01) concentrations (second study). The RT-PCR analysis demonstrated specific expression of 5HT-4 receptors in the muscle, but not in the liver or fat tissues. Conclusions/interpretation. Mosapride could improve insulin action in muscle and glycaemic control in Type II diabetic patients.
Metabolic engineering with systems biology tools to optimize production of prokaryotic secondary metabolites.
Covering: 2012 to 2016. Metabolic engineering using systems biology tools is increasingly applied to overproduce secondary metabolites for their potential industrial production. In this Highlight, recent relevant metabolic engineering studies are analyzed with emphasis on host selection and engineering approaches for the optimal production of various prokaryotic secondary metabolites: native versus heterologous hosts (e.g., Escherichia coli) and rational versus random approaches. This comparative analysis is followed by discussions on systems biology tools deployed in optimizing the production of secondary metabolites. The potential contributions of additional systems biology tools are also discussed in the context of current challenges encountered during optimization of secondary metabolite production.
How Spelling Supports Reading And Why It Is More Regular and Predictable Than You May Think
It is widely assumed that any educated person can spell, yet literate adults commonly characterize themselves as poor spellers and make spelling mistakes. Many children have trouble spelling, but we do not know how many, or in relation to what standard, because state accountability assessments seldom include a direct measure of spelling competence. Few state standards specify what, exactly, a student at each grade level should be able to spell, and most subsume spelling under broad topics such as written composition and language proficiency. State writing tests may not even score children on spelling accuracy, as they prefer to lump it in with other “mechanical” skills in the scoring rubrics. Nevertheless, research has shown that learning to spell and learning to read rely on much of the same underlying knowledge—such as the relationships between letters and sounds—and, not surprisingly, that spelling instruction can be designed to help children better understand that key knowledge, resulting in better reading (Ehri, 2000). Catherine Snow et al. (2005, p. 86) summarize the real importance of spelling for reading as follows: “Spelling and reading build and rely on the same mental representation of a word. Knowing the spelling of a word makes the representation of it sturdy and accessible for fluent reading.” In fact, Ehri and Snowling (2004) found that the ability to read words “by sight” (i.e. automatically) rests on the ability to map letters and letter combinations to sounds. Because words are not very visually distinctive (for example, car, can, cane), it is impossible for children to memorize more than a few dozen words unless they have developed insights into how letters and sounds correspond. Learning to spell requires instruction and gradual integration of information about print, speech sounds, and meaning—these, in turn, support memory for whole words, which is used in both spelling and sight reading.
Research also bears out a strong relationship between spelling and writing: Writers who must think too hard about how to spell use up valuable cognitive resources needed for higher-level aspects of composition (Singer and Bashir, 2004). Even more than reading, writing is a mental juggling act that depends on automatic deployment of basic skills such as handwriting, spelling, grammar, and punctuation so that the writer can keep track of such concerns as topic, organization, word choice, and audience needs. Poor spellers may restrict what they write to words they can spell, with inevitable loss of verbal power, or they may lose track of their thoughts when they get stuck trying to spell a word. But what about spell check? Since the advent of word processing and spell checkers, some educators have argued that spelling instruction is unnecessary. It’s true that spell checkers work reasonably well for those of us who can spell reasonably well—but rudimentary spelling skills are insufficient to use a spell checker. Spell checkers do not catch all errors. Students who are very poor spellers do not produce the close approximations of target words necessary for the spell checker to suggest the right word. In fact, one study (Montgomery, Karlan, and Coutinho, 2001) reported that spell checkers usually catch just 30 to 80 percent of misspellings overall (partly because they miss errors like here vs. hear), and that spell checkers identified the target word from the misspellings of students with learning disabilities only 53 percent of the time. Clearly, the research base for claiming that spelling is important for young children is solid: Learning to spell enhances children’s reading and writing. But what about middle-school students? Does continued spelling instruction offer any added benefits? Here the research is sparse indeed. Yet, the nature of the English language’s spelling/writing system provides reason to believe that there would be significant benefits to older students.
The practice of crowdsourcing is transforming the Web and giving rise to a new field
CROWDSOURCING SYSTEMS enlist a multitude of humans to help solve a wide variety of problems. Over the past decade, numerous such systems have appeared on the World-Wide Web. Prime examples include Wikipedia, Linux, Yahoo! Answers, Mechanical Turk-based systems, and much effort is being directed toward developing many more. As is typical for an emerging area, this effort has appeared under many names, including peer production, user-powered systems, user-generated content, collaborative systems, community systems, social systems, social search, social media, collective intelligence, wikinomics, crowd wisdom, smart mobs, mass collaboration, and human computation. The topic has been discussed extensively in books, popular press, and academia [1, 5, 15, 23, 29, 35]. But this body of work has considered mostly efforts in the physical world [23, 29, 30]. Some do consider crowdsourcing systems on the Web, but only certain system types [28, 33] or challenges (for example, how to evaluate users [12]). This survey attempts to provide a global picture of crowdsourcing systems on the Web. We define and classify such systems, then describe a broad sample of systems. (Crowdsourcing Systems on the World-Wide Web. DOI: 10.1145/1924421.1924442)
Assessing the Validity and Reliability of a Questionnaire on Dietary Fibre-related Knowledge in a Turkish Student Population
This study aimed to validate a questionnaire on dietary fibre (DF)-related knowledge in a Turkish student population. Participants (n=360) were either undergraduate students who had taken a 14-week nutrition course (n=174) or students who had not taken such a course (n=186). Test-retest reliability, internal reliability, and construct validity of the questionnaire were determined. Overall internal reliability (Cronbach's alpha=0.90) and test-retest reliability (0.90) were high. Significant differences (p<0.001) between the scores of the two groups of students indicated that the questionnaire had satisfactory construct validity. It was found that one-fifth of the students were unsure of the correct answer for any given item, and 52.5% of them were not aware that DF had to be consumed on a daily basis. Only 36.4 to 44.2% of the students were able to correctly identify the food sources of DF.
Social competence intervention for youth with Asperger Syndrome and high-functioning autism: an initial investigation.
Individuals with high functioning autism (HFA) or Asperger Syndrome (AS) exhibit difficulties in the knowledge or correct performance of social skills. This subgroup's social difficulties appear to be associated with deficits in three social cognition processes: theory of mind, emotion recognition and executive functioning. The current study outlines the development and initial administration of the group-based Social Competence Intervention (SCI), which targeted these deficits using cognitive behavioral principles. Across 27 students age 11-14 with a HFA/AS diagnosis, results indicated significant improvement on parent reports of social skills and executive functioning. Participants evidenced significant growth on direct assessments measuring facial expression recognition, theory of mind and problem solving. SCI appears promising, however, larger samples and application in naturalistic settings are warranted.
Onset time, recovery duration, and drug cost with four different methods of inducing general anesthesia.
UNLABELLED We compared two conventional induction techniques (thiopental and propofol), an inhaled induction with sevoflurane using a circle system, and a rebreathing method. Fentanyl 1 microg/kg was given to women undergoing 10- to 20-min procedures. Anesthesia was induced (n = 20 each) with one of the following: 1) sevoflurane and N2O from a rebreathing bag (Sevo/Bag). A 5-L bag was prefilled with a mixture of sevoflurane 7% and N2O 60% in oxygen. The bag was connected between the normal circle system, separated by a spring-loaded valve; 2) sevoflurane 8% and N2O 60% from a circle system on a conventional anesthesia machine with a total fresh gas flow of 6 L/min (Sevo/Circle); 3) propofol 3 mg/kg as an i.v. bolus; 4) thiopental sodium 5 mg/kg as an i.v. bolus. Postoperative nausea and vomiting was treated with ondansetron. Induction times were comparable with each method. Recovery duration was shortest with sevoflurane, intermediate with propofol, and longest with thiopental. Induction drug costs were lowest with Sevo/Bag and thiopental, intermediate with Sevo/Circle, and highest with propofol. However, sevoflurane (by either method) caused considerable nausea and vomiting that required treatment. Consequently, total drug cost was least with thiopental, intermediate with Sevo/Bag and propofol, and greatest with Sevo/Circle. Thus, no single technique was clearly superior. IMPLICATIONS Anesthetic induction techniques influence awakening time, recovery duration, and drug costs. We tested two i.v. methods and two inhaled techniques. However, none of the four tested methods was clearly superior to the others.
A comparison of the noise sensitivity of nine QRS detection algorithms
The noise sensitivities of nine different QRS detection algorithms were measured for a normal, single-channel, lead-II, synthesized ECG corrupted with five different types of synthesized noise: electromyographic interference, 60-Hz power line interference, baseline drift due to respiration, abrupt baseline shift, and a composite noise constructed from all of the other noise types. The percentage of QRS complexes detected, the number of false positives, and the detection delay were measured. None of the algorithms were able to detect all QRS complexes without any false positives for all of the noise types at the highest noise level. Algorithms based on amplitude and slope had the highest performance for EMG-corrupted ECG. An algorithm using a digital filter had the best performance for the composite-noise-corrupted data.
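The amplitude-and-slope family of detectors mentioned above can be illustrated with a minimal sketch. This is not any of the nine published algorithms, and the function name, thresholds, and window sizes are illustrative assumptions; a practical detector (e.g. Pan-Tompkins) adds band-pass filtering and adaptive thresholding:

```python
import numpy as np

def detect_qrs(ecg, fs, amp_frac=0.5, slope_frac=0.5, refractory_s=0.2):
    """Toy amplitude-and-slope QRS detector (illustrative sketch only).

    A sample is flagged as a QRS peak when (a) its amplitude exceeds a
    fraction of the signal maximum, (b) it is a local maximum, (c) the
    upstroke just before it is steep enough, and (d) it respects a
    refractory period after the previous detection.
    """
    slope = np.abs(np.diff(ecg, prepend=ecg[0]))   # per-sample slope magnitude
    amp_thr = amp_frac * ecg.max()
    slope_thr = slope_frac * slope.max()
    win = max(1, int(0.05 * fs))                   # look-back window for the upstroke
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] >= amp_thr
                and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]
                and slope[max(0, i - win):i + 1].max() >= slope_thr
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic single-channel "ECG": five Gaussian spikes, one per second.
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = np.zeros_like(t)
for c in (0.5, 1.5, 2.5, 3.5, 4.5):
    ecg += np.exp(-((t - c) * 40.0) ** 2)
peaks = detect_qrs(ecg, fs)
```

On clean synthetic beats this finds every complex; the abstract's point is precisely that such threshold logic degrades differently under EMG noise, power-line interference, and baseline drift.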
The challenges of leadership in the modern world: introduction to the special issue.
This article surveys contemporary trends in leadership theory as well as its current status and the social context that has shaped the contours of leadership studies. Emphasis is placed on the urgent need for collaboration among social-neuro-cognitive scientists in order to achieve an integrated theory, and the author points to promising leads for accomplishing this. He also asserts that the 4 major threats to world stability are a nuclear/biological catastrophe, a world-wide pandemic, tribalism, and the leadership of human institutions. Without exemplary leadership, solving the problems stemming from the first 3 threats will be impossible.
Effects of improvement in aerobic power on resting insulin and glucose concentrations in children
In this study we determined the influence of improving aerobic power (V̇O2max) on basal plasma levels of insulin and glucose of 11- to 14-year-old children, while accounting for body fat, gender, pubertal status, and leisure-time physical activity (LTPA) levels. Blood samples were obtained from 349 children after an overnight fast and analyzed for plasma insulin and glucose. Height, mass, body mass index (BMI), and sum of skinfolds (Σ triceps + subscapular sites) were measured. LTPA levels and pubertal status were estimated from questionnaires, and V̇O2max was predicted from a cycle ergometry test. Regardless of gender, insulin levels were significantly correlated (P = 0.0001) to BMI, skinfolds, pubertal stage, and predicted V̇O2max, but were not related to LTPA levels. Fasting glucose levels were not correlated to measures of adiposity or exercise (LTPA score, V̇O2max) for females; however, BMI and skinfolds were correlated for males (P < 0.006). The children then took part in an 8-week aerobic exercise program. The 60 children whose V̇O2max improved (≥3 ml·kg⁻¹·min⁻¹) had a greater reduction in circulating insulin than the 204 children whose V̇O2max did not increase [−16 (41) vs −1 (63) pmol·l⁻¹; P = 0.028]. The greatest change occurred in those children with the highest initial resting insulin levels. Plasma glucose levels were slightly reduced only in those children with the highest insulin levels whose V̇O2max improved (P < 0.0506). The results of this study indicate that in children, adiposity has the most significant influence on fasting insulin levels; however, increasing V̇O2max via exercise can lower insulin levels in those children with initially high levels of the hormone. In addition, LTPA does not appear to be associated with fasting insulin status, unless it is sufficient to increase V̇O2max.
Structural Basis for Activation of the Receptor Tyrosine Kinase KIT by Stem Cell Factor
Stem Cell Factor (SCF) initiates its multiple cellular responses by binding to the ectodomain of KIT, resulting in tyrosine kinase activation. We describe the crystal structure of the entire ectodomain of KIT before and after SCF stimulation. The structures show that KIT dimerization is driven by SCF binding whose sole role is to bring two KIT molecules together. Receptor dimerization is followed by conformational changes that enable lateral interactions between membrane proximal Ig-like domains D4 and D5 of two KIT molecules. Experiments with cultured cells show that KIT activation is compromised by point mutations in amino acids critical for D4-D4 interaction. Moreover, a variety of oncogenic mutations are mapped to the D5-D5 interface. Since key hallmarks of KIT structures, ligand-induced receptor dimerization, and the critical residues in the D4-D4 interface, are conserved in other receptors, the mechanism of KIT stimulation unveiled in this report may apply for other receptor activation.
Ensemble deep learning for speech recognition
Deep learning systems have dramatically improved the accuracy of speech recognition, and various deep architectures and learning methods have been developed with distinct strengths and weaknesses in recent years. How ensemble learning can be applied to these varying deep learning systems to achieve greater recognition accuracy is the focus of this paper. We develop and report linear and log-linear stacking methods for ensemble learning, with applications specifically to speech-class posterior probabilities as computed by convolutional, recurrent, and fully-connected deep neural networks. Convex optimization problems are formulated and solved, with analytical formulas derived for training the ensemble-learning parameters. Experimental results demonstrate a significant increase in phone recognition accuracy after stacking the deep learning subsystems that use different mechanisms for computing high-level, hierarchical features from the raw acoustic signals in speech.
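The linear-stacking idea can be sketched as follows. This is an assumed simplification, not the paper's exact convex formulation: weights over the subsystems' frame-level posteriors are fit by least squares against one-hot targets, then crudely projected to be non-negative and sum to one.

```python
import numpy as np

def linear_stack_weights(posteriors, labels):
    """Fit stacking weights for K subsystems by least squares.

    posteriors: array (K, N, C) of class posteriors from K subsystems
                for N frames and C classes.
    labels:     array (N,) of integer class labels.
    """
    K, N, C = posteriors.shape
    targets = np.eye(C)[labels]               # one-hot targets, (N, C)
    A = posteriors.reshape(K, -1).T           # (N*C, K) design matrix
    w, *_ = np.linalg.lstsq(A, targets.ravel(), rcond=None)
    w = np.clip(w, 0.0, None)                 # crude projection to non-negativity
    return w / w.sum()

def stack_predict(posteriors, w):
    """Combine subsystem posteriors with the learned weights."""
    return np.tensordot(w, posteriors, axes=1)    # (N, C)

# Two toy "subsystems": one perfectly confident, one uninformative.
labels = np.array([0, 1, 2, 0, 1, 2])
perfect = np.eye(3)[labels]                       # (N, C)
uniform = np.full_like(perfect, 1.0 / 3.0)
posteriors = np.stack([perfect, uniform])         # (K=2, N, C)
w = linear_stack_weights(posteriors, labels)
combined = stack_predict(posteriors, w)
```

As expected, all the weight lands on the informative subsystem; with real CNN/RNN/DNN posteriors the learned weights reflect each subsystem's complementary strengths.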
Optimal financial contracts with unobservable investments
In this article we propose a security-design problem in which risk neutral entrepreneurs make unobservable investment decisions while employing the investment funds of risk-neutral outside investor/creditor(s). Contracts are restricted to satisfy limited liability and monotonicity of the payment schedule. The model we present extends the classical one proposed by Innes (1990, Journal of Economic Theory 52, 47-67) along three main directions: agents' decisions may be restricted by their initial capital and outside financial opportunities; their investment decisions may also consist in hiding funds in an asset placed outside their firms; initial firms' capital, which identifies entrepreneur types, may only be imperfectly observed by creditors (i.e. types are private information). We motivate our interest in this security-design problem referring to the 'opacity' that often characterizes the financial situation and decisions of small firms, a particularly large fraction of the non-financial sector in most developed countries.
Making sense of interoperability: Protocols and standardization initiatives in IOT
Recent developments in connectivity technologies have spurred the adoption of internet-connected "smart" devices for remote sensing, actuation and intelligent monitoring using advanced analytics and real-time data processing. As the pace and the scale of such solutions increase rapidly, there will soon be a problem of getting these disparate solutions to work seamlessly together to realize a large-scale Internet of Things (IOT). Fortunately, several industry bodies and standards forums are working on extending or adapting the internet protocols to constrained devices. However, in the near term, disparate islands of solutions are likely to outpace the deployment of interoperable standards-based solutions. In this paper, we review and discuss recent developments in protocols and standardization initiatives for IOT from the perspective of interoperability. In particular, we look at application-layer protocols and issues of interoperability of solutions.
Text Generation using Generative Adversarial Training
Generative models reduce the need for laborious labeling of datasets. Text generation techniques can be applied to improve language models, machine translation, summarization, and captioning. This project experiments with different recurrent neural network models to build generative adversarial networks for generating text from noise. The trained generator is capable of producing sentences with a certain level of grammar and logic.
Making time for mindfulness
OBJECTIVE Digital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what the enabling factors and barriers to effective use are. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention. METHODS A qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40 days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change. RESULTS The core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness. CONCLUSIONS Mobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle.
The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.
First-trimester combined screening for trisomy 21 in a predominantly Chinese population.
OBJECTIVE To examine the effectiveness of first-trimester fetal trisomy 21 screening using a combination of maternal age, nuchal translucency thickness (NT) and maternal serum free beta-human chorionic gonadotropin (beta-hCG) and pregnancy-associated plasma protein-A (PAPP-A) levels in a predominantly Chinese population in Hong Kong. METHODS This was a prospective study over a 1.5-year period of 2990 women who underwent combined screening for trisomy 21 between 11+0 and 13+6 weeks of gestation in a university fetal medicine unit. NT was measured according to the criteria set by The Fetal Medicine Foundation (FMF), maternal serum free beta-hCG and PAPP-A levels were measured, and the risk of trisomy 21 was calculated using The FMF's algorithm. Fetal karyotyping was advised when the risk was 1 : 300 or above. All subjects were followed up for pregnancy and fetal outcome. RESULTS Of the 2990 women who underwent the screening program, 99% were Chinese. There were 57 twin pregnancies, giving a total of 3047 fetuses. Thirty-one percent of the women were 35 years old or above. One hundred and eighty-five (6.1%) fetuses were screen-positive; this included 14 cases of trisomy 21 and 17 cases of other chromosomal abnormalities. The positive predictive value was 16.7%. Among the 2862 screen-negative fetuses, only 18 (0.6%) cases had an unknown fetal outcome. There were no cases in which trisomy 21 was missed and the infant was liveborn. CONCLUSION First-trimester combined screening for fetal trisomy 21 is highly effective among Chinese subjects.
A state of the art survey of data mining-based fraud detection and credit scoring
Credit risk has been a widespread and deeply penetrating problem for centuries, but only since various credit derivatives and products were developed and novel technologies began radically changing human society have fraud detection, credit scoring and other risk management systems become so important, not only to specific firms but to industries and governments worldwide. Fraud and unpredictable defaults cost billions of dollars each year, forcing financial institutions to continuously improve their systems for loss reduction. In the past twenty years, many studies have proposed the use of data mining techniques to detect fraud, score credit and manage risk, but issues such as data selection, algorithm design, and hyperparameter optimization affect the perceived ability of the proposed solutions, and it is difficult for auditors and researchers to gauge the state of the art in this area. This survey focuses on recently developed data mining techniques for fraud detection and credit scoring. Several outstanding experiments are recorded and highlighted, and the corresponding techniques, which are mostly based on supervised learning algorithms, unsupervised learning algorithms, semi-supervised algorithms, ensemble learning, transfer learning, or hybrid ideas, are explained and analysed. The goal of this paper is to provide a dense review of up-to-date techniques for fraud detection and credit scoring, a general analysis of the results achieved, and the upcoming challenges for further research.
Impact of mucosal healing on long-term outcomes in ulcerative colitis treated with infliximab: a multicenter experience.
BACKGROUND Mucosal healing can be achieved with infliximab (IFX). AIM To assess the impact of mucosal healing on long-term outcomes in patients with ulcerative colitis (UC) when treated with infliximab (IFX) beyond 1 year. METHODS All consecutive adult patients with refractory UC receiving maintenance treatment with IFX in five French referral centres were analysed retrospectively. Only patients who had endoscopic evaluation between 6 and 52 weeks following IFX initiation were included. According to their Mayo endoscopic sub-score, patients were categorised into mucosal healing (sub-score: 0-1) and no mucosal healing (2-3). Outcome measures were colectomy and IFX failure defined by drug withdrawal due to secondary failure among primary responders. RESULTS Of the 63 patients (30 women; median age: 38 years), 30 (48%) achieved mucosal healing. The median follow-up duration was 27 (3-79) months. Colectomy-free survival rates at 12, 24 and 36 months were, respectively, 100%, 96% and 96% in patients with mucosal healing. The corresponding figures were, respectively, 80%, 65% and 65% in patients without mucosal healing (P = 0.004). By multivariate analysis, mucosal healing was the only factor associated with colectomy-free survival, with an odds ratio of 18.01 (95%CI: 1.58-204.92). IFX failure-free survival rates at 12, 24 and 36 months were, respectively, 76%, 69% and 64% in patients with mucosal healing, and 44%, 25% and 21% in those without mucosal healing (P = 0.003). CONCLUSION Patients with refractory UC who achieved mucosal healing after IFX initiation had better long-term outcomes, with significantly less colectomy and less IFX failure.
Aircraft angle of attack and air speed detection by redundant strip pressure sensors
Air speed and angle of attack are fundamental parameters in the control of flying bodies. Conventional detection techniques use sensors that may protrude outside the aircraft, be too bulky for installation on small UAVs, or be excessively intrusive. We propose a novel readout methodology where flight parameters are inferred from redundant pressure readings made by capacitive strip sensors directly applied on the airfoil skin. Redundancy helps lower the accuracy requirements on the individual sensors. A strategy for combining sensor data is presented with an error propagation analysis. The latter enables foreseeing the precision by which the flight parameters can be detected. The methodology has been validated by fluid dynamic simulation and a sample case is illustrated.
Book Review Article: Ethnography, Advocacy and Feminism: A Volatile Mix. A view from a reading of Diane Bell's Ngarrindjeri Wurruwarrin.
Ethnography may lie at the heart of anthropological methodology but its claims are contested. Feminist anthropologists in particular have debated the challenges a critical academic discipline poses for a consciously politicised positioning of the ethnographer, examining the constraints this might impose on the ethnographic project. Such dilemmas are compounded in the context of advocacy work. This critique of a feminist ethnography (Diane Bell's Ngarrindjeri Wurruwarrin), which emerged from advocacy work in a litigious Australian context, suggests that the truth demands of advocacy work sit uneasily with both the partiality of critical ethnography and the politics of the feminist project.
A Document Skew Detection Method Using the Hough Transform
Document image processing has become an increasingly important technology in the automation of office documentation tasks. Automatic document scanners such as text readers and OCR (Optical Character Recognition) systems are an essential component of systems capable of those tasks. One of the problems in this field is that the document to be read is not always placed correctly on a flatbed scanner, so the document may be skewed on the scanner bed, resulting in a skewed image. This skew has a detrimental effect on document analysis, document understanding, and character segmentation and recognition. Consequently, detecting the skew of a document image and correcting it are important issues in realising a practical document reader. In this paper we describe a new algorithm for skew detection. We then compare the performance and results of this skew detection algorithm to other published methods from O'Gorman, Hinds, Le, Baird, Postl and Akiyama. Finally, we discuss the theory of skew detection and the different approaches taken to solve the problem of skew in documents. The skew correction algorithm we propose has been shown to be extremely fast, with run times averaging under 0.25 CPU seconds to calculate the angle on a DEC 5000/20 workstation.
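The Hough-style approach behind such skew detectors can be sketched as follows (assumed details, not the authors' algorithm): each foreground pixel votes, for every candidate angle, for the offset of the text line that would pass through it, and the angle whose vote histogram is sharpest is taken as the skew.

```python
import numpy as np

def estimate_skew_deg(binary_img, max_deg=15.0, step_deg=0.5):
    """Estimate page skew (in degrees) with a Hough-style angle sweep.

    For foreground pixels on a text line y = x*tan(theta) + c, the value
    rho = y*cos(theta) - x*sin(theta) is constant (= c*cos(theta)), so
    the histogram of rho over all pixels is sharpest at the true skew.
    """
    ys, xs = np.nonzero(binary_img)
    best_deg, best_score = 0.0, -np.inf
    for deg in np.arange(-max_deg, max_deg + step_deg, step_deg):
        th = np.deg2rad(deg)
        rho = ys * np.cos(th) - xs * np.sin(th)
        hist, _ = np.histogram(rho, bins=np.arange(rho.min(), rho.max() + 2.0))
        score = float((hist.astype(np.float64) ** 2).sum())  # peaky = aligned
        if score > best_score:
            best_score, best_deg = score, deg
    return best_deg

# Synthetic page: three text "lines" drawn at a 5-degree skew.
img = np.zeros((200, 120), dtype=bool)
slope = np.tan(np.deg2rad(5.0))
for c in (10, 60, 110):
    for x in range(120):
        img[int(round(x * slope)) + c, x] = True
skew = estimate_skew_deg(img)
```

Real detectors vote on connected-component centroids or smeared text rather than raw pixels, and refine the angle at finer resolution, but the accumulator-peak principle is the same.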
Research, design and fabrication of 2.45 GHz microstrip patch antenna arrays for close-range wireless power transmission systems
Antennas play a very important role in wireless power transmission (WPT) systems using microwaves, since they affect the system transmission efficiency. General requirements for an antenna used in WPT systems are high directivity and narrow beamwidth. This paper presents the design, simulation and fabrication of microstrip patch antenna arrays for close-range WPT systems. Various antenna prototypes such as single patch, 2×2, 2×4, 4×8 and 8×8 microstrip patch antenna arrays working at 2.45 GHz are proposed. The 2×4 microstrip patch antenna array is fabricated and measured using an Anritsu 37364D Vector Network Analyzer and an NSI2000 Near-field System. Measurement results show that the fabricated antenna obtains a directivity of 14.7 dBi and a 3 dB beamwidth of 23.2 degrees. An experiment with the WPT system is carried out. This paper also presents the structure of a WPT system and the role of each building block in the system.
What is binocular vision for? A birds' eye view.
It is proposed that with the possible exception of owls, binocularity in birds does not have a higher order function that results in the perception of solidity and relative depth. Rather, binocularity is a consequence of the requirement of having a portion of the visual field that looks in the direction of travel; hence, each eye must have a contralateral projection that gives rise to binocularity. This contralateral projection is necessary to gain a symmetrically expanding optic flow-field about the bill. This specifies direction of travel and time to contact a target during feeding or when provisioning chicks. In birds that do not need such control of their bill, binocular field widths are very small, suggesting that binocular vision plays only a minor role in the control of locomotion. In the majority of birds, the function of binocularity would seem to lie in what each eye does independently rather than in what the two eyes might be able to do together. The wider binocular fields of owls may be the product of an interaction between enlarged eyes and enlarged outer ears, which may simply prevent more lateral placement of the eyes.
dhSegment: A Generic Deep-Learning Approach for Document Segmentation
In recent years there have been multiple successful attempts to tackle document processing problems separately by designing task-specific hand-tuned strategies. We argue that the diversity of historical document processing tasks makes solving them one at a time impractical and shows the need for generic approaches that can handle the variability of historical series. In this paper, we address multiple tasks simultaneously, such as page extraction, baseline extraction, layout analysis, and the extraction of multiple typologies of illustrations and photographs. We propose an open-source implementation of a CNN-based pixel-wise predictor coupled with task-dependent post-processing blocks. We show that a single CNN architecture can be used across tasks with competitive results. Moreover, most of the task-specific post-processing steps can be decomposed into a small number of simple, standard, reusable operations, adding to the flexibility of our approach.
Clinical features of colorectal cancer in Iran: a 15-year review.
OBJECTIVE The increasing frequency of colorectal cancer in Asian countries, which have traditionally had a lower incidence than populations in developed countries, and a lower age of onset in Iran led us to perform a study on patients with colorectal cancer to investigate the clinical features of the disease in Iranians. METHODS We studied 977 consecutive cases of colorectal cancer in terms of age, gender, and anatomic location during a 15-year period (1989-2004). The patients were hospitalized in three referral hospitals affiliated with Shahid Beheshti Medical University in Tehran, Iran. Related data were extracted from the patients' medical records and processed using SPSS ver. 12. Most of our patients (251 cases, or 25.6%) were between 60 and 69 years of age. We recorded 534 men (54.3%) and 443 women (45.7%). A total of 36% of patients were between 40 and 59 years of age, and 306 patients (31.32%) had rectal cancer. RESULTS The highest frequency of colorectal cancer in our study belonged to an age group a decade younger than the range mentioned in the literature for populations in developed societies. CONCLUSION In light of similar studies, it seems reasonable to begin screening a decade earlier for Iranian patients.
Contornos en negativo: Reescrituras posdictatoriales del siglo XIX (Argentina, Chile y Uruguay)
This dissertation focuses on how contemporary Southern Cone fiction responds to the post-dictatorial symbolic crisis by articulating a new political praxis based upon the intertwining of historical and aesthetic discourses. It thus addresses the link between narrative, historiography and politics in recent Argentinian, Chilean and Uruguayan texts that have not received major critical attention, probably because they do not fit into contemporary critical categories. The analysis seeks to go beyond the hegemonic debates on post-dictatorship —mostly represented by scholars such as Idelber Avelar, Francine Masiello, Nelly Richard and Alberto Moreiras, among others— which seem trapped in a number of recurrent topics and concepts —mourning, memory, post-memory, horror, allegory— that erase their critical potential and obliterate other possibilities. Breaking with this pattern, my work explores a dimension that has often been overlooked: the contemporary representation of the nineteenth century. I test the hypothesis that, in contemporary Southern Cone literature, the fictional re-writing of the nineteenth century (the foundational moment of nation, narration and intellectuals in Latin America) attempts to outline new ethical models for the intellectual, to solve the representational crisis through a reformulation of realism and to frame a specific political role for literary practice. My perspective owes considerably to a number of thinkers within the Marxist tradition —especially Walter Benjamin, Gyorgy Lukacs and Fredric Jameson— and intends to contribute to some of their major discussion topics, above all the relation between history and aesthetics and the dichotomy allegory/realism.
Loan supply, credit markets and the euro area financial crisis
We use bank-level information on lending practices from the euro area Bank Lending Survey to construct a new indicator of loan supply. JEL Classification: E51, E44, C32
Performance evaluation of electric power steering with IPM motor and drive system
With the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) systems, which use an electric motor, have come into use in recent years. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with the design and optimization of an interior permanent magnet (IPM) motor for power steering applications. The simulated annealing method is used for optimization. After optimizing and finding the motor parameters, an IPM motor and drive together with the mechanical parts of the EPS system is simulated, and the performance of the system is evaluated.
Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR
Modern operating systems use hardware support to protect against control-flow hijacking attacks such as code-injection attacks. Typically, write access to executable pages is prevented and kernel mode execution is restricted to kernel code pages only. However, current CPUs provide no protection against code-reuse attacks like ROP. ASLR is used to prevent these attacks by making all addresses unpredictable for an attacker. Hence, kernel security relies fundamentally on preventing access to address information. We introduce Prefetch Side-Channel Attacks, a new class of generic attacks exploiting major weaknesses in prefetch instructions. This allows unprivileged attackers to obtain address information and thus compromise the entire system by defeating SMAP, SMEP, and kernel ASLR. Prefetch can fetch inaccessible privileged memory into various caches on Intel x86. It also leaks the translation level for virtual addresses on both Intel x86 and ARMv8-A. We build three attacks exploiting these properties. Our first attack retrieves an exact image of the full paging hierarchy of a process, defeating both user space and kernel space ASLR. Our second attack resolves virtual to physical addresses to bypass SMAP on 64-bit Linux systems, enabling ret2dir attacks. We demonstrate this from unprivileged user programs on Linux and inside Amazon EC2 virtual machines. Finally, we demonstrate how to defeat kernel ASLR on Windows 10, enabling ROP attacks on kernel and driver binary code. We propose a new form of strong kernel isolation to protect commodity systems, incurring an overhead of only 0.06-5.09%.
Adaptive Loss Minimization for Semi-Supervised Elastic Embedding
Semi-supervised learning usually predicts labels only for the unlabeled data appearing in the training set, and cannot effectively predict labels for test data never seen during training. To handle this out-of-sample problem, many inductive methods impose a constraint that the predicted label matrix be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of the data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose to use an elastic embedding constraint on the predicted label matrix so that the manifold structure can be better explored. To solve our new objective, and also a more general optimization problem, we study a novel adaptive loss with an efficient optimization algorithm. Our new adaptive loss minimization method takes advantage of both the L1 norm and the L2 norm: it is robust to data outliers under a Laplacian distribution and can efficiently learn normal data under a Gaussian distribution. Experiments have been performed on image classification tasks, and our approach outperforms other state-of-the-art methods.
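The L2-for-Gaussian, L1-for-outliers behavior described above can be illustrated with a Huber-style robust loss (a hedged sketch only: the paper defines its own adaptive loss, and the threshold parameter `sigma` here is an illustrative stand-in):

```python
import numpy as np

def adaptive_loss(residuals, sigma=1.0):
    """Behaves like the L2 norm for small residuals (efficient for
    Gaussian-distributed data) and like the L1 norm for large ones
    (robust to outliers under a Laplacian distribution)."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2 / sigma   # L2-like region, |r| <= sigma
    linear = r - 0.5 * sigma           # L1-like region, |r| > sigma
    return np.where(r <= sigma, quadratic, linear)

print(adaptive_loss(np.array([0.5, 2.0])))  # small vs. large residual
```

The two branches meet at `|r| = sigma` with matching value and slope, so the loss is smooth where the regimes switch.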
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. Standard denoisers suffer from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
Multi-Frequency and Dual-Mode Patch Antenna Based on Electromagnetic Band-gap (EBG) Structure
A microstrip patch antenna with multi-frequency and dual-mode operation is presented in this communication. The introduction of a metamaterial substrate made of a modified mushroom-type electromagnetic band-gap (EBG) structure enables these multiple functions. The composite right/left-handed transmission line (CRLH TL) theory and multi-conductor transmission line (MTL) theory are introduced to analyze and design this kind of antenna. Three modes, including one monopolar radiation pattern in the n=0 mode and two patch-like radiation patterns in the n=-1 and n=1 modes respectively, are analyzed and can be tuned by the parameters of the EBG structure. Since the geometry of the radiating patch does not affect the infinite-wavelength mode, circular polarization (CP) is obtained for the n=+1 mode by truncating the square radiating patch, while the linear polarization (LP) of the n=0 mode stays unchanged. Thus a dual-polarized antenna is obtained. Two prototypes of probe-fed patch antennas are fabricated to verify the theoretical analysis.
Facial expression recognition using Support Vector Machines
This paper proposes a facial expression recognition approach based on the Principal Component Analysis (PCA) and Local Binary Pattern (LBP) algorithms. Experiments were carried out on the Japanese Female Facial Expression (JAFFE) database and our recently introduced Mevlana University Facial Expression (MUFE) database. A Support Vector Machine (SVM) was used as the classifier. In all experiments conducted on the JAFFE and MUFE databases, the obtained results reveal that PCA+SVM achieves average recognition rates of 87% and 77%, respectively.
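The PCA-then-SVM stage of such a pipeline can be sketched with scikit-learn (a hedged illustration: the random feature vectors, dimensionalities, and linear kernel below are assumptions; the paper extracts PCA and LBP features from face images):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in "feature vectors" for two expression classes (64-dim each).
X = np.vstack([rng.normal(0.0, 1.0, (20, 64)),
               rng.normal(3.0, 1.0, (20, 64))])
y = np.array([0] * 20 + [1] * 20)

# Reduce dimensionality with PCA, then classify with an SVM.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.score(X, y))
```

On real data the same pipeline would be fit on training images and scored on a held-out test split rather than on the training set.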
Restricted Kaposi’s Sarcoma (KS) Herpesvirus Transcription in KS Lesions from Patients on Successful Antiretroviral Therapy
UNLABELLED Kaposi's sarcoma (KS) is caused by Kaposi's sarcoma-associated herpesvirus (KSHV; human herpesvirus 8). KS is an AIDS-defining cancer, and it is changing in the post-antiretroviral therapy (post-ART) era. In countries with ready access to ART, approximately one-third of KS cases present in patients with undetectable HIV loads and CD4 counts of ≥200 cells/µl. This is in contrast to pre-ART era KS, which was associated with systemic HIV replication and CD4 counts of ≤200 cells/µl. Using primary patient biopsy specimens, we identified a novel molecular signature that characterizes AIDS KS lesions that develop in HIV-suppressed patients on ART: KSHV transcription is limited in HIV-suppressed patients. With one exception, only the canonical viral latency mRNAs were detectable. In contrast, early AIDS KS lesions expressed many more viral mRNAs, including, for instance, the viral G protein-coupled receptor (vGPCR). IMPORTANCE This is the first genomewide study of Kaposi's sarcoma-associated herpesvirus (KSHV) transcription in KS lesions in the post-antiretroviral (post-ART) era. It shows that the gene expression of KSHV is altered in patients on ART, and it provides clinical evidence for active AIDS (as characterized by high HIV load and low CD4 counts) being a potential modulator of KSHV transcription. This implies a novel mode of pathogenesis (tightly latent KS), which may inform KS cancer treatment options in the post-ART era.
Cognitive neuroscience of human memory.
Current knowledge is summarized about long-term memory systems of the human brain, with memory systems defined as specific neural networks that support specific mnemonic processes. The summary integrates convergent evidence from neuropsychological studies of patients with brain lesions and from functional neuroimaging studies using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI). Evidence is reviewed about the specific roles of hippocampal and parahippocampal regions, the amygdala, the basal ganglia, and various neocortical areas in declarative memory. Evidence is also reviewed about which brain regions mediate specific kinds of procedural memory, including sensorimotor, perceptual, and cognitive skill learning; perceptual and conceptual repetition priming; and several forms of conditioning. Findings are discussed in terms of the functional neural architecture of normal memory, age-related changes in memory performance, and neurological conditions that affect memory, such as amnesia, Alzheimer's disease, Parkinson's disease, and Huntington's disease.
Global Encoding for Abstractive Summarization
Problems in seq2seq summarization: repetition and semantic irrelevance. Our global encoding mechanism, built from a CNN and self-attention, improves performance on the benchmark datasets and generates summaries with less repetition and higher semantic consistency with the source text. Sequence-to-sequence baseline: the encoder is an RNN (usually LSTM or GRU); the decoder is an RNN for sequential decoding, usually trained with teacher forcing; additive or global attention retrieves the relevant source-side information. Motivation: there is noise in the source context; the relationship between source and target differs from the alignment in machine translation, and correct alignment does not indicate a good summary; the source annotation at each time step lacks global information about the context and may provide unnecessary information for the summary. Global encoding: convolutional neural networks (with an Inception-like structure) are applied over the source annotations, self-attention provides connections to the global context, and together they collaboratively build a gate for the original source annotations. Experiments: on the LCSTS and Gigaword datasets, with qualitative analyses. Conclusions: conventional seq2seq requires a mechanism to improve the source annotations so that they can provide summary-oriented information for the attention; global encoding improves the quality of generated summaries, which is reflected in both the ROUGE evaluation and the case study; future work is still required to figure out what the gate filters and how it improves the performance of the model.
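The gating idea can be sketched in a few lines of numpy (a minimal illustration: a single self-attention head stands in for the paper's Inception-style CNN plus self-attention, and all weight matrices here are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_encoding(H, Wq, Wk, Wv, Wg):
    """H: (T, d) source annotations from the RNN encoder.
    Self-attention gives each position a view of the global context;
    a sigmoid gate then filters the original annotations."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    context = softmax(Q @ K.T / np.sqrt(K.shape[1])) @ V  # (T, d)
    gate = 1.0 / (1.0 + np.exp(-context @ Wg))            # values in (0, 1)
    return gate * H
```

Because the gate lies in (0, 1), each annotation can only be attenuated, never amplified, which matches the intuition of filtering out context that is irrelevant to the summary.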
Understanding Online Consumer Stickiness in E-commerce Environment : A Relationship Formation Model
Consumers with online shopping experience often stick to particular websites. Does this mean that long-term relationships between consumers and those websites have formed? To address this question, this paper analyzed the belief and attitude factors influencing online consumer stickiness intention. Based on the belief-attitude-intention relationships implied in the Theory of Planned Behavior (TPB), and drawing on Expectation-Confirmation theory and Commitment-Trust theory, we developed a conceptual model of online consumer stickiness. Six research hypotheses derived from this model were empirically validated using a survey of online shoppers, with Structural Equation Modeling (SEM) as the data analysis method. The results suggest that online consumer stickiness intention is influenced by consumers' commitment to the website (e-vendor) and their overall satisfaction with the transaction process, with commitment's effect being stronger; in turn, both consumers' commitment and overall satisfaction are significantly influenced by consumers' ongoing trust; confirmation has a significant effect on overall satisfaction but not the same effect on commitment. The findings show that long-term relationships between consumers and particular websites do exist during repeat online consumption.
Row-less Universal Schema
Universal schema jointly embeds knowledge bases and textual patterns to reason about entities and relations for automatic knowledge base construction and information extraction. In the past, entity pairs and relations were represented as learned vectors with compatibility determined by a scoring function, limiting generalization to unseen text patterns and entities. Recently, ‘column-less’ versions of Universal Schema have used compositional pattern encoders to generalize to all text patterns. In this work we take the next step and propose a ‘row-less’ model of universal schema, removing explicit entity pair representations. Instead of learning vector representations for each entity pair in our training set, we treat an entity pair as a function of its relation types. In experimental results on the FB15k-237 benchmark we demonstrate that we can match the performance of a comparable model with explicit entity pair representations using a model of attention over relation types. We further demonstrate that the model performs with nearly the same accuracy on entity pairs never seen during training.
Genital injuries in adults.
The examination of the rape victim should focus on the therapeutic, forensic and psychological needs of the individual patient. One aspect will be an examination for ano-genital injuries. From a medical perspective, they tend to be minor and require little in the way of treatment. They must be considered when assessing the risk of blood-borne viruses and the need for prophylaxis. From a forensic perspective, an understanding of genital injury rates, type of injury, site and healing may assist the clinician to interpret the findings in the context of the allegations that have been made. There are many myths and misunderstandings about ano-genital injuries and rape. The clinician has a duty to dispel these.
Particle size analysis by image analysis
A phase I pharmacokinetics study of 9-nitrocamptothecin in patients with advanced solid tumors
9-Nitrocamptothecin (9-NC) is a novel orally administered camptothecin analog. The purpose of this study is to evaluate the pharmacokinetics and safety of 9-nitrocamptothecin in patients with advanced solid tumors. The 23 patients for a single-dose pharmacokinetic experiment were divided into 3 dosing cohorts. The dosage of 9-nitrocamptothecin capsule was 1.25, 1.5 and 1.75 mg/m2, respectively. The 8 patients for a multiple-dose pharmacokinetic study were orally administered 9-nitrocamptothecin 1.5 mg/m2 for 5 consecutive days. Determination of the plasma concentration of 9-nitrocamptothecin was performed by high-performance liquid chromatography-ultraviolet detector technique, and determination of plasma concentration of 9-aminocamptothecin was performed by high-performance liquid chromatography-fluorescence detector technique. In the single-dose pharmacokinetic study, the mean ± SD 9-nitrocamptothecin C max were 94.49 ± 41.38, 115.56 ± 63.27 and 147.57 ± 38.19 ng/mL; AUC0–36 were 877.14 ± 360.90, 961.33 ± 403.58 and 1,189.75 ± 405.80 ng h/mL, respectively; the mean ± SD 9-aminocamptothecin C max were 12.85 ± 6.46, 10.72 ± 6.58 and 28.74 ± 31.94 ng/mL; AUC0–36 were 157.61 ± 111.61, 88.71 ± 39.51 and 173.52 ± 122.19 ng h/mL, respectively. In the multiple-dose pharmacokinetic study, the mean ± SD 9-nitrocamptothecin AUCss was 907.04 ± 736.47 ng h/mL, C max was 85.98 ± 47.52 ng/mL, C min was 18.91 ± 22.50 ng/mL, C av was 37.79 ± 30.69 ng/mL, DF was 2.16 ± 0.87; the mean ± SD 9-aminocamptothecin AUCss was 442.73 ± 308.39 ng h/mL, C max was 34.83 ± 18.31 ng/mL, C min was 10.32 ± 6.95 ng/mL, C av was 18.45 ± 12.85 ng/mL, DF was 1.34 ± 0.42. Comparing single-dose 1.5 mg/m2 group with multiple-dose 1.5 mg/m2 group, no significant difference was observed in 9-NC pharmacokinetic parameters, but with respect to the metabolite, significant differences were observed in C max and AUC. The toxicity of 9-NC varied from mild to moderate. 
No grade 3 or grade 4 toxicity was observed during the study. There was 2- to 13-fold variability in 9-NC and 9-AC exposure among different patients for any given dose of 9-NC. All participants had good tolerance throughout the study. 9-NC and 9-AC exposure did not increase proportionally to the dose over the range from 1.25 to 1.75 mg/m2. After 5-day continuous administration, accumulation was observed in the metabolite 9-AC, but not in 9-NC.
Heart Rate Variability in Congenital Central Hypoventilation Syndrome
ABSTRACT: Heart rate variability was assessed in 12 patients with congenital central hypoventilation syndrome (CCHS) and in age- and sex-matched controls using SD of time intervals between R waves (R-R intervals), R-R interval histograms, spectral analysis, and Poincaré plots of sequential R-R intervals over a 24-h period using ambulatory monitoring. Mean heart rates in patients with CCHS were 103.3 ± 17.7 SD and in controls were 98.8 ± 21.6 SD (p > 0.5, NS). SD analysis of R-R intervals showed similar results in both groups (CCHS 102.2 ± 36.0 ms versus controls 126.1 ± 43.3 ms; p > 0.1, NS). Spectral analysis revealed that, for similar epochs sampled during quiet sleep and wakefulness, the ratios of low-frequency band to high-frequency band spectral power were increased for 11 of 12 patients with CCHS during sleep, whereas a decrease in these ratios was consistently observed in all controls during comparable sleep states (χ2 = 20.31; p < 0.000007). During wakefulness, the ratios of low-frequency band to high-frequency band spectral power were similar in both patients with CCHS and controls. Poincaré plots displayed significantly reduced beat-to-beat changes at slower heart rates in the CCHS patients (χ2 = 24.0; p < 0.000001). The scatter of points in CCHS Poincaré plots was easily distinguished from that of controls. All CCHS patients showed disturbed variability on one or more measures. The changes in moment-to-moment heart rate variability suggest that, in addition to a loss of ventilatory control, CCHS patients exhibit a dysfunction in autonomic nervous system control of the heart.
Low risk for nephrogenic systemic fibrosis in nondialysis patients who have chronic kidney disease and are investigated with gadolinium-enhanced magnetic resonance imaging.
BACKGROUND AND OBJECTIVES During the past decade, nephrogenic systemic fibrosis (NSF) has been reported in patients who have severe renal impairment and have been exposed to a gadolinium (Gd)-based contrast agent during magnetic resonance imaging (MRI). As a result of positive reporting bias, many suitable patients with chronic kidney disease (CKD) are being denied a highly important form of investigation that can be safely undertaken. We analyzed the safety of Gd-MRI in patients with CKD and varying levels of estimated GFR (eGFR). DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS We performed a retrospective analysis of 2053 unselected patients who had CKD and had received Gd-MRI between 1999 and 2009, so as to determine the risk for NSF related to level of CKD, nature of Gd preparation, and Gd dosage. RESULTS Overall, 2053 patients (63.5% men; mean age 60.6 +/- 15.7 years) had 2278 Gd-MRI scans; their mean eGFR was 40.7 +/- 23.7 ml/min. A total of 918 (44.7%) patients had stage 3, 491 (23.9%) had stage 4, and 117 (5.7%) had predialysis stage 5 CKD. No cases of NSF were identified during an average follow-up period of 28.6 +/- 18.2 months. CONCLUSIONS In this study, no patients developed NSF during extended follow-up, even after multiple Gd doses in some. Gd-MRI can be safely undertaken in the majority of patients with CKD, but caution is merited for dialysis patients and those with acute kidney injury, with relative caution for predialysis patients with stage 5 CKD.
Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.
Purpose To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models. Materials and Methods The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set composed of 913 examinations from the publicly available Digital Hand Atlas was compared with published reports of an existing automated model. Results The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years of a previously reported model. Conclusion A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models. © RSNA, 2017 An earlier incorrect version of this article appeared online. This article was corrected on January 19, 2018.
High Performance and Reliable NIC-Based Multicast over Myrinet/GM-2
Multicast is an important collective operation for parallel programs. Some Network Interface Cards (NICs), such as Myrinet, have programmable processors that can be programmed to support multicast. This paper proposes a high performance and reliable NIC-based multicast scheme, in which a NIC-based multisend mechanism is used to send multiple replicas of a message to different destinations, and a NIC-based forwarding mechanism forwards the received packets without intermediate host involvement. We have explored different design alternatives and implemented the proposed scheme with the set of best alternatives over Myrinet/GM-2. MPICH-GM has also been modified to take advantage of this scheme. At the GM level, the NIC-based multicast improves the multicast latency by a factor of up to 1.48 for 512-byte messages, and by a factor of up to 1.86 for 16KB messages over 16 nodes, compared to the traditional host-based multicast. Similar improvements are also achieved at the MPI level. In addition, it is demonstrated that NIC-based multicast is tolerant to process skew and has significant benefits for large systems.
Automated sleep stage identification system based on time-frequency analysis of a single EEG channel and random forest classifier
In this work, an efficient automated new approach for sleep stage identification based on the new standard of the American Academy of Sleep Medicine (AASM) is presented. The proposed approach employs time-frequency analysis and entropy measures for feature extraction from a single electroencephalograph (EEG) channel. Three time-frequency techniques were deployed for the analysis of the EEG signal: the Choi-Williams distribution (CWD), the continuous wavelet transform (CWT), and the Hilbert-Huang transform (HHT). Polysomnographic recordings from sixteen subjects were used in this study, and features were extracted from the time-frequency representation of the EEG signal using Renyi's entropy. The classification of the extracted features was done using a random forest classifier. The performance of the new approach was tested by evaluating the accuracy and the kappa coefficient for the three time-frequency distributions: CWD, CWT, and HHT. The CWT time-frequency distribution outperformed the other two distributions and showed excellent performance, with an accuracy of 0.83 and a kappa coefficient of 0.76.
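One concrete piece of this pipeline, Renyi's entropy of a normalized time-frequency representation, can be computed as follows (a sketch; the order alpha=3 is a common choice for time-frequency analysis, not necessarily the one used in the paper):

```python
import numpy as np

def renyi_entropy(tfr, alpha=3):
    """Renyi entropy of order alpha for a time-frequency
    representation, normalized to a probability distribution."""
    p = np.abs(tfr).ravel()
    p = p / p.sum()
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)
```

A flat distribution over N time-frequency cells yields log2(N) bits, while a single impulse yields 0, so the entropy tracks how concentrated the EEG energy is in the time-frequency plane across sleep stages.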
The influence of age at onset and duration of illness on long-term outcome in patients with obsessive-compulsive disorder: A report from the International College of Obsessive Compulsive Spectrum Disorders (ICOCS)
Several studies have reported a negative effect of early onset and long duration of illness on long-term outcome in psychiatric disorders, including Obsessive-Compulsive Disorder (OCD). OCD is a prevalent, comorbid and disabling condition, associated with reduced quality of life and overall well-being for affected patients and related caregivers. The present multicenter naturalistic study sought to assess the influence of early onset and duration of illness on long-term outcome in a sample of 376 OCD out-patients worldwide, as part of the "International College of Obsessive-Compulsive Spectrum Disorders" (ICOCS) network. Binary logistic regressions were performed with age at onset and duration of illness, as continuous independent variables, on a series of different outcome dependent variables, including lifetime number of hospitalizations and suicide attempts, poly-therapy and psychiatric comorbidity. Correlations in terms of disability (SDS) were analyzed as well. Results showed that a longer duration of illness (but not earlier age at onset) was associated with hospitalization (odds ratio=1.03, p=0.01), earlier age at onset with CBT (odds ratio=0.94, p<0.001), and both a later age at onset (odds ratio=1.05, p=0.02) and a shorter duration of illness (odds ratio=0.93, p=0.02) with panic disorder comorbidity. In addition, earlier age at onset inversely correlated with higher social disability (r=-0.12, p=0.048), and longer duration of illness directly correlated with higher disability in work, social and family life (r=0.14, p=0.017; r=0.13, p=0.035; r=0.14, p=0.02). The findings from the present large, multicenter study indicate early onset and long duration of illness as overall negative predictors of long-term outcome in OCD.
Investigation of 18F-FDG PET in the selection of patients with breast cancer as candidates for sentinel node biopsy after neoadjuvant therapy
The main objective of this study was to determine the role of [18F]-2-fluoro-2-deoxy-D-glucose positron emission tomography (FDG PET) in the selection of patients with breast cancer as candidates for sentinel node biopsy (SNB) after neoadjuvant therapy. Forty-four patients with primary breast cancer clinically classified as cT2, cT3 or cT4a-c cN0-N2 or cN3 M0 and with a baseline FDG PET scan positive both in the site of primary tumour and axillary lymph nodes underwent neoadjuvant therapy and then a second FDG PET scan. In the case of axillary FDG PET uptake, patients underwent axillary lymph node dissection (ALND). If the second FDG PET scan was negative for axillary involvement, SNB was performed in order to evaluate axillary lymph node status. Only in the case of SN positivity did total ALND follow. Specificity and positive predictive value of FDG PET for detection of axillary lymph node metastases after neoadjuvant therapy were as high as 83% (95% confidence interval: 51–97%) and 85% (95% confidence interval: 54–97%), respectively, whereas sensitivity, negative predictive value and diagnostic accuracy were inadequate for a correct staging (34, 32 and 48%, respectively). The poor sensitivity of FDG PET in detecting axillary lymph node metastases makes SNB mandatory in cases of a negative scan. The relatively high positive predictive value seems to suggest a role of FDG PET in selecting patients who, after neoadjuvant therapy, are candidates for ALND, avoiding SNB. However, this issue requires confirmation in a larger series of patients.
The effect of shame and shame memories on paranoid ideation and social anxiety.
BACKGROUND Social wariness and anxiety can take different forms. Paranoid anxiety focuses on the malevolence of others, whereas social anxiety focuses on the inadequacies of the self in competing for social position and social acceptance. This study investigates whether shame and shame memories are differently associated with paranoid and social anxieties. METHOD Shame, traumatic impact of shame memory, centrality of shame memory, paranoia and social anxiety were assessed using self-report questionnaires in 328 participants recruited from the general population. RESULTS Results from path analyses show that external shame is specifically associated with paranoid anxiety. In contrast, internal shame is specifically associated with social anxiety. In addition, shame memories that function like traumatic memories, or that are a central reference point for the individual's self-identity and life story, are significantly associated with paranoid anxiety, even when current external and internal shame are considered at the same time. Thus, traumatic impact of shame memory and centrality of shame memory predict paranoia (but not social anxiety) even when controlling for current feelings of shame. CONCLUSION Our study supports the evolutionary model suggesting there are two different types of 'conspecific' anxiety, with different evolutionary histories, functions and psychological processes. Paranoia, but less so social anxiety, is associated with the traumatic impact and centrality of shame memories. Researchers and clinicians should distinguish between types of shame memory, particularly those in which the self might have felt vulnerable and subordinate and perceived others as threatening and hostile, holding malevolent intentions towards the self.
COMPETITIVENESS, COMPETITION DEFENSE POLICY AND NATIONAL SOVEREIGNTY: CONSIDERATIONS ON THE BRAZILIAN CASE
This article emphasizes that a System for the Defense of Competition is fundamental to safeguarding the legitimate interests of a country's population. However, generally accepted assumptions constrain the logic of national development, particularly for peripheral countries, given the rapid changes observed in the technological paradigm. It should be borne in mind that the capitalist production system is efficient precisely because there are barriers to competition sufficient for the consolidation of investments, which requires growth in scale and concentration. [Footnote 1: This article is the product of the authors' academic contribution and does not necessarily reflect the opinion of the
OUHANDS database for hand detection and pose recognition
In this paper, we present OUHANDS, a publicly available static hand pose database, together with protocols for training and evaluating hand pose classification and hand detection methods. A comparison between the OUHANDS database and existing databases is given, and baseline results for both protocols are presented.
Symmetric Nonnegative Matrix Factorization for Graph Clustering
Nonnegative matrix factorization (NMF) provides a lower rank approximation of a nonnegative matrix, and has been successfully used as a clustering method. In this paper, we offer some conceptual understanding for the capabilities and shortcomings of NMF as a clustering method. Then, we propose Symmetric NMF (SymNMF) as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix. Unlike NMF, however, SymNMF is based on a similarity measure between data points, and factorizes a symmetric matrix containing pairwise similarity values (not necessarily nonnegative). We compare SymNMF with the widely-used spectral clustering methods, and give an intuitive explanation of why SymNMF captures the cluster structure embedded in the graph representation more naturally. In addition, we develop a Newton-like algorithm that exploits second-order information efficiently, so as to show the feasibility of SymNMF as a practical framework for graph clustering. Our experiments on artificial graph data, text data, and image data demonstrate the substantially enhanced clustering quality of SymNMF over spectral clustering and NMF. Therefore, SymNMF is able to achieve better clustering results on both linear and nonlinear manifolds, and serves as a potential basis for many extensions.
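The SymNMF objective described above is min_{H >= 0} ||A - H H^T||_F^2, where A holds pairwise similarities and each row of H gives soft cluster memberships. The abstract's Newton-like algorithm is not reproduced here; as a simpler illustrative sketch, the same objective can be attacked with the well-known damped multiplicative update H <- H * (1 - beta + beta * (A H) / (H H^T H)) with beta = 1/2 (an assumption for this example, not the paper's method):

```python
import numpy as np

def symnmf(A, k, iters=300, beta=0.5, eps=1e-12, seed=0):
    """Sketch of SymNMF: approximately minimize ||A - H H^T||_F^2 over H >= 0
    for a symmetric similarity matrix A, using a damped multiplicative update.
    (The paper proposes a Newton-like method; this update is a simple stand-in.)
    """
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))       # nonnegative random init
    for _ in range(iters):
        # elementwise update; eps guards against division by zero
        H = H * (1.0 - beta + beta * (A @ H) / (H @ (H.T @ H) + eps))
    return H

# Toy symmetric similarity matrix with two obvious 3-node clusters.
A = np.full((6, 6), 0.1)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0

H = symnmf(A, k=2)
labels = H.argmax(axis=1)   # hard cluster assignment: largest entry per row
```

Because nonnegativity is enforced directly on H, the rows of H are interpretable as cluster indicators, which is the intuition behind the claim that SymNMF captures cluster structure more naturally than spectral methods (whose eigenvectors can be mixed-sign).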
Reamer-irrigator-aspirator bone graft and bi Masquelet technique for segmental bone defect nonunions: a review of 25 cases.
INTRODUCTION Segmental bone loss, whether from trauma, tumor or infection, is a challenging clinical entity. Amputation is a possible outcome and part of the decision-making process. Surgical management is almost always needed and can require several interventions to obtain bone union. A staged protocol of obtaining a clean, viable soft tissue bed, placing a PMMA antibiotic-impregnated spacer to induce a neovascular and bioactive membrane, and then performing autogenous bone grafting has been reported with good outcomes. Our study attempts to expand on this data by evaluating the use of RIA bone graft for the treatment of segmental bone loss nonunions following trauma and/or infection. METHODS Following IRB approval, two orthopaedic trauma fellowship-trained surgeons used one surgical protocol for the management of segmental bone defect nonunions. Femoral RIA bone graft was used as the graft source when possible. We retrospectively evaluated patients with segmental bone loss of the lower extremity over a two-year period. Our primary endpoint was clinical and radiographic bone union; a secondary endpoint was RIA-related complications. Additionally, using known mathematical equations, we show a plausible way of quantifying the amount of bone loss from a long bone based on bone shape, defect shape and the length of bone loss measured on plain radiographs. RESULTS Twenty-five patients with 27 segmental bone loss nonunions were evaluated. Nineteen were tibial and eight were femoral. Fifteen (56%) nonunions were open fractures with bone loss and 12 (44%) involved bone loss related to infection or surgical debridement. The average deficit was 5.8 cm in length (range 1-25 cm). At six months and one year postoperatively, 70% and 90% of nonunions, respectively, were healed clinically and radiographically. There were no RIA-related complications. DISCUSSION RIA bone graft has been shown to be a very bioactive material.
Several studies support the use of this bone graft for the treatment of nonunion, including one recent study evaluating 13 patients with segmental bone loss. Our study expands on this data by evaluating its use as the primary source of bone graft for the treatment of segmental bone loss nonunions in the lower extremity. CONCLUSION RIA bone graft for the treatment of segmental bone defect nonunion of the lower extremity appears safe and can yield predictable results when sound surgical principles are followed. Ninety percent of our nonunions were healed at one year following a single bone graft procedure. Very large defects, once a formidable clinical dilemma, can be managed successfully with the use of RIA bone graft.
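The abstract mentions quantifying bone loss from bone shape, defect shape and measured defect length, but does not reproduce the equations. As a purely illustrative sketch, assuming the missing diaphyseal segment is modeled as a hollow cylinder (cortical tube), the defect volume could be approximated as follows (the function name and the cylinder model are assumptions, not the paper's actual formulas):

```python
import math

def tubular_defect_volume_cc(outer_diameter_cm, canal_diameter_cm, length_cm):
    """Hypothetical model: treat a diaphyseal segmental defect as a hollow
    cylinder of cortical bone.

    volume = pi * (R_outer^2 - R_canal^2) * defect length
    Diameters and length in cm give volume in cubic centimeters.
    """
    r_out = outer_diameter_cm / 2.0
    r_in = canal_diameter_cm / 2.0
    return math.pi * (r_out ** 2 - r_in ** 2) * length_cm

# e.g. a defect of the study's average length (5.8 cm) in a hypothetical
# 3 cm diameter bone with a 1 cm medullary canal:
vol = tubular_defect_volume_cc(3.0, 1.0, 5.8)
```

An estimate like this, read off a plain radiograph, gives the surgeon a rough target for how much RIA graft volume must be harvested.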