title | abstract
---|---
Intravenous recombinant human relaxin in compensated heart failure: a safety, tolerability, and pharmacodynamic trial. | BACKGROUND
Relaxin is upregulated in human heart failure (HF). Animal and clinical data suggest beneficial hemodynamic and renal effects from vasodilation. We determined safety, tolerability, and pharmacodynamic effects of human Relaxin in stable HF.
METHODS AND RESULTS
Sixteen patients were treated with open-label intravenous Relaxin in 3 dose-escalation cohorts, monitored hemodynamically during the 24-hour infusion and postinfusion periods, and followed until Day 30. The safety demonstrated in Group A (8-hour sequential infusions at dose levels of 10, then 30, and then 100 μg/kg/day equivalents) allowed escalation to Group B (240, 480, and 960 μg/kg/day). The highest safe dose, 960 μg/kg/day, was selected for a 24-hour infusion in Group C. Relaxin showed no adverse effects; it produced hemodynamic effects consistent with vasodilation (i.e., trends toward increases in cardiac index and decreases in pulmonary wedge pressure and circulating NT-proBNP) without inducing hypotension, and it improved markers of renal function (creatinine, blood urea nitrogen). The highest dose caused a transient elevation in creatinine and blood urea nitrogen at Day 9 that was without apparent clinical significance.
CONCLUSIONS
Relaxin was safe and well-tolerated in patients with stable HF, and preliminary pharmacodynamic responses suggest it causes vasodilation. Further evaluation of the safety and efficacy of this drug in HF appears warranted. |
Magma-assisted rifting in Ethiopia | The rifting of continents and evolution of ocean basins is a fundamental component of plate tectonics, yet the process of continental break-up remains controversial. Plate driving forces have been estimated to be as much as an order of magnitude smaller than those required to rupture thick continental lithosphere. However, Buck has proposed that lithospheric heating by mantle upwelling and related magma production could promote lithospheric rupture at much lower stresses. Such models of mechanical versus magma-assisted extension can be tested, because they predict different temporal and spatial patterns of crustal and upper-mantle structure. Changes in plate deformation produce strain-enhanced crystal alignment and increased melt production within the upper mantle, both of which can cause seismic anisotropy. The Northern Ethiopian Rift is an ideal place to test break-up models because it formed in cratonic lithosphere with minor far-field plate stresses. Here we present evidence of seismic anisotropy in the upper mantle of this rift zone using observations of shear-wave splitting. Our observations, together with recent geological data, indicate a strong component of melt-induced anisotropy with only minor crustal stretching, supporting the magma-assisted rifting model in this area of initially cold, thick continental lithosphere. |
End-to-End Goal-Driven Web Navigation | We propose goal-driven web navigation as a benchmark task for evaluating an agent with abilities to understand natural language and plan in partially observed environments. In this challenging task, an agent navigates through a website, represented as a graph with web pages as nodes and hyperlinks as directed edges, to find a web page in which a query appears. To succeed, the agent needs sophisticated high-level reasoning over natural language and efficient sequential decision-making capability. We release a software tool, called WebNav, that automatically transforms a website into this goal-driven web navigation task, and as an example, we release WikiNav, a dataset constructed from the English Wikipedia. We extensively evaluate different variants of neural-net-based artificial agents on WikiNav and observe that the proposed goal-driven web navigation task reflects the advances in models well, making it a suitable benchmark for evaluating future progress. Furthermore, we extend WikiNav with question-answer pairs from Jeopardy! and test the proposed agent, based on recurrent neural networks, against strong inverted-index-based search engines. The artificial agents trained on WikiNav outperform the engine-based approaches, demonstrating the capability of the proposed goal-driven navigation as a good proxy for measuring progress in real-world tasks such as focused crawling and question answering. |
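The task setup above (pages as nodes, hyperlinks as directed edges, success when the query appears on the current page) is concrete enough to sketch. Below is a minimal toy environment in Python; the class and method names are illustrative and not part of the released WebNav tool.

```python
# Minimal sketch of a goal-driven web-navigation environment:
# pages are nodes, hyperlinks are directed edges, and an episode
# succeeds when the agent reaches a page containing the query.
# Names (WebNavEnv, step/reset) are illustrative, not the WebNav API.

class WebNavEnv:
    def __init__(self, links, page_text, start, query):
        self.links = links          # dict: page -> list of linked pages
        self.page_text = page_text  # dict: page -> page content
        self.start = start
        self.query = query
        self.current = start

    def reset(self):
        self.current = self.start
        return self.observe()

    def observe(self):
        # Partial observability: the agent sees only the current page
        # and its outgoing hyperlinks, never the whole graph.
        return self.page_text[self.current], self.links[self.current]

    def step(self, next_page):
        assert next_page in self.links[self.current], "must follow a hyperlink"
        self.current = next_page
        done = self.query in self.page_text[next_page]
        return self.observe(), (1.0 if done else 0.0), done

# Toy usage
links = {"Home": ["A", "B"], "A": ["B"], "B": ["Home"]}
text = {"Home": "welcome", "A": "about turtles", "B": "about rockets"}
env = WebNavEnv(links, text, start="Home", query="rockets")
obs = env.reset()
obs, reward, done = env.step("B")
print(reward, done)  # 1.0 True
```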
Knowledge Adaptation: Teaching to Adapt | Domain adaptation is crucial in many real-world applications where the distribution of the training data differs from the distribution of the test data. Previous Deep Learning-based approaches to domain adaptation need to be trained jointly on source and target domain data and are therefore unappealing in scenarios where models need to be adapted to a large number of domains or where a domain is evolving, e.g. spam detection where attackers continuously change their tactics. To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge Distillation (Bucilua et al., 2006; Hinton et al., 2015) to the domain adaptation scenario. We show how a student model achieves state-of-the-art results on unsupervised domain adaptation from multiple sources on a standard sentiment analysis benchmark by taking into account the domain-specific expertise of multiple teachers and the similarities between their domains. When learning from a single teacher, using domain similarity to gauge trustworthiness is inadequate. To this end, we propose a simple metric that correlates well with the teacher’s accuracy in the target domain. We demonstrate that incorporating high-confidence examples selected by this metric enables the student model to achieve state-of-the-art performance in the single-source scenario. |
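For readers unfamiliar with the Knowledge Distillation objective (Hinton et al., 2015) that Knowledge Adaptation extends, a minimal PyTorch sketch follows; it shows only the single-teacher soft-target loss, not the paper's multi-teacher weighting or confidence metric.

```python
# Standard knowledge-distillation loss (Hinton et al., 2015) that
# Knowledge Adaptation extends: the student matches the teacher's
# temperature-softened predictions. Minimal PyTorch sketch; the
# multi-teacher weighting from the paper is not reproduced here.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft targets from the teacher, softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between teacher and student distributions;
    # the T**2 factor keeps gradient magnitudes comparable across T.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2

student_logits = torch.randn(8, 2, requires_grad=True)
teacher_logits = torch.randn(8, 2)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```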
A personalized collaborative Digital Library environment: a model and an application | The Web, and consequently the information contained in it, is growing rapidly. Every day a huge amount of newly created information is electronically published in Digital Libraries, whose aim is to satisfy the users’ information needs. In this paper, we envisage a Digital Library not only as an information resource where users may submit queries to satisfy their daily information need, but also as a collaborative working and meeting space of people sharing common interests. Indeed, we will present a personalized collaborative Digital Library environment, where users may organize the information space according to their own subjective view, may build communities, may become aware of each other, exchange information and knowledge with other users, and may get recommendations based on preference patterns of users. |
Effectiveness of peer education interventions for HIV prevention, adolescent pregnancy prevention and sexual health promotion for young people: a systematic review of European studies. | Peer education remains a popular strategy for health promotion and prevention, but evidence of its effectiveness is still limited. This article presents a systematic review of peer education interventions in the European Union that were published between January 1999 and May 2010. The objective of the review is to determine the effectiveness of peer education programs for human immunodeficiency virus (HIV) prevention, adolescent pregnancy prevention and promotion of sexual health among young people. Standardized methods of searching and data extraction were utilized and five studies were identified. Although a few statistically significant and non-significant changes were observed in the studies, it is concluded that, overall, when compared to standard practice or no intervention, there is no clear evidence of the effectiveness of peer education concerning HIV prevention, adolescent pregnancy prevention and sexual health promotion for young people in the member countries of the European Union. Further research is needed to determine factors that contribute to program effectiveness. |
Automatic Annotation of Structured Facts in Images | Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions. Example structured facts include attributed objects (e.g., <flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man, walking, dog>), and positional information (e.g., <vase, on, table>). The collected annotations are in the form of fact-image pairs (e.g.,<man, walking, dog> and an image region containing this fact). With a language approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from images with captions and localized them in images in less than one day of processing time on standard CPU platforms. |
Judgment under Uncertainty: Heuristics and Biases. | This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty. |
Surrealist Art and Writing, 1919-1939: The Gold of Time | 1. Introduction 2. Breaking the institutional codes: revolution in the classroom 3. The politics of dream and the dream of politics 4. In the service of which revolution? An aborted incarnation of the dream: Marxism and Surrealism 5. Surrealism and painting (The ineffable) 6. The Surrealist woman and the colonial other. |
Do pancrelipase delayed-release capsules have a protective role against nonalcoholic fatty liver disease after pancreatoduodenectomy in patients with pancreatic cancer? A randomized controlled trial. | BACKGROUND
The aim of this randomized controlled trial (RCT) was to investigate whether pancrelipase protects against nonalcoholic fatty liver disease (NAFLD) development after pancreatoduodenectomy in patients with pancreatic cancer better than conventional pancreatic enzyme supplementation.
METHODS
A total of 57 patients were randomly assigned to the study group (n = 29; pancrelipase replacement therapy) or the control group (n = 28; conventional pancreatic enzyme supplementation). NAFLD was defined as a liver-to-spleen attenuation ratio less than 0.9 on CT. Clinical and laboratory findings were also assessed.
RESULTS
NAFLD was observed in 6/29 patients (21%) in the study group, and 11/28 patients (39%) in the control group, but this was not a statistically significant difference. In the control group, crossover to pancrelipase replacement therapy upon NAFLD diagnosis produced improvement in five out of 10 patients. Multivariate analysis showed that advanced age and extended resection were independent risk factors for NAFLD development.
CONCLUSION
This RCT did not show a significant protective effect of pancrelipase replacement therapy over conventional pancreatic enzyme supplementation on NAFLD development after pancreatoduodenectomy for pancreatic cancer. Further studies are clearly required to investigate the etiology of and new therapeutic strategies for treatment-resistant NAFLD (UMIN 000019817). |
Pattern-oriented modeling of agent-based complex systems: lessons from ecology. | Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity. |
Trichoscopy Simplified | Dermoscopy has long been used to examine skin surfaces and skin lesions. Examination of the hair and scalp, by contrast, has only recently gained attention and popularity through a practical tool called trichoscopy, which can be described simply as dermoscopy of the hair and scalp. Trichoscopy is a valuable tool for examining and assessing active scalp disease, and several hair and scalp signs are specific to particular disorders. These signs include yellow dots, dystrophic hairs, cadaverized hairs (black dots), white dots and exclamation mark hairs. Trichoscopy magnifies hair shafts at high resolution, enabling detailed examination and measurements that the naked eye cannot make. Trichoscopy is currently considered the newest frontier in the diagnosis of hair and scalp disease. The aim of this paper is to simplify and summarize the main trichoscopic findings in the hair and scalp disorders that are commonly encountered in clinical dermatology settings. |
Left ventricular reverse remodeling in percutaneous and surgical aortic bioprostheses: an echocardiographic study. | BACKGROUND
Surgical aortic valve replacement (SAVR) is the definitive proven therapy for patients with severe aortic stenosis who have symptoms or decreased left ventricular (LV) function. The development of transcatheter aortic valve implantation (TAVI) offers a viable and "less invasive" option for the treatment of patients with critical aortic stenosis at high risk with conventional approaches. The main objective of this study was the comparison of LV hemodynamic and structural modifications (reverse remodeling) between percutaneous and surgical approaches in the treatment of severe aortic stenosis.
METHODS
Fifty-eight patients who underwent TAVI with the CoreValve bioprosthetic valve were compared with 58 patients with similar characteristics who underwent SAVR. Doppler echocardiographic data were obtained before the intervention, at discharge, and after 6-month to 12-month follow-up.
RESULTS
Mean transprosthetic gradient at discharge was lower (P<.003) in the TAVI group (10±5 mm Hg) compared with the SAVR group (14±5 mm Hg) and was confirmed at follow-up (10±4 vs 13±4 mm Hg, respectively, P<.001). Paravalvular leaks were more frequent in the TAVI group (trivial to mild, 69%; moderate, 14%) than in the SAVR group (trivial to mild, 30%; moderate, 0%) (P<.0001). The incidence of severe prosthesis-patient mismatch (PPM) was significantly lower (P<.004) in the TAVI group (12%) compared with the SAVR group (36%). At follow-up, LV mass and LV mass indexed to height and to body surface area improved in both groups, with no significant difference. In patients with severe PPM, only the TAVI subgroup showed significant reductions in LV mass. LV ejection fraction improved at follow-up significantly only in TAVI patients compared with baseline values (from 50.2±9.6% to 54.8±7.3%, P<.0001).
CONCLUSIONS
Hemodynamic performance after TAVI was shown to be superior to that after SAVR in terms of transprosthetic gradient, LV ejection fraction, and the prevention of severe PPM, but with a higher incidence of aortic regurgitation. Furthermore, LV reverse remodeling was observed in all patients in the absence of PPM, while the same remodeling occurred only in the TAVI subgroup when severe PPM was present. |
The importance of the unsuppressed glands in the study of intact parathyroid hormone disappearance after parathyroid adenomectomy. | BACKGROUND
In the usual techniques for intraoperative intact parathyroid hormone (iPTH) monitoring for primary hyperparathyroidism, the normal glands are implicitly considered suppressed. On the contrary, we believe, as do other researchers, that they are not totally suppressed.
METHODS
For this reason, we considered the introduction of an infusion from the unsuppressed normal glands (UNG), described by an influx constant (IC, pg/ml per min), into the formulation of a two-compartment model. For the blood compartment, we have C(t) = A·exp(−at) + B·exp(−bt) + EV, where A + B + EV = iPTH concentration at zero time (clamping), EV (equilibrium value) = IC/k, 'a' and 'b' are reciprocals of the time constants of the two exponentials, and k is the rate constant of elimination from the blood. The experimental data were obtained using a standard IRMA method, collecting samples in 20 patients during and following adenomectomy.
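The stated blood-compartment model can be fitted directly to sampled iPTH concentrations. A minimal sketch with SciPy, using synthetic data and illustrative parameter values rather than the study's patient data:

```python
# Fitting the stated two-compartment model
#   C(t) = A*exp(-a*t) + B*exp(-b*t) + EV
# to an iPTH disappearance curve. Synthetic data with illustrative
# parameter values; not the patient data from the study.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, a, B, b, EV):
    return A * np.exp(-a * t) + B * np.exp(-b * t) + EV

rng = np.random.default_rng(0)
t = np.arange(0, 31, 2.5)                    # minutes after clamping
true = model(t, 80.0, 0.9, 30.0, 0.12, 12.0) # pg/ml, illustrative values
obs = true + rng.normal(0, 2, t.size)        # assay noise

p0 = [60, 0.5, 20, 0.05, 10]                 # rough initial guess
popt, _ = curve_fit(model, t, obs, p0=p0, maxfev=10000)
A, a, B, b, EV = popt
k = 0.15                                     # assumed elimination rate (1/min)
# The model states EV = IC/k, so the UNG influx follows as IC = EV*k.
print(f"EV = {EV:.1f} pg/ml, implied influx IC = {EV * k:.2f} pg/ml/min")
```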
RESULTS
In spite of the variability among the patients, all fits were very good, thus confirming the importance of the UNG contribution to the shaping of the disappearance curve. For this reason, the relationship between the constant infusion from the UNG and the basal iPTH level at the induction of anaesthesia (BV), was studied.
CONCLUSIONS
The existence of a negative correlation, together with the determination of a regression curve (IC=6.5BV), not only confirmed our assumptions, but also revealed the theoretical possibility of a priori knowledge of the iPTH contribution from the UNG. Hence, there is a theoretical possibility of discriminating between this contribution and that of the remaining (if any) affected gland(s). |
Combining Decision Procedures | We give a detailed survey of the current state-of-the-art methods for combining decision procedures. We review the Nelson-Oppen combination method, Shostak method, and some very recent results on the combination of theories over non-disjoint signatures. |
A Unifying Framework of Anytime Sparse Gaussian Process Regression Models with Stochastic Variational Inference for Big Data | This paper presents a novel unifying framework of anytime sparse Gaussian process regression (SGPR) models that can produce good predictive performance fast and improve their predictive performance over time. Our proposed unifying framework reverses the variational inference procedure to theoretically construct a non-trivial, concave functional that is maximized at the predictive distribution of any SGPR model of our choice. As a result, a stochastic natural gradient ascent method can be derived that involves iteratively following the stochastic natural gradient of the functional to improve its estimate of the predictive distribution of the chosen SGPR model and is guaranteed to achieve asymptotic convergence to it. Interestingly, we show that if the predictive distribution of the chosen SGPR model satisfies certain decomposability conditions, then the stochastic natural gradient is an unbiased estimator of the exact natural gradient and can be computed in constant time (i.e., independent of data size) at each iteration. We empirically evaluate the trade-off between the predictive performance vs. time efficiency of the anytime SGPR models on two real-world million-sized datasets. |
Children of Depressed Parents--A Public Health Opportunity. | One of the best-replicated findings in clinical psychiatry is that the biological offspring of depressed parents (usually mothers are studied) themselves have considerable emotional and functional problems, usually depression and anxiety. These findings have been shown cross-sectionally in infants1 and in prepubescent, adolescent, and adult offspring.2 Offspring followed up longitudinally show that their risk continues over time.3 The magnitude of the risk varies between 2-fold and 6-fold depending on the control group and outcome used as well as the phenotype definition. The Swedish study in this issue of JAMA Psychiatry by Shen et al4 contributes substantially to these findings. The strengths of the study are numerous. They include a new outcome— school performance based on a national standardized system— and a large sample comprising a nationwide birth cohort of more than 1 million children. Clinical studies may have more intense assessment of parents and offspring, but those samples are considerably smaller so that potential confounders cannot be controlled for (eg, parental sex, age, education, birth order, or substance abuse). Both depressed mothers and depressed fathers were included in the study, and depression was studied at different periods, including before the birth of the child, after birth, and as the child developed from ages 1 to 16 years. Another strength is that the data come from yet another country. So far, studies of the offspring of depressed parents have been from the United States, the Netherlands, the United Kingdom, Germany, and Australia, showing the universality of the findings, at least in Western countries. The findings clearly show that both maternal and paternal depression occurring any time during a child’s life up to age 16 years is associated with the child’s poor school performance. Maternal depression had the largest negative effect, especially in daughters. These sex findings are important because fathers have been excluded or poorly represented, so studies are underpowered to examine sex either in parents or offspring. All studies have shortcomings, especially observational ones, even those with large samples. This study is not immune. As the authors note, we cannot be certain that the fathers were living with their child and whether the reduced effect of paternal depression may be due to the absence of the father in the home or a real effect. Another limitation is the absence of the child’s diagnosis to determine its mediation effect, but that approach would have resulted in a different kind of study. Clear and replicated findings are an epidemiologist’s dream because they suggest directions for improving the health of the public. How does this study do? Numerous epidemiological studies around the world have shown that the onset and prevalence of depression are high, especially among women of childbearing age, so parental depression is a problem of large proportion.5 Depression in a parent is a modifiable risk factor because the parent’s symptoms can be treated. There is considerable evidence for the efficacy of a range of medications and evidence-based psychotherapy (alone or in combination) for the treatment of depression. The symptoms are amenable to treatment. Therefore, bringing the depressed parent into remission might help the child. 
Fortunately, those studies6,7 have also been performed and have shown that remission of the depressed mother, whether by medication or by psychotherapy, can reduce the child's problems. We can state with confidence that treatment for a depressed parent should be readily available, sustained, and aggressive to achieve remission. Furthermore, a child with emotional problems or even serious school problems indicates that the parent's own clinical needs should also be considered. Good research raises questions of a more specific nature. The study by Shen et al4 shows that paternal depression also had a negative effect on the children. Does treating the depressed father have the same influence on children? There has been difficulty recruiting fathers for these studies because men have lower rates of depression and are more reluctant to come for treatment. Moreover, we do not know how or if the father's absence may have affected the findings in this study. What are the best treatments? The results of a recent study7 suggested that medications that produce increased activity and irritability may be effective for the mother's depression, but this effect does not translate into improvement in the child. What are the best treatments for depressed pregnant women, considering the effect of maternal depression on the infant? Does psychotherapy directed toward parenting have added value? When is parental treatment insufficient for helping the child? Which children should also be treated? Does remission of maternal depression lead to long-term effects on the child? The longest follow-up of children of remitted mothers to date has been 1 year.8 Why are women and their daughters the most vulnerable? We celebrate replication of such findings with clear public health and clinical implications. However, in the midst of this celebration, we should not forget that our biological understandings of mechanism and risk in this domain are not satisfactory. Brain imaging and immunologic, electrophysiological, and genetic studies are under way that may guide the precision of both the risk and the treatment in the future. In the context of the strongly replicated findings on the biological offspring of depressed parents, biological studies may well profit from a familial, high-risk design. Studies of individuals at high risk for depression who are not yet ill may identify biomarkers. |
Combining Knowledge and Data Driven Insights for Identifying Risk Factors using Electronic Health Records | BACKGROUND
The ability to identify the risk factors related to an adverse condition, e.g., heart failure (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or literature) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data driven insight for risk factor identification.
METHODS
We present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge and data driven risk factors.
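One plausible way to realize such a regularizer is L1-penalized logistic regression in which known risk factors carry a smaller effective penalty, implemented here via the standard feature-rescaling trick. This is an illustrative sketch, not the authors' exact model:

```python
# "Knowledge-weighted" sparsity sketch: L1 logistic regression where
# known risk factors carry a smaller effective penalty. Rescaling a
# feature by w > 1 shrinks its effective L1 cost by 1/w, a standard
# trick. Illustrative only, not the authors' exact regularizer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_known, d_cand = 2000, 5, 50
X = rng.normal(size=(n, d_known + d_cand))
logit = X[:, 0] * 1.5 + X[:, 1] - X[:, 7] * 0.8      # a few true effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Penalty weights: known factors (first d_known columns) penalized less.
scale = np.r_[np.full(d_known, 3.0), np.ones(d_cand)]
clf = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
clf.fit(X * scale, y)

coef = clf.coef_.ravel() * scale                     # back to original scale
selected = np.flatnonzero(coef)
print("selected features:", selected)  # nonzero candidates beyond the
                                       # known set are "new" risk factors
```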
RESULTS
The approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnoses, medications, and lab results from 2003 to 2010. We demonstrate that the proposed method can identify complementary risk factors that are not among the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF, using the Area Under the ROC Curve (AUC) as the performance metric. The combined knowledge and data driven risk factors significantly outperform knowledge-based risk factors alone. Furthermore, the additional risk factors are confirmed to be clinically meaningful by a cardiologist.
CONCLUSION
We present a systematic framework for combining knowledge and data driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors. |
Treatments for Parkinson disease--past achievements and current clinical needs. | Although idiopathic Parkinson disease (PD) remains the only neurodegenerative disorder for which there are highly effective symptomatic therapies, there are still major unmet needs regarding its long-term management. Although levodopa continues as the gold standard for efficacy, its chronic use is associated with potentially disabling motor complications. Current evidence suggests that these are related to mode of administration, whereby multiple oral doses of levodopa generate pulsatile stimulation of striatal dopamine receptors. Current dopamine agonists, while producing more constant plasma levels, fail to match levodopa's efficacy. Strategies to treat levodopa-related motor complications are only partially effective, rarely abolishing motor fluctuations or dyskinesias. Best results are currently achieved with invasive strategies via subcutaneous (s.c.) or intraduodenal delivery of apomorphine or levodopa, or deep brain stimulation of the subthalamic nucleus. Another area of major unmet medical need is related to nondopaminergic and nonmotor symptoms of PD. Targeting transmitter systems beyond the dopamine system is an interesting approach, both for the motor and nonmotor problems of PD. So far, clinical trial evidence regarding 5-HT agonists, glutamate antagonists, adenosine A(2) antagonists and alpha-adrenergic receptor antagonists, has been inconsistent, but trials with cholinesterase inhibitors and atypical antipsychotics to treat dementia and psychosis, have been successful. However, the ultimate goal of PD medical management is modifying disease progression, thereby delaying the evolution of motor and nonmotor complications of advanced disease. As understanding of preclinical markers for PD develops, there is new hope for neuropreventive strategies to target "at risk" populations before clinical onset of disease. |
Relationship between flow-mediated vasodilation and cardiovascular risk factors in a large community-based study | OBJECTIVE
To determine the relationships between flow-mediated vasodilation (FMD) and cardiovascular risk factors, and to evaluate confounding factors for measurement of FMD in a large general population in Japan.
METHODS
This was a cross-sectional study. A total of 5314 Japanese adults were recruited from people who underwent health screening from 1 April 2010 to 31 August 2012 at 3 general hospitals in Japan. Patients' risk factors (age, Body Mass Index, blood pressure, cholesterol parameters, glucose level and HbA1c level) and prevalence of cardiovascular disease (coronary heart disease and cerebrovascular disease) were investigated.
RESULTS
Univariate regression analysis revealed that FMD correlated with age (r=-0.27, p<0.001), Body Mass Index (r=-0.14, p<0.001), systolic blood pressure (r=-0.18, p<0.001), diastolic blood pressure (r=-0.13, p<0.001), total cholesterol (r=-0.07, p<0.001), triglycerides (r=-0.10, p<0.001), high-density lipoprotein cholesterol (r=0.06, p<0.001), low-density lipoprotein cholesterol (r=-0.04, p=0.01), glucose level (r=-0.14, p<0.001), HbA1c (r=-0.14, p<0.001), and baseline brachial artery diameter (r=-0.43, p<0.001) as well as Framingham Risk score (r=-0.29, p<0.001). Multivariate analysis revealed that age (t value=-9.17, p<0.001), sex (t value=9.29, p<0.001), Body Mass Index (t value=4.27, p<0.001), systolic blood pressure (t value=-2.86, p=0.004), diabetes mellitus (t value=-4.19, p<0.001), smoking (t value=-2.56, p=0.01), and baseline brachial artery diameter (t value=-29.4, p<0.001) were independent predictors of FMD.
CONCLUSIONS
FMD may be a marker of the grade of atherosclerosis and may be used as a surrogate marker of cardiovascular outcomes. Age, sex, Body Mass Index, systolic blood pressure, diabetes mellitus, smoking and, particularly, baseline brachial artery diameter are potential confounding factors in the measurement of FMD. |
82-1041 Identifying Information Security Threats | The success of an enterprise's information security risk-based management program is based on the accurate identification of the threats to the organization's information systems. This article presents a structured approach for identifying an enterprise-specific threat population, which is an essential first step for security planners who are involved in developing cost-effective strategies for addressing their organizations' information security risks.
Parkinson disease: systemic and orofacial manifestations, medical and dental management. | BACKGROUND
More than 1.5 million Americans have Parkinson disease (PD), and this figure is expected to rise as the population ages. However, the dental literature offers little information about the illness.
TYPES OF STUDIES REVIEWED
The authors conducted a MEDLINE search using the key terms "Parkinson's disease," "medical management" and "dentistry." They selected contemporaneous articles published in peer-reviewed journals and gave preference to articles reporting randomized controlled trials.
RESULTS
PD is a progressive neurodegenerative disorder caused by loss of dopaminergic and nondopaminergic neurons in the brain. These deficits result in tremor, slowness of movement, rigidity, postural instability and autonomic and behavioral dysfunction. Treatment consists of administering medications that replace dopamine, stimulate dopamine receptors and modulate other neurotransmitter systems.
CLINICAL IMPLICATIONS
Oral health may decline because of tremors, muscle rigidity and cognitive deficits. The dentist should consult with the patient's physician to establish the patient's competence to provide informed consent and to determine the presence of comorbid illnesses. Scheduling short morning appointments that begin 90 minutes after administration of PD medication enhances the patient's ability to cooperate with care. Inclination of the dental chair at 45 degrees, placement of a bite prop, use of a rubber dam and high-volume oral evacuation enhance airway protection. To avoid adverse drug interactions with levodopa and entacapone, the dentist should limit administration of local anesthetic agents to three cartridges of 2 percent lidocaine with 1:100,000 epinephrine per half hour, and patients receiving selegiline should not be given agents containing epinephrine or levonordefrin. The dentist should instruct the patient and the caregiver in good oral hygiene techniques. |
SemEval 2016 Task 11: Complex Word Identification | We report the findings of the Complex Word Identification task of SemEval 2016. To create a dataset, we conduct a user study with 400 non-native English speakers, and find that complex words tend to be rarer, less ambiguous and shorter. A total of 42 systems were submitted from 21 distinct teams, and nine baselines were provided. The results highlight the effectiveness of Decision Trees and Ensemble methods for the task, but ultimately reveal that word frequencies remain the most reliable predictor of word complexity. |
ROC curve equivalence using the Kolmogorov-Smirnov test | This paper describes a simple, non-parametric and generic test of the equivalence of Receiver Operating Characteristic (ROC) curves based on a modified Kolmogorov-Smirnov (KS) test. The test is described in relation to the commonly used techniques such as the Area Under the ROC curve (AUC) and the Neyman-Pearson method. We first review how the KS test is used to test the null hypotheses that the class labels predicted by a classifier are no better than random. We then propose an interval mapping technique that allows us to use two KS tests to test the null hypothesis that two classifiers have ROC curves that are equivalent. We demonstrate that this test discriminates different ROC curves both when one curve dominates another and when the curves cross and so are not discriminated by AUC. The interval mapping technique is then used to demonstrate that, although AUC has its limitations, it can be a model-independent and coherent measure of classifier performance. |
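The first step the abstract mentions, testing a single classifier against chance, reduces to the maximum vertical distance between its ROC curve and the diagonal, i.e. max(TPR − FPR), which equals the two-sample KS statistic between the class-conditional score distributions. A minimal sketch with scikit-learn/SciPy; the paper's interval-mapping equivalence test between two ROC curves is not reproduced here:

```python
# The classifier-vs-random KS statistic the paper builds on is the
# maximum vertical distance between the ROC curve and the diagonal,
# i.e. max(TPR - FPR). The interval-mapping equivalence test between
# two ROC curves from the paper is not reproduced here.
import numpy as np
from sklearn.metrics import roc_curve
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
scores = y + rng.normal(0, 1.2, 1000)   # a weakly informative classifier

fpr, tpr, _ = roc_curve(y, scores)
ks_stat = np.max(tpr - fpr)
print(f"KS statistic (ROC vs. chance): {ks_stat:.3f}")

# Equivalently, KS between the score distributions of the two classes:
print(ks_2samp(scores[y == 1], scores[y == 0]))
```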
Managing Conflicts in Goal-Driven Requirements Engineering | A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects towards conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. Index Terms: Goal-driven requirements engineering, divergent requirements, conflict management, viewpoints, specification transformation, lightweight formal methods. IEEE Transactions on Software Engineering, Special Issue on Managing Inconsistency in Software Development, Nov. |
"Trust me, I'm an online vendor": towards a model of trust for e-commerce system design | Consumers' lack of trust has often been cited as a major barrier to the adoption of electronic commerce (e-commerce). To address this problem, a model of trust was developed that describes what design factors affect consumers' assessment of online vendors' trustworthiness. Six components were identified and regrouped into three categories: Prepurchase Knowledge, Interface Properties and Informational Content. This model also informs the Human-Computer Interaction (HCI) design of e-commerce systems in that its components can be taken as trust-specific high-level user requirements. |
Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction | Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet been fully exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU-LSTM) neural network architecture is proposed, which considers both forward and backward dependencies in time series data, to predict network-wide traffic speed. A bidirectional LSTM (BDLSTM) layer is exploited to capture spatial features and bidirectional temporal dependencies from historical data. To the best of our knowledge, this is the first time that BDLSTMs have been applied as building blocks for a deep architecture model to measure the backward dependency of traffic data for prediction. The proposed model can handle missing values in input data by using a masking mechanism. Further, this scalable model can predict traffic speed for both freeways and complex urban traffic networks. Comparisons with other classical and state-of-the-art models indicate that the proposed SBU-LSTM neural network achieves superior prediction performance for the whole traffic network in both accuracy and robustness. |
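A minimal PyTorch sketch of the stacked bidirectional-plus-unidirectional idea: a BDLSTM layer over the historical speed matrix feeding a forward-only LSTM and a linear readout. Layer sizes are illustrative, and the paper's masking mechanism for missing values is omitted:

```python
# Sketch of the SBU-LSTM idea: a bidirectional LSTM feeds a
# forward-only LSTM, ending in a linear layer that predicts the
# next-step speed at every sensor. Sizes are illustrative, not the
# paper's configuration; the masking mechanism is omitted.
import torch
import torch.nn as nn

class SBULSTM(nn.Module):
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.bdlstm = nn.LSTM(n_sensors, hidden, batch_first=True,
                              bidirectional=True)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):
        # x: (batch, time, n_sensors) historical speed matrix
        h, _ = self.bdlstm(x)        # bidirectional pass over history
        h, _ = self.lstm(h)          # forward-only pass
        return self.head(h[:, -1])   # next-step speed at each sensor

model = SBULSTM(n_sensors=100)
hist = torch.randn(8, 12, 100)       # 8 samples, 12 past time steps
pred = model(hist)
print(pred.shape)                    # torch.Size([8, 100])
```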
Fuel Optimized Predictive Following in Low Speed Conditions Master's thesis performed at Vehicular Systems | The situation when driving in dense traffic and at low speeds is called Stop and Go. A controller for automatic following of the car in front could, under these conditions, reduce the driver's workload and keep a safe distance to the preceding vehicle through different choices of gear and engine torque. The aim of this thesis is to develop such a controller, with an additional focus on lowering the fuel consumption. With the help of GPS, 3D maps and sensors, information about the slope of the road and the preceding vehicle can be obtained. Using this information, the controller is able to predict future possible control actions, and an optimization algorithm can then find the best inputs with respect to some criteria. The control method used is Model Predictive Control (MPC) and, as the name indicates, a model of the control object is required for the prediction. To find the optimal sequence of inputs, the optimization method Dynamic Programming chooses the one which leads to the lowest fuel consumption and satisfactory following. Simulations have been made using a reference trajectory which was measured in a real traffic jam. The simulations show that it is possible to follow the preceding vehicle in a good way and at the same time reduce the fuel consumption by approximately 3%. |
Simultaneous calibration, localization, and mapping | The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on environmental changes or on the wear of the devices. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the platform parameters. The proposed approach performs on-line estimation of the parameters and it is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real world data using different types of robotic platforms. |
Applications of Recurrent Neural Networks in Environmental Factor Forecasting: A Review | Analysis and forecasting of sequential data, key problems in various domains of engineering and science, have attracted the attention of many researchers from different communities. When predicting the future probability of events using time series, recurrent neural networks (RNNs) are an effective tool that have the learning ability of feedforward neural networks and expand their expressive ability using dynamic equations. Moreover, RNNs are able to model several computational structures. Researchers have developed various RNNs with different architectures and topologies. To summarize the work on RNNs in forecasting and provide guidelines for modeling and novel applications in future studies, this review focuses on applications of RNNs to time series forecasting of environmental factors. We present the structure, processing flow, and advantages of RNNs and analyze the applications of various RNNs to time series forecasting. In addition, we discuss the limitations and challenges of RNN-based applications and future research directions. Finally, we summarize the applications of RNNs in forecasting. |
Analysis and Compensation of Transmitter IQ Imbalances in OFDMA and SC-FDMA Systems | One limiting issue in implementing high-speed wireless systems is the impairment associated with analog processing due to component imperfections. In uplink transmission of multiuser systems, a major source of such impairment is in-phase/quadrature-phase imbalance (IQI) introduced at multiple transmitters. In this paper, we deal with orthogonal frequency-division multiple access (OFDMA) and single-carrier frequency-division multiple access (SC-FDMA), which have received attention in recent years as physical-layer protocols in WiMAX and 3GPP Long Term Evolution (LTE), and analyze the effect of transmitter (Tx) IQIs on OFDMA and SC-FDMA receivers. To cope with the interuser interference problem due to Tx IQIs, we propose a widely linear receiver for OFDMA and SC-FDMA systems and also propose a novel subcarrier allocation scheme, which has high tolerance to such Tx IQ distortion. |
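As background, a common baseband model of transmitter IQ imbalance maps a signal s to y = μs + νs*, where the conjugate term is the image interference; a widely linear receiver can invert it exactly. The sketch below is this textbook single-user case, not the paper's multiuser OFDMA/SC-FDMA receiver:

```python
# A common baseband model of Tx IQ imbalance distorts a signal s into
#   y = mu*s + nu*conj(s),
# where the conjugate term is the "image" interference. A widely
# linear receiver inverts it as
#   s_hat = (conj(mu)*y - nu*conj(y)) / (|mu|^2 - |nu|^2).
# Textbook single-user sketch, not the paper's multiuser receiver.
import numpy as np

rng = np.random.default_rng(0)
s = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)

g, phi = 1.05, np.deg2rad(5)              # gain/phase imbalance (assumed)
mu = (1 + g * np.exp(1j * phi)) / 2
nu = (1 - g * np.exp(-1j * phi)) / 2
y = mu * s + nu * np.conj(s)              # IQ-imbalanced transmit signal

s_hat = (np.conj(mu) * y - nu * np.conj(y)) / (abs(mu)**2 - abs(nu)**2)
print("residual error:", np.max(np.abs(s_hat - s)))   # ~1e-16
```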
Processing-in-Memory for Energy-Efficient Neural Network Training: A Heterogeneous Approach | Neural networks (NNs) have been adopted in a wide range of application domains, such as image classification, speech recognition, object detection, and computer vision. However, training NNs – especially deep neural networks (DNNs) – can be energy and time consuming, because of frequent data movement between processor and memory. Furthermore, training involves massive fine-grained operations with various computation and memory access characteristics. Exploiting high parallelism with such diverse operations is challenging. To address these challenges, we propose a software/hardware co-design of a heterogeneous processing-in-memory (PIM) system. Our hardware design incorporates hundreds of fixed-function arithmetic units and ARM-based programmable cores on the logic layer of a 3D die-stacked memory to form a heterogeneous PIM architecture attached to a CPU. Our software design offers a programming model and a runtime system that program, offload, and schedule various NN training operations across compute resources provided by the CPU and heterogeneous PIM. By extending the OpenCL programming model and employing a hardware heterogeneity-aware runtime system, we enable high program portability and easy program maintenance across various heterogeneous hardware, optimize system energy efficiency, and improve hardware utilization. |
Heart disease and stroke statistics--2015 update: a report from the American Heart Association. | Véronique L. Roger, MD, MPH, FAHA; Alan S. Go, MD; Donald M. Lloyd-Jones, MD, ScM, FAHA; Emelia J. Benjamin, MD, ScM, FAHA; Jarett D. Berry, MD; William B. Borden, MD; Dawn M. Bravata, MD; Shifan Dai, MD, PhD*; Earl S. Ford, MD, MPH, FAHA*; Caroline S. Fox, MD, MPH; Heather J. Fullerton, MD; Cathleen Gillespie, MS*; Susan M. Hailpern, DPH, MS; John A. Heit, MD, FAHA; Virginia J. Howard, PhD, FAHA; Brett M. Kissela, MD; Steven J. Kittner, MD, FAHA; Daniel T. Lackland, DrPH, MSPH, FAHA; Judith H. Lichtman, PhD, MPH; Lynda D. Lisabeth, PhD, FAHA; Diane M. Makuc, DrPH*; Gregory M. Marcus, MD, MAS, FAHA; Ariane Marelli, MD, MPH; David B. Matchar, MD, FAHA; Claudia S. Moy, PhD, MPH; Dariush Mozaffarian, MD, DrPH, FAHA; Michael E. Mussolino, PhD; Graham Nichol, MD, MPH, FAHA; Nina P. Paynter, PhD, MHSc; Elsayed Z. Soliman, MD, MSc, MS; Paul D. Sorlie, PhD; Nona Sotoodehnia, MD, MPH; Tanya N. Turan, MD, FAHA; Salim S. Virani, MD; Nathan D. Wong, PhD, MPH, FAHA; Daniel Woo, MD, MS, FAHA; Melanie B. Turner, MPH; on behalf of the American Heart Association Statistics Committee and Stroke Statistics Subcommittee |
High-dimensional signature compression for large-scale image classification | We address image classification on a large scale, i.e. when a large number of images and classes are involved. First, we study classification accuracy as a function of the image signature dimensionality and the training set size. We show experimentally that the larger the training set, the higher the impact of the dimensionality on the accuracy. In other words, high-dimensional signatures are important to obtain state-of-the-art results on large datasets. Second, we tackle the problem of data compression on very large signatures (on the order of 10^5 dimensions) using two lossy compression strategies: a dimensionality reduction technique known as the hash kernel and an encoding technique based on product quantizers. We explain how the gain in storage can be traded against a loss in accuracy and/or an increase in CPU cost. We report results on two large databases, ImageNet and a dataset of 1M Flickr images, showing that we can reduce the storage of our signatures by a factor of 64 to 128 with little loss in accuracy. Integrating the decompression in the classifier learning yields an efficient and scalable training algorithm. On ILSVRC2010 we report a 74.3% accuracy at top-5, which corresponds to a 2.5% absolute improvement with respect to the state-of-the-art. On a subset of 10K classes of ImageNet we report a top-1 accuracy of 16.7%, a relative improvement of 160% with respect to the state-of-the-art. |
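A product quantizer of the kind used for the second compression strategy can be sketched in a few lines: split each signature into m sub-blocks, learn a small k-means codebook per block, and store one centroid index per block. This is standard PQ with illustrative sizes, not the paper's exact pipeline:

```python
# Minimal product quantizer: split each signature into m sub-blocks,
# run k-means per block, and store one centroid index per block, so a
# D-dim float vector compresses to m small integers. Standard PQ
# sketch; sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 128)).astype(np.float32)   # signatures
m, k = 8, 256                                         # 8 blocks, 256 centroids
d_sub = X.shape[1] // m

codebooks, codes = [], []
for j in range(m):
    block = X[:, j * d_sub:(j + 1) * d_sub]
    km = KMeans(n_clusters=k, n_init=1, random_state=0).fit(block)
    codebooks.append(km.cluster_centers_)
    codes.append(km.labels_.astype(np.uint8))          # 1 byte per block
codes = np.stack(codes, axis=1)                       # (n, m) uint8

# Decompression: concatenate the selected centroids per block.
X_hat = np.hstack([codebooks[j][codes[:, j]] for j in range(m)])
print("bytes/vector:", codes.shape[1], " reconstruction MSE:",
      float(np.mean((X - X_hat) ** 2)))
```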
3D medial axis point approximation using nearest neighbors and the normal field | We present a novel method to approximate medial axis points given a set of points sampled from a surface and the normal vectors to the surface at those points. For each sample point, we find its maximal tangent ball containing no other sample points, by iteratively reducing its radius using nearest neighbor queries. We prove that the center of the ball constructed by our algorithm converges to a true medial axis point as the sampling density increases to infinity. We also propose a simple heuristic to handle noisy samples. By simple extensions, our method is applied to medial axis point simplification, local feature size estimation and feature-sensitive point decimation. Our algorithm is simple, easy to implement, and suitable for parallel computation using GPU because the iteration process for each sample point runs independently. Experimental results show that our method is efficient both in time and in space. |
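The iteration described above admits a compact sketch: a ball tangent to the surface at sample p (center p − rn) is shrunk whenever another sample q falls inside, using the closed-form radius of the ball through p and q that stays tangent at p. The code assumes outward-pointing normals and omits the paper's convergence analysis and noise heuristic:

```python
# Shrinking-ball sketch: start with a large ball tangent at sample p
# (center p - r*n), and whenever another sample q lies inside, shrink
# to the unique tangent ball through p and q:
#   r = |p - q|^2 / (2 * n.(p - q)).
# Assumes outward normals; the paper's noise heuristic is omitted.
import numpy as np
from scipy.spatial import cKDTree

def medial_point(p, n, tree, points, r0=1.0, eps=1e-8, max_iter=50):
    r = r0
    for _ in range(max_iter):
        c = p - r * n                          # candidate ball center
        _, idx = tree.query(c, k=2)            # nearest samples to center
        q = points[idx[1]] if np.allclose(points[idx[0]], p) else points[idx[0]]
        if np.linalg.norm(q - c) >= r - eps:   # ball is empty: done
            return c, r
        denom = 2 * np.dot(n, p - q)
        if denom <= eps:
            return c, r                        # degenerate configuration
        r = np.dot(p - q, p - q) / denom       # shrink through p and q
    return p - r * n, r

# Toy usage: samples on the unit sphere; medial points approach the origin.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
tree = cKDTree(pts)
c, r = medial_point(pts[0], pts[0], tree, pts, r0=2.0)
print(c, r)   # center near (0, 0, 0), radius near 1
```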
FPGA based implementation of BPSK and QPSK modulators using address reverse accumulators | Implementation of digital modulators on Field Programmable Gate Arrays (FPGAs) is a research area that has received great attention recently. Most of the research has focused on the implementation of simple digital modulators on FPGAs, such as Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift Keying (PSK). This paper presents a novel method of implementing Quadrature Phase Shift Keying (QPSK) along with Binary PSK (BPSK) using accumulators with a reverse addressing technique. The implementation of the BPSK modulator requires two sinusoidal signals with a 180-degree phase shift. The first signal was obtained from a Look-Up Table (LUT) using the Direct Digital Synthesizer (DDS) technique. The second signal was obtained from the same LUT, but after inverting the most significant bit of the accumulator, to get the out-of-phase signal. For the QPSK modulator, four sinusoidal waves were needed, and they were obtained using only one LUT. The first two waves were generated by two accumulators working on the rising edge and the falling edge of a twice-frequency square wave clock, which results in a 90-degree phase shift. The other two waves were obtained from the same accumulators after reversing the most significant bit of each one. The entire system was implemented in the Very high speed integrated circuit Hardware Description Language (VHDL), without the help of the Xilinx System Generator or DSP Builder tools used in many papers. |
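The MSB-inversion trick is easy to verify in a behavioral simulation: a phase accumulator addresses a sine LUT with its top bits, and flipping the accumulator's most significant bit advances the phase by half a period, yielding the 180-degree BPSK carrier. A Python stand-in for the VHDL design, with illustrative word sizes:

```python
# Behavioral simulation of the DDS trick described above: a phase
# accumulator indexes a sine LUT, and inverting the accumulator's most
# significant bit adds half a period (180 degrees) to the output,
# which is exactly the second BPSK carrier. Python stand-in for the
# VHDL implementation; word sizes are illustrative.
import numpy as np

ACC_BITS, LUT_BITS = 16, 8
LUT = np.round(127 * np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS))

def dds(fcw, n, invert_msb=False):
    acc = (fcw * np.arange(n)) % 2**ACC_BITS      # phase accumulator
    if invert_msb:
        acc = acc ^ (1 << (ACC_BITS - 1))         # flip MSB: +180 degrees
    return LUT[acc >> (ACC_BITS - LUT_BITS)]      # top bits address the LUT

fcw = 1024                                        # frequency control word
s0 = dds(fcw, 256)                                # 0-degree carrier
s1 = dds(fcw, 256, invert_msb=True)               # 180-degree carrier
print(np.allclose(s0, -s1))     # True: MSB inversion negates the sine

bits = np.array([1, 0, 1, 1, 0])
bpsk = np.concatenate([s1 if b == 0 else s0 for b in bits])  # BPSK waveform
```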
SPROUT: Lazy vs. Eager Query Plans for Tuple-Independent Probabilistic Databases | A paramount challenge in probabilistic databases is the scalable computation of confidences of tuples in query results. This paper introduces an efficient secondary-storage operator for exact computation of queries on tuple-independent probabilistic databases. We consider the conjunctive queries without self-joins that are known to be tractable on any tuple-independent database, and queries that are not tractable in general but become tractable on probabilistic databases restricted by functional dependencies. Our operator is semantically equivalent to a sequence of aggregations and can be naturally integrated into existing relational query plans. As a proof of concept, we developed an extension of the PostgreSQL 8.3.3 query engine called SPROUT. We study optimizations that push or pull our operator or parts thereof past joins. The operator employs static information, such as the query structure and functional dependencies, to decide which constituent aggregations can be evaluated together in one scan and how many scans are needed for the overall confidence computation task. A case study on the TPC-H benchmark reveals that most TPC-H queries obtained by removing aggregations can be evaluated efficiently using our operator. Experimental evaluation on probabilistic TPC-H data shows substantial efficiency improvements when compared to the state of the art. |
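The arithmetic underlying confidence computation on tuple-independent databases is simple even though making it scale is not: a conjunction of independent tuples multiplies probabilities, and independent alternative derivations of the same answer combine by independent-OR. A toy sketch of these semantics; SPROUT's secondary-storage operator and plan optimizations are not reproduced:

```python
# Core arithmetic behind confidence computation on tuple-independent
# probabilistic databases: a conjunction of independent tuples has
# probability prod(p_i), and independent alternative derivations of
# the same answer combine by independent-OR: 1 - prod(1 - p_j).
from math import prod

# Relations R(a, b) and S(b, c) with per-tuple probabilities.
R = [("a1", "b1", 0.8), ("a1", "b2", 0.5)]
S = [("b1", "c1", 0.9), ("b2", "c1", 0.4)]

# Query: exists b such that R(a1, b) and S(b, c1). Each join pair uses
# disjoint tuples here, so the derivations are mutually independent.
derivations = [pr * ps
               for (_, rb, pr) in R
               for (sb, _, ps) in S
               if rb == sb]
confidence = 1 - prod(1 - p for p in derivations)
print(f"P(answer) = {confidence:.4f}")   # 1 - (1-0.72)(1-0.2) = 0.7760
```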
Gender and Communal Trait Differences in the Relations Among Social Behaviour, Affect Arousal, and Cardiac Autonomic Control | To examine the relation between social behaviour and vagal activity, the communal behaviour of healthy college men (N = 33) and women (N = 33) was manipulated while monitoring heart rate (HR) and respiratory sinus arrhythmia (RSA). The subjects were classified as low or high on communal trait. Communal behaviour was manipulated by having the subjects behave in an agreeable or quarrelsome manner in scripted role-plays. HR, RSA and self-reported arousal were obtained during or immediately following baseline, experimental and relaxation periods. 2 (Gender) × 2 (Communal Trait; low/high) × 2 (Condition; agreeable/quarrelsome) ANCOVAs were performed. Men had lower RSA values when behaving in a quarrelsome fashion than in an agreeable one, and lower RSA values than women in the quarrelsome condition. In the latter condition, low communal men reported more arousal than other groups. Strong but opposite associations between RSA and affect arousal were observed in low communal men and women. Men, especially more quarrelsome (less communal) men, exhibited weaker vagal control during arousing social situations. |
Image-Based Stereoscopic Painterly Rendering | We present a new image-based stereoscopic painterly algorithm that we use to automatically generate stereoscopic paintings. Our work is motivated by contemporary painters who have explored the aesthetic implications of painting stereo pairs of canvases. We base our method on two real images, acquired from spatially displaced cameras. We derive a depth map by utilizing computer vision depth-from-stereo techniques and use this information to plan and render stereo paintings. These paintings can be viewed stereoscopically, in which case the pictorial medium is perceptually extended by the viewer to better suggest the sense of distance. |
Personalization and Adaptation to the Medium and Context in a Fall Detection System | The main objective of this paper is to present a distributed processing architecture that explicitly integrates capabilities for its continuous adaptation to the medium, the context, and the user. This architecture is applied to a fall detection system through: (1) an optimization module that finds the optimal operation parameters for the detection algorithms of the system devices; (2) a distributed processing architecture that provides capabilities for remote firmware update of the smart sensors. The smart sensor also provides an estimation of activities of daily living (ADL), which is very useful in monitoring of the elderly and patients with chronic diseases. The experiments performed have demonstrated the feasibility of the system and, specifically, the accuracy of the proposed algorithms and procedures (100% success for impact detection, 100% sensitivity and 95.68% specificity for fall detection, and 100% success for ADL level classification). Although the experiments were performed with a cohort of young volunteers, the personalization and adaptation mechanisms of the proposed architecture, related to the concepts of "design for all" and "design space", will significantly ease the adaptation of the system for its application to the elderly. |
Asymmetric interaction and indeterminate fitness correlation between cooperative partners in the fig–fig wasp mutualism | Empirical observations have shown that cooperative partners can compete for common resources, but what factors determine whether partners cooperate or compete remains unclear. Using the reciprocal fig-fig wasp mutualism, we show that nonlinear amplification of interference competition between fig wasps, which limits the fig wasps' ability to use a common resource (i.e. female flowers), keeps the common resource unsaturated, making cooperation locally stable. When interference competition was manually prevented, the fitness correlation between figs and fig wasps went from positive to negative. This indicates that genetic relatedness or reciprocal exchange between cooperative players, which could create spatial heterogeneity or self-restraint, was not sufficient to maintain stable cooperation. Moreover, our analysis of field-collected data shows that the fitness correlation between cooperative partners varies stochastically, and that the mainly positive fitness correlation observed during the warm season shifts to a negative correlation during the cold season owing to an increase in the initial oviposition efficiency of each fig wasp. This implies that discriminative sanctioning of less-cooperative wasps by the fig (i.e. decreasing the egg-deposition efficiency per fig wasp), together with rewarding of cooperative wasps, acting as a control on this initial value, will facilitate a stable mutualism. Our finding that asymmetric interaction leads to an indeterminate fitness correlation between symbiont (i.e. cooperative actors) and host (i.e. recipient) has the potential to explain why conflict has been empirically observed in both well-documented intraspecific and interspecific cooperation systems. |
SemCiR: A citation recommendation system based on a novel semantic distance measure | Purpose – The purpose of this paper is to propose a novel citation recommendation system that inputs a text and recommends publications that should be cited by it. Its goal is to help researchers find related works. Further, this paper seeks to explore the effect of using relational features in addition to textual features on the quality of recommended citations. Design/methodology/approach – In order to propose a novel citation recommendation system, first a new relational similarity measure is proposed for calculating the relatedness of two publications. Then, a recommendation algorithm is presented that uses both relational and textual features to compute the semantic distances of publications of a bibliographic dataset from the input text. Findings – The evaluation of the proposed system shows that combining relational features with textual features leads to better recommendations, in comparison with relying only on the textual features. It also demonstrates that citation context plays an important role among textual features. In addition, it is concluded that different relational features have different contributions to the proposed similarity measure. Originality/value – A new citation recommendation system is proposed which uses a novel semantic distance measure. This measure is based on textual similarities and a new relational similarity concept. The other contribution of this paper is that it sheds more light on the importance of citation context in citation recommendation, by providing more evidence through analysis of the results. In addition, a genetic algorithm is developed for assigning weights to the relational features in the similarity measure.
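As an illustration of the idea in the SemCiR abstract, here is a minimal Python sketch of a semantic distance that mixes a textual similarity with relational overlap. The feature names, weights, and the cosine/Jaccard choices are assumptions for illustration only; the paper's actual measure differs and tunes the relational weights with a genetic algorithm.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    common = set(u) & set(v)
    num = sum(u[t] * v[t] for t in common)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def jaccard(a, b):
    """Overlap of two citation/reference sets (one possible relational feature)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def semantic_distance(p, q, weights):
    """Weighted combination of textual and relational similarities, turned
    into a distance; `weights` would be tuned (e.g. by a genetic algorithm)."""
    sim = (weights["text"] * cosine(p["terms"], q["terms"])
           + weights["cocite"] * jaccard(p["cited_by"], q["cited_by"])
           + weights["refs"] * jaccard(p["references"], q["references"]))
    return 1.0 - sim  # smaller distance = more related

# Hypothetical usage on two toy publication records.
p = {"terms": {"citation": 0.7, "recommendation": 0.5}, "cited_by": {1, 2}, "references": {3}}
q = {"terms": {"citation": 0.6, "context": 0.4}, "cited_by": {2, 5}, "references": {3, 4}}
print(semantic_distance(p, q, {"text": 0.5, "cocite": 0.3, "refs": 0.2}))
```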
The present status of Einsteinian relativistic celestial mechanics. | The present status of Einsteinian relativistic celestial mechanics is reviewed. Starting from a conceptual description of the problem at the Newtonian level, we compare the basic concepts of relativistic celestial mechanics to those of the Newtonian theory. Some problems to be solved in the years to come are formulated. In the following we consider the gravitational N-body problem, i.e. we deal with the dynamics of N distinct bodies of arbitrary shape and composition under the influence of their mutual gravitational interaction. Other, non-gravitational forces will not be considered here. For didactical reasons let us first summarize the Newtonian treatment of this problem. Newtonian celestial mechanics is characterized by the existence of a class of preferred Cartesian inertial coordinates, x = (ct, x^i). Such inertial coordinates exist globally in Newton's framework, i.e. they cover the entire space-time manifold. Moreover, these preferred coordinates in Newton's theory have direct physical meaning: they are directly related to the observables. The basic variables of Newtonian celestial mechanics are the local matter variables ρ (mass density), v (the velocity field of matter) and t^ij (the stress tensor, containing pressure, elastic stresses, etc.), and the gravitational field variable U (the Newtonian potential). These variables obey the local equations of motion ∂ρ/∂t + ∇·(ρv) = 0 (1) and ∂(ρv^i)/∂t + ∂[ρ v^i v^j + t^ij]/∂x^j = ρ ∂U/∂x^i (2). The first equation describes the conservation of mass (the continuity equation) and the second is the force density equation (the Euler equation). The field equation (the Poisson equation), ΔU = −4πGρ (3), completes our set of differential equations. If one adds equations of state for the matter (e.g., pressure as a function of density for cold matter, or stress-strain relations), these equations are in principle sufficient to formulate Newtonian celestial mechanics in one Cartesian inertial coordinate system.
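The point-mass reduction of the Newtonian N-body problem sketched above is straightforward to simulate. Below is a minimal leapfrog integrator in Python (NumPy), offered purely as an illustration rather than anything from the review; the kick-drift-kick scheme, units, and example values are my own assumptions.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, G=6.674e-11):
    """
    One leapfrog (kick-drift-kick) step for point masses: each body feels
    the gradient of U = sum_j G m_j / |x - x_j|. A minimal sketch; real
    celestial mechanics treats extended bodies and relativistic corrections.
    """
    def accel(p):
        a = np.zeros_like(p)
        for i in range(len(p)):
            r = p - p[i]                               # vectors to the other bodies
            d3 = np.einsum("ij,ij->i", r, r) ** 1.5    # |r|^3 per body
            d3[i] = np.inf                             # no self-force
            a[i] = G * np.sum(mass[:, None] * r / d3[:, None], axis=0)
        return a
    vel = vel + 0.5 * dt * accel(pos)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos)
    return pos, vel

# Two bodies in SI units: a heavy primary and a light satellite on a
# near-circular orbit (v = sqrt(GM/r) ~ 6.3 km/s at r = 1e7 m).
pos = np.array([[0.0, 0.0, 0.0], [1.0e7, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 6.3e3, 0.0]])
mass = np.array([6.0e24, 1.0e3])
for _ in range(100):
    pos, vel = nbody_step(pos, vel, mass, dt=1.0)
```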
Effects of a selectively bred novelty-seeking phenotype on the motivation to take cocaine in male and female rats | Gender and enhanced novelty reactivity can predispose certain individuals to drug abuse. Previous research in male and female rats selectively bred for high or low locomotor reactivity to novelty found that bred High Responders (bHRs) acquire cocaine self-administration more rapidly than bred Low Responders (bLRs) and that bHR females in particular self-administered more cocaine than the other groups. The experiments presented here aimed to determine whether an individual's sex and behavioral phenotype interact to affect motivation to take cocaine. We examined motivation for taking cocaine in two experiments using a range of doses on a progressive ratio (PR) schedule of responding in bHR or bLR males and females. Additionally, we included a measure of continuing to respond in the absence of reinforcement, a feature of addiction that has been recently incorporated into tests of animal models on the basis of the criteria for substance use disorder in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. Statistical analyses were performed using PASW Statistics 18.0 software. Data were analyzed using repeated-measures analysis of variance followed by a Bonferroni correction post hoc test when applicable. We found sex differences as well as effects of novelty reactivity on the motivation to self-administer cocaine. Specifically, females demonstrated higher breaking points on the PR schedule compared with males, regardless of phenotype, and bHR males and females exhibited higher motivation than bLR animals at a number of the doses studied. An individual's sex continues to be a predisposing factor with respect to drug abuse liability and can be compounded by additional individual differences such as reactivity to novelty. |
E-GOVERNMENT EVALUATION FACTORS: CITIZEN’S PERSPECTIVE | The e-government field is growing to a considerable size, both in its content and in its position with respect to other research fields. The government-to-citizen segment of e-government is taking the lead in terms of its importance and size. Like the evaluation of all other information systems initiatives, the evaluation of e-government in both theory and practice has proved to be important but complex. The complexity of evaluation is mostly due to the multiple perspectives involved, the difficulties of quantifying benefits, and the social and technical context of use. The importance of e-government evaluation is due to the enormous investment of governments in delivering e-government services, and to the considerable pace of growth in the e-government field. However, despite the importance of the evaluation of e-government services, the literature shows that e-government evaluation is still an immature area in terms of development and management. This work is part of a research effort that aims to develop a holistic evaluation framework for e-government systems. The main aim of this paper is to investigate the citizens’ perspective in evaluating e-government services, and to present a set of evaluation factors that influence citizens’ utilization of e-government services. These evaluation factors can serve as part of an e-government evaluation framework. Moreover, the evaluation factors can also be used as a means of providing valuable feedback for the planning of future e-government initiatives.
Gender Role Portrayal and the Disney Princesses | The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals. |
The ConceptMapper Approach to Named Entity Recognition | ConceptMapper is an open source tool we created for classifying mentions in an unstructured text document based on concept terminologies and yielding named entities as output. It is implemented as a UIMA (Unstructured Information Management Architecture; IBM, 2004) annotator, and concepts come from standardised or proprietary terminologies. ConceptMapper can be easily configured, for instance, to use different search strategies or syntactic concepts. In this paper we describe ConceptMapper, its configuration parameters and their trade-offs, in terms of precision and recall in identifying concepts in a collection of clinical reports written in English. ConceptMapper is available from the Apache UIMA Sandbox, under the Apache open source license.
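A dictionary-lookup annotator of the kind the ConceptMapper abstract describes can be sketched in a few lines of Python. The greedy longest-match strategy, the toy terminology, and all function names below are assumptions for illustration, not the actual UIMA implementation or its configuration parameters.

```python
def build_dictionary(terminology):
    """Map lowercase surface forms (including synonyms) to concept IDs."""
    d = {}
    for concept_id, variants in terminology.items():
        for v in variants:
            d[v.lower()] = concept_id
    return d

def tag(tokens, dictionary, max_span=5):
    """Greedy longest-match lookup: one simplified search strategy."""
    annotations, i = [], 0
    while i < len(tokens):
        match = None
        for n in range(min(max_span, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in dictionary:
                match = (i, i + n, dictionary[phrase])
                break
        if match:
            annotations.append(match)
            i = match[1]          # resume after the matched span
        else:
            i += 1
    return annotations

# Hypothetical terminology and clinical-text snippet.
terminology = {"C0020538": ["high blood pressure", "hypertension"]}
tokens = "Patient denies high blood pressure .".split()
print(tag(tokens, build_dictionary(terminology)))  # [(2, 5, 'C0020538')]
```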
Sensing techniques for tablet+stylus interaction | We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets. |
Assessing and Quantifying Network Effects in an Online Dating Market | We empirically examine and quantify network effects on a large online dating platform in Brazil. We exploit a natural experiment, wherein a focal platform acquired its Brazilian competitor and subsequently imported the competitor’s base of 150,000+ users over a 3-day period; a large exogenous shock to the composition of the purchasing platform. Our study context and the natural experiment provide two unique sources of identification: i) accounts purchased from the competitor were almost exclusively heterosexual users, even though the purchasing platform also played host to homosexual users, and ii) the treatment varied across cities, in that the “value” of new users to the existing user base differed, because purchased users differed from existing users in terms of their average characteristics (e.g., location within the city). We leverage the former to estimate a difference-in-differences specification, treating homosexual enrollment and exit rates as a plausible control for those of the heterosexual population, whereas the latter provides us with an opportunity to explore the importance of local market structure in the manifestation of network effects. We find that the treatment increased both rates of enrollment and rates of exit, amongst both genders, with a net positive effect that translated to a 17% increase in short-term revenue for the platform. We find clear evidence of local network effects; in cities where the average spatial distance between new users and existing users was larger, the treatment effect was significantly weaker. We discuss the implications for the literature and practice, and we suggest a number of avenues for future work in this space.
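The difference-in-differences specification mentioned above can be illustrated with a toy sketch using statsmodels. The panel layout, column names, and numbers are invented placeholders; the paper's actual specification is certainly richer (controls, fixed effects, city-level variation).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical group-period panel: enrollments for the treated group
# (heterosexual users, who received the imported accounts) and the
# control group (homosexual users), before and after the import.
df = pd.DataFrame({
    "enrollments": [120, 130, 180, 175, 95, 100, 104, 98],
    "treated":     [1,   1,   1,   1,   0,  0,   0,   0],
    "post":        [0,   0,   1,   1,   0,  0,   1,   1],
})

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("enrollments ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```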
Multi-Level and Multi-Scale Feature Aggregation Using Sample-level Deep Convolutional Neural Networks for Music Classification | Music tag words that describe music audio by text have different levels of abstraction. Taking this issue into account, we propose a music classification approach that aggregates multi-level and multi-scale features using pre-trained feature extractors. In particular, the feature extractors are trained in sample-level deep convolutional neural networks using raw waveforms. We show that this approach achieves state-of-the-art results on several music classification datasets.
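A minimal sketch of the multi-level aggregation idea, assuming PyTorch: intermediate activations of a sample-level 1-D CNN over raw waveforms are summarized per layer and concatenated into one feature vector. The layer sizes, strides, and pooling below are illustrative stand-ins, not the paper's pre-trained extractors.

```python
import torch
import torch.nn as nn

class SampleLevelCNN(nn.Module):
    """Tiny stand-in for a pre-trained sample-level 1-D CNN on raw audio."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=3, stride=3),
                          nn.BatchNorm1d(c_out), nn.ReLU())
            for c_in, c_out in [(1, 64), (64, 64), (64, 128), (128, 128)]
        ])

    def forward(self, x):
        features = []
        for block in self.blocks:
            x = block(x)
            # Multi-level: keep a summary of every intermediate layer,
            # max-pooled over time so each level yields a fixed-size vector.
            features.append(x.max(dim=2).values)
        return torch.cat(features, dim=1)      # aggregated feature vector

waveform = torch.randn(8, 1, 59049)            # batch of raw audio frames
feats = SampleLevelCNN()(waveform)
print(feats.shape)                             # torch.Size([8, 384])
classifier = nn.Linear(feats.shape[1], 50)     # tag/genre head on top
```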
Throughput Analysis of Synchronous Data Flow Graphs | Synchronous data flow graphs (SDFGs) are a useful tool for modeling and analyzing embedded data flow applications, both in a single processor and a multiprocessing context or for application mapping on platforms. Throughput analysis of these SDFGs is an important step for verifying throughput requirements of concurrent real-time applications, for instance within design-space exploration activities. Analysis of SDFGs can be hard, since the worst-case complexity of analysis algorithms is often high. This is also true for throughput analysis. In particular, many algorithms involve a conversion to another kind of data flow graph, the size of which can be exponentially larger than the size of the original graph. In this paper, we present a method for throughput analysis of SDFGs, based on explicit state-space exploration and we show that the method, despite its worst-case complexity, works well in practice, while existing methods often fail. We demonstrate this by comparing the method with state-of-the-art cycle mean computation algorithms. Moreover, since the state-space exploration method is essentially the same as simulation of the graph, the results of this paper can be easily obtained as a byproduct in existing simulation tools |
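The state-space exploration method from the SDFG abstract can be illustrated with a toy simulator of self-timed execution: run the graph until a state recurs, then read the throughput off the periodic phase. This hedged Python sketch assumes integer execution times of at least one time unit, a bounded self-timed state space, and the simplified channel encoding shown in the docstring; it is not the paper's actual algorithm or tooling.

```python
def sdf_throughput(actors, channels, exec_time, sink):
    """
    Throughput of `sink` (firings per time unit) via explicit state-space
    exploration of self-timed execution: simulate until a state recurs.
    channels: {(src, dst): [produce_rate, consume_rate, initial_tokens]}
    """
    remaining = {a: 0 for a in actors}      # time left in the current firing
    busy = {a: False for a in actors}
    seen, t, fires = {}, 0, 0
    while True:
        for a in actors:                    # finish firings, produce tokens
            if busy[a] and remaining[a] == 0:
                busy[a] = False
                for (s, d), ch in channels.items():
                    if s == a:
                        ch[2] += ch[0]
                if a == sink:
                    fires += 1
        for a in actors:                    # start every enabled idle actor
            if not busy[a] and all(ch[2] >= ch[1]
                                   for (s, d), ch in channels.items() if d == a):
                for (s, d), ch in channels.items():
                    if d == a:
                        ch[2] -= ch[1]
                busy[a] = True
                remaining[a] = exec_time[a]
        state = (tuple(ch[2] for ch in channels.values()),
                 tuple(remaining[a] for a in actors))
        if state in seen:                   # periodic phase found
            t0, f0 = seen[state]
            return (fires - f0) / (t - t0)
        seen[state] = (t, fires)
        for a in actors:                    # advance time by one unit
            if busy[a]:
                remaining[a] -= 1
        t += 1

# Two-actor cycle A <-> B with one initial token; each firing takes 1 tick.
print(sdf_throughput(["A", "B"],
                     {("A", "B"): [1, 1, 0], ("B", "A"): [1, 1, 1]},
                     {"A": 1, "B": 1}, sink="B"))   # 0.5 firings per tick
```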
Multiple sclerosis in Finland: incidence trends and differences in relapsing remitting and primary progressive disease courses. | OBJECTIVE
To compare the secular trends and geographical differences in the incidence of relapsing-remitting (RRMS) and primary progressive multiple sclerosis (PPMS) in Finland, and to draw inferences about aetiological differences between the two forms of the disease.
METHODS
New multiple sclerosis cases in southern Uusimaa and the western districts Vaasa and Seinäjoki of Finland in 1979-1993 were verified from hospital records and classified into RRMS and PPMS. Patients met the Poser criteria for definite multiple sclerosis or otherwise satisfied the criteria for PPMS. Disease course was categorised by the same neurologist. Crude and age adjusted incidence in 1979-1993 was estimated.
RESULTS
During 1979-1993 the age adjusted incidence was 5.1 per 100 000 person-years in Uusimaa, 5.2 in Vaasa, and 11.6 in Seinäjoki. The rates in Uusimaa remained stable, while a decrease occurred in Vaasa and an increase in Seinäjoki. Between 1979-86 and 1987-93 the incidence of PPMS increased in Seinäjoki from 2.6 to 3.7 per 10^5 and decreased in Vaasa from 1.9 to 0.2 per 10^5; the trends were similar for RRMS.
CONCLUSIONS
There are significant differences in secular trends for multiple sclerosis incidence in Finland by geographical area, but these are similar for PPMS and RRMS. The recent changes point to locally acting environmental factors. The parallel incidence trends for RRMS and PPMS suggest similar environmental triggers for the two clinical presentations of multiple sclerosis. |
Mechatronic design of innovative fingers for anthropomorphic robot hands | In this paper, a novel design approach for the development of robot hands is presented. This approach, which can be considered an alternative to the “classical” one, is based on compliant structures instead of rigid ones. Compliance effects, which were considered in the past as a “defect” to be mechanically eliminated, can instead be regarded as desired features and can be properly controlled in order to achieve desired properties in the robotic device. In particular, this is true for robot hands, where the mechanical complexity of “classical” design solutions has always led to complicated structures, often with low reliability and high costs. In this paper, an alternative solution to the design of dexterous robot hands is illustrated, considering a “mechatronic approach” for the integration of the mechanical structure, the sensory and electronic system, the control and the actuation parts. Moreover, the preliminary experimental activity on a first prototype is reported and discussed. The results obtained so far, considering also reliability, costs and development time, are very encouraging and allow us to foresee a wider diffusion of dexterous hands in robotic applications.
Next-generation DNA sequencing | DNA sequence represents a single format onto which a broad range of biological phenomena can be projected for high-throughput data collection. Over the past three years, massively parallel DNA sequencing platforms have become widely available, reducing the cost of DNA sequencing by over two orders of magnitude, and democratizing the field by putting the sequencing capacity of a major genome center in the hands of individual investigators. These new technologies are rapidly evolving, and near-term challenges include the development of robust protocols for generating sequencing libraries, building effective new approaches to data-analysis, and often a rethinking of experimental design. Next-generation DNA sequencing has the potential to dramatically accelerate biological and biomedical research, by enabling the comprehensive analysis of genomes, transcriptomes and interactomes to become inexpensive, routine and widespread, rather than requiring significant production-scale efforts. |
Modified step aerobics training and neuromuscular function in osteoporotic patients: a randomized controlled pilot study | Training programs directed to improve neuromuscular and musculoskeletal function of the legs are scarce with respect to older osteoporotic patients. We hypothesized that a modified step aerobics training program might be suitable for this purpose and performed a randomized controlled pilot study to assess the feasibility of conducting a large study. Here we report on the training-related effects on neuromuscular function of the plantar flexors. Twenty-seven patients with an age of at least 65 years were enrolled and randomized into control and intervention group. The latter received supervised modified step aerobics training (twice weekly, 1 h per session) over a period of 6 months. At baseline, and after 3 and 6 months neuromuscular function of the plantar flexors, i.e., isometric maximum voluntary torque, rate of torque development and twitch torque parameters were determined in detail in all patients of both groups. Twenty-seven patients (median age 75 years; range 66–84 years) were randomized (control group n = 14; intervention group n = 13). After 3 and 6 months of training, maximum voluntary contraction strength in the intervention group was significantly higher by 7.7 Nm (9.1%; 95% CI 3.3–12.2 Nm, P < 0.01) and 12.4 Nm (14.8%; 95% CI 6.4–18.5 Nm, P < 0.01) compared to controls. These changes were most probably due to neural and muscular adaptations. It is worthwhile to investigate efficacy of this training program in a large randomized trial. However, a detailed neuromuscular assessment appears feasible only in a subset of participants. |
MASK: Redesigning the GPU Memory Hierarchy to Support Multi-Application Concurrency | Graphics Processing Units (GPUs) exploit large amounts of thread-level parallelism to provide high instruction throughput and to efficiently hide long-latency stalls. The resulting high throughput, along with continued programmability improvements, has made GPUs an essential computational resource in many domains. Applications from different domains can have vastly different compute and memory demands on the GPU. In a large-scale computing environment, to efficiently accommodate such wide-ranging demands without leaving GPU resources underutilized, multiple applications can share a single GPU, akin to how multiple applications execute concurrently on a CPU. Multi-application concurrency requires several support mechanisms in both hardware and software. One such key mechanism is virtual memory, which manages and protects the address space of each application. However, modern GPUs lack the extensive support for multi-application concurrency available in CPUs, and as a result suffer from high performance overheads when shared by multiple applications, as we demonstrate. We perform a detailed analysis of which multi-application concurrency support limitations hurt GPU performance the most. We find that the poor performance is largely a result of the virtual memory mechanisms employed in modern GPUs. In particular, poor address translation performance is a key obstacle to efficient GPU sharing. State-of-the-art address translation mechanisms, which were designed for single-application execution, experience significant inter-application interference when multiple applications spatially share the GPU. This contention leads to frequent misses in the shared translation lookaside buffer (TLB), where a single miss can induce long-latency stalls for hundreds of threads. As a result, the GPU often cannot schedule enough threads to successfully hide the stalls, which diminishes system throughput and becomes a first-order performance concern. Based on our analysis, we propose MASK, a new GPU framework that provides low-overhead virtual memory support for the concurrent execution of multiple applications. MASK consists of three novel address-translation-aware cache and memory management mechanisms that work together to largely reduce the overhead of address translation: (1) a token-based technique to reduce TLB contention, (2) a bypassing mechanism to improve the effectiveness of cached address translations, and (3) an application-aware memory scheduling scheme to reduce the interference between address translation and data requests. Our evaluations show that MASK restores much of the throughput lost to TLB contention. Relative to a state-of-the-art GPU TLB, MASK improves system throughput by 57.8%, improves IPC throughput by 43.4%, and reduces application-level unfairness by 22.4%. MASK's system throughput is within 23.2% of an ideal GPU system with no address translation overhead.
Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple. | Redox state is a term used widely in the research field of free radicals and oxidative stress. Unfortunately, it is used as a general term referring to relative changes that are not well defined or quantitated. In this review we provide a definition for the redox environment of biological fluids, cell organelles, cells, or tissue. We illustrate how the reduction potential of various redox couples can be estimated with the Nernst equation and show how pH and the concentrations of the species comprising different redox couples influence the reduction potential. We discuss how the redox state of the glutathione disulfide-glutathione couple (GSSG/2GSH) can serve as an important indicator of redox environment. There are many redox couples in a cell that work together to maintain the redox environment; the GSSG/2GSH couple is the most abundant redox couple in a cell. Changes of the half-cell reduction potential (E(hc)) of the GSSG/2GSH couple appear to correlate with the biological status of the cell: proliferation E(hc) approximately -240 mV; differentiation E(hc) approximately -200 mV; or apoptosis E(hc) approximately -170 mV. These estimates can be used to more fully understand the redox biochemistry that results from oxidative stress. These are the first steps toward a new quantitative biology, which hopefully will provide a rationale and understanding of the cellular mechanisms associated with cell growth and development, signaling, and reductive or oxidative stress. |
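The Nernst-equation estimate described in the redox abstract is easy to reproduce. The sketch below assumes E°' = -240 mV for the GSSG/2GSH couple at pH 7.0 and n = 2 electrons, values consistent with the abstract's framing; the example concentrations are illustrative, not data from the review.

```python
import math

def half_cell_potential(gsh_molar, gssg_molar, e0_mv=-240.0, temp_c=25.0):
    """
    Half-cell reduction potential (mV) of the GSSG/2GSH couple via the
    Nernst equation: E = E0' - (RT / 2F) * ln([GSH]^2 / [GSSG]).
    E0' = -240 mV assumes pH 7.0; n = 2 electrons for GSSG + 2H+ + 2e- -> 2GSH.
    """
    R, F = 8.314, 96485.0                      # J/(mol K), C/mol
    T = temp_c + 273.15
    rt_over_nf_mv = 1000.0 * R * T / (2 * F)   # convert V to mV
    return e0_mv - rt_over_nf_mv * math.log(gsh_molar ** 2 / gssg_molar)

# Because [GSH] enters squared, the potential depends on the absolute GSH
# concentration, not just the GSH:GSSG ratio.
print(half_cell_potential(5e-3, 25e-6))   # ~ -240 mV, proliferation-like
print(half_cell_potential(1e-3, 25e-6))   # ~ -199 mV, differentiation-like
```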
A Trainable Document Summarizer | Only fragments of the bibliography and of an evaluation appendix survive in the extracted text. The appendix defines how manually written summary sentences were matched against sentences of the original documents. Direct match: a summary sentence that is identical to a sentence in the original, or conveys essentially the same content. Direct join: the content of a manual summary sentence is represented by two or more sentences in the original. Incomplete match: an original sentence only partially covers the content of a manual summary sentence, or a direct match is not clear; this can occur for a single sentence or for a join.
Clinical and laboratory features of human dirofilariasis in Russia | The article presents the results of a prospective study of 266 patients with dirofilariasis who received medical and diagnostic assistance at the Rostov Scientific Research Institute of Microbiology and Parasitology in Rostov-on-Don, Russia from 2000 to 2016. We assessed the dynamics of the epidemiology of this infection in several territories of the Russian Federation, depending on the social structure of the patients. Immature female dirofilariae were found most commonly in humans (82.9 ± 2.6%); the proportions of mature females and adult males of the worms were 10.5 ± 2.1% and 0.9 ± 0.6%, respectively. All mature worms were localized inside a capsule. Peripheral blood eosinophilia was detected only in patients with migrating helminths (19 of 116 persons, 16.4%). Blood samples of patients examined by the method of concentration in 3% acetic acid for detection of microfilariae gave negative results in all patients. Our data are consistent with the opinion of K.I. Skriabin that the human is a «dual facultative host» for dirofilariae. Only rarely is the parasite able to develop to the imago stage in the human body (according to our observations, in 11.4% of cases). The immune response to invasion by dirofilariae in humans manifests as a dense connective-tissue capsule. In our study, the rare cases of detection of sexually mature D. repens (22 cases, 10.4%) involved worms localized inside the capsule. Observations of patients with D. repens infection allow us to conclude that the human is «a biological dead-end» for this helminth.
Cationic amphiphilic drugs cause a marked expansion of apparent lysosomal volume: implications for an intracellular distribution-based drug interaction. | How a drug distributes within highly compartmentalized mammalian cells can affect both its activity and its pharmacokinetic behavior. Many commercially available drugs are considered to be lysosomotropic, meaning they are extensively sequestered in lysosomes by an ion trapping-type mechanism. Lysosomotropic drugs typically have a very large apparent volume of distribution and a prolonged half-life in vivo, despite minimal association with adipose tissue. In this report we tested the prediction that the accumulation of one drug (perpetrator) in lysosomes could influence the accumulation of a secondarily administered one (victim), resulting in an intracellular distribution-based drug interaction. To test this hypothesis cells were exposed to nine different hydrophobic amine-containing drugs, which included imipramine, chlorpromazine and amiodarone, at a 10 μM concentration for 24 to 48 h. After exposure to the perpetrators the cellular accumulation of LysoTracker Red (LTR), a model lysosomotropic probe, was evaluated both quantitatively and microscopically. We found that all of the tested perpetrators caused a significant increase in the cellular accumulation of LTR. Exposure of cells to imipramine caused an increase in the cellular accumulation of other lysosomotropic probes and drugs including LysoTracker Green, daunorubicin, propranolol and methylamine; however, imipramine did not alter the cellular accumulation of non-lysosomotropic amine-containing molecules including MitoTracker Red and sulforhodamine 101. In studies using ionophores to abolish intracellular pH gradients we were able to resolve ion trapping-based cellular accumulation from residual pH-gradient independent accumulation. Results from these evaluations in conjunction with lysosomal pH measurements enabled us to estimate the relative aqueous volume of lysosomes of cells before and after imipramine treatment. Our results suggest that imipramine exposure caused a 4-fold expansion in the lysosomal volume, which provides the basis for the observed drug interaction. The imipramine-induced lysosomal volume expansion was shown to be both time- and temperature-dependent and reversed by exposing cells to hydroxypropyl-β-cyclodextrin, which reduced lysosomal cholesterol burden. This suggests that the expansion of lysosomal volume occurs secondary to perpetrator-induced elevations in lysosomal cholesterol content. In support of this claim, the cellular accumulation of LTR was shown to be higher in cells isolated from patients with Niemann-Pick type C disease, which are known to hyperaccumulate cholesterol in lysosomes.
Semantic MEDLINE: An advanced information management application for biomedicine | Semantic MEDLINE integrates information retrieval, advanced natural language processing, automatic summarization, and visualization into a single Web portal. The application is intended to help manage the results of PubMed searches by condensing core semantic content in the citations retrieved. Output is presented as a connected graph of semantic relations, with links to the original MEDLINE citations. The ability to connect salient information across documents helps users keep up with the research literature and discover connections which might otherwise go unnoticed. Semantic MEDLINE can make an impact on biomedicine by supporting scientific discovery and the timely translation of insights from basic research into advances in clinical practice and patient care.
Two-factor authentication for the Bitcoin protocol | We show how to realize two-factor authentication for a Bitcoin wallet. To do so, we explain how to employ an ECDSA adaption of the two-party signature protocol by MacKenzie and Reiter (Int J Inf Secur 2(3–4):218–239, 2004. doi: 10.1007/s10207-004-0041-0 ) in the context of Bitcoin and present a prototypic implementation of a Bitcoin wallet that offers both: two-factor authentication and verification over a separate channel. Since we use a smart phone as the second authentication factor, our solution can be used with hardware already available to most users and the user experience is quite similar to the existing online banking authentication methods. |
Coordinated and Reconfigurable Vehicle Dynamics Control | A coordinated reconfigurable vehicle dynamics control (CRVDC) system is achieved by high-level control of generalized forces/moment, distributed to the slip and slip angle of each tire by an innovative control allocation (CA) scheme. Utilizing control of individual tire slip and slip angles helps resolve the inherent tire force nonlinear constraints that otherwise may make the system more complex and computationally expensive. This in turn enables a real-time adaptable, computationally efficient accelerated fixed-point (AFP) method to improve the CA convergence rate when actuation saturates. Evaluation of the overall system is accomplished by simulation testing with a full-vehicle CarSim model under various adverse driving conditions, including scenarios where vehicle actuator failures occur. Comparison with several other vehicle control system approaches shows how the system operational envelope for CRVDC is significantly expanded in terms of vehicle global trajectory and planar motion responses. |
New Propeller-Type Tribocharging Device With Application to the Electrostatic Separation of Granular Insulating Materials | The aim of this paper is the development and functional optimization of a new propeller-type aerodynamic tribocharging device with application in the field of electrostatic separation. The originality of the system consists of its modular structure: one or several propellers can be stacked in the same device to provide appropriate tribocharging conditions for a wide variety of granular mixtures containing two or more insulating materials. The study is conducted with samples of several insulating materials: polycarbonate, polyamide, acrylonitrile butadiene styrene, polyvinyl chloride, and high-impact polystyrene, with grain sizes up to 4 mm in diameter, for several values of the propeller rotation speed and of the mass of the particles in the tribocharging device, the walls of which are made of acetate, polymethyl methacrylate (PMMA), or aluminum. The efficiency of the device is tested by processing the charged granular mixture in a metal-belt conveyor-type electrostatic separator. The aluminum-wall device enables better charging than the PMMA and acetate ones. The mass introduced in the device has no significant effect on the outcome of the process, but the speed of the propellers does. Successful separation of a mixture of three insulating materials is reported.
Pair-Linking for Collective Entity Disambiguation: Two Could Be Better Than All | Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms. |
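The iterative pair-selection idea behind Pair-Linking can be sketched as a greedy loop over candidate pairs. The scoring function, the consistency rule, and the toy data below are simplifications for illustration, not the published algorithm or its MINTREE objective.

```python
import heapq

def pair_linking(mentions, candidates, score):
    """
    Greedy sketch of Pair-Linking: instead of optimizing all mentions
    jointly, repeatedly commit the highest-confidence pair of decisions.
    `score(m1, e1, m2, e2)` would combine local mention-entity compatibility
    with pairwise entity coherence.
    """
    heap = []
    for i, m1 in enumerate(mentions):
        for m2 in mentions[i + 1:]:
            for e1 in candidates[m1]:
                for e2 in candidates[m2]:
                    heapq.heappush(heap, (-score(m1, e1, m2, e2), m1, e1, m2, e2))
    assignment = {}
    while heap and len(assignment) < len(mentions):
        _, m1, e1, m2, e2 = heapq.heappop(heap)
        # Commit the pair only if it does not contradict earlier decisions.
        if assignment.get(m1, e1) == e1 and assignment.get(m2, e2) == e2:
            assignment[m1] = e1
            assignment[m2] = e2
    return assignment

# Hypothetical usage: coherence between the two car entities wins.
cands = {"Jaguar": ["Jaguar_Cars", "Jaguar_(animal)"], "Land Rover": ["Land_Rover"]}
sc = lambda m1, e1, m2, e2: 1.0 if (e1, e2) == ("Jaguar_Cars", "Land_Rover") else 0.1
print(pair_linking(list(cands), cands, sc))
```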
Doping engineering for improved immunity against BV softness and BV shift in trench power MOSFET | In this paper, we report typical soft breakdown and BV_DSS walk-in/walk-out phenomena observed in the development of ON Semiconductor's T8 60V trench power MOSFET. These breakdown behaviors show strong correlation with the doping profile. We propose two 1D location-dependent variables, Q_int(y) and C_ave(y), to assist the study, and demonstrate their effectiveness in revealing hidden information behind regular SIMS data. Our study details the methodology of engineering doping profiles for improved breakdown stability.
EARLY DEVELOPMENTS OF A PARALLELLY ACTUATED HUMANOID, SAFFiR | This paper presents the design of our new 33-degree-of-freedom full-size humanoid robot, SAFFiR (Shipboard Autonomous Fire Fighting Robot). The goal of this research project is to realize a high-performance mixed force and position controlled robot with parallel actuation. The robot has two 6-DOF legs and arms, a waist, a neck, and 3-DOF hands/fingers. The design is characterized by a central lightweight skeleton actuated with modular ball-screw-driven, force-controllable linear actuators arranged in a parallel fashion around the joints. Sensory feedback on board the robot includes an inertial measurement unit, the force and position output of each actuator, as well as 6-axis force/torque measurements from the feet. The lower body of the robot has been fabricated and a rudimentary walking algorithm implemented while fabrication of the upper body is being completed. Preliminary walking experiments show that parallel actuation successfully minimizes the loads through individual actuators.
Near-optimal Reinforcement Learning in Factored MDPs | Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer Ω(√(SAT)) regret on some MDP, where T is the elapsed time and S and A are the cardinalities of the state and action spaces. This implies T = Ω(SA) time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, S and A can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a factored MDP, it is possible to achieve regret that scales polynomially in the number of parameters encoding the factored MDP, which may be exponentially smaller than S or A. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored).
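For intuition, here is a tabular (non-factored) PSRL episode sketch: sample a plausible MDP from the posterior, solve it, and act greedily. The factored algorithms in the paper instead sample each factor's conditional distributions, which is what shrinks the parameter count. The priors and posterior forms below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def psrl_episode(counts, rewards, n_states, n_actions, horizon, step):
    """One PSRL episode: sample MDP from posterior, plan, roll out."""
    rng = np.random.default_rng()
    # Dirichlet posterior over transitions; noisy point sample for rewards.
    P = np.array([[rng.dirichlet(counts[s, a] + 1) for a in range(n_actions)]
                  for s in range(n_states)])
    R = rewards["sum"] / np.maximum(rewards["n"], 1) + \
        rng.normal(0, 1 / np.sqrt(np.maximum(rewards["n"], 1)))
    # Finite-horizon value iteration on the sampled MDP.
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for h in reversed(range(horizon)):
        Q = R + P @ V                       # Q[s, a] on the sampled model
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    # Execute in the true environment via `step(s, a) -> (s', r)`.
    s = 0
    for h in range(horizon):
        a = policy[h][s]
        s2, r = step(s, a)
        counts[s, a, s2] += 1
        rewards["sum"][s, a] += r
        rewards["n"][s, a] += 1
        s = s2

# Hypothetical usage on a toy 2-state, 2-action environment.
n_s, n_a, H = 2, 2, 10
counts = np.zeros((n_s, n_a, n_s))
rewards = {"sum": np.zeros((n_s, n_a)), "n": np.zeros((n_s, n_a))}
env = lambda s, a: ((s + a) % n_s, float(s == 1))   # toy dynamics and reward
for _ in range(50):
    psrl_episode(counts, rewards, n_s, n_a, H, env)
```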
The Antibacterial Activity of Leaf Extracts of Eucalyptus camaldulensis (Myrtaceae) | The antibacterial activity of the leaf extracts of Eucalyptus camaldulensis was studied against Klebsiella spp, Salmonella typhi, Yersinia enterocolitica, Pseudomonas aeruginosa (Gram-negative), Staphylococcus aureus and Bacillus subtilis by the agar diffusion method. The methanol extract, dichloromethane fraction and methanol residue at 10 mg mL^-1 displayed broad-spectrum activity against all the test organisms, but the petroleum ether fraction showed no activity. The antibacterial activity of the extracts was compared to the drug gentamycin. The minimum inhibitory concentrations of the methanol extract and dichloromethane fraction, determined by the agar dilution method, ranged between 0.04 and 10 mg mL^-1, with that of Bacillus subtilis being the least. Phytochemical screening of the plant revealed the presence of tannins, saponins and cardiac glycosides. The results of this study support the traditional use of Eucalyptus camaldulensis leaves as an antibacterial agent.
A Case for Hardware Protection of Guest VMs from Compromised Hypervisors in Cloud Computing | Cloud computing, enabled by virtualization technologies, is becoming a mainstream computing model. Many companies are starting to utilize the infrastructure-as-a-service (IaaS) cloud computing model, leasing guest virtual machines (VMs) from the infrastructure providers for economic reasons: to reduce their operating costs and to increase the flexibility of their own infrastructures. Yet, many companies may be hesitant to move to cloud computing due to security concerns. An integral part of any virtualization technology is the all-powerful hypervisor. A hypervisor is a system management software layer which can access all resources of the platform. Much research has been done on using hypervisors to monitor guest VMs for malicious code and on hardening hypervisors to make them more secure. There is, however, another threat which has not been addressed by researchers -- that of compromised or malicious hypervisors that can extract sensitive or confidential data from guest VMs. Consequently, we propose that a new research direction needs to be undertaken to tackle this threat. We further propose that new hardware mechanisms in multicore microprocessors are a viable way of providing protections for the guest VMs from the hypervisor, while still allowing the hypervisor to flexibly manage the resources of the physical platform.
The Firms Speak: What the World Business Environment Survey Tells Us about Constraints on Private Sector Development | This chapter summarizes the salient results of the World Business Environment Survey (WBES). It shows that important dimensions of the climate for business operation and investment can be measured, analyzed, and compared across countries, and that governance is key to the business environment and investment climate. The survey findings suggest that key policy, institutional, and governance indicators affect the growth of a firm's sales and investment and the extent to which firms operate in the unofficial economy. Further, the paper provides empirical support for some commonly held notions, while challenging others. It suggests a link between taxation, financing, and corruption on the one hand, and growth and investment on the other, and it highlights the costs to economies where the state is captured by a narrow set of private interests. |
Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices | We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x. |
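An early-exit split of the kind the DDNN abstract describes can be sketched as follows, assuming PyTorch. The max-softmax confidence threshold is a simplification (the paper's exit criterion and architecture differ), and all layer sizes are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    """Sketch of a DDNN-style split: a shallow on-device section with a
    local exit, and a deeper 'cloud' section used only when needed."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.device_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                         nn.ReLU(), nn.MaxPool2d(2))
        self.local_exit = nn.Linear(16 * 16 * 16, n_classes)
        self.cloud_part = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                        nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.cloud_exit = nn.Linear(32, n_classes)

    def forward(self, x, threshold=0.9):
        h = self.device_part(x)                  # runs on the end device
        local = F.softmax(self.local_exit(h.flatten(1)), dim=1)
        if local.max().item() >= threshold:      # confident: exit locally,
            return local, "device"               # nothing sent to the cloud
        h = self.cloud_part(h)                   # else offload the features
        return F.softmax(self.cloud_exit(h.flatten(1)), dim=1), "cloud"

probs, where = EarlyExitNet()(torch.randn(1, 3, 32, 32))
print(where, probs.argmax().item())
```

Offloading intermediate features rather than raw sensor data is what gives the communication savings the abstract reports; the exit threshold trades local accuracy against transmission cost.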
Psychological traits and the cortisol awakening response: Results from the Netherlands Study of Depression and Anxiety | BACKGROUND
Hypothalamus-Pituitary-Adrenal (HPA) axis dysregulation is often seen in major depression, and is thought to represent a trait vulnerability - rather than merely an illness marker - for depressive disorder and possibly anxiety disorder. Vulnerability traits associated with stress-related disorders might reflect increased sensitivity for the development of psychopathology through an association with HPA axis activity. Few studies have examined the association between psychological trait factors and the cortisol awakening response, with inconsistent results. The present study examined the relationship between multiple psychological trait factors and the cortisol awakening curve, including both the dynamic of the CAR and overall cortisol awakening levels, in a sample of persons without psychopathology, hypothesizing that persons scoring high on vulnerability traits demonstrate an elevated cortisol awakening curve.
METHODS
From 2981 participants of the Netherlands Study of Depression and Anxiety (NESDA), baseline data from 381 controls (aged 18-65) without previous, current and parental depression and anxiety disorders were analyzed. Psychological measures included the Big Five personality traits (neuroticism, extraversion, openness to experience, conscientiousness, and agreeableness) measured using the NEO-FFI, anxiety sensitivity assessed by the Anxiety Sensitivity Index, cognitive reactivity to sadness (hopelessness, acceptance/coping, aggression, control/perfectionism, risk aversion, and rumination) as measured by the LEIDS-R questionnaire, and mastery, assessed using the Pearlin and Schooler Mastery scale. Salivary cortisol levels were measured at awakening, and 30, 45, and 60 min afterwards.
RESULTS
In adjusted analyses, high scores of hopelessness reactivity (β=.13, p=.02) were consistently associated with a higher cortisol awakening response. In addition, although inconsistent across analyses, persons scoring higher on extraversion, control/perfectionism reactivity, and mastery tended to show a slightly flatter CAR. No significant associations were found for neuroticism, openness to experience, agreeableness, conscientiousness, anxiety sensitivity, and acceptance/coping, aggression, or risk aversion reactivity.
CONCLUSION
Of various psychological traits, only hopelessness reactivity, a trait that has been associated with depression and suicidality, is consistently associated with HPA axis dysregulation. Hopelessness reactivity may represent a predisposing vulnerability for the development of a depressive or anxiety disorder, possibly in part mediated by HPA axis activity. |
Genes for elite power and sprint performance: ACTN3 leads the way. | The ability of skeletal muscles to produce force at a high velocity, which is crucial for success in power and sprint performance, is strongly influenced by genetics, and without the appropriate genetic make-up an individual reduces his/her chances of becoming an exceptional power or sprint athlete. Several genetic variants (i.e. polymorphisms) have been associated with elite power and sprint performance in the last few years and the current paradigm is that elite performance is a polygenic trait, with minor contributions of each variant to the unique athletic phenotype. The purpose of this review is to summarize the specific knowledge in the field of genetics and elite power performance, and to provide some future directions for research in this field. Of the polymorphisms associated with elite power and sprint performance, the α-actinin-3 R577X polymorphism provides the most consistent results. ACTN3 is the only gene that shows a genotype and performance association across multiple cohorts of elite power athletes, and this association is strongly supported by mechanistic data from an Actn3 knockout mouse model. The angiotensin-1 converting enzyme insertion/deletion polymorphism (ACE I/D, registered single nucleotide polymorphism [rs]4646994), angiotensinogen (AGT Met235Thr rs699), skeletal adenosine monophosphate deaminase (AMPD1) Gln(Q)12Ter(X) [also termed C34T, rs17602729], interleukin-6 (IL-6 -174 G/C, rs1800795), endothelial nitric oxide synthase 3 (NOS3 -786 T/C, rs2070744; and Glu298Asp, rs1799983), peroxisome proliferator-activated receptor-α (PPARA Intron 7 G/C, rs4253778), and mitochondrial uncoupling protein 2 (UCP2 Ala55Val, rs660339) polymorphisms have also been associated with elite power performance, but the findings are less consistent. In general, research into the genetics of athletic performance is limited by small sample sizes in individual studies and the heterogeneity of study samples, often including athletes from multiple different sporting disciplines. In the future, large, homogeneous, strictly defined elite power athlete cohorts need to be established through multinational collaboration, so that meaningful genome-wide association studies can be performed. Such an approach would provide unbiased identification of potential genes that influence elite athletic performance.
Pedestrian intention recognition using Latent-dynamic Conditional Random Fields | We present a novel approach for pedestrian intention recognition for advanced video-based driver assistance systems using a Latent-dynamic Conditional Random Field model. The model integrates pedestrian dynamics and situational awareness using observations from a stereo-video system for pedestrian detection and human head pose estimation. The model is able to capture both intrinsic and extrinsic class dynamics. Evaluation of our method is performed on a publicly available dataset addressing scenarios of laterally approaching pedestrians that might cross the road, turn into the road or stop at the curbside. During experiments, we demonstrate that the proposed approach leads to better stability and class separation compared to state-of-the-art pedestrian intention recognition approaches.
The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting | With the rapid development of robots and other intelligent and autonomous agents, how a human could be influenced by a robot's expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision-making behavioral model; (2) how and to what extent the human will be influenced in a game-theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human. We investigate the behavioral model of the human player.
Apollo: Scalable and Coordinated Scheduling for Cloud-Scale Computing | Efficiently scheduling data-parallel computation jobs over cloud-scale computing clusters is critical for job performance, system throughput, and resource utilization. It is becoming even more challenging with growing cluster sizes and more complex workloads with diverse characteristics. This paper presents Apollo, a highly scalable and coordinated scheduling framework, which has been deployed on production clusters at Microsoft to schedule thousands of computations with millions of tasks efficiently and effectively on tens of thousands of machines daily. The framework performs scheduling decisions in a distributed manner, utilizing global cluster information via a loosely coordinated mechanism. Each scheduling decision considers future resource availability and optimizes various performance and system factors together in a single unified model. Apollo is robust, with means to cope with unexpected system dynamics, and can take advantage of idle system resources gracefully while supplying guaranteed resources when needed. |
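The completion-time-estimate placement at the heart of Apollo can be caricatured in a few lines of Python: pick the server that minimizes estimated wait plus data transfer plus run time. The server fields, the wait-time function, and the cost terms below are hypothetical simplifications of the paper's model, not its actual interfaces.

```python
def choose_server(task, servers, now):
    """
    Estimation-based placement sketch: estimated completion time =
    expected queueing wait (e.g. from a published wait-time matrix)
    + data-transfer time + estimated run time.
    """
    def estimated_completion(s):
        wait = s["expected_wait"](task["resources"])
        io = task["input_bytes"] / s["bandwidth_from"](task["data_location"])
        return now + wait + io + task["estimated_runtime"]
    return min(servers, key=estimated_completion)

# Hypothetical usage: a lightly loaded remote server can still lose to a
# busier server that holds the task's input data locally.
servers = [
    {"name": "s1", "expected_wait": lambda r: 2.0,
     "bandwidth_from": lambda loc: 1e9 if loc == "s1" else 1e8},
    {"name": "s2", "expected_wait": lambda r: 0.5,
     "bandwidth_from": lambda loc: 1e9 if loc == "s2" else 1e8},
]
task = {"resources": 1, "input_bytes": 5e8, "estimated_runtime": 10.0,
        "data_location": "s1"}
print(choose_server(task, servers, now=0.0)["name"])   # s1: locality wins
```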
Towards Automatically Classifying Depressive Symptoms from Twitter Data for Population Health | Major depressive disorder, a debilitating and burdensome disease experienced by individuals worldwide, can be defined by several depressive symptoms (e.g., anhedonia (inability to feel pleasure), depressed mood, difficulty concentrating, etc.). Individuals often discuss their experiences with depression symptoms on public social media platforms like Twitter, providing a potentially useful data source for monitoring population-level mental health risk factors. In a step towards developing an automated method to estimate the prevalence of symptoms associated with major depressive disorder over time in the United States using Twitter, we developed classifiers for discerning whether a Twitter tweet represents no evidence of depression or evidence of depression. If there was evidence of depression, we then classified whether the tweet contained a depressive symptom and if so, which of three subtypes: depressed mood, disturbed sleep, or fatigue or loss of energy. We observed that the most accurate classifiers could predict classes with high-to-moderate F1-score performances for no evidence of depression (85), evidence of depression (52), and depressive symptoms (49). We report moderate F1-scores for depressive symptoms ranging from 75 (fatigue or loss of energy) to 43 (disturbed sleep) to 35 (depressed mood). Our work demonstrates baseline approaches for automatically encoding Twitter data with granular depressive symptoms associated with major depressive disorder. |
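A baseline classifier for this kind of task can be sketched with scikit-learn. The tweets, labels, and model choice below are invented placeholders, not the paper's data, annotation scheme, or best-performing features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples using the paper's coarse label set.
tweets = ["cant sleep again, 4am and wide awake",
          "so tired all the time, no energy for anything",
          "nothing feels fun anymore",
          "great run this morning, feeling fresh"]
labels = ["disturbed_sleep", "fatigue_or_loss_of_energy",
          "depressed_mood", "no_evidence"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print(clf.predict(["exhausted even after sleeping all day"]))
```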
A Decade of Lattice Cryptography | Lattice-based cryptography is the use of conjectured hard problems on point lattices in R^n as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks (in contrast with most number-theoretic cryptography), high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution (SIS) and learning with errors (LWE) problems (and their more efficient ring-based variants), their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications.
The ContikiMAC Radio Duty Cycling Protocol | Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power-efficient wake-up mechanism with a set of timing constraints to allow devices to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.
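A back-of-the-envelope estimate shows how periodic wake-ups of this kind keep the radio off roughly 99% of the time. The timing numbers below are illustrative assumptions, not measurements from the report.

```python
def radio_duty_cycle(wakeups_per_s=8, cca_ms=0.192 * 2, rx_window_ms=2.0,
                     rx_events_per_s=0.1):
    """
    Radio-on fraction for a ContikiMAC-style duty cycle: periodic wake-ups
    performing two short clear-channel-assessment (CCA) checks, plus an
    occasional longer listen when a transmission is detected.
    """
    idle_listen = wakeups_per_s * cca_ms / 1000.0     # seconds on per second
    reception = rx_events_per_s * rx_window_ms / 1000.0
    return idle_listen + reception

# ~0.33% with the illustrative defaults, i.e. the radio is off >99% of the time.
print(f"radio on {100 * radio_duty_cycle():.2f}% of the time")
```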
Complete and robust no-fit polygon generation for the irregular stock cutting problem | The no-fit polygon is a construct that can be used between pairs of shapes for fast and efficient handling of geometry within irregular two-dimensional stock cutting problems. Previously, the no-fit polygon (NFP) has not been widely applied because of the perception that it is difficult to implement and because of the lack of generic approaches that can cope with all problem cases without specific case-by-case handling. This paper introduces a robust orbital method for the creation of no-fit polygons which does not suffer from the typical problem cases found in the other approaches from the literature. Furthermore, the algorithm only involves two simple geometric stages so it is easily understood and implemented. We demonstrate how the approach handles known degenerate cases such as holes, interlocking concavities and jigsaw type pieces and we give generation times for 32 irregular packing benchmark problems from the literature, including real world datasets, to allow further comparison with existing and future approaches. |
Effects of a Dynamic Warm-Up, Static Stretching or Static Stretching with Tendon Vibration on Vertical Jump Performance and EMG Responses | The purpose of this study was to investigate the short-term effects of static stretching, with vibration given directly over the Achilles tendon, on electromyographic (EMG) responses and vertical jump (VJ) performances. Fifteen male college athletes voluntarily participated in this study (n=15; age: 22±4 years; body height: 181±10 cm; body mass: 74±11 kg). All stages were completed within 90 minutes for each participant. Tendon vibration bouts lasted 30 seconds at 50 Hz for each volunteer. EMG analyses of the peripheral silent period, H-reflex, H-reflex threshold, T-reflex, and H/M ratio were completed for each experimental phase. EMG data were obtained from the soleus muscle in response to electrical stimulation of the posterior tibial nerve at the popliteal fossa. As expected, the dynamic warm-up (DW) increased VJ performances (p=0.004). The increased VJ performances after the DW were not statistically substantiated by the EMG findings. In addition, the EMG results did not indicate that either static stretching (SS) or tendon vibration combined with static stretching (TVSS) had any detrimental or facilitatory effect on vertical jump performances. In conclusion, using TVSS does not seem to facilitate warm-up effects before explosive performance. |
A smart bed platform for monitoring & Ulcer prevention | The focus of this paper is to develop a software-hardware platform that addresses one of the most costly acute health conditions, pressure ulcers, or bed sores. Caring for pressure ulcers is extremely costly, increases the length of hospital stays, and is very labor intensive. The proposed platform collects information from various sensors incorporated into the bed, analyzes the data to create a time-stamped, whole-body pressure distribution map, and commands the bed's actuators to periodically adjust its surface profile to redistribute pressure over the entire body. These capabilities are combined to form a cognitive support system that augments the ability of a caregiver, allowing them to provide better care to more patients in less time. For proof of concept, we have implemented algorithms and architectures that cover four key aspects of this platform: 1) data collection, 2) modeling & profiling, 3) machine learning, and 4) acting. |
Optimistic Parallel State-Machine Replication | State-machine replication, a fundamental approach to fault tolerance, requires replicas to execute commands deterministically, which usually results in sequential execution of commands. Sequential execution limits performance and under-uses servers, which are increasingly parallel (i.e., multicore). To narrow the gap between state-machine replication requirements and the characteristics of modern servers, researchers have recently come up with alternative execution models. This paper surveys existing approaches to parallel state-machine replication and proposes a novel optimistic protocol that inherits the scalable features of previous techniques. Using a replicated B+-tree service, we demonstrate in the paper that our protocol outperforms the most efficient techniques by a factor of 2.4. |
Metasurface holograms reaching 80% efficiency. | Surfaces covered by ultrathin plasmonic structures, so-called metasurfaces, have recently been shown to be capable of completely controlling the phase of light, representing a new paradigm for the design of innovative optical elements such as ultrathin flat lenses, directional couplers for surface plasmon polaritons and wave plates for vortex beam generation. Among the various types of metasurfaces, geometric metasurfaces, which consist of an array of plasmonic nanorods with spatially varying orientations, have shown superior phase control due to the geometric nature of their phase profile. Metasurfaces have recently been used to make computer-generated holograms, but the hologram efficiency remained too low at visible wavelengths for practical purposes. Here, we report the design and realization of a geometric metasurface hologram reaching diffraction efficiencies of 80% at 825 nm and a broad bandwidth between 630 nm and 1,050 nm. The 16-level-phase computer-generated hologram demonstrated here combines the advantages of a geometric metasurface for the superior control of the phase profile and of reflectarrays for achieving high polarization conversion efficiency. Specifically, the design of the hologram integrates a ground metal plane with a geometric metasurface that enhances the conversion efficiency between the two circular polarization states, leading to high diffraction efficiency without complicating the fabrication process. Because of these advantages, our strategy could be viable for various practical holographic applications. |
Overview of the ImageCLEF 2018 Caption Prediction Tasks | The caption prediction task ran in 2018 in its second edition, after first being run in the same format in 2017. For 2018 the database was more focused on clinical images to limit diversity. As automatic methods with limited manual control were used to select images, there is still an important diversity remaining in the image data set. Participation was relatively stable compared to 2017. Usage of external data was restricted in 2018 to limit critical remarks regarding the use of external resources by some groups in 2017. Results show that this is a difficult task but that large amounts of training data can make it possible to detect the general topics of an image from the biomedical literature. For an even better comparison it seems important to filter the concepts for the images that are made available. Very general concepts (such as “medical image”) need to be removed, as they are not specific to the images shown, and extremely rare concepts with only one or two examples cannot really be learned. Providing more coherent training data or larger quantities can also help in learning such complex models. |
HeartToGo: A Personalized medicine technology for cardiovascular disease prevention and detection | To date, cardiovascular disease (CVD) is the leading cause of death globally. The electrocardiogram (ECG) is the most widely adopted clinical tool that measures and records the electrical activity of the heart from the body surface. Mainstream resting ECG machines for CVD diagnosis and supervision can be ineffective in detecting abnormal transient heart activities, which may not occur during an individual's hospital visit. Common Holter-based portable solutions offer 24-hour ECG recording, containing hundreds of thousands of heart beats that not only are tedious and time-consuming to analyze manually but also lack the capability to provide any real-time feedback. In this study, we seek to establish a cell phone-based personalized medicine technology for CVD, capable of performing continuous monitoring and recording of ECG in real time, generating individualized cardiac health summary reports in layman's language, and automatically detecting abnormal CVD conditions and classifying them at any place and anytime. Specifically, we propose to develop an artificial neural network (ANN)-based machine learning technique, combining both individualized medical information and clinical ECG database data, to train the cell phone to learn to adapt to its user's physiological conditions to achieve better ECG feature extraction and more accurate CVD classification results. |
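A minimal sketch of the ANN-classification idea described above, assuming beat-level features have already been extracted from the ECG; the feature set, labels, and data below are invented placeholders, not the paper's pipeline:

```python
# Invented placeholder: beat-level ECG features -> small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # e.g. RR interval, QRS width, ... (toy)
y = rng.integers(0, 2, size=200)     # 0 = normal beat, 1 = abnormal (toy)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```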
How to evaluate the microcirculation: report of a round table conference | INTRODUCTION
Microvascular alterations may play an important role in the development of organ failure in critically ill patients, especially in sepsis. Recent advances in technology have allowed visualization of the microcirculation, but several scoring systems have been used, so it is sometimes difficult to compare studies. This paper reports the results of a round table conference that was organized in Amsterdam in November 2006 in order to achieve consensus on image acquisition and analysis.
METHODS
The participants convened to discuss the various aspects of image acquisition and the different scores, and a consensus statement was drafted using the Delphi methodology.
RESULTS
The participants identified the following five key points for optimal image acquisition: five sites per organ, avoidance of pressure artifacts, elimination of secretions, adequate focus and contrast adjustment, and recording quality. The scores that can be used to describe numerically the microcirculatory images consist of the following: a measure of vessel density (total and perfused vessel density); two indices of perfusion of the vessels (proportion of perfused vessels and microcirculatory flow index); and a heterogeneity index. In addition, this information should be provided for all vessels and for small vessels (mostly capillaries), identified as smaller than 20 μm. Venular perfusion should be reported as a quality control index, because venules should always be perfused in the absence of pressure artifact. Although this information is currently obtained manually, it is anticipated that image analysis software will ease analysis in the future.
CONCLUSION
We proposed that scoring of the microcirculation should include an index of vascular density, assessment of capillary perfusion and a heterogeneity index. |
Embolotherapy for Neuroendocrine Tumor Liver Metastases: Prognostic Factors for Hepatic Progression-Free Survival and Overall Survival | The purpose of the study was to evaluate prognostic factors for survival outcomes following embolotherapy for neuroendocrine tumor (NET) liver metastases. This was a multicenter retrospective study of 155 patients (mean age, 60 years; 57% male) with NET liver metastases from pancreas (n = 71), gut (n = 68), lung (n = 8), or other/unknown (n = 8) primary sites treated with conventional transarterial chemoembolization (TACE, n = 50), transarterial radioembolization (TARE, n = 64), or transarterial embolization (TAE, n = 41) between 2004 and 2015. Patient-, tumor-, and treatment-related factors were evaluated for prognostic effect on hepatic progression-free survival (HPFS) and overall survival (OS) using unadjusted and propensity score-weighted univariate and multivariate Cox proportional hazards models. Median HPFS and OS were 18.5 and 125.1 months for G1 (n = 75), 12.2 and 33.9 months for G2 (n = 60), and 4.9 and 9.3 months for G3 tumors (n = 20), respectively (p < 0.05). Tumor burden >50% hepatic volume demonstrated 5.5- and 26.8-month shorter median HPFS and OS, respectively, versus burden ≤50% (p < 0.05). There were no significant differences in HPFS or OS between gut and pancreas primaries. In multivariate HPFS analysis, there were no significant differences among embolotherapy modalities. In multivariate OS analysis, TARE had a higher hazard ratio than TACE (unadjusted Cox model: HR 2.1, p = 0.02; propensity score adjusted model: HR 1.8, p = 0.11), while TAE did not differ significantly from TACE. Higher tumor grade and tumor burden prognosticated shorter HPFS and OS. TARE had a higher hazard ratio for OS than TACE. There were no significant differences in HPFS among embolotherapy modalities. |
Community structure in social and biological networks. | A number of recent studies have focused on the statistical properties of networked systems such as social networks and the World Wide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known, a collaboration network and a food web, and find that it detects significant and informative community divisions in both cases. |
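The betweenness-based method proposed above is available in NetworkX as `girvan_newman`; a minimal sketch on the karate-club benchmark (a standard graph with a known two-community split; the library and dataset choices are mine) looks like this:

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()             # benchmark graph with a known split
communities = next(girvan_newman(G))   # first split, found by repeatedly
                                       # removing highest-betweenness edges
print([sorted(c) for c in communities])
```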
Planning Problems in Intermodal Freight Transport: Accomplishments and Prospects | Intermodal freight transport has received increased attention due to problems of road congestion, environmental concerns and traffic safety. A growing recognition of the strategic importance of speed and agility in the supply chain is forcing firms to reconsider traditional logistic services. As a consequence, research interest in intermodal freight transportation problems is growing. This paper provides an overview of planning decisions in intermodal freight transport and solution methods proposed in scientific literature. Planning problems are classified according to type of decision maker and decision level. General conclusions are given and subjects for further research are identified. |
Organizations as complex adaptive systems: Implications of Complexity Theory for leadership research | This article contrasts the assumptions of General Systems Theory, the framework for much prior leadership research, with those of Complexity Theory, to further develop the latter's implications for the definition of leadership and the leadership process. We propose that leadership in a Complex Adaptive System (CAS) may affect the organization indirectly, through the mediating variables of organizational identity and social movements. A rudimentary model of leadership in a CAS is presented. We then outline two non-linear methodologies, dynamic systems simulation and artificial neural networks, as appropriate to enable development and testing of a model of leadership under the assumptions of Complexity Theory. |
Minimal Gated Unit for Recurrent Neural Networks | Recently, recurrent neural networks (RNNs) have been very successful in handling sequence data. However, understanding RNNs and finding the best practices for them is a difficult task, partly because there are many competing and complex hidden units (such as the LSTM and the GRU). We propose a gated unit for RNNs, named the Minimal Gated Unit (MGU), which contains only one gate and is thus a minimal design among all gated hidden units. The design of MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that MGU has comparable accuracy with GRU, but has a simpler structure, fewer parameters, and faster training. Hence, MGU is suitable for RNN applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study MGU's properties theoretically and empirically. |
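A minimal NumPy sketch of the single-gate cell described above, assuming the standard MGU updates f_t = sigmoid(W_f x_t + U_f h_{t-1} + b_f), h~_t = tanh(W_h x_t + U_h (f_t * h_{t-1}) + b_h), h_t = (1 - f_t) * h_{t-1} + f_t * h~_t; initialization and sizes are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MGUCell:
    """Single-gate recurrent cell following the MGU update equations."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        def mat(rows, cols):
            return rng.normal(0.0, 0.1, size=(rows, cols))
        # One gate (f) plus the candidate state: two weight sets in total,
        # versus three for GRU and four for LSTM.
        self.Wf, self.Uf = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wh, self.Uh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.bf, self.bh = np.zeros(hidden_size), np.zeros(hidden_size)

    def step(self, x, h_prev):
        f = sigmoid(self.Wf @ x + self.Uf @ h_prev + self.bf)            # forget gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (f * h_prev) + self.bh)
        return (1.0 - f) * h_prev + f * h_tilde                          # new state

cell = MGUCell(input_size=4, hidden_size=3)
h = np.zeros(3)
for x in np.random.default_rng(1).normal(size=(5, 4)):  # toy length-5 sequence
    h = cell.step(x, h)
print(h)
```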
Dissolution enhancement of glimepiride using modified gum karaya as a carrier | OBJECTIVE
The aim of the present investigation was to enhance the in vitro dissolution of the poorly soluble drug glimepiride by preparing solid dispersions using modified gum karaya.
MATERIALS AND METHODS
Solid dispersions of the drug were prepared by the solvent evaporation method using modified gum karaya as the carrier. Four batches of solid dispersions (SD1, SD4, SD9, and SD14) and physical mixtures (PM1, PM4, PM9, and PM14) were prepared and characterized by differential scanning calorimetry (DSC), Fourier transform infrared (FTIR) spectroscopy, powder X-ray diffraction (X-RD), and scanning electron microscopy (SEM) studies. Equilibrium solubility studies were carried out in a shaker incubator for 24 h, and in vitro drug release was determined using USP Dissolution Apparatus-II.
RESULTS
Maximum solubility and in vitro dissolution were observed with batch SD4. No significant enhancement of dissolution characteristics was observed in the corresponding physical mixture PM4. The low viscosity of the modified gum karaya, with swelling characteristics comparable to those of unmodified gum karaya (GK), may explain the improved dissolution behavior of the solid dispersion batches. The conversion of the crystalline form of the drug to the amorphous form may also be a responsible factor, which was further confirmed by DSC, FTIR, and X-RD studies. SEM photographs of batch SD4 revealed the porous nature of the particle surface.
CONCLUSION
Modified forms of natural carriers proved beneficial in enhancing the dissolution of poorly soluble drugs and exhibited great potential for novel drug delivery systems. |
A Single-Item Inventory Model for a Nonstationary Demand Process | In this paper, we consider an adaptive base-stock policy for a single-item inventory system, where the demand process is non-stationary. In particular, the demand process is an integrated moving average process of order (0, 1, 1), for which an exponential-weighted moving average provides the optimal forecast. For the assumed control policy we characterize the inventory random variable and use this to find the safety stock requirements for the system. From this characterization, we see that the required inventory, both in absolute terms and as it depends on the replenishment lead-time, behaves much differently for this case of non-stationary demand compared with stationary demand. We then show how the single-item model extends to a multistage, or supply-chain context; in particular we see that the demand process for the upstream stage is not only non-stationary but also more variable than that for the downstream stage. We also show that for this model there is no value from letting the upstream stages see the exogenous demand. The paper concludes with some observations about the practical implications of this work. |
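A small simulation sketch of the demand model and forecast described above: an IMA(0,1,1) process, for which the exponentially weighted moving average with weight alpha = 1 - theta gives the optimal one-step forecast. Parameter values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, T = 0.7, 10.0, 10_000   # MA parameter, noise std (invented)
alpha = 1.0 - theta                   # EWMA weight giving the optimal forecast

# IMA(0,1,1) demand: D_t = D_{t-1} + eps_t - theta * eps_{t-1}
eps = rng.normal(0.0, sigma, T)
D = np.empty(T)
D[0] = 100.0
for t in range(1, T):
    D[t] = D[t - 1] + eps[t] - theta * eps[t - 1]

# EWMA forecast of next-period demand: F_t = alpha*D_{t-1} + (1-alpha)*F_{t-1}
F = np.empty(T)
F[0] = D[0]
for t in range((1), T):
    F[t] = alpha * D[t - 1] + (1.0 - alpha) * F[t - 1]

err = D[1:] - F[1:]
print("forecast-error std:", err.std())  # approaches sigma for this alpha
```

Safety stock for the adaptive base-stock policy then scales with the standard deviation of the lead-time forecast error, which, as the paper shows, grows with the replenishment lead-time much faster for this non-stationary process than under stationary demand.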
Non-valvular atrial fibrillation in the elderly; preliminary results from the National AFTER (Atrial Fibrillation in Turkey: Epidemiologic Registry) Study. | OBJECTIVE
This study aimed to assess the clinical approach to atrial fibrillation (AF) in the older population, and its consistency with the guidelines, based on the records of the multicenter, prospective AFTER (Atrial Fibrillation in Turkey: Epidemiologic Registry) study.
PATIENTS AND METHODS
A total of 2242 consecutive patients with at least one AF episode documented on electrocardiographic examination, admitted to the cardiology outpatient clinics of 17 different tertiary health care centers, were included in the study. Among them, 631 individuals aged 75 years and older were analyzed.
RESULTS
The mean age of the patients was 80.3±4.2 years. The most frequent type of AF in the geriatric population was the persistent-permanent type (88%), and 60% of the patients with AF were female. Hypertension was the most common co-morbidity in patients with AF (76%). A history of stroke, transient ischemic attack or systemic thromboembolism was present in 16% of patients, and a history of bleeding in 14%. Overall, 37% of the patients were on warfarin treatment and 60% were on aspirin treatment. Among the patients on oral anticoagulant treatment, the INR level was in the effective range in 38%.
CONCLUSIONS
The rate of anticoagulant use among the elderly with AF was only 37%, and since the main reason was that the medication was simply not prescribed by the physician, more attention should be paid to this population, particularly with respect to treatment. |
Adaptive Control For Space-Station Joints | |
When Face Recognition Meets with Deep Learning: An Evaluation of Convolutional Neural Networks for Face Recognition | Deep learning, in particular the Convolutional Neural Network (CNN), has achieved promising results in face recognition recently. However, it remains an open question why CNNs work well and how to design a 'good' architecture. The existing works tend to focus on reporting CNN architectures that work well for face recognition rather than investigating the reason. In this work, we conduct an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a common ground to make our work easily reproducible. Specifically, we use the public LFW (Labeled Faces in the Wild) database to train CNNs, unlike most existing CNNs trained on private databases. We propose three CNN architectures which are the first reported architectures trained using LFW data. This paper quantitatively compares the architectures of CNNs and evaluates the effect of different implementation choices. We identify several useful properties of CNN-FRS. For instance, the dimensionality of the learned features can be significantly reduced without adverse effect on face recognition accuracy. In addition, a traditional metric learning method exploiting CNN-learned features is evaluated. Experiments show that two crucial factors for good CNN-FRS performance are the fusion of multiple CNNs and metric learning. To make our work reproducible, source code and models will be made publicly available. |
Introduction to Probabilistic Topic Models | Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends. |
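A small illustration of LDA topic discovery as reviewed above, using scikit-learn's implementation on an invented four-document corpus (the library choice and corpus are mine, not the article's):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the stock market fell as investors sold shares",
    "the team won the game with a late goal",
    "shares rallied after strong quarterly earnings",
    "the striker scored twice in the second half of the game",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)          # word-count matrix (documents x terms)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]             # four highest-weight words
    print(f"topic {k}:", [terms[i] for i in top])
```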
Diagnostic value of knee arthrometry in the prediction of anterior cruciate ligament strain during landing. | BACKGROUND
Previous studies have indicated that higher knee joint laxity may be indicative of an increased risk of anterior cruciate ligament (ACL) injuries. Despite the frequent clinical use of knee arthrometry in the evaluation of knee laxity, little data exist to correlate instrumented laxity measures and ACL strain during dynamic high-risk activities.
PURPOSE/HYPOTHESES
The purpose of this study was to evaluate the relationships between ACL strain and anterior knee laxity measurements using arthrometry during both a drawer test and simulated bipedal landing (as an identified high-risk injurious task). We hypothesized that a high correlation exists between dynamic ACL strain and passive arthrometry displacement. The secondary hypothesis was that anterior knee laxity quantified by knee arthrometry is a valid predictor of injury risk such that specimens with greater anterior knee laxity would demonstrate increased levels of peak ACL strain during landing.
STUDY DESIGN
Controlled laboratory study.
METHODS
Twenty cadaveric lower limbs (mean age, 46 ± 6 years; 10 female and 10 male) were tested using a CompuKT knee arthrometer to measure knee joint laxity. Each specimen was tested under 4 continuous cycles of anterior-posterior shear force (±134 N) applied to the tibial tubercle. To quantify ACL strain, a differential variable reluctance transducer (DVRT) was arthroscopically placed on the ACL (anteromedial bundle), and specimens were retested. Subsequently, bipedal landing from 30 cm was simulated in a subset of 14 specimens (mean age, 45 ± 6 years; 6 female and 8 male) using a novel custom-designed drop stand. Changes in joint laxity and ACL strain under applied anterior shear force were statistically analyzed using paired sample t tests and analysis of variance. Multiple linear regression analyses were conducted to determine the relationship between anterior shear force, anterior tibial translation, and ACL strain.
RESULTS
During simulated drawer tests, 134 N of applied anterior shear load produced a mean peak anterior tibial translation of 3.1 ± 1.1 mm and a mean peak ACL strain of 4.9% ± 4.3%. Anterior shear load was a significant determinant of anterior tibial translation (P < .0005) and peak ACL strain (P = .04). A significant correlation (r = 0.52, P < .0005) was observed between anterior tibial translation and ACL strain. Cadaveric simulations of landing produced a mean axial impact load of 4070 ± 732 N. Simulated landing significantly increased the mean peak anterior tibial translation to 10.4 ± 3.5 mm and the mean peak ACL strain to 6.8% ± 2.8% (P < .0005) compared with the prelanding condition. Significant correlations were observed between peak ACL strain during simulated landing and anterior tibial translation quantified by knee arthrometry.
CONCLUSION
Our first hypothesis is supported by a significant correlation between arthrometry displacement collected during laxity tests and concurrent ACL strain calculated from DVRT measurements. Experimental findings also support our second hypothesis that instrumented measures of anterior knee laxity predict peak ACL strain during landing, while specimens with greater knee laxity demonstrated higher levels of peak ACL strain during landing.
CLINICAL RELEVANCE
The current findings highlight the importance of instrumented anterior knee laxity assessments as a potential indicator of the risk of ACL injuries in addition to its clinical utility in the evaluation of ACL integrity. |