Effectiveness of HIV/STD sexual risk reduction groups for women in substance abuse treatment programs: results of NIDA Clinical Trials Network Trial.
CONTEXT Because drug-involved women are among the fastest growing groups with AIDS, sexual risk reduction intervention for them is a public health imperative. OBJECTIVE To test effectiveness of HIV/STD safer sex skills building (SSB) groups for women in community drug treatment. DESIGN Randomized trial of SSB versus standard HIV/STD Education (HE); assessments at baseline, 3 and 6 months. PARTICIPANTS Women recruited from 12 methadone or psychosocial treatment programs in Clinical Trials Network of National Institute on Drug Abuse. Five hundred fifteen women with ≥1 unprotected vaginal or anal sex occasion (USO) with a male partner in the past 6 months were randomized. INTERVENTIONS In SSB, five 90-minute groups used problem solving and skills rehearsal to increase HIV/STD risk awareness, condom use, and partner negotiation skills. In HE, one 60-minute group covered HIV/STD disease, testing, treatment, and prevention information. MAIN OUTCOME Number of USOs at follow-up. RESULTS A significant difference in mean USOs was obtained between SSB and HE over time (F = 67.2, P < 0.0001). At 3 months, significant decrements were observed in both conditions. At 6 months, SSB maintained the decrease and HE returned to baseline (P < 0.0377). Women in SSB had 29% fewer USOs than those in HE. CONCLUSIONS Skills building interventions can produce ongoing sexual risk reduction in women in community drug treatment.
Partonometry in W+ jet production
QCD predicts soft radiation patterns that are particularly simple for W+jet production. We demonstrate how these patterns can be used to distinguish between the parton-level subprocesses probabilistically on an event-by-event basis. As a test of our method we demonstrate correlations between the soft radiation and the radiation inside the outgoing jet. © 1997 The American Physical Society.
Graph Neural Networks: A Review of Methods and Applications
Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like texts and images, reasoning over extracted structures, like the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from their neighborhood with arbitrary depth. Although primitive GNNs were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on graph convolutional networks (GCNs) and gated graph neural networks (GGNNs) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.
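The message-passing mechanism described above can be illustrated with a minimal NumPy sketch; the mean aggregation and the single ReLU-activated linear update used here are illustrative assumptions, not a specific model from the survey.

```python
import numpy as np

def message_passing_layer(H, A, W):
    """One round of message passing: each node aggregates its neighbours'
    states (mean here) and updates its own state with a linear map + ReLU.
    H: (n, d) node states, A: (n, n) adjacency matrix, W: (d, d) weights."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9   # node degrees (avoid div by 0)
    messages = (A @ H) / deg                    # mean over each node's neighbours
    return np.maximum(0.0, messages @ W)        # updated node states

# Toy graph: three nodes in a chain 0-1-2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)                       # initial node features
W = np.random.randn(4, 4)                       # weights (random stand-in)
H = message_passing_layer(H, A, W)              # states now carry neighbourhood info
```

Stacking such layers lets a node's state reflect increasingly deep neighbourhoods, which is the "arbitrary depth" property mentioned above.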
A Hybrid Approach to Query Answering under Expressive Datalog+/-
Datalog+/- is a family of ontology languages that combine good computational properties with high expressive power. Datalog+/- languages are provably able to capture many relevant Semantic Web languages. In this paper we consider the class of weakly-sticky (WS) Datalog+/- programs, which allow for certain useful forms of joins in rule bodies as well as extending the well-known class of weakly-acyclic TGDs. So far, only nondeterministic algorithms were known for answering queries on WS Datalog+/- programs. We present novel deterministic query answering algorithms under WS Datalog+/-. In particular, we propose: (1) a bottom-up grounding algorithm based on a query-driven chase, and (2) a hybrid approach based on transforming a WS program into a so-called sticky one, for which query rewriting techniques are known. We discuss how our algorithms can be optimized and effectively applied for query answering in real-world scenarios.
MIX WITHOUT STIRRING: PUBLIC FINANCE AND PRIVATE MARKETS IN HEALTH INSURANCE
A decorative bag making apparatus 10 and method of use wherein the apparatus 10 comprises a sizing unit 11, a centerpiece unit 12, and a face unit 13 which cooperate with one another and a sheet of decorative material 100 to fabricate a finished decorative bag 110.
Sensing array of radically coupled genetic biopixels
Although there has been considerable progress in the development of engineering principles for synthetic biology, a substantial challenge is the construction of robust circuits in a noisy cellular environment. Such an environment leads to considerable intercellular variability in circuit behaviour, which can hinder functionality at the colony level. Here we engineer the synchronization of thousands of oscillating colony ‘biopixels’ over centimetre-length scales through the use of synergistic intercellular coupling involving quorum sensing within a colony and gas-phase redox signalling between colonies. We use this platform to construct a liquid crystal display (LCD)-like macroscopic clock that can be used to sense arsenic via modulation of the oscillatory period. Given the repertoire of sensing capabilities of bacteria such as Escherichia coli, the ability to coordinate their behaviour over large length scales sets the stage for the construction of low cost genetic biosensors that are capable of detecting heavy metals and pathogens in the field.
Serum zinc levels in patients with iron deficiency anemia and its association with symptoms of iron deficiency anemia
Iron deficiency anemia (IDA) is a major public health problem, especially in underdeveloped and developing countries. Zinc is a co-factor of several enzymes and plays a role in iron metabolism, so zinc deficiency is associated with IDA. In this study, we aimed to investigate the relationship between symptoms of IDA and zinc deficiency in adult IDA patients. The study included 43 IDA patients and 43 healthy control subjects. All patients were asked to provide a detailed history and were subjected to a physical examination. The hematological parameters evaluated included hemoglobin (Hb); hematocrit (Ht); red blood cell (erythrocyte) count (RBC); and the red cell indices mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), and red cell distribution width (RDW). Anemia was defined according to the criteria defined by the World Health Organization (WHO). Serum zinc levels were measured in the flame unit of an atomic absorption spectrophotometer. Symptoms attributed to iron deficiency or depletion, defined as fatigue, cardiopulmonary symptoms, mental manifestations, epithelial manifestations, and neuromuscular symptoms, were also recorded and categorized. Serum zinc levels were lower in anemic patients (103.51 ± 34.64 μg/dL) than in the control subjects (256.92 ± 88.54 μg/dL; p < 0.001). Patients with zinc levels < 99 μg/dL had significantly more frequent mental manifestations (p < 0.001), cardiopulmonary symptoms (p = 0.004), restless leg syndrome (p = 0.016), and epithelial manifestations (p < 0.001) than patients with zinc levels > 100 μg/dL. When the serum zinc level was compared with pica, no statistically significant correlation was found (p = 0.742). Zinc is a trace element that functions in several processes in the body, and zinc deficiency aggravates IDA symptoms. Measurement of zinc levels, and supplementation if necessary, should be considered for IDA patients.
Psychological pathways linking social support to health outcomes: a visit with the "ghosts" of research past, present, and future.
Contemporary models postulate the importance of psychological mechanisms linking perceived and received social support to physical health outcomes. In this review, we examine studies that directly tested the potential psychological mechanisms responsible for links between social support and health-relevant physiological processes (1980s-2010). Inconsistent with existing theoretical models, no evidence was found that psychological mechanisms such as depression, perceived stress, and other affective processes are directly responsible for links between support and health. We discuss the importance of considering statistical/design issues, emerging conceptual perspectives, and limitations of our existing models for future research aimed at elucidating the psychological mechanisms responsible for links between social support and physical health outcomes.
Accurate Measurement of the Air-Core Inductance of Iron-Core Transformers With a Non-Ideal Low-Power Rectifier
The air-core inductance of power transformers is measured using a nonideal low-power rectifier. Its dc output serves to drive the transformer into deep saturation, and its ripple provides low-amplitude variable excitation. The principal advantage of the proposed method is its simplicity. For validation, the experimental results are compared with 3-D finite-element simulations.
The DL-Lite Family and Relations
The recently introduced series of description logics under the common moniker 'DL-Lite' has attracted the attention of the description logic and Semantic Web communities due to the low computational complexity of inference, on the one hand, and the ability to represent conceptual modeling formalisms, on the other. The main aim of this article is to carry out a thorough and systematic investigation of inference in extensions of the original DL-Lite logics along five axes: by (i) adding the Boolean connectives and (ii) number restrictions to concept constructs, (iii) allowing role hierarchies, (iv) allowing role disjointness, symmetry, asymmetry, reflexivity, irreflexivity and transitivity constraints, and (v) adopting or dropping the unique name assumption. We analyze the combined complexity of satisfiability for the resulting logics, as well as the data complexity of instance checking and answering positive existential queries. Our approach is based on embedding DL-Lite logics in suitable fragments of one-variable first-order logic, which provides useful insights into their properties and, in particular, their computational behavior.
Multi-Head Attention with Disagreement Regularization
Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.
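A sketch of one of the three variants described (output-representation disagreement), expressed as a penalty on the mean pairwise cosine similarity between head outputs; the exact formulation and weighting in the paper may differ, so treat this as illustrative.

```python
import numpy as np

def head_similarity_penalty(head_outputs, eps=1e-9):
    """Mean pairwise cosine similarity between attention-head outputs.
    Adding lambda * penalty to the training loss discourages redundant heads,
    i.e. it encourages the heads' output representations to disagree.
    head_outputs: (num_heads, d) -- one (pooled) output vector per head."""
    normed = head_outputs / (np.linalg.norm(head_outputs, axis=1, keepdims=True) + eps)
    sims = normed @ normed.T                     # pairwise cosine similarities
    h = head_outputs.shape[0]
    return (sims.sum() - np.trace(sims)) / (h * (h - 1))

heads = np.random.randn(8, 64)                   # 8 heads, 64-dim outputs
penalty = head_similarity_penalty(heads)         # e.g. loss = nll + 0.1 * penalty
```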
Digital footprints: predicting personality from temporal patterns of technology use
Psychometric modeling using digital data traces is a growing field of research with a breadth of potential applications in marketing, personalization and psychological assessment. We present a novel form of digital traces for user modeling: temporal patterns of smartphone and personal computer activity. We show that some temporal activity metrics are highly correlated with certain Big Five personality metrics. We then present a machine learning method for binary classification of each Big Five personality trait using these temporal activity patterns of both computer and smartphones as model features. Our initial findings suggest that Extroversion, Openness, Agreeableness, and Neuroticism can be classified using temporal patterns of digital traces at a similar accuracy to previous research that classified personality traits using different types of digital traces.
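A sketch of the kind of pipeline the abstract describes: temporal activity features (here, an hour-of-day histogram of device events) fed to a binary classifier per trait. The feature choice, the median-split labels, and the logistic regression model are assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def hourly_activity_features(event_hours):
    """Fraction of a user's device events falling in each hour of the day."""
    hist, _ = np.histogram(np.asarray(event_hours) % 24, bins=24, range=(0, 24))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
# Toy data: 100 users x 500 event timestamps, and a binary trait label
# (e.g. above/below-median Extroversion).
X = np.stack([hourly_activity_features(rng.uniform(0, 24, 500)) for _ in range(100)])
y = rng.integers(0, 2, size=100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))      # on real data, evaluate with held-out users instead
```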
Control Dynamics of a doubly fed induction generator under sub- and super-synchronous modes of operation
Depending on wind speed, a doubly fed induction generator (DFIG)-based variable-speed wind turbine is capable of operating in the sub- or super-synchronous mode using a back-to-back PWM converter. A smooth transition between these two modes of operation is necessary for reliable operation of the wind turbine under fluctuating wind. This paper presents the analysis and modeling of a DFIG-based variable-speed wind turbine and investigates the control dynamics under the two modes of operation. A battery energy storage system (BESS) with a bidirectional DC-DC converter is added for a smooth transition between the modes. Mathematical analysis and the corresponding modeling results show that the power flow in the rotor circuit under the two modes can be controlled by changing the current and voltage phase sequence through the rotor side converter (RSC) and line side converter (LSC). A coordinated control among the RSC, LSC and DC-link storage system ensures variable-speed operation and maximum power extraction from the fluctuating wind and reduces the possibility of instability around synchronous speed. Extensive simulations have been conducted to investigate the control dynamics under the two modes of operation and during transitions.
Expert consensus document: The International Scientific Association for Probiotics and Prebiotics consensus statement on the scope and appropriate use of the term probiotic
An expert panel was convened in October 2013 by the International Scientific Association for Probiotics and Prebiotics (ISAPP) to discuss the field of probiotics. It is now 13 years since the definition of probiotics and 12 years after guidelines were published for regulators, scientists and industry by the Food and Agriculture Organization of the United Nations and the WHO (FAO/WHO). The FAO/WHO definition of a probiotic—“live microorganisms which when administered in adequate amounts confer a health benefit on the host”—was reinforced as relevant and sufficiently accommodating for current and anticipated applications. However, inconsistencies between the FAO/WHO Expert Consultation Report and the FAO/WHO Guidelines were clarified to take into account advances in science and applications. A more precise use of the term 'probiotic' will be useful to guide clinicians and consumers in differentiating the diverse products on the market. This document represents the conclusions of the ISAPP consensus meeting on the appropriate use and scope of the term probiotic.
Neural and behavioral responses to attractiveness in adult and infant faces
Facial attractiveness provides a very powerful motivation for sexual and parental behavior. We therefore review the importance of faces to the study of neurobiological control of human reproductive motivations. For heterosexual individuals there is a common brain circuit involving the nucleus accumbens, the medial prefrontal, dorsal anterior cingulate and the orbitofrontal cortices that is activated more by attractive than unattractive faces, particularly for faces of the opposite sex. Behavioral studies indicate parallel effects of attractiveness on incentive salience or willingness to work to see faces. There is some evidence that the reward value of opposite sex attractiveness is more pronounced in men than women, perhaps reflecting the greater importance assigned to physical attractiveness by men when evaluating a potential mate. Sex differences and similarities in response to facial attractiveness are reviewed. Studies comparing heterosexual and homosexual observers indicate the orbitofrontal cortex and mediodorsal thalamus are more activated by faces of the desired sex than faces of the less-preferred sex, independent of observer gender or sexual orientation. Infant faces activate brain regions that partially overlap with those responsive to adult faces. Infant faces provide a powerful stimulus, which also elicits sex differences in behavior and brain responses that appear dependent on sex hormones. There are many facial dimensions affecting perceptions of attractiveness that remain unexplored in neuroimaging, and we conclude by suggesting that future studies combining parametric manipulation of face images, brain imaging, hormone assays and genetic polymorphisms in receptor sensitivity are needed to understand the neural and hormonal mechanisms underlying reproductive drives.
THE MCKINSEY 7S MODEL FRAMEWORK FOR E-LEARNING SYSTEM READINESS ASSESSMENT
This study can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning systems. The paper proposes a new framework for assessing the readiness of an organization to implement an e-learning system project on the basis of the McKinsey 7S model using fuzzy logic analysis. The study considers seven dimensions as an approach to assessing the current situation of the organization prior to system implementation, in order to identify areas of weakness that may lead the project to failure. Questionnaires and group interviews were adopted to collect data from three colleges of Mosul University in Iraq. Success in building an e-learning system at the University of Mosul can be achieved through a readiness assessment based on this multidimensional 7S framework with 23 selected factors: failures and weaknesses facing the implementation process can be identified and avoided before the project starts, the administration is enabled to make decisions that lead to success in this area, and the high cost associated with a failed implementation can be avoided.
Keyflow: a prototype for evolving SDN toward core network fabrics
The large bulk of packets/flows in future core networks will require highly efficient header processing in the switching elements. Simplifying lookup in core network switching elements is crucial to transporting data at high rates and with low latency. Flexible network hardware combined with agile network control is also an essential property for future software-defined networking. We argue that only further decoupling between the control and data planes will unlock the flexibility and agility in SDN for the design of new network solutions for core networks. This article proposes a new approach named KeyFlow to build a flexible network-fabric-based model. It replaces the table lookup in the forwarding engine by elementary operations relying on a residue number system. This provides the tools to design a stateless core network while still using OpenFlow centralized control. A proof-of-concept prototype is validated using the Mininet emulation environment and OpenFlow 1.0. The results indicate RTT reduction above 50 percent, especially for networks with densely populated flow tables. KeyFlow achieves above 30 percent reduction in keeping active flow state in the network.
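A minimal sketch of the residue-number-system idea behind KeyFlow: if every core switch holds a key and the keys along a path are pairwise coprime, a single route label can be built with the Chinese Remainder Theorem so that `label mod key` yields the output port at each switch, replacing the table lookup with one modulo operation. The key values and port numbers below are illustrative, and the sketch ignores label-size and key-assignment details.

```python
from math import prod

def crt_route_label(keys, ports):
    """Build a route label L with L % key_i == port_i for every switch on the
    path (keys must be pairwise coprime).  Forwarding at a switch is then a
    single modulo operation on the label carried by the packet."""
    M = prod(keys)
    label = 0
    for k, p in zip(keys, ports):
        Mi = M // k
        label += p * Mi * pow(Mi, -1, k)     # pow(.., -1, k): modular inverse
    return label % M

keys  = [251, 241, 239]                      # coprime switch identifiers (illustrative)
ports = [3, 7, 1]                            # desired output port at each switch
L = crt_route_label(keys, ports)
assert all(L % k == p for k, p in zip(keys, ports))
```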
Transanal endoscopic total mesorectal excision: technical aspects of approaching the mesorectal plane from below—a preliminary report
Laparoscopic total mesorectal excision (TME) for low rectal cancer can be technically challenging. This report describes our initial experience with a hybrid laparoscopic and transanal endoscopic technique for TME in low rectal cancer. Between December 2012 and October 2013, we identified patients with rectal cancer < 5 cm from the anorectal junction (ARJ) who underwent laparoscopic-assisted TME with a transanal minimally invasive surgery (TAMIS) technique. A standardized stepwise approach was used in all patients. Resection specimens were examined for completeness and measurement of margins. Preoperative magnetic resonance imaging (MRI) characteristics and short-term postoperative outcomes were examined. All values are mean ± standard deviation. Ten patients (8 males; median age: 60.5 (range 36–70) years) were included. On initial MRI, all tumors were T2 or T3, mean tumor height from the ARJ was 28.9 ± 12.2 mm, mean circumferential resection margin was 5.3 ± 3.1 mm , and the mean angle between the anal canal and the levator ani was 83.9° ± 9.7°. All patients had had preoperative chemoradiotherapy, TME via TAMIS, and distal anastomosis. There were no intraoperative complications, anastomotic leaks, or 30-day mortality. The pathologic quality of all mesorectal specimens was excellent. The distal resection margin was 19.4 ± 10.4 mm, the mean circumferential resection margin was 13.8 ± 5.1 mm, and the median lymph node harvest was 10.5 (range 5–15) nodes. A combined laparoscopic and transanal approach can achieve a safe and oncologically complete TME dissection for low rectal tumors. This approach may improve clinical outcomes in these technically difficult cases, but larger prospective studies are needed.
Adhesive capsulitis: sonographic changes in the rotator cuff interval with arthroscopic correlation
To evaluate the sonographic findings of the rotator interval in patients with clinical evidence of adhesive capsulitis immediately prior to arthroscopy. We prospectively compared 30 patients with clinically diagnosed adhesive capsulitis (20 females, 10 males, mean age 50 years) with a control population of 10 normal volunteers and 100 patients with a clinical suspicion of rotator cuff tears. Grey-scale and colour Doppler sonography of the rotator interval were used. Twenty-six patients (87%) demonstrated hypoechoic echotexture and increased vascularity within the rotator interval, all of whom had had symptoms for less than 1 year. Three patients had hypoechoic echotexture but no increase in vascularity, and one patient had a normal sonographic appearance. All patients were shown to have fibrovascular inflammatory soft-tissue changes in the rotator interval at arthroscopy commensurate with adhesive capsulitis. None of the volunteers or the patients with a clinical diagnosis of rotator cuff tear showed such changes. Sonography can provide an early accurate diagnosis of adhesive capsulitis by assessing the rotator interval for hypoechoic vascular soft tissue.
A Rational Theory of Socialist Public Ownership
This paper asks why socialist economies were historically centered on public ownership of industry, despite its many drawbacks, and offers an explanation founded on rational individual choice. It first shows that the Marxian program of state socialism was subject to intense competition from alternative blueprints by the turn of the 20th century. It then argues that the superiority of the Marxian program lay in the contract enforcement property of an arrangement in which a politicized bureaucracy in charge of production was accountable to a party controlled by the workers. Formally, in a setting in which all participants are selfish and rational, the workers’ sole objective is redistribution, and an initial system choice has to be made. Party-state control of enterprises turns out to be the optimal contract between a principal (the workers) and its agent (the party) for a one-time transaction plagued by extreme informational asymmetry. Finally, modifications of this choice setting and implications for the decline of, and transition from, communism are discussed.
Your AP knows how you move: fine-grained device motion recognition through WiFi
Recent WiFi standards use Channel State Information (CSI) feedback for better MIMO and rate adaptation. CSI provides detailed information about current channel conditions for different subcarriers and spatial streams. In this paper, we show that CSI feedback from a client to the AP can be used to recognize different fine-grained motions of the client. We find that CSI can not only identify if the client is in motion or not, but also classify different types of motions. To this end, we propose APsense, a framework that uses CSI to estimate the sensor patterns of the client. It is observed that client's sensor (e.g. accelerometer) values are correlated to CSI values available at the AP. We show that using simple machine learning classifiers, APsense can classify different motions with accuracy as high as 90%.
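A sketch of the classification stage described above: summary features over windows of CSI amplitudes fed to an off-the-shelf classifier. The specific features and the SVM are assumptions for illustration; the paper's APsense framework and feature set may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def csi_window_features(window):
    """Per-subcarrier mean, variability and frame-to-frame change over a CSI
    amplitude window of shape (time_steps, subcarriers)."""
    return np.hstack([window.mean(axis=0),
                      window.std(axis=0),
                      np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Toy stand-in data: 200 windows of 50 frames x 30 subcarriers, 3 motion classes.
rng = np.random.default_rng(0)
X = np.stack([csi_window_features(rng.normal(size=(50, 30))) for _ in range(200)])
y = rng.integers(0, 3, size=200)
print(cross_val_score(SVC(), X, y, cv=5).mean())   # chance level on random data
```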
High D-dimer levels increase the likelihood of pulmonary embolism.
Objective. To determine the utility of high quantitative D-dimer levels in the diagnosis of pulmonary embolism. Methods. D-dimer testing was performed in consecutive patients with suspected pulmonary embolism. We included patients with suspected pulmonary embolism with a high risk for venous thromboembolism, i.e. hospitalized patients, patients older than 80 years, with malignancy or previous surgery. Presence of pulmonary embolism was based on a diagnostic management strategy using a clinical decision rule (CDR), D-dimer testing and computed tomography. Results. A total of 1515 patients were included with an overall pulmonary embolism prevalence of 21%. The pulmonary embolism prevalence was strongly associated with the magnitude of the D-dimer level, and increased fourfold with D-dimer levels greater than 4000 ng/mL compared to levels between 500 and 1000 ng/mL. Patients with D-dimer levels higher than 2000 ng/mL and an unlikely CDR had a pulmonary embolism prevalence of 36%. This prevalence is comparable to the pulmonary embolism likely CDR group. When D-dimer levels were above 4000 ng/mL, the observed pulmonary embolism prevalence was very high, independent of the CDR score. Conclusion. Strongly elevated D-dimer levels substantially increase the likelihood of pulmonary embolism. Whether this should translate into more intensive diagnostic and therapeutic measures in patients with high D-dimer levels irrespective of the CDR remains to be studied.
A Graph-Based Approach to the Modeling of Changes in Construction Projects
The implementation of changes in construction projects often causes deviations from the objectives of the project client. One of the causes of this problem is that the tools currently used in projects do not support the identification of the consequences of a change before it is implemented in the project. The objective of the present research is to develop a model which facilitates an automatic identification of proposed changes which may have a significant impact on the objectives of the project client. The present research adopts a graph-based approach to the modeling of construction projects. A graph-based model was found to provide a simple but informative representation of the structure of projects, which can facilitate an 'order of magnitude' approximation of the impact of proposed changes. Graph theory provides a wealth of concepts, tools and algorithms, which were used in the present research for an analysis of the impact of proposed changes.
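The 'order of magnitude' impact approximation can be sketched as reachability in a dependency graph: every component reachable from the changed component is potentially affected. The components and links below are invented for illustration; the paper's graph-theoretic analysis is richer than this.

```python
import networkx as nx

# Illustrative dependency graph: an edge means "a change here may propagate there".
G = nx.DiGraph([
    ("foundation design", "structural frame"),
    ("structural frame", "facade"),
    ("structural frame", "HVAC routing"),
    ("HVAC routing", "ceiling layout"),
])

def change_impact(graph, changed_component):
    """First-order impact of a proposed change: all components reachable
    from the changed one in the dependency graph."""
    return nx.descendants(graph, changed_component)

print(change_impact(G, "structural frame"))
# {'facade', 'HVAC routing', 'ceiling layout'}
```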
In vivo accuracy of multispectral magnetic resonance imaging for identifying lipid-rich necrotic cores and intraplaque hemorrhage in advanced human carotid plaques.
BACKGROUND High-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo. METHODS AND RESULTS Eighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85). CONCLUSIONS Multispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.
Bio‐optical and biogeochemical properties of different trophic regimes in oceanic waters
To examine the source and magnitude of the variability of bio-optical properties in open ocean, we simultaneously measured inherent optical properties (IOPs) and biogeochemical quantities during late summer from the eutrophic waters of the Moroccan upwelling to the oligotrophic waters of the northwestern Mediterranean and the ultraoligotrophic waters of the eastern Mediterranean. Vertical distributions of spectral absorption and attenuation coefficients were measured with a high-resolution in situ spectrophotometer (WETLabs ac9) together with biogeochemical measurements that included phytoplanktonic pigments and particulate organic carbon concentrations, particle size distributions, and picoplankton abundance. The variability in specific IOPs (i.e., per unit of biogeochemical constituent concentration) was examined, and an optical index of particle size was derived. The fine-scale vertical distributions of various biogeochemical properties were thus described from ac9 profiles. Particle attenuation and carbon budgets, estimated from a combination of optical and biogeochemical measurements, underlie a major contribution of nonalgal stocks in oceanic waters. We show that first-order variations in IOPs in oceanic waters are explained by the trophic state (i.e., chlorophyll a concentration) and that second-order variations are the result of changes in the composition of phytoplankton assemblage, the balance between algal and nonalgal stocks, and light-related processes (colored dissolved organic material photo-oxidation and algal photo-adaptation). At the interface between marine optics and biogeochemistry, bio-optical studies (Smith and Baker 1978) aim to characterize the biological and biogeochemical state of natural waters through their optical properties, and to quantify the role of the ocean in global biogeochemical (particularly carbon) budgets. These studies rely on the direct dependence of the water’s inherent optical properties (IOPs) and apparent optical properties on the concentration and nature of optically significant biogeochemical constituents. In open ocean case I waters (Morel and Maritorena 2001 and references therein), these constituents are, by definition, phytoplankton and their accompanying and covarying retinue of material with biological origin, namely nonalgal particles (including biogenous detritus and heterotrophic organisms) and yellow substances (so-called colored dissolved organic material [CDOM]). Thus, optical properties generally are modeled as a function of chlorophyll a concentration (Chl a; used as a proxy for phytoplankton) in generic remote sensing algorithms for case I waters (Morel and Maritorena 2001 and references therein).
Intensive exercise program after spinal cord injury (“Full-On”): study protocol for a randomized controlled trial
BACKGROUND Rehabilitation after spinal cord injury (SCI) has traditionally involved teaching compensatory strategies for identified impairments and deficits in order to improve functional independence. There is some evidence that regular and intensive activity-based therapies, directed at activation of the paralyzed extremities, promotes neurological improvement. The aim of this study is to compare the effects of a 12-week intensive activity-based therapy program for the whole body with a program of upper body exercise. METHODS/DESIGN A multicenter, parallel group, assessor-blinded randomized controlled trial will be conducted. One hundred eighty-eight participants with spinal cord injury, who have completed their primary rehabilitation at least 6 months prior, will be recruited from five SCI units in Australia and New Zealand. Participants will be randomized to an experimental or control group. Experimental participants will receive a 12-week program of intensive exercise for the whole body, including locomotor training, trunk exercises and functional electrical stimulation-assisted cycling. Control participants will receive a 12-week intensive upper body exercise program. The primary outcome is the American Spinal Injuries Association (ASIA) Motor Score. Secondary outcomes include measurements of sensation, function, pain, psychological measures, quality of life and cost effectiveness. All outcomes will be measured at baseline, 12 weeks, 6 months and 12 months by blinded assessors. Recruitment commenced in January 2011. DISCUSSION The results of this trial will determine the effectiveness of a 12-week program of intensive exercise for the whole body in improving neurological recovery after spinal cord injury. TRIAL REGISTRATION NCT01236976 (10 November 2010), ACTRN12610000498099 (17 June 2010).
On efficient mutual nearest neighbor query processing in spatial databases
This paper studies a new form of nearest neighbor queries in spatial databases, namely, mutual nearest neighbor (MNN) search. Given a set D of objects and a query object q, an MNN query returns from D the set of objects that are among the k1 (≥ 1) nearest neighbors (NNs) of q and, meanwhile, have q as one of their k2 (≥ 1) NNs. Although MNN queries are useful in many applications involving decision making, data mining, and pattern recognition, they cannot be efficiently handled by existing spatial query processing approaches. In this paper, we present the first piece of work for tackling MNN queries efficiently. Our methods utilize a conventional data-partitioning index (e.g., an R-tree) on the dataset, employ state-of-the-art database techniques including best-first-based k nearest neighbor (kNN) retrieval and reverse kNN search with TPL pruning, and make use of the advantages of batch processing and the reusing technique. An extensive empirical study, based on experiments performed using both real and synthetic datasets, has been conducted to demonstrate the efficiency and effectiveness of our proposed algorithms under various experimental settings. This paper studies a new form of nearest neighbor (NN) queries, namely mutual nearest neighbor (MNN) search. Given a dataset D, a query point q, and two parameters k1 and k2, an MNN query retrieves those objects p ∈ D such that p ∈ NN_k1(q) and q ∈ NN_k2(p), i.e., it requires each answer object to be one of the k1 nearest neighbors (NNs) of q and, meanwhile, to have q as one of its k2 NNs. Consequently, it considers not only the spatial proximity of the answer objects to q, but also the spatial proximity of q to the answer objects. In other words, the conventional NN query is asymmetric, while MNN retrieval is symmetric. Although it is well known that asymmetric NN search fits the requirements of lots of applications, there are still many other practical applications that require symmetric NN queries. Some real-life applications are presented as follows. Resource allocation. Consider that a logistic company A has six branches (labeled as p1, p2, p3, p4, p5, p6), as shown in Fig. 1a. In order to guarantee the quality of service, company A assigns each branch two nearby branches as backup to provide necessary supports in …
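A brute-force sketch of the MNN definition above, useful as a correctness baseline: it checks both directions of the neighbor relation directly. The paper's contribution is doing this efficiently with an R-tree, best-first kNN retrieval and reverse-kNN pruning, none of which is shown here.

```python
import numpy as np

def mutual_nn(D, q, k1, k2):
    """Return indices p such that p is among the k1 NNs of q in D and, in
    turn, q is among the k2 NNs of p (O(n^2) scan, for illustration only)."""
    dq = np.linalg.norm(D - q, axis=1)          # distances from q to every object
    candidates = np.argsort(dq)[:k1]            # the k1 NNs of q
    answers = []
    for p in candidates:
        dp = np.linalg.norm(D - D[p], axis=1)   # distances from p to every object
        dp[p] = np.inf                          # exclude p itself
        # q is a k2-NN of p if fewer than k2 objects lie closer to p than q does
        if np.sum(dp < dq[p]) < k2:
            answers.append(int(p))
    return answers

D = np.random.rand(200, 2)
print(mutual_nn(D, np.array([0.5, 0.5]), k1=5, k2=5))
```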
A Survey of How to Use Blockchain to Secure Internet of Things and the Stalker Attack
The Internet of Things (IoT) is increasingly a reality today. Nevertheless, some key challenges still need particular attention so that IoT solutions can further support the growing demand for connected devices and the services offered. Due to the potential relevance and sensitivity of these services, IoT solutions should address the security and privacy concerns surrounding these devices and the data they collect, generate, and process. Recently, Blockchain technology has gained much attention in IoT solutions. Its primary usage scenarios are in the financial domain, where Blockchain creates a promising world of applications and can be leveraged to solve security and privacy issues. However, this emerging technology has great potential in the most diverse technological areas and can significantly help achieve the Internet of Things vision in different aspects, increasing the capacity for decentralization, facilitating interactions, enabling new transaction models, and allowing autonomous coordination of devices. The goal of this paper is to present the concepts behind the structure and operation of Blockchain and, mainly, to analyze how this technology can be used to provide security and privacy in IoT. Finally, we present the stalker, a selfish-miner variant whose objective is to prevent a node from publishing its blocks on the main chain.
Numba: a LLVM-based Python JIT compiler
Dynamic, interpreted languages, like Python, are attractive for domain experts and scientists experimenting with new ideas. However, the performance of the interpreter is often a barrier when scaling to larger data sets. This paper presents a just-in-time compiler for Python that focuses on scientific and array-oriented computing. Starting with the simple syntax of Python, Numba compiles a subset of the language into efficient machine code that is comparable in performance to a traditional compiled language. In addition, we share our experience in building a JIT compiler using LLVM [1].
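A small example of the kind of array-oriented kernel the compiler targets, using Numba's standard @jit entry point; the function itself is arbitrary and chosen only because its nested loops are slow in the plain interpreter.

```python
import numpy as np
from numba import jit

@jit(nopython=True)            # compiled to machine code via LLVM on first call
def pairwise_distance_sum(x):
    """Sum of all pairwise Euclidean distances over the rows of x."""
    n, d = x.shape
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s = 0.0
            for k in range(d):
                s += (x[i, k] - x[j, k]) ** 2
            total += s ** 0.5
    return total

x = np.random.rand(500, 3)
print(pairwise_distance_sum(x))   # first call pays the JIT compilation cost
```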
Boosted Convolutional Neural Networks (BCNN) for Pedestrian Detection
A boosted convolutional neural network (BCNN) system is proposed to enhance pedestrian detection performance in this work. Inspired by the classic boosting idea, we develop a weighted loss function that emphasizes challenging samples in training a convolutional neural network (CNN). Two types of samples are considered challenging: 1) samples with detection scores falling near the decision boundary, and 2) temporally associated samples with inconsistent scores. A weighting scheme is designed for each of them. Finally, we train a boosted fusion layer to benefit from the integration of these two weighting schemes. We use the Fast-RCNN as the baseline, test the corresponding BCNN on the Caltech pedestrian dataset in the experiments, and show a significant performance gain of the BCNN over its baseline.
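A sketch of how the two weighting schemes could enter a per-sample loss; the specific weighting functions and the plain weighted cross-entropy below are illustrative assumptions, not the paper's exact formulation (which also adds a boosted fusion layer).

```python
import numpy as np

def challenge_weights(scores, assoc_scores, margin=0.2, alpha=1.0, beta=1.0):
    """Up-weight 'challenging' samples: scores near the decision boundary
    (|s - 0.5| small) and scores inconsistent with temporally associated
    detections of the same object."""
    boundary = np.exp(-np.abs(scores - 0.5) / margin)
    inconsistency = np.abs(scores - assoc_scores)
    return 1.0 + alpha * boundary + beta * inconsistency

def weighted_bce(scores, labels, weights, eps=1e-7):
    """Weighted binary cross-entropy used in place of the unweighted loss."""
    s = np.clip(scores, eps, 1.0 - eps)
    per_sample = -(labels * np.log(s) + (1 - labels) * np.log(1 - s))
    return float(np.mean(weights * per_sample))

scores = np.array([0.52, 0.95, 0.10, 0.48])
assoc  = np.array([0.50, 0.20, 0.12, 0.47])   # scores from associated frames
labels = np.array([1.0, 0.0, 0.0, 1.0])
print(weighted_bce(scores, labels, challenge_weights(scores, assoc)))
```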
Maintaining high vitamin A supplementation coverage in children: lessons from Niger.
In 1997, the reduction of child mortality became a policy priority for the Government of Niger because Niger's child mortality rate was the highest in the world. The Ministry of Public Health, Helen Keller International (HKI), and UNICEF spearheaded a coalition-building process linking vitamin A deficiency (VAD) control to national child survival goals. An evidence-based advocacy strategy was developed around the child survival benefits of adequate and sustained VAD control with one unambiguous message: "VAD control can avert over 25,000 child deaths per year." As a result, in 1997 Niger became one of the first countries in Africa to effectively integrate vitamin A supplementation into National Immunization Days (NIDs) for polio eradication. The challenge was then to provide children with a second annual dose of vitamin A. This led in 1999 to the first ever National Micronutrient Days (NMDs) in Africa. NMDs are mobilization campaigns in which caregivers are actively encouraged to take their children for the delivery of vitamin A supplements. Since 1999, the combination of NIDs and NMDs has ensured that over 80% of children 6 to 59 months of age receive two vitamin A doses annually. The success of NIDs/NMDs has relied on five pillars: leadership and ownership by the Ministry of Public Health; district-level planning and implementation; effective training and flexible delivery mechanisms; effective social information, communication, and mobilization; and responsiveness and flexibility of Ministry of Public Health and development partners. This successful approach has been widely disseminated, notably through the West African Nutrition Focal Points Network.
Roget's Thesaurus and Semantic Similarity
Roget’s Thesaurus has not been sufficiently appreciated in Natural Language Processing. We show that Roget's and WordNet are birds of a feather. In a few typical tests, we compare how the two resources help measure semantic similarity. One of the benchmarks is Miller and Charles’ list of 30 noun pairs to which human judges had assigned similarity measures. We correlate these measures with those computed by several NLP systems. The 30 pairs can be traced back to Rubenstein and Goodenough’s 65 pairs, which we have also studied. Our Roget’s-based system gets correlations of .878 for the smaller and .818 for the larger list of noun pairs; this is quite close to the .885 that Resnik obtained when he employed humans to replicate the Miller and Charles experiment. We further evaluate our measure by using Roget’s and WordNet to answer 80 TOEFL, 50 ESL and 300 Reader’s Digest questions: the correct synonym must be selected amongst a group of four words. Our system gets 78.75%, 82.00% and 74.33% of the questions respectively, better than any published results.
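For context, one of the WordNet-based measures that such Roget's-based scores are typically compared against can be computed with NLTK's WordNet interface; the snippet below shows plain path similarity on two noun pairs of the kind used in the Miller and Charles list (a baseline illustration, not the Roget's method itself).

```python
from nltk.corpus import wordnet as wn   # requires nltk and the 'wordnet' corpus

def max_path_similarity(w1, w2):
    """Best path-based similarity over all noun-sense pairs of the two words."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(w1, pos=wn.NOUN)
              for s2 in wn.synsets(w2, pos=wn.NOUN)]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else 0.0

print(max_path_similarity("car", "automobile"))   # same synset -> 1.0
print(max_path_similarity("noon", "string"))      # unrelated pair -> much lower
```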
Smart Sensor Systems for Wearable Electronic Devices
Wearable human interaction devices are technologies with various applications for improving human comfort, convenience and security and for monitoring health conditions. Healthcare monitoring includes caring for the welfare of every person, which includes early diagnosis of diseases, real-time monitoring of the effects of treatment, therapy, and the general monitoring of the conditions of people’s health. As a result, wearable electronic devices are receiving greater attention because of their facile interaction with the human body, such as monitoring heart rate, wrist pulse, motion, blood pressure, intraocular pressure, and other health-related conditions. In this paper, various smart sensors and wireless systems are reviewed, the current state of research related to such systems is reported, and their detection mechanisms are compared. Our focus was limited to wearable and attachable sensors. Section 1 presents the various smart sensors. In Section 2, we describe multiplexed sensors that can monitor several physiological signals simultaneously. Section 3 provides a discussion about short-range wireless systems including bluetooth, near field communication (NFC), and resonance antenna systems for wearable electronic devices.
Noise disturbance in open-plan study environments: a field study on noise sources, student tasks and room acoustic parameters.
The aim of this study is to gain more insight in the assessment of noise in open-plan study environments and to reveal correlations between noise disturbance experienced by students and the noise sources they perceive, the tasks they perform and the acoustic parameters of the open-plan study environment they work in. Data were collected in five open-plan study environments at universities in the Netherlands. A questionnaire was used to investigate student tasks, perceived sound sources and their perceived disturbance, and sound measurements were performed to determine the room acoustic parameters. This study shows that 38% of the surveyed students are disturbed by background noise in an open-plan study environment. Students are mostly disturbed by speech when performing complex cognitive tasks like studying for an exam, reading and writing. Significant but weak correlations were found between the room acoustic parameters and noise disturbance of students. Practitioner Summary: A field study was conducted to gain more insight in the assessment of noise in open-plan study environments at universities in the Netherlands. More than one third of the students was disturbed by noise. An interaction effect was found for task type, source type and room acoustic parameters.
Co-Development of Diagnostic Vectors to Support Targeted Therapies and Theranostics: Essential Tools in Personalized Cancer Therapy
Novel technologies are being developed to improve patient therapy through the identification of targets and surrogate molecular signatures that can help direct appropriate treatment regimens for efficacy and drug safety. This is particularly the case in oncology whereby patient tumor and biofluids are routinely isolated and analyzed for genetic, immunohistochemical, and/or soluble markers to determine if a predictive biomarker signature (i.e., mutated gene product, differentially expressed protein, altered cell surface antigen, etc.) exists as a means for selecting optimal treatment. These biomarkers may be drug-specific targets and/or differentially expressed nucleic acids, proteins, or cell lineage profiles that can directly affect the patient's disease tissue or immune response to a therapeutic regimen. Improvements in diagnostics that can prescreen predictive response biomarker profiles will continue to optimize the ability to enhance patient therapy via molecularly defined disease-specific treatment. Conversely, patients lacking predictive response biomarkers will no longer needlessly be exposed to drugs that are unlikely to provide clinical benefit, thereby enabling patients to pursue other therapeutic options and lowering overall healthcare costs by avoiding futile treatment. While patient molecular profiling offers a powerful tool to direct treatment options, the difficulty in identifying disease-specific targets or predictive biomarker signatures that stratify a significant fraction within a disease indication remains challenging. A goal for drug developers is to identify and implement new strategies that can rapidly enable the development of beneficial disease-specific therapies for broad patient-specific targeting without the need of tedious predictive biomarker discovery and validation efforts, currently a bottleneck for development timelines. Successful strategies may gain an advantage by employing repurposed, less-expensive existing agents while potentially improving the therapeutic activity of novel, target-specific therapies that may otherwise have off-target toxicities or less efficacy in cells exhibiting certain pathways. Here, we discuss the use of co-developing diagnostic-targeting vectors to identify patients whose malignant tissue can specifically uptake a targeted anti-cancer drug vector prior to treatment. Using this system, a patient can be predetermined in real-time as to whether or not their tumor(s) can specifically uptake a drug-linked diagnostic vector, thus inferring the uptake of a similar vector linked to an anti-cancer agent. If tumor-specific uptake is observed, then the patient may be suitable for drug-linked vector therapy and have a higher likelihood of clinical benefit while patients with no tumor uptake should consider other therapeutic options. This approach offers complementary opportunities to rapidly develop broad tumor-specific agents for use in personalized medicine.
Real-time bidding algorithms for performance-based display ad allocation
We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions.
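A sketch of the control-theoretic flavor of bid adjustment mentioned above: a campaign-level multiplier is nudged up or down depending on how actual budget consumption compares with a linear pacing target, and value-based bids are scaled by it. The proportional controller, gain, and linear target are illustrative choices, not the paper's exact algorithm.

```python
def adjust_multiplier(multiplier, spent, budget, elapsed, horizon, gain=0.5):
    """Proportional control on the pacing multiplier: under-delivery raises it,
    over-delivery lowers it."""
    target_spend = budget * (elapsed / horizon)     # ideal spend so far
    error = (target_spend - spent) / budget         # > 0 means under-delivering
    return max(0.0, multiplier * (1.0 + gain * error))

def bid(impression_value, multiplier):
    """Value-based bid scaled by the current pacing multiplier."""
    return impression_value * multiplier

m = 1.0
m = adjust_multiplier(m, spent=120.0, budget=1000.0, elapsed=3.0, horizon=24.0)
print(bid(impression_value=0.8, multiplier=m))      # slightly raised bid
```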
Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders
RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical foundations have remained unclear. In this work we make progress towards that by giving proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives and we give bounds on the running time. We then design experiments to compare the performances of RMSProp and ADAM against Nesterov Accelerated Gradient method on a variety of autoencoder setups. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter β1. We show that in terms of getting lower training and test losses, at very high values of the momentum parameter (β1 = 0.99) (and large enough nets if using mini-batches) ADAM outperforms NAG at any momentum value tried for the latter. On the other hand, NAG can sometimes do better when ADAM’s β1 is set to the most commonly used value: β1 = 0.9. We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms and finding weights which increase the minimum eigenvalue of the Hessian of the loss function.
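For reference, the ADAM update analyzed above, in its standard form (g_t is the stochastic gradient at step t, α the step size, and β1 the momentum parameter varied in the experiments; RMSProp corresponds roughly to β1 = 0 without the bias corrections):

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, &
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{2},\\
\hat{m}_t &= \frac{m_t}{1-\beta_1^{t}}, &
\hat{v}_t &= \frac{v_t}{1-\beta_2^{t}}, \qquad
x_{t+1} = x_t - \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}.
\end{aligned}
```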
Value Creation in Cryptocurrency Networks: Towards A Taxonomy of Digital Business Models for Bitcoin Companies
Cryptocurrency networks have given birth to a diversity of start-ups and attracted a huge influx of venture capital to invest in these start-ups for creating and capturing value within and between such networks. Synthesizing the strategic management and information systems (IS) literature, this study advances a unified theoretical framework for identifying and investigating how cryptocurrency companies configure value through digital business models. This framework is then employed, via multiple case studies, to examine the digital business models of companies within the bitcoin network. Findings suggest that companies within the bitcoin network exhibit six generic digital business models. These six digital business models are in turn driven by three modes of value configurations, each with its own distinct logic for value creation and mechanisms for value capturing. A key finding of this study is that value-chain and value-network driven business models commercialize their products and services for each value unit transfer, whereas commercialization for value-shop driven business models is realized through the subsidization of direct users by revenue-generating entities. This study contributes to the extant literature on value configurations and digital business models within the emerging and increasingly pervasive domain of cryptocurrency networks.
Fail-Stop Failure Algorithm-Based Fault Tolerance for Cholesky Decomposition
Cholesky decomposition is a widely used algorithm to solve linear equations with symmetric and positive definite coefficient matrix. With large matrices, this often will be performed on high performance supercomputers with a large number of processors. Assuming a constant failure rate per processor, the probability of a failure occurring during the execution increases linearly with additional processors. Fault tolerant methods attempt to reduce the expected execution time by allowing recovery from failure. This paper presents an analysis and implementation of a fault tolerant Cholesky factorization algorithm that does not require checkpointing for recovery from fail-stop failures. Rather, this algorithm uses redundant data added in an additional set of processes. This differs from previous works with algorithmic methods as it addresses fail-stop failures rather than fail-continue cases. The proposed fault tolerance scheme is incorporated into ScaLAPACK and validated on the supercomputer Kraken. Experimental results demonstrate that this method has decreasing overhead in relation to overall runtime as the matrix size increases, and thus shows promise to reduce the expected runtime for Cholesky factorizations on very large matrices.
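The checksum idea behind such algorithm-based fault tolerance can be illustrated with a toy NumPy sketch: redundant checksum data make a lost block of the matrix recoverable without a checkpoint. This only shows recovery of raw data under a fail-stop loss; maintaining the checksum relationship through the factorization steps is the substance of the paper and is not shown.

```python
import numpy as np

def add_checksum_row(A):
    """Append a checksum row (column sums) -- simplified stand-in for the
    redundant data kept on an extra set of processes."""
    return np.vstack([A, A.sum(axis=0)])

def recover_row(Ac, lost_row):
    """Fail-stop recovery: rebuild the lost row from the checksum row and the
    surviving rows (with fail-stop failures, we know which row was lost)."""
    data, checksum = Ac[:-1], Ac[-1]
    surviving = np.delete(data, lost_row, axis=0)
    return checksum - surviving.sum(axis=0)

A = np.random.rand(4, 4)
Ac = add_checksum_row(A)
Ac[2] = np.nan                                  # simulate the failed process's data
print(np.allclose(recover_row(Ac, 2), A[2]))    # True: row 2 reconstructed
```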
Activation of D2-like dopamine receptors reduces synaptic inputs to striatal cholinergic interneurons.
Dopamine (DA) plays a crucial role in the modulation of striatal function. Striatal cholinergic interneurons represent an important synaptic target of dopaminergic fibers arising from the substantia nigra and cortical glutamatergic inputs. By means of an electrophysiological approach from corticostriatal slices, we isolated three distinct synaptic inputs to cholinergic interneurons: glutamate-mediated EPSPs, GABAA-mediated potentials, and Acetylcholine (ACh)-mediated IPSPs. We therefore explored whether DA controls the striatal cholinergic activity through the modulation of these synaptic potentials. We found that SKF38393, a D1-like receptor agonist, induced a membrane depolarization (also see Aosaki et al., 1998) but had no effects on glutamatergic, GABAergic, and cholinergic synaptic potentials. Conversely, D2-like DA receptor activation by quinpirole inhibited both GABAA and cholinergic synaptic potentials. These effects of quinpirole were mimicked by omega-conotoxin GVIA, blocker of N-type calcium channels. The lack of effect both on the intrinsic membrane properties and on exogenously applied GABA and ACh by quinpirole supports a presynaptic site of action for the D2-like receptor-mediated inhibition. Moreover, the quinpirole-induced decrease in amplitude was accompanied by an increase in paired pulse facilitation ratio (EPSP2/EPSP1), an index of a decrease in transmitter release. Our findings demonstrate that DA modulates the excitability of cholinergic interneurons through either an excitatory D1-like-mediated postsynaptic mechanism or a presynaptic inhibition of the GABAergic and cholinergic inhibitory synaptic potentials.
The Three Cycle View of Design Science
As a commentary to Juhani Iivari’s insightful essay, I briefly analyze design science research as an embodiment of three closely related cycles of activities. The Relevance Cycle inputs requirements from the contextual environment into the research and introduces the research artifacts into environmental field testing. The Rigor Cycle provides grounding theories and methods along with domain experience and expertise from the foundations knowledge base into the research and adds the new knowledge generated by the research to the growing knowledge base. The central Design Cycle supports a tighter loop of research activity for the construction and evaluation of design artifacts and processes. The recognition of these three cycles in a research project clearly positions and differentiates design science from other research paradigms. The commentary concludes with a claim to the pragmatic nature of design science.
Positional information and the spatial pattern of cellular differentiation.
The problem of pattern is considered in terms of how genetic information can be translated in a reliable manner to give specific and different spatial patterns of cellular differentiation. Pattern formation thus differs from molecular differentiation which is mainly concerned with the control of synthesis of specific macromolecules within cells rather than the spatial arrangement of the cells. It is suggested that there may be a universal mechanism whereby the translation of genetic information into spatial patterns of differentiation is achieved. The basis of this is a mechanism whereby the cells in a developing system may have their position specified with respect to one or more points in the system. This specification of position is positional information. Cells which have their positional information specified with respect to the same set of points constitute a field. Positional information largely determines with respect to the cells' genome and developmental history the nature of its molecular differentiation. The specification of positional information in general precedes and is independent of molecular differentiation. The concept of positional information implies a co-ordinate system and polarity is defined as the direction in which positional information is specified or measured. Rules for the specification of positional information and polarity are discussed. Pattern regulation, which is the ability of the system to form the pattern even when parts are removed, or added, and to show size invariance as in the French Flag problem, is largely dependent on the ability of the cells to change their positional information and interpret this change. These concepts are applied in some detail to early sea urchin development, hydroid regeneration, pattern formation in the insect epidermis, and the development of the chick limb. It is concluded that these concepts provide a unifying framework within which a wide variety of patterns formed from fields may be discussed, and give new meaning to classical concepts such as induction, dominance and field. The concepts direct attention towards finding mechanisms whereby position and polarity are specified, and the nature of reference points and boundaries. More specifically, it is suggested that the mechanism is required to specify the position of about
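The threshold interpretation of positional information, as in the French Flag problem mentioned above, can be expressed as a toy calculation: each cell reads a graded signal that encodes its position and compares it with thresholds to choose a fate. The linear gradient and the two thresholds below are illustrative assumptions only.

```python
def french_flag(concentrations, t_blue=2.0, t_white=1.0):
    """Interpret positional information: each cell compares its local morphogen
    concentration with two thresholds and differentiates accordingly."""
    fates = []
    for c in concentrations:
        if c >= t_blue:
            fates.append("blue")
        elif c >= t_white:
            fates.append("white")
        else:
            fates.append("red")
    return fates

gradient = [3.0 - 0.3 * i for i in range(10)]   # concentration falls with distance
print(french_flag(gradient))
# ['blue', 'blue', 'blue', 'blue', 'white', 'white', 'white', 'red', 'red', 'red']
```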
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
Image retargeting based on self-learning 3D saliency for content-aware data analysis
Image retargeting is a process that changes the resolution of an image while preserving interesting regions and avoiding obvious visual distortion. In other words, it focuses on image content above all else, filtering out the useful information for data analysis. Existing approaches may encounter difficulties on various types of images because most of them consider only 2D features, which are sensitive to the complexity of image content. Researchers are now turning to RGB-D information, hoping that depth can improve accuracy. However, RGB-D images are not easy to obtain everywhere, and how best to utilize depth information is still at the exploration stage. In this paper, instead of using RGB-D data captured by a 3D camera, we employ an iterative MRF learning model to predict depth information from a single still image. We then propose a self-learning 3D saliency model based on this RGB-D data and apply it within the seam carving framework. In seam carving, the self-learning 3D saliency is combined with the L1-norm of the gradient for better seam searching. Experimental results demonstrate the advantages of our method using RGB-D data in the seam carving framework.
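As a rough illustration of the energy combination described above, a minimal numpy sketch of blending a saliency map with an L1-norm gradient term might look like the following; the weighting parameter `alpha` and the function name are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def seam_energy(gray, saliency, alpha=0.5):
    """Combine an L1 gradient magnitude with a (possibly depth-aware) saliency map.

    gray     : 2-D float array, grayscale image
    saliency : 2-D float array in [0, 1], e.g. a learned 3D saliency map
    alpha    : illustrative weighting between the two terms (assumption)
    """
    # L1 norm of the image gradient
    dy, dx = np.gradient(gray)
    grad_l1 = np.abs(dx) + np.abs(dy)
    grad_l1 /= grad_l1.max() + 1e-8          # normalise to [0, 1]
    # high-energy pixels are the ones a seam-carving step tries to protect
    return alpha * saliency + (1.0 - alpha) * grad_l1

# usage: feed the resulting energy map to any standard seam-carving implementation
# energy = seam_energy(gray_image, predicted_3d_saliency)
```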
Unsteady convective boundary layer flow of a viscous fluid at a vertical surface with variable fluid properties
In this paper we present numerical solutions to the unsteady convective boundary layer flow of a viscous fluid at a vertical stretching surface with variable transport properties and thermal radiation. Both assisting and opposing buoyant flow situations are considered. Using a similarity transformation, the governing time-dependent partial differential equations are first transformed into coupled, non-linear ordinary differential equations with variable coefficients. Numerical solutions to these equations subject to appropriate boundary conditions are obtained by a second order finite difference scheme known as the Keller-Box method. The numerical results thus obtained are analyzed for the effects of the pertinent parameters, namely the unsteady parameter, the free convection parameter, the suction/injection parameter, the Prandtl number, the thermal conductivity parameter and the thermal radiation parameter, on the flow and heat transfer characteristics. It is worth mentioning that the momentum and thermal boundary layer thicknesses decrease with an increase in the unsteady parameter. © 2012 Elsevier Ltd. All rights reserved.
Market Structure and Productivity: A Concrete Example
Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.
Family medicine residents' and community physicians' concerns about patient truthfulness.
PURPOSE To assess how often family physicians question patient truthfulness, what factors influence them to do so, and how often resident physicians experience such doubts as compared with senior physicians. METHOD In 1994-95, after half-day patient care sessions, 44 residents from the University of Kansas School of Medicine's three Wichita family practice residency programs and nine community family physicians associated with the programs recorded their impressions of each patient's truthfulness, what issues prompted concern about patient truthfulness, and their feelings about each encounter. RESULTS The residents doubted patients in 54 of 277 encounters (19.5%); the senior physicians doubted patients in 16 of 183 encounters (8.7%) (p = .003). Both groups had more negative than positive emotions toward such encounters, with no significant difference in feelings. The demographics of the resident and senior physician populations differed greatly. CONCLUSION Although preliminary, the present study suggests that family physicians question patient truthfulness fairly often, resident physicians more than senior physicians, and that these physicians have some negative feelings toward such situations. Because such feelings may contribute to inadequate patient care, the authors recommend that further research is warranted to understand contributing factors and to guide the development of resident and student education programs in this neglected area of the doctor-patient relationship.
Multi-label classification by mining label and instance correlations from heterogeneous information networks
Multi-label classification is prevalent in many real-world applications, where each example can be associated with a set of multiple labels simultaneously. The key challenge of multi-label classification comes from the large space of all possible label sets, which is exponential to the number of candidate labels. Most previous work focuses on exploiting correlations among different labels to facilitate the learning process. It is usually assumed that the label correlations are given beforehand or can be derived directly from data samples by counting their label co-occurrences. However, in many real-world multi-label classification tasks, the label correlations are not given and can be hard to learn directly from data samples within a moderate-sized training set. Heterogeneous information networks can provide abundant knowledge about relationships among different types of entities including data samples and class labels. In this paper, we propose to use heterogeneous information networks to facilitate the multi-label classification process. By mining the linkage structure of heterogeneous information networks, multiple types of relationships among different class labels and data samples can be extracted. Then we can use these relationships to effectively infer the correlations among different class labels in general, as well as the dependencies among the label sets of data examples inter-connected in the network. Empirical studies on real-world tasks demonstrate that the performance of multi-label classification can be effectively boosted using heterogeneous information networks.
Effects of esomeprazole treatment for gastroesophageal reflux disease on quality of life in 12- to 17-year-old adolescents: an international health outcomes study
BACKGROUND Although gastroesophageal reflux disease (GERD) is common in adolescents, the burden of GERD on health-related quality of life (HRQOL) in adolescents has not been previously evaluated. Therefore, the objective of the study was to examine the effect of GERD on HRQOL in adolescents. METHODS This international, 31-site, 8-week safety study randomized adolescents, aged 12 to 17 years inclusive, with GERD to receive esomeprazole 20 or 40 mg once daily. The Quality of Life in Reflux and Dyspepsia questionnaire (QOLRAD), previously validated in adults, consists of 25 questions grouped into 5 domains: emotional distress, sleep disturbance, food/drink problems, physical/social functioning, and vitality. The QOLRAD was administered at the baseline and week-8 (final) visits. RESULTS Of the 149 patients randomized, 134 completed the QOLRAD at baseline and final visits and were eligible for analysis of their HRQOL data. Baseline QOLRAD scores indicated GERD had a negative effect on the HRQOL of these adolescents, especially in the domains of vitality and emotional distress, and problems with food/drink. At the final visit, mean scores for all 5 QOLRAD domains improved significantly (P < .0001); change of scores (ie, delta) for all domains met or exceeded the adult QOLRAD minimal clinically significant difference standard of 0.5 units. CONCLUSION GERD had a negative effect on QOL in adolescents. After esomeprazole treatment, statistically and clinically significant improvements occurred in all domains of the QOLRAD for these adolescents. TRIAL REGISTRATION D9614C00098; ClinicalTrials.gov Identifier NCT00241501.
Is cost-effectiveness analysis preferred to severity of disease as the main guiding principle in priority setting in resource poor settings? The case of Uganda
INTRODUCTION: Several studies carried out to establish the relative preference of cost-effectiveness of interventions and severity of disease as criteria for priority setting in health have shown a strong preference for severity of disease. These preferences may differ in contexts of resource scarcity, as in developing countries, yet information is limited on such preferences in this context. OBJECTIVE: This study was carried out to identify the key players in priority setting in health and explore their relative preference regarding cost-effectiveness of interventions and severity of disease as criteria for setting priorities in Uganda. DESIGN: 610 self-administered questionnaires were sent to respondents at national, district, health sub-district and facility levels. Respondents included mainly health workers. We used three different simulations, assuming same patient characteristics and same treatment outcome but with varying either severity of disease or cost-effectiveness of treatment, to explore respondents' preferences regarding cost-effectiveness and severity. RESULTS: Actual main actors were identified to be health workers, development partners or donors and politicians. This was different from what respondents perceived as ideal. Above 90% of the respondents recognised the importance of both severity of disease and cost-effectiveness of intervention. In the three scenarios where they were made to choose between the two, a majority of the survey respondents assigned highest weight to treating the most severely ill patient with a less cost-effective intervention compared to the one with a more cost-effective intervention for a less severely ill patient. However, international development partners in in-depth interviews preferred the consideration of cost-effectiveness of intervention. CONCLUSIONS: In a survey among health workers and other actors in priority setting in Uganda, we found that donors are considered to have more say than the survey respondents found ideal. Survey respondents considered both severity of disease and cost-effectiveness important criteria for setting priorities, with severity of disease as the leading principle. This pattern of preferences is similar to findings in context with relatively more resources. In-depth interviews with international development partners, showed that this group put relatively more emphasis on cost-effectiveness of interventions compared to severity of disease. These discrepancies in attitudes between national health workers and representatives from the donors require more investigation. The different attitudes should be openly debated to ensure legitimate decisions.
Perception driven texture generation
This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation focused on either synthesis from examples or generation from procedural models. Generating textures from perceptual attributes has not been extensively studied yet. Meanwhile, perceptual attributes, such as directionality, regularity and roughness are important factors for human observers to describe a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, while only random noise and user-defined perceptual attributes are required as input. In this model, a preliminary trained convolutional neural network is essentially integrated with the adversarial framework, which can drive the generated textures to possess given perceptual attributes. An important aspect of the proposed model is that, if we change one of the input perceptual features, the corresponding appearance of the generated textures will also be changed. We designed several experiments to validate the effectiveness of the proposed method. The results show that the proposed method can produce high-quality texture images with desired perceptual properties.
R&D, innovation, and growth: evidence from four manufacturing sectors in OECD countries
This paper provides an empirical analysis of the relationship between R&D intensity, rate of innovation and the growth rate of output in four manufacturing sectors from 17 OECD countries. The findings suggest that the knowledge stock is the main determinant of innovation in all four manufacturing sectors and that R&D intensity increases innovation in the chemicals and the electrical and electronics sector. In addition, the rate of innovation has a positive effect on the growth rate of output in all sectors except for the drugs and medical sector. These results lend strong support for the non-scale endogenous growth models.
MODIFIED WAVENUMBER DOMAIN ALGORITHM FOR THREE-DIMENSIONAL MILLIMETER-WAVE IMAGING
Millimeter-wave (MMW) imaging techniques have been used for the detection of concealed weapons and contraband carried on personnel at airports and other secure locations. The combination of frequency-modulated continuous-wave (FMCW) technology and MMW imaging techniques should lead to compact, light-weight, and low-cost systems which are especially suitable for security and detection applications. However, the long signal duration time leads to the failure of the conventional stop-and-go approximation used in pulsed systems. Therefore, the motion within the signal duration time needs to be taken into account. An analytical three-dimensional (3-D) backscattered signal model, which does not rely on the stop-and-go approximation, is developed in this paper. Then, a wavenumber domain algorithm with motion compensation is presented. In addition, conventional wavenumber domain methods use Stolt interpolation to obtain uniform wavenumber samples and compute the fast Fourier transform (FFT). This paper uses the 3-D nonuniform fast Fourier transform (NUFFT) instead of the Stolt interpolation and FFT. The NUFFT-based method is much faster than the Stolt interpolation-based method. Finally, point target simulations are performed to verify the algorithm.
The Birth of Prolog
The programming language, Prolog, was born of a project aimed not at producing a programming language but at processing natural languages; in this case, French. The project gave rise to a preliminary version of Prolog at the end of 1971 and a more definitive version at the end of 1972. This article gives the history of this project and describes in detail the preliminary and then the final versions of Prolog. The authors also felt it appropriate to describe the Q-systems since it was a language which played a prominent part in Prolog's genesis.
Improved fibrosis staging by elastometry and blood test in chronic hepatitis C.
AIMS Our main objective was to improve non-invasive fibrosis staging accuracy by resolving the limits of previous methods via new test combinations. Our secondary objectives were to improve staging precision, by developing a detailed fibrosis classification, and reliability (personalized accuracy) determination. METHODS All patients (729) included in the derivation population had chronic hepatitis C, liver biopsy, 6 blood tests and Fibroscan. Validation populations included 1584 patients. RESULTS The most accurate combination was provided by using most markers of FibroMeter and Fibroscan results targeted for significant fibrosis, i.e. 'E-FibroMeter'. Its classification accuracy (91.7%) and precision (assessed by F difference with Metavir: 0.62 ± 0.57) were better than those of FibroMeter (84.1%, P < 0.001; 0.72 ± 0.57, P < 0.001), Fibroscan (88.2%, P = 0.011; 0.68 ± 0.57, P = 0.020), and a previous CSF-SF classification of FibroMeter + Fibroscan (86.7%, P < 0.001; 0.65 ± 0.57, P = 0.044). The accuracy for fibrosis absence (F0) was increased, e.g. from 16.0% with Fibroscan to 75.0% with E-FibroMeter (P < 0.001). Cirrhosis sensitivity was improved, e.g. E-FibroMeter: 92.7% vs. Fibroscan: 83.3%, P = 0.004. The combination improved reliability by deleting unreliable results (accuracy <50%) observed with a single test (1.2% of patients) and increasing optimal reliability (accuracy ≥85%) from 80.4% of patients with Fibroscan (accuracy: 90.9%) to 94.2% of patients with E-FibroMeter (accuracy: 92.9%), P < 0.001. The patient rate with 100% predictive values for cirrhosis by the best combination was twice (36.2%) that of the best single test (FibroMeter: 16.2%, P < 0.001). CONCLUSION The new test combination increased: accuracy, globally and especially in patients without fibrosis, staging precision, cirrhosis prediction, and even reliability, thus offering improved fibrosis staging.
Real-time wireless vibration monitoring for operational modal analysis of an integral abutment highway bridge
Remote structural health monitoring systems employing a sensor-based quantitative assessment of in-service demands and structural condition are perceived as the future in long-term bridge management programs. However, the data analysis techniques and, in particular, the technology conceived years ago that are necessary for accurately and efficiently extracting condition assessment measures from highway infrastructure have just recently begun maturation. In this study, a large-scale wireless sensor network is deployed for ambient vibration testing of a single span integral abutment bridge to derive in-service modal parameters. Dynamic behavior of the structure from ambient and traffic loads was measured with accelerometers for experimental determination of the natural frequencies, damping ratios, and mode shapes of the bridge. Real-time data collection from a 40-channel single network operating with a sampling rate of 128Hz per sensor was achieved with essentially lossless data transmission. Successful acquisition of high-rate, lossless data on the highway bridge validates the proprietary wireless network protocol within an actual service environment. Operational modal analysis is performed to demonstrate the capabilities of the acquisition hardware with additional correlation of the derived modal parameters to a Finite Element Analysis of a model developed using as-built drawings to check plausibility of the mode shapes. Results from this testing demonstrate that wireless sensor technology has matured to the degree that modal analysis of large civil structures with a distributed network is a currently feasible and a comparable alternative to cable-based measurement approaches.
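For readers unfamiliar with operational modal analysis, a heavily simplified peak-picking estimate of natural frequencies from one channel of ambient acceleration data is sketched below; the segment length and peak-prominence threshold are illustrative assumptions, and the study's actual identification procedure is considerably more sophisticated.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def natural_frequencies(accel, fs=128.0, prominence=5.0):
    """Rough peak-picking of candidate modal frequencies from one acceleration channel.

    accel : 1-D array of ambient acceleration samples
    fs    : sampling rate in Hz (128 Hz per sensor, as reported in the abstract)
    """
    # Welch power spectral density of the ambient response
    freqs, psd = welch(accel, fs=fs, nperseg=4096)
    # peaks of the PSD (in dB) are candidate natural frequencies
    peaks, _ = find_peaks(10 * np.log10(psd), prominence=prominence)
    return freqs[peaks]

# usage with a synthetic signal (a 3.2 Hz "mode" buried in noise):
# t = np.arange(0, 600, 1 / 128.0)
# accel = np.sin(2 * np.pi * 3.2 * t) + 0.5 * np.random.randn(t.size)
# print(natural_frequencies(accel))
```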
The influence of business strategy on project portfolio management and its success — A conceptual framework
Firms are facing more difficulties with the implementation of strategies than with its formulation. Therefore, this paper examines the linkage between business strategy, project portfolio management, and business success to close the gap between strategy formulation and implementation. Earlier research has found some supporting evidence of a positive relationship between isolated concepts, but so far there is no coherent and integral framework covering the whole cycle from strategy to success. Therefore, the existing research on project portfolio management is extended by the concept of strategic orientation. Based on a literature review, a comprehensive conceptual model considering strategic orientation, project portfolio structuring, project portfolio success, and business success is developed. This model can be used for future empirical research on the influence of strategy on project portfolio management and its success. Furthermore, it can easily be extended e.g. by contextual factors. © 2010 Elsevier Ltd. and IPMA. All rights reserved.
SONDY: an open source platform for social dynamics mining and analysis
This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.
SemEval-2018 Task 1: Affect in Tweets
We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.
Vision-Based Fallen Person Detection for the Elderly
Falls are serious and costly for elderly people. The US Centers for Disease Control and Prevention reports that millions of people aged 65 and older fall at least once each year. About 20% of falls cause serious injuries such as hip fractures, broken bones, or head injuries. The time it takes to respond to and treat a fallen person is crucial. In this paper we present a new, non-invasive system for detecting fallen people. Our approach uses only stereo camera data to passively sense the environment. The key novelty is a human fall detector which uses a CNN-based human pose estimator in combination with stereo data to reconstruct the human pose in 3D and estimate the ground plane in 3D. Furthermore, our system contains a reasoning module which formulates a number of measures to decide whether a person has fallen. We have tested our approach in different scenarios covering most activities elderly people might encounter when living at home. Based on our extensive evaluations, our system shows high accuracy and almost no misclassifications. To reproduce our results, the implementation is publicly available to the scientific community.
Treating PTSD in refugees and asylum seekers within the general health care system. A randomized controlled multicenter study.
OBJECTIVE There has been uncertainty about whether refugees and asylum seekers with PTSD can be treated effectively in standard psychiatric settings in industrialized countries. In this study, Narrative Exposure Therapy (NET) was compared to Treatment As Usual (TAU) in 11 general psychiatric health care units in Norway. The focus was on changes in symptom severity and in the diagnostic status for PTSD and depression. METHOD Refugees and asylum seekers fulfilling the DSM-IV criteria for PTSD (N = 81) were randomized with an a-priori probability of 2:1 to either NET (N = 51) or TAU (N = 30). The patients were assessed with Clinician Administered PTSD Scale, Hamilton rating scale for depression and the MINI Neuropsychiatric Interview before treatment, and again at one and six months after the completion. RESULTS Both NET and TAU gave clinically relevant symptom reduction both in PTSD and in depression. NET gave significantly more symptom reduction compared to TAU as well as significantly more reduction in participants with PTSD diagnoses. No difference in treatment efficacy was found between refugees and asylum seekers. CONCLUSIONS The study indicated that refugees and asylum seekers can be treated successfully for PTSD and depression in the general psychiatric health care system; NET appeared to be a promising treatment for both groups.
Review of "Reasoning about Uncertainty by Joseph Y. Halpern." The MIT Press, 2003
Dealing with uncertain and imprecise information has been one of the major issues in almost all intelligent systems. Uncertainty is a fundamental and unavoidable feature of our daily life. It arises because agents almost never have access to the whole truth about their environment. It can also occur when the agents have incomplete and/or incorrect information about the properties of their environment.
Toward an Ecological Theory of Adaptation and Aging
The environmental docility hypothesis suggests that environmental stimuli ("press", in Murray's terms) have a greater demand quality as the competence of the individual decreases. The dynamics of ecological transactions are considered as a function of personal competence, strength of environmental press, the dual characteristics of the individual's response (affective quality and adaptiveness of behavior), adaptation level, and the optimization function. Behavioral homeostasis is maintained by the individual as both respondent and initiator in interaction with his environment. Hypotheses are suggested to account for striving vs. relaxation and for changes in the individual's level of personal competence. Four transactional types discussed are environmental engineering, rehabilitation and therapy, individual growth, and active change of the environment. Recent work in the psychology of stimulation (1) has led to theoretical advances in the area of social ecology. We propose an elaboration in this area that is middle-range, in the sense of attempting to account for a limited aspect of human behavior. This contribution to the theory of man-environment relationships deals with the aspects of human responses that can be viewed in evaluative terms, that is, behavior that can be rated on the continuum of adaptiveness, and inner states that can be rated on the continuum of positive to negative. This is, perhaps, a limited view of the human response repertory, but it stems from the traditional concern of the psychologist with mental health and mental illness. Similarly, our view of environment for this purpose is limited to the "demand quality" of the environment, an abstraction that represents only one of many ways of dimensionalizing the environment. We shall use our knowledge from the area of gerontology to provide content for the theoretical structure, but suggest that the constructs are more generally applicable to any area involving the understanding of mental or social pathology [see Lawton and Nahemow (2) for a more complete discussion]. One way to begin is to look at the old ecological equation B = f(P, E), to acknowledge its veracity and familiarity, but linger on a few of its implications: 1. All behavior is transactional, that is, not explainable solely on the basis of knowledge about either the person behaving or the environment in which it occurs.
Sound and precise analysis of web applications for injection vulnerabilities
Web applications are popular targets of security attacks. One common type of such attacks is SQL injection, where an attacker exploits faulty application code to execute maliciously crafted database queries. Both static and dynamic approaches have been proposed to detect or prevent SQL injections; while dynamic approaches provide protection for deployed software, static approaches can detect potential vulnerabilities before software deployment. Previous static approaches are mostly based on tainted information flow tracking and have at least some of the following limitations: (1) they do not model the precise semantics of input sanitization routines; (2) they require manually written specifications, either for each query or for bug patterns; or (3) they are not fully automated and may require user intervention at various points in the analysis. In this paper, we address these limitations by proposing a precise, sound, and fully automated analysis technique for SQL injection. Our technique avoids the need for specifications by considering as attacks those queries for which user input changes the intended syntactic structure of the generated query. It checks conformance to this policy by conservatively characterizing the values a string variable may assume with a context free grammar, tracking the nonterminals that represent user-modifiable data, and modeling string operations precisely as language transducers. We have implemented the proposed technique for PHP, the most widely-used web scripting language. Our tool successfully discovered previously unknown and sometimes subtle vulnerabilities in real-world programs, has a low false positive rate, and scales to large programs (with approx. 100K loc).
Evolution of a subsumption architecture that performs a wall following task for an autonomous mobile robot
The goal in automatic programming is to get a computer to perform a task by telling it what needs to be done, rather than by explicitly programming it. This paper considers the task of automatically generating a computer program to enable an autonomous mobile robot to perform the task of following the wall of an irregular shaped room. A human programmer has written such a program in the style of the subsumption architecture. The solution produced by genetic programming emerges as a result of Darwinian natural selection and genetic crossover (sexual recombination) in a population of computer programs. This evolutionary process is driven by a fitness measure which communicates the nature of the task to the computer.
Mutation of the conserved calcium-binding motif in Neisseria gonorrhoeae PilC1 impacts adhesion but not piliation.
Neisseria gonorrhoeae PilC1 is a member of the PilC family of type IV pilus-associated adhesins found in Neisseria species and other type IV pilus-producing genera. Previously, a calcium-binding domain was described in the C-terminal domains of PilY1 of Pseudomonas aeruginosa and in PilC1 and PilC2 of Kingella kingae. Genetic analysis of N. gonorrhoeae revealed a similar calcium-binding motif in PilC1. To evaluate the potential significance of this calcium-binding region in N. gonorrhoeae, we produced recombinant full-length PilC1 and a PilC1 C-terminal domain fragment. We show that, while alterations of the calcium-binding motif disrupted the ability of PilC1 to bind calcium, they did not grossly affect the secondary structure of the protein. Furthermore, we demonstrate that both full-length wild-type PilC1 and full-length calcium-binding-deficient PilC1 inhibited gonococcal adherence to cultured human cervical epithelial cells, unlike the truncated PilC1 C-terminal domain. Similar to PilC1 in K. kingae, but in contrast to the calcium-binding mutant of P. aeruginosa PilY1, an equivalent mutation in N. gonorrhoeae PilC1 produced normal amounts of pili. However, the N. gonorrhoeae PilC1 calcium-binding mutant still had partial defects in gonococcal adhesion to ME180 cells and genetic transformation, which are both essential virulence factors in this human pathogen. Thus, we conclude that calcium binding to PilC1 plays a critical role in pilus function in N. gonorrhoeae.
Knowledge Based Image Enhancement Using Neural Networks
In this paper we combine the concept of adaptive filters with neural networks in order to be able to include high level knowledge about the contents of the image in the filtering process. Adaptive image enhancement algorithms often utilize low level knowledge, such as gradient information, to guide filtering parameters. The advantage is that these filters do not need any specific knowledge and can thus be applied to a broad spectrum of images. However, for many problems this low level information is not sufficient to achieve good results. For example, in medical imaging it is often very important that some features are preserved while others are suppressed. Usually these features cannot be distinguished by low level information. Therefore we propose a method to incorporate high level knowledge in the filtering process in order to adjust the parameters of any given filter, thus creating a guided filter. We present a scheme for acquiring this high level knowledge which allows us to apply our method to all kinds of images using pattern recognition and special preprocessing techniques. The design of the guided filter itself is easy, since the high level knowledge requires only some sample pixels, including their neighborhoods, and the desired parameters for those pixels.
Automatic Medical Image Classification and Abnormality Detection Using K-Nearest Neighbour
This research work presents a method for automatic classification of medical images into two classes, Normal and Abnormal, based on image features, together with automatic abnormality detection. Our proposed system consists of four phases: preprocessing, feature extraction, classification, and post-processing. A statistical texture feature set is derived from normal and abnormal images. We used the KNN classifier to classify images, and its performance was compared with that of kernel-based SVM classifiers (linear and RBF). The confusion matrix was computed, and the results show that KNN obtains an 80% classification rate, which is higher than the SVM classification rate, so we chose the KNN algorithm for image classification. If an image is classified as abnormal, a post-processing step is applied and the abnormal region is highlighted on the image. The system has been tested on a number of real CT brain scan images.
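A minimal sketch of the classification-and-comparison step described above, using scikit-learn; the feature matrix here is random placeholder data standing in for the statistical texture features, and the snippet does not reproduce the paper's 80% figure.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# X: statistical texture features per CT image, y: 0 = Normal, 1 = Abnormal
# (placeholder random data; in the paper these come from the feature-extraction phase)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM-linear", SVC(kernel="linear")),
                  ("SVM-RBF", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```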
Calorie restriction extends Saccharomyces cerevisiae lifespan by increasing respiration
Calorie restriction (CR) extends lifespan in a wide spectrum of organisms and is the only regimen known to lengthen the lifespan of mammals. We established a model of CR in budding yeast Saccharomyces cerevisiae. In this system, lifespan can be extended by limiting glucose or by reducing the activity of the glucose-sensing cyclic-AMP-dependent kinase (PKA). Lifespan extension in a mutant with reduced PKA activity requires Sir2 and NAD (nicotinamide adenine dinucleotide). In this study we explore how CR activates Sir2 to extend lifespan. Here we show that the shunting of carbon metabolism toward the mitochondrial tricarboxylic acid cycle and the concomitant increase in respiration play a central part in this process. We discuss how this metabolic strategy may apply to CR in animals.
Measurement of functional activities in older adults in the community.
Two measures of social function designed for community studies of normal aging and mild senile dementia were evaluated in 195 older adults who underwent neurological, cognitive, and affective assessment. An examining and a reviewing neurologist and a neurologically trained nurse independently rated each participant on a Scale of Functional Capacity. Interrater reliability was high (examining vs. reviewing neurologist, r = .97; examining neurologist vs. nurse, tau b = .802; p < .001 for both comparisons). Estimates correlated well with an established measure of social function and with results of cognitive tests. Alternate informants evaluated participants on the Functional Activities Questionnaire and the Instrumental Activities of Daily Living Scale. The Functional Activities Questionnaire was superior to the Instrumental Activities of Daily Living Scale scores. Used alone as a diagnostic tool, the Functional Activities Questionnaire was more sensitive than the Instrumental Activities of Daily Living Scale in distinguishing between normal and demented individuals.
Association Between a Serotonin Transporter Gene Variant and Hopelessness Among Men in the Heart and Soul Study
Hopelessness is associated with mortality in patients with cardiac disease even after accounting for severity of depression. We sought to determine whether a polymorphism in the promoter region of the serotonin transporter gene (5-HTTLPR) is associated with increased hopelessness, and whether this effect is modified by sex, age, antidepressant use or depression in patients with coronary heart disease. We conducted a cross-sectional study of 870 patients with stable coronary heart disease. Our primary outcomes were hopelessness score (range 0-8) and hopeless category (low, moderate and high) as measured by the Everson hopelessness scale. Analysis of covariance and ordinal logistic regression were used to examine the independent association of genotype with hopelessness. Compared to patients with the l/l genotype, adjusted odds of a higher hopeless category increased by 35% for the l/s genotype and 80% for the s/s genotype (p-value for trend = 0.004). Analysis of covariance demonstrated that the effect of 5-HTTLPR genotype on hopelessness was modified by sex (p = 0.04), but not by racial group (p = 0.63). Among men, the odds of a higher hopeless category increased by 40% for the l/s genotype and 2.3-fold for the s/s genotype (p < 0.001), compared to no effect in the smaller female sample (p = 0.42). Results stratified by race demonstrated a similar dose-response effect of the s allele on hopelessness across racial groups. We found that the 5-HTTLPR is independently associated with hopelessness among men with cardiovascular disease.
Collecting and analyzing qualitative data for system dynamics: methods and models
System dynamics depends heavily upon quantitative data to generate feedback models. Qualitative data and their analysis also have a central role to play at all levels of the modeling process. Although the classic literature on system dynamics strongly supports this argument, the protocols to incorporate this information during the modeling process are not detailed by the most influential authors. Data gathering techniques such as interviews and focus groups, and qualitative data analysis techniques such as grounded theory methodology and ethnographic decision models could have a strong, critical role in rigorous system dynamics efforts. This paper describes some of the main qualitative, social science techniques and explores their suitability in the different stages of the modeling process. Additionally, the authors argue that the techniques described in the paper could contribute to the understanding of the modeling process, facilitate
Continuous Head Movement Estimator for Driver Assistance: Issues, Algorithms, and On-Road Evaluations
Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, the head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamic analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features from spatially large head movements away from the frontal pose. Nonfrontal glances away from the road ahead, however, are of special interest since interesting events, which are critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to robustly and continuously operate even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate the head pose using a 3-D model. We present two such solutions that additionally exploit the constraints that are present in a driving context and video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study with different camera configurations. For experimental evaluations, we collected a novel head pose data set from naturalistic on-road driving in urban streets and freeways, with particular emphasis on events inducing spatially large head movements (e.g., merge and lane change). Our analyses show promising results.
A deep learning architecture for sentiment analysis
The impressive results of deep convolutional neural networks in computer vision and image analysis have recently attracted considerable attention from researchers in other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in the feature extraction and selection parts, which benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved 91.2% accuracy on the popular IMDB movie reviews dataset. NgramCNN is more accurate than similar shallow convolutional networks or deeper recurrent networks that were used as baselines. In the future, we intend to generalize the architecture for state-of-the-art results in sentiment analysis of variable-length texts.
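A rough single-convolution-layer text CNN of the general kind the abstract describes is sketched below with Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, and this is not the authors' exact NgramCNN architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB = 20000, 400, 100   # illustrative sizes, not from the paper

model = models.Sequential([
    # dense word-embedding features; pretrained embeddings could be loaded as weights
    layers.Embedding(VOCAB, EMB),
    # convolution + pooling act as the n-gram feature extractor/selector
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    # very simple single-layer classifier on top
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# usage: x_train is an integer matrix of token ids padded/truncated to MAXLEN
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3)
```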
Software Crash Analysis for Automatic Exploit Generation on Binary Programs
This paper presents a new method, capable of automatically generating attacks on binary programs from software crashes. We analyze software crashes with a symbolic failure model by performing concolic executions following the failure-directed paths, using a whole-system environment model and concrete-address-mapped symbolic memory in S2E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs and several large-scale applications, such as a word processor (Microsoft Office Word), a media player (mplayer), an archiver (unrar), and a PDF reader (Foxit). We can deal with vulnerability types including stack and heap overflows, format string, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and can be scaled to larger programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft Office Word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerabilities, we can automatically exploit the unlink() macro of glibc, which formerly required sophisticated hacking efforts.
Around-the-clock, controlled-release oxycodone therapy for osteoarthritis-related pain: placebo-controlled trial and long-term evaluation.
BACKGROUND Although opioid analgesics have well-defined efficacy and safety in treatment of chronic cancer pain, further research is needed to define their role in treatment of chronic noncancer pain. OBJECTIVE To evaluate the effects of controlled-release oxycodone (OxyContin tablets) treatment on pain and function and its safety vs placebo and in long-term use in patients with moderate to severe osteoarthritis pain. METHODS One hundred thirty-three patients experiencing persistent osteoarthritis-related pain for at least 1 month were randomized to double-blind treatment with placebo (n = 45) or 10 mg (n = 44) or 20 mg (n = 44) of controlled-release oxycodone every 12 hours for 14 days. One hundred six patients enrolled in an open-label, 6-month extension trial; treatment for an additional 12 months was optional. RESULTS Use of controlled-release oxycodone, 20 mg, was superior (P<.05) to placebo in reducing pain intensity and the interference of pain with mood, sleep, and enjoyment of life. During long-term treatment, the mean dose remained stable at approximately 40 mg/d after titration, and pain intensity was stable. Fifty-eight patients completed 6 months of treatment, 41 completed 12 months, and 15 completed 18 months. Common opioid side effects were reported, several of which decreased in duration as therapy continued. CONCLUSIONS Around-the-clock controlled-release oxycodone therapy seemed to be effective and safe for patients with chronic, moderate to severe, osteoarthritis-related pain. Effective analgesia was accompanied by a reduction in the interference of pain with mood, sleep, and enjoyment of life. Analgesia was maintained during long-term treatment, and the daily dose remained stable after titration. Typical opioid side effects were reported during short- and long-term therapy.
Regular Model Checking Made Simple and Efficient
We present a new technique for computing the transitive closure of a regular relation characterized by a finite-state transducer. The construction starts from the original transducer, and repeatedly adds new transitions which are compositions of currently existing transitions. Furthermore, we define an equivalence relation which we use to merge states of the transducer during the construction. The equivalence relation can be determined by a simple local check, since it is syntactically characterized in terms of “columns” that label constructed states. This makes our algorithm both simpler to present and more efficient to implement, compared to existing approaches. We have implemented a prototype and carried out verification of a number of parameterized protocols.
Data augmentation instead of explicit regularization
Modern deep artificial neural networks have achieved impressive results through models with very large capacity (compared to the number of training examples) that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Most common explicit regularization techniques, such as weight decay and dropout, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have been proven successful in terms of improved generalization, they seem to waste capacity. In contrast, data augmentation techniques do not reduce the effective capacity and improve generalization by increasing the number of training examples. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone, without any other explicit regularization techniques, can achieve the same or higher performance than regularized models, especially when training with fewer examples, and exhibits much higher adaptability to changes in the architecture.
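A minimal example of the kind of input-space augmentation being contrasted with weight decay and dropout, using torchvision transforms; the particular transforms, magnitudes, and dataset are illustrative assumptions and not the paper's exact augmentation scheme.

```python
import torchvision.transforms as T
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# augmentation applied on the fly: each epoch sees new random views of the data,
# increasing the effective number of training examples without shrinking model capacity
train_tf = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])

train_set = CIFAR10(root="./data", train=True, download=True, transform=train_tf)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
# the network itself can then be trained without dropout or weight decay,
# which is the comparison the paper investigates
```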
Sim4CV: A Photo-Realistic Simulator for Computer Vision Applications
We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.
A 1 V supply 50 nV/√Hz noise PSD CMOS amplifier using noise reduction techniques of autozeroing and chopper stabilization
A low-noise CMOS amplifier operating at a low supply voltage is developed using the two noise reduction techniques of autozeroing and chopper stabilization. The proposed amplifier utilizes a feedback with virtual grounded input-switches and a multiple-output switched op-amp. The low-noise amplifier fabricated in a 0.18-μm CMOS technology achieved 50 nV/√Hz input noise at 1-MHz chopping and 0.5-mW power consumption at 1-V supply voltage.
Parsing-based sarcasm sentiment recognition in Twitter data
Sentiment analysis is a technique to identify people's opinions, attitudes, sentiments, and emotions towards any specific target such as individuals, events, topics, products, organizations, or services. Sarcasm is a special kind of sentiment comprising words that mean the opposite of what one really wants to say (especially in order to insult someone, show irritation, or be funny). People often express it verbally through heavy tonal stress and certain gestural cues like rolling of the eyes. These tonal and gestural cues are obviously not available for expressing sarcasm in text, making its detection reliant upon other factors. In this paper, two approaches to detect sarcasm in the text of Twitter data are proposed. The first is a parsing-based lexicon generation algorithm (PBLGA); the second detects sarcasm based on the occurrence of interjection words. The combination of the two approaches is also shown and compared with the existing state-of-the-art approach to sarcasm detection. The first approach attains 0.89 precision, 0.81 recall, and 0.84 F-score. The second approach attains 0.85 precision, 0.96 recall, and 0.90 F-score on tweets with a sarcastic hashtag.
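A toy version of the interjection-based second approach, together with the precision/recall/F-score evaluation reported above; the interjection lexicon, tweets, and labels here are made-up illustrations and are not the authors' actual lexicon or data.

```python
from sklearn.metrics import precision_recall_fscore_support

# hypothetical interjection lexicon; the paper's actual word list is not reproduced here
INTERJECTIONS = {"wow", "yay", "great", "nice", "oh", "really"}

def interjection_flag(tweet: str) -> int:
    """Predict sarcastic (1) if the tweet contains an interjection word, else 0."""
    tokens = tweet.lower().split()
    return int(any(tok.strip(".,!?") in INTERJECTIONS for tok in tokens))

tweets = ["wow another monday , just great !", "heading to the gym now"]
gold   = [1, 0]                               # made-up labels for illustration
pred   = [interjection_flag(t) for t in tweets]

p, r, f, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} f-score={f:.2f}")
```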
Comparative Study on Distributed Generator Sizing Using Three Types of Particle Swarm Optimization
Total power losses in a distribution network can be minimized by installing Distributed Generators (DGs) of the correct size. In line with this objective, most researchers have used various optimization techniques to regulate a DG's output and thereby compute its optimal size. In this paper, a comparative study of a newly proposed Rank Evolutionary Particle Swarm Optimization (REPSO) method against Evolutionary Particle Swarm Optimization (EPSO) and traditional Particle Swarm Optimization (PSO) is conducted. Both REPSO and EPSO use the concept of Evolutionary Programming (EP) within the PSO process. The implementation of EP in PSO allows all particles to move toward the optimal value faster. A test on determining the optimum size of DGs in a 69-bus radial distribution system reveals the superiority of REPSO over PSO and EPSO.
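For orientation, a bare-bones traditional PSO loop of the kind being compared against REPSO and EPSO is sketched below; the objective standing in for total network power loss is a placeholder, not a distribution-system load-flow model, and the inertia/acceleration coefficients are generic textbook values.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
                 bounds=(0.0, 5.0)):
    """Plain PSO; `loss` maps a candidate DG-size vector to a scalar to minimize."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions (DG sizes, MW)
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# placeholder objective: pretend losses are minimised near 2 MW per DG
best, best_loss = pso_minimize(lambda s: float(np.sum((s - 2.0) ** 2)), dim=3)
print(best, best_loss)
```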
Comparative analysis of piezoelectric power harvesting circuits for rechargeable batteries
Using piezoelectric materials to harvest energy from ambient vibrations to power wireless sensors has been of great interest over the past few years. Because the power output of piezoelectric materials is relatively low, rechargeable batteries are considered as one kind of energy storage for accumulating the harvested energy for intermittent use. Piezoelectric harvesting circuits for rechargeable batteries follow two schemes: non-adaptive and adaptive. A non-adaptive harvesting scheme consists of a conventional diode bridge rectifier and a passive circuit. In recent years, several researchers have developed adaptive schemes for the harvesting circuit; among them, the adaptive harvesting scheme by Ottman et al. is the most promising. This paper aims to quantify the performance of adaptive and non-adaptive schemes and to discuss their performance characteristics.
What I talk about when I talk about Global Law
In this contribution I attempt to sketch what I mean when I talk about ‘global law’, finishing up with a brief consideration of what I think our responsibilities are, as legal scholars, when we engage in such talk.
Takagi-Sugeno fuzzy modeling to tune a heating controller in greenhouses
Abstract. This article deals with the modeling and control of temperature in a greenhouse, in order to compensate for disturbances in the presence of frost. The operating conditions involve passive ventilation with the irrigation turned off, so that temperature is controlled only by heating. To tune the controller, a Takagi-Sugeno fuzzy model was built using the fuzzy c-means algorithm for the premises and least squares for the consequents. Using experimental measurements in which the heating was activated, it was possible to tune a fuzzy model. Each fuzzy rule corresponds to a submodel; this technique is known as parallel distributed compensation (PDC). The proposed controller is based on the root locus of each linear submodel. Microclimate data acquisition was carried out with LabVIEW. The results are intended for application to a proportional heating valve, and simulation results are shown for two types of learning, local and global, for the consequents of the Takagi-Sugeno fuzzy submodels.
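To make the premise/consequent split concrete, a minimal numpy sketch of a two-rule Takagi-Sugeno model with a globally least-squares-fitted consequent is shown below; to keep it short, fixed Gaussian memberships stand in for the fuzzy c-means premises, and the heating/temperature data are invented, not taken from the greenhouse experiments.

```python
import numpy as np

# invented training data: x = heating input, y = greenhouse temperature response
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 0.5 * x + 2.0 + 0.1 * rng.standard_normal(200)

# premises: two Gaussian membership functions (centres and widths are assumptions;
# the paper obtains the premises with fuzzy c-means instead)
def mu(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

w1, w2 = mu(x, 2.0, 2.0), mu(x, 8.0, 2.0)
g1, g2 = w1 / (w1 + w2), w2 / (w1 + w2)      # normalised firing strengths

# consequents: y ≈ g1*(a1*x + b1) + g2*(a2*x + b2), solved globally by least squares
Phi = np.column_stack([g1 * x, g1, g2 * x, g2])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta
print("consequent parameters:", theta, " RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```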
Automatic Discovery of Speech Act Categories in Educational Games
In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker’s intentions (the task of speech act classification) which in turn is crucial to providing adequate feedback and scaffolding. A key step in the speech act classification task is defining the speech act categories in an underlying speech act taxonomy. Most research to date has relied on taxonomies which are guided by experts’ intuitions, which we refer to as an extrinsic design of the speech act taxonomies. A pure data-driven approach would discover the natural groupings of dialogue utterances and therefore reveal the intrinsic speech act categories. To this end, this paper presents a fully-automated data-driven method to discover speech act taxonomies based on utterance clustering. Experiments were conducted on three datasets from three online educational games. This work is a step towards building speech act taxonomies based on both extrinsic (expert-driven) and intrinsic aspects (datadriven) of the target domain.
CC-Hunter: Uncovering Covert Timing Channels on Shared Processor Hardware
As we increasingly rely on computers to process and manage our personal data, safeguarding sensitive information from malicious hackers is a fast-growing concern. Among many forms of information leakage, covert timing channels operate by establishing an illegitimate communication channel between two processes and transmitting information via timing modulation, thereby violating the underlying system's security policy. Recent studies have shown the vulnerability of popular computing environments, such as cloud computing, to these covert timing channels. In this work, we propose a new microarchitecture-level framework, CC-Hunter, that detects the possible presence of covert timing channels on shared hardware. Our experiments demonstrate that CC-Hunter is able to successfully detect different types of covert timing channels at varying bandwidths and message patterns.
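For intuition, a covert timing channel can be reduced to a toy model in which a sender encodes bits as short versus long gaps between observable events, and a detector looks for suspicious structure in the inter-arrival times. The sketch below only illustrates that idea; it is not CC-Hunter, whose detection operates on shared processor hardware:

```python
# Toy timing channel: bits become short/long delays; a crude detector flags a
# strongly bimodal delay distribution. Thresholds and delay values are assumptions.
import random
import statistics

def sender(bits, short=0.001, long=0.005):
    """Inter-event delays that would transmit `bits` over the toy channel."""
    return [long if b else short for b in bits]

def looks_like_timing_channel(delays, ratio_threshold=3.0):
    """Split delays around their mean and compare the two cluster means;
    a large separation hints at deliberate timing modulation."""
    mu = statistics.mean(delays)
    low = [d for d in delays if d <= mu]
    high = [d for d in delays if d > mu]
    if not low or not high:
        return False
    return statistics.mean(high) / statistics.mean(low) > ratio_threshold

bits = [random.randint(0, 1) for _ in range(200)]
covert = sender(bits)
benign = [random.gauss(0.003, 0.0003) for _ in range(200)]
print(looks_like_timing_channel(covert), looks_like_timing_channel(benign))
```

A realistic detector must also cope with noise, drift, and adaptive senders, which is what motivates frameworks such as CC-Hunter that observe behavior at the shared-hardware level.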
Four Square Step Test in ambulant persons with multiple sclerosis: validity, reliability, and responsiveness.
The aim of this study was to establish the concurrent validity and relative and absolute reliability, define the minimal detectable change, and evaluate the floor and ceiling effects of the Four Square Step Test (FSST) in ambulant persons with multiple sclerosis (pwMS). Twenty-five pwMS performed the FSST on two occasions, 8.1±4.1 days apart. During the first testing session, participants also reported their fall history, performed the Berg Balance Scale and Dynamic Gait Index, and completed the Activities-Specific Balance Confidence Scale. Performance on the FSST was significantly (P<0.001) and strongly associated with performance on the Berg Balance Scale (rs=-0.84), Dynamic Gait Index (rs=-0.81), and Activities-Specific Balance Confidence Scale (rs=-0.78). Relative reliability of the FSST was excellent (ICC2,1=0.922). The minimal detectable change estimate for the FSST was 4.6 s. The FSST is a valid and reliable measure of dynamic standing balance in ambulant pwMS. However, because a substantial change (43%) is required to demonstrate a real change in individual performance, the FSST is unlikely to be sensitive in detecting longitudinal change in dynamic standing balance.
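For context, a minimal detectable change is conventionally derived from the standard error of measurement; the abstract does not spell out the computation, so the expression below is the usual textbook form (a hedged illustration) rather than a quotation from the paper:

```latex
% Standard 95% minimal detectable change based on the standard error of measurement.
\mathrm{SEM} = \mathrm{SD}_{\text{baseline}} \sqrt{1 - \mathrm{ICC}}, \qquad
\mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM}
```

With the reported ICC of 0.922, the factor sqrt(1 - ICC) is about 0.28, so under this formula the 4.6 s estimate corresponds to a baseline standard deviation of roughly 6 s.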
Vitamin-mineral treatment of attention-deficit hyperactivity disorder in adults: double-blind randomised placebo-controlled trial.
BACKGROUND The role of nutrition in the treatment of attention-deficit hyperactivity disorder (ADHD) is gaining international attention; however, treatments have generally focused only on diet restriction or supplementing with one nutrient at a time. AIMS To investigate the efficacy and safety of a broad-based micronutrient formula consisting mainly of vitamins and minerals, without omega fatty acids, in the treatment of ADHD in adults. METHOD This double-blind randomised controlled trial assigned 80 adults with ADHD in a 1:1 ratio to either micronutrients (n = 42) or placebo (n = 38) for 8 weeks (trial registered with the Australian New Zealand Clinical Trials Registry: ACTRN12609000308291). RESULTS Intent-to-treat analyses showed significant between-group differences favouring active treatment on self- and observer-rated, but not clinician-rated, ADHD scales. However, clinicians rated those receiving micronutrients as more improved than those on placebo, both globally and on ADHD symptoms. Post hoc analyses showed that for those with moderate/severe depression at baseline, there was a greater change in mood favouring active treatment over placebo. There were no group differences in adverse events. CONCLUSIONS This study provides preliminary evidence of efficacy for micronutrients in the treatment of ADHD symptoms in adults, with a reassuring safety profile.
Loan Sales and Relationship Banking
Firms raise money from banks and the bond market. Banks sell loans in a secondary market to recycle their funds or to trade on private information. Liquidity in the loan market depends on the relative likelihood of each motive for trade and affects firms’ optimal financial structure. The endogenous degree of liquidity is not always socially optimal: There is excessive trade in highly rated names, and insufficient liquidity in riskier bonds. We provide testable implications for prices and quantities in primary and secondary loan markets, and bond markets. Further, we posit that risk-based capital requirements may be socially desirable. THE TERM “COLLATERALIZED LOAN OBLIGATIONS” (CLOs) was coined in 1989, when corporate loans were first used as collateral in Collateralized Debt Obligations (CDOs).1 Since then, the growth in loan sales has been enormous. According to Lucas et al. (2006), $1.1 trillion of CDOs were outstanding as of 2005, and 50% of their collateral was comprised of loans. In addition to pooled securities, banks also trade first loss positions on single names through direct sales of individual loans. In the United States, loan sales have grown from $8 billion in 1991 to $154.8 billion by 2004.2 If a bank securitizes or sells a loan that it originated, it is buying insurance on credit events over which it has either more control or more information than the buyer. In the face of this informational friction, why did the secondary market for corporate loans develop in the 1990s? What effect has this had on relationship banking? In this paper, we characterize when a liquid secondary market for loans arises, when a liquid secondary loan market is socially desirable, and we provide testable predictions on the effect of the emergence of this market on prices and quantities in bond and primary loan markets. Our predictions are based on both changes in the parameters that lead to higher loan liquidity and changes in the contracts that are written between banks and firms given this higher liquidity. 1 This is reported in Lucas, Goodman, and Fabozzi (2006). 2 This figure is reported in Drucker and Puri (2006).
CAS/PI: A Portable and Extensible Interface for Computer Algebra Systems
CAS/PI is a Computer Algebra System graphic user interface designed to be highly portable and extensible. It has been developed by composition of pre-existing software tools such as the Maple, Sisyphe, or Ulysse systems and the ZicVis 3-D plotting library, using control integration technology and a set of high-level graphic toolkits to build the formula editor and the dialog manager. The main aim of CAS/PI is to allow a wide range of runtime reconfigurations and extensions. For instance, it is possible to add new tools to a running system, to modify connections between working tools, to extend the set of graphic symbols managed by the formula editor, to design new high-level editing commands based on the syntax or semantics of mathematical formulas, to customize and extend the menu-button based user interface, etc. More generally, CAS/PI can be seen equally as a powerful system-independent graphic user interface enabling inter-system communications, a toolkit to allow fast development of custom-made scientific software environments, or a very convenient framework for experimenting with computer algebra system protocols and man-machine interfaces.
Colostrum Feeding: Knowledge, Attitude and Practice in Pregnant Women in a Teaching Hospital in Nepal
Background: The role of colostrum in promoting growth and development of the newborn, as well as fighting infection, is widely acknowledged. In Nepal, there are cultural differences in the acceptability of colostrum and the prevalence of colostrum feeding. Although breastfeeding is a common practice in Nepal, the importance of colostrum feeding is still poorly understood. Objectives of the study: To assess the awareness of the importance of colostrum feeding in pregnant women. Methods: Data collection was done through a semi-structured questionnaire on colostrum feeding among pregnant women attending the Gynaecology and Obstetrics Outpatient Department (OPD) and Antenatal Ward of Kathmandu Medical College Teaching Hospital (KMCTH). The study was conducted during the months of December 2011 and January 2012. Results: The study shows that 74% of women had heard about colostrum and 69% knew that it is nutritious milk to be fed to newborn babies. Nine percent (9%) of women were aware of its protective effect, and 41% had knowledge that it helps in the proper growth of children and in fighting infections. There were still many women (26%) who lacked knowledge about colostrum, the majority being uneducated and from rural areas. Those women who knew about it received the information about colostrum via various media (30%), followed by family and friends (16%) and antenatal advice (12%), which contributes to the improved practice of colostrum feeding in urban areas. Conclusion: Many women were aware of the importance of colostrum, but the data still indicate that further efforts are necessary to improve the knowledge, attitude and practice of colostrum feeding. Introduction: Colostrum is the first milk produced by the mammary glands of mammals in late pregnancy, just prior to giving birth and continuing through the early days of breastfeeding. Colostrum is very rich in proteins, carbohydrates, vitamin A, and sodium chloride, but contains lower amounts of lipids and potassium than normal milk. Newborns have an immature digestive system, which suits the low-volume, concentrated form of nutrient supply provided by colostrum. The laxative effect of colostrum encourages passage of the baby's first stool, meconium. This helps to clear excess bilirubin, which is produced in large quantities at birth, and helps prevent jaundice. Colostrum contains various immunoglobulins such as IgA (reactive to Escherichia coli virulence-associated proteins), IgG and IgM. Other immune components of colostrum are lactoferrin, lysozyme, lactoperoxidase, complement and proline-rich peptide (PRP). It also contains various cytokines and growth factors. PRP helps fight various viral infections such as herpes viruses and HIV, bacterial and viral infections that are difficult to treat, various cancers, asthma, allergies and autoimmune diseases. Colostrum also helps to reduce leading causes of death in our country, such as diarrhoea and acute respiratory infection (ARI). Colostrum feeding practices: Though colostrum has been proved beneficial to newborn babies, studies have revealed that breastfeeding mothers and other family members do not have adequate knowledge about it, thus preventing infants from acquiring this nutritional food. A study in India revealed that mothers were unaware of the time of initiation of breastfeeding and colostrum feeding. Only 92% and 70% of women undergoing normal delivery and caesarean section, respectively, gave a correct response about the time of initiation of breastfeeding.
Though 92% of the mothers knew that breastfeeding should be initiated within one hour after delivery, only 36% of them had actually done so. It also showed that 52% of the mothers did not receive any advice on breastfeeding during the antenatal period. A similar study conducted in the eastern part of Nepal on the knowledge, attitude and practice of mothers regarding breastfeeding showed that though all mothers knew that they had to breastfeed their babies, they did not have knowledge about the appropriate timing for breastfeeding and colostrum feeding. None of the mothers got advice regarding breastfeeding and colostrum feeding during ANC visits. Colostrum feeding myths and barriers: The importance of colostrum is known only to a limited population. There are still many people who believe that colostrum is a harmful substance that should be discarded. It is thought to be an unwanted substance associated with ill health. There are certain barriers preventing the feeding of colostrum to newborn babies. Maternal barriers: Many mothers lack knowledge about the importance of early initiation of breastfeeding and the benefits of colostrum feeding. Some mothers dislike the colour of colostrum. They discard it themselves and also on the advice of in-laws. There is also a misconception that breast milk does not come in the first few days after delivery and that it is insufficient for the baby's needs. Prolonged labour and surgical deliveries also hinder colostrum feeding. Neonatal barriers: Neonatal illness is one of the major barriers to colostrum feeding. Some babies are not able to suck breast milk due to illness, deformities or other reasons. Other barriers: Bathing the baby and mother after birth delays the initiation of breastfeeding. Lack of family support, discouragement of early initiation of breastfeeding by traditional birth attendants, and decisions made by family members to give other fluids are some important barriers to colostrum feeding. This study was initiated to assess the knowledge, attitude and practice of colostrum feeding in pregnant women visiting a teaching hospital. The study was also done with the aim of creating awareness regarding the importance of colostrum feeding among pregnant women. The objective of the study was to assess awareness of the importance of colostrum feeding in pregnant women.
CORE: A Cloud-based Object Recognition Engine for robotics
An object recognition engine needs to extract discriminative features from data representing an object and accurately classify the object to be of practical use in robotics. Furthermore, the classification of the object must be rapidly performed in the presence of a voluminous stream of data. These conditions call for a distributed and scalable architecture that can utilize a cloud computing infrastructure for performing object recognition. This paper introduces a Cloud-based Object Recognition Engine (CORE) to address these needs. CORE is able to train on large-scale datasets, perform classification of 3D point cloud data, and efficiently transfer data in a robotic network.
Hop-by-hop adaptive video streaming in content centric network
To guarantee Quality of Experience (QoE) for video streaming services in a future Internet architecture, Content Centric Network (CCN), Dynamic Adaptive Streaming over HTTP (DASH) technology is used to deliver the proper video content according to the network situation. However, CCN adopts a host-to-content communication model and a universal caching design, which seriously degrades the performance of DASH over CCN. In this paper, we propose a hop-by-hop adaptive video streaming scheme (HAVS-CCN) to improve the performance of adaptive video streaming in CCN. HAVS-CCN is simple and can readily be deployed with DASH over CCN. It directly adjusts video quality and relieves network congestion at the bottleneck of the transmission path when DASH inaccurately estimates the network throughput. Our scheme optimizes hop-by-hop content transmission, achieving video quality adaptation and data packet flow control simultaneously. Simulation results on small-scale and large-scale networks reveal that DASH with the HAVS-CCN scheme outperforms the original DASH over CCN in terms of video playback quality and average delay.
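For reference, the client-side adaptation logic that plain DASH performs (the baseline that HAVS-CCN augments hop by hop inside the network) reduces to picking the highest rendition that fits a throughput estimate. The bitrate ladder and the safety margin below are illustrative assumptions, not values from the paper:

```python
# Baseline DASH-style rate selection: choose the richest representation whose
# bitrate fits within a safety margin of the estimated throughput.
def pick_representation(estimated_throughput_kbps,
                        ladder_kbps=(500, 1200, 2500, 5000),
                        safety=0.8):
    budget = estimated_throughput_kbps * safety
    feasible = [r for r in ladder_kbps if r <= budget]
    return max(feasible) if feasible else min(ladder_kbps)

# Example: a 3.5 Mbit/s estimate selects the 2500 kbit/s rendition (2500 <= 0.8 * 3500).
print(pick_representation(3500))
```

When this end-to-end estimate is wrong, as can happen with CCN's in-network caching, the abstract's point is that adjusting quality hop by hop at the bottleneck can correct the mismatch.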
Three-Dimensional Urban EM Wave Propagation Model for Radio Network Planning and Optimization Over Large Areas
A new 3D urban electromagnetic wave propagation model is presented. It provides fast 3D deterministic predictions in urban radio configurations and over large areas. The various techniques to make it suitable to the network planning and optimization of large wireless networks are described. The resulting radio propagation maps exhibit seamless coverage between the various environments (dense urban, urban, and suburban). The model efficiently addresses all types of outdoor transmitter configurations (macrocells, minicells, microcells, and picocells) and all types of receiver locations (at ground level, over the rooftop, and at high building floors). It predicts the field strength as well as the dominant specular contributions of the impulse responses to build ray spectra (including delays and angles). Thus, the model may also be used to estimate the performances of new radio systems [diversity and multiple-input-multiple-output (MIMO)]. The narrowband power prediction of the model is evaluated by comparison with microcell measurements. The evaluation stresses the advantage of 3D modeling compared with the vertical-plane approach or 2D ray tracing. Finally, the ability of the model to simulate radio wideband characteristics in a complex environment is demonstrated by comparing delay-spread estimates to measurements collected from a high-macrocell transmitter in a hilly city and to arrival angles collected in a suburban macrocell area.
Action of radiofrequency on localized abdominal fat: a literature review. DOI: http://dx.doi.org/10.5892/ruvrd.v13i1.2013
The demand for a body silhouette within accepted standards of beauty has been increasing over the years. Men and women want a body free of imperfections, and localized fat has been one of the biggest reasons for seeking treatment in beauty centers, gyms, spas and with nutritionists. Radiofrequency is a technique mainly used for the stimulation and production of collagen, but it also generates energy that allows the device to reach the fat layers, thereby reducing the volume of fat cells and speeding up metabolism, making it a modern and complete aesthetic treatment for reducing waist measurement. The main objective of this research is to review the literature on the radiofrequency technique applied to localized fat in the abdomen. For this purpose, several searches were made for scientific articles available in scientific databases. Current research shows that technological advances can contribute greatly to improving aesthetic disorders, as is seen in the use of radiofrequency.
Artificial Intelligence, Employment, and Income
Artificial intelligence (AI) will have many profound societal effects. It promises potential benefits (and may also pose risks) in education, defense, business, law, and science. In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need for human toil. We also note that some people fear the automation of work by machines and the resulting unemployment. Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically. The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension. We conclude with a discussion of problems of moving toward the kind of economy that will be enabled by developments in AI. ARTIFICIAL INTELLIGENCE (AI) and other developments in computer science are giving birth to a dramatically different class of machines: machines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question. Some claim that AI is not really very different from other technologies that have supported automation and increased productivity, technologies such as mechanical engineering, electronics, control engineering, and operations research. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, there will be some, perhaps even substantial, shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an outcome is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI, one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Certainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities, if not in twenty years, then surely in fifty.
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans. Of course, the mere fact that some work can be performed automatically does not make it inevitable that it will be. Automation depends on many factors: economic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality).
Design and Implementation of Scalable Wireless Sensor Network for Structural Monitoring
An integrated hardware and software system for a scalable wireless sensor network (WSN) is designed and developed for structural health monitoring. An accelerometer sensor node is designed, developed, and calibrated to meet the requirements for structural vibration monitoring and modal identification. The nodes have four channels of accelerometers in two directions and a microcontroller for processing and wireless communication in a multihop network. Software components have been implemented within the TinyOS operating system to provide a flexible software platform and scalable performance for structural health monitoring applications. These components include a protocol for reliable command dissemination through the network and data collection, and improvements to software components for data pipelining, jitter control, and high-frequency sampling. The prototype WSN was deployed on a long-span bridge with 64 nodes. The data acquired from the testbed were used to examine the scalability of the network and the data quality. Robust and scalable performance was demonstrated even with a large number of hops required for communication. The results showed that the WSN provides spatially dense and accurate ambient vibration data for identifying vibration modes of a bridge. DOI: 10.1061/(ASCE)1076-0342(2008)14:1(89). CE Database subject headings: Sensors; Design; Implementation; Networks; Monitoring.
New security features in DLMS/COSEM — A comparison to the smart meter gateway
In this paper we describe the new security features of the international standard DLMS/COSEM that came along with its new Green Book Ed. 8. We compare them with those of the German Smart Meter Gateway approach, which uses TLS to protect the privacy of connections. We show that the security levels of the cryptographic core methods are similar in both systems. However, there are several security aspects on which the German approach provides more concrete implementation instructions than DLMS/COSEM does (such as lifetimes of certificates and random number generators). We describe the differences in security and architecture of the two systems.
Psychometric properties and clinical cut-off scores of the Spanish version of the Social Anxiety Scale for Adolescents.
This study examined the reliability and validity evidence drawn from the scores of the Spanish version of the Slovenian-developed Social Anxiety Scale for Adolescents (SASA; Puklek, 1997; Puklek & Vidmar, 2000) using a community sample (Study 1) and a clinical sample (Study 2). Confirmatory factor analysis in Study 1 replicated the 2-factor structure found by the original authors in a sample of Slovenian adolescents. Test-retest reliability was adequate. Furthermore, the SASA correlated significantly with other social anxiety scales, supporting concurrent validity evidence in Spanish adolescents. The results of Study 2 confirmed the correlations between the SASA and other social anxiety measures in a clinical sample. In addition, findings revealed that the SASA can effectively discriminate between adolescents with a clinical diagnosis of social anxiety disorder (SAD) and those without this disorder. Finally, cut-off scores for the SASA are provided for Spanish adolescents.