title | abstract |
---|---|
Synergistic Control of Kinetochore Protein Levels by Psh1 and Ubr2 | The accurate segregation of chromosomes during cell division is achieved by attachment of chromosomes to the mitotic spindle via the kinetochore, a large multi-protein complex that assembles on centromeres. The budding yeast kinetochore comprises more than 60 different proteins. Although the structure and function of many of these proteins have been investigated, we have little understanding of the steady-state regulation of kinetochores. The primary model of kinetochore homeostasis suggests that kinetochores assemble hierarchically from the centromeric DNA via the inclusion of a centromere-specific histone into chromatin. We tested this model by trying to perturb kinetochore protein levels by overexpressing an outer kinetochore gene, MTW1. This increase in protein failed to change protein recruitment, consistent with the hierarchical assembly model. However, we find that deletion of Psh1, a key ubiquitin ligase that is known to restrict inner kinetochore protein loading, does not increase levels of outer kinetochore proteins, thus breaking the normal kinetochore stoichiometry. This perturbation leads to chromosome segregation defects, which can be partially suppressed by mutation of Ubr2, a second ubiquitin ligase that normally restricts protein levels at the outer kinetochore. Together these data show that Psh1 and Ubr2 synergistically control the amount of proteins at the kinetochore. |
Intellectual capital ROI: a causal map of human capital antecedents and consequents | This report describes the results of a ground-breaking research study that measured the antecedents and consequents of effective human capital management. The research sample consisted of 76 senior executives from 25 companies in the financial services industry. The results of the study yielded a holistic causal map that integrated constructs from the fields of intellectual capital, knowledge management, human resources, organizational behaviour, information technology and accounting. The integration of both quantitative and qualitative measures in an overall conceptual model yielded several research implications. The resulting structural equation model allows participating organizations and researchers to gauge the effectiveness of an organization's human capital capabilities. This will allow practitioners and researchers to more efficiently allocate resources with regard to human capital management. The potential outcomes of the study are limitless, since a program of consistent re-evaluation can lead to the establishment of causal relationships between human capital management and economic and business results. Introduction Today's knowledge-based world consists of universal dynamic change and massive information bombardment. By the year 2010, the codified information base of the world is expected to "double every 11 hours" (Bontis, 1999, p. 435). Information storage capacities continue to expand enormously. In 1950, IBM's RAMAC tape contained 4.4 megabytes, and as many as 50 of these tapes could be stored together. At that time, 220 megabytes represented the frontier of information storage. Many of today's standard desktop computers are sold with 40 gigabytes of hard disk space. It is sobering to remember that full-motion video in uncompressed form requires 1 gigabyte per minute, and that the 83 minutes of Snow White digitized in full colour amount to 15 terabytes of space. Unfortunately, the conscious mind is only capable of processing somewhere between 16 and 40 bits of information (ones and zeros) per second. How do we reconcile this information bombardment conundrum when it seems that human beings are the bottleneck? |
Building Language Models for Text with Named Entities | Text in many domains involves a significant number of named entities. Predicting entity names is often challenging for a language model because they appear less frequently in the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming code, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% better perplexity in code generation than state-of-the-art language models. |
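A minimal sketch of the general idea behind type-aware entity modeling, assuming a toy annotated corpus (this is an illustration of leveraging entity types, not the paper's discriminative model): entity mentions are abstracted to type tokens for the context model, and surface names are scored with a per-type name distribution. The type labels (`ING`, `TEMP`) and all tokens here are hypothetical.

```python
from collections import Counter, defaultdict

# Toy corpus with entity annotations: (token, entity_type or None).
corpus = [
    [("mix", None), ("flour", "ING"), ("and", None), ("sugar", "ING")],
    [("bake", None), ("dough", "ING"), ("at", None), ("350F", "TEMP")],
]

bigrams, unigrams = Counter(), Counter()
type_names = defaultdict(Counter)  # entity type -> counts of surface names

for sent in corpus:
    # Abstract each entity mention to a type token such as <ING>.
    abstracted = ["<s>"] + [f"<{t}>" if t else w for w, t in sent]
    for w, t in sent:
        if t:
            type_names[t][w] += 1
    for a, b in zip(abstracted, abstracted[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

def prob(prev, word, etype=None, alpha=0.1, vocab=50):
    """P(word | prev): bigram over type-abstracted tokens,
    multiplied by a smoothed P(name | type) for entity slots."""
    tok = f"<{etype}>" if etype else word
    p = (bigrams[(prev, tok)] + alpha) / (unigrams[prev] + alpha * vocab)
    if etype:
        names = type_names[etype]
        p *= (names[word] + alpha) / (sum(names.values()) + alpha * max(len(names), 1))
    return p

print(prob("mix", "flour", etype="ING"))
```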
Towards MDA implementation based on a novel BPMN metamodel and ATL transformation rules | Few research works have proposed a BPMN metamodel fully compliant with the BPMN specification. In a recent project called PERCOMOM, based on the MDA approach, we noticed the need for a BPMN metamodel and a transformation tool to handle transformations from a BPMN model to an XML model automatically. In this paper, we propose a BPMN metamodel and a set of ATL-based transformation rules to handle those transformations automatically. We have also applied and validated our approach on several samples of BPMN models. |
Meaning in life across the life span: Levels and correlates of meaning in life from emerging adulthood to older adulthood | Meaning in life is thought to be important to well-being throughout the human life span. We assessed the structure, levels, and correlates of the presence of meaning in life, and the search for meaning, within four life stage groups: emerging adulthood, young adulthood, middle-age adulthood, and older adulthood. Results from a sample of Internet users (N = 8756) demonstrated the structural invariance of the meaning measure used across life stages. Those at later life stages generally reported a greater presence of meaning in their lives, whereas those at earlier life stages reported higher levels of searching for meaning. Correlations revealed that the presence of meaning has similar relations to well-being across life stages, whereas searching for meaning is more strongly associated with well-being deficits at later life stages. Introduction Meaning in life has enjoyed a renaissance of interest in recent years, and is considered to be an important component of broader well-being. Perceptions of meaning in life are thought to be related to the development of a coherent sense of one's identity (Heine, Proulx, & Vohs, 2006), and the process of creating a sense of meaning theoretically begins in adolescence, continuing throughout life (Fry, 1998). Meaning creation should then be linked to individual development, and is likely to unfold in conjunction with other processes, such as the development of identity, relationships, and goals. Previous research has revealed that people experience different levels of the presence of meaning at different ages (e.g., Ryff & Essex, 1992), although these findings have been inconsistent, and inquiries have generally focused on limited age ranges (e.g., Pinquart, 2002). The present study aimed to integrate research on dimensions of meaning in life across the life span by providing an analysis … |
Pseudo Double Bubble: Jejunal Duplication Mimicking Duodenal Atresia on Prenatal Ultrasound | Prenatal ultrasound showing a double bubble is considered to be pathognomonic of duodenal atresia. We recently encountered an infant with prenatal findings suggestive of duodenal atresia and a normal karyotype who actually had a jejunal duplication cyst on exploration. A finding of an antenatal double bubble should lead to a thorough evaluation of the gastrointestinal tract and appropriate prenatal/neonatal testing and management, as many cystic lesions within the abdomen can present with this prenatal finding. |
Simultaneous assay of pigments, carbohydrates, proteins and lipids in microalgae. | Biochemical compositional analysis of microbial biomass is a useful tool that can provide insight into the behaviour of an organism and its adaptational response to changes in its environment. To some extent, it reflects the physiological and metabolic status of the organism. Conventional methods to estimate biochemical composition often employ different sample pretreatment strategies and analytical steps for analysing each major component, such as total proteins, carbohydrates, and lipids, making it labour-, time- and sample-intensive. Such analyses when carried out individually can also result in uncertainties of estimates as different pre-treatment or extraction conditions are employed for each of the component estimations and these are not necessarily standardised for the organism, resulting in observations that are not easy to compare within the experimental set-up or between laboratories. We recently reported a method to estimate total lipids in microalgae (Chen, Vaidyanathan, Anal. Chim. Acta, 724, 67-72). Here, we propose a unified method for the simultaneous estimation of the principal biological components, proteins, carbohydrates, lipids, chlorophyll and carotenoids, in a single microalgae culture sample that incorporates the earlier published lipid assay. The proposed methodology adopts an alternative strategy for pigment assay that has a high sensitivity. The unified assay is shown to conserve sample (by 79%), time (67%), chemicals (34%) and energy (58%) when compared to the corresponding assay for each component, carried out individually on different samples. The method can also be applied to other microorganisms, especially those with recalcitrant cell walls. |
Ant system: optimization by a colony of cooperating agents | An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling problems. Finally, we discuss the salient characteristics of the AS: global data structure revision, distributed communication, and probabilistic transitions. |
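The probabilistic transition rule and positive-feedback pheromone update that the abstract describes can be sketched compactly. Below is a minimal Ant System on a random TSP instance, assuming the standard parameterization (alpha, beta, evaporation rho, deposit Q); it is a sketch of the technique, not the authors' exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
xy = rng.random((n, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) + np.eye(n)  # eye avoids /0
tau = np.ones((n, n))          # pheromone trails
eta = 1.0 / dist               # heuristic "visibility"
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best, best_len = None, np.inf
for it in range(50):
    tours = []
    for ant in range(n):                  # one ant starts in each city
        tour = [ant]
        while len(tour) < n:
            i = tour[-1]
            p = (tau[i] ** alpha) * (eta[i] ** beta)
            p[tour] = 0.0                 # mask already-visited cities
            p /= p.sum()
            tour.append(int(rng.choice(n, p=p)))  # probabilistic transition
        tours.append(tour)
    tau *= (1.0 - rho)                    # evaporation
    for tour in tours:                    # deposit: the positive feedback
        L = tour_length(tour)
        if L < best_len:
            best, best_len = tour, L
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += Q / L
            tau[b, a] += Q / L
print(best_len, best)
```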
The welfare costs of unreliable water service | Throughout the developing world, many water distribution systems are unreliable. As a result, it becomes necessary for each household to store its own water as a hedge against this uncertainty. Since arrivals of water are not synchronized across households, serious distributional inefficiencies arise. We develop a model describing the optimal intertemporal depletion of each household’s private water storage when it is uncertain when water will next arrive to replenish supplies. The model is calibrated using survey data from Mexico City, a city where many households store water in sealed rooftop tanks known as tinacos. The calibrated model is used to evaluate the potential welfare gains that would occur if alternative modes of water provision were implemented. We estimate that most of the potential distributional inefficiencies can be eliminated simply by making the frequency of deliveries the same across households which now face haphazard deliveries. This would require neither costly investments in infrastructure nor price increases. |
A Communication Theoretical Analysis of Multiple-Access Channel Capacity in Magneto-Inductive Wireless Networks | Magneto-inductive (MI) wireless communication is an emerging subject with a rich set of applications, including local area networks for the Internet-of-Things, wireless body area networks, in-body and on-chip communications, and underwater and underground sensor networks, as a low-cost alternative to radio frequency, acoustic or optical methods. Practical MI networks include multiple access channel (MAC) mechanisms for connecting a random number of coils without any specific topology or coil orientation assumptions, covering both short and long ranges. However, there is no information-theoretic model of MI MAC (MIMAC) capacity for such universal networks with fully coupled, frequency-selective channel models and an exact 3-D coupling model of circular coils rather than long-range dipole approximations. In this paper, the K-user MIMAC capacity is modeled and analyzed information theoretically, and two-user MIMACs are modeled with explicitly detailed channel responses, bandwidths and coupled thermal noise. K-user MIMAC capacity is achieved through a Lagrangian solution with K-user water-filling optimization. Optimum orientations maximizing capacity and received power are theoretically analyzed, and numerically simulated, for two-user MIMACs. Constructive gain and destructive interference mechanisms in MIMACs are introduced in comparison with classical interference-based approaches. The theoretical basis promises the utilization of MIMACs in 5G architectures. |
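Water-filling is the core allocation step invoked above. A minimal single-user sketch over parallel subchannels follows (the K-user MIMAC version couples users and frequencies, which this deliberately omits): allocate power p_k = max(0, mu - N0/g_k), with the water level mu found by bisection so the powers sum to the budget. The gains and power budget below are made-up numbers.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Power allocation over parallel subchannels: p_k = max(0, mu - noise/g_k),
    with the water level mu chosen by bisection to exhaust total_power."""
    inv = noise / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power   # brackets the water level
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

g = np.array([2.0, 1.0, 0.25])                    # subchannel power gains
p = water_filling(g, total_power=3.0)
rate = np.sum(np.log2(1.0 + g * p))               # achievable sum rate, bits/use
print(p, rate)
```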
Norms with feeling: towards a psychological account of moral judgment | There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that "disgust" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations. |
Beach Profile Equilibrium and Patterns of Wave Decay and Energy Dissipation across the Surf Zone Elucidated in a Large-Scale Laboratory Experiment | WANG, P. and KRAUS, N.C., 2005. Beach profile equilibrium and patterns of wave decay and energy dissipation across the surf zone elucidated in a large-scale laboratory experiment. Journal of Coastal Research, 21(3), 522–534. West Palm Beach (Florida), ISSN 0749-0208. The widely accepted assumption that the equilibrium beach profile in the surf zone corresponds with uniform wave-energy dissipation per unit volume is directly examined in six cases from the large-scale SUPERTANK laboratory experiment. Under irregular waves, the pattern of wave-energy dissipation across a large portion of the surf zone became relatively uniform as the beach profile evolved toward equilibrium. Rates of wave-energy dissipation across a near-equilibrium profile calculated from wave decay in the surf zone support the prediction derived by DEAN (1977). Substantially different equilibrium beach-profile shapes and wave-energy dissipation rates and patterns were generated for regular waves as compared to irregular waves of statistically similar significant wave height and spectral peak period. Large deviation of wave-energy dissipation from the equilibrium rate occurred at areas on the beach profile with active net cross-shore sediment transport and substantial sedimentation and erosion. The rate of wave-energy dissipation was greater at the main breaker line and in the swash zone, as compared to the middle of the surf zone. Based on analysis of the SUPERTANK data, a simple equation is developed for predicting the height of irregular waves in the surf zone on an equilibrium profile. The decay in wave height is proportional to the water depth to the one-half power, as opposed to values of unity or greater derived previously for regular waves. ADDITIONAL INDEX WORDS: Beach profile, equilibrium, cross-shore sediment transport, wave breaking, coastal morphology, SUPERTANK, physical modeling. |
The generation of hexahedral meshes for assembly geometry: survey and progress | The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem. Published in 2001 by John Wiley & Sons, Ltd. |
DTAM: Dense tracking and mapping in real-time | DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application. |
A robust miniature robot design for land/air hybrid locomotion | The utility of miniature ground robots has long been limited by reduced locomotion capabilities compared to larger robots. Many avenues of research have been pursued to improve ground locomotion to alleviate this issue. In this paper, another option is explored in which a small ground robot is equipped with the ability to fly, allowing it to move to previously unreachable areas if necessary. |
Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks | Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs. |
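A minimal linear echo state network sketch illustrating the short-term memory setup described above: a random reservoir scaled to spectral radius below 1 is driven by a sparse multidimensional input, and a least-squares readout is trained to recover a delayed input. The paper's bounds rest on sparse-recovery arguments from random matrix theory; the plain least-squares readout and all sizes here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 400, 20, 3            # reservoir size, input dimension, active channels
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # echo-state condition: radius < 1
V = rng.normal(size=(N, M))                     # input weights

# Sparse multidimensional input stream: k active channels per time step.
T = 2000
U = np.zeros((T, M))
for t in range(T):
    idx = rng.choice(M, size=k, replace=False)
    U[t, idx] = rng.normal(size=k)

# Run the linear reservoir, then train a readout to recover the input d steps back.
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = W @ x + V @ U[t]
    X[t] = x

d = 5                                           # recall delay
A, Y = X[d:], U[:-d]
readout, *_ = np.linalg.lstsq(A, Y, rcond=None)
err = np.linalg.norm(A @ readout - Y) / np.linalg.norm(Y)
print(f"relative recovery error at delay {d}: {err:.3f}")
```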
Logic Programs with Annotated Disjunctions | Current literature offers a number of different approaches to what could generally be called “probabilistic logic programming”. These are usually based on Horn clauses. Here, we introduce a new formalism, Logic Programs with Annotated Disjunctions, based on disjunctive logic programs. In this formalism, each of the disjuncts in the head of a clause is annotated with a probability. Viewing such a set of probabilistic disjunctive clauses as a probabilistic disjunction of normal logic programs allows us to derive a possible world semantics, more precisely, a probability distribution on the set of all Herbrand interpretations. We demonstrate the strength of this formalism by some examples and compare it to related work. |
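A worked toy example of the possible-world semantics, using the classic coin program `heads(c):0.5 ; tails(c):0.5 :- toss(c).` together with the fact `toss(c).`: each selection picks one disjunct per clause, yielding a normal logic program whose least Herbrand model is a possible world, and the world's probability is the product of the chosen annotations. This is a sketch for ground programs whose head annotations sum to 1 (full LPADs also permit a "null" choice when they sum to less).

```python
from itertools import product

# Each clause: (list of (head_atom, probability), list of body_atoms).
clauses = [
    ([("toss(c)", 1.0)], []),
    ([("heads(c)", 0.5), ("tails(c)", 0.5)], ["toss(c)"]),
]

def least_model(selection):
    """Least Herbrand model of the normal program picked by `selection`
    (one (head, prob) disjunct per clause), by naive fixpoint iteration."""
    world, changed = set(), True
    while changed:
        changed = False
        for (head, _), (_, body) in zip(selection, clauses):
            if all(a in world for a in body) and head not in world:
                world.add(head)
                changed = True
    return frozenset(world)

dist = {}
for sel in product(*[heads for heads, _ in clauses]):
    p = 1.0
    for _, prob in sel:
        p *= prob
    world = least_model(sel)
    dist[world] = dist.get(world, 0.0) + p   # sum over selections per world

for world, p in dist.items():                # two worlds, each with P = 0.5
    print(sorted(world), p)
```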
Lichen sclerosus et atrophicus and sexual abuse. | AIMS
The aetiology of lichen sclerosus et atrophicus (LSA) is unknown. A series of 42 cases of this uncommon condition is reported. The aim of this study was to identify associations of LSA and document the association with sexual abuse.
METHODS
Information about the patients was obtained by retrospective case note review and some patients were contacted by telephone for further information.
RESULTS
In 12 cases there was evidence of sexual abuse. The abused group were slightly older than the non-abused group but were similar in all other respects. All three patients who presented over the age of 12 years had evidence of sexual abuse. Genital trauma was recalled by the patient or found at examination in 17 cases. Evidence of autoimmunity was present in five cases. Positive microbiological isolates were obtained in 18 cases. In only 11 cases were there no associated factors. The symptoms of LSA started between the ages of 3 and 7 years in most patients. The usual symptoms were related to genital skin involvement, and symptoms related to bladder and bowel function were common (50%).
CONCLUSION
In this large series of paediatric LSA, associations with trauma, autoimmunity, and infection were noted. There was a high rate of coexisting sexual abuse with LSA, possibly due to genital trauma. |
Handling Obstacles in Goal-Oriented Requirements Engineering | Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. The paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. |
Vague matrices in linear programming | This paper deals with so-called vague matrices, the columns of which are convex sets. A special “square” problem of the vague optimization is analysed. The results form a base for the subsequent outline of an algorithm for solving the LP-problem with a vague matrix. The paper is concluded by the discussion of possible types of degeneracy. |
Entropic Brain-computer Interfaces - Using fNIRS and EEG to Measure Attentional States in a Bayesian Framework | Implicit Brain-Computer Interfaces (BCI) adapt system settings subtly based on real time measures of brain activation without the user’s explicit awareness. For example, measures of the user’s cognitive workload might drive a system that alters the timing of notifications in order to minimize user interruption. Here, we consider new avenues for implicit BCI based on recent discoveries in cognitive neuroscience and conduct a series of experiments using BCI’s principal non-invasive brain sensors, fNIRS and EEG. We show how Bayesian and systems neuroscience formulations explain the difference in performance of machine learning algorithms trained on brain data in different conditions. These new formulations posit that the brain aims to minimize its long-term surprisal of sensory data and organizes its calculations on two anti-correlated networks. We consider how to use real-time input that portrays a user along these dimensions in designing Bidirectional BCIs, which are Implicit BCIs that aim to optimize the user’s state by modulating computer output based on feedback from a brain monitor. We introduce Entropic Brain-Computer Interfacing as a type of Bidirectional BCI which uses physiological measurements of information theoretical dimensions of the user’s state to evaluate the digital flow of information to the user’s brain, tweaking this output in a feedback loop to the user’s benefit. |
Toxicity, biotransformation, and mode of action of arsenic in two freshwater microalgae (Chlorella sp. and Monoraphidium arcuatum). | The toxicity of As(V) and As(III) to two axenic tropical freshwater microalgae, Chlorella sp. and Monoraphidium arcuatum, was determined using 72-h growth rate-inhibition bioassays. Both organisms were tolerant to As(III) (72-h concentrations to cause 50% inhibition of growth rate [IC50] of 25 and 15 mg As[III]/L, respectively). Chlorella sp. also was tolerant to As(V), with no effect on growth rate over 72 h at concentrations up to 0.8 mg/L (72-h IC50 of 25 mg As[V]/L). Monoraphidium arcuatum was more sensitive to As(V) (72-h IC50 of 0.25 mg As[V]/L). An increase in phosphate in the growth medium (0.15-1.5 mg PO4(3-)/L) decreased toxicity, i.e., the 72-h IC50 value for M. arcuatum increased from 0.25 mg As(V)/L to 4.5 mg As(V)/L, while extracellular As and intracellular As decreased, indicating competition between arsenate and phosphate for cellular uptake. Both microalgae reduced As(V) to As(III) in the cell, with further biological transformation to methylated species (monomethyl arsonic acid and dimethyl arsinic acid) and phosphate arsenoriboside. Less than 0.01% of added As(V) was incorporated into algal cells, suggesting that bioaccumulation and subsequent methylation was not the primary mode of detoxification. When exposed to As(V), both species reduced As(V) to As(III); however, only M. arcuatum excreted As(III) into solution. Intracellular arsenic reduction may be coupled to thiol oxidation in both species. Arsenic toxicity most likely was due to arsenite accumulation in the cell, when the ability to excrete and/or methylate arsenite was overwhelmed at high arsenic concentrations. Arsenite may bind to intracellular thiols, such as glutathione, potentially disrupting the ratio of reduced to oxidized glutathione and, consequently, inhibiting cell division. |
Assessment, Enhancement, and Verification Determinants of the Self-Evaluation Process | The 3 major self-evaluation motives were compared: self-assessment (people pursue accurate self-knowledge), self-enhancement (people pursue favorable self-knowledge), and self-verification (people pursue highly certain self-knowledge). Ss considered the possession of personality traits that were either positive or negative and either central or peripheral by asking themselves questions that varied in diagnosticity (the extent to which the questions could discriminate between a trait and its alternative) and in confirmation value (the extent to which the questions confirmed possession of a trait). Ss selected higher diagnosticity questions when evaluating themselves on central positive rather than central negative traits and confirmed possession of their central positive rather than central negative traits. The self-enhancement motive emerged as the most powerful determinant of the self-evaluation process, followed by the self-verification motive. |
Prospective trial of high-frequency oscillation in adults with acute respiratory distress syndrome. | OBJECTIVE
To evaluate the safety and efficacy of high-frequency oscillatory ventilation (HFOV) in adult patients with the acute respiratory distress syndrome (ARDS) and oxygenation failure.
DESIGN
Prospective, clinical study.
SETTING
Intensive care and burn units of two university teaching hospitals.
PATIENTS
Twenty-four adults (10 females, 14 males, aged 48.5 +/- 15.2 yrs, Acute Physiology and Chronic Health Evaluation II score 21.5 +/- 6.9) with ARDS (lung injury score 3.4 +/- 0.6, Pao2/Fio2 98.8 +/- 39.0 mm Hg, and oxygenation index 32.5 +/- 19.6) who met one of the following criteria: Pao2 < or =65 mm Hg with Fio2 > or =0.6, or plateau pressure > or =35 cm H2O.
INTERVENTIONS
HFOV was initiated in patients with ARDS after varying periods of conventional ventilation (CV). Mean airway pressure (Paw) was initially set 5 cm H2O greater than Paw during CV, and was subsequently titrated to maintain oxygen saturation between 88% and 93% and Fio2 < or =0.60.
MEASUREMENTS AND MAIN RESULTS
Fio2, Paw, pressure amplitude of oscillation, frequency, blood pressure, heart rate, and arterial blood gases were monitored during the transition from CV to HFOV, and every 8 hrs thereafter for 72 hrs. In 16 patients who had pulmonary artery catheters in place, cardiac hemodynamics were recorded at the same time intervals. Throughout the HFOV trial, Paw was significantly higher than that applied during CV. Within 8 hrs of HFOV application, and for the duration of the trial, Fio2 and Paco2 were lower, and Pao2/Fio2 was higher than baseline values during CV. Significant changes in hemodynamic variables following HFOV initiation included an increase in pulmonary artery occlusion pressure (at 8 and 40 hrs) and central venous pressure (at 16 and 40 hrs), and a reduction in cardiac output throughout the course of the study. There were no significant changes in systemic or pulmonary pressure associated with initiation and maintenance of HFOV. Complications occurring during HFOV included pneumothorax in two patients and desiccation of secretions in one patient. Survival at 30 days was 33%, with survivors having been mechanically ventilated for fewer days before institution of HFOV compared with nonsurvivors (1.6 +/- 1.2 vs. 7.8 +/- 5.8 days; p =.001).
CONCLUSIONS
These findings suggest that HFOV has beneficial effects on oxygenation and ventilation, and may be a safe and effective rescue therapy for patients with severe oxygenation failure. In addition, early institution of HFOV may be advantageous. |
Advice weaving in AspectJ | This paper describes the implementation of advice weaving in AspectJ. The AspectJ language picks out dynamic join points in a program's execution with pointcuts and uses advice to change the behavior at those join points. The core task of AspectJ's advice weaver is to statically transform a program so that at runtime it will behave according to the AspectJ language semantics. This paper describes the 1.1 implementation, which is based on transforming bytecode. We describe how AspectJ's join points are mapped to regions of bytecode, how these regions are efficiently matched by AspectJ's pointcuts, and how pieces of advice are efficiently implemented at these regions. We also discuss both run-time and compile-time performance of this implementation. |
Evidence-based target recall rates for screening mammography. | PURPOSE
To retrospectively identify target recall rates for screening mammography on the basis of how sensitivity shifts with recall rate.
MATERIALS AND METHODS
The study group included 1 872 687 subsequent and 171 104 first screening mammograms from 1996 to 2001 from 172 and 139 facilities, respectively, in six sites of the Breast Cancer Surveillance Consortium. Institutional review board (IRB) approval was obtained from each site. Informed consent requirements of the IRBs were followed. The study was HIPAA compliant. Recall rate was defined as the percentage of screening studies for which further work-up was recommended by the radiologist. Sensitivity was defined as the proportion of cancers that were detected at screening mammography. Piecewise linear regression was used to model sensitivity as a function of recall rate. This model allows detection of critical recall rates in which significant changes (shifts) occurred in the rates that sensitivity increased with increasing recall rate. Rates were interpreted as number of additional work-ups per additional cancer detected (AW/ACD) or, in other words, the estimated number of additional women needed to be recalled at a given rate to detect one additional cancer.
RESULTS
For first mammograms, a single shift in the estimated AW/ACD rate occurred at a recall rate of 10.0%, with the rate jumping dramatically from 35 to 172. For subsequent mammograms, four shifts were identified. At a recall rate of 6.7%, the estimated AW/ACD increased from 80 to 132, which rendered it the highest desirable target recall rate. At a recall rate of 12.3%, the estimated AW/ACD was 304, which suggests little benefit for any higher recall rate.
CONCLUSION
Recall rates of 10.0% for first and 6.7% for subsequent mammograms are recommended targets on the basis of their AW/ACD rates (less than 100). |
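A minimal sketch of the piecewise linear fit described in the methods above: model sensitivity as b0 + b1·recall + b2·max(recall − bp, 0) and grid-search the breakpoint bp by least squares. The data here are synthetic stand-ins (the consortium data are not reproduced), so the fitted numbers are illustrative only.

```python
import numpy as np

# Synthetic data: sensitivity rises with recall rate, then flattens past a
# breakpoint (diminishing returns, i.e., a jump in work-ups per cancer found).
rng = np.random.default_rng(0)
recall = np.sort(rng.uniform(2, 15, 200))        # recall rate, %
true_bp = 6.7
sens = 60 + 3.0 * recall - 2.4 * np.maximum(recall - true_bp, 0)
sens += rng.normal(0, 1.0, recall.size)

def fit_piecewise(x, y, bp):
    """Least-squares fit of y = b0 + b1*x + b2*max(x - bp, 0)."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((X @ beta - y) ** 2)
    return beta, rss

candidates = np.linspace(3, 14, 111)
best_bp, (beta, _) = min(
    ((bp, fit_piecewise(recall, sens, bp)) for bp in candidates),
    key=lambda t: t[1][1],                       # minimize residual sum of squares
)
print(f"estimated breakpoint: {best_bp:.2f}% recall")
print(f"slope before: {beta[1]:.2f}, after: {beta[1] + beta[2]:.2f}")
```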
Electromyographic analysis of specific exercises for scapular control in early phases of shoulder rehabilitation. | BACKGROUND
Restoration of control of dynamic scapular motion by specific activation of the serratus anterior and lower trapezius muscles is an important part of functional rehabilitation. This study evaluated activation of those muscles in specific exercises.
HYPOTHESIS
Specific exercises will activate key scapular-stabilizing muscles in clinically significant amplitudes and patterns.
STUDY DESIGN
Controlled laboratory study.
METHODS
Muscle activation amplitudes and patterns were evaluated in the serratus anterior, upper trapezius, lower trapezius, anterior deltoid, and posterior deltoid muscles with electromyography in symptomatic (n = 18) and asymptomatic (n = 21) subjects as they executed the low row, inferior glide, lawnmower, and robbery exercises.
RESULTS
There were no significant differences in muscle activation amplitude between groups. Muscle activation was moderate across all of the exercises and varied slightly with the specific exercise. The serratus anterior and lower trapezius were activated between 15% and 30% in all exercises. Upper trapezius activation was high (21%-36%) in the dynamic exercises (lawnmower and robbery). Serratus anterior was activated first in the low row and last in the lawnmower and robbery. The upper trapezius and lower trapezius were activated first in the lawnmower and robbery.
CONCLUSION
These specific exercises activate key scapular-stabilizing muscles at amplitudes that are known to increase muscle strength.
CLINICAL RELEVANCE
These exercises can be used as part of a comprehensive rehabilitation program for restoration of shoulder function. They activate the serratus anterior and lower trapezius-key muscles in dynamic shoulder control-while variably activating the upper trapezius. Activation patterns depended on scapular position resulting in variability of amplitude and activation sequencing between exercises. Inferior glide and low row can be performed early in rehabilitation because of their limited range of motion, while lawnmower and robbery, which require larger movements, can be instituted later in the sequence. |
Differential effects of lercanidipine and nifedipine GITS on plasma norepinephrine in chronic treatment of hypertension. | This study aimed to compare the effects of two long-acting dihydropyridine calcium channel blockers (CCBs) with different pharmacologic properties, lercanidipine and nifedipine Gastro-Intestinal Therapeutic System (GITS), in the chronic treatment of essential hypertension. After a 4-week placebo run-in period, 60 patients of both sexes were randomly treated with lercanidipine 10 to 20 mg or nifedipine GITS 30 to 60 mg taken orally for 48 weeks, according to a double-blind, parallel group design. For the first 4 weeks of treatment, the lowest dose of each drug was used, followed by higher doses if diastolic blood pressure (BP) was >90 mm Hg. At the end of the placebo period and after 4, 8, 12, 24, and 48 weeks of active treatment BP, heart rate (HR), and plasma norepinephrine (NE) levels were assessed. Lercanidipine and nifedipine GITS similarly reduced BP values after 48 weeks (-21.7/15.9 mm Hg and -20.7/14.6 mm Hg, respectively, both P <.001 v placebo), with no change in HR. Despite the similar lack of effect on HR, the two drugs displayed different influences on plasma NE, which was significantly increased by nifedipine GITS (+56 pg/mL, P <.05 v placebo) but not by lercanidipine. These findings suggest that 1) sympathetic activation occurs during chronic therapy with nifedipine GITS but not with lercanidipine, which might be related to the different pharmacologic characteristics of the two CCBs at the doses evaluated; and 2) nifedipine GITS seems to activate peripheral but not cardiac sympathetic nerves, consistent with differing regulation of cardiac and peripheral sympathetic activity. |
Asthma-like symptoms prevalence in five Turkish urban centers. | BACKGROUND
Asthma, which is a chronic inflammatory disorder of the airways characterized by the infiltration of inflammatory cells, is a common cause of morbidity in adults. It is among the leading causes of preventable hospitalization in developed countries and accounts for millions of visits to emergency departments.
METHODS
In this study, we aimed to determine asthma prevalence in five urban centers in Turkey. Three of the cities were located in the mid-western region of Anatolia, one was located on the Mediterranean coast, and the last one was in the northern part of the country. Data from a total of 2353 participants were collected by trained interviewers, who visited the households and administered the questionnaire to household members at or over the age of 15 years.
RESULTS
The prevalence of asthma was found to be 6.6%, and the difference in asthma prevalence between the urban centers was statistically non-significant (p = 0.059). |
Vision-based Human Gender Recognition: A Survey | Gender is an important demographic attribute of people. This paper provides a survey of human gender recognition in computer vision. A review of approaches exploiting information from the face and the whole body (either from a still image or a gait sequence) is presented. We highlight the challenges faced and survey the representative methods of these approaches. Based on the results, good performance has been achieved for datasets captured under controlled environments, but there is still much work that can be done to improve the robustness of gender recognition under real-life environments. |
Data for Development: the D4D Challenge on Mobile Phone Data | The Orange "Data for Development" (D4D) challenge is an open data challenge on anonymous call patterns of Orange's mobile phone users in Ivory Coast. The goal of the challenge is to help address society development questions in novel ways by contributing to the socio-economic development and well-being of the Ivory Coast population. Participants in the challenge are given access to four mobile phone datasets, and the purpose of this paper is to describe them. The website http://www.d4d.orange.com contains more information about the participation rules. The datasets are based on anonymized Call Detail Records (CDR) of phone calls and SMS exchanges between five million of Orange's customers in Ivory Coast between December 1, 2011 and April 28, 2012. The datasets are: (a) antenna-to-antenna traffic on an hourly basis, (b) individual trajectories for 50,000 customers for two-week time windows with antenna location information, (c) individual trajectories for 500,000 customers over the entire observation period with sub-prefecture location information, and (d) a sample of communication graphs for 5,000 customers. The geofast web interface (www.geofast.net) allows visualisation of mobile phone communications (countries available: France, Belgium, Ivory Coast). |
Critical Success Factors to Improve the Game Development Process from a Developer’s Perspective | The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process. |
Text Mining For Information Systems Researchers: An Annotated Topic Modeling Tutorial | Analysts have estimated that more than 80 percent of today’s data is stored in unstructured form (e.g., text, audio, image, video)—much of it expressed in rich and ambiguous natural language. Traditionally, to analyze natural language, one has used qualitative data-analysis approaches, such as manual coding. Yet, the size of text data sets obtained from the Internet makes manual analysis virtually impossible. In this tutorial, we discuss the challenges encountered when applying automated text-mining techniques in information systems research. In particular, we showcase how to use probabilistic topic modeling via Latent Dirichlet allocation, an unsupervised text-mining technique, with a LASSO multinomial logistic regression to explain user satisfaction with an IT artifact by automatically analyzing more than 12,000 online customer reviews. For fellow information systems researchers, this tutorial provides guidance for conducting text-mining studies on their own and for evaluating the quality of others. |
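A compact sketch of the tutorial's pipeline using scikit-learn, assuming toy review texts and binary satisfaction labels (the tutorial analyzes more than 12,000 real reviews with a multinomial LASSO; a binary L1-penalized logistic regression stands in here): unsupervised LDA yields per-document topic proportions, which then feed an L1-regularized classifier so that only topics predictive of satisfaction keep nonzero coefficients.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for online customer reviews and satisfaction labels.
reviews = [
    "great battery life and fast shipping",
    "screen cracked support was unhelpful",
    "fast delivery excellent screen quality",
    "battery died support never replied",
] * 25
labels = [1, 0, 1, 0] * 25

# Step 1: bag-of-words counts, then unsupervised LDA topic proportions.
counts = CountVectorizer(stop_words="english").fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
topic_props = lda.fit_transform(counts)      # shape: documents x topics

# Step 2: L1-penalized (LASSO-style) logistic regression on topic proportions;
# the sparsity penalty zeroes out topics that do not predict satisfaction.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(topic_props, labels)
print("topic coefficients:", clf.coef_.round(2))
```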
Some key differences between a happy life and a meaningful life | Being happy and finding life meaningful overlap, but there are important differences. A large survey revealed multiple differing predictors of happiness (controlling for meaning) and meaningfulness (controlling for happiness). Satisfying one’s needs and wants increased happiness but was largely irrelevant to meaningfulness. Happiness was largely present oriented, whereas meaningfulness involves integrating past, present, and future. For example, thinking about future and past was associated with high meaningfulness but low happiness. Happiness was linked to being a taker rather than a giver, whereas meaningfulness went with being a giver rather than a taker. Higher levels of worry, stress, and anxiety were linked to higher meaningfulness but lower happiness. Concerns with personal identity and expressing the self contributed to meaning but not happiness. We offer brief composite sketches of the unhappy but meaningful life and of the happy but meaningless life. |
Ethical considerations for short-term experiences by trainees in global health. | Academic global health programs are burgeoning. According to a recent review of the Web sites of 129 accredited MD-granting US medical schools and their parent universities, almost half (60; 47%) have established initiatives, institutes, centers, or offices for global health. These programs announce goals that include reducing disparities in global health through a combination of research, education, and service. In part responding to student demand and enthusiasm, many programs provide short-term training and service experiences in resource-limited settings. Nevertheless, there are important ethical considerations inherent to sending individuals from resource-replete settings for training and service experiences in resource-limited settings. However, unlike clinical research conducted across international borders, which has attracted considerable attention in the lay and scholarly literature, much less attention has been given to ethical issues associated with education and service initiatives of global health programs. We describe some of these issues so they can be addressed explicitly by those engaged in global health education and service initiatives to facilitate the goals of providing medical students, residents, and other trainees in disciplines related to global health the opportunity for international experience while minimizing unintended adverse consequences. |
POP: Privacy-Preserving Outsourced Photo Sharing and Searching for Mobile Devices | Facing a large number of personal photos and the limited resources of mobile devices, the cloud plays an important role in photo storing, sharing and searching. Meanwhile, recent reputation damage and stalking events caused by photo leakage have increased people's concern about photo privacy. Though most would agree that photo search functionality and privacy are both valuable, few cloud systems support both of them simultaneously. The center of such an ideal system is privacy-preserving outsourced image similarity measurement, which is extremely challenging when the cloud is untrusted and high extra overhead is undesirable. In this work, we introduce a framework, POP, which enables privacy-seeking mobile device users to outsource burdensome photo sharing and searching safely to untrusted servers. Unauthorized parties, including the server, learn nothing about photos or search queries. This is achieved by our carefully designed architecture and novel non-interactive privacy-preserving protocols for image similarity computation. Our framework is compatible with state-of-the-art image search techniques, and it requires few changes to existing cloud systems. For efficiency and good user experience, our framework allows users to define personalized private content by a simple check-box configuration and then enjoy the sharing and searching services as usual. All privacy protection modules are transparent to users. The evaluation of our prototype implementation with 31,772 real-life images shows little extra communication and computation overhead caused by our system. |
Templates in Chess Memory: A Mechanism for Recalling Several Boards | This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly. |
Multimodal speech recognition: increasing accuracy using high speed video data | To date, multimodal speech recognition systems based on the processing of audio and video signals show significantly better results than their unimodal counterparts. In general, researchers divide the solution of the audio-visual speech recognition problem into two parts: first, extracting the most informative features from each modality, and second, fusing the two modalities in the most effective way. Ultimately, this leads to an improvement in the accuracy of speech recognition. Almost all modern studies use this approach with video data at a standard recording speed of 25 frames per second. The choice of such a recording speed is easily explained, since the vast majority of existing audio-visual databases are recorded at this rate. However, it should be noted that 25 frames per second is a world standard for many areas and has never been specifically calculated for speech recognition tasks. The main purpose of this study is to investigate the effect of high-speed video data (up to 200 frames per second) on speech recognition accuracy, and to find out whether the use of a high-speed video camera makes speech recognition systems more robust to acoustic noise. To this end, we recorded a database of audio-visual Russian speech with high-speed video recordings, consisting of records of 20 speakers, each pronouncing 200 phrases of continuous Russian speech. Experiments performed on this database showed an improvement in the absolute speech recognition rate of up to 3.10%. We also proved that the use of a high-speed camera at 200 fps allows achieving better recognition results under different acoustically noisy conditions (signal-to-noise ratio varied between 40 and 0 dB) with different types of noise (e.g. white noise, babble noise). |
Estimation of quantile mixtures via L-moments and trimmed L-moments | Moments or cumulants have been traditionally used to characterize a probability distribution or an observed data set. Recently, L-moments and trimmed L-moments have been noticed as appealing alternatives to the conventional moments. This paper promotes the use of L-moments by proposing new parametric families of distributions that can be estimated by the method of L-moments. The theoretical L-moments are defined by the quantile function, i.e., the inverse of the cumulative distribution function. An approach for constructing parametric families from quantile functions is presented. Because of the analogy to mixtures of densities, this class of parametric families is called quantile mixtures. The method of L-moments is a natural way to estimate the parameters of quantile mixtures. As an example, two parametric families are introduced: the normal-polynomial quantile mixture and the Cauchy-polynomial quantile mixture. The proposed quantile mixtures are applied to model monthly, weekly and daily returns of some major stock indexes. |
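Sample L-moments are linear combinations of probability-weighted moments of the order statistics. A minimal sketch follows (the trimmed variants and the quantile-mixture fitting step are omitted), applied to made-up heavy-tailed "returns" rather than real index data:

```python
import numpy as np
from math import comb

def sample_l_moments(x, nmom=4):
    """Unbiased sample L-moments l_1..l_4 via probability-weighted moments:
    b_k = n^-1 * sum_i [C(i-1,k)/C(n-1,k)] * x_(i), for sorted x, i = 1..n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b = [np.mean([comb(i, k) / comb(n - 1, k) * x[i] for i in range(n)])
         for k in range(nmom)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000) * 0.01    # heavy-tailed toy returns
l1, l2, l3, l4 = sample_l_moments(returns)
print(f"L-location {l1:.5f}, L-scale {l2:.5f}, "
      f"L-skewness {l3 / l2:.3f}, L-kurtosis {l4 / l2:.3f}")
```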
The motivations and experiences of the on-demand mobile workforce | On-demand mobile workforce applications match physical world tasks and willing workers. These systems offer to help conserve resources, streamline courses of action, and increase market efficiency for micro- and mid-level tasks, from verifying the existence of a pothole to walking a neighbor's dog. This study reports on the motivations and experiences of individuals who regularly complete physical world tasks posted in on-demand mobile workforce marketplaces. Data collection included semi-structured interviews with members (workers) of two different services. The analysis revealed the main drivers for participating in an on-demand mobile workforce, including desires for monetary compensation and control over schedules and task selection. We also reveal main reasons for task selection, which involve situational factors, convenient physical locations, and task requester profile information. Finally, we discuss the key characteristics of the most worthwhile tasks and offer implications for novel crowdsourcing systems for physical world tasks. |
Levodopa alone and in combination with a peripheral decarboxylase inhibitor benserazide (madopar®) in the treatment of Parkinson's disease | A combination of levodopa and the extracerebrally acting decarboxylase inhibitor benserazide (ratio 4:1; Madopar®) was compared with levodopa alone in a controlled double-blind clinical multicenter trial on 94 patients with Parkinson's disease. During 4 months of therapy, levodopa + benserazide proved superior to levodopa on several accounts. Nausea and vomiting occurred with statistically significantly less severity and frequency. Clinical improvement, expressed through improvement in Webster rating, occurred sooner and was altogether greater. The treatment schedules did not differ with regard to other side effects, in particular involuntary movements and reduction in supine blood pressure. Neither treatment seemed to influence liver function, renal function or hematological parameters. |
I Spot a Bot: Building a binary classifier to detect bots on Twitter | It has been estimated that up to 50% of the activity on Twitter comes from bots [1]: algorithmically automated accounts created to advertise products, distribute spam, or sway public opinion. It is perhaps this last intent that is most alarming; studies have found that up to 20% of the Twitter activity related to the 2016 U.S. presidential election came from suspected bot accounts, and there has been evidence of bots used to spread false rumors about French presidential candidate Emmanuel Macron and to escalate a recent conflict in Qatar [1]. Detecting bots is necessary in order to identify bad actors in the "Twitterverse" and protect genuine users from misinformation and malicious intent. This has been an area of research for several years, but current algorithms still lag behind humans in performance [2]. Given a Twitter user's profile and tweet history, our project was to build a binary classifier that identifies a given user as "bot" or "human." The end-user application for a classifier such as this one would be a web plug-in for the browser that can score a given account in real time (see page 5 for mock-ups). All of the raw inputs required to classify a public Twitter account via our algorithm are available for download from the Twitter API; in fact, our check_screenname.py program is a working prototype that uses the API to classify a given Twitter user handle within seconds. It is our opinion that a product like this is sorely needed for the average Twitter consumer.
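The abstract does not enumerate the features used, so the sketch below trains a logistic-regression classifier on a few hypothetical profile-derived features; both the feature set and the "bot" labeling rule are synthetic placeholders, not the project's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-account features: [followers/friends ratio, tweets per day,
# fraction of tweets with URLs, account age]; values and labels are synthetic.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (0.8 * X[:, 1] + X[:, 2] > 0.9).astype(int)  # made-up "bot" rule for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```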
A multilayer ontology-based hybrid recommendation model | We propose a novel hybrid recommendation model in which user preferences and item features are described in terms of semantic concepts defined in domain ontologies. The concept, item and user spaces are clustered in a coordinated way, and the resulting clusters are used to find similarities among individuals at multiple semantic layers. Such layers correspond to implicit Communities of Interest, and enable enhanced recommendations. |
An Efficient & Secure Content Contribution and Retrieval content in Online Social Networks using Level-level Security Optimization & Content Visualization Algorithm | Online Social Networks (OSNs) are currently a popular interactive medium for establishing communication and for sharing and disseminating a considerable amount of human life data. Daily and continuous communications imply the exchange of several types of content, including free text, image, audio, and video data. Security is one of the friction points that emerge when communications are mediated in Online Social Networks (OSNs). However, no content-based preferences are supported, and therefore it is not possible to prevent undesired messages. Providing this service is not only a matter of using previously defined web content mining and security techniques. To overcome these issues, a Level-level Security Optimization & Content Visualization Algorithm is proposed to avoid privacy issues during content sharing and data visualization. It adopts level-by-level privacy based on user requirements in the social network. It evaluates privacy compatibility in the online social network environment to avoid security complexities. The mechanism is divided into parts, namely online social network platform creation, social network privacy, privacy within the organizational network, and network control and authentication. Based on the experimental evaluation, the proposed method improves privacy retrieval accuracy (PRA) by 9.13% and reduces content retrieval time (CRT) by 7 milliseconds and information loss (IL) by 5.33%.
Real Time Implementation of RTOS based Vehicle Tracking System | A vehicle or fleet management system is implemented for tracking the movement of a vehicle at any time from any location. The proposed system enables real-time tracking of the vehicle using a smartphone application, and this method is easy and efficient compared with other implementations. With the emergence of the IoT (Internet of Things), generic 8-bit/16-bit microcontrollers are being replaced by 32-bit microcontrollers in embedded systems, bringing advantages such as scalability, reusability and faster execution speed. Implementation of an RTOS is necessary for a real-time system; RTOS features include application portability, reusability and more efficient use of system resources. The proposed system uses a 32-bit ARM7-based microcontroller with an embedded Real Time Operating System (RTOS). The vehicle unit application is written on FreeRTOS, and peripheral drivers such as UART and external interrupt are developed for an RTOS-aware environment. The vehicle unit consists of a GPS/GPRS module: the position of the vehicle is obtained from the Global Positioning System (GPS), and the General Packet Radio Service (GPRS) is used to send timely updates of the vehicle's position. The vehicle unit updates the location to the fleet management application on the web server. The vehicle management application is a Java-based web application integrated with a MySQL database and is based on the OpenGTS open-source vehicle tracking application. A GoTrack Android application is configured to work with the web application. The smartphone application also provides a separate login for administrators to add, edit and remove vehicles on the fleet management system, and both users and administrators can continuously monitor the vehicle.
Disorders of the Cerebellum: Ataxia, Dysmetria of Thought, and the Cerebellar Cognitive Affective Syndrome |
Self-report Depression Scales-Reply | In Reply.— Mr Zimmerman has some exciting ideas for a self-report instrument to assess major depression. We have attempted to try out his ideas with some of the data from the 1975 to 1976 New Haven (Conn) Health Survey, although we did not have all the information suggested. Using items from the CES-D, the Symptom Checklist-90 (SCL-90), the Social Adjustment Self-report Questionnaire, and some questions on suicidal feelings, we have simulated a self-report instrument that has better face validity than the traditional CES-D for the detection of RDC-diagnosed major depression. In this reanalysis of the data we followed two principles suggested by Zimmerman. First, we used self-report items that are directly parallel to specific criteria in the RDC definition of major depression, and second, we counted only those items that were scored above a given threshold. Specifically, we considered criterion A of the RDC definition of major depression (dysphoric mood) to be
A-life, Organism and Body: the semiotics of emergent levels | This paper comments upon some of the open problems in artificial life (cf. Bedeau et al 2000) from the perspective of a philosophy of biology tradition called qualitative organicism, and more specifically the emerging field of biosemiotics, the study of life processes as sign processes. Semiotics, in the sense of the pragmaticist philosopher and scientist Charles S. Peirce, is the general study of signs, and biosemiotics attempts to provide a new ground for understanding the nature of molecular information processing and sign processes at higher levels as well. Although we should not expect to find in Peirce any answers to the theoretical challenges and open questions posed by ‘Wet’ Artificial Life, his semiotics (along with emergentist theories and cyborg studies) provides inspiration and conceptual tools to deal with the problems of life, mind, and information in the
Quality and reliability investigation of Ni/Sn transient liquid phase bonding technology | Submicron Ni/Sn transient liquid phase bonding at low temperature was investigated to overcome today's fine-pitch Cu/Sn process challenges. After the bonding process, only a uniform and high-temperature-stable Ni3Sn4 intermetallic compound remained. In addition, this scheme showed excellent electrical performance, reliability, and mechanical strength.
Hypoascorbemia induces atherosclerosis and vascular deposition of lipoprotein(a) in transgenic mice. | Lipoprotein(a), a variant of LDL carrying the adhesive glycoprotein apo(a), is a leading risk factor for cardiovascular disease. Lipoprotein(a) (Lp(a)) is found in humans and subhuman primates but rarely in lower mammals. Better understanding of the evolutionary advantage of this molecule should elucidate its physiological role. We developed a new mouse model with two characteristics of human metabolism: the expression of Lp(a) and the lack of endogenous ascorbate (vitamin C) production. We show that dietary deficiency of ascorbate increases serum levels of Lp(a). Moreover, chronic hypoascorbemia and complete depletion of ascorbate (scurvy) leads to Lp(a) accumulation in the vascular wall and parallels atherosclerotic lesion development. The results suggest that dietary ascorbate deficiency is a risk factor for atherosclerosis independent of dietary lipids. We provide support for the concept that Lp(a) functions as a mobile repair molecule compensating for the structural impairment of the vascular wall, a morphological hallmark of hypoascorbemia and scurvy. |
CEM-RL: Combining evolutionary and gradient-based methods for policy search | Deep neuroevolution and deep reinforcement learning (deep RL) algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstable and highly sensitive to hyper-parameter setting. So far, these families of methods have mostly been compared as competing tools. However, an emerging approach consists in combining them so as to get the best of both worlds. Two previously existing combinations use either an ad hoc evolutionary algorithm or a goal exploration process together with the Deep Deterministic Policy Gradient (ddpg) algorithm, a sample efficient off-policy deep RL algorithm. In this paper, we propose a different combination scheme using the simple cross-entropy method (cem) and Twin Delayed Deep Deterministic policy gradient (td3), another off-policy deep RL algorithm which improves over ddpg. We evaluate the resulting method, cem-rl, on a set of benchmarks classically used in deep RL. We show that cem-rl benefits from several advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency. |
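For orientation, the sketch below shows only the evolutionary half of cem-rl: a plain cross-entropy method loop on a toy fitness function. In the full algorithm, part of each sampled population would additionally be improved by td3 gradient steps before evaluation; that RL side is omitted here, and all hyper-parameters are illustrative.

```python
import numpy as np

def cem(fitness, dim, pop_size=50, elite_frac=0.2, iters=100, seed=0):
    """Cross-entropy method: sample a Gaussian population, keep the elite,
    refit the Gaussian to the elite, and repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop_size * elite_frac)
    for _ in range(iters):
        pop = mu + sigma * rng.standard_normal((pop_size, dim))
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]     # best individuals
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu

# Toy fitness: negative squared distance to a target parameter vector.
target = np.arange(5.0)
print(cem(lambda p: -np.sum((p - target) ** 2), dim=5).round(2))
```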
Solar fuels via artificial photosynthesis. | Because sunlight is diffuse and intermittent, substantial use of solar energy to meet humanity's needs will probably require energy storage in dense, transportable media via chemical bonds. Practical, cost effective technologies for conversion of sunlight directly into useful fuels do not currently exist, and will require new basic science. Photosynthesis provides a blueprint for solar energy storage in fuels. Indeed, all of the fossil-fuel-based energy consumed today derives from sunlight harvested by photosynthetic organisms. Artificial photosynthesis research applies the fundamental scientific principles of the natural process to the design of solar energy conversion systems. These constructs use different materials, and researchers tune them to produce energy efficiently and in forms useful to humans. Fuel production via natural or artificial photosynthesis requires three main components. First, antenna/reaction center complexes absorb sunlight and convert the excitation energy to electrochemical energy (redox equivalents). Then, a water oxidation complex uses this redox potential to catalyze conversion of water to hydrogen ions, electrons stored as reducing equivalents, and oxygen. A second catalytic system uses the reducing equivalents to make fuels such as carbohydrates, lipids, or hydrogen gas. In this Account, we review a few general approaches to artificial photosynthetic fuel production that may be useful for eventually overcoming the energy problem. A variety of research groups have prepared artificial reaction center molecules. These systems contain a chromophore, such as a porphyrin, covalently linked to one or more electron acceptors, such as fullerenes or quinones, and secondary electron donors. Following the excitation of the chromophore, photoinduced electron transfer generates a primary charge-separated state. Electron transfer chains spatially separate the redox equivalents and reduce electronic coupling, slowing recombination of the charge-separated state to the point that catalysts can use the stored energy for fuel production. Antenna systems, employing a variety of chromophores that absorb light throughout the visible spectrum, have been coupled to artificial reaction centers and have incorporated control and photoprotective processes borrowed from photosynthesis. Thus far, researchers have not discovered practical solar-driven catalysts for water oxidation and fuel production that are robust and use earth-abundant elements, but they have developed artificial systems that use sunlight to produce fuel in the laboratory. For example, artificial reaction centers, where electrons are injected from a dye molecule into the conduction band of nanoparticulate titanium dioxide on a transparent electrode, coupled to catalysts, such as platinum or hydrogenase enzymes, can produce hydrogen gas. Oxidizing equivalents from such reaction centers can be coupled to iridium oxide nanoparticles, which can oxidize water. This system uses sunlight to split water to oxygen and hydrogen fuel, but efficiencies are low and an external electrical potential is required. Although attempts at artificial photosynthesis fall short of the efficiencies necessary for practical application, they illustrate that solar fuel production inspired by natural photosynthesis is achievable in the laboratory. More research will be needed to identify the most promising artificial photosynthetic systems and realize their potential. |
Aging, Obsolescence and Organizational Innovation | Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulty of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on a ubiquitous organizational process, aging, and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. Now consider the …
A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion | This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka–Łojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.
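To make the block-update scheme concrete, here is a hedged sketch of two-block coordinate descent with projected-gradient (proximal) updates applied to nonnegative matrix factorization, one of the paper's test problems; the step sizes use per-block Lipschitz constants, and the problem sizes are arbitrary.

```python
import numpy as np

def bcd_nmf(M, r, iters=500, seed=0):
    """Block coordinate descent for min 0.5*||M - W H||_F^2 s.t. W, H >= 0.
    The problem is multiconvex: convex in W for fixed H and vice versa."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        Lw = np.linalg.norm(H @ H.T, 2)                    # Lipschitz const. of grad_W
        W = np.maximum(W - ((W @ H - M) @ H.T) / Lw, 0.0)  # prox step on block W
        Lh = np.linalg.norm(W.T @ W, 2)                    # Lipschitz const. of grad_H
        H = np.maximum(H - (W.T @ (W @ H - M)) / Lh, 0.0)  # prox step on block H
    return W, H

M = np.abs(np.random.default_rng(1).standard_normal((20, 15)))
W, H = bcd_nmf(M, r=4)
print("relative error:", np.linalg.norm(M - W @ H) / np.linalg.norm(M))
```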
Towards large-scale twitter mining for drug-related adverse events | Drug-related adverse events pose substantial risks to patients who consume post-market or investigational drugs. Early detection of adverse events benefits not only the drug regulators, but also the manufacturers for pharmacovigilance. Existing methods rely on patients' "spontaneous" self-reports that attest problems. The increasing popularity of social media platforms like Twitter presents us with a new information source for finding potential adverse events. Given the high frequency of user updates, mining Twitter messages can lead us to real-time pharmacovigilance. In this paper, we describe an approach to find drug users and potential adverse events by analyzing the content of Twitter messages using Natural Language Processing (NLP) and building Support Vector Machine (SVM) classifiers. Due to the size of the dataset (i.e., 2 billion tweets), the experiments were conducted on a High Performance Computing (HPC) platform using MapReduce, reflecting the trend toward big data analytics. The results suggest that daily-life social networking data could help early detection of important patient safety issues.
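As a hedged illustration of the classification step alone (the paper's actual NLP feature extraction and HPC/MapReduce pipeline are not reproduced), a linear SVM over TF-IDF n-grams could be set up as follows; the tweets and labels are invented placeholders:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Placeholder tweets; label 1 marks a potential adverse-event mention.
tweets = ["this med gave me a terrible headache",
          "loving the sunny weather today",
          "felt dizzy after the new prescription",
          "coffee time with friends"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["nausea since starting these pills"]))
```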
Conditional generative adversarial nets for convolutional face generation | We apply an extension of generative adversarial networks (GANs) [8] to a conditional setting. In the GAN framework, a “generator” network is tasked with fooling a “discriminator” network into believing that its own samples are real data. We add the capability for each network to condition on some arbitrary external data which describes the image being generated or discriminated. By varying the conditional information provided to this extended GAN, we can use the resulting generative model to generate faces with specific attributes from nothing but random noise. We evaluate the likelihood of real-world faces under the generative model, and examine how to deterministically control face attributes by modifying the conditional information provided to the model. |
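A minimal sketch of the conditioning mechanism, with fully connected networks standing in for the convolutional architectures used for faces and with all sizes chosen arbitrarily: both generator and discriminator receive the attribute vector by concatenation.

```python
import torch
import torch.nn as nn

NOISE, COND, IMG = 100, 10, 64 * 64   # latent, condition, flattened image sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE + COND, 256), nn.ReLU(),
                                 nn.Linear(256, IMG), nn.Tanh())
    def forward(self, z, c):
        # condition on attributes c by concatenating them to the noise z
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG + COND, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1), nn.Sigmoid())
    def forward(self, x, c):
        # the discriminator sees the same conditional information
        return self.net(torch.cat([x, c], dim=1))

G, D = Generator(), Discriminator()
z, c = torch.randn(8, NOISE), torch.randn(8, COND)
print(D(G(z, c), c).shape)  # torch.Size([8, 1])
```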
Long-term treadmill exercise improves spatial memory of male APPswe/PS1dE9 mice by regulation of BDNF expression and microglia activation. | Increasing evidence suggests that physical activity could delay or attenuate the symptoms of Alzheimer's disease (AD). But the underlying mechanisms are still not fully understood. To investigate the effect of long-term treadmill exercise on the spatial memory of AD mice and the possible role of β-amyloid, brain-derived neurotrophic factor (BDNF) and microglia in the effect, male APPswe/PS1dE9 AD mice aged 4 months were subjected to treadmill exercise for 5 months with 6 sessions per week and gradually increased load. A Morris water maze was used to evaluate the spatial memory. Expression levels of β-amyloid, BDNF and Iba-1 (a microglia marker) in brain tissue were detected by immunohistochemistry. Sedentary AD mice and wildtype C57BL/6J mice served as controls. The results showed that 5-month treadmill exercise significantly decreased the escape latencies (P < 0.01 on the 4th day) and improved the spatial memory of the AD mice in the water maze test. Meanwhile, treadmill exercise significantly increased the number of BDNF-positive cells and decreased the ratios of activated microglia in both the cerebral cortex and the hippocampus. However, treadmill exercise did not significantly alleviate the accumulation of β-amyloid in either the cerebral cortex or the hippocampus of the AD mice (P > 0.05). The study suggested that long-term treadmill exercise could improve the spatial memory of the male APPswe/PS1dE9 AD mice. The increase in BDNF-positive cells and decrease in activated microglia might underpin the beneficial effect. |
Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention | Classifying the semantic relationship between two entities in a sentence is termed Relation Extraction (RE). RE from entity mentions is an important step in various Natural Language Processing tasks, such as knowledge base construction and question answering. Supervised methods have been successful on the relation extraction task [2, 18]. However, the extensive training data necessary for supervised learning is expensive to obtain and therefore restrictive in a Web-scale relation extraction task. To overcome this challenge, [6] proposed a Distant Supervision (DS) method for relation extraction that helps automatically generate new training data by taking an intersection between a text corpus and a knowledge base. However, the DS assumption is too strong, and may introduce noise such as false negative samples due to missing facts in the knowledge base. In order to address this challenge, DS has been modeled as a Multi-Instance Multi-Label (MIML) problem [14]. More recently, neural models for DS have been proposed [17, 12]. In this paper, we define ‘instance’ as a sentence containing an entity pair, and ‘instance set’ as a set of sentences containing the same entity pair.
Room temperature ionic liquid as a novel medium for liquid/liquid extraction of metal ions | Room temperature ionic liquids (RTILs) have been used as novel solvents to replace traditional volatile organic solvents in organic synthesis, solvent extraction, and electrochemistry. The hydrophobic character and water immiscibility of certain ionic liquids allow their use in solvent extraction of hydrophobic compounds. In this work, a typical room temperature ionic liquid, 1-butyl-3-methylimidazolium hexafluorophosphate [C4mim][PF6], was used as an alternative solvent to study liquid/liquid extraction of heavy metal ions. Dithizone was employed as a metal chelator to form neutral metal–dithizone complexes with heavy metal ions to extract metal ions from aqueous solution into [C4mim][PF6]. This extraction is possible due to the high distribution ratios of the metal complexes between [C4mim][PF6] and the aqueous phase. Since the distribution ratios of metal dithizonates between [C4mim][PF6] and the aqueous phase are strongly pH dependent, the extraction efficiencies of metal complexes can be manipulated by tailoring the pH value of the extraction system. Hence, the extraction, separation, and preconcentration of heavy metal ions with the biphasic system of [C4mim][PF6] and aqueous phase can be achieved by controlling the pH value of the extraction system. Preliminary results indicate that the use of [C4mim][PF6] as an alternative solvent to replace traditional organic solvents in liquid/liquid extraction of heavy metal ions is very promising.
Update: clinically significant cytochrome P-450 drug interactions. | Recent technologies have resulted in an explosion of information concerning the cytochrome P-450 isoenzymes and increased awareness of life-threatening interactions with such commonly prescribed drugs as cisapride and some antihistamines. Knowledge of the substrates, inhibitors, and inducers of these enzymes assists in predicting clinically significant drug interactions. In addition to inhibition and induction, microsomal drug metabolism is affected by genetic polymorphisms, age, nutrition, hepatic disease, and endogenous chemicals. Of the more than 30 human isoenzymes identified to date, the major ones responsible for drug metabolism include CYP3A4, CYP2D6, CYP1A2, and the CYP2C subfamily. |
Linguistic analysis of differences in portrayal of movie characters | • Movies alter societal thinking patterns in previously unexplored social phenomena, by exposing the individual to what is shown on screen as the “norm”
• Typical studies focus on audio/video modalities to estimate differences along factors such as gender
• Linguistic analysis provides complementary information to the audio/video-based analytics
• We examine differences across gender, race and age
Turbo Learning Framework for Human-Object Interactions Recognition and Human Pose Estimation | Human-object interaction (HOI) recognition and pose estimation are two closely related tasks. Human pose is an essential cue for recognizing actions and localizing the interacted objects. Meanwhile, human actions and the localizations of their interacted objects provide guidance for pose estimation. In this paper, we propose a turbo learning framework to perform HOI recognition and pose estimation simultaneously. First, two modules are designed to enforce message passing between the tasks, i.e., a pose-aware HOI recognition module and an HOI-guided pose estimation module. Then, these two modules form a closed loop to utilize the complementary information iteratively, which can be trained in an end-to-end manner. The proposed method achieves state-of-the-art performance on two public benchmarks including Verbs in COCO (V-COCO) and HICO-DET datasets.
Can Computers Overcome Humans? Consciousness Interaction and its Implications | Can computers overcome human capabilities? This is a paradoxical and controversial question, particularly because there are many hidden assumptions. This article focuses on that issue, highlighting some misconceptions related to future generations of machines and the understanding of the brain. It discusses to what extent computers might reach human capabilities, and how that would be possible only if the computer is a conscious machine. However, it is shown that if the computer is conscious, an interference process due to consciousness would affect the information processing of the system. Therefore, it might be possible to make conscious machines that overcome human capabilities, but they would have limitations similar to those of humans. In other words, trying to overcome human capabilities with computers implies the paradoxical conclusion that a computer will never overcome human capabilities at all, or, if it does, it should no longer be considered a computer.
Findings of the Third Shared Task on Multimodal Machine Translation | We present the results from the third shared task on multimodal machine translation. In this task a source sentence in English is supplemented by an image and participating systems are required to generate a translation for such a sentence into German, French or Czech. The image can be used in addition to (or instead of) the source sentence. This year the task was extended with a third target language (Czech) and a new test set. In addition, a variant of this task was introduced with its own test set where the source sentence is given in multiple languages: English, French and German, and participating systems are required to generate a translation in Czech. Seven teams submitted 45 different systems to the two variants of the task. Compared to last year, the performance of the multimodal submissions improved, but text-only systems remain competitive. |
Disadvantage, inequality, and social policy. | Eliminating disparities in health is a primary goal of the federal government and many states. Our overarching objective should be to improve population health for all groups to the maximum extent. Ironically, enhancing population health and even the health of the disadvantaged can conflict with efforts to reduce disparities. This paper presents data showing that interventions that offer some of the largest possible gains for the disadvantaged may also increase disparities, and it examines policies that offer the potential to decrease disparities while improving population health. Enhancement of educational attainment and access to health services and income support for those in greatest need appear to be particularly important pathways to improved population health. |
Differential pattern of functional brain plasticity after compassion and empathy training. | Although empathy is crucial for successful social interactions, excessive sharing of others' negative emotions may be maladaptive and constitute a source of burnout. To investigate functional neural plasticity underlying the augmentation of empathy and to test the counteracting potential of compassion, one group of participants was first trained in empathic resonance and subsequently in compassion. In response to videos depicting human suffering, empathy training, but not memory training (control group), increased negative affect and brain activations in anterior insula and anterior midcingulate cortex, brain regions previously associated with empathy for pain. In contrast, subsequent compassion training reversed the increase in negative affect and augmented self-reports of positive affect. In addition, compassion training increased activations in a non-overlapping brain network spanning ventral striatum, pregenual anterior cingulate cortex and medial orbitofrontal cortex. We conclude that training compassion may reflect a new coping strategy to overcome empathic distress and strengthen resilience.
Sources of Trust and Consumers' Participation in Permission-Based Mobile Marketing | The chapter investigates different sources of trust as factors affecting consumers' willingness to provide companies with personal information and the permission to use it in mobile marketing. The chapter develops a conceptual model, which is tested with data from a survey of 200 young Finnish consumers. The data were analyzed by means of structural equation modeling (LISREL 8.7). The main source of trust affecting the consumers' decision to participate in mobile marketing is the company's media presence, rather than personal experiences or social influence. Hence, mobile marketers should focus on building a strong and positive media presence and image in order to gain consumers' permission for mobile marketing. Further, international research is required to investigate, in particular, institutionally based sources of trust in different regulatory and cultural environments.
Learning to Generate Corrective Patches using Neural Machine Translation | Bug fixing is generally a manually-intensive task. However, recent work has proposed the idea of automated program repair, which aims to repair (at least a subset of) bugs in different ways such as code mutation, etc. Following in the same line of work as automated bug repair, in this paper we aim to leverage past fixes to propose fixes of current/future bugs. Specifically, we propose Ratchet, a corrective patch generation system using neural machine translation. By learning corresponding pre-correction and post-correction code in past fixes with a neural sequence-to-sequence model, Ratchet is able to generate a fix code for a given bug-prone code query. We perform an empirical study with five open source projects, namely Ambari, Camel, Hadoop, Jetty and Wicket, to evaluate the effectiveness of Ratchet. Our findings show that Ratchet can generate syntactically valid statements 98.7% of the time, and achieve an F1-measure between 0.41-0.83 with respect to the actual fixes adopted in the code base. In addition, we perform a qualitative validation using 20 participants to see whether the generated statements can be helpful in correcting bugs. Our survey showed that Ratchet’s output was considered to be helpful in fixing the bugs on many occasions, even if fix was not 100% |
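The abstract does not spell out Ratchet's architecture, so the sketch below is a generic token-level GRU encoder-decoder of the kind such patch-generation systems build on; the vocabulary size and token ids are placeholders.

```python
import torch
import torch.nn as nn

class PatchSeq2Seq(nn.Module):
    """Encoder reads pre-correction code tokens; decoder emits the fix."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, buggy, fixed_in):
        _, h = self.enc(self.emb(buggy))        # encode the buggy snippet
        y, _ = self.dec(self.emb(fixed_in), h)  # teacher-forced decoding
        return self.out(y)                      # logits for the next fix token

VOCAB = 5000
model = PatchSeq2Seq(VOCAB)
buggy = torch.randint(0, VOCAB, (4, 20))   # placeholder token ids
fixed = torch.randint(0, VOCAB, (4, 21))   # <s> fix tokens </s>
logits = model(buggy, fixed[:, :-1])       # decoder input is shifted left
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), fixed[:, 1:].reshape(-1))
loss.backward()
print(float(loss))                         # ~log(VOCAB) before training
```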
Practical and lightweight domain isolation on Android | In this paper, we introduce a security framework for practical and lightweight domain isolation on Android to mitigate unauthorized data access and communication among applications of different trust levels (e.g., private and corporate). We present the design and implementation of our framework, TrustDroid, which in contrast to existing solutions enables isolation at different layers of the Android software stack: (1) at the middleware layer to prevent inter-domain application communication and data access, (2) at the kernel layer to enforce mandatory access control on the file system and on Inter-Process Communication (IPC) channels, and (3) at the network layer to mediate network traffic. For instance, (3) allows network data to be only read by a particular domain, or enables basic context-based policies such as preventing Internet access by untrusted applications while an employee is connected to the company's network.
Our approach accurately addresses the demands of the business world, namely to isolate data and applications of different trust levels in a practical and lightweight way. Moreover, our solution is the first to leverage mandatory access control with TOMOYO Linux on a real Android device (Nexus One). Our evaluation demonstrates that TrustDroid adds only a negligible overhead and, in contrast to contemporary full virtualization, only minimally affects the battery's lifetime.
Trichotillomania (hair pulling disorder), skin picking disorder, and stereotypic movement disorder: toward DSM-V. | In DSM-IV-TR, trichotillomania (TTM) is classified as an impulse control disorder (not classified elsewhere), skin picking lacks its own diagnostic category (but might be diagnosed as an impulse control disorder not otherwise specified), and stereotypic movement disorder is classified as a disorder usually first diagnosed in infancy, childhood, or adolescence. ICD-10 classifies TTM as a habit and impulse disorder, and includes stereotyped movement disorders in a section on other behavioral and emotional disorders with onset usually occurring in childhood and adolescence. This article provides a focused review of nosological issues relevant to DSM-V, given recent empirical findings. This review presents a number of options and preliminary recommendations to be considered for DSM-V: (1) Although TTM fits optimally into a category of body-focused repetitive behavioral disorders, in a nosology comprised of relatively few major categories it fits best within a category of motoric obsessive-compulsive spectrum disorders, (2) available evidence does not support continuing to include (current) diagnostic criteria B and C for TTM in DSM-V, (3) the text for TTM should be updated to describe subtypes and forms of hair pulling, (4) there are persuasive reasons for referring to TTM as "hair pulling disorder (trichotillomania)," (5) diagnostic criteria for skin picking disorder should be included in DSM-V or in DSM-V's Appendix of Criteria Sets Provided for Further Study, and (6) the diagnostic criteria for stereotypic movement disorder should be clarified and simplified, bringing them in line with those for hair pulling and skin picking disorder.
SEISMIC LOSS ESTIMATION OF MID-RISE MASONRY INFILLED STEEL FRAME STRUCTURES THROUGH INCREMENTAL DYNAMIC ANALYSIS | Seismic loss estimation is greatly influenced by the identification of the failure mechanism and the distribution of the structures. In the case of infilled structures, the final failure mechanism differs greatly from that expected during the design and analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment, and consequently the loss estimation, a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infill panels on physical structural damage and the associated economic losses under seismic excitation. The selected index buildings have been modeled on typical real-world mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and build quality, has been implemented in the models. Fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. The results indicate that the presence of infill panels has a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimation.
A Framework for Integrating Non-Functional Requirements into Conceptual Models | The development of complex information systems calls for conceptual models that describe aspects beyond entities and activities. In particular, recent research has pointed out that conceptual models need to model goals, in order to capture the intentions which underlie complex situations within an organisational context. This paper focuses on one class of goals, namely nonfunctional requirements (NFR), which need to be captured and analysed from the very early phases of the software development process. The paper presents a framework for integrating NFRs into the ER and OO models. This framework has been validated by two case studies, one of which is very large. The results of the case studies suggest that goal modelling during early phases can lead to a more productive and complete modelling activity. |
Comparing the impact of supine and leg elevation positions during coronary artery bypass graft on deep vein thrombosis occurrence: a randomized clinical trial study. | Deep vein thrombosis (DVT) is a common postoperative complication that occurs in patients undergoing coronary artery bypass grafting surgery (CABG). Early ambulation, elastic stockings, intermittent pneumatic compression, and leg elevation, before and after surgery, are among the preventative interventions. The goal of the study was to compare the effect of the supine position with that of leg elevation on the occurrence of DVT during CABG and afterwards, until ambulation. Between October 2008 and May 2011, a total of 185 eligible CABG patients admitted to the Cardiac Surgery Unit were randomly assigned to the supine group (n = 92) or the leg-elevation group (n = 93). Doppler ultrasonography of the superficial and deep veins in the lower extremities was performed for each patient before and after surgery. Logistic regression analysis was conducted to investigate the possible independent factors associated with DVT. DVT was detected in 25 (13.5%) patients: 17 (18.4%) patients in the supine position group and 8 (8.6%) in the leg-elevation group (P value = .065). After adjustment for confounding factors there was no effect of position on the presence of DVT (P = .126). Clots were often localized in legs ipsilateral to the saphenous vein harvest. The authors conclude that a positive, albeit not statistically significant, trend was evident toward a higher incidence of silent DVT in the supine position during and after CABG in comparison with leg elevation. Future studies with larger sample sizes are required to confirm this result.
Impact of the variation of horse racing odds on the outcome of horse races at the Champ de Mars | Horse racing is the favourite sport of Mauritians. This can be demonstrated by the presence of huge crowds at the Champ de Mars on racing days. Many people wait until the last moment to bet, as they feel that the variation of odds has an influence on the winner of a race at the Champ de Mars. This is the motivation for this research. Thus, in this work, we have used artificial neural networks to predict the winner of a race based on the variation of the odds. The odds were collected at eight different intervals. The training was done on 232 races and the testing on 27 races. An overall percentage of 7.4% was obtained for the prediction of winners. This shows that the variation in horse racing odds does not have any impact on the outcome of horse races at the Champ de Mars. To our knowledge, this is the first study to examine the relationship between the variation in odds and the rank of horses.
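As an illustrative sketch only (the authors' network architecture and their 232-race dataset are not reproduced), a small feed-forward network over eight odds snapshots per horse could be trained as follows; the data and the "odds shortened" labeling rule are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: one horse's odds sampled at 8 intervals before the race;
# the made-up rule labels the horse a winner when its final odds shortened.
rng = np.random.default_rng(0)
X = rng.uniform(1.5, 30.0, size=(500, 8))
y = (X[:, 7] < X[:, 0]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))
```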
Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation | We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish). |
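A minimal sketch of the compositional idea with illustrative sizes: a bidirectional LSTM reads a word's characters, and the concatenation of the final forward and backward states serves as the word vector (the tagging and language-modeling layers on top are omitted).

```python
import torch
import torch.nn as nn

class CharToWord(nn.Module):
    """Compose a word vector from its characters with a bidirectional LSTM."""
    def __init__(self, n_chars=128, char_dim=32, word_dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, char_ids):                  # (batch, word_length)
        _, (h, _) = self.lstm(self.emb(char_ids))
        # concatenate final forward and backward states into one word vector
        return torch.cat([h[0], h[1]], dim=1)     # (batch, word_dim)

word = torch.tensor([[ord(ch) for ch in "cats"]])  # toy char ids via code points
print(CharToWord()(word).shape)                    # torch.Size([1, 64])
```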
Prediction of Aggressive Comments in Social Media: an Exploratory Study | This paper presents a set of techniques for predicting aggressive comments in social media. At a time when cyberbullying has, unfortunately, made its entrance into society and the Internet, it becomes necessary to find ways of preventing and overcoming this phenomenon. One of these is the use of machine learning techniques for automatically detecting cases of cyberbullying; a primary task within cyberbullying detection is aggressive text detection. We concretely explore different computational techniques for carrying out this task, either as a classification or as a regression problem, and our results suggest that a key feature is the identification of profane words.
Defending against VM rollback attack | Protecting VMs from a compromised or even malicious hypervisor has recently become a hot topic. However, most previous systems are vulnerable to rollback attacks, since a rollback is hard to distinguish from the normal suspend/resume and migration operations that an IaaS platform usually offers. Some previous systems simply disable these features to defend against rollback attacks, while others require heavy user involvement. In this paper, we propose a new solution that strikes a balance between security and functionality. By securely logging all suspend/resume and migration operations inside a small trusted computing base, a user can audit the log to check for malicious rollbacks and constrain the operations on the VMs. The solution considers several practical issues, including hardware limitations and minimizing user interaction, and has been implemented on a recent VM protection system.
Medicolegal anthropology in France. | Medicolegal anthropology has a very long history in France. Basic studies on human skeletal remains started as early as the 18th century. The 19th century produced many medical theses and research papers on age, sex, as well as stature estimation. The research proliferated in the first 60 years of the 20th century, much of which is still in use in France and abroad. The later half of the 20th century, however, was dormant in research on human skeletal biology at a time when forensic anthropology was becoming an active field worldwide. In the last decade, medicolegal anthropology took a different perspective, independent of its traditional roots. Research and practice have both been in the professional domain of forensic physicians unlike the situation in many other countries. Population based studies requiring large databases or skeletal collections have diminished considerably. Thus, most research has been on factors of individualization such as trauma, time since death, crime scene investigation, and facial reconstruction. It is suggested that there is a need for cooperation between the forensic physician and anthropologist to further research. This also encourages anthropologists to carry out research and practice that can fulfill the needs of the medicolegal system of the country. |
On the modeling of error functions as high dimensional landscapes for weight initialization in learning networks | Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm. |
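The RMT-derived initialization itself is not specified in the abstract; for reference only, the sketch below shows the conventional variance-scaled Gaussian baseline that such landscape analyses start from and aim to refine.

```python
import numpy as np

def scaled_init(fan_in, fan_out, gain=1.0, seed=0):
    """Glorot-style variance-scaled Gaussian initialization (a standard
    baseline, not the paper's random-matrix-derived scheme)."""
    rng = np.random.default_rng(seed)
    std = gain * np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = scaled_init(784, 256)
print(W.std())  # close to sqrt(2 / (784 + 256)) ~ 0.044
```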
Learning Spectral Clustering | Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters, with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive a new cost function for spectral clustering based on a measure of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem. Minimizing this cost function with respect to the partition leads to a new spectral clustering algorithm. Minimizing with respect to the similarity matrix leads to an algorithm for learning the similarity matrix. We develop a tractable approximation of our cost function that is based on the power method of computing eigenvectors. |
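For context, a baseline normalized-cut spectral clustering routine with a hand-fixed Gaussian similarity matrix is sketched below; the paper's contributions, learning the similarity matrix and approximating the cost through the power method, are precisely what this sketch does not do.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Partition rows of X into k clusters via the normalized affinity matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))              # fixed similarity matrix
    Dm = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = Dm @ W @ Dm                                 # normalized affinity
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, -k:]                                # top-k eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 5])
print(spectral_clustering(X, k=2))
```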
Empagliflozin as Add-On to Metformin in Patients With Type 2 Diabetes: A 24-Week, Randomized, Double-Blind, Placebo-Controlled Trial | RESEARCH DESIGN AND METHODS Patients with HbA1c levels of ≥7% to ≤10% (≥53 to ≤86 mmol/mol) while receiving metformin (≥1,500 mg/day) were randomized to once-daily treatment with empagliflozin 10 mg (n = 217), empagliflozin 25 mg (n = 213), or placebo (n = 207) for 24 weeks. The primary end point was the change in HbA1c level from baseline at week 24. Key secondary end points were changes from baseline in weight and mean daily glucose (MDG) at week 24.
Mechanism-Aware Neural Machine for Dialogue Response Generation | To the same utterance, people's responses in everyday dialogue may differ widely in terms of content semantics, speaking styles, communication intentions and so on. Previous generative conversational models ignore these 1-to-n relationships between a post and its diverse responses, and tend to return high-frequency but meaningless responses. In this study we propose a mechanism-aware neural machine for dialogue response generation. It assumes that there exist some latent responding mechanisms, each of which can generate different responses for a single input post. With this assumption we model different responding mechanisms as latent embeddings, and develop an encoder-diverter-decoder framework to train its modules in an end-to-end fashion. With the learned latent mechanisms, for the first time these decomposed modules can be used to encode the input into mechanism-aware context, and decode the responses with controlled generation styles and topics. Finally, experiments with human judgements, intuitive examples, and detailed discussions demonstrate the quality and diversity of the generated responses, with a 9.80% increase in acceptable ratio over the best of six baseline methods.
Comparison of Simulation Tools ATP-EMTP and MATLAB-Simulink for Time Domain Power System Transient Studies | Continuous development of appropriate software packages makes simulation of power engineering problems more and more effective. However, these analysis tools differ from each other considerably from the point of view of the applicability to a special problem. The authors compare two widespread environments: MATLAB-SIMULINK, which can be used to simulate a wide spectrum of dynamic systems, and ATP-EMTP, which is specific software to simulate power system transient problems. In the first part of the paper the components (function-blocks) that can be used to build a circuit, are listed. Then three examples are presented which demonstrate the capabilities and underline the advantages and drawbacks of both programs. |
Information-Flow Security for a Core of JavaScript | Tracking information flow in dynamic languages remains an important and intricate problem. This paper makes substantial headway toward understanding the main challenges and resolving them. We identify language constructs that constitute a core of JavaScript: objects, higher-order functions, exceptions, and dynamic code evaluation. The core is powerful enough to naturally encode native constructs such as arrays, as well as functionalities of JavaScript's API from the document object model (DOM) related to document tree manipulation and event processing. As the main contribution, we develop a dynamic type system that guarantees information-flow security for this language.
Edge-based visual-inertial odometry | In this paper we propose a method for monocular visual-inertial odometry that utilizes image edges as measurements. In contrast to previous feature-based approaches, the proposed method does not employ any assumption on the geometry of the scene (e.g., it does not assume straight lines). It can thus use measurements from all image areas with significant gradient, similarly to direct semi-dense methods. However, in contrast to direct semi-dense approaches, the proposed method's measurement model is invariant to linear changes in the image intensity. The novel edge parameterization and measurement model we propose explicitly account for the fact that edge points can only provide useful information in the direction of the image gradient. We present both Monte-Carlo simulations, as well as results from real-world experimental testing, which demonstrate that the proposed edge-based approach to visual-inertial odometry is consistent, and outperforms the point-based one. |
CONDITIONAL RANDOM FIELDS FOR LIDAR POINT CLOUD CLASSIFICATION IN COMPLEX URBAN AREAS | In this paper, we investigate the potential of a Conditional Random Field (CRF) approach for the classification of an airborne LiDAR (Light Detection And Ranging) point cloud. This method enables the incorporation of contextual information and the learning of specific relations of object classes within a training step. Thus, it is a powerful approach for obtaining reliable results even in complex urban scenes. Geometrical features as well as an intensity value are used to distinguish the five object classes building, low vegetation, tree, natural ground, and asphalt ground. The performance of our method is evaluated on the dataset of Vaihingen, Germany, in the context of the 'ISPRS Test Project on Urban Classification and 3D Building Reconstruction'. Therefore, the results of the 3D classification were submitted as a 2D binary label image for a subset of two classes, namely building and tree.
Integrating Surface and Abstract Features for Robust Cross-Domain Chinese Word Segmentation | Current character-based approaches are not robust for cross-domain Chinese word segmentation. In this paper, we alleviate this problem by deriving a novel enhanced character-based generative model with a new abstract aggregate candidate feature, which indicates if the given candidate prefers the corresponding position-tag of the longest dictionary-matching word. Since the distribution of the proposed feature is invariant across domains, our model thus possesses better generalization ability. Open tests on CIPS-SIGHAN-2010 show that the enhanced generative model achieves robust cross-domain performance for various OOV coverage rates and obtains the best performance on three out of four domains. The enhanced generative model is then further integrated with a discriminative model which also utilizes dictionary information. This integrated model is shown to be either superior or comparable to all other models reported in the literature on every domain of this task.
The state of the art in semantic relatedness: a framework for comparison | Semantic relatedness (SR) is a form of measurement that quantitatively identifies the relationship between two words or concepts based on the similarity or closeness of their meaning. In the recent years, there have been noteworthy efforts to compute SR between pairs of words or concepts by exploiting various knowledge resources such as linguistically structured (e.g. WordNet) and collaboratively developed knowledge bases (e.g. Wikipedia), among others. The existing approaches rely on different methods for utilizing these knowledge resources, for instance, methods that depend on the path between two words, or a vector representation of the word descriptions. The purpose of this paper is to review and present the state of the art in SR research through a hierarchical framework. The dimensions of the proposed framework cover three main aspects of SR approaches including the resources they rely on, the computational methods applied on the resources for developing a relatedness metric, and the evaluation models that are used for measuring their effectiveness. We have selected 14 representative SR approaches to be analyzed using our framework. We compare and critically review each of them through the dimensions of our framework, thus, identifying strengths and weaknesses of each approach. In addition, we provide guidelines for researchers and practitioners on how to select the most relevant SR method for their purpose. Finally, based on the comparative analysis of the reviewed relatedness measures, we identify existing challenges and potentially valuable future research directions in this domain. |
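As a concrete instance of the simplest, path-based family of measures the framework covers, WordNet path similarity can be computed with NLTK (assuming the WordNet corpus has been downloaded via nltk.download('wordnet')); taking the maximum over sense pairs is one common convention, not the only one.

```python
from nltk.corpus import wordnet as wn

def path_relatedness(w1, w2):
    """Max path similarity over all sense pairs of the two words."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

print(path_relatedness("car", "automobile"))  # 1.0: they share a synset
print(path_relatedness("car", "banana"))      # much lower
```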
Comparison of vaginal mesh repair with sacrospinous vaginal colpopexy in the management of vaginal vault prolapse after hysterectomy in patients with levator ani avulsion: a randomized controlled trial. | OBJECTIVE
To compare the efficacy of two standard surgical procedures for post-hysterectomy vaginal vault prolapse in patients with levator ani avulsion.
METHODS
This was a single-center, randomized interventional trial of two standard surgical procedures for post-hysterectomy vaginal vault prolapse: Prolift Total vs unilateral vaginal sacrospinous colpopexy with native tissue vaginal repair (sacrospinous fixation, SSF), during the period from 2008 to 2011. Entry criteria included at least two-compartment prolapse, as well as complete unilateral or bilateral levator ani avulsion injury. The primary outcome was anatomical failure based on clinical and ultrasound assessment. Failure was defined clinically, according to the Pelvic Organ Prolapse Quantification system, as Ba, C or Bp at the hymen or below, and on translabial ultrasound as bladder descent to 10 mm or more below the lower margin of the symphysis pubis on maximum Valsalva maneuver (this composite criterion is encoded in the sketch following the abstract). Secondary outcomes were evaluation of continence, sexual function and prolapse symptoms based on validated questionnaires.
RESULTS
During the study period, 142 patients who were post-hysterectomy underwent surgery for prolapse in our unit; 72 of these were diagnosed with an avulsion injury and were offered participation in the study. Seventy patients were randomized into two groups: 36 in the Prolift group and 34 in the SSF group. On clinical examination at 1-year follow-up, we observed one (3%) case of anatomical failure in the Prolift group and 22 (65%) in the SSF group (P < 0.001). Using ultrasound criteria, there was one (2.8%) failure in the Prolift group compared with 21 (61.8%) in the SSF group (P < 0.001). The postoperative POPDI (Pelvic Organ Prolapse Distress Inventory) score for subjective outcome was 15.3 in the Prolift group vs 21.7 in the SSF group (P = 0.16).
CONCLUSION
In patients with prolapse after hysterectomy and levator ani avulsion injury, SSF has a higher anatomical failure rate than does the Prolift Total procedure at 1-year follow-up. |
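As referenced in the Methods above, the composite anatomical-failure criterion can be encoded in a few lines. Variable names are illustrative; the thresholds follow the text.

```python
# POP-Q values are in cm relative to the hymen (0 = at the hymen, negative = above);
# bladder_descent_mm is descent below the symphysis on maximum Valsalva.
def anatomical_failure(Ba, C, Bp, bladder_descent_mm):
    clinical = max(Ba, C, Bp) >= 0        # any point at the hymen or below
    sonographic = bladder_descent_mm >= 10
    return clinical or sonographic

print(anatomical_failure(Ba=-1, C=-5, Bp=-2, bladder_descent_mm=12))  # True: ultrasound criterion
```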
Scanpath comparison revisited | The scanpath comparison framework based on string editing is revisited. The previous method of clustering based on k-means "preevaluation" is replaced by the mean shift algorithm followed by elliptical modeling via Principal Components Analysis. Ellipse intersection determines cluster overlap, with fast nearest-neighbor search provided by the kd-tree. Subsequent construction of Y-matrices and parsing diagrams is fully automated, obviating prior interactive steps. Empirical validation is performed via analysis of eye movements collected during a variant of the Trail Making Test, where participants were asked to visually connect alphanumeric targets (letters and numbers). The observed repetitive position similarity index matches previously published results, providing ongoing support for the scanpath theory (at least in this situation). Task dependence of eye movements may be indicated by the global position index, which differs considerably from past results based on free viewing. |
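The string-editing core the framework is built on reduces fixation sequences to strings of cluster labels and compares them by edit distance. A minimal sketch, with the clustering stage (mean shift plus PCA ellipses in the paper) replaced by precomputed labels:

```python
# Levenshtein distance between two cluster-label strings, normalized to a
# similarity index in [0, 1]; higher means more similar scanpaths.
def edit_distance(s, t):
    d = [[i + j if i * j == 0 else 0 for j in range(len(t) + 1)]
         for i in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i][j] = min(d[i - 1][j] + 1,               # deletion
                          d[i][j - 1] + 1,               # insertion
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))  # substitution
    return d[-1][-1]

def similarity(s, t):
    return 1.0 - edit_distance(s, t) / max(len(s), len(t))

# e.g. two viewings of the same stimulus, clusters labeled A-E
print(similarity("ABCDA", "ABDCA"))   # 0.6
```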
Modes of theorizing in strategic human resource management: tests of universalistic, contingency, | The field of strategic human resource management (SHRM) has been criticized for lacking a solid theoretical foundation. This article documents that, contrary to this criticism, the SHRM literature ... |
Combined effect of angiotensin II receptor blocker and either a calcium channel blocker or diuretic on day-by-day variability of home blood pressure: the Japan Combined Treatment With Olmesartan and a Calcium-Channel Blocker Versus Olmesartan and Diuretics Randomized Efficacy Study. | Day-by-day home blood pressure (BP) variability (BPV) was reported to be associated with increased cardiovascular risk. We aimed to test the hypothesis that the angiotensin II receptor blocker/calcium-channel blocker combination decreases day-by-day BPV more than the angiotensin II receptor blocker/diuretic combination does and investigated the mechanism underlying the former reduction. We enrolled 207 hypertensive subjects treated with olmesartan monotherapy for 12 weeks. The subjects were randomly assigned to treatment with hydrochlorothiazide (n = 104) or azelnidipine (n = 103) for 24 weeks. Home BP was taken in triplicate with a memory-equipped device in the morning and in the evening for 5 consecutive days before each visit. Visits occurred at 4-week intervals. Home BPV was defined as within-individual SD of the 5-day home BP. Arterial stiffness was assessed by aortic pulse wave velocity at baseline and 24 weeks later. The reductions in home systolic BP were similar between the 2 groups, whereas the SD of home systolic BP decreased more in the azelnidipine group than in the hydrochlorothiazide group during the follow-up period (follow-up mean: 6.3 versus 7.1 mm Hg; P = 0.007). In the azelnidipine group, the change in aortic pulse wave velocity was independently associated with the change in SD of home systolic BP (regression coefficient ± SE = 0.79 ± 0.37; P = 0.036). This study demonstrated that the angiotensin II receptor blocker/calcium-channel blocker combination improved home BPV in addition to home BP reduction and that the reduction in home BPV was partly attributable to the arterial stiffness reduction by this combination. |
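A worked example of the variability measure used throughout: day-by-day home BPV is the within-individual SD of the 5-day home systolic BP. The readings below are invented for illustration.

```python
import statistics

sbp_5day = [134, 128, 141, 130, 137]     # one subject, morning SBP, mm Hg
bpv = statistics.stdev(sbp_5day)         # sample SD across the 5 days = day-by-day BPV
print(f"mean {statistics.mean(sbp_5day):.1f} mm Hg, BPV (SD) {bpv:.1f} mm Hg")
# mean 134.0 mm Hg, BPV (SD) 5.2 mm Hg
```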
Association of Rare Loss-of-Function Alleles in HAL, Serum Histidine Levels, and Incident Coronary Heart Disease. | BACKGROUND
Histidine is a semiessential amino acid with antioxidant and anti-inflammatory properties. Few data are available on the associations between genetic variants, histidine levels, and incident coronary heart disease (CHD) in a population-based sample.
METHODS AND RESULTS
By conducting whole exome sequencing on 1152 African Americans in the Atherosclerosis Risk in Communities (ARIC) study and focusing on loss-of-function (LoF) variants, we identified 3 novel rare LoF variants in HAL, a gene that encodes histidine ammonia-lyase in the first step of histidine catabolism. These LoF variants had large effects on blood histidine levels (β=0.26; P=1.2×10^-13). The positive association with histidine levels was replicated by genotyping an independent sample of 718 ARIC African Americans (minor allele frequency=1%; P=1.2×10^-4). In addition, high blood histidine levels were associated with reduced risk of developing incident CHD with an average of 21.5 years of follow-up among African Americans (hazard ratio=0.18; P=1.9×10^-4). This finding was validated in an independent sample of European Americans from the Framingham Heart Study (FHS) Offspring Cohort. However, LoF variants in HAL were not directly significantly associated with incident CHD after meta-analyzing results from the CHARGE Consortium.
CONCLUSIONS
Three LoF mutations in HAL were associated with increased histidine levels, which in turn were shown to be inversely related to the risk of CHD among both African Americans and European Americans. Future investigations on the association between HAL gene variation and CHD are warranted. |
Economic burden of dengue in Indonesia | BACKGROUND
Dengue is associated with significant economic expenditure and it is estimated that the Asia Pacific region accounts for >50% of the global cost. Indonesia has one of the world's highest dengue burdens; Aedes aegypti and Aedes albopictus are the primary and secondary vectors. In the absence of local data on disease cost, this study estimated the annual economic burden during 2015 of both hospitalized and ambulatory dengue cases in Indonesia.
METHODS
Total 2015 dengue costs were calculated using both prospective and retrospective methods, drawing on data from public and private hospitals and health centres in three provinces: Yogyakarta, Bali and Jakarta. Direct costs were extracted from billing systems and claims; a patient survey captured indirect and out-of-pocket costs at discharge and 2 weeks later. Adjustments across sites based on similar clinical practices and healthcare landscapes were performed to fill gaps in cost estimates. The national burden of dengue was extrapolated from the provincial data of the three sites by applying an empirically derived epidemiological expansion factor.
RESULTS
Total direct and indirect costs per dengue case assessed at Yogyakarta, Bali and Jakarta were US$791, US$1,241 and US$1,250, respectively. The total 2015 economic burden of dengue in Indonesia was estimated at US$381.15 million, which comprised US$355.2 million for hospitalized and US$26.2 million for ambulatory care cases.
CONCLUSION
Dengue imposes a substantial economic burden for Indonesian public payers and society. Complemented with an appropriate weighting method and by accounting for local specificities and practices, these data may support national level public health decision making for prevention/control of dengue in public health priority lists. |
Image change detection algorithms: a systematic survey | Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. |
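As a concrete instance of the simplest decision rule in the surveyed family, here is pixelwise differencing with a robust significance-style threshold; real algorithms layer predictive, shading and background models on top of this. All parameters and data are illustrative.

```python
import numpy as np

def change_mask(img1, img2, k=3.0):
    """Flag pixels whose difference exceeds k robust-SDs of the residual."""
    d = img2.astype(float) - img1.astype(float)
    med = np.median(d)
    sigma = 1.4826 * np.median(np.abs(d - med))   # robust scale estimate (MAD)
    return np.abs(d - med) > k * max(sigma, 1e-9)

rng = np.random.default_rng(1)
a = rng.normal(100, 2, (64, 64))                  # "before" image
b = a + rng.normal(0, 2, (64, 64))                # "after": same scene + noise
b[20:30, 20:30] += 40                             # inject a genuine change
print(change_mask(a, b).sum(), "pixels flagged")  # ~100 true-change pixels
```

The MAD-based threshold is one way of making the hypothesis-test idea mentioned in the survey robust to outliers caused by the change itself.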
Multi-label Fashion Image Classification with Minimal Human Supervision | We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full-body poses, each with a set of 66 binary labels corresponding to information about the garments worn in the image, obtained in an automatic manner. As the automatically collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these corrected labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance. |
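A sketch of the basic multi-label setup the paper builds on: one sigmoid output per binary garment label, trained with binary cross-entropy. The backbone and the label-cleaning component of the cited approach are abstracted away; apart from the 66-label space, all shapes and sizes are assumptions.

```python
import torch
import torch.nn as nn

NUM_LABELS = 66

model = nn.Sequential(                 # stand-in for a CNN image backbone
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, NUM_LABELS),        # one logit per garment attribute
)
criterion = nn.BCEWithLogitsLoss()     # independent binary decision per label

images = torch.randn(8, 3, 64, 64)                         # toy batch
noisy_labels = torch.randint(0, 2, (8, NUM_LABELS)).float()
loss = criterion(model(images), noisy_labels)
loss.backward()
print(float(loss))
```

Unlike softmax classification, each of the 66 labels is predicted independently, so an image can be positive for any subset of garments.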
Resistance and Biosorption of Mercury by Bacteria Isolated from Industrial Effluents | The present study is aimed at assessing the ability of two Hg-resistant bacterial strains, Brevibacterium casei and Pseudomonas aeruginosa, to take up the metal from the medium. For the bacterial isolates, the minimum inhibitory concentration of Hg ranged between 400 and 500 μg/mL. Pseudomonas aeruginosa could tolerate Pb (600 μg/mL), Cu (200 μg/mL), Cd (50 μg/mL), Zn (50 μg/mL), Ni (550 μg/mL) and Cr (50 μg/mL). Brevibacterium casei, on the other hand, showed resistance against Pb, Cu, Cr, Ni, Zn and Cd at concentrations of 650, 200, 150, 550, 50, and 50 μg/mL, respectively. The isolates showed typical growth curves, but the lag and log phases were extended in the presence of mercury. Both isolates showed optimum growth at 37°C and pH 7-7.5. The metal-processing ability of the isolates was determined in a medium containing 100 μg/mL of Hg. Pseudomonas aeruginosa reduced the mercury in the medium by 93% after 40 hours, and removed 35%, 55%, 70% and 85% of the Hg after 8, 16, 24 and 32 hours, respectively. Brevibacterium casei also efficiently removed 80% of the mercury after 40 hours, and removed 20%, 40%, 50%, and 65% of the Hg after 8, 16, 24 and 32 hours, respectively. Both bacterial strains showed a remarkable ability to take up metal ions from the culture medium: Pseudomonas aeruginosa was observed to take up 80% and Brevibacterium casei 70% of the Hg from the medium after 24 hours of incubation at 37°C. This metal uptake ability suggests the possibility of using these bacterial strains for the removal of mercury from Hg-contaminated wastewater. |
Direction of Arrival Estimation of GNSS Signals Based on Synthetic Antenna Array | Jamming and interference are sources of error in the positions estimated by GNSS receivers. Interfering signals reduce the signal-to-noise ratio and can cause the receiver to fail to detect satellite signals correctly. Because beamforming techniques are robust to jamming and can mitigate multipath by placing nulls in the direction of interference signals, an antenna array with a set of multi-channel receivers can be used to improve GNSS signal reception. Spatial-reference beamforming uses information on the Direction Of Arrival (DOA) of the desired and interference signals for this purpose. However, a multi-channel receiver is impractical for Angle Of Arrival (AOA) estimation in many applications, owing to hardware limitations or portability issues. This paper proposes a new method for DOA estimation of jammer and interference signals based on a synthetic antenna array, in which the motion of a single antenna is used to estimate the AOA of the interfering signals. |
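One way to make the synthetic-array idea concrete: snapshots collected as the single antenna moves are stacked as virtual elements of a uniform linear array, after which a classical subspace method such as MUSIC can scan for the jammer direction. The geometry, signal model and MUSIC estimator below are illustrative assumptions, not necessarily the paper's algorithm.

```python
import numpy as np

M, d, n_src = 8, 0.5, 1                  # virtual elements, spacing (wavelengths), sources
theta_true = 25.0                        # jammer direction, degrees

def steer(theta):
    """Steering vector of an M-element ULA for arrival angle theta."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))

rng = np.random.default_rng(0)
snapshots = 200
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)   # jammer waveform
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(steer(theta_true), s) + noise        # synthesized array snapshots

R = X @ X.conj().T / snapshots                    # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, :M - n_src]                             # noise subspace (smallest eigenvalues)

grid = np.arange(-90, 90.5, 0.5)                  # MUSIC pseudo-spectrum scan
p = [1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in grid]
print("estimated DOA:", grid[int(np.argmax(p))], "degrees")   # ~25.0
```

In practice the receiver motion must be known precisely so the phase history of the moving antenna can be mapped onto the virtual element positions.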
LTE-Advanced in 3GPP Rel-13/14: an evolution toward 5G | As the fourth generation (4G) LTE-Advanced network becomes a commercial success, technologies for beyond 4G and 5G are being actively investigated from the research perspective as well as from the standardization perspective. While 5G will integrate the latest technology breakthroughs to achieve the best possible performance, it is expected that LTE-Advanced will continue to evolve, as a part of 5G technologies, in a backward compatible manner to maximize the benefit from the massive economies of scale established around the 3rd Generation Partnership Project (3GPP) LTE/LTE-Advanced ecosystem from Release 8 to Release 12. In this article we introduce a set of key technologies expected for 3GPP Release 13 and 14 with a focus on air interface aspects, as part of the continued evolution of LTE-Advanced and as a bridge from 4G to 5G. |
Measuring Coverage in MNCH: A Prospective Validation Study in Pakistan and Bangladesh on Measuring Correct Treatment of Childhood Pneumonia | BACKGROUND
Antibiotic treatment for pneumonia as measured by Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) is a key indicator for tracking progress in achieving Millennium Development Goal 4. Concerns about the validity of this indicator led us to perform an evaluation in urban and rural settings in Pakistan and Bangladesh.
METHODS AND FINDINGS
Caregivers of 950 children under 5 y with pneumonia and 980 with "no pneumonia" were identified in urban and rural settings and randomly allocated to be asked the DHS/MICS questions 2 or 4 wk later. Study physicians assigned a diagnosis of pneumonia as the reference standard; the ability of the DHS/MICS questions and additional measurement tools to discriminate pneumonia from non-pneumonia cases was evaluated. Results at both sites showed suboptimal discriminative power, with no difference between 2- or 4-wk recall. Individual patterns of sensitivity and specificity varied substantially across study sites (sensitivity 66.9% and 45.5%, and specificity 68.8% and 69.5%, for DHS in Pakistan and Bangladesh, respectively; the underlying arithmetic is illustrated in the sketch after this abstract). Prescribed antibiotics for pneumonia were correctly recalled by about two-thirds of caregivers using DHS questions, increasing to 72% and 82% in Pakistan and Bangladesh, respectively, using a drug chart and detailed enquiry.
CONCLUSIONS
Monitoring antibiotic treatment of pneumonia is essential for national and global programs. Current (DHS/MICS questions) and proposed new (video and pneumonia score) methods of identifying pneumonia based on maternal recall discriminate poorly between pneumonia and children with cough. Furthermore, these methods have a low yield in identifying children who have true pneumonia. Reported antibiotic treatment rates among these children are therefore not a valid proxy indicator of pneumonia treatment rates. These results have important implications for program monitoring and suggest that data in their current format from DHS/MICS surveys should not be used for monitoring antibiotic treatment rates in children with pneumonia at the present time. |
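For reference, the validation arithmetic behind figures like those quoted in the Findings (see the pointer in the abstract above). The 2×2 counts below are reverse-engineered from the reported Pakistan percentages and are illustrative only.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# with the physician diagnosis as the reference standard.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical 2x2 table: 950 reference pneumonia cases, 980 non-pneumonia
sens, spec = sens_spec(tp=636, fn=314, tn=674, fp=306)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")   # ~66.9%, ~68.8%
```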