title | abstract
---|---
Body odor attractiveness as a cue of impending ovulation in women: Evidence from a study using hormone-confirmed ovulation | Scent communication plays a central role in the mating behavior of many nonhuman mammals but has often been overlooked in the study of human mating. However, a growing body of evidence suggests that men may perceive women's high-fertility body scents (collected near ovulation) as more attractive than their low-fertility body scents. The present study provides a methodologically rigorous replication of this finding, while also examining several novel questions. Women collected samples of their natural body scent twice--once on a low-fertility day and once on a high-fertility day of the ovulatory cycle. Tests of luteinizing hormone confirmed that women experienced ovulation within two days of their high-fertility session. Men smelled each woman's high- and low-fertility scent samples and completed discrimination and preference tasks. At above-chance levels, men accurately discriminated between women's high- and low-fertility scent samples (61%) and chose women's high-fertility scent samples as more attractive than their low-fertility scent samples (56%). Men also rated each scent sample on sexiness, pleasantness, and intensity, and estimated the physical attractiveness of the woman who had provided the sample. Multilevel modeling revealed that, when high- and low-fertility scent samples were easier to discriminate from each other, high-fertility scent samples received even more favorable ratings compared with low-fertility scent samples. This study builds on a growing body of evidence indicating that men are attracted to cues of impending ovulation in women and raises the intriguing question of whether women's cycling hormones influence men's attraction and sexual approach behavior. |
Partitioning graphs to speedup Dijkstra's algorithm | We study an acceleration method for point-to-point shortest-path computations in large and sparse directed graphs with given nonnegative arc weights. The acceleration method is called the arc-flag approach and is based on Dijkstra's algorithm. In the arc-flag approach, we allow a preprocessing of the network data to generate additional information, which is then used to speed up shortest-path queries. In the preprocessing phase, the graph is divided into regions and information is gathered on whether an arc is on a shortest path into a given region. The arc-flag method combined with an appropriate partitioning and a bidirected search achieves an average speedup factor of more than 500 compared to the standard Dijkstra algorithm on large networks (1 million nodes, 2.5 million arcs). This combination narrows down the search space of Dijkstra's algorithm to almost the size of the corresponding shortest path for long-distance shortest-path queries. We conduct an experimental study that evaluates which partitionings are best suited for the arc-flag method. In particular, we examine partitioning algorithms from computational geometry and a multiway arc separator partitioning. The evaluation was done on German road networks. The impact of different partitions on the speedup of the shortest-path algorithm is compared. Furthermore, we present an extension of the speedup technique to multiple levels of partitions. With this multilevel variant, the same speedup factors can be achieved with smaller space requirements. It can, therefore, be seen as a compression of the precomputed data that preserves the correctness of the computed shortest paths. |
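As a concrete illustration of the arc-flag pruning described in the abstract above, the following Python sketch runs a point-to-point Dijkstra search that relaxes only arcs whose precomputed flag for the target's region is set. The adjacency-list layout, the `flags` dictionary, and the `region_of` mapping are illustrative assumptions; the preprocessing that computes the flags and the bidirected search used in the paper are not shown.

```python
import heapq

def dijkstra_arc_flags(graph, source, target, region_of):
    """Point-to-point Dijkstra that skips arcs whose precomputed flag for the
    target's region is False.  `graph[u]` is a list of (v, weight, flags)
    tuples, where `flags` maps region id -> bool ("is this arc on some shortest
    path into that region?"), and `region_of[v]` gives the region of node v.
    This is a simplified sketch of the arc-flag idea, not the paper's code."""
    target_region = region_of[target]
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w, flags in graph[u]:
            # Prune: only relax arcs flagged as useful for the target's region.
            if not flags.get(target_region, False):
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy usage: two regions {0, 1}; all arcs flagged as useful for both regions here.
toy = {
    "a": [("b", 2.0, {0: True, 1: True})],
    "b": [("c", 3.0, {0: True, 1: True})],
    "c": [],
}
print(dijkstra_arc_flags(toy, "a", "c", region_of={"a": 0, "b": 0, "c": 1}))  # 5.0
```

Correctness of the pruning rests entirely on the preprocessing: an arc's flag for a region must be true whenever the arc lies on some shortest path into that region, so the search never discards a needed arc.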
Unsupervised Modeling of Twitter Conversations | We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium. |
Cannabis use and psychosis: a longitudinal population-based study. | Cannabis use may increase the risk of psychotic disorders and result in a poor prognosis for those with an established vulnerability to psychosis. A 3-year follow-up (1997-1999) is reported of a general population of 4,045 psychosis-free persons and of 59 subjects in the Netherlands with a baseline diagnosis of psychotic disorder. Substance use was assessed at baseline, 1-year follow-up, and 3-year follow-up. Baseline cannabis use predicted the presence at follow-up of any level of psychotic symptoms (adjusted odds ratio (OR) = 2.76, 95% confidence interval (CI): 1.18, 6.47), as well as a severe level of psychotic symptoms (OR = 24.17, 95% CI: 5.44, 107.46), and clinician assessment of the need for care for psychotic symptoms (OR = 12.01, 95% CI: 2.24, 64.34). The effect of baseline cannabis use was stronger than the effect at 1-year and 3-year follow-up, and more than 50% of the psychosis diagnoses could be attributed to cannabis use. On the additive scale, the effect of cannabis use was much stronger in those with a baseline diagnosis of psychotic disorder (risk difference, 54.7%) than in those without (risk difference, 2.2%; p for interaction = 0.001). Results confirm previous suggestions that cannabis use increases the risk of both the incidence of psychosis in psychosis-free persons and a poor prognosis for those with an established vulnerability to psychotic disorder. |
A double-blind placebo-controlled trial of bupropion sustained-release for smoking cessation in schizophrenia. | The objective of this study was to examine the efficacy of bupropion for smoking cessation in patients with schizophrenia. Adults with schizophrenia who smoked more than 10 cigarettes per day and wished to try to quit smoking were recruited from community mental health centers, enrolled in a 12-week group cognitive behavioral therapy intervention, and randomly assigned to receive either bupropion sustained-release 300 mg/d or identical placebo. Fifty-three adults, 25 on bupropion and 28 on placebo, were randomized, completed at least 1 postbaseline assessment and were included in the analysis. The primary outcome measures were 7-day point prevalence abstinence in the week after the quit date (week 4) and at the end of the intervention (week 12). Subjects in the bupropion group were significantly more likely to be abstinent for the week after the quit date (36% [9/25] vs. 7% [2/28], P = 0.016) and at end of the intervention (16% [4/25] vs. 0%, P = 0.043). Subjects in the bupropion group also had a higher rate of 4-week continuous abstinence (weeks 8-12) (16% [4/25] vs. 0%, P = 0.043) and a longer duration of abstinence (4.2 [3.2] weeks vs. 1.8 [0.96] weeks, t = 2.30, P = 0.037). The effect of bupropion did not persist after discontinuation of treatment. Subjects in the bupropion group had no worsening of clinical symptoms and had a trend toward improvement in depressive and negative symptoms. We conclude that bupropion does not worsen clinical symptoms of schizophrenia and is modestly effective for smoking cessation in patients with schizophrenia. The relapse rate is high after treatment discontinuation. |
Interventions targeting attention in young children with autism. | PURPOSE
The ability to focus and sustain one's attention is critical for learning. Children with autism demonstrate unusual characteristics of attention from infancy. It is reasonable to assume that early anomalies in attention influence a child's developmental trajectories. Therapeutic interventions for autism often focus on core features of autism such as communication and socialization, while very few interventions specifically address attention. The purpose of this article is to provide clinicians a description of attention characteristics in children with autism and discuss interventions thought to improve attention.
METHOD
Characteristics of attention in children with autism are presented. Intervention studies featuring measures of attention as an outcome variable for young children with autism are reviewed to present interventions that have empirical evidence for improvements in attention. Results are synthesized by strategy, specific feature of attention targeted, and results for both habilitative goals and accommodations for attention.
CONCLUSION
Although research is not extensive, several strategies to support attention in young children with autism have been investigated. The empirical findings regarding these strategies can inform evidence-based practice. |
Being participated: a community approach | In this paper, we explore the concept of participatory design from a different viewpoint by drawing on an African philosophy of humanness, Ubuntu, and on African rural community practices. The situational dynamics of participatory interaction become obvious throughout the design experiences within our community project. Supported by a theoretical framework, we reflect upon current participatory design practices. We intend to inspire and refine participatory design concepts and methods beyond the particular context of our own experiences. |
A New Method of Region Embedding for Text Classification | Representing a text as a bag of properly identified “phrases” and using that representation to process the text has proved useful. The key question is how to identify the phrases and represent them. The traditional method of utilizing n-grams can be regarded as an approximation of this approach. Such a method can suffer from data sparsity, however, particularly when the length of the n-gram is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”. Without loss of generality, we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts: the embedding of the word itself, and a weighting matrix used to interact with the local context, referred to as the local context unit. The region embeddings are learned and used in the classification task as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts. |
Dataset for forensic analysis of B-tree file system | Since the B-tree file system (Btrfs) is set to become the de facto standard file system on Linux (and Linux-based) operating systems, a Btrfs dataset for forensic analysis is of great interest and immense value to the forensic community. This article presents a novel dataset for forensic analysis of Btrfs that was collected using a proposed data-recovery procedure. The dataset identifies various generalized and common file system layouts and operations, the specific node-balancing mechanisms triggered, logical addresses of various data structures, on-disk records, recovered data such as directory entries and extent data from leaf and internal nodes, and the percentage of data recovered. |
Forestry and Field Plant Production Technologies in Environmental Life-Cycle Thinking | In our work we developed an environmental analysis model for the technological aspects of land-use changes caused by climate change. In our research we examined the environmental implications of cultivation technologies of agricultural land uses. For the typical agricultural and forestry technologies we set up an eco-balance (input-output material and energy balance) established by a process and life-cycle approach. The results of each technology balance, expressed using environmental impact categories of environmental problems, become assessable and comparable. The agricultural technologies connected with land use can thus be examined in terms of their environmental impacts and environmental riskiness. |
Intrusive tuffs of west Cork, Ireland | Minor intrusions in west Cork include pipes and dykes believed to have originated by a process of fluidization. These bodies are described, emphasis being placed on features known to be characteristic of fluidization. The intrusions are pipes or dykes with sharp contacts with the country-rock, which plunge vertically or at high angles. Some show a zonal form based on the xenolith content; xenoliths moved upwards in the central zones but there is evidence of downward movement in some of the outer zones. Matrices are devoid of igneous material. The west Cork bodies are distinctive in occurring between phases of deformation of a fold-belt and in having their positions controlled by pre-existing structures. Petrologically they are distinguished by having a carbonate content that is variable but attains a maximum of 95 per cent. The bodies are compared with certain carbonatites that are held to be high-level developments of kimberlite pipes. |
An ultra-low-power programmable analog bionic ear processor | We report a programmable analog bionic ear (cochlear implant) processor in a 1.5-µm BiCMOS technology with a power consumption of 211 µW and 77-dB dynamic range of operation. The 9.58 mm × 9.23 mm processor chip runs on a 2.8 V supply and has a power consumption that is lower than state-of-the-art analog-to-digital (A/D)-then-DSP designs by a factor of 25. It is suitable for use in fully implanted cochlear-implant systems of the future which require decades of operation on a 100-mAh rechargeable battery with a finite number of charge-discharge cycles. It may also be used as an ultra-low-power spectrum-analysis front end in portable speech-recognition systems. The power consumption of the processor includes the 100 µW power consumption of a JFET-buffered electret microphone and an associated on-chip microphone front end. An automatic gain control circuit compresses the 77-dB input dynamic range into a narrower internal dynamic range (IDR) of 57 dB at which each of the 16 spectral channels of the processor operate. The output bits of the processor are scanned and reported off chip in a format suitable for continuous-interleaved-sampling stimulation of electrodes. Power-supply-immune biasing circuits ensure robust operation of the processor in the high-RF-noise environment typical of cochlear implant systems. |
Is it really clean? An evaluation of the efficacy of four methods for determining hospital cleanliness. | An important component of effective cleaning in hospitals involves monitoring the efficacy of the methods used. Generally, the recommended tool for monitoring cleaning efficacy is visual assessment. In this study, four methods to determine the cleaning efficacy of hospital surfaces were compared, namely visual assessment, a chemical (ATP) method, and microbiological methods, i.e. aerobic colony count (ACC) and the presence of meticillin-resistant Staphylococcus aureus (MRSA). Respectively, 93.3%, 71.5%, 92.1% and 95.0% of visual, ATP, ACC and MRSA assessments were considered acceptable or 'clean' according to each test standard. Visual assessment alone did not always provide a meaningful measure of surface cleanliness or cleaning efficacy. The average ATP value from 120 swabs before cleaning was 612 relative light units (RLU) (range: 72-2575) and 375 RLU after cleaning (range: 106-1071); the accepted standard is 500 RLU. In a hospital setting with low microbiological counts, the use of chemical tests such as ATP may provide additional information on cleaning efficacy, and ATP trends allow identification of environmental surfaces that require additional cleaning or cleaning schedule amendments. |
The possibility of cosmic acceleration via spatial averaging in Lemaître–Tolman–Bondi models | We investigate the possible occurrence of a positive cosmic acceleration in a spatially averaged, expanding, unbound Lemaître–Tolman–Bondi cosmology. By studying an approximation in which the contribution of 3-curvature dominates over the matter density, we construct numerical models which exhibit acceleration. |
Cellular angiofibroma: a benign neoplasm distinct from angiomyofibroblastoma and spindle cell lipoma. | Four cases of a distinctive soft-tissue tumor of the vulva are described. They were characterized by occurrence in middle-aged women (39-50 years), small size (< 3 cm), and a usually well-circumscribed margin. The preoperative clinical diagnosis was that of a labial or Bartholin gland cyst in three of the four cases. The microscopic appearance was remarkably consistent and was characterized by a cellular neoplasm composed of uniform, bland, spindled stromal cells, numerous thick-walled and often hyalinized vessels, and a scarce component of mature adipocytes. Mitotic activity was brisk in three cases (up to 11 mitoses per 10 high power fields). The stromal cells were positive for vimentin and negative for CD34, S-100 protein, actin, desmin, and epithelial membrane antigen, suggesting fibroblastic differentiation. Two patients with follow-up showed no evidence of recurrence. The differential diagnosis of this distinctive tumor includes aggressive angiomyxoma, angiomyofibroblastoma, spindle cell lipoma, solitary fibrous tumor, perineurioma, and leiomyoma. The designation of "cellular angiofibroma" is chosen to emphasize the two principal components of this tumor: the cellular spindle cell component and the prominent blood vessels. |
Negative-ion element impurities breakdown model | A negative-ion element impurities breakdown model of HfO2 optical thin film is reported. We believe the main impurity element in the thin film comes from the coating material. The weak absorption and laser-induced damage threshold (LIDT) of HfO2 thin films were measured to test the negative-ion element impurities breakdown model. The results indicate that the LIDT decreases and the absorption of the films increases as the content of negative-ion elements increases. The main reason is that negative-ion elements become the centers of volatile gas sources and form defects, which in turn become absorption centers during laser irradiation. Negative-ion elements are therefore harmful impurities, and their presence speeds up the damage of the thin film. |
S.I. success models, 25 years of evolution | This study reports a review of the literature on the evolution of the information systems success model, specifically the DeLone & McLean model (1992), over the last twenty-five years. It also reviews the main criticisms of the model raised by the various researchers who contributed to its updating, making it one of the most widely used models to this day. |
Subspace Clustering via Learning an Adaptive Low-Rank Graph | By using a sparse representation or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not the exact ones, as the traditional definition computes them in a deterministic way. The two steps of representation and clustering are conducted independently, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using such a graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several well-known databases demonstrate that the proposed method performs better than state-of-the-art approaches in clustering. |
A Finite Difference Domain Decomposition Method Using Local Corrections for the Solution of Poisson's Equation | We present a domain decomposition method for computing finite difference solutions to the Poisson equation with infinite domain boundary conditions. Our method is a finite difference analogue of Anderson's Method of Local Corrections. The solution is computed in three steps. First, fine-grid solutions are computed in parallel using infinite domain boundary conditions on each subdomain. Second, information is transferred globally through a coarse-grid representation of the charge, and a global coarse-grid solution is found. Third, a fine-grid solution is computed on each subdomain using boundary conditions set with the global coarse solution, corrected locally with fine-grid information from nearby subdomains. There are three important features of our algorithm. First, our method requires only a single iteration between the local fine-grid solutions and the global coarse representation. Second, the error introduced by the domain decomposition is small relative to the solution error obtained in a single-grid calculation. Third, the computed solution is second-order accurate and only weakly dependent on the coarse-grid spacing and the number of subdomains. As a result of these features, we are able to compute accurate solutions in parallel with a much smaller ratio of communication to computation than more traditional domain decomposition methods. We present results to verify the overall accuracy, confirm the small communication costs, and demonstrate the parallel scalability of the method. |
Regionalism as an Engine of Multilateralism: A Case for a Single East Asian FTA | As East Asia becomes increasingly integrated through market-driven trade and FDI activities, free trade agreements (FTAs) are proliferating. Consolidation of multiple and overlapping FTAs into a single East Asian FTA can help mitigate the harmful noodle bowl effects of different or competing tariffs, standards, and rules. A region-wide FTA will also encourage participation of low-income countries and reduce trade-related business costs, particularly for small and medium enterprises. A computable general equilibrium (CGE) model examines the economic impact of various types of FTAs in East Asia (among ASEAN+1, ASEAN+3, and ASEAN+6) finding that consolidation at the ASEAN+6 level would yield the largest gains to East Asia among plausible regional trade arrangements. |
Effective LSTMs for Target-Dependent Sentiment Classification | Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between the target word and context words when building a learning system. In this paper, we develop two target-dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with a standard LSTM does not perform well. Incorporating target information into the LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons. |
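To make the model description above concrete, here is a minimal PyTorch sketch in the spirit of a target-dependent LSTM: one LSTM reads the left context up to and including the target, another reads the right context (reversed) up to and including the target, and their final hidden states are concatenated for classification. All dimensions, class counts, and the token layout are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """Sketch of a target-dependent LSTM: a left-to-right LSTM over the left
    context plus target and a right-to-left LSTM over the right context plus
    target; final hidden states are concatenated and classified."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm_left = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.lstm_right = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, left_ids, right_ids_reversed):
        # left_ids: tokens from sentence start through the target (batch, len_l)
        # right_ids_reversed: tokens from sentence end back to the target (batch, len_r)
        _, (h_left, _) = self.lstm_left(self.embed(left_ids))
        _, (h_right, _) = self.lstm_right(self.embed(right_ids_reversed))
        features = torch.cat([h_left[-1], h_right[-1]], dim=-1)
        return self.classifier(features)

# Toy usage with made-up token ids (batch of 1).
model = TDLSTM(vocab_size=5000)
logits = model(torch.tensor([[3, 17, 42]]), torch.tensor([[8, 42]]))
```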
Risks and risk mitigation in global software development: A tertiary study | Context: There is extensive interest in global software development (GSD) which has led to a large number of papers reporting on GSD. A number of systematic literature reviews (SLRs) have attempted to aggregate information from individual studies. Objective: We wish to investigate GSD SLR research with a focus on discovering what research has been conducted in the area and to determine if the SLRs furnish appropriate risk and risk mitigation advice to provide guidance to organizations involved with GSD. Method: We performed a broad automated search to identify GSD SLRs. Data extracted from each study included: (1) authors, their affiliation and publishing venue, (2) SLR quality, (3) research focus, (4) GSD risks, (5) risk mitigation strategies and, (6) for each SLR the number of primary studies reporting each risk and risk mitigation strategy. Results: We found a total of 37 papers reporting 24 unique GSD SLR studies. Major GSD topics covered include: (1) organizational environment, (2) project execution, (3) project planning and control and (4) project scope and requirements. We extracted 85 risks and 77 risk mitigation advice items and categorized them under four major headings: outsourcing rationale, software development, human resources, and project management. The largest group of risks was related to project management. GSD outsourcing rationale risks ranked highest in terms of primary study support but in many cases these risks were only identified by a single SLR. Conclusions: The focus of the GSD SLRs we identified is mapping the research rather than providing evidence-based guidance to industry. Empirical support for the majority of risks identified is moderate to low, both in terms of the number of SLRs identifying the risk, and in the number of primary studies providing empirical support. Risk mitigation advice is also limited, and empirical support for these items is low. |
The EIT-based global inhomogeneity index is highly correlated with regional lung opening in patients with acute respiratory distress syndrome | The electrical impedance tomography (EIT)-based global inhomogeneity (GI) index was introduced to quantify tidal volume distribution within the lung. Up to now, the GI index has been evaluated for plausibility, but an analysis of how it is influenced by various physiological factors is still missing. The aim of our study was to evaluate the influence of the proportion of open lung regions measured by EIT on the GI index. A constant low-flow inflation maneuver was performed in 18 acute respiratory distress syndrome (ARDS) patients (58 ± 14 years, mean age ± SD) and 8 lung-healthy patients (41 ± 12 years) under controlled mechanical ventilation. EIT raw data were acquired at 25 scans/s and reconstructed offline. Recruited lung regions were identified as those image pixels of the lung regions within the EIT scans where local impedance amplitudes exceeded 10% of the maximum amplitude during the maneuver. A series of GI indices was calculated during mechanical lung inflation, based on the differential images obtained between different time points. Respiratory system elastance (Ers) values were calculated at 10 lung volume levels during the low-flow maneuver. The GI index decreased during low-flow inflation, while the percentage of open lung regions increased. The values correlated highly in both ARDS (r2 = 0.88 ± 0.08, p < 0.01) and lung-healthy patients (r2 = 0.92 ± 0.05, p < 0.01). Ers and the GI index were also significantly correlated in 16 out of 18 ARDS patients (r2 = 0.84 ± 0.13, p < 0.01) and in 6 out of 8 lung-healthy patients (r2 = 0.84 ± 0.07, p < 0.01). Significant differences were found in GI values between the two groups (0.52 ± 0.21 for ARDS and 0.41 ± 0.04 for lung-healthy patients, p < 0.05) as well as in Ers values (0.017 ± 0.008 cmH2O/ml for ARDS and 0.009 ± 0.001 cmH2O/ml for lung-healthy patients, p < 0.01). We conclude that the GI index is a reliable measure of ventilation heterogeneity that is highly correlated with lung recruitability measured with EIT. The GI index may prove to be a useful EIT-based index to guide ventilation therapy. |
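For readers unfamiliar with the GI index, the sketch below computes it from a differential (tidal) EIT image using the commonly cited definition: the sum of absolute deviations of lung-pixel impedance values from their median, normalized by the sum of those values. The array shapes, the lung-mask construction, and any preprocessing are illustrative assumptions and may differ from the study's exact pipeline.

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """Compute the EIT-based GI index on a differential (tidal) impedance image.
    `tidal_image` is a 2-D array of pixel impedance changes; `lung_mask` is a
    boolean array marking pixels identified as lung regions.  Larger values
    indicate a more heterogeneous tidal volume distribution."""
    lung_values = tidal_image[lung_mask]
    median = np.median(lung_values)
    return np.sum(np.abs(lung_values - median)) / np.sum(lung_values)

# Toy example: a perfectly homogeneous image gives GI = 0.
image = np.ones((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
print(global_inhomogeneity_index(image, mask))  # 0.0
```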
New insights into neuron-glia communication. | Two-way communication between neurons and nonneural cells called glia is essential for axonal conduction, synaptic transmission, and information processing and thus is required for normal functioning of the nervous system during development and throughout adult life. The signals between neurons and glia include ion fluxes, neurotransmitters, cell adhesion molecules, and specialized signaling molecules released from synaptic and nonsynaptic regions of the neuron. In contrast to the serial flow of information along chains of neurons, glia communicate with other glial cells through intracellular waves of calcium and via intercellular diffusion of chemical messengers. By releasing neurotransmitters and other extracellular signaling molecules, glia can affect neuronal excitability and synaptic transmission and perhaps coordinate activity across networks of neurons. |
Subsampled Exponential Mechanism: Differential Privacy in Large Output Spaces | In the last several years, differential privacy has become the leading framework for private data analysis. It provides bounds on the amount that a randomized function can change as the result of a modification to one record of a database. This requirement can be satisfied by using the exponential mechanism to perform a weighted choice among the possible alternatives, with better options receiving higher weights. However, in some situations the number of possible outcomes is too large to compute all weights efficiently. We present the subsampled exponential mechanism, which scores only a sample of the outcomes. We show that it still preserves differential privacy, and fulfills a similar accuracy bound. Using a clustering application, we show that the subsampled exponential mechanism outperforms a previously published private algorithm and is comparable to the full exponential mechanism but more scalable. |
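A minimal sketch of the idea described above, assuming a quality (score) function with known sensitivity: draw a uniform random subsample of the outcome space and run the standard exponential mechanism on that subsample only. The function names and the toy quality function are illustrative; the paper's exact sampling scheme and its accuracy analysis are not reproduced here.

```python
import math
import random

def subsampled_exponential_mechanism(outcomes, quality, epsilon, sensitivity,
                                     sample_size, rng=random):
    """Score only a uniform random sample of the outcomes, then choose among
    the sampled outcomes with probability proportional to
    exp(epsilon * score / (2 * sensitivity)), as in the exponential mechanism.
    `quality(r)` scores an outcome on the (implicit) database; `sensitivity`
    bounds how much a score can change when one record changes."""
    sample = rng.sample(outcomes, min(sample_size, len(outcomes)))
    scores = [quality(r) for r in sample]
    max_score = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(epsilon * (s - max_score) / (2.0 * sensitivity)) for s in scores]
    return rng.choices(sample, weights=weights, k=1)[0]

# Toy usage: privately pick a good candidate from a large output space.
candidates = list(range(10_000))
pick = subsampled_exponential_mechanism(
    candidates, quality=lambda r: -abs(r - 4242), epsilon=1.0,
    sensitivity=1.0, sample_size=200)
```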
The Logic of Plausible Reasoning: A Core Theory | 1. a formal representation of plausible inference patterns, such as deductions, inductions, and analogies, that are frequently employed in answering everyday questions; 2. a set of parameters, such as conditional likelihood, typicality, and similarity, that affect the certainty of people's answers to such questions; and 3. a system relating the different plausible inference patterns and the different certainty parameters. |
Causes, magnitude and management of burns in under-fives in district hospitals in Dar es Salaam, Tanzania. | OBJECTIVES
To determine the causes, magnitude and management of burns in children under five years of age who were admitted in the district hospitals of Dar es Salaam City, Tanzania.
METHODS
In this study, a total of 204 under-fives were enrolled. Questionnaires were used to elicit whether the parent/caretaker knew the cause of the burns, what was done immediately after the burn injury, the first aid given immediately after the burn, the source of the knowledge of first aid, and when the child was taken to the hospital. The questionnaire was also supplemented with data on the management of burns in the hospitals, obtained through observation and review of the treatment files.
RESULTS
Forty-nine percent were male while 50.5% were female. Most of the children (54.9%) were aged between 1 and 2 years. 78.4% had scalds while 21.6% had flame burns. No children were found to have burns caused by chemicals or electricity. Most of the burns (97.5%) occurred accidentally, although some (2.5%) were intentional. 68.6% of these burn injuries occurred in the kitchen. Immediately after the burn, 87.3% of the children had first aid applied to their wounds while 12.7% did not have anything applied. Of the agents used, honey was the most common (32.8%), followed by cold water (16.7%). The source of knowledge on these agents was relatives and friends (72.5%), schools (7%), media (6%) and medical personnel (14%). The study further revealed that analgesics, intravenous fluids, antiseptics and antibiotics were the drugs used for the treatment of burns in the hospitals and that there was no specialized unit for burns in the hospitals.
CONCLUSIONS
Causes of childhood burns are largely preventable, requiring active social/medical education and public enlightenment campaigns on the various methods of prevention. The government should ensure that hospitals have specialized units for managing burn cases and that the socio-economic status of its people is improved. |
Building a High-Level Dataflow System on top of Map-Reduce: The Pig Experience | Increasingly, organizations capture, transform and analyze enormous data sets. Prominent examples include internet companies and e-science. The Map-Reduce scalable dataflow paradigm has become popular for these applications. Its simple, explicit dataflow programming model is favored by some over the traditional high-level declarative approach: SQL. On the other hand, the extreme simplicity of Map-Reduce leads to much low-level hacking to deal with the many-step, branching dataflows that arise in practice. Moreover, users must repeatedly code standard operations such as join by hand. These practices waste time, introduce bugs, harm readability, and impede optimizations. Pig is a high-level dataflow system that aims at a sweet spot between SQL and Map-Reduce. Pig offers SQL-style high-level data manipulation constructs, which can be assembled in an explicit dataflow and interleaved with custom Map- and Reduce-style functions or executables. Pig programs are compiled into sequences of Map-Reduce jobs, and executed in the Hadoop Map-Reduce environment. Both Pig and Hadoop are open-source projects administered by the Apache Software Foundation. This paper describes the challenges we faced in developing Pig, and reports performance comparisons between Pig execution and raw Map-Reduce execution. |
Criterion-Related Validity of a Diarrhea Questionnaire in HIV-Infected Patients | Clinical trials evaluating HIV-related diarrhea have used varied unidimensional end points to assess diarrhea severity. We hypothesized that a self-reported measure of diarrhea that assesses stool form, stool frequency, and diarrhea morbidity would accurately portray the severity of HIV-related diarrhea. During a clinical trial for HIV-related diarrhea, we evaluated the instrument among 17 patients, comparing survey results with objective measures of diarrhea morbidity recorded concurrently. The survey scores demonstrated consistently high Spearman correlations with nursing assessment of stool form (0.6693), observed stool frequency (0.7023), and cumulative stool weight (0.8216), all recorded over six days of intensive inpatient observation (P < 0.01 for each). Of the three components of the survey, only the stool form assessment, which uses pictorial representations of stool consistency, correlated significantly across all three objective measures (0.8069–0.8792). In demonstrating the concurrent, criterion-related validity of this survey, we found it helpful for evaluating HIV-related diarrhea and suggest its utility for HIV-seronegative subjects as well. |
Mollifying Quantum Field Theory or Lattice QFT in Minkowski Spacetime and Symmetry Breaking | This work develops and applies the concept of mollification in order to smooth out highly oscillatory exponentials. This idea, known for quite a while in the mathematical community (mollifiers are a means to smooth distributions), is new to numerical Quantum Field Theory. It is potentially very useful for calculating phase transitions [highly oscillatory integrands in general], for computations with imaginary chemical potentials and Lattice QFT in Minkowski spacetime. |
Weighted Task Regularization for Multitask Learning | Multitask learning has been proven to be more effective than traditional single-task learning on many real-world problems by simultaneously transferring knowledge among different tasks, which may each suffer from limited labeled data. However, in order to build a reliable multitask learning model, nontrivial effort to construct the relatedness between different tasks is critical. When the number of tasks is not large, the learning outcome may suffer if there exist outlier tasks that inappropriately bias the majority. Rather than identifying or discarding such outlier tasks, we present a weighted regularized multitask learning framework, based on regularized multitask learning, which uses statistical metrics such as Kullback-Leibler divergence to assign weights prior to the regularization process, robustly reducing the impact of outlier tasks and resulting in better learned models for all tasks. We then show that this formulation can be solved in dual form, like optimizing a standard support vector machine with varied kernels. We perform experiments using both a synthetic dataset and a real-world dataset from the petroleum industry, which show that our methodology outperforms existing methods. |
Information extraction from scientific paper using rhetorical classifier | Time constraints often lead a reader of a scientific paper to read only the title and abstract, but reading these parts alone is often ineffective. This study aims to extract information automatically in order to help readers obtain structured information from a scientific paper. The information extraction is done by rhetorical classification of each sentence in a scientific paper. Rhetorical information is the intention the author of the paper wants to convey to the reader. This research used a corpus-based approach to build the rhetorical classifier. Since there was a lack of rhetorical corpora, we constructed our own corpus, which is a collection of sentences that have been labeled with rhetorical information. Each sentence is represented as a vector of content, location, citation, and meta-discourse features. This collection of feature vectors is used to build rhetorical classifiers using machine learning techniques. Experiments were conducted to select the best learning techniques for the rhetorical classifier. The training set consists of 7239 labeled sentences, and the testing set consists of 3638 labeled sentences. We used the WEKA (Waikato Environment for Knowledge Analysis) and LibSVM libraries. The learning techniques considered were Naive Bayes, C4.5, Logistic Regression, Multi-Layer Perceptron, PART, Instance-based Learning, and Support Vector Machines (SVM). The best performers were the SVM and Logistic Regression classifiers with an accuracy of 0.51. By applying a one-against-all strategy, the SVM accuracy can be improved to 0.60. |
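As a rough illustration of the sentence-level rhetorical classification pipeline described above, the following scikit-learn sketch trains a one-vs-rest linear SVM on a tiny made-up set of labeled sentences. The label set, the example sentences, and the TF-IDF features are illustrative stand-ins; the study used WEKA/LibSVM with content, location, citation, and meta-discourse features rather than this setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: sentences labeled with rhetorical categories.
# The real corpus (7239 training / 3638 test sentences) is not reproduced here.
sentences = [
    "We propose a new method for sentence classification.",
    "Previous work has addressed this problem using rule-based systems.",
    "Our experiments show a significant improvement in accuracy.",
    "The results suggest that the approach generalizes to other domains.",
]
labels = ["AIM", "BACKGROUND", "RESULT", "CONCLUSION"]

# A one-vs-rest linear SVM over TF-IDF features stands in for the
# "one-against-all" SVM strategy mentioned in the abstract.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(sentences, labels)
print(classifier.predict(["We evaluate the classifier on a held-out test set."]))
```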
Wideband Patch Antenna for HPM Applications | This paper presents the design, fabrication, and characterization of a compact wideband antenna for high-power microwave applications. The proposed antennas are arrays of high-power wideband patches with high compactness, less than λ/10 thick. The concept developed can be fed by high-voltage signals (up to 60 kV) in repetitive operation. Two designs are produced at central frequencies of 350 MHz and 1 GHz. Their relative bandwidths are larger than 40% at 350 MHz and 25% at 1 GHz for S11 < -10 dB, respectively. The arrays studied produce a gain of more than 14 dB. |
A pilot study to test the effects of art-making classes for family caregivers of patients with cancer. | PURPOSE/OBJECTIVES
To test the effects of an art-making class (AMC) on reducing anxiety and stress among family caregivers of patients with cancer.
DESIGN
A pretest and post-test quasi-experimental design.
SETTING
A residential care facility near tertiary treatment centers in the southeastern United States.
SAMPLE
The convenience sample of 69 family caregivers was aged 18-81 years (X = 48 years) and predominantly Catholic. Most had at least a high school education. Two-thirds were daughters, wives, or mothers of patients with cancer.
METHODS
Participants completed a demographic data survey and a Beck Anxiety Inventory (BAI). Researchers collected a saliva sample from each participant to measure salivary cortisol, which indicates stress levels. Following pretesting, a two-hour AMC was delivered. Post-tests included a repeat BAI and a second saliva sample.
MAIN RESEARCH VARIABLES
Anxiety and stress.
FINDINGS
Anxiety was significantly reduced after AMC. Stress was reduced.
CONCLUSIONS
The AMC appeared to reduce anxiety and stress. The addition of a control group and replication with larger numbers are suggested. The physiologic cortisol measure corroborated BAI findings but was difficult to obtain from some cultural groups and was expensive to analyze.
IMPLICATIONS FOR NURSING
Family caregivers may benefit from participation in art-making interventions. Nurses should continue to investigate the use of creative approaches to promote holistic care. |
The amputee mobility predictor: an instrument to assess determinants of the lower-limb amputee's ability to ambulate. | OBJECTIVES
To describe the development of the Amputee Mobility Predictor (AMP) instrument designed to measure ambulatory potential of lower-limb amputees with (AMPPRO) and without (AMPnoPRO) the use of a prosthesis, and to test its reliability and validity.
DESIGN
Measurement study using known groups method and concurrence with existing measures.
SETTING
Academic medical center.
PARTICIPANTS
A convenience sample of 191 lower-limb amputee subjects who had completed prosthetic training, 24 in the reliability study (mean age +/- standard deviation, 68.3+/-17.9y, range, 28-99y) and 167 in the validity study (mean age, 54.8+/-18.6y; range, 18-100y).
INTERVENTIONS
Not applicable.
MAIN OUTCOME MEASURES
Intra- and interrater reliability; construct validity by known groups method; concurrent validity by comparisons with 6-minute walk test, Comorbidity Index, age, and time since amputation; predictive validity by comparison with 6-minute walk test after controlling for other factors.
RESULTS
Interrater reliability was .99 for subjects tested with and without their prosthesis; intrarater reliability was .96 and .97. Both the AMPnoPRO (P<.0001) and the AMPPRO scores (P<.0001) distinguished among the 4 Medicare functional classification levels. The AMP correlated strongly with 6-minute walk scores (AMPnoPRO r=.69, P<.0001; AMPPRO r=.82, P<.0001) and the amputee activity survey (AMPnoPRO r=.67, P<.0001; AMPPRO r=.77, P<.0001), and negatively correlated with age (AMPnoPRO r=-.69, P<.0001; AMPPRO r=.56, P<.0001) and comorbidity (AMPnoPRO r=-.43, P<.0001; AMPPRO r=.38, P<.0001).
CONCLUSION
The AMP with and without a prosthesis are reliable and valid measures for the assessment of functional ambulation in lower-limb amputee subjects. |
A Photovoltaic Array Simulation Model for Matlab-Simulink GUI Environment | A photovoltaic array (PVA) simulation model to be used in Matlab-Simulink GUI environment is developed and presented in this paper. The model is developed using basic circuit equations of the photovoltaic (PV) solar cells including the effects of solar irradiation and temperature changes. The new model was tested using a directly coupled dc load as well as ac load via an inverter. Test and validation studies with proper load matching circuits are simulated and results are presented here. |
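The abstract above describes a model built from basic PV circuit equations; a common formulation is the single-diode model, sketched below in Python and solved with a damped fixed-point iteration. All module parameters (photocurrent, saturation current, resistances, ideality factor) are illustrative placeholders and are not taken from the paper's Matlab-Simulink implementation.

```python
import math

def pv_module_current(voltage, irradiance, temp_c,
                      i_sc=8.21, n_cells=54, k_i=0.0032,
                      t_ref=25.0, g_ref=1000.0, i_sat=1.2e-7,
                      r_s=0.2, r_sh=300.0, ideality=1.3, iterations=50):
    """Single-diode PV model solved by damped fixed-point iteration:
        I = I_ph - I_sat*(exp((V + I*R_s)/(a*N*V_t)) - 1) - (V + I*R_s)/R_sh
    where I_ph scales with irradiance and temperature.  Parameter values are
    illustrative placeholders for a generic crystalline-silicon module."""
    k, q = 1.380649e-23, 1.602176634e-19
    t_k = temp_c + 273.15
    v_t = k * t_k / q                      # thermal voltage per cell
    i_ph = (i_sc + k_i * (temp_c - t_ref)) * irradiance / g_ref
    current = i_ph                         # initial guess
    for _ in range(iterations):
        v_d = voltage + current * r_s
        i_new = (i_ph
                 - i_sat * (math.exp(v_d / (ideality * n_cells * v_t)) - 1.0)
                 - v_d / r_sh)
        current = 0.5 * current + 0.5 * i_new   # damped update for stability
    return current

# Sweep the I-V curve at standard test conditions (1000 W/m^2, 25 °C).
curve = [(v, pv_module_current(v, 1000.0, 25.0)) for v in range(0, 33)]
```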
Annealing-Driven Microstructural Evolution and Its Effects on the Surface and Nanomechanical Properties of Cu-Doped NiO Thin Films | The effects of annealing temperature on the structural, surface morphological and nanomechanical properties of Cu-doped (Cu-10 at %) NiO thin films grown on glass substrates by radio-frequency magnetron sputtering are investigated in this study. The X-ray diffraction (XRD) results indicated that the as-deposited Cu-doped NiO (CNO) thin films predominantly consisted of highly defective (200)-oriented grains, as revealed by the broadened diffraction peaks. Progressively increasing the annealing temperature from 300 to 500 °C appeared to drive the films into a more equiaxed polycrystalline structure with enhanced film crystallinity, as manifested by the increased intensities and narrower peak widths of (111), (200) and even (220) diffraction peaks. The changes in the film microstructure appeared to result in significant effects on the surface energy, in particular the wettability of the films as revealed by the X-ray photoelectron spectroscopy and the contact angle of the water droplets on the film surface. The nanoindentation tests further revealed that both the hardness and Young’s modulus of the CNO thin films increased with the annealing temperature, suggesting that the strain state and/or grain boundaries may have played a prominent role in determining the film’s nanomechanical characterizations. |
Hybrides Steuerungs- und Regelungskonzept für das hochautomatisierte Fahren auf Autobahnen | Zusammenfassung: A novel hybrid system architecture for continuous open-loop and closed-loop control systems with discrete decision-making processes is presented. Its operation is illustrated using highly automated driving on freeways and the emergency stop assistant as examples. Since the robustness of such systems will be decisive for their future deployment, it was placed at the center of the development of this approach. Summary: An innovative hybrid system structure for continuous control systems with discrete decision-making processes is presented. The functionality is demonstrated on a highly automated driving system on freeways and on the emergency stop assistant. Due to the fact that robustness will be a determining factor for the future usage of these systems, the presented structure focuses on this feature. |
Sample Selection Bias as a Specification Error | |
Damascene Process and Chemical Mechanical Planarization | The constant demand to scale down transistors and improve device performance has led to material as well as process changes in the formation of IC interconnects. Traditionally, aluminum has been used to form the IC interconnects. The process involved subtractive etching of blanket aluminum as defined by the patterned photoresist. However, the scaling and performance demands have led to a transition from aluminum to copper interconnects. The primary motivation behind the introduction of copper for forming interconnects is the advantages that copper offers over aluminum. Table 1 below gives a comparison between aluminum and copper properties. |
HOW MUCH DOES INDUSTRY MATTER, REALLY? | In this paper, we examine the importance of year, industry, corporate-parent, and business-specific effects on the profitability of U.S. public corporations within specific 4-digit SIC categories. Our results indicate that year, industry, corporate-parent, and business-specific effects account for 2 percent, 19 percent, 4 percent, and 32 percent, respectively, of the aggregate variance in profitability. We also find that the importance of the effects differs substantially across broad economic sectors. Industry effects account for a smaller portion of profit variance in manufacturing but a larger portion in lodging/entertainment, services, wholesale/retail trade, and transportation. Across all sectors we find a negative covariance between corporate-parent and industry effects. A detailed analysis suggests that industry, corporate-parent, and business-specific effects are related in complex ways. |
Safety of long-term biologic therapy in rheumatologic patients with a previously resolved hepatitis B viral infection. | European and Asian studies report conflicting data on the risk of hepatitis B virus (HBV) reactivation in rheumatologic patients with a previously resolved HBV (prHBV) infection undergoing long-term biologic therapies. In this patient category, the safety of different immunosuppressive biologic therapies, including rituximab, was assessed. A total of 1218 Caucasian rheumatologic patients, admitted consecutively as outpatients between 2001 and 2012 and taking biologic therapies, underwent evaluation of anti-HCV and HBV markers as well as liver aminotransferases every 3 months. Starting from January 2009, HBV DNA monitoring was performed in patients with a prHBV infection who had started immunosuppressive biologic therapy both before and after 2009. Patients were considered to have elevated aminotransferase levels if values were >1× the upper normal limit at least once during follow-up. We found 179 patients with a prHBV infection (14 treated with rituximab, 146 with anti-tumor necrosis factor-alpha, and 19 with other biologic therapies) and 959 patients without a prHBV infection or other liver disease (controls). The mean age in the former group was significantly higher than in the controls. Patients with a prHBV infection never showed detectable HBV DNA serum levels or antibody to hepatitis B surface antigen/hepatitis B surface antigen seroreversion. However, when the prevalence of elevated aminotransferases in patients with a prHBV infection was compared to controls, it was significantly higher in the former group only for aminotransferase levels >1× the upper normal limit but not when aminotransferase levels >2× the upper normal limit were considered.
CONCLUSION
Among patients with a prHBV infection and rheumatologic indications for long-term biologic therapies, HBV reactivation was not seen; this suggests that universal prophylaxis is not justified and is not cost-effective in this clinical setting. |
An Emergency-Demand-Response Based Under Speed Load Shedding Scheme to Improve Short-Term Voltage Stability | The dynamics of load, especially induction motors, are the driving force for short-term voltage stability (STVS) problems. In this paper, the equivalent rotation speed of motors is identified online and its recovery time is estimated next to realize an emergency-demand-response (EDR) based under speed load shedding (USLS) scheme to improve STVS. The proposed scheme consists of an EDR program and two regular stages (RSs). In the EDR program, contracted load is used as a fast-response resource rather than the last defense. The estimated recovery time (ERT) is used as the triggering signal for the EDR program. In the RSs, the amount of load to be shed at each bus is determined according to the assigned weights based on ERTs. Case studies on a practical power system in China Southern Power Grid have validated the performance of the proposed USLS scheme under various contingency scenarios. The utilization of EDR resources and the adaptive distribution of shedding amount in RSs guarantee faster voltage recovery. Therefore, USLS offers a new and more effective approach compared with existing under voltage load shedding to improve STVS. |
Effects of a polyelectrolyte additive on the selective dialysis membrane permeability for low-molecular-weight proteins. | BACKGROUND
Improving the sieving characteristics of dialysis membranes enhances the clearance of low-molecular-weight (LMW) proteins and may have an impact on outcome in patients receiving haemodialysis. To approach this goal, a novel polyelectrolyte additive process was applied to a polyethersulphone (PES) membrane.
METHODS
Polyelectrolyte-modified PES was characterized in vitro by measuring complement activation and sieving coefficients of cytochrome c and serum albumin. In a prospective, randomized, cross-over study, instantaneous plasma water clearances and reduction rates of LMW proteins [beta(2)-microglobulin (b2m), cystatin c, myoglobin, retinol binding protein] were determined in eight patients receiving dialysis treatment with PES in comparison with polysulphone (PSU). Biocompatibility was assessed by determination of transient leucopenia, plasma levels of complement C5a, thrombin-antithrombin III (TAT), myeloperoxidase (MPO) and elastase (ELT).
RESULTS
PES showed a steeper sieving profile and lower complement activation in vitro compared with PSU. Instantaneous clearance (69 +/- 8 vs. 58 +/- 3 ml/min; P < 0.001) and reduction rate (72.3 +/- 1 5% vs 66.2 +/- 6.1%; P < 0.001) of b2m were significantly higher with PES as compared with PSU. With higher molecular weight of proteins, differences in the solute removal between PES and PSU further increased, whereas albumin loss remained low (PES, 0.53 +/- 0.17 vs PSU, <0.22 g/dialysis). MPO, ELT and TAT did not differ between the two membranes. In contrast, leucopenia was less pronounced and C5a generation was significantly lower during dialysis with PES.
CONCLUSIONS
Polyelectrolyte modification of PES results in a higher LMW protein removal and in optimized biocompatibility. Whether these findings translate into better outcome of patients receiving haemodialysis requires further studies. |
Modelling stock-market investors as Reinforcement Learning agents | Decision making in uncertain and risky environments is a prominent area of research. Standard economic theories fail to fully explain human behaviour, while a potentially promising alternative may lie in the direction of Reinforcement Learning (RL) theory. We analyse data for 46 players extracted from a financial market online game and test whether Reinforcement Learning (Q-Learning) could capture these players' behaviour, using a riskiness measure based on financial modeling. Moreover, we test an earlier hypothesis that players are “naïve” (short-sighted). Our results indicate that Reinforcement Learning is a component of the decision-making process. We also find that there is a significant improvement in fit for some of the players when using a full RL model against a reduced (myopic) version, where only immediate reward is valued by the players, indicating that not all players are naïve. |
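For reference, a tabular Q-learning update of the kind the abstract alludes to is sketched below; setting the discount factor gamma to zero recovers the myopic, immediate-reward-only variant used to test the "naïve player" hypothesis, while gamma > 0 gives the full RL model. The state encoding and the buy/hold/sell action set are illustrative assumptions, not the paper's actual setup.

```python
import random
from collections import defaultdict

ACTIONS = ("buy", "hold", "sell")

def q_learning_update(q_table, state, action, reward, next_state,
                      alpha=0.1, gamma=0.0, actions=ACTIONS):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    With gamma=0 only the immediate reward matters (the "myopic" player)."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])

def choose_action(q_table, state, epsilon=0.1, actions=ACTIONS, rng=random):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if rng.random() < epsilon:
        return rng.choice(list(actions))
    return max(actions, key=lambda a: q_table[(state, a)])

# Toy usage: one interaction step for a player in some discretized market state.
q = defaultdict(float)
action = choose_action(q, state="falling_market")
q_learning_update(q, "falling_market", action, reward=-1.0,
                  next_state="flat_market", gamma=0.9)
```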
Machine Comprehension Using Match-LSTM and Answer Pointer | Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features. |
Summary of Usability Evaluations of an Educational Augmented Reality Application | We summarize three evaluations of an educational augmented reality application for geometry education, which have been conducted in 2000, 2003 and 2005 respectively. Repeated formative evaluations with more than 100 students guided the redesign of the application and its user interface throughout the years. We present and discuss the results regarding usability and simulator sickness providing guidelines on how to design augmented reality applications utilizing head-mounted displays. |
Soft Robotics: Review of Fluid-Driven Intrinsically Soft Devices; Manufacturing, Sensing, Control, and Applications in Human-Robot Interaction | The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, and high Tg elastomers. Dependent on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens of megapascals down to kilopascals; robots composed of such materials are intrinsically soft: they are always compliant independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities, often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot: those powered via fluidic pressurization. |
Design and Analysis of Machine Tool Spindle for Special Purpose Machines (SPM) and Standardizing the Design Using Autodesk Inventor (I-Logic) | The work focuses on designing a machine tool spindle for an SPM to carry out a face-milling operation on the given work-piece. The machine tool spindle is one of the major mechanical components in any machining center. The main structural properties of the spindle generally depend on the dimensions of the shaft, motor, tool holder, bearings and the design configuration of the overall spindle assembly. The bearing arrangements are defined by the type of operation, the cutting forces acting and the life of the bearings. The forces affecting the machine tool spindle during machining, namely the tangential force (Fz), axial force (Fx) and radial force (Fy), are estimated. The analysis is carried out based on the maximum cutting force incurred. Design calculations are performed for the bearing and spindle shaft using an analytical approach to determine the bearing life and the rigidity of the spindle shaft, respectively. The spindle is then modeled using the Autodesk Inventor modeling software. The finite element method is used on the spindle shaft and critical structural components to determine the total deformation and von Mises stress in the components. In the finite element method, static structural analysis is performed to determine stress and total deformation in the components. Validation is obtained by comparing the analytical results with the finite element analysis results from ANSYS. In this project we use iLogic, a program in Autodesk Inventor, to design different sizes of machine tool spindle as per requirements at a faster rate. iLogic permits rules-driven design, which provides a simple way to capture and reuse your work. The use of iLogic helps in standardization and automation of the design processes and in configuring virtual products. It also supports design optimization. |
Role of Social Media on Information Accessibility | Social media is the gathering place of a large pool of consumers. It is a repository of consumer information and acts as a means of spreading information to build market presence. Most literature states that organizational usage of social media enhances customer relations, but social media also acts as a medium for information acquisition. Few previous studies have investigated the role of social media in information accessibility. Therefore this study examined the impact of social media on information accessibility. A total of 171 organizations responded to the survey, and the results showed that social media usage has a positive impact on information accessibility. It was also found that factors such as interactivity, trust and institutional pressure positively influence social media usage in organizations. This study provides a clearer understanding of the real importance of social media and its benefits for information acquisition. The results should motivate and guide organizations in the adoption of social media for information acquisition, which is important for understanding customers, competitors and the industry and for developing strategies to enhance business performance. |
Agent-Based and Individual-Based Modeling: A Practical Introduction | agent-based models of geographical systems PDF architecture-based design of multi-agent systems PDF agent-based models of the economy from theories to applications PDF new trends in agent based complex automated negotiations PDF churchills secret agent a novel based on a true story PDF agent-based models quantitative applications in the social sciences PDF the complexity of cooperation agent-based models of competition and collaboration PDF party competition an agent-based model princeton studies in complexity PDF modeling for decision support in network-based services the application of quantitative modeling to service science lecture notes in business information processing volume 42 PDF |
Associations of Soluble CD14 and Endotoxin with Mortality, Cardiovascular Disease, and Progression of Kidney Disease among Patients with CKD. | BACKGROUND AND OBJECTIVES
CD14 plays a key role in innate immunity as a pattern-recognition receptor for endotoxin. Higher levels of soluble CD14 (sCD14) are associated with overall mortality in hemodialysis patients. The influence of kidney function on plasma sCD14 levels and its relationship with adverse outcomes in patients with CKD not yet on dialysis are unknown. This study examines the associations of plasma levels of sCD14 and endotoxin with adverse outcomes in patients with CKD.
DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS
We measured plasma levels of sCD14 and endotoxin in 495 Leuven Mild-to-Moderate CKD Study participants. Mild-to-moderate CKD was defined as presence of kidney damage or eGFR<60 ml/min per 1.73 m(2) for ≥3 months, with exclusion of patients on RRT. Study participants were enrolled between November 2005 and September 2006.
RESULTS
Plasma sCD14 was negatively associated with eGFR (ρ=-0.34, P<0.001). During a median follow-up of 54 (interquartile range, 23-58) months, 53 patients died. Plasma sCD14 was predictive of mortality, even after adjustment for renal function, Framingham risk factors, markers of mineral bone metabolism, and nutritional and inflammatory parameters (hazard ratio [HR] per SD higher, 1.90; 95% confidence interval [95% CI], 1.32 to 2.74; P<0.001). After adjustment for the same risk factors, plasma sCD14 was also a predictor of cardiovascular disease (HR, 1.30; 95% CI, 1.00 to 1.69; P=0.05). Although plasma sCD14 was associated with progression of CKD, defined as reaching ESRD or doubling of serum creatinine, in models adjusted for CKD-specific risk factors (HR, 1.24; 95% CI, 1.01 to 1.52; P=0.04), significance was lost when adjusted for proteinuria (HR, 1.19; 95% CI, 0.96 to 1.48; P=0.11). There was no correlation between plasma endotoxin and sCD14 (ρ=-0.06, P=0.20), nor was endotoxin independently associated with adverse outcomes during follow-up.
CONCLUSIONS
Plasma sCD14 is elevated in patients with decreased kidney function and associated with mortality and cardiovascular disease in patients with CKD not yet on dialysis. |
Alternative energy technologies | Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising. |
Bosnian Peacekeeping and EU Tax Harmony: Evolving Policy Frames and Changing Policy Processes | The events of 11 September 2001 vividly illustrate how an international issue can leap to the top of a nation's foreign policy agenda. Although combating terrorism has long been a focus for collaboration among the dominant players in the international system, it has rarely occupied the attention of high-level policy-makers as it has since the terrorist attacks on the World Trade Center and the Pentagon. This changed policy environment forces us to consider why some issues (such as terrorism) come to the fore on the foreign policy agenda, whether dramatically or in a more evolutionary fashion, while others (such as foreign aid programmes) persist in relative obscurity and still others (such as most trade issues) command a significant but not excessive share of the policy spotlight over long periods of time. If we accept the likely proposition that the fundamental characteristics of the issues themselves change very little (if at all), we then need to understand what causes the relative importance of particular issues to change, at least for short periods of time. Understanding the changeability of the international policy environment has several important implications for both citizens and elites. For the former, changing policy priorities means that it is more difficult to stay informed on the nuances of particular issues. Citizens making electoral and other political decisions must synthesize new and necessary information as it becomes available. If we believe, as some in the public opinion field do,(1) that citizens are not particularly attentive to foreign policy and international relations, the changing policy environment and the demands it imposes for more information will make the average citizen even less inclined to worry about the world and more inclined to cede decision-making power over complex and dynamic issues to political elites. The problems may be even greater for elites as they try to cope with the need to learn about newly important issues even as they hammer out policy 'on the fly' with incomplete and sometimes inaccurate information about emergent international challenges. Even a layman's eye could discern the scramble that took place in official Washington in the wake of 11 September to find experts on the new buzzwords of the day: Osama bin Laden, al-Qaeda, and the Taliban. Policy challenges such as these are made more difficult when we factor in the complexity of the foreign policy decision-making process and the need to identify and court international coalition members in an age of increasing multilateralism and interdependence. Recently, international relations scholarship has begun to show an appreciation for these and other challenges faced by policy-makers. 
Works such as Robert Putnam's study of two-level games, Terrence Hopmann's comprehensive analysis of the complexity of negotiation, David Held and Anthony McGrew's excellent compilation examining the impact of globalization on domestic and international policy challenges, and even Paul Sharp's article on the centrality of studying diplomacy within the international relations field have helped push toward more complex analyses of international and global phenomena and the policy-making process more generally.(2) As a result of studies such as these and the growing diversity of methodological approaches available in the field, scholars are beginning to place foreign policy decision-making more directly into the complex of relationships that engender contemporary global affairs.In this vein, this article examines two recent cases of international policy change - discord within the European Union (EU) over the harmonization of fuel taxes and the evolution of North Atlantic Treaty Organization (NATO) operations in Bosnia - in an effort to understand the forces at work in changing the policy environment in each case and the implications of those changes for policy outcomes. … |
06-025 Multi-agent reinforcement learning: A survey | Multi-agent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. Many tasks arising in these domains require that the agents learn behaviors online. A significant part of the research on multi-agent learning concerns reinforcement learning techniques. However, due to different viewpoints on central issues, such as the formal statement of the learning goal, a large number of different methods and approaches have been introduced. In this paper we aim to present an integrated survey of the field. First, the issue of the multi-agent learning goal is discussed, after which a representative selection of algorithms is reviewed. Finally, open issues are identified and future research directions are outlined. Keywords: multi-agent systems, reinforcement learning, game theory, distributed control |
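As a concrete instance of the reinforcement-learning techniques such surveys cover, here is a minimal sketch of independent Q-learning for two agents in a repeated coordination game. The payoff matrix, learning rate, and exploration rate are illustrative assumptions, not drawn from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
# Coordination game: both agents are rewarded for choosing the same action.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

n_actions, alpha, eps = 2, 0.1, 0.1
Q = [np.zeros(n_actions), np.zeros(n_actions)]   # one Q-table per agent

def act(q):
    # epsilon-greedy action selection
    return int(rng.integers(n_actions)) if rng.random() < eps else int(q.argmax())

for t in range(5000):
    a0, a1 = act(Q[0]), act(Q[1])
    r = payoff[a0, a1]                 # shared (team) reward
    # Stateless task: each agent updates its own action value independently,
    # treating the other agent as part of the environment.
    Q[0][a0] += alpha * (r - Q[0][a0])
    Q[1][a1] += alpha * (r - Q[1][a1])

print("agent 0 Q-values:", Q[0], "agent 1 Q-values:", Q[1])
```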
False-name bids in combinatorial auctions | In Internet auctions, it is easy for a bidder to submit multiple bids under multiple identifiers (e.g., multiple e-mail addresses). If only one good is sold, a bidder cannot make any additional profit by using multiple bids. However, in combinatorial auctions, where multiple goods are sold simultaneously, submitting multiple bids under fictitious names can be profitable. A bid made under a fictitious name is called a false-name bid. This article gives a brief introduction to false-name bids in combinatorial auctions. |
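To make the false-name incentive concrete, the sketch below runs a brute-force VCG combinatorial auction for two goods and compares a truthful single identity with the same bidder split across two identities. The valuations are an illustrative textbook-style example, not taken from the article; with the split identities the bidder still wins both goods yet pays nothing.

```python
from itertools import combinations

def feasible(subset):
    """A set of bids is feasible if their bundles are pairwise disjoint."""
    items = [i for _, bundle, _ in subset for i in bundle]
    return len(items) == len(set(items))

def best_allocation(bids):
    """Brute-force winner determination: maximize total accepted bid value."""
    best, best_val = (), 0.0
    for k in range(len(bids) + 1):
        for subset in combinations(bids, k):
            if feasible(subset):
                val = sum(v for _, _, v in subset)
                if val > best_val:
                    best, best_val = subset, val
    return best, best_val

def vcg(bids):
    winners, _ = best_allocation(bids)
    payments = {}
    for name, bundle, value in winners:
        others = [b for b in bids if b[0] != name]
        _, welfare_without = best_allocation(others)       # others' welfare if this bidder were absent
        welfare_others_now = sum(v for n, _, v in winners if n != name)
        payments[name] = welfare_without - welfare_others_now
    return winners, payments

# Truthful single identity: bidder 1 wants {A, B} for 10, bidder 2 wants {A, B} for 8.
truthful = [("bidder1", {"A", "B"}, 10), ("bidder2", {"A", "B"}, 8)]
print(vcg(truthful))          # bidder1 wins both goods and pays 8

# False names: bidder 1 splits into two identities bidding 10 on A and 10 on B.
false_name = [("id1a", {"A"}, 10), ("id1b", {"B"}, 10), ("bidder2", {"A", "B"}, 8)]
print(vcg(false_name))        # both identities win and each pays 0
```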
BIM-based Sustainability Analysis: An Evaluation of Building Performance Analysis Software | With the rising cost of energy and growing environmental concerns, the demand for sustainable building facilities with minimal environmental impact is increasing. The most effective decisions regarding sustainability in a building facility are made in the early design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. In this exploratory research, three building performance analysis software packages, namely EcotectTM, Green Building StudioTM (GBS) and Virtual EnvironmentTM, are evaluated to gauge their suitability for BIM-based sustainability analysis. First presented in this paper are the main concepts of sustainability and BIM. Then an evaluation of the three abovementioned software packages is performed, with their pros and cons; an analytical weight-based scoring system is used for this purpose. Finally, a conceptual framework is presented to illustrate how construction companies can use BIM for sustainability analysis and evaluate the LEED (Leadership in Energy and Environmental Design) rating of a building facility. |
CAT-SLAM: probabilistic localisation and mapping using a continuous appearance-based trajectory | This paper describes a new system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop closure filtering uses a probabilistic distribution of possible loop closures along the robot’s previous trajectory, which is represented by a linked list of previously visited locations linked by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao-Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM to FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM. |
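A minimal numpy sketch of the particle-filter idea in the abstract above: particles positioned along the previously visited trajectory are reweighted by an appearance-match term and a motion term, then resampled. The likelihood functions, particle count, and trajectory length are stand-in assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles = 100
# Each particle hypothesises a position (an index) along the previously visited trajectory.
positions = rng.integers(0, 500, size=n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

def appearance_likelihood(pos):
    # Stand-in for sequential appearance matching (e.g. a bag-of-words similarity score).
    return np.exp(-0.5 * ((pos - 250) / 30.0) ** 2) + 1e-6

def motion_likelihood(pos):
    # Stand-in for similarity between robot odometry and motion along the stored trajectory.
    return 1.0

# Weight update combining appearance and motion terms.
weights *= np.array([appearance_likelihood(p) * motion_likelihood(p) for p in positions])
weights /= weights.sum()

# Systematic resampling counters particle deprivation.
cum = np.cumsum(weights)
u = (rng.random() + np.arange(n_particles)) / n_particles
idx = np.minimum(np.searchsorted(cum, u), n_particles - 1)
positions = positions[idx]

print("most supported loop-closure position:", np.bincount(positions).argmax())
```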
Biomechanical strategies for mitigating collision damage in insect wings: structural design versus embedded elastic materials. | The wings of many insects accumulate considerable wear and tear during their lifespan, and this irreversible structural damage can impose significant costs on insect flight performance and survivability. Wing wear in foraging bumblebees (and likely many other species) is caused by inadvertent, repeated collisions with vegetation during flight, suggesting the possibility that insect wings may display biomechanical adaptations to mitigate the damage associated with collisions. We used a novel experimental technique to artificially induce wing wear in bumblebees and yellowjacket wasps, closely related species with similar life histories but distinct wing morphologies. Wasps have a flexible resilin joint (the costal break) positioned distally along the leading edge of the wing, which allows the wing tip to crumple reversibly when it hits an obstacle, whereas bumblebees lack an analogous joint. Through experimental manipulation of its stiffness, we found that the costal break plays a critical role in mitigating collision damage in yellowjacket wings. However, bumblebee wings do not experience as much damage as would be expected based on their lack of a costal break, possibly due to differences in the spatial arrangement of supporting wing veins. Our results indicate that these two species utilize different wing design strategies for mitigating damage resulting from collisions. A simple inertial model of a flapping wing reveals the biomechanical constraints acting on the costal break, which may help explain its absence in bumblebee wings. |
Notice of Retraction: The study on Haier's marketing strategy of household appliances going to the countryside | In the context of the global financial turmoil, the implementation of the policy of household appliances going to the countryside not only expands domestic demand but also offers new market opportunities to appliance manufacturers. By analyzing Haier's target market for refrigerators in the countryside and its capacity for market promotion in rural areas, this paper elaborates how Haier occupied the rural market with its customized refrigerator products. Haier adopted a marketing strategy in the rural market different from that in the urban market. It developed various new products suitable for rural consumption on the basis of market segments, formulated a reasonable price strategy, selected an appropriate distribution channel, adopted promotion methods easily accepted by rural residents, and formed a special marketing mix for the rural market. In the context of the implementation of household appliances going to the countryside, through continuous product innovation and promotion, Haier drew away from competitors, improved its image, built consumer loyalty and achieved a leading position in the appliance industry. |
Visual Sentiment Prediction with Deep Convolutional Neural Networks | Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although a vast amount of research has been devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing the sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters, which is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework. |
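A hedged PyTorch sketch of the transfer-learning recipe described above: take a CNN pre-trained for object recognition, freeze its convolutional layers, and train a new two-class sentiment head. The choice of ResNet-18, the frozen backbone, and the random batch standing in for labelled Twitter/Tumblr images are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pre-trained on large-scale object recognition data.
model = models.resnet18(pretrained=True)

# Freeze the convolutional layers and replace the classifier head
# with a two-class (positive / negative sentiment) output layer.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for labelled
# social-media images (3x224x224 tensors, sentiment labels 0/1).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```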
Exploring behavior of an unusual megaherbivore: A spatially explicit foraging model of the hippopotamus | Herbivore foraging theories have been developed for and tested on herbivores across a range of sizes. Due to logistical constraints, however, little research has focused on foraging behavior of megaherbivores. Here we present a research approach that explores megaherbivore foraging behavior, and assesses the applicability of foraging theories developed on smaller herbivores to megafauna. With simulation models as reference points for the analysis of empirical data, we investigate foraging strategies of the common hippopotamus (Hippopotamus amphibius). Using a spatially explicit individual based foraging model, we apply traditional herbivore foraging strategies to a model hippopotamus, compare model output, and then relate these results to field data from wild hippopotami. Hippopotami appear to employ foraging strategies that respond to vegetation characteristics, such as vegetation quality, as well as spatial reference information, namely distance to a water source. Model predictions, field observations, and comparisons of the two support that hippopotami generally conform to the central place foraging construct. These analyses point to the applicability of general herbivore foraging concepts to megaherbivores, but also point to important differences between hippopotami and other herbivores. Our synergistic approach of models as reference points for empirical data highlights a useful method of behavioral analysis for hard-to-study megafauna. © 2003 Elsevier B.V. All rights reserved. |
Nailfold videocapillaroscopic patterns and serum autoantibodies in systemic sclerosis. | BACKGROUND
Microvascular lesions are a predominant feature in systemic sclerosis (SSc) and seem to play a central pathogenetic role. Recently, we graded scleroderma microangiopathy by nailfold videocapillaroscopy (NVC) into three NVC patterns (early, active and late). The aim of the present study was to confirm, in a larger number of SSc patients, the presence of three patterns of microvascular damage, and to detect any possible relationship between these patterns and both specific serum autoantibodies and the subsets of cutaneous involvement.
METHODS
Two hundred and forty-one consecutive patients (227 women and 14 men) affected by SSc were recruited. One hundred and forty-eight patients were affected by limited cutaneous SSc (lSSc) and 93 patients by diffuse cutaneous SSc (dSSc). The ages at onset of Raynaud's phenomenon (RP) and SSc, the durations of RP and SSc, ANA and antitopoisomerase I (anti-Scl70) and anticentromere (ACA) antibodies were investigated in all patients. The SSc patients were subdivided on the basis of the NVC pattern into three groups.
RESULTS
A statistically significant correlation was found between the NVC patterns and the durations of both RP and SSc (P<0.001). Enlarged and giant capillaries, together with haemorrhages, constituted the earliest NVC finding in SSc (early NVC pattern). These abnormalities were mostly expressed in the active NVC pattern. Loss of capillaries, ramified capillaries and vascular architectural disorganization were increased in the late NVC pattern. Age and the duration of both RP and SSc were lower in 24 patients complaining of RP alone. Anti-Scl70 antibodies were statistically less frequent in the early vs both the active and the late NVC pattern, whereas no significant correlation was found between the presence of anti-Scl70 antibodies and the duration of either RP or SSc. ACA positivity was more frequent in patients with longer RP duration. Patients with lSSc had shorter SSc duration and showed the early or active NVC pattern more frequently. Conversely, patients with dSSc showed longer disease duration and mostly showed the late NVC pattern.
CONCLUSIONS
NVC is an appropriate tool for differential diagnosis between primary and secondary RP through the clear recognition of the early NVC scleroderma pattern. This study confirms, in a large number of SSc patients, the existence of three distinct NVC patterns that might reflect the evolution of SSc microangiopathy. The presence of anti-Scl70 antibodies seems to be related to earlier expression of the active and late NVC patterns of SSc microvascular damage. The presence of ACA seems to be related to delayed expression of the late NVC pattern. |
Homomorphic Tallying for the Estonian Internet Voting System | In this paper we study the feasibility of using homomorphic tallying in the Estonian Internet voting system. The paper analyzes the security benefits provided by homomorphic tallying, the costs introduced and the required changes to the voting system. We find that homomorphic tallying has several security benefits, such as improved ballot secrecy, public verifiability of the vote tallying server and the possibility for observers to recalculate the tally without compromising ballot secrecy. The use of modern elliptic curve cryptography allows homomorphic tallying to be implemented without a significant loss of performance. |
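A small pure-Python sketch of the homomorphic-tallying idea discussed above: with exponential ElGamal, each ballot encrypts 0 or 1, ciphertexts are multiplied component-wise, and only the aggregate count is decrypted. The toy prime-order group below is for demonstration only; the elliptic-curve setting, zero-knowledge proofs, and verifiable decryption the paper relies on are omitted.

```python
import random

# Toy group parameters (NOT secure): prime p and generator g.
p = 2_147_483_647          # a Mersenne prime, fine for a demonstration
g = 7

# Key generation: secret x, public h = g^x mod p.
x = random.randrange(2, p - 1)
h = pow(g, x, p)

def encrypt(vote):
    """Exponential ElGamal: Enc(v) = (g^r, g^v * h^r)."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p

def combine(c1, c2):
    """Component-wise product encrypts the sum of the plaintexts."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_tally(c, max_votes):
    a, b = c
    gm = (b * pow(a, p - 1 - x, p)) % p        # recovers g^(sum of votes)
    for m in range(max_votes + 1):             # small discrete log by search
        if pow(g, m, p) == gm:
            return m
    raise ValueError("tally out of range")

ballots = [1, 0, 1, 1, 0, 1]                   # 1 = vote for the candidate
tally_ct = encrypt(ballots[0])
for v in ballots[1:]:
    tally_ct = combine(tally_ct, encrypt(v))
print("decrypted tally:", decrypt_tally(tally_ct, len(ballots)))   # -> 4
```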
Market Index and Stock Price Direction Prediction using Machine Learning Techniques: An empirical study on the KOSPI and HSI | The prediction of stock market direction may serve as an early recommendation system for short-term investors and as an early financial distress warning system for long-term shareholders. In this paper, we propose an empirical study on the Korean and Hong Kong stock markets with an integrated machine learning framework that employs Principal Component Analysis (PCA) and Support Vector Machine (SVM). We try to predict the upward or downward direction of stock market indices and stock prices. In the proposed framework, PCA, as a feature selection method, identifies principal components in the stock market movement, and SVM, as a classifier for future stock market movement, processes them along with other economic factors in training and forecasting. We present the results of an extensive empirical study of the proposed method on the Korean composite stock price index (KOSPI) and Hang Seng index (HSI), as well as the individual constituents included in the indices. In our experiment, ten years of data (from January 1st, 2002 to January 1st, 2012) are collected and schemed by rolling windows to predict one-day-ahead directions. The experimental results show notably high hit ratios in predicting the movements of the individual constituents in the KOSPI and HSI. The results also verify the co-movement effect between the Korean (Hong Kong) stock market and the American stock market. © 2013 Published by Elsevier Ltd. |
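A hedged scikit-learn sketch of the PCA-plus-SVM pipeline described above, run on synthetic features standing in for lagged returns and economic factors; the data, component count, and single walk-forward split are illustrative assumptions, and the paper's rolling-window evaluation is not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in: 1000 days x 20 features (lagged returns, macro factors, ...).
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=1000) > 0).astype(int)  # up/down label

# Walk-forward split: train on the first 800 days, test on the remaining 200.
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

# PCA reduces the correlated inputs to a few components, SVM classifies direction.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("hit ratio on held-out days:", model.score(X_test, y_test))
```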
Vertebral body replacement with an expandable cage for reconstruction after spinal tumor resection. | OBJECT
The authors report their clinical experience with expandable cages used to stabilize the spine after vertebrectomy. The objectives of surgical treatment for spine tumors include a decrease in pain, decompression of the neural elements, mechanical stabilization of the spine, and wide resection to gain local control of certain primary tumors. Most of the lesions occur in the anterior column or vertebral body (VB). Anterior column defects following resection of VBs require surgical restoration of anterior column support. Recently, various expandable cages have been developed and used clinically for VB replacement (VBR).
METHODS
Between January 2001 and June 2003, the authors treated 15 patients who presented with primary spinal tumors and metastatic lesions from remote sites. All patients underwent vertebrectomy, VBR with an expandable cage, and anterior instrumentation with or without posterior instrumentation, depending on the stability of the involved segment. The correction of kyphotic angle was achieved at an average of 20 degrees. Pain scores according to the visual analog scale decreased from 8.4 to 5.2 at the last follow-up review. Patients whose Frankel neurological grade was below D attained at least a one-grade improvement after surgery. All patients achieved immediate stability postsurgery and there were no significant complications related to the expandable cage.
CONCLUSIONS
The advantage of the expandable cage is that it is easy to use because it permits optimal fit and correction of the deformity by in vivo expansion of the device. These results are promising, but long-term follow up is required. |
Optimising Health Informatics Outcomes--Getting Good Evidence to Where it Matters. | This editorial is part of a For-Discussion-Section of Methods of Information in Medicine about the paper "Evidence-based Health informatics: How do we know what we know?", written by Elske Ammenwerth [1]. Health informatics uses and applications have crept up on health systems over half a century, starting as simple automation of large-scale calculations, but now manifesting in many cases as rule- and algorithm-based creation of composite clinical analyses and 'black box' computation of clinical aspects, as well as enablement of increasingly complex care delivery modes and consumer health access. In this process health informatics has very largely bypassed the rules of precaution, proof of effectiveness, and assessment of safety applicable to all other health sciences and clinical support systems. Evaluation of informatics applications, compilation and recognition of the importance of evidence, and normalisation of Evidence Based Health Informatics, are now long overdue on grounds of efficiency and safety. Ammenwerth has now produced a rigorous analysis of the current position on evidence, and evaluation as its lifeblood, which demands careful study then active promulgation. Decisions based on political aspirations, 'modernisation' hopes, and unsupported commercial claims must cease - poor decisions are wasteful and bad systems can kill. Evidence Based Health Informatics should be promoted, and expected by users, as rigorously as Cochrane promoted Effectiveness and Efficiency, and Sackett promoted Evidence Based Medicine - both of which also were introduced retrospectively to challenge the less robust and partially unsafe traditional 'wisdom' in vogue. Ammenwerth's analysis gives the necessary material to promote that mission. |
Design and SAR of thienopyrimidine and thienopyridine inhibitors of VEGFR-2 kinase activity. | Novel classes of thienopyrimidines and thienopyridines have been identified as potent inhibitors of VEGFR-2 kinase. The synthesis and SAR of these compounds is presented, along with successful efforts to diminish EGFR activity present in the lead series. |
A framework for detection and measurement of phishing attacks | Phishing is a form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today. |
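A minimal scikit-learn sketch of the approach described above: lexical features extracted from the URL string alone feed a logistic regression classifier. The specific feature set, the tiny training sample, and the example URLs are illustrative assumptions, not the paper's feature list or data.

```python
import re
import numpy as np
from urllib.parse import urlparse
from sklearn.linear_model import LogisticRegression

def url_features(url):
    """Hand-crafted lexical features computed from the URL string only."""
    host = urlparse(url).netloc
    return [
        len(url),                                   # overall URL length
        len(host),                                  # hostname length
        host.count("."),                            # number of dots in the host
        url.count("-") + url.count("@"),            # suspicious characters
        1 if re.fullmatch(r"[\d.]+", host) else 0,  # IP address used as hostname
        sum(k in url.lower() for k in ("login", "secure", "account", "update")),
    ]

# Tiny illustrative training set (labels: 1 = phishing, 0 = benign).
urls = [
    ("http://192.168.10.5/paypal/login-update", 1),
    ("http://secure-account.example-bank.verify.ru/login", 1),
    ("https://www.wikipedia.org/wiki/Phishing", 0),
    ("https://github.com/user/project", 0),
]
X = np.array([url_features(u) for u, _ in urls])
y = np.array([label for _, label in urls])

clf = LogisticRegression().fit(X, y)
test = "http://account-update.example.com@10.0.0.1/login"
print("phishing probability:", clf.predict_proba([url_features(test)])[:, 1])
```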
Commutation torque ripple reduction in brushless DC motor drives using a single DC current sensor | This paper presents a comprehensive study on reducing commutation torque ripples generated in brushless DC motor drives with only a single DC-link current sensor provided. In such drives, commutation torque ripple suppression techniques that are practically effective in low speed as well as high speed regions are scarcely found. The commutation compensation technique proposed here is based on a strategy that the current slopes of the incoming and the outgoing phases during the commutation interval can be equalized by a proper duty-ratio control. Being directly linked with deadbeat current control scheme, the proposed control method accomplishes suppression of the spikes and dips superimposed on the current and torque responses during the commutation intervals of the inverter. Effectiveness of the proposed control method is verified through simulations and experiments. |
Ruthenium red modifies the cardiac and skeletal muscle Ca(2+) release channels (ryanodine receptors) by multiple mechanisms. | The effects of ruthenium red (RR) on the skeletal and cardiac muscle ryanodine receptors (RyRs) were studied in vesicle-Ca(2+) flux, [(3)H]ryanodine binding, and single channel measurements. In vesicle-Ca(2+) flux measurements, RR was more effective in inhibiting RyRs at 0.2 microM than 20 microM free Ca(2+). [(3)H]Ryanodine binding measurements suggested noncompetitive interactions between RR inhibition and Ca(2+) regulatory sites of RyRs. In symmetric 0.25 M KCl with 10-20 microM cytosolic Ca(2+), cytosolic RR decreased single channel activities at positive and negative holding potentials. In close to fully activated skeletal (20 microM Ca(2+) + 2 mM ATP) and cardiac (200 microM Ca(2+)) RyRs, cytosolic RR induced a predominant subconductance at a positive but not negative holding potential. Lumenal RR induced a major subconductance in cardiac RyR at negative but not positive holding potentials and several subconductances in skeletal RyR. The RR-related subconductances of cardiac RyR showed a nonlinear voltage dependence, and more than one RR molecule appeared to be involved in their formation. Cytosolic and lumenal RR also induced subconductances in Ca(2+)-conducting skeletal and cardiac RyRs recorded at 0 mV holding potential. These results suggest that RR inhibits RyRs and induces subconductances by binding to cytosolic and lumenal sites of skeletal and cardiac RyRs. |
Clinical profile, level of affection and therapeutic management of patients with osteoarthritis in primary care: The Spanish multicenter study EVALÚA. | OBJECTIVE
To determine the clinical profile, degree of involvement and management in patients with knee, hip or hand osteoarthritis.
MATERIAL AND METHOD
Observational study (health centers from 14 autonomous regions, n=363 primary care physicians), involving patients with clinical and/or radiological criteria for osteoarthritis from the American College of Rheumatology, consecutively selected (n=1,258). Sociodemographic variables, clinical and radiological findings, comorbidity and therapeutic management were analyzed.
RESULTS
Mean age was 68.0±9.5 years; 77.8% were women and 47.6% were obese. Distribution by location was: 84.3% knee, 23.4% hip, 14.7% hands. All patients reported pain. The most frequent radiographic Kellgren-Lawrence grade was 3 for both knee and hip (42.9% and 51.9%, respectively), and grades 3 (37.2%) and 2 (34.5%) for hands. Time since onset of osteoarthritis symptoms was 9.4±7.5 years, with a mean age at onset of around 60 years and a family history of osteoarthritis in 66.0%. The most frequent comorbidities were hypertension (55.1%), depression/anxiety (24.7%) and gastroduodenal diseases (22.9%). A total of 97.6% of the patients received pharmacological treatment, with oral analgesics (paracetamol) (70.5%) and oral NSAIDs (67.9%) being the most frequent drugs. Bilateral osteoarthritis was present in 76.9% of patients with knee osteoarthritis, 59.3% with hip and 94.7% with hands. Female gender and time since onset were associated with bilateral knee and hip osteoarthritis.
CONCLUSIONS
The profile of the osteoarthritis patient is female, >65 years old, overweight/obese, with comorbidity, frequent symptoms and moderate radiologic involvement. Most patients had bilateral osteoarthritis, associated with female gender and time since onset of disease. Paracetamol was the most common pharmacological treatment. |
Sexually transmitted diseases treatment guidelines, 2010. | These guidelines for the treatment of persons who have or are at risk for sexually transmitted diseases (STDs) were updated by CDC after consultation with a group of professionals knowledgeable in the field of STDs who met in Atlanta on April 18-30, 2009. The information in this report updates the 2006 Guidelines for Treatment of Sexually Transmitted Diseases (MMWR 2006;55[No. RR-11]). Included in these updated guidelines is new information regarding 1) the expanded diagnostic evaluation for cervicitis and trichomoniasis; 2) new treatment recommendations for bacterial vaginosis and genital warts; 3) the clinical efficacy of azithromycin for chlamydial infections in pregnancy; 4) the role of Mycoplasma genitalium and trichomoniasis in urethritis/cervicitis and treatment-related implications; 5) lymphogranuloma venereum proctocolitis among men who have sex with men; 6) the criteria for spinal fluid examination to evaluate for neurosyphilis; 7) the emergence of azithromycin-resistant Treponema pallidum; 8) the increasing prevalence of antimicrobial-resistant Neisseria gonorrhoeae; 9) the sexual transmission of hepatitis C; 10) diagnostic evaluation after sexual assault; and 11) STD prevention approaches. |
Stretch and activation of the human biarticular hamstrings across a range of running speeds | The human biarticular hamstrings [semimembranosus (SM), semitendinosus (ST) and biceps femoris long head (BFLH)] have an important role in running. This study determined how hamstrings neuro-mechanical behaviour changed with faster running, and whether differences existed between SM, ST and BFLH. Whole-body kinematics and hamstrings electromyographic (EMG) activity were measured from seven participants running at four discrete speeds (range: 3.4 ± 0.1 to 9.0 ± 0.7 m/s). Kinematic data were combined with a three-dimensional musculoskeletal model to calculate muscle–tendon unit (MTU) stretch and velocity. Activation duration and magnitude were determined from EMG data. With faster running, MTU stretch and velocity patterns remained similar, but maxima and minima significantly increased. The hamstrings were activated from foot-strike until terminal stance or early swing, and then again from mid-swing until foot-strike. Activation duration was similar with faster running, whereas activation magnitude significantly increased. Hamstrings activation almost always ended before minimum MTU stretch, and it always started before maximum MTU stretch. Comparing the hamstrings, maximum MTU stretch was largest for BFLH and smallest for ST irrespective of running speed, while the opposite was true for peak-to-peak MTU stretch. Furthermore, peak MTU shortening velocity was largest for ST and smallest for BFLH at all running speeds. Finally, for the two fastest running speeds, the amount of MTU stretch that occurred during terminal swing after activation had started was less for BFLH compared to SM and ST. Differences were evident in biarticular hamstrings neuro-mechanical behaviour during running. Such findings have implications for hamstrings function and injury. |
Executive Function and the Frontal Lobes: A Meta-Analytic Review | Currently, there is debate among scholars regarding how to operationalize and measure executive functions. These functions generally are referred to as “supervisory” cognitive processes because they involve higher level organization and execution of complex thoughts and behavior. Although conceptualizations vary regarding what mental processes actually constitute the “executive function” construct, there has been a historical linkage of these “higher-level” processes with the frontal lobes. In fact, many investigators have used the term “frontal functions” synonymously with “executive functions” despite evidence that contradicts this synonymous usage. The current review provides a critical analysis of lesion and neuroimaging studies using three popular executive function measures (Wisconsin Card Sorting Test, Phonemic Verbal Fluency, and Stroop Color Word Interference Test) in order to examine the validity of the executive function construct in terms of its relation to activation and damage to the frontal lobes. Empirical lesion data are examined via meta-analysis procedures along with formula derivatives. Results reveal mixed evidence that does not support a one-to-one relationship between executive functions and frontal lobe activity. The paper concludes with a discussion of the implications of construing the validity of these neuropsychological tests in anatomical, rather than cognitive and behavioral, terms. |
Image Super-Resolution via Deep Recursive Residual Network | Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, in both global and local manners, to mitigate the difficulty of training very deep networks, and recursive learning is used to control the number of model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. |
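A hedged PyTorch sketch of the two ingredients named above: a recursive block whose convolution weights are shared across unrolled iterations with local residual connections, plus a global residual to the (assumed bicubic-upsampled) input. Channel counts, unroll depth, and the single-block layout are illustrative, not the paper's 52-layer configuration.

```python
import torch
import torch.nn as nn

class RecursiveResidualBlock(nn.Module):
    """The same pair of convolutions is applied U times, each time with a
    local residual connection back to the block input (shared weights)."""
    def __init__(self, channels=64, unrolls=3):
        super().__init__()
        self.unrolls = unrolls
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = x
        for _ in range(self.unrolls):
            out = self.conv2(self.relu(self.conv1(self.relu(out))))
            out = out + x            # local residual learning
        return out

class TinyDRRN(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.block = RecursiveResidualBlock(channels)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_image):
        features = self.block(self.head(lr_image))
        return self.tail(features) + lr_image    # global residual learning

sr = TinyDRRN()(torch.randn(1, 1, 32, 32))       # input assumed already upsampled
print(sr.shape)
```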
Transfer Learning by Borrowing Examples for Multiclass Object Detection | Despite the recent trend of increasingly large datasets for object detection, there still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training data for each class by borrowing and transforming examples from other classes. Our model learns which training instances from other classes to borrow and how to transform the borrowed examples so that they become more similar to instances from the target class. Our experimental results demonstrate that our new object detector, with borrowed and transformed examples, improves upon the current state-of-the-art detector on the challenging SUN09 object detection dataset. |
Geometric control of multiple quadrotor UAVs transporting a cable-suspended rigid body | This paper is focused on tracking control for a rigid body payload, that is connected to an arbitrary number of quadrotor unmanned aerial vehicles via rigid links. An intrinsic form of the equations of motion is derived on the nonlinear configuration manifold, and a geometric controller is constructed such that the payload asymptotically follows a given desired trajectory for its position and attitude. The unique feature is that the coupled dynamics between the rigid body payload, links, and quadrotors are explicitly incorporated into control system design and stability analysis. These are developed in a coordinate-free fashion to avoid singularities and complexities that are associated with local parameterizations. The desirable features of the proposed control system are illustrated by a numerical example. |
Management of diabetic neuropathy by sodium valproate and glyceryl trinitrate spray: a prospective double-blind randomized placebo-controlled study. | OBJECTIVES
Combination of drugs with different mechanisms of action helps in achieving synergistic analgesic effect in neuropathic pain. Keeping this point in view, the effect and safety aspects of sodium valproate and GTN were assessed alone as well as in combination in this study.
DESIGN
Prospective double-blind randomized placebo-controlled study.
MATERIAL AND METHOD
Eighty-seven type 2 diabetics with painful neuropathy were enrolled. Four were excluded: three with HbA1c>11 while one withdrew consent. The remaining 83 were given either sodium valproate and GTN spray (group A) or placebo drug and GTN spray (group B) or sodium valproate and placebo spray (group C) or placebo drug and placebo spray (group D). Quantitative assessment of pain was done by McGill pain questionnaire, visual analogue score (VAS) and present pain intensity (PPI) at the beginning of the study and after 3 months along with motor and sensory nerve conduction velocities measurements.
RESULTS
All the three treatment groups experienced significant improvement in pain score in their drug phase of trial (p<0.001/<0.05) along with some of the electrophysiological parameters. The assessment of the magnitude of therapeutic effect of sodium valproate, GTN and their combination gave numbers needed to treat (NNT) of 7, 5 and 4, respectively.
CONCLUSION
Sodium valproate and GTN are well tolerated and provide significant improvement in pain scores as well as in electrophysiological parameters. |
Effects of strength training on muscle cellular outcomes in prostate cancer patients on androgen deprivation therapy. | Androgen deprivation therapy (ADT) improves life expectancy in prostate cancer (PCa) patients, but is associated with adverse effects on muscle mass. Here, we investigated the effects of strength training during ADT on muscle fiber cross-sectional area (CSA) and regulators of muscle mass. PCa patients on ADT were randomized to 16 weeks of strength training (STG; n = 12) or a control group (CG; n = 11). Muscle biopsies were obtained from m. vastus lateralis and analyzed by immunohistochemistry and western blot. Muscle fiber CSA increased with strength training (898 μm(2), P = 0.04), with the only significant increase observed in type II fibers (1076 μm(2), P = 0.03). There was a trend toward a difference between groups in the mean change in myonuclei number (0.33 nuclei/fiber, P = 0.06), with the only significant increase observed in type I fibers, which decreased the myonuclear domain size of type I fibers (P = 0.05). Satellite cell numbers and the content of androgen receptor and myostatin remained unchanged. Sixteen weeks of strength training during ADT increased type II fiber CSA and reduced the myonuclear domain in type I fibers in PCa patients. The increased number of satellite cells normally seen following strength training was not observed. |
Iloprost for prevention of contrast-mediated nephropathy in high-risk patients undergoing a coronary procedure. Results of a randomized pilot study | The prevention of contrast-mediated nephropathy (CMN), which accounts for considerable morbidity and mortality, remains a vexing problem. Contrast induced renal vasoconstriction is believed to play a pivotal role in the CMN mechanism. The aim of this pilot study was to examine the safety and efficacy of two doses of the prostacyclin analogue iloprost in preventing CMN in high-risk patients undergoing a coronary procedure. Forty-five patients undergoing coronary angiography and/or intervention who had a serum creatinine concentration ≥1.4 mg/dL were randomized to receive iloprost at 1 or 2 ng/kg/min or placebo, beginning 30–90 minutes before and terminating 4 hours after the procedure. CMN was defined by an absolute increase of serum creatinine ≥0.5 mg/dL or a relative increase of ≥25% measured 2 to 5 days after the procedure. Study drug infusion was discontinued in 2 patients in the low-dose iloprost group due to flush/nausea and in 5 patients in the high-dose group due to severe hypotension. The mean creatinine concentration change in the placebo group (0.02 mg/dL) was unfavorable compared to that in the low-dose iloprost group (−0.11 mg/dL; p=0.08) and high-dose iloprost group (−0.23 mg/dL; p=0.048). The difference between the absolute changes in creatinine clearance was favorable compared to placebo for both the low (mean difference 6.1 mL/min, 95%CI −0.5 to 12.8 mL/min, p=0.07) and the high-dose iloprost group (11.8 mL/min, 95%CI 4.7 to 18.8 mL/min, p=0.002). Three cases of CMN were recorded; all in the placebo group (p=0.032). The results of this pilot study suggest that prophylactic administration of iloprost may effectively prevent CMN, but higher dosages are connected with substantial tolerability issues. |
Contactless Air-Filled Substrate Integrated Waveguide | The contactless version of the air-filled substrate integrated waveguide (AF-SIW) is introduced for the first time. The conventional AF-SIW configuration requires a pure and flawless connection of the covering layers to the intermediate substrate. To operate efficiently at high frequencies, this requires a costly fabrication process. In the proposed configuration, the boundary condition on both sides around the AF guiding medium is modified to obtain artificial magnetic conductor (AMC) boundary conditions. The AMC surfaces on both sides of the waveguide substrate are realized by a single-periodic structure with the new type of unit cells. The PEC–AMC parallel plates prevent the leakage of the AF guiding region. The proposed contactless AF-SIW shows low-loss performance in comparison with the conventional AF-SIW at millimeter-wave frequencies when the layers of both waveguides are connected poorly. |
Conservative approach of a symptomatic carious immature permanent tooth using a tricalcium silicate cement (Biodentine): a case report | The restorative management of deep carious lesions and the preservation of pulp vitality of immature teeth present real challenges for dental practitioners. New tricalcium silicate cements are of interest in the treatment of such cases. This case describes the immediate management and the follow-up of an extensive carious lesion on an immature second right mandibular premolar. Following anesthesia and rubber dam isolation, the carious lesion was removed and a partial pulpotomy was performed. After obtaining hemostasis, the exposed pulp was covered with a tricalcium silicate cement (Biodentine, Septodont) and a glass ionomer cement (Fuji IX extra, GC Corp.) restoration was placed over the tricalcium silicate cement. A review appointment was arranged after seven days, at which the tooth was asymptomatic, with the patient reporting no pain during the intervening period. At both the 3- and 6-month follow-ups, the tooth was noted to be vital, with normal responses to thermal tests. Radiographic examination of the tooth indicated dentin-bridge formation in the pulp chamber and continued root formation. This case report demonstrates a fast tissue response at both the pulpal and root dentin levels. The use of tricalcium silicate cement should be considered as a conservative intervention in the treatment of symptomatic immature teeth. |
A New Algorithm for Processing Interferometric Data-Stacks: SqueeSAR | Permanent Scatterer SAR Interferometry (PSInSAR) aims to identify coherent radar targets exhibiting high phase stability over the entire observation time period. These targets often correspond to point-wise, man-made objects widely available over a city, but less present in non-urban areas. To overcome the limits of PSInSAR, analysis of interferometric data-stacks should aim at extracting geophysical parameters not only from point-wise deterministic objects (i.e., PS), but also from distributed scatterers (DS). Rather than developing hybrid processing chains where two or more algorithms are applied to the same data-stack, and results are then combined, in this paper we introduce a new approach, SqueeSAR, to jointly process PS and DS, taking into account their different statistical behavior. As it will be shown, PS and DS can be jointly processed without the need for significant changes to the traditional PSInSAR processing chain and without the need to unwrap hundreds of interferograms, provided that the coherence matrix associated with each DS is properly “squeezed” to provide a vector of optimum (wrapped) phase values. Results on real SAR data, acquired over an Alpine area, challenging for any InSAR analysis, confirm the effectiveness of this new approach. |
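A numpy sketch of the "squeezing" step for one distributed scatterer: the sample coherence matrix over the acquisitions is reduced to a vector of optimum wrapped phases, here via its leading eigenvector, which is one common approximation to the maximum-likelihood phase triangulation the paper describes. The simulated data, noise level, and number of looks are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_images, looks = 8, 50
true_phase = np.cumsum(rng.normal(0, 0.4, n_images))
true_phase -= true_phase[0]                      # reference to the first acquisition

# Simulate the statistically homogeneous pixels belonging to one distributed scatterer.
amps = rng.rayleigh(1.0, size=(looks, 1))
noise = 0.3 * (rng.normal(size=(looks, n_images)) + 1j * rng.normal(size=(looks, n_images)))
samples = amps * np.exp(1j * true_phase)[None, :] + noise

# Sample coherence matrix across the acquisitions.
C = samples.T @ samples.conj() / looks
d = np.sqrt(np.real(np.diag(C)))
coherence = C / np.outer(d, d)

# "Squeeze" the matrix into one optimum wrapped phase per acquisition:
# phase of the leading eigenvector, referenced to the first image.
_, v = np.linalg.eigh(coherence)
est_phase = np.angle(v[:, -1] * np.conj(v[0, -1]))

wrap = lambda a: np.angle(np.exp(1j * a))
print("true   :", np.round(wrap(true_phase), 2))
print("squeezed:", np.round(est_phase, 2))
```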
Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments | Server virtualization offers the ability to slice large, underutilized physical servers into smaller, parallel virtual machines (VMs), enabling diverse applications to run in isolated environments on a shared hardware platform. Effective management of virtualized cloud environments introduces new and unique challenges, such as efficient CPU scheduling for virtual machines and effective allocation of virtual machines to handle both CPU-intensive and I/O-intensive workloads. Although a fair number of research projects have been dedicated to measuring, scheduling, and resource management of virtual machines, there is still a lack of in-depth understanding of the performance factors that can impact the efficiency and effectiveness of resource multiplexing and resource scheduling among virtual machines. In this paper, we present our experimental study on the performance interference in parallel processing of CPU- and network-intensive workloads in the Xen Virtual Machine Monitors (VMMs). We conduct extensive experiments to measure the performance interference among VMs running network I/O workloads that are either CPU bound or network bound. Based on our experiments and observations, we conclude with four key findings that are critical to effective management of virtualized cloud environments for both cloud service providers and cloud consumers. First, running network-intensive workloads in isolated environments on a shared hardware platform can lead to high overheads due to extensive context switches and events in the driver domain and VMM. Second, co-locating CPU-intensive workloads in isolated environments on a shared hardware platform can incur high CPU contention due to the demand for fast memory page exchanges in the I/O channel. Third, running CPU-intensive workloads and network-intensive workloads in conjunction incurs the least resource contention, delivering higher aggregate performance. Last but not least, identifying the factors that impact the total demand of the exchanged memory pages is critical to an in-depth understanding of the interference overheads in the I/O channel in the driver domain and VMM. |
Intelligent defense using pretense against targeted attacks in cloud platforms | Cloud-hosted services are being increasingly used in online businesses in e.g., retail, healthcare, manufacturing, entertainment due to benefits such as scalability and reliability. These benefits are fueled by innovations in orchestration of cloud platforms that make them programmable as Software Defined everything Infrastructures (SDxI). At the same time, sophisticated targeted attacks such as Distributed Denial-of-Service (DDoS) and Advanced Persistent Threats (APTs) are growing on an unprecedented scale threatening the availability of online businesses. In this paper, we present a novel defense system called Dolus to mitigate the impact of targeted attacks launched against high-value services hosted in SDxI-based cloud platforms. Our Dolus system is able to initiate a ‘pretense’ in a scalable and collaborative manner to deter the attacker based on threat intelligence obtained from attack feature analysis. Using foundations from pretense theory in child play, Dolus takes advantage of elastic capacity provisioning via ‘quarantine virtual machines’ and SDxI policy co-ordination across multiple network domains to deceive the attacker by creating a false sense of success. We evaluate the efficacy of Dolus using a GENI Cloud testbed and demonstrate its real-time capabilities to: (a) detect DDoS and APT attacks and redirect attack traffic to quarantine resources to engage the attacker under pretense, (b) coordinate SDxI policies to possibly block attacks closer to the attack source(s). |
Early on-treatment prediction of response to peginterferon alfa-2a for HBeAg-negative chronic hepatitis B using HBsAg and HBV DNA levels. | UNLABELLED
Peginterferon alfa-2a results in a sustained response (SR) in a minority of patients with hepatitis B e antigen (HBeAg)-negative chronic hepatitis B (CHB). This study investigated the role of early on-treatment serum hepatitis B surface antigen (HBsAg) levels in the prediction of SR in HBeAg-negative patients receiving peginterferon alfa-2a. HBsAg (Architect from Abbott) was quantified at the baseline and during treatment (weeks 4, 8, 12, 24, 36, and 48) and follow-up (weeks 60 and 72) in the sera from 107 patients who participated in an international multicenter trial (peginterferon alfa-2a, n = 53, versus peginterferon alfa-2a and ribavirin, n = 54). Overall, 24 patients (22%) achieved SR [serum hepatitis B virus (HBV) DNA level < 10,000 copies/mL and normal alanine aminotransferase levels at week 72]. Baseline characteristics were comparable between sustained responders and nonresponders. From week 8 onward, serum HBsAg levels markedly decreased in sustained responders, whereas only a modest decline was observed in nonresponders. However, HBsAg declines alone were of limited value in the prediction of SR [area under the receiver operating characteristic curve (AUC) at weeks 4, 8, and 12 = 0.59, 0.56, and 0.69, respectively]. Combining the declines in HBsAg and HBV DNA allowed the best prediction of SR (AUC at week 12 = 0.74). None of the 20 patients (20% of the study population) in whom a decrease in serum HBsAg levels was absent and whose HBV DNA levels declined less than 2 log copies/mL exhibited an SR (negative predictive value = 100%).
CONCLUSION
At week 12 of peginterferon alfa-2a treatment for HBeAg-negative CHB, a solid stopping rule was established with a combination of declines in serum HBV DNA and HBsAg levels from the baseline. Quantitative serum HBsAg in combination with HBV DNA enables on-treatment adjustments of peginterferon therapy for HBeAg-negative CHB. |
On the Spectral Bias of Deep Neural Networks | It is well known that over-parametrized deep neural networks (DNNs) are an overly expressive class of functions that can memorize even random data with 100% training accuracy. This raises the question of why they do not easily overfit real data. To answer this question, we study deep networks using Fourier analysis. We show that deep networks with finite weights (or trained for a finite number of steps) are inherently biased towards representing smooth functions over the input space. Specifically, the magnitude of a particular frequency component (k) of a deep ReLU network function decays at least as fast as O(k−2), with width and depth helping polynomially and exponentially (respectively) in modeling higher frequencies. This shows, for instance, why DNNs cannot perfectly memorize peaky delta-like functions. We also show that DNNs can exploit the geometry of low-dimensional data manifolds to approximate complex functions that exist along the manifold with simple functions when seen with respect to the input space. As a consequence, we find that all samples (including adversarial samples) classified by a network to belong to a certain class are connected by a path such that the prediction of the network along that path does not change. Finally, we find that DNN parameters corresponding to functions with higher frequency components occupy a smaller volume in the parameter space. |
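A small numpy sketch that illustrates the claim above on a 1D toy problem: sample a randomly initialised finite-width ReLU network on a grid and inspect how the magnitude of its Fourier components falls off with frequency. The architecture, widths, and initialisation are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_relu_net(x, width=256, depth=3):
    """Evaluate a randomly initialised ReLU MLP on a batch of scalar inputs."""
    h = x[:, None]
    fan_in = 1
    for _ in range(depth):
        W = rng.normal(0, np.sqrt(2.0 / fan_in), size=(fan_in, width))
        b = rng.normal(0, 0.1, size=width)
        h = np.maximum(h @ W + b, 0.0)
        fan_in = width
    w_out = rng.normal(0, np.sqrt(2.0 / fan_in), size=(fan_in, 1))
    return (h @ w_out).ravel()

# Sample the network function on [0, 1) and look at its Fourier spectrum.
n = 4096
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = random_relu_net(x)
spectrum = np.abs(np.fft.rfft(f - f.mean())) / n

for k in (1, 4, 16, 64, 256):
    print(f"|F({k})| = {spectrum[k]:.2e}")   # magnitudes typically fall off rapidly with k
```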
Nearly 650 Memories of the 650 | Of the many interesting aspects of Bemer's career, one that has impressed me most is his ability to move back and forth between what we at one time perceived as the two classes of computers: "scientific" and "commercial." Note his contributions to the IBM 650, 704, and 705, as well as to compilers (perhaps still called automatic coding devices) and interpreters of quite different sorts. I do not know B. C. Borden or where he/she may Carnegie and IBM to produce FORTRANSIT is welcome and helps to complete the record. I especially appreciate Bemer's generous sketches of his colleagues at Lockheed. Here he closes one loop: he discusses Fletcher Jones, a founder and former president of Computer Sciences Corporation, whose foundation endowed the professorship Donald Knuth holds at Stanford. |
A Fast Learning Method for Accurate and Robust Lane Detection Using Two-Stage Feature Extraction with YOLO v3 | To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm is proposed that can automatically learn the features of a lane in various scenarios. First, a two-stage learning network based on YOLO v3 (You Only Look Once, v3) is constructed, with the structural parameters of YOLO v3 modified to make it more suitable for lane detection. To improve training efficiency, a method for automatically generating lane label images in a simple scenario is proposed, which provides label data for training the first-stage network. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lanes detected by the first-stage model, and unrecognized lanes are masked out to avoid interference in subsequent model training. The images processed in this way are then used as label data for training the second-stage model. Experiments on the KITTI and Caltech datasets show that the second-stage model achieves both high accuracy and high speed. |
Swing-Up Control of the Pendubot: An Impulse–Momentum Approach | The standard control problem of the pendubot refers to the task of stabilizing its equilibrium configuration with the highest potential energy. Linearization of the dynamics of the pendubot about this equilibrium results in a completely controllable system and allows a linear controller to be designed for local asymptotic stability. For the underactuated pendubot, the important task is, therefore, to design a controller that will swing up both links and bring the configuration variables of the system within the region of attraction of the desired equilibrium. This paper provides a new method for swing-up control based on a series of rest-to-rest maneuvers of the first link about its vertically upright configuration. The rest-to-rest maneuvers are designed such that each maneuver results in a net gain in energy of the second link. This results in swing-up of the second link and the pendubot configuration reaching the region of attraction of the desired equilibrium. A four-step algorithm is provided for swing-up control followed by stabilization. Simulation results are presented to demonstrate the efficacy of the approach. |
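For reference, a hedged sketch of the underactuation structure this abstract relies on: the pendubot is a two-link arm with torque applied only at the first joint, so its manipulator-form dynamics carry a zero in the second input channel. The notation below is generic, not the paper's.

```latex
% Generic manipulator-form dynamics of the pendubot, q = (q_1, q_2):
% only the first joint is actuated, which is what makes swing-up nontrivial.
\[
  M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q)
  \;=\;
  \begin{bmatrix} \tau \\ 0 \end{bmatrix}.
\]
```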
Lane Detection Based on the Random Sample Consensus | To improve the real-time performance and robustness of lane detection and obtain better lane estimates, a filter is applied during image preprocessing to strengthen lane information in the binary image, reduce noise, and remove irrelevant information. Lane edges are detected with the Canny operator, a corner detection method then extracts the coordinates of image corners, and finally RANSAC iteratively fits the corners to obtain the optimal lane parameters, from which the lane is drawn. Experiments on different scenes show that this method not only effectively rules out interference from linear pixels outside the road in multiple complex environments, but also identifies lanes quickly and accurately. The method thus improves the stability of lane detection to a certain extent, with good robustness and real-time performance. |
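A compact sketch of the pipeline this abstract outlines (Canny edges, corner extraction, RANSAC line fitting), assuming OpenCV and NumPy are available; the thresholds, corner counts, and single-line model are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

def fit_lane_ransac(gray, n_iters=200, inlier_tol=2.0):
    """Canny edges -> corner points -> RANSAC line fit (illustrative parameters)."""
    edges = cv2.Canny(gray, 50, 150)                       # lane edge map
    pts = cv2.goodFeaturesToTrack(edges, maxCorners=300,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None or len(pts) < 2:
        return None
    pts = pts.reshape(-1, 2)                               # (N, 2) corner coordinates

    best_inliers, best_line = 0, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        p1, p2 = pts[rng.choice(len(pts), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm < 1e-6:
            continue
        # perpendicular distance of every corner to the candidate line through p1, p2
        dist = np.abs(d[0] * (pts[:, 1] - p1[1]) - d[1] * (pts[:, 0] - p1[0])) / norm
        inliers = int((dist < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_line = inliers, (p1, p2)
    return best_line                                       # two points on the best-supported lane line
```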
Driver fatigue detection based on saccadic eye movements | Correctly determining a driver's level of fatigue is of vital importance for driving safety. Various methods exist for assessing driver fatigue, such as analyzing facial expression, eyelid activity, and head movements. This paper describes the design and prototype implementation of a driver fatigue determination system based on detection of saccadic eye movements: the driver's eye movement speed is used to assess the fatigue level. Information about the eyes is obtained with an infrared LED camera device. Pupil movements were recorded in two driving scenarios with different traffic densities: in the first scenario traffic density was set to low, while the second was based on dense, aggressive traffic. The saccadic eye movement data derived from these pupil movements was analyzed to determine the driver's fatigue level. Pupil acceleration, speed, and size in the two traffic scenarios were compared using data mining techniques such as adaptive peak segmentation, entropy, and data distribution analyses. Significantly different levels of fatigue were found between tired and vigorous drivers across the scenario types. |
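A minimal sketch of deriving eye-movement speed from sampled pupil positions, which is the fatigue indicator described above; the sampling rate and saccade threshold are illustrative, not values from the paper.

```python
import numpy as np

def saccade_speeds(pupil_xy, sample_rate_hz=60.0, saccade_thresh_px_s=300.0):
    """Frame-to-frame pupil speed (pixels/s) and a mask of saccade-like samples."""
    pupil_xy = np.asarray(pupil_xy, dtype=float)            # shape (T, 2)
    speeds = np.linalg.norm(np.diff(pupil_xy, axis=0), axis=1) * sample_rate_hz
    return speeds, speeds > saccade_thresh_px_s

# Toy trace: a slow drift followed by one fast jump (a saccade-like event).
trace = [(100, 100), (101, 100), (102, 101), (130, 120), (131, 120)]
speeds, is_saccade = saccade_speeds(trace)
print(speeds.round(1), is_saccade)
```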
Hybrid DC Circuit Breaker and Fault Current Limiter With Optional Interruption Capability | Since most renewable energy resources are dc in nature, dc microgrids are receiving more attention in deregulated power systems as a means of improving system reliability and efficiency. In dc microgrids there is no zero-crossing in current or voltage, so circuit breaking and fault current limiting are serious concerns. Conventional thyristor-based dc fault current limiters are attractive solutions to this problem; however, they impose a permanent interruption on the system even in the case of a temporary fault. In this paper, a novel hybrid circuit breaker (HCB) is proposed that improves the power quality of the system by adopting different limiting behaviors for permanent and temporary faults. The proposed HCB transfers load power through a mechanical circuit breaker (MCB) under normal conditions. Upon a fault on either side of the HCB, the fault current is transferred from the MCB to a current-limiting path incorporating a zero-voltage-switching (ZVS) transition mechanism. The HCB limits the fault current for a predefined duration and then interrupts the faulty feeder; in the case of a temporary fault, the current is returned to the MCB under ZVS. The HCB operating principle is discussed for the different operating modes, and the capabilities of the proposed HCB are tested and verified by experiments carried out on a prototype. |
ShiDianNao: Shifting vision processing closer to the sensor | In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications.
Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks that are state-of-the-art for these applications are Convolutional Neural Networks (CNNs), and they have an important property: weights are shared among many neurons, considerably reducing the neural network's memory footprint. This property allows a CNN to be mapped entirely within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it becomes possible to eliminate all remaining DRAM accesses as well, i.e., those for inputs and outputs.
In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86 mm² and consuming only 320 mW, but still about 30× faster than high-end GPUs. |
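A back-of-the-envelope illustration of the weight-sharing argument made above, using a made-up small CNN layer; the layer sizes and the implied SRAM budget are hypothetical, not ShiDianNao's actual configuration.

```python
# Hypothetical 5x5 conv layer: 32 input maps -> 64 output maps over a 64x64 image.
in_maps, out_maps, k, h, w = 32, 64, 5, 64, 64

conv_weights = in_maps * out_maps * k * k             # shared kernel weights
fc_weights = (in_maps * h * w) * (out_maps * h * w)   # same mapping without weight sharing

print(f"conv weights : {conv_weights:,}")   # 51,200  (~100 KB at 16-bit precision)
print(f"dense weights: {fc_weights:,}")     # ~34 billion -- hopeless for on-chip SRAM
```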
High-Efficiency Isolated Boost DC–DC Converter for High-Power Low-Voltage Fuel-Cell Applications | A new design approach achieving very high conversion efficiency in low-voltage high-power isolated boost dc-dc converters is presented. The transformer eddy-current and proximity effects are analyzed, demonstrating that an extensive interleaving of primary and secondary windings is needed to avoid high winding losses. The analysis of transformer leakage inductance reveals that extremely low leakage inductance can be achieved, allowing stored energy to be dissipated. Power MOSFETs fully rated for repetitive avalanches allow primary-side voltage clamp circuits to be eliminated. The oversizing of the primary-switch voltage rating can thus be avoided, significantly reducing switch-conduction losses. Finally, silicon carbide rectifying diodes allow fast diode turn-off, further reducing losses. Detailed test results from a 1.5-kW full-bridge boost dc-dc converter verify the theoretical analysis and demonstrate very high conversion efficiency. The efficiency at minimum input voltage and maximum power is 96.8%. The maximum efficiency of the proposed converter is 98%. |
Automatic Medical Concept Extraction from Free Text Clinical Reports, a New Named Entity Recognition Approach | Hospital Information Systems today represent clinical information from Electronic Health Records (EHR) in a wide range of forms, and most of the information contained in clinical reports is written as natural language free text. In this context, we study the problem of automatic clinical named entity recognition from free text clinical reports. We use Snomed-CT (Systematized Nomenclature of Medicine – Clinical Terms) as a dictionary to identify all kinds of clinical concepts, so the problem we consider is mapping each clinical entity named in a free text report to its unique Snomed-CT ID. More generally, we have developed a new approach to the named entity recognition (NER) problem in specific domains and have applied it to recognizing clinical concepts in free text clinical reports. Our approach combines two types of NER techniques, dictionary-based and machine-learning-based: we use a domain-specific dictionary-based gazetteer (using Snomed-CT to obtain the standard clinical code for each concept), and the main technique we introduce uses an unsupervised shallow neural network, word2vec from Mikolov et al., to represent words as vectors, performing recognition based on the distance between candidates and dictionary terms. We have applied our approach to a dataset of 318,585 clinical reports in Spanish from the emergency service of the Hospital “Rafael Méndez” in Lorca (Murcia), Spain, and preliminary results are encouraging. Key-Words: Snomed-CT, word2vec, doc2vec, clinical information extraction, skipgram, medical terminologies, semantic search, named entity recognition, NER, medical entity recognition |
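A minimal sketch of the distance-based matching step described above, assuming phrase vectors have already been derived from a trained skip-gram model and that the Snomed-CT gazetteer is available as a term-to-ID mapping; all vectors and names below are placeholders, not the authors' data.

```python
import numpy as np

# Placeholder data: precomputed phrase vectors for gazetteer terms (e.g., averaged
# word2vec vectors) and the Snomed-CT concept ID each term maps to.
snomed_vectors = {
    "myocardial infarction": np.array([0.90, 0.10, 0.30]),
    "type 2 diabetes":       np.array([0.05, 0.85, 0.40]),
}
snomed_ids = {"myocardial infarction": "22298006", "type 2 diabetes": "44054006"}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_mention(mention_vector, min_sim=0.7):
    """Return the Snomed-CT ID of the closest gazetteer term, or None if too distant."""
    term, sim = max(((t, cosine(mention_vector, v)) for t, v in snomed_vectors.items()),
                    key=lambda x: x[1])
    return snomed_ids[term] if sim >= min_sim else None

# A mention vector would normally come from the trained word2vec model.
print(link_mention(np.array([0.88, 0.12, 0.33])))   # -> "22298006"
```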
Approximation and Inapproximation for the Influence Maximization Problem in Social Networks under Deterministic Linear Threshold Model | Influence Maximization is the problem of finding a certain number of people in a social network such that their aggregated influence through the network is maximized. In the past this problem has been widely studied under a number of different models. In 2003, Kempe \emph{et al.} gave a $(1-{1 \over e})$-approximation algorithm for the \emph{linear threshold model} and the \emph{independent cascade model}, which are the two main models in social network analysis. In addition, Chen \emph{et al.} proved that the problem of exactly computing the influence given a seed set in the two models is $\#$P-hard. Both the \emph{linear threshold model} and the \emph{independent cascade model} are based on randomized propagation. However, such information might be obtained by surveys or data mining techniques, which makes a great difference to the properties of the problem. In this paper, we study the Influence Maximization problem in the \emph{deterministic linear threshold model}. In contrast, we show that in the \emph{deterministic linear threshold model}, there is no polynomial time $n^{1-\epsilon}$-approximation unless P=NP, even in the simple case that one person needs at most two active neighbors to become active. This inapproximability result is derived with self-contained proofs without using the PCP theorem. In the case that a person can be activated when one of its neighbors becomes active, there is a polynomial time ${e\over e-1}$-approximation, and we prove it is the best possible approximation under a reasonable complexity-theoretic assumption, $NP \not\subset DTIME(n^{\log\log n})$. We also show that the exact computation of the final influence given a seed set can be done in linear time in the \emph{deterministic linear threshold model}. The Least Seed Set problem, which aims to find a seed set with the least number of people needed to activate all the required people in a given social network, is also discussed. Using an analysis framework based on Set Cover, we show an $O(\log n)$-approximation in the case that a person becomes active when one of its neighbors is activated. |
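A short sketch of the linear-time influence computation mentioned above for the deterministic linear threshold model: each node has a fixed threshold and activates once the total weight of its active in-neighbors reaches it. The graph, weights, and thresholds below are toy values for illustration.

```python
from collections import deque

def final_influence(out_edges, thresholds, seeds):
    """Deterministic linear threshold propagation.

    out_edges[u] is a list of (v, weight) pairs; node v activates when the
    accumulated weight from its active in-neighbors reaches thresholds[v].
    Each node enters the queue at most once, so every edge is relaxed at most
    once and the run time is linear in |V| + |E|.
    """
    active = set(seeds)
    pressure = {v: 0.0 for v in thresholds}
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v, w in out_edges.get(u, []):
            if v in active:
                continue
            pressure[v] += w
            if pressure[v] >= thresholds[v]:
                active.add(v)
                queue.append(v)
    return active

# Toy instance: node "c" needs two active neighbors (threshold 2) to switch on.
graph = {"a": [("c", 1.0)], "b": [("c", 1.0)], "c": [("d", 1.0)]}
thresholds = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 1.0}
print(sorted(final_influence(graph, thresholds, {"a", "b"})))   # ['a', 'b', 'c', 'd']
```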
Accelerating the XGBoost algorithm using GPU computing | We present a CUDA-based implementation of a decision tree construction algorithm within the gradient boosting library XGBoost. The tree construction algorithm is executed entirely on the graphics processing unit (GPU) and shows high performance with a variety of datasets and settings, including sparse input matrices. Individual boosting iterations are parallelised, combining two approaches. An interleaved approach is used for shallow trees, switching to a more conventional radix sort-based approach for larger depths. We show speedups of between 3× and 6× using a Titan X compared to a 4-core i7 CPU, and of 1.2× using a Titan X compared to 2 Xeon CPUs (24 cores). We show that it is possible to process the Higgs dataset (10 million instances, 28 features) entirely within GPU memory. The algorithm is made available as a plug-in within the XGBoost library and fully supports all XGBoost features including classification, regression, and ranking tasks. Subjects: Artificial Intelligence, Data Mining and Machine Learning, Data Science |
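A usage sketch of enabling GPU tree construction through XGBoost's scikit-learn interface; the exact parameter spelling has changed across XGBoost releases (older versions use tree_method="gpu_hist", newer ones device="cuda"), so treat the settings below as an assumption to check against the installed version.

```python
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic stand-in for a large tabular dataset (28 features, as in the Higgs example).
X, y = make_classification(n_samples=100_000, n_features=28, random_state=0)

# GPU-accelerated histogram tree construction; on older XGBoost releases the
# equivalent setting is tree_method="gpu_hist" without the device argument.
clf = XGBClassifier(tree_method="hist", device="cuda", n_estimators=200, max_depth=6)
clf.fit(X, y)
print(clf.score(X, y))
```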
Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach | A general model of decentralized stochastic control called partial history sharing information structure is presented. In this model, at each step the controllers share part of their observation and control history with each other. This general model subsumes several existing models of information sharing as special cases. Based on the information commonly known to all the controllers, the decentralized problem is reformulated as an equivalent centralized problem from the perspective of a coordinator. The coordinator knows the common information and selects prescriptions that map each controller's local information to its control actions. The optimal control problem at the coordinator is shown to be a partially observable Markov decision process (POMDP) which is solved using techniques from Markov decision theory. This approach provides 1) structural results for optimal strategies and 2) a dynamic program for obtaining optimal strategies for all controllers in the original decentralized problem. Thus, this approach unifies the various ad-hoc approaches taken in the literature. In addition, the structural results on optimal control strategies obtained by the proposed approach cannot be obtained by the existing generic approach (the person-by-person approach) for obtaining structural results in decentralized problems; and the dynamic program obtained by the proposed approach is simpler than that obtained by the existing generic approach (the designer's approach) for obtaining dynamic programs in decentralized problems. |
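A schematic, notation-level sketch of the coordinator's dynamic program implied by the common-information reformulation described above, where pi_t denotes the coordinator's belief given the common information and gamma_t the tuple of prescriptions; this is a generic form written for illustration, not the paper's exact statement.

```latex
% Coordinator's dynamic program (schematic): choose prescriptions gamma_t that map
% each controller's local information M_t^i to its action, given the common-information
% belief pi_t; the resulting problem is a POMDP over pi_t.
\[
  V_T(\pi_T) \;=\; \min_{\gamma_T}\;
  \mathbb{E}\!\left[ c_T\bigl(X_T, \gamma_T^1(M_T^1), \ldots, \gamma_T^n(M_T^n)\bigr)
  \,\middle|\, \pi_T, \gamma_T \right],
\]
\[
  V_t(\pi_t) \;=\; \min_{\gamma_t}\;
  \mathbb{E}\!\left[ c_t\bigl(X_t, \gamma_t^1(M_t^1), \ldots, \gamma_t^n(M_t^n)\bigr)
  + V_{t+1}(\pi_{t+1}) \,\middle|\, \pi_t, \gamma_t \right].
\]
```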