Quality-of-life benefits of catheter ablation of persistent atrial fibrillation: a reanalysis of data from the SARA study.
AIMS The recently published SARA study was a prospective, multi-centre, randomized controlled trial that compared catheter ablation (CA) to antiarrhythmic drug therapy (ADT) in 146 patients with persistent atrial fibrillation (AF). The study found that recurrence of AF or atrial flutter occurred significantly less often in the CA arm than in the ADT arm (29.6% vs. 56.3%, p = 0.002). Despite this clear superiority in terms of efficacy, the authors were not able to demonstrate a corresponding quality-of-life (QoL) improvement. We sought to investigate this apparent disparity using alternative analytical methods. METHODS AND RESULTS We showed that a high coefficient of variation existed for all QoL measures at each time point, which may explain the lack of statistical difference originally reported. We reanalyzed the raw QoL data from the SARA study using paired-sample t-tests for the change in QoL for individual patients between baseline and 12-month (final) follow-up. For patients randomized to ADT, the difference in QoL after 12 months was not significant for any of the four QoL domains (global, physical, psychological and sexual), whereas for patients randomized to CA all comparisons were significant (global, p < 0.001; physical, p = 0.001; psychological, p < 0.001; sexual, p = 0.003). CONCLUSION In the SARA study, after 12 months' follow-up, CA significantly improved QoL for patients with persistent AF, whereas medical therapy had no appreciable effect.
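The reanalysis above rests on a standard paired-sample t-test over per-patient change scores. As an illustration only, using made-up QoL scores rather than the SARA data, a minimal sketch:

```python
from math import sqrt

def paired_t(baseline, followup):
    """Paired-sample t statistic for per-patient change (follow-up minus baseline)."""
    d = [f - b for b, f in zip(baseline, followup)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / sqrt(var / n), n - 1               # t statistic and degrees of freedom

# Hypothetical QoL scores for five patients (not SARA data)
t, df = paired_t([50, 55, 60, 52, 58], [62, 60, 71, 59, 66])
```

In practice one would use a library routine such as `scipy.stats.ttest_rel`, which also returns the p-value for the computed t statistic.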
Energy-Efficient I/O Thread Schedulers for NVMe SSDs on NUMA
Non-volatile memory express (NVMe) based SSDs and the NUMA platform are widely adopted in servers to achieve faster storage speed and more powerful processing capability. As of now, very little research has been conducted to investigate the performance and energy efficiency of the state-of-the-art NUMA architecture integrated with NVMe SSDs, an emerging technology used to host parallel I/O threads. As this technology continues to be widely developed and adopted, we need to understand the runtime behaviors of such systems in order to design software runtime systems that deliver optimal performance while consuming only the necessary amount of energy. This paper characterizes the runtime behaviors of a Linux-based NUMA system employing multiple NVMe SSDs. Our comprehensive performance and energy-efficiency study using massive numbers of parallel I/O threads shows that the penalty due to CPU contention is much smaller than that due to remote access of NVMe SSDs. Based on this insight, we develop a dynamic "lesser evil" algorithm called ESN, to minimize the impact of these two types of penalties. ESN is an energy-efficient profiling-based I/O thread scheduler for managing I/O threads accessing NVMe SSDs on NUMA systems. Our empirical evaluation shows that ESN can achieve optimal I/O throughput and latency while consuming up to 50% less energy and using fewer CPUs.
Component-based robotic engineering (Part I) [Tutorial]
This article is the first of a two-part series intended as an introduction to component-based software engineering (CBSE) in robotics. In this tutorial, we regard a component as a piece of software that implements robotic functionality (e.g., path planning). The focus of this article is on design principles and implementation guidelines that enable the development of reusable and maintainable software-building blocks, which can be assembled to build robotic applications.
Revisiting Word Embedding for Contrasting Meaning
Contrasting meaning is a basic aspect of semantics. Recent word-embedding models based on the distributional semantics hypothesis are known to be weak at modeling lexical contrast. We present in this paper embedding models that achieve an F-score of 92% on the widely used, publicly available dataset of GRE “most contrasting word” questions (Mohammad et al., 2008). This is the highest performance reported so far on this dataset. Surprisingly at first glance, and unlike what was suggested in most previous work, where relatedness statistics learned from corpora are claimed to yield extra gains over lexicon-based models, we obtained our best result relying solely on lexical resources (Roget’s and WordNet); corpus statistics did not lead to further improvement. However, this should not simply be taken to mean that distributional statistics are not useful. We examine several basic concerns in modeling contrasting meaning and provide detailed analysis, with the aim of shedding some light on future directions for this basic semantics-modeling problem.
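The GRE "most contrasting word" task format can be made concrete with a toy lexicon-based scorer. The antonym pairs below are a tiny stand-in for Roget's/WordNet, and the scoring is a sketch, not the paper's actual embedding model:

```python
# Tiny stand-in antonym lexicon (the real models draw on Roget's and WordNet)
ANTONYMS = {("hot", "cold"), ("big", "small"), ("happy", "sad")}

def contrast_score(w1, w2):
    """1.0 if the pair is listed as antonymous in the lexicon, else 0.0."""
    return 1.0 if (w1, w2) in ANTONYMS or (w2, w1) in ANTONYMS else 0.0

def most_contrasting(target, choices):
    """Answer a GRE-style question: pick the choice most contrasting with target."""
    return max(choices, key=lambda c: contrast_score(target, c))
```

The paper's contribution lies in replacing this binary lexicon lookup with embeddings whose distances reflect degree of contrast, so that unseen word pairs can also be scored.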
Oxidation of Columbium and Coated Columbium Alloys
Abstract: Oxidation of columbium (niobium) alloys FS-85 and SU-31, and of a complex disilicide coating on the SU-31 and FS-85 alloy systems, has been studied in the temperature range 1400 to 2700 F at 1 atm and an air flow rate of 50 cm/sec. For the uncoated alloys, the flow-rate and pressure dependence of oxidation was also investigated. The experimental methods included thermogravimetric measurements of oxidation rates and studies of reacted specimens by means of X-ray diffraction, metallographic techniques, and electron microprobe analysis. Two anomalies in the temperature dependence of the oxidation rate were observed for both alloys and were related to the oxides formed. The SU-31 alloy has the best oxidation resistance. The oxidation mechanism of the alloys is discussed along with the mechanism of protection afforded by the complex disilicide coating system. The coating did not fail after 1000 hours of oxidation.
Span: An energy-efficient coordination algorithm for topology maintenance in Ad Hoc wireless networks
This paper presents Span, a power-saving technique for multi-hop ad hoc wireless networks that reduces energy consumption without significantly diminishing the capacity or connectivity of the network. Span builds on the observation that when a region of a shared-channel wireless network has a sufficient density of nodes, only a small number of them need be on at any time to forward traffic for active connections. Span is a distributed, randomized algorithm in which nodes make local decisions on whether to sleep or to join a forwarding backbone as a coordinator. Each node bases its decision on an estimate of how many of its neighbors will benefit from it being awake and on the amount of energy available to it. We give a randomized algorithm in which coordinators rotate over time, demonstrating how localized node decisions lead to a connected, capacity-preserving global topology. The improvement in system lifetime due to Span increases as the ratio of idle-to-sleep energy consumption increases, and as the density of the network increases. For example, our simulations show that with a practical energy model, the system lifetime of an 802.11 network in power-saving mode with Span is a factor of two better than without it. Span integrates nicely with 802.11: when run in conjunction with the 802.11 power-saving mode, Span improves communication latency, capacity, and system lifetime.
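The coordinator-eligibility decision can be sketched as follows. This is a simplified reading of Span's announcement condition only (the full protocol adds randomized backoff weighted by remaining energy, plus coordinator rotation): a node volunteers when two of its neighbors cannot reach each other directly or via existing coordinators.

```python
from itertools import combinations

def should_volunteer(node, adj, coordinators):
    """Simplified Span eligibility check.

    adj: dict mapping node -> set of neighbor nodes.
    A node volunteers as coordinator if some pair of its neighbors cannot
    communicate directly or through one or more existing coordinators.
    """
    for u, v in combinations(sorted(adj[node]), 2):
        if v in adj[u]:
            continue                      # neighbors share a direct link
        frontier, seen, found = [u], {u}, False
        while frontier and not found:     # search restricted to coordinators
            w = frontier.pop()
            for x in adj[w]:
                if x == v:
                    found = True
                    break
                if x in coordinators and x not in seen:
                    seen.add(x)
                    frontier.append(x)
        if not found:
            return True                   # this pair is disconnected: volunteer
    return False
```

For a star topology with no coordinators the center must volunteer; once a bridging coordinator exists between two neighbors, the check returns False and the node can sleep.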
Genome-wide prediction of cis-regulatory regions using supervised deep learning methods
In the human genome, 98% of DNA sequences are non-protein-coding regions that were previously disregarded as junk DNA. In fact, non-coding regions host a variety of cis-regulatory regions which precisely control the expression of genes. Thus, identifying active cis-regulatory regions in the human genome is critical for understanding gene regulation and assessing the impact of genetic variation on phenotype. Developments in high-throughput sequencing and machine learning technologies make it possible to predict cis-regulatory regions genome-wide. Building on rich data resources such as the Encyclopedia of DNA Elements (ENCODE) and the Functional Annotation of the Mammalian Genome (FANTOM) projects, we introduce DECRES, a supervised deep-learning approach for the identification of enhancer and promoter regions in the human genome. Due to their ability to discover patterns in large and complex data, deep learning methods enable a significant advance in our knowledge of the genomic locations of cis-regulatory regions. Using models for well-characterized cell lines, we identify key experimental features that contribute to the predictive performance. Applying DECRES, we delineate locations of 300,000 candidate enhancers genome-wide (6.8% of the genome, of which 40,000 are supported by bidirectional transcription data) and 26,000 candidate promoters (0.6% of the genome). The predicted annotations of cis-regulatory regions will provide broad utility for genome interpretation, from functional genomics to clinical applications. The DECRES model demonstrates the potential of deep learning technologies when combined with high-throughput sequencing data, and inspires the development of other advanced neural network models to further improve genome annotation.
CHALLENGES AND OPPORTUNITIES FOR IOT-ENABLED CYBERMANUFACTURING : SOME INITIAL FINDINGS FROM AN IOT-ENABLED MANUFACTURING TECHNOLOGY TESTBED
The Internet of Things (IoT) has gained tremendous momentum and importance in recent years. Having originated in the services and consumer-products industries, there is growing interest in using IoT technologies across many sectors. In manufacturing, advanced or smart manufacturing and cybermanufacturing are also drawing increasing attention. Because IoT devices such as IoT sensors are usually much cheaper and smaller than traditional sensors, there is potential for instrumenting manufacturing systems with a massive number of sensors; the big data subsequently collected from these sensors can then be utilized to advance manufacturing. This type of IoT application has not drawn much attention from either academic researchers or industrial practitioners. One possible reason is that the benefits of such applications have not been recognized or tested. Therefore, we built an IoT-enabled manufacturing technology testbed (MTT) to explore the potential of IoT sensors. In this work, the characteristics of a type of IoT temperature sensor were studied, and mathematical models were developed to capture these characteristics and to accurately reproduce the observed behaviors. Based on the initial findings from our MTT experiments, challenges and opportunities for IoT-enabled manufacturing are discussed.
Design and Dynamics of a Novel Solar Tracker With Parallel Mechanism
This paper proposes a two-axis-decoupled solar tracker based on a parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the singularity-free workspace. By using the virtual work principle, the inverse dynamics is derived to obtain the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque, so that reducers with a large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.
Parasitic mites of medical and veterinary importance--is there a common research agenda?
There are an estimated 0.5-1 million mite species on earth. Among the many mites that are known to affect humans and animals, only a subset are parasitic but these can cause significant disease. We aim here to provide an overview of the most recent work in this field in order to identify common biological features of these parasites and to inform common strategies for future research. There is a critical need for diagnostic tools to allow for better surveillance and for drugs tailored specifically to the respective parasites. Multi-'omics' approaches represent a logical and timely strategy to identify the appropriate mite molecules. Recent advances in sequencing technology enable us to generate de novo genome sequence data, even from limited DNA resources. Consequently, the field of mite genomics has recently emerged and will now rapidly expand, which is a particular advantage for parasitic mites that cannot be cultured in vitro. Investigations of the microbiota associated with mites will elucidate the link between parasites and pathogens, and define the role of the mite in transmission and pathogenesis. The databases generated will provide the crucial knowledge essential to design novel diagnostic tools, control measures, prophylaxes, drugs and immunotherapies against the mites and associated secondary infections.
10.6 THz figure-of-merit phase-change RF switches with embedded micro-heater
We report on GeTe-based phase-change RF switches with an embedded micro-heater for thermal switching. With heater parasitics reduced, the GeTe RF switches show an on-state resistance of 0.05 ohm*mm and an off-state capacitance of 0.3 pF/mm. The RF switch figure-of-merit is estimated to be 10.6 THz, which is about 15 times better than state-of-the-art silicon-on-insulator switches. With an on-state resistance of 1 ohm and an off-state capacitance of 15 fF, RF insertion loss was measured at <0.2 dB and isolation at >25 dB at 20 GHz. RF power handling was >5.6 W for both the on- and off-states of GeTe.
A Framework for Mining Signatures from Event Sequences and Its Applications in Healthcare Data
This paper proposes a novel temporal knowledge representation and learning framework to perform large-scale temporal signature mining of longitudinal heterogeneous event data. The framework enables the representation, extraction, and mining of high-order latent event structure and relationships within single and multiple event sequences. The proposed knowledge representation maps the heterogeneous event sequences to a geometric image by encoding events as a structured spatial-temporal shape process. We present a doubly constrained convolutional sparse coding framework that learns interpretable and shift-invariant latent temporal event signatures. We show how to cope with the sparsity in the data as well as in the latent factor model by inducing a double sparsity constraint on the β-divergence to learn an overcomplete sparse latent factor model. A novel stochastic optimization scheme performs large-scale incremental learning of group-specific temporal event signatures. We validate the framework on synthetic data and on an electronic health record dataset.
Nonlinear dynamics aspects of modern storage rings
Particle dynamics in storage rings is considered from the point of view of modern developments in nonlinear dynamics. The paper is intended as a brief introduction for accelerator physicists to the modern world of nonlinear mechanics and for mathematical physicists to the nonlinear world of modern storage rings. Work supported in part by the Department of Energy, contracts DE-AC03-84ER40182 and DE-AC03-76SF00515. Contributed to the Joint US/CERN School on Particle Accelerators, Sardinia, Italy, January 31 to February 5, 1985.
Review of PM generator designs for direct-drive wind turbines
A review of direct-drive permanent magnet generator technologies for wind energy is carried out in this paper. A preliminary study of the V-shaped interior permanent magnet (IPM) machine for application as a direct-drive wind-power generator was also conducted. The air-gap flux density of this type of IPM machine has been studied, which shows that by optimizing the V angle, high air-gap flux density can be maintained even with the small pole-pitch angle of a high-pole-number structure.
The PhysIO Toolbox for Modeling Physiological Noise in fMRI Data
BACKGROUND Physiological noise is one of the major confounds for fMRI. A common class of correction methods model noise from peripheral measures, such as ECGs or pneumatic belts. However, physiological noise correction has not emerged as a standard preprocessing step for fMRI data yet due to: (1) the varying data quality of physiological recordings, (2) non-standardized peripheral data formats and (3) the lack of full automatization of processing and modeling physiology, required for large-cohort studies. NEW METHODS We introduce the PhysIO Toolbox for preprocessing of physiological recordings and model-based noise correction. It implements a variety of noise models, such as RETROICOR, respiratory volume per time and heart rate variability responses (RVT/HRV). The toolbox covers all intermediate steps - from flexible read-in of data formats to GLM regressor/contrast creation - without any manual intervention. RESULTS We demonstrate the workflow of the toolbox and its functionality for datasets from different vendors, recording devices, field strengths and subject populations. Automatization of physiological noise correction and performance evaluation are reported in a group study (N=35). COMPARISON WITH EXISTING METHODS The PhysIO Toolbox reproduces physiological noise patterns and correction efficacy of previously implemented noise models. It increases modeling robustness by outperforming vendor-provided peak detection methods for physiological cycles. Finally, the toolbox offers an integrated framework with full automatization, including performance monitoring, and flexibility with respect to the input data. CONCLUSIONS Through its platform-independent Matlab implementation, open-source distribution, and modular structure, the PhysIO Toolbox renders physiological noise correction an accessible preprocessing step for fMRI data.
Graph stream algorithms: a survey
Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.
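One of the simple algorithms such surveys typically cover is streaming connectivity: maintain a spanning forest in O(n) space while edges arrive one at a time. A minimal union-find sketch (illustrative only, not code from the survey):

```python
class StreamConnectivity:
    """Spanning-forest sketch: process edges one at a time, keep O(n) state."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_edge(self, u, v):
        """Consume one edge of the stream; keep it only if it merges components."""
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv

    def connected(self, u, v):
        return self.find(u) == self.find(v)
```

The key point for the streaming model is that the state is proportional to the number of vertices, not the number of edges, so arbitrarily long edge streams can be processed in a single pass.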
Human Knowledge: Classical and Contemporary Approaches
*=NEW TO THE THIRD EDITION
General Introduction: Human Knowledge: Its Nature, Sources, and Limits
PART I. CLASSICAL SOURCES: Meno; Phaedo; Republic; Theaetetus; Posterior Analytics; De Anima; Outlines of Pyrrhonism; Contra Academicos; De Civitate Dei; Summa Theologiae; Meditations on First Philosophy; An Essay Concerning Human Understanding; Introduction to New Essays on the Human Understanding; A Treatise Concerning the Principles of Human Knowledge; An Enquiry Concerning Human Understanding; *An Inquiry into the Human Mind; Prolegomena to Any Future Metaphysics
PART II. CONTEMPORARY SOURCES: The Will to Believe; Appearance, Reality, and Knowledge By Acquaintance; Verification and Philosophy; The Pragmatic Element in Knowledge; Empiricism, Semantics, and Ontology; Two Dogmas of Empiricism; *Pragmatism, Relativism, and Irrationalism; Is Justified True Belief Knowledge?; An Alleged Defect in Gettier Counter-Examples; The Gettier Problem; A Pragmatic Conception of the A Priori; The Truths of Reason; A Priori Knowledge, Necessity, and Contingency; Concepts of Epistemic Justification; The Raft and the Pyramid: Coherence versus Foundations in the Theory of Knowledge; *A Contextualist Theory of Epistemic Justification; *Evidentialism; Reflective Equilibrium, Analytic Epistemology, and the Problem of Cognitive Diversity; Proof of an External World; Cause and Effect: Intuitive Awareness; Skepticism, Naturalism, and Transcendental Arguments; *Philosophical Scepticism and Epistemic Circularity; *Scepticism, 'Externalism', and the Goal of Epistemology; Epistemology Naturalized; Why Reason Can't Be Naturalized; Epistemic Folkways and Scientific Epistemology; *Quine as Feminist: The Radical Import of Naturalized Epistemology
Systematic Generalization: What Is Required and Can It Be Learned?
Numerous models for grounded language understanding have been proposed recently, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e., to how exactly the modules are connected. We furthermore investigate whether modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn a wrong layout and a spurious parametrization that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
Turn-on Performance of Reverse Blocking IGBT (RB IGBT) and Optimization Using Advanced Gate Driver
The turn-on performance of a reverse blocking insulated gate bipolar transistor (RB IGBT) is discussed in this paper. The RB IGBT is a specially designed IGBT able to sustain blocking voltage of both polarities. Such a switch shows superior conduction but much worse switching (turn-on) performance compared to a combination of an ordinary IGBT and a blocking diode. Because of that, optimization of the switching performance is a key issue, without which the RB IGBT is not well accepted in real applications. In this paper, the RB IGBT turn-on losses and reverse recovery current are analyzed for different gate driver techniques, and a new gate driver is proposed. Commonly used conventional gate drivers have no capability for switching-dynamics optimization. In contrast, the proposed gate driver provides a robust and simple way to control and optimize the reverse recovery current and turn-on losses. The collector current slope and reverse recovery current are controlled by means of gate-emitter voltage control in a feedforward manner. In addition, the collector-emitter voltage slope is controlled during the voltage falling phase by means of an inherent increase of the gate current. Therefore, the collector-emitter voltage tail and the total turn-on losses are reduced, independently of the reverse recovery current. The proposed gate driver was experimentally verified, and the results are presented and discussed.
Chapter 8. Location-Based Social Networks: Users
In this chapter, we introduce and define the meaning of location-based social network (LBSN) and discuss the research philosophy behind LBSNs from the perspective of users and locations. Under the circumstances of trajectory-centric LBSN, we then explore two fundamental research points concerned with understanding users in terms of their locations. One is modeling the location history of an individual using the individual’s trajectory data. The other is estimating the similarity between two different people according to their location histories. The inferred similarity represents the strength of connection between two users in a location-based social network, and can enable friend recommendations and community discovery. The general approaches for evaluating these applications are also presented.
Psychometric properties of the Chinese version of the Swanson, Nolan, and Pelham, Version IV Scale-Teacher Form.
OBJECTIVES To examine the psychometric properties of the Chinese version of Swanson, Nolan and Pelham IV Scale (SNAP-IV)-Teacher Form. METHODS The sample included a representative sample of 3,653 first to eighth graders (boys, 52.3%) and 190 children diagnosed with ADHD (aged 6-15). Teachers completed the Chinese versions of the SNAP-IV, and Strengths and Difficulties Questionnaire. RESULTS The confirmatory factor analysis revealed a four-factor structure (inattention, hyperactivity, impulsivity, and opposition) with an adequate fit (Comparative Fit Index = 0.990; root mean square error of approximation = 0.058). The test-retest reliability (intraclass correlations = 0.60-0.84), internal consistency (alpha = .88-.95), and concurrent validity (Pearson correlations = 0.61-0.84) were satisfactory. Children with both ADHD and oppositional defiant/conduct disorders had the highest scores, followed by children with ADHD only who had intermediate scores and then school-based participants who had the lowest scores. CONCLUSIONS Our findings suggest that the Chinese SNAP-IV-Teacher Form is a reliable and valid instrument for rating ADHD and oppositional symptoms (ClinicalTrials.Gov number, NCT00491361).
Pharmacokinetics and physiologic effects of intramuscularly administered xylazine hydrochloride-ketamine hydrochloride-butorphanol tartrate alone or in combination with orally administered sodium salicylate on biomarkers of pain in Holstein calves following castration and dehorning.
OBJECTIVE To determine the pharmacokinetic parameters of xylazine, ketamine, and butorphanol (XKB) administered IM and sodium salicylate (SAL) administered PO to calves and to compare drug effects on biomarkers of pain and distress following sham and actual castration and dehorning. ANIMALS 40 Holstein bull calves from 3 farms. PROCEDURES Calves weighing 108 to 235 kg (n = 10 calves/group) received one of the following treatments prior to sham (period 1) and actual (period 2) castration and dehorning: saline (0.9% NaCl) solution IM (placebo); SAL administered PO through drinking water at concentrations from 2.5 to 5 mg/mL from 24 hours prior to period 1 to 48 hours after period 2; butorphanol (0.025 mg/kg), xylazine (0.05 mg/kg), and ketamine (0.1 mg/kg) coadministered IM immediately prior to both periods; and a combination of SAL and XKB (SAL+XKB). Plasma drug concentrations, average daily gain (ADG), chute exit velocity, serum cortisol concentrations, and electrodermal activity were evaluated. RESULTS ADG (days 0 to 13) was significantly greater in the SAL and SAL+XKB groups than in the other 2 groups. Calves receiving XKB had reduced chute exit velocity in both periods. Serum cortisol concentrations increased in all groups from period 1 to period 2. However, XKB attenuated the cortisol response for the first hour after castration and dehorning and oral SAL administration reduced the response from 1 to 6 hours. Administration of XKB decreased electrodermal activity scores in both periods. CONCLUSIONS AND CLINICAL RELEVANCE SAL administered PO through drinking water decreased cortisol concentrations and reduced the decrease in ADG associated with castration and dehorning in calves.
Genome-wide haplotype association study identifies the FRMD4A gene as a risk locus for Alzheimer's disease
Recently, several genome-wide association studies (GWASs) have led to the discovery of nine new loci of genetic susceptibility to Alzheimer's disease (AD). However, the landscape of AD genetic susceptibility is far from complete, and in addition to the single-SNP (single-nucleotide polymorphism) analyses performed in conventional GWASs, complementary strategies need to be applied to overcome the limitations inherent to this type of approach. We performed a genome-wide haplotype association (GWHA) study in the EADI1 study (n=2025 AD cases and 5328 controls) by applying a sliding-window approach. After exclusion of loci already known to be involved in AD (APOE, BIN1 and CR1), 91 regions with suggestive haplotype effects were identified. In a second step, we attempted to replicate the best suggestive haplotype associations in the GERAD1 consortium (2820 AD cases and 6356 controls) and observed that 9 of them showed nominal association. In a third step, we tested the relevant haplotype associations in a combined analysis of five additional case–control studies (5093 AD cases and 4061 controls). We consistently replicated the association of a haplotype within FRMD4A on Chr.10p13 in all the datasets analyzed (OR: 1.68; 95% CI: 1.43–1.96; P=1.1 × 10^-10). We finally searched for association between SNPs within the FRMD4A locus and plasma Aβ concentrations in three independent non-demented populations (n=2579). We found that polymorphisms were associated with the plasma Aβ42/Aβ40 ratio (best signal, P=5.4 × 10^-7). In conclusion, combining a GWHA study with a conservative three-stage replication approach, we characterised FRMD4A as a new genetic risk factor for AD.
Distributed access control on IoT ledger-based architecture
Due to the increased number of attacks on Internet of Things (IoT) devices, the security of IoT networks has become critical. Some recent studies proposed the adoption of blockchain in IoT networks without a thorough discussion of the impact of the solution on device performance. Furthermore, blockchain deployment in the context of IoT can be challenging due to the devices' hardware limitations. To fill this gap, this paper proposes an IoT ledger-based architecture to ensure access control in heterogeneous scenarios. This research uses conventional devices found in IoT networks, such as Arduino, Raspberry Pi and Orange Pi boards. Finally, we perform a performance evaluation focused on access control of IoT devices and on information propagation through peers in a private IoT network scenario.
DIGITAL INNOVATION MANAGEMENT : REINVENTING INNOVATION MANAGEMENT RESEARCH IN A DIGITAL WORLD
Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.
Software-Defined Wireless Networking Opportunities and Challenges for Internet-of-Things: A Review
With the emergence of the Internet-of-Things (IoT), there is now growing interest in simplifying wireless network control. This is a very challenging task, comprising information acquisition, information analysis, decision-making, and action implementation on large-scale IoT networks, and it has prompted research exploring the integration of software-defined networking (SDN) and IoT for simpler and easier network control. SDN is a promising novel paradigm which has the capability to enable simplified and robust programmable wireless networks serving an array of physical objects and applications. This paper starts with the emergence of SDN and then highlights recent significant developments in the wireless and optical domains aimed at integrating SDN and IoT. Challenges in SDN and IoT integration are also discussed from both security and scalability perspectives.
Management of resource constrained devices in the internet of things
The embedded computing devices deployed within the Internet of Things are expected to be resource constrained. This resource constraint applies not only to memory and processing capabilities; the low-power radio standards in use further constrain the network interfaces. The IPv6 protocol provides a suitable basis for interoperability in the IoT, due to its large address space, its flexibility, and the number of existing protocols that function over IP. We investigate how existing IP-based network management protocols can be implemented on resource-constrained devices, and present the resource requirements for SNMP and NETCONF on an 8-bit AVR-based device.
The Histone Database: a comprehensive resource for histones and histone fold-containing proteins.
The Histone Database is a curated and searchable collection of full-length sequences and structures of histones and nonhistone proteins containing histone-like folds, compiled from major public databases. Several new histone fold-containing proteins have been identified, including the huntingtin-interacting protein HYPM. Additionally, based on the recent crystal structure of the Son of Sevenless protein, an interpretation of the sequence analysis of the histone fold domain is presented. The database contains an updated collection of multiple sequence alignments for the four core histones (H2A, H2B, H3, and H4) and the linker histones (H1/H5) from a total of 975 organisms. The database also contains information on the human histone gene complement and provides links to three-dimensional structures of histone and histone fold-containing proteins. The Histone Database is a comprehensive bioinformatics resource for the study of structure and function of histones and histone fold-containing proteins. The database is available at http://research.nhgri.nih.gov/histones/.
Autonomous tracking of vehicle rear lights and detection of brakes and turn signals
Automatic detection of vehicle alert signals is critical in autonomous vehicle applications and collision avoidance systems, as these detection systems can help prevent deadly and costly accidents. In this paper, we present a novel and lightweight algorithm that uses a Kalman filter and a codebook to achieve a high level of robustness. The algorithm is able to detect the braking and turning signals of the vehicle in front both during the daytime and at night (daytime detection being a major advantage over current research), and to correctly track a vehicle despite lane changes or periods of low or no visibility of the vehicle in front. We demonstrate that the proposed algorithm detects these signals accurately and reliably under different lighting conditions.
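The abstract names a Kalman filter as one half of the tracking pipeline but gives no implementation. As a hedged illustration of that component only, the sketch below tracks a single coordinate of a rear-light centroid with a 1-D constant-velocity Kalman filter; the scalar state layout and all noise parameters are assumptions for illustration, not values from the paper.

```python
# Minimal 1-D constant-velocity Kalman filter, illustrating the kind of
# state estimator used to track a rear-light position across frames.
# Noise parameters q (process) and r (measurement) are illustrative.

def kalman_track(measurements, q=1e-3, r=0.25):
    """Track position from noisy per-frame measurements; state = [position, velocity]."""
    x, v = measurements[0], 0.0          # initial state from first measurement
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    estimates = []
    for z in measurements[1:]:
        # --- predict (dt = 1 frame, F = [[1, 1], [0, 1]], Q = q*I) ---
        x, v = x + v, v
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1],                         P[1][1] + q]]
        # --- update with measurement z (H = [1, 0]) ---
        s = P[0][0] + r                          # innovation covariance
        k0, k1 = P[0][0] / s, P[1][0] / s        # Kalman gain
        y = z - x                                # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(x)
    return estimates
```

Riding out a brief occlusion of the lead vehicle amounts to running only the predict step for the missing frames, which is how filters of this kind maintain a track through low-visibility periods.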
Gratitude as a psychotherapeutic intervention.
Gratitude practice can be a catalyzing and relational healing force, often untapped in clinical practice. In this article, we provide an overview of current thinking about gratitude's defining and beneficial properties, followed by a brief review of the research on mental health outcomes that result from gratitude practice. Following an analysis of our case study of the use of gratitude as a psychotherapeutic intervention, we present various self-strategies and techniques for consciously choosing and cultivating gratitude. We conclude by describing ways in which gratitude might be capitalized upon for beneficial outcomes in therapeutic settings.
An interleaving and load sharing method for multiphase LLC converters
Interleaved frequency-controlled LLC resonant converters encounter a load-sharing problem due to the tolerances of the resonant components. In this paper, full-wave and half-wave switch-controlled capacitors (SCCs) are used in the LLC stages to solve this problem. By using the resonant capacitance as a control variable, the output current can be modulated even when all the LLC stages are synchronized at the same switching frequency. A design procedure is developed, and a 600 W prototype is built to verify its feasibility.
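The control idea rests on the series resonant frequency relation f_r = 1/(2π√(L_r·C_eq)): trimming a stage's effective resonant capacitance shifts its gain curve so that all stages can share one switching frequency. A minimal numeric sketch (the component values are invented for illustration and are not taken from the 600 W prototype):

```python
import math

def resonant_frequency(l_r, c_eq):
    """Series resonant frequency f_r = 1 / (2*pi*sqrt(L_r * C_eq))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_eq))

# Two nominally identical stages whose resonant capacitors differ by a
# 10% tolerance resonate at different frequencies -- the root cause of
# the load-sharing problem when both run at one switching frequency.
f_nominal = resonant_frequency(60e-6, 24e-9)        # illustrative L_r, C_eq
f_detuned = resonant_frequency(60e-6, 24e-9 * 1.1)

# An SCC trims the detuned stage's effective capacitance back toward the
# nominal value, re-aligning its gain curve with the other stages.
```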
Deep Convolutional Neural Network to Detect J-UNIWARD
This paper presents an empirical study on applying convolutional neural networks (CNNs) to detecting J-UNIWARD -- one of the most secure JPEG steganographic methods. Experiments guiding the architectural design of the CNNs were conducted on the JPEG-compressed BOSSBase containing 10,000 covers of size 512×512. Results have verified that both the pooling method and the depth of the CNNs are critical to performance. Results have also shown that a 20-layer CNN, in general, outperforms the most sophisticated feature-based methods, but its advantage gradually diminishes on hard-to-detect cases. To show that the performance generalizes to large-scale databases and to different cover sizes, one experiment was conducted on the CLS-LOC dataset of ImageNet containing more than one million covers cropped to a unified size of 256×256. The proposed 20-layer CNN cut the error achieved by a CNN recently proposed for large-scale JPEG steganalysis by 35%. Source code is available via GitHub: https://github.com/GuanshuoXu/deep_cnn_jpeg_steganalysis
Infrared face recognition: a comprehensive review of methodologies and databases
Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are (i) a summary of the inherent properties of infrared imaging which makes this modality promising in the context of face recognition; (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies; (iii) a description of the main databases of infrared facial images available to the researcher; and lastly (iv) a discussion of the most promising avenues for future research.
Incidence, correlates, and outcomes of acute, hospital-acquired anemia in patients with acute myocardial infarction.
BACKGROUND Anemia is common among patients hospitalized with acute myocardial infarction and is associated with poor outcomes. Less is known about the incidence, correlates, and prognostic implications of acute, hospital-acquired anemia (HAA). METHODS AND RESULTS We identified 2909 patients with acute myocardial infarction who had normal hemoglobin (Hgb) on admission in the multicenter TRIUMPH registry and defined HAA by criteria proposed by Beutler and Waalen. We used hierarchical Poisson regression to identify independent correlates of HAA and multivariable proportional hazards regression to identify the association of HAA with mortality and health status. At discharge, 1321 (45.4%) patients had HAA, of whom 348 (26.3%) developed moderate-severe HAA (Hgb <11 g/dL). The incidence of HAA varied significantly across hospitals (range, 33% to 69%; median rate ratio for HAA, 1.13; 95% confidence interval, 1.07 to 1.23, adjusting for patient characteristics). Although documented bleeding was more frequent with more severe HAA, fewer than half of the patients with moderate-severe HAA had any documented bleeding. Independent correlates of HAA included age, female sex, white race, chronic kidney disease, ST-segment elevation myocardial infarction, acute renal failure, use of glycoprotein IIb/IIIa inhibitors, in-hospital complications (cardiogenic shock, bleeding and bleeding severity), and length of stay. After adjustment for GRACE score and bleeding, patients with moderate-severe HAA had higher mortality rates (hazard ratio, 1.82; 95% confidence interval, 1.11 to 2.98 versus no HAA) and poorer health status at 1 year. CONCLUSIONS HAA develops in nearly half of acute myocardial infarction hospitalizations among patients treated medically or with percutaneous coronary intervention, commonly in the absence of documented bleeding, and is associated with worse mortality and health status. 
Better understanding of how HAA can be prevented and whether its prevention can improve patient outcomes is needed.
Convergence and Consistency Analysis for a 3-D Invariant-EKF SLAM
In this letter, we investigate the convergence and consistency properties of an invariant-extended Kalman filter (RI-EKF) based simultaneous localization and mapping (SLAM) algorithm. Basic convergence properties of this algorithm are proven. These proofs do not require the restrictive assumption that the Jacobians of the motion and observation models need to be evaluated at the ground truth. It is also shown that the output of RI-EKF is invariant under any <italic>stochastic rigid body transformation</italic> in contrast to <inline-formula><tex-math notation="LaTeX"> $\mathbb {SO}(3)$</tex-math></inline-formula> based EKF SLAM algorithm (<inline-formula><tex-math notation="LaTeX"> $\mathbb {SO}(3)$</tex-math></inline-formula>-EKF) that is only invariant under <italic>deterministic rigid body transformation</italic>. Implications of these invariance properties on the consistency of the estimator are also discussed. Monte Carlo simulation results demonstrate that RI-EKF outperforms <inline-formula> <tex-math notation="LaTeX">$\mathbb {SO}(3)$</tex-math></inline-formula>-EKF, Robocentric-EKF and the “First Estimates Jacobian” EKF, for three-dimensional point feature-based SLAM.
Stress fractures in elite male football players.
The objective was to investigate the incidence, type and distribution of stress fractures in professional male football players. Fifty-four football teams, comprising 2379 players, were followed prospectively for 189 team seasons during the years 2001-2009. Team medical staff recorded individual player exposure and time-loss injuries. The first team squads of 24 clubs selected by UEFA as belonging to the 50 best European teams, 15 teams of the Swedish Super League and 15 teams playing their home matches on artificial turf pitches were included. In total, 51 stress fractures occurred during 1,180,000 h of exposure, giving an injury incidence of 0.04 injuries/1000 h. A team of 25 players can therefore expect one stress fracture every third season. All fractures affected the lower extremities and 78% the fifth metatarsal bone. Stress fractures to the fifth metatarsal bone, tibia or pelvis caused absences of 3-5 months. Twenty-nine percent of the stress fractures were re-injuries. Players that sustained stress fractures were significantly younger than those that did not. Stress fractures are rare in men's professional football but cause long absences. Younger age and intensive pre-season training appear to be risk factors.
Grid Integration of Wind Farms Using SVC and STATCOM
This paper considers the use of the static VAr compensator (SVC) and static synchronous compensator (STATCOM) for wind farm integration. Wind farm models based on fixed speed induction generators (FSIG), using AC connection and equipped with either SVC or STATCOM, are developed. Stability problems with the FSIG are described. An investigation is conducted into the impact of STATCOM/SVC ratings and network strength on system stability after network faults, and comparison is also made between the performances of the two devices. It was found that the SVC and STATCOM considerably improve the system stability during and after disturbances, especially when the network is weak. The results also showed that the STATCOM gave a much better dynamic performance and provided better reactive power support to the network, as its maximum reactive current output is virtually independent of the PCC voltage.
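The comparison hinges on the textbook capability limits of the two devices: at its capacitive limit an SVC behaves as a fixed susceptance, so its available reactive power falls with the square of the voltage (Q ∝ V²), while a STATCOM can hold rated current regardless of voltage (Q ∝ V). A hedged per-unit sketch of this difference (not the paper's simulation models):

```python
def svc_q_limit(v_pu, q_rated=1.0):
    """SVC at full capacitive output acts as a fixed susceptance: Q = B * V^2."""
    return q_rated * v_pu ** 2

def statcom_q_limit(v_pu, q_rated=1.0):
    """STATCOM maintains its rated current regardless of voltage: Q = I * V."""
    return q_rated * v_pu

# During a fault-depressed voltage of 0.5 pu, an equally rated STATCOM can
# inject twice the reactive power of the SVC -- which is why it provides
# better voltage support on weak networks.
```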
Andro-AutoPsy: Anti-malware system based on similarity matching of malware and malware creator-centric information
Mobile security threats have recently emerged because of the fast growth in mobile technologies and the essential role that mobile devices play in our daily lives. To address these threats, and malware in particular, various techniques have been developed in the literature, including ones that utilize static, dynamic, on-device, off-device, and hybrid approaches for identifying, classifying, and defending against mobile threats. These techniques succeed in some settings and fail in others, creating a trade-off between performance and operation. In this paper, we contribute to the mobile security defense posture by introducing Andro-AutoPsy, an anti-malware system based on similarity matching of malware-centric and malware creator-centric information. Using Andro-AutoPsy, we detect and classify malware samples into similar subgroups by exploiting the profiles extracted from integrated footprints, which are implicitly equivalent to distinct characteristics. The experimental results demonstrate that Andro-AutoPsy is scalable, performs precisely in detecting and classifying malware with low false positives and false negatives, and is capable of identifying zero-day mobile malware.
Design and optimization of thermo-mechanical reliability in wafer level packaging
In this paper, a variety of wafer level packaging (WLP) structures, including both fan-in and fan-out WLPs, are investigated for solder joint thermo-mechanical reliability performance, from a structural design point of view. The effects of redistribution layer (RDL), bump structural design/material selection, polymer-cored ball application, and PCB design/material selection are studied. The investigation focuses on four different WLP technologies: standard WLP (ball on I/O WLP), ball on polymer WLP without under bump metallurgy (UBM) layer, ball on polymer WLP with UBM layer, and encapsulated copper post WLP. Ball on I/O WLP, in which solder balls are directly attached to the metal pads on silicon wafer, is used as a benchmark for the analysis. 3-D finite element modeling is performed to investigate the effects of WLP structures, UBM layer, polymer film material properties (in ball on polymer WLP), and encapsulated epoxy material properties (in copper post WLP). Both ball on polymer and copper post WLPs have shown great reliability improvement in thermal cycling. For ball on polymer WLP structures, polymer film between silicon and solder balls creates a ‘cushion’ effect to reduce the stresses in solder joints. Such cushion effect can be achieved either by an extremely compliant film or a ‘hard’ film with a large coefficient of thermal expansion. Encapsulated copper post WLP shows the best thermo-mechanical performance among the four WLP structures.
Furthermore, for a fan-out WLP, it has been found that the critical solder balls are the outermost solder balls under the die area, where the maximum thermal mismatch takes place. In a fan-out WLP package, chip size, rather than package size, determines the limit of solder joint reliability. This paper also discusses polymer-cored solder ball applications to enhance the thermo-mechanical reliability of solder joints. Finally, both experiments and finite element analysis have demonstrated that making corner balls non-electrically connected can greatly improve WLP thermo-mechanical reliability.
Cocaine-induced metaplasticity in the nucleus accumbens: Silent synapse and beyond
The neuroadaptation theory of addiction suggests that, similar to the development of most memories, exposure to drugs of abuse induces adaptive molecular and cellular changes in the brain which likely mediate addiction-related memories or the addictive state. Compared to other types of memories, addiction-related memories develop fast and last extremely long, suggesting that the cellular and molecular processes that mediate addiction-related memories are exceptionally adept and efficient. We recently demonstrated that repeated exposure to cocaine generated a large portion of "silent" glutamatergic synapses within the nucleus accumbens (NAc). Silent glutamatergic synapses are synaptic connections in which only N-methyl-D-aspartic acid receptor (NMDAR)-mediated responses are readily detected whereas alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) are absent or highly labile. Extensive experimental evidence suggests that silent synapses are conspicuously efficient plasticity sites at which long-lasting plastic changes can be more easily induced and maintained. Thus, generation of silent synapses can be regarded as a process of metaplasticity, which primes the NAc for subsequent durable and robust plasticity for addiction-related memories. Focusing on silent synapse-based metaplasticity, this review discusses how key brain regions, such as the NAc, utilize the metaplasticity mechanism to optimize the plasticity machineries to achieve fast and durable plastic changes following exposure to cocaine. A summary of recent related results suggests that upon cocaine exposure, newly generated silent synapses may prime excitatory synapses within the NAc for long-term potentiation (LTP), thus setting the direction of future plasticity. Furthermore, because cocaine-generated silent synapses are enriched in NMDARs containing the NR2B subunit, the enhanced NR2B-signaling may set up a selective recruitment of certain types of AMPARs. 
Thus, silent synapse-based metaplasticity may lead to not only quantitative but also qualitative alterations in excitatory synapses within the NAc. This review is one of the first systematic analyses regarding the hypothesis that drugs of abuse induce metaplasticity, which regulates the susceptibility, the direction, and the molecular details of subsequent plastic changes. Taken together, metaplasticity ultimately serves as a key step in mediating cascades of addiction-related plastic alterations.
Health of homeless women with recent experience of rape
There is limited understanding of the physical health, mental health, and substance use or abuse correlates of sexual violence against homeless women. This study documents the association of rape with health and substance use or abuse characteristics reported by a probability sample of 974 homeless women in Los Angeles. Controlling for potential confounders, women who reported rape fared worse than those who did not on every physical and mental health measure and were also more likely to have used and abused drugs other than alcohol. Results should serve to alert clinicians about groups of homeless women who may benefit from rape screening and treatment interventions.
Using Bayes to get the most out of non-significant results
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory's predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
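The "simple online Bayes calculator" the paper points to computes a Bayes factor by integrating the likelihood of the observed effect over a prior that encodes the theory's predictions. The sketch below follows that general recipe for a normal likelihood of an observed effect and a half-normal H1 prior; it is an illustrative re-implementation of the approach, not the calculator's exact code, and the prior choice is one of several the calculator supports.

```python
import math

def bayes_factor(mean, se, h1_sd, n_steps=10000):
    """
    Bayes factor B10 for an observed effect (sample mean, standard error),
    comparing H1: effect ~ half-normal(0, h1_sd) (a directional prediction)
    against H0: effect = 0, via numerical integration over the H1 prior.
    """
    def norm_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    like_h0 = norm_pdf(mean, 0.0, se)          # likelihood under the null

    # Marginal likelihood under H1: integrate likelihood * prior over theta
    # (midpoint rule on [0, 5*h1_sd], which holds essentially all prior mass).
    upper = 5 * h1_sd
    step = upper / n_steps
    like_h1 = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * step
        prior = 2 * norm_pdf(theta, 0.0, h1_sd)   # half-normal density
        like_h1 += norm_pdf(mean, theta, se) * prior * step
    return like_h1 / like_h0
```

With the conventional thresholds, B10 > 3 counts as substantial evidence for the theory and B10 < 1/3 as substantial evidence for the null; values in between indicate data insensitivity, which is exactly the third verdict that power and p-values cannot deliver.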
Unsupervised Models for Named Entity Classification
This paper discusses the use of unlabeled examples for the problem of named entity classification. A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple “seed” rules. The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type. We present two algorithms. The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98). The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
Tango: distributed data structures over a shared log
Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.
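Tango's core abstraction, a data structure whose state is nothing but a deterministic replay of a shared log, can be sketched in a few lines. The in-process stand-in below omits everything that makes the real system interesting (the distributed shared log, object versioning, transactions, fault tolerance), but it shows the essential move: mutations become log appends, and reads become catch-up-then-lookup.

```python
class SharedLog:
    """An append-only log shared by all replicas (in-process stand-in)."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)
        return len(self.entries) - 1          # position of the new entry

    def read(self, from_pos):
        return self.entries[from_pos:]


class LogBackedMap:
    """A map whose state is a deterministic replay of the shared log."""
    def __init__(self, log):
        self.log = log
        self.state = {}
        self.pos = 0                          # how far this replica has replayed

    def put(self, key, value):
        self.log.append(("put", key, value))  # mutation = append to the log

    def get(self, key):
        self._sync()                          # read = catch up, then look up
        return self.state.get(key)

    def _sync(self):
        for op, key, value in self.log.read(self.pos):
            if op == "put":
                self.state[key] = value
        self.pos = len(self.log.entries)
```

Because every replica replays the same totally ordered log, all replicas converge on the same state, which is where linearizability and persistence come from in the real system.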
Improved Use of Continuous Attributes in C4.5
A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits.
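The MDL-inspired penalty subtracts from a continuous test's information gain the cost of encoding which candidate threshold was chosen, log2(N−1)/|D|, where N is the number of distinct attribute values and |D| the number of training cases. A minimal sketch of the adjusted gain (threshold midpoints and the other C4.5 machinery are omitted):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split_gain(values, labels):
    """
    Information gain of the best binary threshold on a continuous attribute,
    minus the MDL-inspired penalty log2(N - 1) / |D| for choosing among the
    N - 1 candidate thresholds (N = number of distinct values).
    """
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    distinct = len(set(values))
    base = entropy(labels)
    best = 0.0
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                      # equal values: not a valid cut point
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        gain = (base
                - (len(left) / n) * entropy(left)
                - (len(right) / n) * entropy(right))
        best = max(best, gain)
    penalty = math.log2(distinct - 1) / n if distinct > 1 else 0.0
    return best - penalty
```

An attribute with many distinct values pays a larger penalty, so an uninformative continuous attribute can end up with negative adjusted gain and be eliminated from consideration, which is how the modification yields smaller trees.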
Hilbert-Huang transform based physiological signals analysis for emotion recognition
This paper presents a feature extraction technique based on the Hilbert-Huang Transform (HHT) method for emotion recognition from physiological signals. Four kinds of physiological signals were used for analysis: electrocardiogram (ECG), electromyogram (EMG), skin conductivity (SC) and respiration changes (RSP). Each signal is decomposed into a finite set of AM-FM mono components (fission process) by the Empirical Mode Decomposition (EMD) which is the key part of the HHT. The information components of interest are then combined to create feature vectors (fusion process) for the next classification stage. In addition, classification is performed by using Support Vector Machines (SVM). The classification scores show that HHT based methods outperform traditional statistical techniques and provide a promising framework for both analysis and recognition of physiological signals in emotion recognition.
The Cooltown User Experience
Keywords: ubiquitous, user interface, web, handheld, wireless. The Cooltown project at HP Labs applies Web technology to develop systems that support the users of wireless, handheld devices interacting with their environment, anywhere they may be. The basic elements of Web technology useful for ubiquitous computing are described in several papers and on-line resources from the Cooltown project [1-5]. Here we will give a flavor of the Cooltown user experience, discuss ways that context can be used in web systems, and explain why we think a web-based system for ubiquitous computing can achieve network effects. In our conclusion we will mention some of the many remaining challenges and our initial user studies platform.
Personality traits and group-based information behaviour: an exploratory study
Although the importance of individual characteristics and psychological factors has been conceptualized in many models of information seeking behaviour (e.g., Case 2007; Wilson 1999), research into personality issues has only recently attracted attention in information science. This may be explained by the work of Heinström (2002; 2003; 2006a; 2006b; 2006c), but may also be explained by the emerging interest and research into affective dimensions of information behaviour (Nahl and Bilal 2007).
Metopic suture in fetuses with Apert syndrome at 22-27 weeks of gestation.
OBJECTIVES To examine the possible association of skull deformity and the development of the cranial sutures in fetuses with Apert syndrome. METHODS Three-dimensional (3D) ultrasound was used to examine the metopic and coronal sutures in seven fetuses with Apert syndrome at 22-27 weeks of gestation. The gap between the frontal bones in the transverse plane of the head at the level of the cavum septi pellucidi was measured and compared to findings in 120 anatomically normal fetuses undergoing routine ultrasound examination at 16-32 weeks. RESULTS In the normal group, the gap between the frontal bones in the metopic suture at the level of the cavum septi pellucidi, decreased significantly with gestation from a mean of 2.2 mm (5th and 95th centiles: 1.5 mm and 2.9 mm) at 16 weeks to 0.9 mm (5th and 95th centiles: 0.3 mm and 1.6 mm) at 32 weeks. In the seven cases with Apert syndrome, two-dimensional ultrasound examination demonstrated the characteristic features of frontal bossing, depressed nasal bridge and bilateral syndactyly. On 3D examination there was complete closure of the coronal suture and a wide gap in the metopic suture (15-23 mm). CONCLUSION In normal fetuses, cranial bones are believed to grow in response to the centrifugal pressure from the expanding brain and proximity of the dura to the suture is critical in maintaining its patency. In Apert syndrome, the frontal bossing may be a mere consequence of a genetically predetermined premature closure of the coronal suture. Alternatively, there is a genetically predetermined deformation of the brain, which in turn, through differential stretch of the dura in the temporal and frontal regions, causes premature closure of the coronal suture and impaired closure of the metopic suture.
An empirical comparison of models for dropout prophecy in MOOCs
MOOCs are Massive Open Online Courses, offered on the web, which have become a focal point for students who prefer e-learning. Despite enormous enrollment, the number of students who drop out of these courses is very high; for MOOCs to succeed, their dropout rates must decrease. As the proportions of continuing and dropout students vary considerably, a class imbalance problem is observed in nearly all MOOC datasets. Researchers have developed models to predict dropout in MOOCs using different techniques. The features that drive these models can be obtained during registration and from students' interactions with the MOOC portal, and using the results of these models, appropriate actions can be taken to retain students. In this paper, we create four models using various machine learning techniques over a publicly available dataset. After empirical analysis and evaluation of these models, we found that the model built with the Naïve Bayes technique performed best on the imbalanced MOOC data.
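Since the best-performing model was built with Naïve Bayes, it is worth seeing how little machinery the technique needs. The sketch below is a generic Gaussian Naïve Bayes classifier; the toy features (login counts and quiz scores) and labels are invented for illustration and are not the paper's actual feature set.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Tiny Gaussian Naive Bayes classifier (the technique, not the paper's model)."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        self.stats = {}
        n = len(y)
        for label, rows in groups.items():
            means = [sum(col) / len(rows) for col in zip(*rows)]
            variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                         for col, m in zip(zip(*rows), means)]
            self.stats[label] = (len(rows) / n, means, variances)
        return self

    def predict(self, row):
        def log_pdf(x, m, var):
            # log of the Gaussian density; log-space avoids underflow
            return -0.5 * (math.log(2 * math.pi * var) + (x - m) ** 2 / var)

        scores = {label: math.log(prior) + sum(log_pdf(x, m, v)
                  for x, m, v in zip(row, means, variances))
                  for label, (prior, means, variances) in self.stats.items()}
        return max(scores, key=scores.get)
```

Because the class prior enters only as an additive log term, Naïve Bayes degrades gracefully on imbalanced data, one plausible reason it fared well on the MOOC dataset.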
A Novel Economic Sharing Model in a Federation of Selfish Cloud Providers
This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. It first establishes that the uncertainty in future revenue can act as a participation incentive for sharing in the repeated game. We then demonstrate how an efficient sharing strategy can be obtained by solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy in which a CP is threatened with the elimination of future VM hosting by the other CPs. Simulation results demonstrate the performance of the proposed model in terms of increased profit and reduced variance in spot market VM availability and prices.
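The punishment mechanism named in the abstract, grim trigger, is simple enough to state directly: cooperate until any other player has ever defected, then defect forever. A sketch of that rule alone (the paper's full strategy also involves the dynamic-programming update over workloads and past interactions, which is not reproduced here):

```python
def grim_trigger(history, my_id):
    """
    Grim trigger for the VM-sharing repeated game: share spare capacity
    until any OTHER provider has ever defected, then defect forever.
    `history` is a list of rounds; each round maps provider id -> "share"/"defect".
    """
    for round_actions in history:
        if any(action == "defect"
               for pid, action in round_actions.items() if pid != my_id):
            return "defect"
    return "share"
```

The credible threat of permanent exclusion from future VM hosting is what sustains cooperation without a centralized broker or predefined agreements.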
Increased Grey Matter Associated with Long-Term Sahaja Yoga Meditation: A Voxel-Based Morphometry Study
OBJECTIVES To investigate regional differences in grey matter volume associated with the practice of Sahaja Yoga Meditation. DESIGN Twenty-three experienced practitioners of Sahaja Yoga Meditation and twenty-three non-meditators matched on age, gender and education level were scanned using structural Magnetic Resonance Imaging, and their grey matter volume was compared using Voxel-Based Morphometry. RESULTS Grey matter volume was larger in meditators relative to non-meditators across the whole brain. In addition, grey matter volume was larger in several predominantly right hemispheric regions: in insula, ventromedial orbitofrontal cortex, inferior temporal and parietal cortices as well as in left ventrolateral prefrontal cortex and left insula. No areas with larger grey matter volume were found in non-meditators relative to meditators. CONCLUSIONS The study shows that long-term practice of Sahaja Yoga Meditation is associated with larger grey matter volume overall, and with regional enlargement in several right hemispheric cortical and subcortical brain regions that are associated with sustained attention, self-control, compassion and interoceptive perception. The increased grey matter volume in these attention and self-control mediating regions suggests use-dependent enlargement with regular practice of this meditation.
An Intelligent System for Detection of User Behavior in Internet Banking
Security and trust-building are the first steps toward development in both real and virtual societies, and internet-based development is inevitable. The increasing penetration of technology in internet banking, and its contribution to banking profitability and prosperity, requires that satisfied customers be turned into loyal ones. Currently, a large number of cyber attacks target online banking systems, and these attacks are considered a significant security threat. Banks or customers might become victims of the most complicated financial crime, namely internet fraud. This study develops an intelligent system for detecting abnormal user behavior in online banking. Since user behavior involves uncertainty, the system is based on fuzzy theory, which enables it to identify user behaviors and categorize suspicious ones with various levels of intensity. The performance of the fuzzy expert system was evaluated using a receiver operating characteristic curve, yielding an accuracy of 94%. This expert system shows promise for improving the security and quality of e-banking services.
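A minimal sketch of the fuzzy-scoring idea may help make this concrete. The membership functions, features, and rules below are invented for illustration; the paper's actual rule base and inputs are not reproduced here:

```python
# Toy fuzzy scorer for a banking session: two hypothetical features
# (transaction-amount deviation and login-hour oddity, both normalized)
# feed two simple rules whose strengths are combined into a crisp score.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suspicion(amount_dev, hour_oddity):
    # Rule 1: IF amount deviation is high AND hour is odd THEN suspicion high.
    high = min(tri(amount_dev, 0.4, 1.0, 1.6), tri(hour_oddity, 0.4, 1.0, 1.6))
    # Rule 2: IF amount deviation is low THEN suspicion low.
    low = tri(amount_dev, -0.6, 0.0, 0.6)
    # Crisp score in [0, 1]: relative strength of the "high" rule.
    return high / (high + low) if (high + low) > 0 else 0.5

print(suspicion(0.9, 0.9))  # near 1: suspicious session
print(suspicion(0.1, 0.1))  # near 0: normal session
```

The graded output is what lets such a system rank suspicious behaviors by intensity instead of issuing a binary alarm.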
Adult hippocampal neurogenesis and its role in Alzheimer's disease
The hippocampus, a brain area critical for learning and memory, is especially vulnerable to damage at early stages of Alzheimer's disease (AD). Emerging evidence has indicated that altered neurogenesis in the adult hippocampus represents an early critical event in the course of AD. Although causal links have not been established, a variety of key molecules involved in AD pathogenesis have been shown to impact new neuron generation, either positively or negatively. From a functional point of view, hippocampal neurogenesis plays an important role in structural plasticity and network maintenance. Therefore, dysfunctional neurogenesis resulting from early subtle disease manifestations may in turn exacerbate neuronal vulnerability to AD and contribute to memory impairment, whereas enhanced neurogenesis may be a compensatory response and represent an endogenous brain repair mechanism. Here we review recent findings on alterations of neurogenesis associated with pathogenesis of AD, and we discuss the potential of neurogenesis-based diagnostics and therapeutic strategies for AD.
Fast Feature Extraction with CNNs with Pooling Layers
In recent years, many publications have shown that convolutional neural network based features can outperform engineered features. However, little effort has been made so far to extract local features efficiently for a whole image. In this paper, we present an approach to compute patch-based local feature descriptors efficiently, in the presence of pooling and striding layers, for whole images at once. Our approach is generic and can be applied to nearly all existing network architectures. This includes networks for all local feature extraction tasks like camera calibration, patch matching, optical flow estimation and stereo matching. In addition, our approach can be applied to other patch-based approaches like sliding window object detection and recognition. We complete our paper with a speed benchmark of popular CNN based feature extraction approaches applied to a whole image, with and without our speedup, and example code (for Torch) that shows how an arbitrary CNN architecture can be easily converted by our approach.
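A small sketch of the bookkeeping that makes dense extraction non-trivial in the presence of striding and pooling: with layer strides s_i, a single forward pass only produces outputs on a grid spaced by the product of the strides, so descriptors for every pixel require one shifted pass per offset in that grid. The layer strides below are illustrative, not from the paper:

```python
# With strides s_i per layer, outputs land on a grid with spacing equal to
# the product of the strides ("effective stride"); covering every patch
# position requires one shifted input per (dy, dx) offset in that grid.
from functools import reduce

def effective_stride(strides):
    return reduce(lambda a, b: a * b, strides, 1)

def required_shifts(strides):
    s = effective_stride(strides)
    return [(dy, dx) for dy in range(s) for dx in range(s)]

strides = [2, 2]                      # e.g. two 2x2 pooling layers
print(effective_stride(strides))      # 4
print(len(required_shifts(strides)))  # 16 shifted forward passes
```

Reusing shared computation across these shifted passes, rather than running the network per patch, is the kind of saving such whole-image approaches target.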
NoSta-D Named Entity Annotation for German: Guidelines and Dataset
We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.
Implicit Common-Mode Resonance in LC Oscillators
The performance of a differential <italic>LC</italic> oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, <inline-formula> <tex-math notation="LaTeX">$Q$ </tex-math></inline-formula>-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In this paper, we demonstrate that additional inductors are not strictly necessary by showing that common-mode resonance can be obtained using a single tank. We present an NMOS architecture that uses a single differential inductor and a CMOS design that uses a single transformer. Prototypes are presented that achieve figures of merit of 192 and 195 dBc/Hz, respectively.
Adaptive routing with end-to-end feedback: distributed learning and geometric approaches
Minimal delay routing is a fundamental task in networks. Since delays depend on the (potentially unpredictable) traffic distribution, online delay optimization can be quite challenging. While uncertainty about the current network delays may make the current routing choices sub-optimal, the algorithm can nevertheless try to learn the traffic patterns and keep adapting its choice of routing paths so as to perform nearly as well as the best static path. This online shortest path problem is a special case of online linear optimization, a problem in which an online algorithm must choose, in each round, a strategy from some compact set S ⊆ Rd so as to try to minimize a linear cost function which is only revealed at the end of the round. Kalai and Vempala [4] gave an algorithm for such problems in the transparent feedback model, where the entire cost function is revealed at the end of the round. Here we present an algorithm for online linear optimization in the more challenging opaque feedback model, in which only the cost of the chosen strategy is revealed at the end of the round. In the special case of shortest paths, opaque feedback corresponds to the notion that in each round the algorithm learns only the end-to-end cost of the chosen path, not the cost of every edge in the network. We also present a second algorithm for online shortest paths, which solves the shortest-path problem using a chain of online decision oracles, one at each node of the graph. This has several advantages over the online linear optimization approach. First, it is effective against an adaptive adversary, whereas our linear optimization algorithm assumes an oblivious adversary. Second, even in the case of an oblivious adversary, the second algorithm performs better than the first, as measured by their additive regret.
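The transparent-feedback baseline can be illustrated with a follow-the-perturbed-leader sketch in the spirit of Kalai and Vempala: each round, perturb the cumulative edge costs randomly and commit to the path that is cheapest under the perturbed totals. The two-path graph and the cost sequence below are illustrative, not from the paper:

```python
# Follow-the-perturbed-leader on a toy two-path network with transparent
# feedback (all edge costs revealed each round). The "bottom" path is
# consistently cheaper, so regret against the best static path stays small.
import random

random.seed(0)
paths = {"top": ["e1", "e2"], "bottom": ["e3", "e4"]}
cum = {e: 0.0 for e in ("e1", "e2", "e3", "e4")}
rounds, total_cost = 100, 0.0
for _ in range(rounds):
    costs = {"e1": 0.6, "e2": 0.6, "e3": 0.3, "e4": 0.3}
    perturbed = {e: cum[e] + random.uniform(0.0, 1.0) for e in cum}
    choice = min(paths, key=lambda p: sum(perturbed[e] for e in paths[p]))
    total_cost += sum(costs[e] for e in paths[choice])
    for e in cum:                    # transparent feedback: all costs revealed
        cum[e] += costs[e]

best_static = rounds * 0.6           # best fixed path costs 0.6 per round
print(total_cost - best_static)      # additive regret remains small
```

The opaque-feedback setting the paper addresses is harder than this sketch: only the chosen path's end-to-end cost would be observed, so the per-edge accumulator above could not be maintained directly.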
The X-tree : An Index Structure for High-Dimensional Data
In this paper, we propose a new method for indexing large amounts of point and spatial data in high-dimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap. Our experiments show that for high-dimensional data, the X-tree outperforms the well-known R*-tree and the TV-tree by up to two orders of magnitude.
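The quantity the overlap-minimizing split targets is easy to state concretely: the overlap volume of two d-dimensional minimum bounding rectangles, which is nonzero whenever the boxes intersect on every axis. A minimal sketch (the box coordinates are illustrative):

```python
# Overlap volume of two axis-aligned bounding boxes, the quantity an
# overlap-minimizing directory split tries to keep small: any point query
# falling inside this region must descend into both subtrees.
def overlap_volume(box_a, box_b):
    """Boxes are lists of (lo, hi) intervals, one per dimension."""
    vol = 1.0
    for (alo, ahi), (blo, bhi) in zip(box_a, box_b):
        side = min(ahi, bhi) - max(alo, blo)
        if side <= 0:
            return 0.0           # disjoint on some axis: no overlap at all
        vol *= side
    return vol

for d in (2, 4, 16):
    a = [(0.0, 1.0)] * d
    b = [(0.1, 1.1)] * d         # sibling box shifted 10% on every axis
    print(d, overlap_volume(a, b))
```

When no split with low overlap exists, the X-tree's supernodes avoid the split altogether rather than pay the cost of searching multiple overlapping subtrees.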
General AI Challenge - Round One: Gradual Learning
The General AI Challenge is an initiative to encourage the wider artificial intelligence community to focus on important problems in building intelligent machines with more general scope than is currently possible. The challenge comprises multiple rounds, with the first round focusing on gradual learning, i.e. the ability to re-use already learned knowledge to efficiently learn to solve subsequent problems. In this article, we present details of the first round of the challenge, its inspiration and aims. We also outline a more formal description of the challenge and present a preliminary analysis of its curriculum, based on ideas from computational mechanics. We believe that such a formalism will allow for a more principled approach to investigating tasks in the challenge, building new curricula, and potentially improving subsequent challenge rounds.
Born in a watery commune
If you go back far enough, humans, frogs, bacteria and slime moulds share a common ancestor. But scientists can't agree what it was like, or even whether it was a single creature. John Whitfield reviews the evidence.
Automatic Segmentation of Hair in Images
Based on a multi-step process, an automatic hair segmentation method is created and tested on a database of 115 manually segmented hair images. By extracting various information components from an image, including background color, face position, hair color, skin color, and skin mask, a heuristic-based method is created for the detection and segmentation of hair that can detect hair with an accuracy of approximately 75% and with a false hair overestimation error of 34%. Furthermore, it is shown that downsampling the image to a face width of 25 px results in a 73% reduction in computation time with an insignificant change in detection accuracy.
Real-Time PID Control Strategy for Maglev Transportation System via Particle Swarm Optimization
This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority is indicated in comparison with PSO-PID in previous literature and conventional sliding-mode (SM) control strategies. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without chattering phenomena in SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.
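A hedged sketch of the PSO-based PID gain selection idea: each particle is a (Kp, Ki, Kd) triple scored by simulating a step response and integrating the absolute error. The plant model (a toy first-order system, not the paper's maglev dynamics), the gain bounds, and the PSO constants below are all illustrative choices:

```python
# PSO searching PID gains that minimize the integral-of-absolute-error of a
# unit step response on a toy first-order plant dy/dt = -y + u.
import random

random.seed(1)
BOUNDS = [(0.0, 5.0), (0.0, 5.0), (0.0, 1.0)]   # (Kp, Ki, Kd) search ranges

def step_cost(kp, ki, kd, steps=200, dt=0.05):
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                              # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)                       # forward-Euler plant update
        if abs(y) > 1e6:
            return 1e9                           # penalize unstable gain sets
        cost += abs(e) * dt
    return cost

swarm = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(15)]
vel = [[0.0] * 3 for _ in range(15)]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=lambda g: step_cost(*g))
for _ in range(30):
    for i, p in enumerate(swarm):
        for j, (lo, hi) in enumerate(BOUNDS):
            vel[i][j] = (0.7 * vel[i][j]
                         + 1.5 * random.random() * (pbest[i][j] - p[j])
                         + 1.5 * random.random() * (gbest[j] - p[j]))
            p[j] = min(hi, max(lo, p[j] + vel[i][j]))
        if step_cost(*p) < step_cost(*pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda g: step_cost(*g))

print([round(g, 2) for g in gbest], round(step_cost(*gbest), 3))
```

The real-time scheme in the paper evaluates candidate gains against the actual maglev dynamics and closes the loop online; this sketch only shows the offline search structure.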
Experimental Case Studies for Investigating E-Banking Phishing Techniques and Attack Strategies
Phishing is a form of electronic identity theft in which a combination of social engineering and Web site spoofing techniques is used to trick a user into revealing confidential information with economic value. The problem with social engineering attacks is that there is no single solution to eliminate them completely, since they deal largely with the human factor. This is why implementing empirical experiments is crucial in order to study and analyze all malicious and deceptive phishing Web site attack techniques and strategies. In this paper, three different kinds of phishing experiment case studies have been conducted to shed light on social engineering attacks, such as phone phishing and phishing Web site attacks, for designing effective countermeasures and analyzing the efficiency of security awareness efforts about phishing threats. Results and reactions to our experiments show the importance of conducting phishing awareness training for all users and of doubling our efforts in developing phishing prevention techniques. Results also suggest that traditional standard security phishing indicators are not always effective for detecting phishing websites, and that alternative intelligent phishing detection approaches are needed.
Are topical fl uorides effective for treating incipient carious lesions ? A systematic review and meta-analysis
Background. This systematic review and meta-analysis (Tathiane Larissa Lenzi, MSc, PhD; Anelise Fernandes Montagner, MSc, PhD; Fabio Zovico Maxnuck Soares, PhD; Rachel de Oliveira Rocha, MSc, PhD) evaluated the effectiveness of professional topical fluoride application (gels or varnishes) on the reversal treatment of incipient enamel carious lesions in primary or permanent teeth.
Exploring the emotional dynamics of subclinically depressed individuals with and without anhedonia
Reverse Engineering of Embedded Software Using Syntactic Pattern Recognition
When a secure component executes sensitive operations, the information carried by its power consumption can be used to recover secret information. Many different techniques have been developed to recover this secret, but only a few of them focus on recovering the executed code itself. Indeed, the code knowledge acquired through this step of Simple Power Analysis (SPA) can help to identify implementation weaknesses and to improve further kinds of attacks. In this paper we present a new approach that improves SPA, based on a pattern recognition methodology, which can be used to automatically identify the processed instructions that leak through power consumption. We first perform a geometrical classification with chosen instructions to enable the automatic identification of any sequence of instructions. Such an analysis is used to reverse-engineer general-purpose code executions on a recent secure component.
Effect of carbohydrate restriction in patients with hyperinsulinemic hypoglycemia after Roux-en-Y gastric bypass.
BACKGROUND Hyperinsulinemic hypoglycemia is a rare complication of Roux-en-Y gastric bypass (RYGB) surgery. Meals with a high carbohydrate (carb) content and high glycemic index (GI) may provoke these hypoglycemic attacks. The aim of this study is to assess the effects of reducing meal carb content and GI on glycemic responses in patients with post-RYGB hypoglycemia. METHODS Fourteen patients with post-RYGB hypoglycemia underwent two meal tests: a mixed meal test (MMT) with a carb content of 30 g and a meal test with the low GI supplement, Glucerna SR 1.5® (Glucerna meal test (GMT)). Plasma glucose and serum insulin levels were measured for a period of 6 h. RESULTS Peak glucose levels were reached at T30 during GMT and at T60 during MMT, and they were 1.5 ± 0.3 mmol/L lower during GMT than during MMT (7.5 ± 0.4 vs 9.0 ± 0.4 mmol/L, P < 0.005). GMT induced the most rapid rise in plasma insulin: at T30, plasma insulin was 30.7 ± 8.5 mU/L higher during GMT than during MMT (P < 0.005). None of the carb-restricted meals induced post-prandial hypoglycemia. CONCLUSION A 30-g carb-restricted meal may help to prevent post-prandial hypoglycemia in patients with post-RYGB hypoglycemia. The use of a liquid, low GI supplement offers no additional advantage.
Digit Recognition via Unsupervised Learning
We present the results of several unsupervised algorithms tested on the MNIST database, as well as techniques we used to improve the classification accuracy. We find that the spiking neural network outperforms k-means clustering and reaches the same level as the supervised SVM. We then discuss several inherent issues of unsupervised methods for the handwritten digit classification problem and propose several methods to further improve the accuracy.
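The cluster-then-label pipeline such comparisons rest on can be sketched on toy data: cluster unlabeled points with k-means, then name each cluster by the majority true label of its members. The two synthetic 1-D "digit" clusters below stand in for MNIST; they are invented for illustration:

```python
# k-means (Lloyd's algorithm) on two well-separated 1-D clusters, followed
# by majority-label naming of each cluster and an accuracy check.
import random
from collections import Counter

random.seed(0)
data = [(random.gauss(0.0, 0.3), "0") for _ in range(50)] + \
       [(random.gauss(5.0, 0.3), "1") for _ in range(50)]
centers = [-1.0, 6.0]                      # deliberately poor initial centers

for _ in range(10):                        # Lloyd iterations
    assign = [min(range(2), key=lambda i: abs(x - centers[i]))
              for x, _ in data]
    for i in range(2):
        members = [x for (x, _), a in zip(data, assign) if a == i]
        if members:
            centers[i] = sum(members) / len(members)

# Name each cluster by the majority label of its members, then score accuracy.
names = {}
for i in range(2):
    labels = [lab for (_, lab), a in zip(data, assign) if a == i]
    names[i] = Counter(labels).most_common(1)[0][0]
acc = sum(names[a] == lab for (_, lab), a in zip(data, assign)) / len(data)
print(acc)  # 1.0 on this well-separated toy data
```

On real MNIST the clusters overlap heavily, which is one of the inherent issues the abstract refers to: majority labeling breaks down when a cluster mixes several digit classes.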
A Review of Adaptive Line Enhancers for Noise Cancellation
This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.
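The core LMS-based ALE structure reviewed above fits in a few lines: the filter predicts the current sample from delayed samples, so it enhances the narrowband (periodic) component while broadband noise remains unpredicted. The step size, delay, and filter length below are illustrative choices:

```python
# LMS adaptive line enhancer: a tone in white noise is predicted from a
# delayed tap vector; the filter output converges toward the pure tone.
import math
import random

random.seed(0)
N, L, delta, mu = 4000, 16, 1, 0.01       # samples, taps, delay, step size
x = [math.sin(0.2 * math.pi * n) + random.gauss(0.0, 0.5) for n in range(N)]

w = [0.0] * L
out = []
for n in range(L + delta, N):
    ref = x[n - delta - L + 1 : n - delta + 1][::-1]   # delayed tap vector
    y = sum(wi * ri for wi, ri in zip(w, ref))          # filter output
    e = x[n] - y                                        # prediction error
    w = [wi + mu * e * ri for wi, ri in zip(w, ref)]    # LMS weight update
    out.append(y)

# Compare enhanced output vs. noisy input against the clean tone (tail only,
# after the filter has converged).
clean = [math.sin(0.2 * math.pi * n) for n in range(N)]
tail = range(3000, N)
err_out = sum((out[n - L - delta] - clean[n]) ** 2 for n in tail) / len(tail)
err_in = sum((x[n] - clean[n]) ** 2 for n in tail) / len(tail)
print(err_out < err_in)  # the enhancer output is closer to the pure tone
```

RLS and affine-projection variants discussed in the review replace the weight update line to trade computational cost for faster convergence.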
Progress in Silicon Single-Photon Avalanche Diodes
Silicon single-photon avalanche diodes (SPADs) are nowadays a solid-state alternative to photomultiplier tubes (PMTs) in single-photon counting (SPC) and time-correlated single-photon counting (TCSPC) over the visible spectral range up to 1-mum wavelength. SPADs implemented in planar technology compatible with CMOS circuits offer typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency, since they do not rely on electron emission in vacuum from a photocathode as do PMTs, but instead on the internal photoelectric effect. However, PMTs offer much wider sensitive area, which greatly simplifies the design of optical systems; they also attain remarkable performance at high counting rate, and offer picosecond timing resolution with microchannel plate models. In order to make SPAD detectors more competitive in a broader range of SPC and TCSPC applications, it is necessary to face several issues in the semiconductor device design and technology. Such issues will be discussed in the context of the two possible approaches to such a challenge: employing a standard industrial high-voltage CMOS technology or developing a dedicated CMOS-compatible technology. Advances recently attained in the development of SPAD detectors will be outlined and discussed with reference to both single-element detectors and integrated detector arrays.
Improving Data Governance in Large Organizations through Ontology and Linked Data
In the past decade, the role of data has increased exponentially, from something that is queried or reported on to a true corporate asset. The same time period has also seen marked growth in corporate structural complexity. This combination has led to information management challenges, as data moving across a multitude of systems is more likely to impact dependent processes and systems should something go wrong or be changed. Many enterprise data projects are faced with low success rates and are consequently subject to high amounts of scrutiny, as senior leadership struggles to identify return on investment. While there are many tools and methods to increase a company's ability to govern data, this research is based on the premise that you cannot govern what you do not know. This lack of awareness of the corporate data landscape impacts the ability to govern data, which in turn impacts overall data quality within organizations. This paper proposes tools and techniques for companies to better gain an awareness of the landscape of their data, processes, and organizational attributes through the use of linked data, via the Resource Description Framework (RDF) and ontology. The outcome of adopting such techniques is an increased level of data awareness within the organization, resulting in an improved ability to govern corporate data assets and, in turn, increased data quality.
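A minimal illustration of the linked-data idea, in pure Python standing in for an RDF store: represent system-to-system data flows as subject-predicate-object triples and answer the "what is impacted if this system changes?" governance question by transitive traversal. The system names and the `feeds` predicate are hypothetical:

```python
# Subject-predicate-object triples describing data lineage between systems,
# queried transitively for downstream impact analysis.
triples = [
    ("crm_db", "feeds", "billing_etl"),
    ("billing_etl", "feeds", "finance_mart"),
    ("finance_mart", "feeds", "exec_dashboard"),
    ("hr_db", "feeds", "payroll"),
]

def impacted(system):
    """All downstream systems reachable from `system` via 'feeds' edges."""
    found, frontier = set(), [system]
    while frontier:
        s = frontier.pop()
        for subj, pred, obj in triples:
            if subj == s and pred == "feeds" and obj not in found:
                found.add(obj)
                frontier.append(obj)
    return found

print(sorted(impacted("crm_db")))
# ['billing_etl', 'exec_dashboard', 'finance_mart']
```

In a real deployment the triples would live in an RDF store with an ontology constraining the predicates, and the traversal would be a SPARQL property-path query rather than a hand-rolled loop.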
Homeopathic medical practice: Long-term results of a cohort study with 3981 patients
BACKGROUND Very little is known about the range of diagnoses, course of treatment, and long-term outcome in patients who choose to receive homeopathic medical treatment. We investigated homeopathic practice in an industrialized country under everyday conditions. METHODS In a prospective, multicentre cohort study of 103 primary care practices with additional specialisation in homeopathy in Germany and Switzerland, data from all patients (age > 1 year) consulting the physician for the first time were recorded. The main outcome measures were patient and physician assessments (numeric rating scales from 0 to 10) and quality of life at baseline and after 3, 12, and 24 months. RESULTS A total of 3,981 patients were studied, including 2,851 adults (29% men, mean age 42.5 +/- 13.1 years; 71% women, 39.9 +/- 12.4 years) and 1,130 children (52% boys, 6.5 +/- 3.9 years; 48% girls, 7.0 +/- 4.3 years). Ninety-seven percent of all diagnoses were chronic, with an average duration of 8.8 +/- 8 years. The most frequent diagnoses were allergic rhinitis in men, headache in women, and atopic dermatitis in children. Disease severity decreased significantly (p < 0.001) between baseline and 24 months (adults from 6.2 +/- 1.7 to 3.0 +/- 2.2; children from 6.1 +/- 1.8 to 2.2 +/- 1.9). Physicians' assessments yielded similar results. For adults and young children, major improvements were observed in quality of life, whereas no changes were seen in adolescents. Younger age and more severe disease at baseline were factors predictive of better therapeutic success. CONCLUSION Disease severity and quality of life demonstrated marked and sustained improvements following the homeopathic treatment period. Our findings indicate that homeopathic medical therapy may play a beneficial role in the long-term care of patients with chronic diseases.
AVA: Adjective-Verb-Adverb Combinations for Sentiment Analysis
Most research on determining the strength of subjective expressions in a sentence or document uses single, specific parts of speech such as adjectives, adverbs, or verbs. To date, almost no research covers the development of a single comprehensive framework that analyzes sentiment taking all three into account. The authors propose the AVA (adjective verb adverb) framework for identifying opinions on any given topic. In AVA, a user can select any topic t of interest and any document d. AVA will return a score measuring the sentiment that d expresses toward topic t. The score is expressed on a scale from –1 (maximally negative) to +1 (maximally positive).
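A hypothetical sketch of the adjective-verb-adverb combination idea. The word scores and modifier weights below are invented for illustration and are not AVA's actual lexicon; the point is only that adverbs scale or flip the polarity of the adjective or verb they modify, with the result clamped to the framework's [-1, +1] scale:

```python
# Toy lexicon: base polarities for adjectives/verbs, multiplicative weights
# for adverbs (a negative weight flips polarity, e.g. "hardly").
ADJ = {"good": 0.7, "terrible": -0.8}
VERB = {"improve": 0.5, "fail": -0.6}
ADV = {"very": 1.4, "slightly": 0.5, "hardly": -0.3}

def score(adverb, head, lexicon):
    """Scale the head word's polarity by its adverb, clamped to [-1, +1]."""
    raw = lexicon[head] * ADV.get(adverb, 1.0)
    return max(-1.0, min(1.0, raw))

print(score("very", "good", ADJ) > ADJ["good"])  # True: intensified
print(score("hardly", "improve", VERB) < 0)      # True: polarity flipped
print(score("very", "terrible", ADJ))            # -1.0 (clamped)
```

Aggregating such word-level scores over the sentences mentioning topic t would then yield the document-level score on the [-1, +1] scale.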
Importance of Intrusion Detection Systems (IDS)
Intruders spread across the Internet have become a major threat. Researchers have proposed a number of techniques, such as firewalls and encryption, to prevent such penetration and protect the computing infrastructure, but despite this, intruders have still managed to penetrate computers. Intrusion detection systems (IDSs) have therefore attracted much research attention; an IDS monitors computer resources and reports on any anomalous or unusual activity patterns. The aim of this paper is to explain the stages in the evolution of the IDS concept and its importance to researchers and to research, security, and military centres; to examine the importance of intrusion detection systems, their categories and classifications; and to discuss where an IDS can be placed to reduce the risk to the network.
Assessment of myocardial washout of Tc-99m-sestamibi in patients with chronic heart failure: comparison with normal control.
BACKGROUND In contrast to 201TlCl, 99mTc-sestamibi shows very slow myocardial clearance after its initial myocardial uptake. In the present study, myocardial washout of 99mTc-sestamibi was calculated in patients with non-ischemic chronic heart failure (CHF) and compared with biventricular parameters obtained from first-pass and ECG-gated myocardial perfusion SPECT data. METHODS AND RESULTS After administration of 99mTc-sestamibi, 25 patients with CHF and 8 normal controls (NC) were examined by ECG-gated myocardial perfusion SPECT and planar data acquisition in the early and delayed (3-hour interval) phases. Left ventricular ejection fraction (LVEF, %), peak filling rate (PFR, sec(-1)), end-diastolic volume (LVEDV, ml) and end-systolic volume (LVESV, ml) were automatically calculated from the ECG-gated SPECT data. Myocardial washout rates over 3 hours were calculated from the early and delayed planar images. Myocardial washout rates in the CHF group (39.6+/-5.2%) were significantly higher than those in the NC group (31.2+/-5.5%, p < 0.01). The myocardial washout rates for the 33 subjects showed significant correlations with LVEF (r = -0.61, p < 0.001), PFR (r = -0.47, p < 0.01), LVEDV (r = 0.45, p < 0.01) and LVESV (r = 0.48, p < 0.01). CONCLUSION The myocardial washout rate of 99mTc-sestamibi is considered to be a novel marker for the diagnosis of myocardial damage in patients with chronic heart failure.
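A washout rate of this kind is typically computed as the percentage decrease in myocardial counts between the early and delayed acquisitions, (early − delayed) / early × 100%, with decay-corrected counts assumed. The count values below are invented for illustration, chosen to echo the CHF group mean:

```python
# Percentage washout between early and delayed planar acquisitions
# (counts assumed decay-corrected; values are illustrative only).
def washout_rate(early_counts, delayed_counts):
    return 100.0 * (early_counts - delayed_counts) / early_counts

print(round(washout_rate(1000.0, 604.0), 1))  # 39.6
```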
Exploring the application of process mining to support self-regulated learning: An initial analysis with video lectures
Self-regulated learning involves students taking responsibility for their own learning. Self-regulated learners usually adopt a variety of learning strategies and behaviors, such as performing forethought-performance-reflection cycles or working regularly and in sequence over time, that eventually enable them to achieve more significant and longer-lasting learning. In this paper, we explore whether these particular behaviors and strategies can be analyzed through the application of process mining techniques, taking as data the events registered during the performance of learning activities. The discovery of the underlying processes followed by students can open new approaches to studying the real self-regulated strategies students use. The paper reviews the techniques and tools available to perform process mining of events related to self-regulated learning and describes some initial works in this area. Furthermore, as an initial empirical study, we analyze the process followed by students regarding the visualization of videos provided in a first-year engineering subject. The obtained results are studied taking into account the grades obtained by the students. The results show that the students who obtained the best grades follow more varied routes than the students who obtained the worst grades. In addition, the best students are more regular over time regarding weekly video visualization, mainly at the beginning of the term, while the worst students visualize the videos mainly in the second part of the term.
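The first step of most process-mining discovery techniques is building the directly-follows relation from per-student event traces, which can be sketched in a few lines. The toy traces below are invented, echoing the video-viewing logs analyzed in the study:

```python
# Directly-follows graph (DFG) mined from per-student event traces: count
# how often activity a is immediately followed by activity b.
from collections import Counter

traces = {
    "student_a": ["video1", "video2", "quiz", "video3"],
    "student_b": ["video1", "quiz", "video2", "quiz"],
    "student_c": ["video1", "video2", "video3", "quiz"],
}

dfg = Counter()
for events in traces.values():
    for a, b in zip(events, events[1:]):
        dfg[(a, b)] += 1

print(dfg[("video1", "video2")])  # 2 traces go video1 -> video2 directly
```

Discovery algorithms then turn this relation into a process model; comparing the models mined from high-grade and low-grade cohorts is how route variety of the kind reported above can be made visible.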
Anti-inflammatory properties of curcumin, a major constituent of Curcuma longa: a review of preclinical and clinical research.
Curcuma longa (turmeric) has a long history of use in Ayurvedic medicine as a treatment for inflammatory conditions. Turmeric constituents include the three curcuminoids: curcumin (diferuloylmethane; the primary constituent and the one responsible for its vibrant yellow color), demethoxycurcumin, and bisdemethoxycurcumin, as well as volatile oils (tumerone, atlantone, and zingiberone), sugars, proteins, and resins. While numerous pharmacological activities, including antioxidant and antimicrobial properties, have been attributed to curcumin, this article focuses on curcumin's anti-inflammatory properties and its use for inflammatory conditions. Curcumin's effect on cancer (from an anti-inflammatory perspective) will also be discussed; however, an exhaustive review of its many anticancer mechanisms is outside the scope of this article. Research has shown curcumin to be a highly pleiotropic molecule capable of interacting with numerous molecular targets involved in inflammation. Based on early cell culture and animal research, clinical trials indicate curcumin may have potential as a therapeutic agent in diseases such as inflammatory bowel disease, pancreatitis, arthritis, and chronic anterior uveitis, as well as certain types of cancer. Because of curcumin's rapid plasma clearance and conjugation, its therapeutic usefulness has been somewhat limited, leading researchers to investigate the benefits of complexing curcumin with other substances to increase systemic bioavailability. Numerous in-progress clinical trials should provide an even deeper understanding of the mechanisms and therapeutic potential of curcumin.
Automated vehicle system architecture with performance assessment
This paper proposes a reference architecture to increase the reliability and robustness of an automated vehicle. The architecture exploits the benefits arising from the interdependencies of the system and provides self-awareness. Performance Assessment units attached to subsystems quantify the reliability of their operation and return performance values. The Environment Condition Assessment, which is another important novelty of the architecture, informs augmented sensors about current sensing conditions. Utilizing environment conditions and performance values for subsequent centralized integrity checks allows algorithms to adapt to current driving conditions and thereby increase their robustness. We demonstrate the benefit of the approach with the example of false positive object detection and tracking, where the detection of a ghost object is resolved in centralized performance assessment using a Bayesian network.
On Expressing Vague Quantification and Scalar Implicatures in the Logic of Partial Information
In this paper, we use the logic of partial information to re-examine some early analyses of vague quantifiers in French such as quelques, peu, beaucoup that are found in particular in the work of O. Ducrot [2]. Our approach is based on the paradigm offered by the logical formalization of the sorites paradox. We claim that this paradox offers a general scheme along which the argumentation structure of all vague quantifiers in French may be expressed. We offer a variational principle approximating Grice's maxims in the case of vague quantification.
Corporate Brand Trust as a Mediator in the Relationship between Consumer Perception of CSR , Corporate Hypocrisy , and Corporate Reputation
The aim of this research is to investigate the relationship between consumer perception of Corporate Social Responsibility (CSR), corporate brand trust, corporate hypocrisy, and corporate reputation. Based on one-to-one interviews using a structured questionnaire with 560 consumers in South Korea, the proposed model was estimated by structural equation modeling analysis. The model suggests that consumer perception of CSR influences consumer attitudes toward a corporation (i.e., perceived corporate hypocrisy and corporate reputation) by developing corporate brand trust, which in turn further enhances corporate reputation while decreasing corporate hypocrisy. The findings of our study demonstrate that consumer perception of CSR is an antecedent of corporate brand trust, which fully mediates the relationship between consumer perception of CSR and corporate reputation. In addition, corporate brand trust plays the role of a partial mediator in the relationship between consumer perception of CSR and corporate hypocrisy. These results imply that to better understand the relationship between consumer perception of CSR and consumer attitudes toward a corporation, it is necessary to consider corporate brand trust as an important mediating variable. The theoretical and practical implications of this study are discussed, together with its limitations and potential for future research. (Open access: Sustainability 2015, 7, 3684)
Vascular endothelial growth factor (VEGF) improves the sensitivity of CA125 for differentiation of epithelial ovarian cancers from ovarian cysts
The present study aimed to compare the diagnostic value of preoperative serum levels of CA125 and vascular endothelial growth factor (VEGF), alone and in combination, for differentiating early stage epithelial ovarian cancers from ovarian cysts. Preoperative and postoperative serum levels of CA125 and VEGF in 30 patients with epithelial ovarian cancers (cancer arm) were compared with those of 30 patients with benign ovarian cysts (cyst arm). Initial eligibility required an ovarian cystic or solid mass detected by transvaginal ultrasonography at the hospital clinic. Included patients had to have localized pelvic disease and no clinical or imaging evidence of extrapelvic disease, ascites, or distant metastasis. Initial exclusion criteria included a prior history of malignancy or any type of cancer treatment. After surgery, only patients with a pathologic diagnosis of early stage epithelial ovarian cancer or ovarian cyst were included. Preoperative serum levels of CA125 (P < 0.001) and VEGF (P < 0.001) were significantly higher in the cancer arm than in the cyst arm. In addition, postoperative serum levels of CA125 (P < 0.001) and VEGF (P < 0.001) in the cancer arm were significantly decreased compared with preoperative levels. At the usual clinical cut-off levels of 17.6 pg/ml for VEGF and 35 U/ml for CA125, the sensitivity and specificity for detecting early stage epithelial ovarian cancer were 90 and 57% for VEGF and 66.6 and 73% for CA125, respectively. At 100% specificity for each test, the addition of VEGF to CA125 increased the sensitivity of early ovarian cancer detection from 60 to 73.3%. This study indicates that adding the VEGF serum value improves the sensitivity of CA125 for detecting early stage epithelial ovarian cancers and differentiating these neoplasms from ovarian cysts.
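The "either marker positive" combination rule implied by the abstract can be sketched in a few lines. The patient-level numbers below are illustrative, not the study's raw data; only the reported 60% → 73.3% sensitivity gain for 30 cancer patients is taken from the abstract.

```python
def sensitivity(tp, fn):
    """Fraction of diseased patients correctly flagged: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy patients correctly cleared: TN / (TN + FP)."""
    return tn / (tn + fp)

def combine_either_positive(test_a, test_b):
    """Call a case positive if either marker exceeds its cut-off.

    test_a, test_b: per-patient booleans (True = positive result).
    A parallel combination can only raise sensitivity, at the potential
    cost of specificity -- hence the study's use of 100%-specificity
    cut-offs before combining.
    """
    return [a or b for a, b in zip(test_a, test_b)]

# Illustrative: 30 cancer patients, CA125 alone flags 18 (60%),
# and VEGF flags 4 additional patients that CA125 misses.
ca125 = [True] * 18 + [False] * 12
vegf  = [True] * 22 + [False] * 8
combined = combine_either_positive(ca125, vegf)
print(sum(ca125) / 30)     # 0.6 -> 60% sensitivity for CA125 alone
print(sum(combined) / 30)  # ~0.733 -> 73.3% combined sensitivity
```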
Injectable neurotoxins and fillers: there is no free lunch.
Injection of neurotoxins and filling agents for the treatment of facial aesthetics has increased dramatically during the past few decades due to an increased interest in noninvasive aesthetic improvements. An aging but still youth-oriented population expects effective treatments with minimal recovery time and limited risk of complications. Injectable neurotoxins and soft tissue stimulators and fillers have filled this niche of "lunch-time" procedures. As demand for these procedures has increased, supply has followed with more noncore cosmetic specialty physicians, as well as unsupervised ancillary staff, becoming providers and advertising them as easy fixes. Despite an excellent record of safety and efficacy demonstrated in scores of published studies, injectable agents do carry risks of complications. These procedures require a physician with in-depth knowledge of facial anatomy and injection techniques to ensure patient safety and satisfaction. In general, adverse events are preventable and technique-dependent. Although most adverse events are minor and temporary, more serious complications can occur. The recognition, management, and treatment of poor outcomes are as important as obtaining the best aesthetic results. This review addresses important considerations regarding the complications of injectable neurotoxins and fillers used for "lunch-time" injectable procedures.
ISC: An Iterative Social Based Classifier for Adult Account Detection on Twitter
The spread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult accounts) is of significant value for protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics of Twitter content. In this paper, we formulate adult account detection as a graph-based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. Because adult Twitter accounts are mostly connected with normal accounts and post many normal entities, the graph is full of noisy links, and existing graph-based classification techniques do not work well on it. To address this problem, we propose an iterative social based classifier (ISC), a novel graph-based classification technique resistant to noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC achieves satisfactory performance in adult account detection, significantly outperforming existing techniques.
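The abstract does not give ISC's exact update rule, but the general family it belongs to (iterative classification by propagating scores from a few labeled seed accounts over noisy social links) can be sketched as follows; the damping factor and scores here are illustrative assumptions.

```python
def iterative_classify(adj, seeds, rounds=10, damping=0.85):
    """Generic iterative label propagation over a social graph.

    adj:   dict node -> list of neighbor nodes (noisy, undirected links)
    seeds: dict node -> fixed score in [0, 1] for a few labeled accounts
           (1.0 = adult, 0.0 = normal).
    Each round, an unlabeled node's score moves toward the mean score of
    its neighbors; damping < 1 limits how far any single round (and thus
    any noisy link) can pull a node.
    """
    scores = {n: seeds.get(n, 0.5) for n in adj}
    for _ in range(rounds):
        new = {}
        for n, nbrs in adj.items():
            if n in seeds:                    # labeled accounts stay fixed
                new[n] = seeds[n]
            elif nbrs:
                mean = sum(scores[m] for m in nbrs) / len(nbrs)
                new[n] = damping * mean + (1 - damping) * scores[n]
            else:
                new[n] = scores[n]
        scores = new
    return scores

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
out = iterative_classify(graph, seeds={"a": 1.0})
print(out["c"] > 0.5)  # "c" is pulled toward the adult seed via "b"
```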
GraphIE: A Graph-Based Framework for Information Extraction
Most modern Information Extraction (IE) systems are implemented as sequential taggers and focus on modelling local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing both local and nonlocal dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions and exploits the richer representation to improve word-level predictions. The framework is evaluated on three different tasks, namely social media, textual and visual information extraction. Results show that GraphIE outperforms a competitive baseline (BiLSTM+CRF) in all tasks by a significant margin.
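The core propagation idea (each graph convolution mixes a node's representation with its neighbors', so stacked layers let non-local context reach a word) can be illustrated with scalar features; the mixing weights below are illustrative scalars, not GraphIE's learned matrices, and the real model sits on top of a BiLSTM encoder.

```python
def graph_conv_layer(features, edges, w_self=0.5, w_nbr=0.5):
    """One simplified graph-convolution step on scalar node features.

    Each node's new value mixes its own feature with the mean of its
    neighbors'. After k stacked layers, information has traveled up to
    k hops -- the non-local context a sequential tagger would miss.
    """
    nbrs = {v: [] for v in features}
    for u, v in edges:                      # undirected edges
        nbrs[u].append(v)
        nbrs[v].append(u)
    out = {}
    for v, x in features.items():
        m = sum(features[u] for u in nbrs[v]) / len(nbrs[v]) if nbrs[v] else 0.0
        out[v] = w_self * x + w_nbr * m
    return out

# Signal starts at w1; after two layers it reaches w3, two hops away.
h = {"w1": 1.0, "w2": 0.0, "w3": 0.0}
edges = [("w1", "w2"), ("w2", "w3")]
h = graph_conv_layer(h, edges)
h = graph_conv_layer(h, edges)
print(h["w3"])  # 0.125 -- nonzero, carried over from w1
```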
Treatment of posttraumatic and focal osteoarthritic cartilage defects of the knee with autologous polymer-based three-dimensional chondrocyte grafts: 2-year clinical results
Autologous chondrocyte implantation (ACI) is an effective clinical procedure for the regeneration of articular cartilage defects. BioSeed-C is a second-generation ACI tissue engineering cartilage graft that is based on autologous chondrocytes embedded in a three-dimensional bioresorbable two-component gel-polymer scaffold. In the present prospective study, we evaluated the short-term to mid-term efficacy of BioSeed-C for the arthrotomic and arthroscopic treatment of posttraumatic and degenerative cartilage defects in a group of patients suffering from chronic posttraumatic and/or degenerative cartilage lesions of the knee. Clinical outcome was assessed in 40 patients with a 2-year clinical follow-up before implantation and at 3, 6, 12, and 24 months after implantation by using the modified Cincinnati Knee Rating System, the Lysholm score, the Knee injury and Osteoarthritis Outcome Score, and the current health assessment form (SF-36) of the International Knee Documentation Committee, as well as histological analysis of second-look biopsies. Significant improvement (p < 0.05) in the evaluated scores was observed at 1 and/or 2 years after implantation of BioSeed-C, and histological staining of the biopsies showed good integration of the graft and formation of a cartilaginous repair tissue. The Knee injury and Osteoarthritis Outcome Score showed significant improvement in the subclasses pain, other symptoms, and knee-related quality of life 2 years after implantation of BioSeed-C in focal osteoarthritic defects. The results suggest that implanting BioSeed-C is an effective treatment option for the regeneration of posttraumatic and/or osteoarthritic defects of the knee.
Association of PD-1, PD-1 ligands, and other features of the tumor immune microenvironment with response to anti-PD-1 therapy.
PURPOSE Immunomodulatory drugs differ in mechanism of action from directly cytotoxic cancer therapies. Identifying factors predicting clinical response could guide patient selection and therapeutic optimization. EXPERIMENTAL DESIGN Patients (N = 41) with melanoma, non-small cell lung carcinoma (NSCLC), renal cell carcinoma (RCC), colorectal carcinoma, or castration-resistant prostate cancer were treated on an early-phase trial of anti-PD-1 (nivolumab) at one institution and had evaluable pretreatment tumor specimens. Immunoarchitectural features, including PD-1, PD-L1, and PD-L2 expression, patterns of immune cell infiltration, and lymphocyte subpopulations, were assessed for interrelationships and potential correlations with clinical outcomes. RESULTS Membranous (cell surface) PD-L1 expression by tumor cells and immune infiltrates varied significantly by tumor type and was most abundant in melanoma, NSCLC, and RCC. In the overall cohort, PD-L1 expression was geographically associated with infiltrating immune cells (P < 0.001), although lymphocyte-rich regions were not always associated with PD-L1 expression. Expression of PD-L1 by tumor cells and immune infiltrates was significantly associated with expression of PD-1 on lymphocytes. PD-L2, the second ligand for PD-1, was associated with PD-L1 expression. Tumor cell PD-L1 expression correlated with objective response to anti-PD-1 therapy, when analyzing either the specimen obtained closest to therapy or the highest scoring sample among multiple biopsies from individual patients. These correlations were stronger than borderline associations of PD-1 expression or the presence of intratumoral immune cell infiltrates with response. CONCLUSIONS Tumor PD-L1 expression reflects an immune-active microenvironment and, while associated with other immunosuppressive molecules, including PD-1 and PD-L2, is the single factor most closely correlated with response to anti-PD-1 blockade. Clin Cancer Res; 20(19); 5064-74. ©2014 AACR.
Enhanced photoluminescence and solar cell performance via Lewis base passivation of organic-inorganic lead halide perovskites.
Organic-inorganic metal halide perovskites have recently emerged as a top contender to be used as an absorber material in highly efficient, low-cost photovoltaic devices. Solution-processed semiconductors tend to have a high density of defect states and exhibit a large degree of electronic disorder. Perovskites appear to go against this trend, and despite relatively little knowledge of the impact of electronic defects, certified solar-to-electrical power conversion efficiencies of up to 17.9% have been achieved. Here, through treatment of the crystal surfaces with the Lewis bases thiophene and pyridine, we demonstrate significantly reduced nonradiative electron-hole recombination within the CH(3)NH(3)PbI(3-x)Cl(x) perovskite, achieving photoluminescence lifetimes which are enhanced by nearly an order of magnitude, up to 2 μs. We propose that this is due to the electronic passivation of under-coordinated Pb atoms within the crystal. Through this method of Lewis base passivation, we achieve power conversion efficiencies for solution-processed planar heterojunction solar cells enhanced from 13% for the untreated solar cells to 15.3% and 16.5% for the thiophene and pyridine-treated solar cells, respectively.
Ice velocity determined using conventional and multiple-aperture InSAR
We combine conventional Interferometric Synthetic Aperture Radar (InSAR) and Multiple Aperture InSAR (MAI) to determine the ice surface velocity on the Langjokull and Hofsjokull ice caps in Iceland in 1994. This approach allows the 20 principal ice cap outlets to be fully resolved. We show that MAI leads to displacement estimates of finer resolution (15 versus 150 m) and superior precision (5 versus 15 cm) to those afforded by the alternative technique of speckle tracking. Using SAR data acquired in ascending and descending orbits, we show that ice flows within 15° of the direction of maximum surface slope across 66 % of the ice caps. It is therefore possible to determine ice displacement over the majority of the ice caps using a single SAR image pair, thereby reducing errors associated with temporal fluctuations in ice flow.
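The benefit of pairing the two techniques is that a single SAR pair yields two displacement components: conventional InSAR measures along the line of sight (across-track) and MAI along the azimuth (along-track) direction, so the horizontal displacement can be recovered by solving a 2×2 linear system. The geometry below is a simplified flat-earth illustration (a real retrieval also handles the vertical LOS component); the numbers are made up.

```python
def solve_2d_displacement(d_los, d_azi, los_vec, azi_vec):
    """Recover horizontal (east, north) displacement from one SAR pair.

    d_los: across-track (line-of-sight) displacement from conventional InSAR
    d_azi: along-track (azimuth) displacement from MAI
    los_vec, azi_vec: horizontal unit vectors of the two measurement
    directions. Solves [los_vec; azi_vec] @ [east, north] = [d_los, d_azi]
    by hand via the 2x2 determinant.
    """
    (a, b), (c, d) = los_vec, azi_vec
    det = a * d - b * c
    east = (d_los * d - b * d_azi) / det
    north = (a * d_azi - d_los * c) / det
    return east, north

# Illustrative descending pass: LOS points east, azimuth points south.
east, north = solve_2d_displacement(d_los=0.3, d_azi=-0.1,
                                    los_vec=(1.0, 0.0), azi_vec=(0.0, -1.0))
print(east, north)  # 0.3 m east, 0.1 m north (illustrative values)
```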
An efficient elastic distributed SDN controller for follow-me cloud
The Follow Me Cloud (FMC) concept has emerged as a promising technology that allows seamless migration of services according to the corresponding users' mobility. Meanwhile, Software Defined Networking (SDN) is a new paradigm that decouples the control and data planes of traditional networks and provides programmability and flexibility, allowing the network to dynamically adapt to changing traffic patterns and user demands. While SDN implementations are gaining momentum, the control plane still suffers from scalability and performance concerns in very large networks. In this paper, we address these scalability and performance issues by introducing a novel SDN/OpenFlow-based architecture and control-plane framework tailored for mobile cloud computing systems, and more specifically for FMC-based systems where mobile nodes and network services are subject to movements and migrations. In contrast to a centralized approach with a single SDN controller, our approach distributes the SDN/OpenFlow control plane over a two-level hierarchical architecture: a first level with a global controller (G-FMCC), and a second level with several local controllers (L-FMCCs). Thanks to our control-plane framework and the Network Function Virtualization (NFV) concept, the L-FMCCs are deployed on demand, where and when needed, depending on the global system load. Analytical results show that our solution ensures more efficient management of the control plane, maintains performance, and preserves network resources.
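The on-demand deployment decision can be sketched as a simple capacity rule: the global controller sizes the number of local controllers per region from the current load. This threshold rule and its numbers are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def plan_local_controllers(region_loads, capacity=1000, min_headroom=0.2):
    """Decide how many local controllers (L-FMCCs) each region needs.

    region_loads: dict region -> current flow-setup requests per second.
    Each controller handles `capacity` requests/s but is kept below
    (1 - min_headroom) utilization; every region gets at least one
    controller so local control-plane latency stays low.
    """
    usable = capacity * (1 - min_headroom)   # sustainable load per controller
    return {region: max(1, math.ceil(load / usable))
            for region, load in region_loads.items()}

plan = plan_local_controllers({"edge-1": 500, "edge-2": 2500})
print(plan)  # edge-1 fits in 1 controller; edge-2 needs ceil(2500/800) = 4
```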
Malware detection on Android smartphones using API class and machine learning
This paper proposes a new method to detect malware on Android smartphones using API (application programming interface) classes. We use machine learning to classify whether an application is benign or malware, and we compare the classification precision rates of the resulting models. This research uses 51 API package classes drawn from 16 API classes and employs cross-validation and percentage-split tests to classify benign and malware applications using the Random Forest, J48, and Support Vector Machine algorithms. We use 412 application samples in total (205 benign, 207 malware). The average classification precision obtained is 91.9%.
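The evaluation protocol (train on API-derived features, score with precision under k-fold cross-validation) can be sketched without any ML library. The toy data and the threshold "classifier" below are illustrative stand-ins for the paper's API-class features and its Random Forest / J48 / SVM models.

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def precision(y_true, y_pred, positive=1):
    """TP / (TP + FP): of the samples flagged as malware, how many are."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if tp + fp else 0.0

# Toy feature: count of sensitive API calls per app; 1 = malware.
x = [0, 1, 1, 2, 5, 6, 7, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]
scores = []
for train, test in k_fold_indices(len(x), k=4):
    cut = sum(x[i] for i in train) / len(train)       # "train" a threshold
    pred = [1 if x[i] > cut else 0 for i in test]
    scores.append(precision([y[i] for i in test], pred))
print(sum(scores) / len(scores))  # mean cross-validated precision
```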
The Impact of Corporate Social Responsibility on Investment Recommendations
Using a large sample of publicly traded US firms over 16 years, we investigate the impact of corporate social responsibility (CSR) strategies on security analysts' recommendations. Socially responsible firms receive more favorable recommendations in recent years relative to earlier ones, documenting a changing perception of the value of such strategies by the analysts. Moreover, we find that firms with higher visibility receive more favorable recommendations for their CSR strategies, and that analysts with more experience, broader CSR awareness, or more resources at their disposal are more likely to perceive the value of CSR strategies favorably. Our results document how CSR strategies can affect value creation in public equity markets through analyst recommendations.
Using deep learning for short text understanding
Classifying short texts into categories or clustering semantically related texts is challenging, and the importance of both tasks is growing with the rise of microblogging platforms, digital news feeds, and the like. Both can be accomplished with a deep neural network that produces compact binary representations of short texts and assigns the same category to texts with similar binary codes. Problems arise, however, when a short text carries little contextual information, which makes it difficult for the network to produce similar binary codes for semantically related texts. We propose to address this issue through semantic enrichment: the nouns and verbs in a short text are used to generate concepts and co-occurring words. The nouns generate concepts within the given short text, whereas the verbs prune any ambiguous context present in the text. The enriched text then passes through a deep neural network to produce a prediction label representing the short text's category.
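The final matching step, assigning a category by proximity of compact binary codes, reduces to nearest-neighbor search under Hamming distance. The codes below are hand-made examples standing in for network outputs, not produced by any model.

```python
def hamming(code_a, code_b):
    """Number of positions where two equal-length binary codes differ."""
    return sum(a != b for a, b in zip(code_a, code_b))

def nearest_category(code, labeled_codes):
    """Return the category of the labeled code closest in Hamming distance.

    If the network does its job, semantically related (enriched) short
    texts land on nearby binary codes, so nearest-code lookup recovers
    the category even for texts with sparse context.
    """
    return min(labeled_codes, key=lambda item: hamming(code, item[0]))[1]

labeled = [((1, 0, 1, 1, 0, 0), "sports"),
           ((0, 1, 0, 0, 1, 1), "politics")]
query = (1, 0, 1, 0, 0, 0)               # one bit from the sports code
print(nearest_category(query, labeled))  # sports
```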
Bureaucratic Culture and True-View Landscape Painting in the Late Joseon Period: Focusing on Officials of the Soron Faction
This study considers the true-view landscape paintings made by order of Confucian officials during the late Joseon period. The tradition of provincial officials taking excursions, that is hwanyu (宦遊), continued throughout the Joseon period. After the 17th century, many real-landscape paintings were produced as visual records of these excursions. This article focuses on the true-view landscape paintings commissioned by scholar-officials during the mid-18th century, when true-view landscapes were popular. The major paintings recording the hwanyu were generally ordered by provincial officials and produced by provincial painters. The origin of this type of painting was the Ten Scenic Spots of Hamheung City and the Ten Scenic Spots of Hamgyeong Province, commissioned by Nam Guman, the leader of the Soron (少論) faction and administrator of Hamgyeong Province in the late 17th century. This work, the result of Nam Guman's hwanyu, was a real landscape of political and symbolic significance aimed at developing and publicizing Hamgyeong Province. Scholar-officials of the Soron faction subsequently began producing real-landscape paintings that portrayed provincial landscapes and clearly showed the influence of the Ten Scenic Spots of Hamheung City and the Ten Scenic Spots of Hamgyeong Province. These paintings are characterized by their political and symbolic significance. Such works constitute a particular sub-genre within the true-view landscape painting of the 18th century, helping us to understand the various characteristics that comprise the genre as a whole. The Album of the Ten Scenic Spots of the East Sea, commissioned by a provincial governor of the Soron faction in 1748, reveals more personal taste and a purely artistic tendency focusing on the "Three Treasures" of poetry, calligraphy, and painting.
Jeong Seon, the master of true-view painting, also produced a true-view painting commissioned by Hong Gyeongbo, the provincial governor of Gyeonggi Province, to record an elegant boat trip taken as part of an inspection tour in 1742. This work, a record of an artistically refined scholar-official excursion, suggests the development of a new culture of commemorating a scholar-official's career in government office.
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples: modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. Inspired by them, we develop similar methods for the discrete (e.g., binary) domain that characterizes the features of malware. A specific extra challenge of malware is that adversarial examples must be generated in a way that preserves their malicious functionality. We introduce methods capable of generating functionality-preserving adversarial malware examples in the binary domain. Using the saddle-point formulation, we incorporate the adversarial examples into the training of models that are robust to them. We evaluate the effectiveness of our methods and others in the literature on a set of Portable Executable (PE) files. The comparison prompts our introduction of an online measure, computed during training, to assess the general expectation of robustness.
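A common way to preserve functionality in the binary domain is to allow only 0 → 1 feature flips: adding an unused import or dead code cannot break the malware, while removing a feature might. The greedy attack below against a linear detector is a simple stand-in for (not an implementation of) the paper's saddle-point formulation; the weights are made up.

```python
def adversarial_add_only(x, weights, bias, budget=3):
    """Greedy functionality-preserving evasion of a linear detector.

    x: binary feature vector (1 = feature present, e.g. an imported API).
    Only 0 -> 1 flips are allowed, so the sample keeps every feature it
    started with. Flips the absent features with the most negative
    weights (those pushing the score hardest toward 'benign') until the
    score drops below 0 or the flip budget is spent.
    """
    score = bias + sum(w for w, xi in zip(weights, x) if xi)
    adv = list(x)
    candidates = sorted((i for i, xi in enumerate(x) if xi == 0),
                        key=lambda i: weights[i])
    for i in candidates[:budget]:
        if score < 0 or weights[i] >= 0:   # already evading, or flip hurts
            break
        adv[i] = 1
        score += weights[i]
    return adv, score

x = [1, 1, 0, 0, 0]                  # detected malware (score > 0)
w = [2.0, 1.5, -1.0, -2.5, 0.5]      # illustrative detector weights
adv, score = adversarial_add_only(x, w, bias=-0.5)
print(score < 0)  # two added benign-looking features evade the detector
```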
Crowd disasters as systemic failures: analysis of the Love Parade disaster
Each year, crowd disasters happen in different areas of the world. How and why do such disasters happen? Are the fatalities caused by relentless behavior of people or a psychological state of panic that makes the crowd ‘go mad’? Or are they a tragic consequence of a breakdown of coordination? These and other questions are addressed, based on a qualitative analysis of publicly available videos and materials, which document the planning and organization of the Love Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our analysis reveals a number of misunderstandings that have widely spread. We also provide a new perspective on concepts such as ‘intentional pushing’, ‘mass panic’, ‘stampede’, and ‘crowd crushes’. The focus of our analysis is on the contributing causal factors and their mutual interdependencies, not on legal issues or the judgment of personal or institutional responsibilities. Video recordings show that people stumbled and piled up due to a ‘domino effect’, resulting from a phenomenon called ‘crowd turbulence’ or ‘crowd quake’. Crowd quakes are a typical reason for crowd disasters, to be distinguished from crowd disasters resulting from ‘mass panic’ or ‘crowd crushes’. In Duisburg, crowd turbulence was the consequence of amplifying feedback and cascading effects, which are typical for systemic instabilities. Accordingly, things can go terribly wrong in spite of no bad intentions from anyone. Comparing the incident in Duisburg with others, we give recommendations to help prevent future crowd disasters. In particular, we introduce a new scale to assess the criticality of conditions in the crowd. This may allow preventative measures to be taken earlier on. Furthermore, we discuss the merits and limitations of citizen science for public investigation, considering that today, almost every event is recorded and reflected in the World Wide Web.
On temperatures and tool wear in machining hypereutectic Al–Si alloys with vortex-tube cooling
This study investigates dry machining of hypereutectic silicon–aluminum alloys assisted with vortex-tube (VT) cooling. The objective is to reduce cutting temperatures and tool wear by enhanced heat dissipation through the chilled air generated by a VT. A machining experiment, cutting mechanics analysis, and temperature simulations are employed to (1) model the heat transfer of a cutting tool system with VT cooling applied, (2) explore effects of cooling settings and machining parameters on the cooling efficiency, and (3) evaluate VT cooling effects on tool wear. A390 alloy is machined by tungsten carbides, with cutting forces and geometry measured for heat source characterization as the input to temperature modeling and simulations. VT cooling is approximated as an impinging air jet to estimate the heat convection coefficient that is incorporated into the heat transfer models. The major findings include: (1) VT cooling may reduce tool wear in A390 machining depending upon machining conditions, and the outlet temperature is more critical than the flow rate; (2) cooling effects on temperature reductions, up to 20 °C, decrease with increasing cutting speed and feed; and (3) tool temperature decreases from VT cooling show no direct correlation with tool wear reductions. © 2006 Elsevier Ltd. All rights reserved.
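In the simple convection picture behind the modeling, the heat removed from the tool follows Newton's law of cooling and scales linearly with the surface-to-air temperature difference, which is one intuition for why a colder VT outlet temperature can matter more than a higher flow rate. The convection coefficient and temperatures below are round illustrative values, not the study's measured ones.

```python
def convective_heat_rate(h, area, t_surface, t_air):
    """Newton's law of cooling: q = h * A * (T_s - T_air), in watts.

    h: convection coefficient in W/(m^2.K) -- in the study this is
    estimated by treating VT cooling as an impinging air jet.
    """
    return h * area * (t_surface - t_air)

# Chilled VT air (-10 C) vs ambient air (25 C) over a 1 cm^2 tool face,
# assuming the same h = 200 W/(m^2.K) in both cases:
q_ambient = convective_heat_rate(200, 1e-4, 400.0, 25.0)
q_chilled = convective_heat_rate(200, 1e-4, 400.0, -10.0)
print(q_ambient, q_chilled)  # ~7.5 W vs ~8.2 W removed from the tool
```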
Hyperbaric oxygenation accelerates the healing rate of nonischemic chronic diabetic foot ulcers: a prospective randomized study.
OBJECTIVE To study the effect of systemic hyperbaric oxygenation (HBO) therapy on the healing course of nonischemic chronic diabetic foot ulcers. RESEARCH DESIGN AND METHODS From 1999 to 2000, 28 patients (average age 60.2 +/- 9.7 years, diabetes duration 18.2 +/- 6.6 years), of whom 87% had type 2 diabetes, demonstrating chronic Wagner grades I-III foot ulcers without clinical symptoms of arteriopathy, were studied. They were randomized to undergo HBO because their ulcers did not improve over 3 months of full standard treatment. All the patients demonstrated signs of neuropathy. HBO was applied twice a day, 5 days a week for 2 weeks; each session lasted 90 min at 2.5 ATA (atmospheres absolute). The main parameter studied was the size of the foot ulcer measured on tracing graphs with a computer. It was evaluated before HBO and at days 15 and 30 after baseline. RESULTS HBO was well tolerated in all but one patient (barotraumatic otitis). The transcutaneous oxygen pressure (TcPO(2)) measured on the dorsum of the feet of the patients was 45.6 +/- 18.1 mmHg (room air). During HBO, the TcPO(2) measured around the ulcer increased significantly from 21.9 +/- 12.1 to 454.2 +/- 128.1 mmHg (P < 0.001). At day 15 (i.e., after completion of HBO), the size of ulcers decreased significantly in the HBO group (41.8 +/- 25.5 vs. 21.7 +/- 16.9% in the control group [P = 0.037]). Such a difference could no longer be observed at day 30 (48.1 +/- 30.3 vs. 41.7 +/- 27.3%). Four weeks later, complete healing was observed in two patients having undergone HBO and none in the control group. CONCLUSIONS In addition to standard multidisciplinary management, HBO doubles the mean healing rate of nonischemic chronic foot ulcers in selected diabetic patients. The time dependence of the effect of HBO warrants further investigations.
Novel Design of a 2.5-GHz Fully Integrated CMOS Butler Matrix for Smart-Antenna Systems
This paper presents a novel design of a monolithic 2.5-GHz 4×4 Butler matrix in 0.18-μm CMOS technology. To achieve full monolithic integration of a smart-antenna system, the proposed Butler matrix is designed with phase-compensated transformer-based quadrature couplers and reflection-type phase shifters. The measurements show an accurate phase distribution of 45° ± 3°, 135° ± 4°, −45° ± 3°, and −135° ± 4°, with amplitude imbalance of less than 1.5 dB. The antenna beamforming capability is also demonstrated by integrating the Butler matrix with a 1 × 4 monopole antenna array. The generated beams point to −45°, −15°, 15°, and 45°, respectively, with less than 1° error, agreeing very well with the predictions. The Butler matrix consumes no dc power and occupies a chip area of only 1.36 × 1.47 mm². To our knowledge, this is the first demonstration of a single-chip Butler matrix in CMOS technology.
Neogene Himalayan weathering history and river 87Sr/86Sr: impact on the marine Sr record
Clastic sediments in the Bengal Fan contain a Neogene history of erosion and weathering of the Himalaya. We present data on clay mineralogy, major element, stable and radiogenic isotope abundances from Lower Miocene-Pleistocene sediments from ODP Leg 116. Nd and Sr isotope data show that the Himalayan provenance for the eroded material has varied little since > 17 Ma. However, from 7 to 1 Ma smectite replaces illite as the dominant clay, while sediment accumulation decreased, implying an interval of high chemical weathering intensity but lower physical erosion rates in the Ganges-Brahmaputra (GB) basin. O and H isotopes in clays are correlated with mineralogy and chemistry, and indicate that weathering took place in the paleo-Gangetic flood plain. The 87Sr/86Sr ratios of pedogenic clays (vermiculite, smectite) record the isotopic composition of Sr in the weathering environment, and can be used as a proxy for 87Sr/86Sr in the paleo-GB basin. The Sr data from pedogenic clays show that river 87Sr/86Sr values were near 0.72 prior to 7 Ma, rose rapidly to ≥ 0.74 in the Pliocene, and returned to ≤ 0.72 in the middle Pleistocene. These are the first direct constraints available on the temporal variability of 87Sr/86Sr in a major river system. The high 87Sr/86Sr values resulted from intensified chemical weathering of radiogenic silicates and a shift in the carbonate-silicate weathering ratio. Modeling of the seawater Sr isotopic budget shows that the high river 87Sr/86Sr values require a ca. 50% decrease in the Sr flux from the GB system in the Pliocene. The relationship between weathering intensity, 87Sr/86Sr and Sr flux is similar to that observed in modern rivers, and implies that fluxes of other elements such as Ca, Na and Si were also reduced. Increased weathering intensity but reduced Sr flux appears to require a late Miocene-Pliocene decrease in Himalayan erosion rates, followed by a return to physically dominated and rapid erosion in the Pleistocene.
In contrast to the view that increasing seawater 87Sr/86Sr results from increased erosion, Mio-Pliocene to mid-Pleistocene changes in the seawater Sr budget were the result of reduced erosion rates and Sr fluxes from the Himalaya.
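The trade-off the budget modeling exploits can be shown with a simple steady-state two-input mixing balance: the marine input ratio is the flux-weighted mean of the river and hydrothermal end-members. The fluxes below are arbitrary relative units and the hydrothermal ratio (~0.703) is a commonly cited approximate value; none of these are the study's actual model numbers.

```python
def seawater_sr_ratio(f_river, r_river, f_hydro, r_hydro):
    """Flux-weighted 87Sr/86Sr of the combined input to seawater.

    Two-end-member steady-state mixing (diagenetic and minor fluxes
    ignored): ratio = (F_r * R_r + F_h * R_h) / (F_r + F_h).
    """
    return (f_river * r_river + f_hydro * r_hydro) / (f_river + f_hydro)

baseline  = seawater_sr_ratio(1.0, 0.72, 0.5, 0.703)  # pre-7 Ma river ratio
same_flux = seawater_sr_ratio(1.0, 0.74, 0.5, 0.703)  # ratio up, flux unchanged
half_flux = seawater_sr_ratio(0.5, 0.74, 0.5, 0.703)  # ratio up, flux halved
print(round(baseline, 4), round(same_flux, 4), round(half_flux, 4))
# 0.7143 0.7277 0.7215 -- halving the river flux damps the rise in the
# input ratio, which is why a more radiogenic river can coexist with a
# muted change in the marine Sr record.
```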