Dabigatran versus warfarin major bleeding in practice: an observational comparison of patient characteristics, management and outcomes in atrial fibrillation patients
Data comparing the patient characteristics, management and outcomes for dabigatran versus warfarin major bleeding in the practice setting are limited. We performed a retrospective single health system study of atrial fibrillation patients with dabigatran or warfarin major bleeding from October 2010 through September 2012. Patient identification occurred through both an internal adverse event reporting system and a structured stepwise data filtering approach using the International Classification of Diseases diagnosis codes. Thirty-five dabigatran major bleeding patients were identified and compared to 70 warfarin major bleeding patients. Intracranial bleed occurred in 4.3 % of warfarin patients and 8.6 % of dabigatran patients. Dabigatran patients tended to be older (79.9 vs. 76 years) and were more likely to have a creatinine clearance of 15–30 mL/min (40 vs. 18.6 %, p = 0.02). Over one-third of dabigatran patients had an excessive dose based on renal function. More dabigatran patients required a procedure for bleed management (37.1 vs. 17.1 %, p = 0.03) and received a hemostatic agent for reversal (11.4 vs. 1.4 %, p = 0.04). Dabigatran patients were twice as likely to spend time in an ICU (45.7 vs. 27.1 %, p = 0.06), be placed in hospice/comfort care (14.3 vs. 7.1 %, p = 0.24), expire during hospitalization (14.3 vs. 7.1 %, p = 0.24), and expire within 30-days (22.9 vs. 11.4 %, p = 0.28). In a single hospital center practice setting, as compared to warfarin, patients with dabigatran major bleeding were more likely to be older, have renal impairment, require a procedure for bleed management and receive a hemostatic agent. Patients with dabigatran major bleeding had an excessive dose for renal function in more than one-third of cases.
SAF: Strategic Alignment Framework for Monitoring Organizations
Reaching a Strategic Alignment is a crucial aspect for any organization. The alignment can be achieved by controlling, through monitoring probes, the coherency of the Business Processes with the related Business Strategy. In this paper we present SAF, a powerful framework for those organizations that aim at superior business performance and want to keep the organization's alignment monitored. SAF has been applied to a real case study and it has also been compared with GQM Strategy [2] and Process Performance Indicators Monitoring Model [16].
Schema-agnostic vs Schema-based Configurations for Blocking Methods on Homogeneous Data
Entity Resolution constitutes a core task for data integration that, due to its quadratic complexity, typically scales to large datasets through blocking methods. These can be configured in two ways. The schema-based configuration relies on schema information in order to select signatures of high distinctiveness and low noise, while the schema-agnostic one treats every token from all attribute values as a signature. The latter approach has significant potential, as it requires no fine-tuning by human experts and it applies to heterogeneous data. Yet, there is no systematic study on its relative performance with respect to the schema-based configuration. This work covers this gap by comparing analytically the two configurations in terms of effectiveness, time efficiency and scalability. We apply them to 9 established blocking methods and to 11 benchmarks of structured data. We provide valuable insights into the internal functionality of the blocking methods with the help of a novel taxonomy. Our studies reveal that the schema-agnostic configuration offers unsupervised and robust definition of blocking keys under versatile settings, trading a higher computational cost for a consistently higher recall than the schema-based one. It also enables the use of state-of-the-art blocking methods without schema knowledge.
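To make the schema-agnostic configuration concrete, the following Python sketch builds blocks from every token of every attribute value, ignoring attribute names entirely. It is a minimal illustration of token blocking under assumed toy records, not the evaluated methods or benchmarks from the study.

```python
from collections import defaultdict
from itertools import combinations

def schema_agnostic_blocks(records):
    """Group record ids by every token appearing in any attribute value."""
    blocks = defaultdict(set)
    for rid, rec in records.items():
        for value in rec.values():                 # attribute names are ignored
            for token in str(value).lower().split():
                blocks[token].add(rid)
    return blocks

def candidate_pairs(blocks):
    """Union of comparisons implied by all blocks (pairs sharing >= 1 token)."""
    pairs = set()
    for ids in blocks.values():
        if len(ids) > 1:
            pairs.update(combinations(sorted(ids), 2))
    return pairs

if __name__ == "__main__":
    records = {
        1: {"name": "John Smith", "city": "Boston"},
        2: {"fullname": "J. Smith", "location": "Boston MA"},
        3: {"name": "Mary Jones", "city": "Chicago"},
    }
    print(candidate_pairs(schema_agnostic_blocks(records)))   # {(1, 2)}
```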
A Modified Particle Swarm Optimizer for the Coordination of Directional Overcurrent Relays
The coordination of directional overcurrent relays (DOCR) is treated in this paper using particle swarm optimization (PSO), a recently proposed optimizer that utilizes the swarm behavior in searching for an optimum. PSO gained a lot of interest for its simplicity, robustness, and easy implementation. The problem of setting DOCR is a highly constrained optimization problem that has been stated and solved as a linear programming (LP) problem. To deal with such constraints a modification to the standard PSO algorithm is introduced. Three case studies are presented, and the results are compared to those of LP technique to demonstrate the effectiveness of the proposed methodology.
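The constraint-handling idea can be sketched with a penalty-based PSO. The snippet below is a simplified illustration rather than the paper's proposed modification: relay operating times are assumed proportional to the time-dial settings, the relay constants and coordination pairs are invented for the example, and violated coordination margins are simply penalised in the fitness function.

```python
import numpy as np

# Toy DOCR setting: operating time of relay i is assumed to be t_i = a_i * TDS_i,
# and each coordination pair (backup b, primary p) must satisfy t_b - t_p >= CTI.
a = np.array([2.0, 2.5, 1.8, 2.2])                 # hypothetical relay constants
pairs = [(1, 0), (3, 2)]                           # (backup, primary) index pairs
CTI = 0.3                                          # coordination time interval (s)
lo, hi = 0.1, 1.1                                  # TDS bounds

def fitness(tds):
    t = a * tds
    penalty = sum(max(0.0, CTI - (t[b] - t[p])) for b, p in pairs)
    return t.sum() + 1e3 * penalty                 # penalise constraint violations

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(a)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep TDS within bounds
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, fitness(gbest)

print(pso())
```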
A Science Learning Environment using a Computational Thinking Approach
Computational Thinking (CT) defines a domain-general, analytic approach to problem solving that combines concepts fundamental to computing with systematic representations for concepts and problem-solving approaches in scientific and mathematical domains. We exploit this trade-off between domain-specificity and domain-generality to develop CTSiM (Computational Thinking in Simulation and Modeling), a cross-domain, visual programming and agent-based learning environment for middle school science. CTSiM promotes inquiry learning by providing students with an environment for constructing computational models of scientific phenomena, executing their models using simulation tools, and conducting experiments to compare the simulation behavior generated by their models against that of an expert model. In a preliminary study, sixth-grade students used CTSiM to learn about distance-speed-time relations in a kinematics unit and then about the ecological process relations between fish, duckweed, and bacteria occurring in a fish tank system. Results show learning gains in both science units, but this required a set of scaffolds to help students learn in this environment.
Navigating the bone marrow niche: translational insights and cancer-driven dysfunction
The bone marrow niche consists of stem and progenitor cells destined to become mature cells such as haematopoietic elements, osteoblasts or adipocytes. Marrow cells, influenced by endocrine, paracrine and autocrine factors, ultimately function as a unit to regulate bone remodelling and haematopoiesis. Current evidence highlights that the bone marrow niche is not merely an anatomic compartment; rather, it integrates the physiology of two distinct organ systems, the skeleton and the marrow. The niche has a hypoxic microenvironment that maintains quiescent haematopoietic stem cells (HSCs) and supports glycolytic metabolism. In response to biochemical cues and under the influence of neural, hormonal, and biochemical factors, marrow stromal elements, such as mesenchymal stromal cells (MSCs), differentiate into mature, functioning cells. However, disruption of the niche can affect cellular differentiation, resulting in disorders ranging from osteoporosis to malignancy. In this Review, we propose that the niche reflects the vitality of two tissues — bone and blood — by providing a unique environment for stem and stromal cells to flourish while simultaneously preventing disproportionate proliferation, malignant transformation or loss of the multipotent progenitors required for healing, functional immunity and growth throughout an organism's lifetime. Through a fuller understanding of the complexity of the niche in physiologic and pathologic states, the successful development of more-effective therapeutic approaches to target the niche and its cellular components for the treatment of rheumatic, endocrine, neoplastic and metabolic diseases becomes achievable.
Pupil Diameter Predicts Changes in the Exploration–Exploitation Trade-off: Evidence for the Adaptive Gain Theory
The adaptive regulation of the balance between exploitation and exploration is critical for the optimization of behavioral performance. Animal research and computational modeling have suggested that changes in exploitative versus exploratory control state in response to changes in task utility are mediated by the neuromodulatory locus coeruleus–norepinephrine (LC–NE) system. Recent studies have suggested that utility-driven changes in control state correlate with pupil diameter, and that pupil diameter can be used as an indirect marker of LC activity. We measured participants' pupil diameter while they performed a gambling task with a gradually changing payoff structure. Each choice in this task can be classified as exploitative or exploratory using a computational model of reinforcement learning. We examined the relationship between pupil diameter, task utility, and choice strategy (exploitation vs. exploration), and found that (i) exploratory choices were preceded by a larger baseline pupil diameter than exploitative choices; (ii) individual differences in baseline pupil diameter were predictive of an individual's tendency to explore; and (iii) changes in pupil diameter surrounding the transition between exploitative and exploratory choices correlated with changes in task utility. These findings provide novel evidence that pupil diameter correlates closely with control state, and are consistent with a role for the LC–NE system in the regulation of the exploration–exploitation trade-off in humans.
Transmission Lines Loaded With Bisymmetric Resonators and Their Application to Angular Displacement and Velocity Sensors
This paper is focused on the analysis of coplanar waveguides (CPWs) loaded with circularly shaped electric-LC (ELC) resonators, the latter consisting of two coplanar loops connected in parallel through a common gap. Specifically, the resonator axis is aligned with the CPW axis, and a dynamic loading with ELC rotation is considered. Since the ELC resonator is bisymmetric, i.e., it exhibits two orthogonal symmetry planes, the angular orientation range is limited to 90°. It is shown that the transmission and reflection coefficients of the structure depend on the angular orientation of the ELC. In particular, the loaded CPW behaves as a transmission line-type (i.e., all-pass) structure for a certain ELC orientation (0°) since the resonator is not excited. However, by rotating the ELC, magnetic coupling to the line arises, and a notch in the transmission coefficient (with orientation dependent depth and bandwidth) appears. This feature is exploited to implement angular displacement sensors by measuring the notch depth in the transmission coefficient. To gain more insight on sensor design, the lumped element equivalent-circuit model for ELC-loaded CPWs with arbitrary ELC orientation is proposed and validated. Based on this approach, a prototype displacement sensor is designed and characterized. It is shown that by introducing additional elements (a circulator and an envelope detector), novel and high precision angular velocity sensors can also be implemented. An angular velocity sensor is thus proposed, characterized, and satisfactorily validated. The proposed solution for angular sensing is robust against environmental variations since it is based on the geometrical alignment/misalignment between the symmetry planes of the coupled elements.
Simple vs. compound mark hierarchical marking menus
We present a variant of hierarchical marking menus where items are selected using a series of inflection-free simple marks, rather than the single "zig-zag" compound mark used in the traditional design. Theoretical analysis indicates that this simple mark approach has the potential to significantly increase the number of items in a marking menu that can be selected efficiently and accurately. A user experiment is presented that compares the simple and compound mark techniques. Results show that the simple mark technique allows for significantly more accurate and faster menu selections overall, but most importantly also in menus with a large number of items where performance of the compound mark technique is particularly poor. The simple mark technique also requires significantly less physical input space to perform the selections, making it particularly suitable for small footprint pen-based input devices. Visual design alternatives are also discussed.
Atomic redistribution of alloying elements in nanocrystalline austenitic chromium-nickel steels obtained by strong plastic deformation
The fcc solid solution of the stable austenitic 12Cr30Ni steels has exhibited segregation during heavy cold working (ψ > 80%, T = 24 °C), which gives rise to a ferromagnetic component with a Curie temperature close to 128 °C that may be associated with a local increase in the nickel concentration to 40%. The redistribution of the alloying elements induced by cold working is attributed, by analogy with low-temperature irradiation, to the diffusion of the point defects generated by the deformation to the sinks (grain or subgrain boundaries, phase interfaces, etc.), so that the sink regions have higher or lower concentrations of elements with different atomic radii. Changes that take place in the matrix during the redistribution of the alloying elements are evaluated using Mössbauer spectroscopy.
Electric load forecasting in smart grids using Long-Short-Term-Memory based Recurrent Neural Network
Electric load forecasting plays a vital role in smart grids. Short term electric load forecasting forecasts the load that is several hours to several weeks ahead. Due to the nonlinear, non-stationary and nonseasonal nature of the short term electric load time series in small scale power systems, accurate forecasting is challenging. This paper explores Long-Short-Term-Memory (LSTM) based Recurrent Neural Network (RNN) to deal with this challenge. LSTM-based RNN is able to exploit the long term dependencies in the electric load time series for more accurate forecasting. Experiments are conducted to demonstrate that LSTM-based RNN is capable of forecasting accurately the complex electric load time series with a long forecasting horizon. Its performance compares favorably to many other forecasting methods.
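A minimal PyTorch sketch of the idea is shown below; the synthetic daily-periodic load series, the window length and the network size are assumptions for illustration, not the experimental setup of the paper.

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic hourly load series; in practice this would be real metering data.
t = np.arange(2000)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.randn(2000) * 2
load = (load - load.mean()) / load.std()

def windows(series, lookback=48):
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

X, y = windows(load)

class LoadLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # predict the next hour

model = LoadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                             # kept short for the sketch
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse {loss.item():.4f}")
```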
Predictors of treatment outcome in adults with ADHD treated with OROS® methylphenidate
BACKGROUND We conducted a post-hoc analysis of the Long-Acting MethylpheniDate in Adult attention-deficit hyperactivity disorder (LAMDA) study to investigate predictors of response in adults with ADHD randomly assigned to Osmotic Release Oral System (OROS)(®)-methylphenidate hydrochloride (MPH) 18, 36 or 72 mg or placebo. METHODS LAMDA comprised a 5-week, double-blind (DB) period, followed by a 7-week, open-label (OL) period. A post-hoc analysis of covariance and a logistic regression analysis were undertaken to detect whether specific baseline parameters or overall treatment compliance during the double-blind phase contributed to response. The initial model included all covariates as independent variables; a backward stepwise selection method was used, with stay criteria of p<0.10. Six outcomes were considered: change from baseline CAARS:O-SV (physician-rated) and CAARS:S-S (self-report) scores at DB and OL end points, and response rate (≥30% decrease in CAARS:O-SV score from baseline) and normalization of CAARS:O-SV score at DB end point. RESULTS Taking into account a significant effect of OROS(®)-MPH treatment versus placebo in the original analysis (p≤0.015), across the outcomes considered in this post-hoc analysis, higher baseline CAARS scores were most strongly predictive of superior outcomes. Male gender and lower academic achievement were also predictive for improved results with certain outcomes. CONCLUSIONS Several baseline factors may help to predict better treatment outcomes in adults receiving OROS(®)-MPH; however, further research is required to confirm these findings and examine their neurobiological underpinnings.
FPGA Based Reconfigurable Platform for Complex Image Processing
Field programmable gate arrays (FPGAs) are in use to build high performance DSP systems. FPGAs are uniquely suited to repetitive DSP tasks, such as performing multiply and accumulate (MAC) operations in parallel. As a result, FPGAs can vastly outperform DSP chips, which perform operations in an essentially sequential fashion. The interfaces between the FPGA and the image sensor have been very slow, which inhibits the possibility of exploiting parallel processing in the FPGA. This paper discusses a method to build an image-processing platform using an FPGA. This involves interfacing the FPGA to a CMOS image sensor and a VGA monitor. We discuss techniques that helped in attaining a speed of 50 FPS (frames per second) for the interface of the CMOS image sensor to the FPGA, up from an earlier reported speed of 3 FPS. A benchmarking application has been executed on the FPGA- and DSP-based systems, and comparative real-time performance data is reported.
Coenzyme Q10 Improves Endothelial Dysfunction in Statin-Treated Type 2 Diabetic Patients
OBJECTIVE The vascular benefits of statins might be attenuated by inhibition of coenzyme Q(10) (CoQ(10)) synthesis. We investigated whether oral CoQ(10) supplementation improves endothelial dysfunction in statin-treated type 2 diabetic patients. RESEARCH DESIGN AND METHODS In a double-blind crossover study, 23 statin-treated type 2 diabetic patients with LDL cholesterol <2.5 mmol/l and endothelial dysfunction (brachial artery flow-mediated dilatation [FMD] <5.5%) were randomized to oral CoQ(10) (200 mg/day) or placebo for 12 weeks. We measured brachial artery FMD and nitrate-mediated dilatation (NMD) by ultrasonography. Plasma F(2)-isoprostane and 24-h urinary 20-hydroxyeicosatetraenoic acid (HETE) levels were measured as systemic oxidative stress markers. RESULTS Compared with placebo, CoQ(10) supplementation increased brachial artery FMD by 1.0 +/- 0.5% (P = 0.04), but did not alter NMD (P = 0.66). CoQ(10) supplementation also did not alter plasma F(2)-isoprostane (P = 0.58) or urinary 20-HETE levels (P = 0.28). CONCLUSIONS CoQ(10) supplementation improved endothelial dysfunction in statin-treated type 2 diabetic patients, possibly by altering local vascular oxidative stress.
SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions
Knowledge graph embedding represents entities and relations in a knowledge graph as low-dimensional, continuous vectors, and thus makes knowledge graphs compatible with machine learning models. Though there have been a variety of models for knowledge graph embedding, most methods merely concentrate on the fact triples, while supplementary textual descriptions of entities and relations have not been fully employed. To this end, this paper proposes the semantic space projection (SSP) model which jointly learns from the symbolic triples and textual descriptions. Our model builds interaction between the two information sources, and employs textual descriptions to discover semantic relevance and offer precise semantic embedding. Extensive experiments show that our method achieves substantial improvements against baselines on the tasks of knowledge graph completion and entity classification. Papers, Posters, Slides, Datasets and Codes: http://www.ibookman.net/conference.html
Skippy: single view 3D curve interactive modeling
We introduce Skippy, a novel algorithm for 3D interactive curve modeling from a single view. While positing curves in space can be a tedious task, our rapid sketching algorithm allows users to draw curves in and around existing geometry in a controllable manner. The key insight behind our system is to automatically infer the 3D curve coordinates by enumerating a large set of potential curve trajectories. More specifically, we partition 2D strokes into continuous segments that land both on and off the geometry, duplicating segments that could be placed in front or behind, to form a directed graph. We use distance fields to estimate 3D coordinates for our curve segments and solve for an optimally smooth path that follows the curvature of the scene geometry while avoiding intersections. Using our curve design framework we present a collection of novel editing operations allowing artists to rapidly explore and refine the combinatorial space of solutions. Furthermore, we include the quick placement of transient geometry to aid in guiding the 3D curve. Finally we demonstrate our interactive design curve system on a variety of applications including geometric modeling, and camera motion path planning.
Analyzing caching benefits for YouTube traffic in edge networks — A measurement-based evaluation
Recent studies have observed that video download platforms contribute a large share of the overall traffic mix in today's operator networks. Traffic related to video downloads has reached a level where operators, network equipment vendors, and standardization organizations such as the IETF start to explore methods to reduce the traffic load in the network. The success or failure of these techniques depends on the caching potential of the target applications' traffic patterns. Our work aims at providing detailed insight into the caching potential of one of the leading video serving platforms: YouTube. We monitored interactions of users of a large operator network with the YouTube video distribution infrastructure for the time period of one month. From these traffic observations, we examine parameters that are relevant to the operation and effectiveness of an in-network cache deployed in an edge network. Furthermore, we use our monitoring data as input for a simulation and determine the caching benefits that could have been observed if caching had been deployed.
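The kind of analysis described above can be approximated offline by replaying a request trace through a cache model. The sketch below uses a plain LRU cache and a synthetic Zipf-like trace (both assumptions, since the monitored YouTube trace from the study is not available) to estimate the hit rate for different cache sizes.

```python
from collections import OrderedDict
import random

def lru_hit_rate(requests, cache_slots):
    """Replay a request trace through an LRU cache and report the hit rate."""
    cache, hits = OrderedDict(), 0
    for video_id in requests:
        if video_id in cache:
            hits += 1
            cache.move_to_end(video_id)
        else:
            cache[video_id] = True
            if len(cache) > cache_slots:
                cache.popitem(last=False)          # evict least recently used
    return hits / len(requests)

# Zipf-like popularity is typical of video workloads; this trace is synthetic.
random.seed(0)
catalogue = 10_000
weights = [1 / (rank + 1) for rank in range(catalogue)]
trace = random.choices(range(catalogue), weights=weights, k=100_000)

for slots in (100, 1_000, 5_000):
    print(slots, round(lru_hit_rate(trace, slots), 3))
```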
Spatio-temporal clustering methods classification
Nowadays, a vast amount of spatio-temporal data is being generated by devices like cell phones, GPS and remote sensing devices, and discovering interesting patterns in such data has therefore become an interesting topic for researchers. One of these topics is spatio-temporal clustering, a novel subfield of data mining; recent research in this area has focused on new methods and on ways of adapting previous methods and solutions to the new problem. In this paper we first define what spatio-temporal data is and how it differs from other types of data. We then classify the clustering methods and the work done in this area based on the proposed solutions. The classification is made according to how these works import and adapt the temporal concept in their solutions.
Non‐Advertized does not Mean Concealed: Body Odour Changes across the Human Menstrual Cycle
The visual appearance of females of various primate species changes considerably across their menstrual cycle. These changes usually take place in the anogenital region and are commonly called sexual swellings. Sexual swellings are related to ovulation (e.g. Deschner et al. 2003), correlate with female sexual proceptivity (e.g. Wallis 1992) and are found to be attractive to males (Hrdy & Whitten 1987). Traditionally, human females are different from species which possess 'sexual swellings', with human …
Joining Forces: A Study of Multinational Corporations' Sustainability Contributions to a Cross-Sector Social Partnership
Background: Cross-sector social partnership (CSSP) is a joint effort that utilizes resources from different sectors to solve social issues, such as poverty, pandemics and environmental degradation. ...
Automatic text summarization using fuzzy inference
Due to the high volume of information and electronic documents on the Web, it is almost impossible for a human to study, research and analyze this volume of text. Summarizing the main idea and the major concept of the context enables the humans to read the summary of a large volume of text quickly and decide whether to further dig into details. Most of the existing summarization approaches have applied probability and statistics based techniques. But these approaches cannot achieve high accuracy. We observe that attention to the concept and the meaning of the context could greatly improve summarization accuracy, and due to the uncertainty that exists in the summarization methods, we simulate human-like methods by integrating fuzzy logic with traditional statistical approaches in this study. The results of this study indicate that our approach can deal with uncertainty and achieve better results when compared with existing methods.
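A minimal sketch of fuzzy sentence scoring is shown below. The triangular membership functions, the two rules, and the two features (keyword density and sentence position) are illustrative assumptions, not the rule base used in the study.

```python
import re
from collections import Counter

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def features(sentences):
    words = [re.findall(r"\w+", s.lower()) for s in sentences]
    tf = Counter(w for ws in words for w in ws)
    n = len(sentences)
    for i, ws in enumerate(words):
        keyword = sum(tf[w] for w in ws) / ((len(ws) * max(tf.values())) or 1)
        position = 1 - i / max(n - 1, 1)           # earlier sentences score higher
        yield keyword, position

def fuzzy_score(keyword, position):
    # Two toy rules: IF keyword is high AND position is high THEN important;
    # IF keyword is low THEN unimportant.  Defuzzify by a weighted average.
    high_kw, low_kw = tri(keyword, 0.3, 1.0, 1.7), tri(keyword, -0.7, 0.0, 0.7)
    high_pos = tri(position, 0.3, 1.0, 1.7)
    important, unimportant = min(high_kw, high_pos), low_kw
    return important / (important + unimportant + 1e-9)

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = [(fuzzy_score(*f), s) for f, s in zip(features(sentences), sentences)]
    return " ".join(s for _, s in sorted(scored, reverse=True)[:k])

print(summarize("Fuzzy logic handles uncertainty. Fuzzy rules score each sentence. "
                "Cats sleep a lot. The fuzzy scores rank sentences for the summary."))
```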
Comparison of Artificial Neural Network and Logistic Regression Models for Predicting In-Hospital Mortality after Primary Liver Cancer Surgery
BACKGROUND Since most published articles comparing the performance of artificial neural network (ANN) models and logistic regression (LR) models for predicting hepatocellular carcinoma (HCC) outcomes used only a single dataset, the essential issue of internal validity (reproducibility) of the models has not been addressed. This study aims to validate the use of the ANN model for predicting in-hospital mortality in HCC surgery patients in Taiwan and to compare the predictive accuracy of the ANN with that of the LR model. METHODOLOGY/PRINCIPAL FINDINGS Patients who underwent HCC surgery during the period from 1998 to 2009 were included in the study. This study retrospectively compared 1,000 pairs of LR and ANN models based on initial clinical data for 22,926 HCC surgery patients. For each pair of ANN and LR models, the area under the receiver operating characteristic (AUROC) curves, Hosmer-Lemeshow (H-L) statistics and accuracy rate were calculated and compared using paired t-tests. A global sensitivity analysis was also performed to assess the relative significance of input parameters in the system model and the relative importance of variables. Compared to the LR models, the ANN models had a better accuracy rate in 97.28% of cases, a better H-L statistic in 41.18% of cases, and a better AUROC curve in 84.67% of cases. Surgeon volume was the most influential (sensitive) parameter affecting in-hospital mortality, followed by age and length of stay. CONCLUSIONS/SIGNIFICANCE In comparison with the conventional LR model, the ANN model in the study was more accurate in predicting in-hospital mortality and had higher overall performance indices. Further studies of this model may consider the effect of a more detailed database that includes complications and clinical examination findings as well as more detailed outcome data.
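The paired-comparison methodology can be sketched as follows: repeatedly split the data, fit a logistic regression and a small neural network on the same split, and compare the AUROC values with a paired t-test. The synthetic data and model settings below are assumptions for illustration; the study itself compared 1,000 pairs of models on the Taiwanese HCC registry.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from scipy.stats import ttest_rel

# Synthetic stand-in for the clinical data: rare binary mortality outcome.
X, y = make_classification(n_samples=3000, n_features=15, weights=[0.95],
                           random_state=0)

auc_lr, auc_ann = [], []
for seed in range(30):                              # repeated random splits
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=seed)
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                        random_state=seed).fit(Xtr, ytr)
    auc_lr.append(roc_auc_score(yte, lr.predict_proba(Xte)[:, 1]))
    auc_ann.append(roc_auc_score(yte, ann.predict_proba(Xte)[:, 1]))

print("mean AUROC  LR: %.3f  ANN: %.3f" % (np.mean(auc_lr), np.mean(auc_ann)))
print("paired t-test:", ttest_rel(auc_ann, auc_lr))
```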
A Hybrid Model for Identity Obfuscation by Face Replacement
As more and more personal photos are shared and tagged in social media, avoiding privacy risks such as unintended recognition, becomes increasingly challenging. We propose a new hybrid approach to obfuscate identities in photos by head replacement. Our approach combines state of the art parametric face synthesis with latest advances in Generative Adversarial Networks (GAN) for data-driven image synthesis. On the one hand, the parametric part of our method gives us control over the facial parameters and allows for explicit manipulation of the identity. On the other hand, the data-driven aspects allow for adding fine details and overall realism as well as seamless blending into the scene context. In our experiments we show highly realistic output of our system that improves over the previous state of the art in obfuscation rate while preserving a higher similarity to the original image content.
An interstitial deletion of 7.1Mb in chromosome band 6p22.3 associated with developmental delay and dysmorphic features including heart defects, short neck, and eye abnormalities.
Seven cases with an interstitial deletion of the short arm of chromosome 6 involving the 6p22 region have previously been reported. The clinical phenotype of these cases includes developmental delay, brain-, heart-, and kidney defects, eye abnormalities, short neck, craniofacial malformations, hypotonia, as well as clinodactyly or syndactyly. Here, we report a patient with a 7.1Mb interstitial deletion of chromosome band 6p22.3, detected by genome-wide screening array CGH. The patient is a 4-year-old girl with developmental delay and dysmorphic features including eye abnormalities, short neck, and a ventricular septum defect. The deleted region at 6p22.3 in our patient overlaps with six out of the seven previously reported cases with a 6p22-24 interstitial deletion. This enabled us to further narrow down the critical region for the 6p22 deletion phenotype to 2.2Mb. Twelve genes are mapped to the overlapping deleted region, among them the gene encoding the ataxin-1 protein, the ATXN1 gene. Mice with homozygous deletions in ATXN1 are phenotypically normal but show cognitive delay. Haploinsufficiency of ATXN1 may therefore contribute to the learning difficulties observed in the patients harboring a 6p22 deletion.
Clinical features and prognostic factors in adults with bacterial meningitis.
BACKGROUND We conducted a nationwide study in the Netherlands to determine clinical features and prognostic factors in adults with community-acquired acute bacterial meningitis. METHODS From October 1998 to April 2002, all Dutch patients with community-acquired acute bacterial meningitis, confirmed by cerebrospinal fluid cultures, were prospectively evaluated. All patients underwent a neurologic examination on admission and at discharge, and outcomes were classified as unfavorable (defined by a Glasgow Outcome Scale score of 1 to 4 points at discharge) or favorable (a score of 5). Predictors of an unfavorable outcome were identified through logistic-regression analysis. RESULTS We evaluated 696 episodes of community-acquired acute bacterial meningitis. The most common pathogens were Streptococcus pneumoniae (51 percent of episodes) and Neisseria meningitidis (37 percent). The classic triad of fever, neck stiffness, and a change in mental status was present in only 44 percent of episodes; however, 95 percent had at least two of the four symptoms of headache, fever, neck stiffness, and altered mental status. On admission, 14 percent of patients were comatose and 33 percent had focal neurologic abnormalities. The overall mortality rate was 21 percent. The mortality rate was higher among patients with pneumococcal meningitis than among those with meningococcal meningitis (30 percent vs. 7 percent, P<0.001). The outcome was unfavorable in 34 percent of episodes. Risk factors for an unfavorable outcome were advanced age, presence of otitis or sinusitis, absence of rash, a low score on the Glasgow Coma Scale on admission, tachycardia, a positive blood culture, an elevated erythrocyte sedimentation rate, thrombocytopenia, and a low cerebrospinal fluid white-cell count. CONCLUSIONS In adults presenting with community-acquired acute bacterial meningitis, the sensitivity of the classic triad of fever, neck stiffness, and altered mental status is low, but almost all present with at least two of the four symptoms of headache, fever, neck stiffness, and altered mental status. The mortality associated with bacterial meningitis remains high, and the strongest risk factors for an unfavorable outcome are those that are indicative of systemic compromise, a low level of consciousness, and infection with S. pneumoniae.
A Practical Attack on Broadcast RC4
RC4 is the most widely deployed stream cipher in software applications. In this paper we describe a major statistical weakness in RC4, which makes it trivial to distinguish between short outputs of RC4 and random strings by analyzing their second bytes. This weakness can be used to mount a practical ciphertext-only attack on RC4 in some broadcast applications, in which the same plaintext is sent to multiple recipients under different keys.
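The second-byte weakness is easy to observe empirically. The sketch below implements standard RC4 and, over many random keys, counts how often the second keystream byte equals zero; the known result predicts roughly 2/256 rather than the uniform 1/256, which is what makes the broadcast (same plaintext, many keys) attack practical.

```python
import os
from collections import Counter

def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 50_000
second_bytes = Counter(rc4_keystream(os.urandom(16), 2)[1] for _ in range(trials))
print("P[Z2 = 0] ~", second_bytes[0] / trials, "(uniform would be", 1 / 256, ")")
```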
The fluvial record of climate change.
Fluvial landforms and sediments can be used to reconstruct past hydrological conditions over different time scales once allowance has been made for tectonic, base-level and human complications. Field stratigraphic evidence is explored here at three time scales: the later Pleistocene, the Holocene, and the historical and instrumental period. New data from a range of field studies demonstrate that Croll-Milankovitch forcing, Dansgaard-Oeschger and Heinrich events, enhanced monsoon circulation, millennial- to centennial-scale climate variability within the Holocene (probably associated with solar forcing and deep ocean circulation) and flood-event variability in recent centuries can all be discerned in the fluvial record. Although very significant advances have been made in river system and climate change research in recent years, the potential of fluvial palaeohydrology has yet to be fully realized, to the detriment of climatology, public health, resource management and river engineering.
Psychology and personalism by William Stern
In 1916, shortly after his move from Breslau to Hamburg, William Stern wrote in a letter dated October 10 and addressed to his colleague and friend, the Freiburg philosopher Jonas Cohn, that the second volume of a planned three-volume series setting forth that comprehensive system of thought called 'critical personalism' was "essentially finished." He wrote further that that work, in combination with a "little booklet" (Broschüre) titled Psychology and Personalism (Die Psychologie und der Personalismus), should "fashion a bridge" between his philosophical and psychological teachings. Due to a paper shortage in Germany during World War I, the larger of the two works mentioned by Stern in this letter would not actually appear until 2 years later. The "little booklet," however, could be published sooner, and in fact appeared in 1917. What follows is a translation of that work. It sets forth the principal tenets of critical personalism in a relatively concise and accessible way, and is thus not only worthy of attention in its own right but also offers readers of this Special Issue helpful background for the papers that follow. I am very grateful to Lothar Laux and Karl-Heinz Renner for their careful reading of an earlier draft of my translation against the original publication. As native speakers of German, Laux and Renner made numerous suggestions that I gladly implemented in the interest of improving the quality of the translation. For such infelicities as remain I take full responsibility.
An Empirical Comparison of Pattern Recognition, Neural Nets, and Machine Learning Classification Methods
Classification methods from statistical pattern recognition, neural nets, and machine learning were applied to four real-world data sets. Each of these data sets has been previously analyzed and reported in the statistical, medical, or machine learning literature. The data sets are characterized by statistical uncertainty; there is no completely accurate solution to these problems. Training and testing or resampling techniques are used to estimate the true error rates of the classification methods. Detailed attention is given to the analysis of performance of the neural nets using back propagation. For these problems, which have relatively few hypotheses and features, the machine learning procedures for rule induction or tree induction clearly performed best.
Modeling repulsive gravity with creation
There is a growing interest among cosmologists for theories with negative energy scalar fields and creation, in order to model a repulsive gravity. The classical steady state cosmology proposed by Bondi, Gold & Hoyle in 1948 was the first such theory, which used a negative kinetic energy creation field to invoke creation of matter. We emphasize that creation plays a very crucial role in cosmology and provides a natural explanation for the various explosive phenomena occurring in the local (z < 0.1) and extragalactic universe. We exemplify this point of view by considering the resurrected version of this theory, the quasi-steady state theory, which tries to relate creation events directly to the large scale dynamics of the universe and supplies more natural explanations of the observed phenomena. Although the theory predicts a decelerating universe at the present era, it explains successfully the recent SNe Ia observations (which require an accelerating universe in the standard cosmology), as we show in this paper by performing a Bayesian analysis of the data.
Improved Estimation of Trilateration Distances for Indoor Wireless Intrusion Detection
Detecting wireless network intruders is challenging since logical addressing information may be spoofed and the attacker may be located anywhere within radio range. Accurate indoor geolocation provides a method by which the physical location of rogue wireless devices may be pinpointed whilst providing an additional option for location-based access control. Existing methods for geolocation using received signal strength (RSS) are imprecise, due to the multipath nature of indoor radio propagation and additional pathloss due to walls, and aim to minimise location estimate error. This paper presents an approach to indoor geolocation that improves measurements of RSS by averaging across multiple frequency channels and determining the occurrence of walls in the signal path. Experimental results demonstrate that the approach provides improved distance estimates for trilateration and thus aids intrusion detection for wireless networks.
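The distance-estimation step can be sketched with a log-distance path-loss model: RSS samples are averaged across frequency channels to smooth multipath fading, a fixed per-wall attenuation is compensated for, and the corrected value is inverted to a range estimate before a least-squares trilateration fix. All constants below (reference power, path-loss exponent, wall loss) are illustrative assumptions that would be calibrated on site.

```python
import numpy as np

def estimate_distance(rss_per_channel_dbm, walls=0,
                      p0_dbm=-40.0, path_loss_exp=3.0, wall_loss_db=4.0):
    """Distance estimate from RSS averaged over channels, wall-corrected."""
    rss = np.mean(rss_per_channel_dbm)             # average over channels
    rss_corrected = rss + walls * wall_loss_db     # undo estimated wall loss
    # Log-distance model: RSS(d) = P0 - 10 * n * log10(d / 1 m)
    return 10 ** ((p0_dbm - rss_corrected) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares 2-D position fix from >= 3 anchors and range estimates."""
    (x0, y0), d0 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

anchors = [(0, 0), (10, 0), (0, 10)]
readings = [[-62, -65, -60], [-70, -72, -69], [-71, -68, -70]]  # dBm per channel
walls = [0, 1, 1]                                  # walls between device and anchor
d = [estimate_distance(r, w) for r, w in zip(readings, walls)]
print("distances:", np.round(d, 2), "position:", trilaterate(anchors, d))
```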
Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server
This paper makes two contributions to Bayesian machine learning algorithms. Firstly, we propose stochastic natural gradient expectation propagation (SNEP), a novel black box variational algorithm that is an alternative to expectation propagation (EP). In contrast to EP which has no guarantee of convergence, SNEP can be shown to be convergent, even when using Monte Carlo moment estimates. Secondly, we propose a novel architecture for distributed Bayesian learning which we call the posterior server, implementing a distributed asynchronous version of SNEP, which allows scalable and robust Bayesian learning in cases where a dataset is stored in a distributed manner across a cluster. An independent Monte Carlo sampler is run on each compute node which targets an approximation to the global posterior distribution given all data across the whole cluster. We demonstrate SNEP and the posterior server on distributed Bayesian learning of neural networks.
Reading Between the Lines: Content-Agnostic Detection of Spear-Phishing Emails
Spear-phishing is an effective attack vector for infiltrating companies and organisations. Based on the multitude of personal information available online, an attacker can craft seemingly legitimate emails and trick their victims into opening malicious attachments and links. Although anti-spoofing techniques exist, their adoption is still limited and alternative protection approaches are needed. In this paper, we show that a sender leaves content-agnostic traits in the structure of an email. Based on these traits, we develop a method capable of learning profiles for a large set of senders and identifying spoofed emails as deviations thereof. We evaluate our approach on over 700,000 emails from 16,000 senders and demonstrate that it can discriminate thousands of senders, identifying spoofed emails with 90% detection rate and less than 1 false positive in 10,000 emails. Moreover, we show that individual traits are hard to guess and spoofing only succeeds if entire emails of the sender are available to the attacker.
MODELING PRINTED CIRCUIT BOARDS WITH EMBEDDED DECOUPLING CAPACITANCE
Embedded capacitance is an alternative to discrete decoupling capacitors and is achieved by enhancing the natural capacitance between power and ground planes. New materials have recently been developed by competing companies that promise to reduce the cost and improve the performance of boards with embedded capacitance. This paper introduces simple models for embedded capacitance boards and examines the properties of these boards that have the greatest impact on their effectiveness.
Improved learning in a large-enrollment physics class.
We compared the amounts of learning achieved using two different instructional approaches under controlled conditions. We measured the learning of a specific set of topics and objectives when taught by 3 hours of traditional lecture given by an experienced highly rated instructor and 3 hours of instruction given by a trained but inexperienced instructor using instruction based on research in cognitive psychology and physics education. The comparison was made between two large sections (N = 267 and N = 271) of an introductory undergraduate physics course. We found increased student attendance, higher engagement, and more than twice the learning in the section taught using research-based instruction.
SecureDroid: Enhancing Security of Machine Learning-based Detection against Adversarial Android Malware Attacks
With smart phones being indispensable in people's everyday life, Android malware has posed serious threats to their security, making its detection of utmost concern. To protect legitimate users from the evolving Android malware attacks, machine learning-based systems have been successfully deployed and offer unparalleled flexibility in automatic Android malware detection. In these systems, based on different feature representations, various kinds of classifiers are constructed to detect Android malware. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the security of machine learning in Android malware detection on the basis of a learning-based classifier with the input of a set of features extracted from the Android applications (apps). We consider different importances of the features associated with their contributions to the classification problem as well as their manipulation costs, and present a novel feature selection method (named SecCLS) to make the classifier harder to be evaded. To improve the system security while not compromising the detection accuracy, we further propose an ensemble learning approach (named SecENS) by aggregating the individual classifiers that are constructed using our proposed feature selection method SecCLS. Accordingly, we develop a system called SecureDroid which integrates our proposed methods (i.e., SecCLS and SecENS) to enhance security of machine learning-based Android malware detection. Comprehensive experiments on the real sample collections from Comodo Cloud Security Center are conducted to validate the effectiveness of SecureDroid against adversarial Android malware attacks by comparisons with other alternative defense methods. Our proposed secure-learning paradigm can also be readily applied to other malware detection tasks.
The relationships between communication, care and time are intertwined: a narrative inquiry exploring the impact of time on registered nurses' work.
AIM To report a qualitative study which explores registered nurses' views on the issue of time in the workplace. BACKGROUND There is a worldwide shortage of healthcare workers, subsequently time as a healthcare resource is both finite and scarce. As a result, increased attention is being paid to the restructuring of nursing work. However, the experience of time passing is a subjective one and there exists little research which, over a prolonged period of time, describes nurses' experiences of working in time-pressurized environments. DESIGN A narrative inquiry. METHOD Five registered nurses were individually interviewed a total of three times over a period of 12 months, amounting to a total of 15 interviews and 30 hours of data. Data were collected and analysed following a narrative enquiry approach during the period 2008-2010. FINDINGS Participants describe how attempts to work more effectively sometimes resulted in unintended negative consequences for patient care and how time pressure encourages collegiality amongst nurses. Furthermore, the registered nurses' account of how they opportunistically create time for communication with patients compels us to re-evaluate the nature of communication during procedural nursing care. CONCLUSION Increasingly nursing work is translated into quantitative data or metrics. This is an inescapable development which seeks to enhance understanding of nursing work. However, qualitative research may also offer a useful approach which captures the otherwise hidden, subjective experiences associated with time and work. Such data can exist alongside nursing metrics, and together these can build a better and more nuanced consideration of nursing practice.
Nonoxidized, biologically active parathyroid hormone determines mortality in hemodialysis patients.
BACKGROUND It was shown that nonoxidized PTH (n-oxPTH) is bioactive, whereas the oxidation of PTH results in a loss of biological activity. METHODS In this study we analyzed the association of n-oxPTH on mortality in hemodialysis patients using a recently developed assay system. RESULTS Hemodialysis patients (224 men, 116 women) had a median age of 66 years. One hundred seventy patients (50%) died during the follow-up period of 5 years. Median n-oxPTH levels were higher in survivors (7.2 ng/L) compared with deceased patients (5.0 ng/L; P = .002). Survival analysis showed an increased survival in the highest n-oxPTH tertile compared with the lowest n-oxPTH tertile (χ², 14.3; P = .0008). Median survival was 1702 days in the highest n-oxPTH tertile, whereas it was only 453 days in the lowest n-oxPTH tertile. Multivariable-adjusted Cox regression showed that higher age increased odds for death, whereas higher n-oxPTH reduced the odds for death. Another model analyzing a subgroup of patients with intact PTH (iPTH) concentrations at baseline above the upper normal range of the iPTH assay (70 ng/L) revealed that mortality in this subgroup was associated with oxidized PTH but not with n-oxPTH levels. CONCLUSIONS The predictive power of n-oxPTH and iPTH on the mortality of hemodialysis patients differs substantially. Measurements of n-oxPTH may reflect the hormone status more precisely. The iPTH-associated mortality is most likely describing oxidative stress-related mortality.
Text mining in healthcare. Applications and opportunities.
Healthcare information systems collect massive amounts of textual and numeric information about patients, visits, prescriptions, physician notes and more. The information encapsulated within electronic clinical records could lead to improved healthcare quality, promotion of clinical and research initiatives, fewer medical errors and lower costs. However, the documents that comprise the health record vary in complexity, length and use of technical vocabulary. This makes knowledge discovery complex. Commercial text mining tools provide a unique opportunity to extract critical information from textual data archives. In this paper, we share our experience of a collaborative research project to develop predictive models by text mining electronic clinical records. We provide an overview of the text mining process, examples of existing studies, experiences of our collaborative project and future opportunities.
Using Fully Homomorphic Encryption for Statistical Analysis of Categorical, Ordinal and Numerical Data
In recent years, there has been a growing trend towards outsourcing of computational tasks with the development of cloud services. Gentry's pioneering work on fully homomorphic encryption (FHE) and successive works have opened a new vista for secure and practical cloud computing. In this paper, we consider performing statistical analysis on encrypted data. To improve the efficiency of the computations, we take advantage of batched computation based on the Chinese Remainder Theorem. We propose two building blocks that work with FHE: a novel batch greater-than primitive, and a matrix primitive for encrypted matrices. With these building blocks, we construct secure procedures and protocols for different types of statistics, including the histogram (count) and contingency table (with cell suppression) for categorical data; the k-percentile for ordinal data; and principal component analysis and linear regression for numerical data. To demonstrate the effectiveness of our methods, we ran experiments on five real datasets. For instance, we can compute a contingency table with more than 50 cells from 4,000 records in just 5 minutes, and we can train a linear regression model with more than 40k records and a dimension as high as 6 within 15 minutes. We show that FHE is not as slow as commonly believed and that it becomes feasible to perform a broad range of statistical analyses on thousands of encrypted records.
Towards Improving Abstractive Summarization via Entailment Generation
Abstractive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-to-sequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.
How IT Consumerization Affects the Stress Level at Work: A Public Sector Case Study
IT consumerization refers to the adoption of consumer technologies in an enterprise context and is becoming increasingly important in both research and practice. While there are often positive effects attributed with the trend, e.g. with respect to increased performance or motivation, not much attention has yet been given to the effects it has on stress of employees. In order to close this research gap, we conduct a qualitative single case study in the public sector. We derive four major stressors that are related to IT consumerization, i.e. 1) increased reachability, 2) lack of competence, 3) workflow changes, and 4) system redundancies. These stressors are discussed with respect to related theory concepts in IS. Moreover, they are used to derive recommendations for practitioners with respect to policy development and communication. Our paper contributes to the recent discussion on theoretical implications of IT consumerization effects.
The Oncological Safety of Nipple-Sparing Mastectomy: A Systematic Review of the Literature with a Pooled Analysis of 12,358 Procedures
Nipple-sparing mastectomy (NSM) is increasingly popular as a procedure for the treatment of breast cancer and as a prophylactic procedure for those at high risk of developing the disease. However, it remains a controversial option due to questions regarding its oncological safety and concerns regarding locoregional recurrence. This systematic review with a pooled analysis examines the current literature regarding NSM, including locoregional recurrence and complication rates. Systematic electronic searches were conducted using the PubMed database and the Ovid database for studies reporting the indications for NSM and the subsequent outcomes. Studies between January 1970 and January 2015 (inclusive) were analysed if they met the inclusion criteria. Pooled descriptive statistics were performed. Seventy-three studies that met the inclusion criteria were included in the analysis, yielding 12,358 procedures. After a mean follow up of 38 months (range, 7.4-156 months), the overall pooled locoregional recurrence rate was 2.38%, the overall complication rate was 22.3%, and the overall incidence of nipple necrosis, either partial or total, was 5.9%. Significant heterogeneity was found among the published studies and patient selection was affected by tumour characteristics. We concluded that NSM appears to be an oncologically safe option for appropriately selected patients, with low rates of locoregional recurrence. For NSM to be performed, tumours should be peripherally located, smaller than 5 cm in diameter, located more than 2 cm away from the nipple margin, and human epidermal growth factor 2-negative. A separate histopathological examination of the subareolar tissue and exclusion of malignancy at this site is essential for safe oncological practice. Long-term follow-up studies and prospective cohort studies are required in order to determine the best reconstructive methods.
A Multi-Functional In-Memory Inference Processor Using a Standard 6T SRAM Array
A multi-functional in-memory inference processor integrated circuit (IC) in a 65-nm CMOS process is presented. The prototype employs a deep in-memory architecture (DIMA), which enhances both energy efficiency and throughput over conventional digital architectures via simultaneous access of multiple rows of a standard 6T bitcell array (BCA) per precharge, and embedding column pitch-matched low-swing analog processing at the BCA periphery. In doing so, DIMA exploits the synergy between the dataflow of machine learning (ML) algorithms and the SRAM architecture to reduce the dominant energy cost due to data movement. The prototype IC incorporates a 16-kB SRAM array and supports four commonly used ML algorithms: the support vector machine, template matching, k-nearest neighbor, and the matched filter. Silicon measured results demonstrate simultaneous gains (dot product mode) in energy efficiency of 10× and in throughput of 5.3×, leading to a 53× reduction in the energy-delay product with negligible (≤1%) degradation in the decision-making accuracy, compared with conventional 8-b fixed-point single-function digital implementations.
The impact of unions on municipal elections and urban fiscal policies
The efficient decentralized provision of public goods requires that special interest groups, such as municipal unions, do not exercise undue influence on the outcome of municipal elections and local fiscal policies. We develop a new political economy model in which a union can endorse one of the candidates in a local election. A politician that prefers an inefficiently large public sector can, therefore, win an election if the union can provide sufficiently strong support during the campaign. We have assembled a unique data set that is based on union endorsements that are published in leading local newspapers. Our empirical analysis focuses on municipal elections in the 150 largest cities in the U.S. between 1990 and 2012. We find that challengers strongly benefit from endorsements in competitive elections. Challengers that receive union endorsements and successfully defeat an incumbent also tend to adopt more union friendly fiscal policies.
Learning Hash Functions for Cross-View Similarity Search
Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.
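A hedged sketch of the general idea is given below, using CCA rather than the paper's own objective: one linear projection is learned per view so that paired objects align in a shared space, and thresholding the projections yields binary codes that can be compared across views with Hamming distance. The toy two-view data are an assumption for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Toy paired two-view data: view 1 might be English text features, view 2 the
# features of the same objects in another language or modality.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 8))
view1 = latent @ rng.normal(size=(8, 50)) + 0.1 * rng.normal(size=(500, 50))
view2 = latent @ rng.normal(size=(8, 40)) + 0.1 * rng.normal(size=(500, 40))

# One linear projection per view aligns paired objects; thresholding at zero
# turns the aligned projections into comparable binary hash codes.
cca = CCA(n_components=8).fit(view1, view2)
z1, z2 = cca.transform(view1, view2)
codes1, codes2 = (z1 > 0).astype(np.uint8), (z2 > 0).astype(np.uint8)

def hamming(a, B):
    return (a ^ B).sum(axis=1)

# Cross-view search: query with object 0's view-1 code against all view-2 codes.
ranking = np.argsort(hamming(codes1[0], codes2))
print("true match ranked at position", int(np.where(ranking == 0)[0][0]))
```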
Micro-Biomechanics of the Kebara 2 Hyoid and Its Implications for Speech in Neanderthals
The description of a Neanderthal hyoid from Kebara Cave (Israel) in 1989 fuelled scientific debate on the evolution of speech and complex language. Gross anatomy of the Kebara 2 hyoid differs little from that of modern humans. However, whether Homo neanderthalensis could use speech or complex language remains controversial. Similarity in overall shape does not necessarily demonstrate that the Kebara 2 hyoid was used in the same way as that of Homo sapiens. The mechanical performance of whole bones is partly controlled by internal trabecular geometries, regulated by bone-remodelling in response to the forces applied. Here we show that the Neanderthal and modern human hyoids also present very similar internal architectures and micro-biomechanical behaviours. Our study incorporates detailed analysis of histology, meticulous reconstruction of musculature, and computational biomechanical analysis with models incorporating internal micro-geometry. Because internal architecture reflects the loadings to which a bone is routinely subjected, our findings are consistent with a capacity for speech in the Neanderthals.
Monolithic DC-DC boost converter with current-mode hysteretic control
A monolithic DC-DC boost converter with current-mode hysteretic control is designed and simulated in 0.18-μm CMOS technology. The system is simple, robust, and has a fast response to external changes. It does not require an external clock, and the output is regulated by voltage feedback in addition to limiting the inductor current by sensing it. A non-overlapping clock is internally generated to drive the power switches using buffers designed to minimize power dissipation. For conversion specifications of 1.8 V to 3.3 V at 150 mA, overall efficiency of 94.5% is achieved. Line regulation is 17.5 mV/V, load regulation is 0.33% for a 100 mA current step, while the output voltage ripple is below 30 mV for nominal conditions.
Low speed automation: Technical feasibility of the driving sharing in urban areas
This article presents the technical feasibility of fully automated driving at speeds below 50 km/h in urban and suburban areas with an adequate infrastructure quality (no intersections, known road geometry and lane markings available), focusing on congested and heavy traffic. This requires the implementation of several systems: a lane keeping system, Adaptive Cruise Control (ACC), lane changing, etc. Feasibility has been demonstrated through a complex scenario during the final ABV project event.
Complications during root canal irrigation--literature review and case reports.
LITERATURE REVIEW AND CASE REPORTS: The literature concerning the aetiology, symptomatology and therapy of complications during root canal irrigation is reviewed. Three cases of inadvertent injection of sodium hypochlorite and hydrogen peroxide beyond the root apex are presented. Clinical symptoms are discussed, as well as preventive and therapeutic considerations.
Ontology-based sentiment analysis of twitter posts
The emergence of Web 2.0 has drastically altered the way users perceive the Internet, by improving information sharing, collaboration and interoperability. Micro-blogging is one of the most popular Web 2.0 applications and related services, like Twitter, have evolved into a practical means for sharing opinions on almost all aspects of everyday life. Consequently, micro-blogging web sites have since become rich data sources for opinion mining and sentiment analysis. Towards this direction, text-based sentiment classifiers often prove inefficient, since tweets typically do not consist of representative and syntactically consistent words, due to the imposed character limit. This paper proposes the deployment of original ontology-based techniques towards a more efficient sentiment analysis of Twitter posts. The novelty of the proposed approach is that posts are not simply characterized by a sentiment score, as is the case with machine learning-based classifiers, but instead receive a sentiment grade for each distinct notion in the post. Overall, our proposed architecture results in a more detailed analysis of post opinions regarding a specific topic.
Improved Successive Cancellation Decoding of Polar Codes
As improved versions of the successive cancellation (SC) decoding algorithm, the successive cancellation list (SCL) decoding and the successive cancellation stack (SCS) decoding are used to improve the finite-length performance of polar codes. In this paper, unified descriptions of the SC, SCL, and SCS decoding algorithms are given as path search procedures on the code tree of polar codes. Combining the principles of SCL and SCS, a new decoding algorithm called the successive cancellation hybrid (SCH) is proposed. This proposed algorithm can provide a flexible configuration when the time and space complexities are limited. Furthermore, a pruning technique is also proposed to lower the complexity by reducing unnecessary path searching operations. Performance and complexity analysis based on simulations shows that under proper configurations, all the three improved successive cancellation (ISC) decoding algorithms can approach the performance of the maximum likelihood (ML) decoding but with acceptable complexity. With the help of the proposed pruning technique, the time and space complexities of ISC decoders can be significantly reduced and be made very close to those of the SC decoder in the high signal-to-noise ratio regime.
Learning Word Importance with the Neural Bag-of-Words Model
The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.
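A minimal PyTorch sketch of the weighted-sum composition described above: each vocabulary word gets a learnable scalar importance weight, and the document representation is the importance-weighted average of its word vectors rather than a plain mean. Layer sizes and the sigmoid squashing of the weights are illustrative choices, not necessarily those of the paper.

```python
# Sketch of an NBOW classifier with learned per-word importance weights.
import torch
import torch.nn as nn

class WeightedNBOW(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One scalar importance weight per vocabulary word (illustrative).
        self.word_weight = nn.Embedding(vocab_size, 1)
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        vecs = self.embed(token_ids)                     # (batch, seq_len, embed_dim)
        w = torch.sigmoid(self.word_weight(token_ids))   # (batch, seq_len, 1)
        doc = (w * vecs).sum(dim=1) / w.sum(dim=1).clamp(min=1e-6)
        return self.out(doc)

model = WeightedNBOW(vocab_size=10000, embed_dim=100, num_classes=2)
logits = model(torch.randint(0, 10000, (8, 20)))         # toy batch
```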
Semi-supervised radio signal identification
Radio emitter recognition in dense multi-user environments is an important tool for optimizing spectrum utilization, identifying and minimizing interference, and enforcing spectrum policy. Radio data is readily available and easy to obtain from an antenna, but labeled and curated data is often scarce making supervised learning strategies difficult and time consuming in practice. We demonstrate that semi-supervised learning techniques can be used to scale learning beyond supervised datasets, allowing for discerning and recalling new radio signals by using sparse signal representations based on both unsupervised and supervised methods for nonlinear feature learning and clustering methods.
Scene Recognition with CNNs: Objects, Scales and Dataset Bias
Since scenes are composed in part of objects, accurate recognition of scenes requires knowledge about both scenes and objects. In this paper we address two related problems: 1) scale-induced dataset bias in multi-scale convolutional neural network (CNN) architectures, and 2) how to effectively combine scene-centric and object-centric knowledge (i.e. Places and ImageNet) in CNNs. An earlier attempt, Hybrid-CNN [23], showed that incorporating ImageNet did not help much. Here we propose an alternative method taking the scale into account, resulting in significant recognition gains. By analyzing the response of ImageNet-CNNs and Places-CNNs at different scales we find that they operate in different scale ranges, so using the same network for all the scales induces dataset bias resulting in limited performance. Thus, adapting the feature extractor to each particular scale (i.e. scale-specific CNNs) is crucial to improve recognition, since the objects in the scenes have their specific range of scales. Experimental results show that the recognition accuracy highly depends on the scale, and that simple yet carefully chosen multi-scale combinations of ImageNet-CNNs and Places-CNNs can push the state-of-the-art recognition accuracy on SUN397 up to 66.26% (and even 70.17% with deeper architectures, comparable to human performance).
Face detection: A deep convolutional network method based on grouped facial part
In this paper, a novel method is proposed for face detection, which has a simple structure but is robust to severe occlusion. In detail, the size-free images are first segmented into a series of candidate windows. Then these candidate windows are further filtered by grouped facial part networks to generate a set of windows which may contain faces. Finally, the set of face proposals is input to a multi-task deep convolutional network (DCN) for further classification and calibration. Importantly, we take the spatial position relations of local facial parts into consideration and find this helpful for handling severe occlusion. Our method achieves outstanding performance on the widely used datasets FDDB and AFW, compared to other proposed face detectors. Especially on FDDB, our method achieves a high recall rate of 90.13%.
Mechanical sensorless control of PMSM with online estimation of stator resistance
This paper provides an improvement in sensorless control performance of nonsalient-pole permanent-magnet synchronous machines. To ensure sensorless operation, most of the existing methods require that the initial position error as well as the parameters uncertainties must be limited. In order to overcome these drawbacks, we study them analytically and present a solution using an online identification method which is easy to implement and is highly stable. A stability analysis based on Lyapunov's linearization method shows the stability of the closed-loop system with the proposed estimator combined with the sensorless algorithm. This approach does not need a well-known initial rotor position and makes the sensorless control more robust with respect to the stator resistance variations at low speed. The simulation and experimental results illustrate the validity of the analytical approach and the efficiency of the proposed method.
Education for the engineering mission
Engineering involves such a wide range of human endeavor that it demands diversity and innovation in the preparation of its practitioners. Engineering effort can be divided into three major categories: engineering technology, engineering practice, and engineering science. The qualifications of individuals whose activities would fall into each of these mutually interdependent categories are examined, as are the problems involved in training these individuals.
Explaining public service broadcasting entrenched politicization: The case of South Africa’s SABC
Public service broadcasting is the terrain par excellence within today’s media systems on which political power and media logic interact and overlap. This study will argue that public service broad...
Impact of metformin on peak aerobic capacity.
Individually, exercise and the drug metformin have been shown to prevent or delay type 2 diabetes. Metformin mildly inhibits complex I of the electron transport system and may impact aerobic capacity in people exercising while taking metformin. The purpose of the study was to evaluate the effects of metformin on maximal aerobic capacity in healthy individuals without mitochondrial dysfunction. Seventeen healthy, normal-weight men (n=11) and women (n=6) participated in a double-blind, placebo-controlled, cross-over design. Peak aerobic capacity was measured twice using a continuous, incrementally graded protocol; once after 7-9 d of metformin (final dose=2000 mg/d) and once with placebo, with 1 week between tests. The order of the conditions was counterbalanced. Peak oxygen uptake (VO2 peak), heart rate (HR), ventilation (VE), respiratory exchange ratio (RER), rating of perceived exertion (RPE), and test duration were compared across conditions using paired t tests with the R statistical program. VO2 peak (-2.7%), peak heart rate (-2.0%), peak ventilation (-6.2%), peak RER (-3.0%), and exercise duration (-4.1%) were all reduced slightly, but significantly, with metformin (all p<0.05). There was no effect of metformin on RPE or ventilatory breakpoint. Correlations between the decrement in VO2 peak and any of the other outcome variables were weak (r2<0.20) and not significant. Short-term treatment with metformin has statistically significant, but physiologically subtle, effects that reduce key outcomes related to maximal exercise capacity. Whether this small but consistent effect is manifested in people with insulin resistance or diabetes who already have some degree of mitochondrial dysfunction remains to be determined.
Efficacy of a tetravalent dengue vaccine in children in Latin America.
BACKGROUND In light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development. METHODS In a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection. RESULTS A total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events. CONCLUSIONS The CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).
High prevalence of asthma symptoms in Warao Amerindian children in Venezuela is significantly associated with open-fire cooking: a cross-sectional observational study
BACKGROUND The International Study on Asthma and Allergies in Childhood (ISAAC) reported a prevalence of asthma symptoms in 17 centers in nine Latin American countries that was similar to prevalence rates reported in non-tropical countries. It has been proposed that the continuous exposure to infectious diseases in rural populations residing in tropical areas leads to a relatively low prevalence of asthma symptoms. As almost a quarter of Latin American people live in rural tropical areas, the encountered high prevalence of asthma symptoms is remarkable. Wood smoke exposure and environmental tobacco smoke have been identified as possible risk factors for having asthma symptoms. METHODS We performed a cross-sectional observational study from June 1, 2012 to September 30, 2012 in which we interviewed parents and guardians of Warao Amerindian children from Venezuela. Asthma symptoms were defined according to the ISAAC definition as self-reported wheezing in the last 12 months. The associations between wood smoke exposure and environmental tobacco smoke and the prevalence of asthma symptoms were calculated by means of univariate and multivariable logistic regression analyses. RESULTS We included 630 children between two and ten years of age. Asthma symptoms were recorded in 164 of these children (26%). The prevalence of asthma symptoms was associated with the cooking method. Children exposed to the smoke produced by cooking on open wood fires were at higher risk of having asthma symptoms compared to children exposed to cooking with gas (AOR 2.12, 95% CI 1.18 - 3.84). Four percent of the children lived in a household where more than ten cigarettes were smoked per day and they had a higher risk of having asthma symptoms compared to children who were not exposed to cigarette smoke (AOR 2.69, 95% CI 1.11 - 6.48). CONCLUSION Our findings suggest that children living in rural settings in a household where wood is used for cooking or where more than ten cigarettes are smoked daily have a higher risk of having asthma symptoms.
Emergence of Political Unionism in Economies of British Colonial Origin: The Cases of Jamaica and Trinidad
What were the circumstances under which political unionism has emerged in economies of British colonial origin, such as Jamaica and Trinidad? The hypothesis tested is that the political activities of trade unions in such economies played a role in the process of economic development, helping to achieve political independence and then economic growth. But at that stage political unionism is found to be incompatible with needed acceleration of growth rates. A significant deterioration in economic and social conditions produced a crisis and the unions traded support for the parties for some control over economic and social policy. This gave the political leaders the power they needed to negotiate for independence but, in Jamaica, it changed the focus and character of the labor movement.
Energy management algorithm for smart home with renewable energy sources
Meeting increased power demand and solving the integration problems of renewable energy sources do not seem to be possible with today's power grid infrastructure. In order to overcome these problems, the Smart Grid is the most promising concept, being more reliable, flexible, controllable and environmentally friendly. The home energy management (HEM) system, an important part of the smart grid, provides a number of benefits such as savings in the electricity bill, reduction in peak demand and meeting demand side requirements. In this study, a new home energy management algorithm is proposed for a smart home with renewable sources. The management mechanism developed uses the battery state of charge (SOC) level, power grid availability and a multi-rate tariff to decrease the energy cost in the customer's bill. Load and source data are obtained from the real smart house laboratory built at Yildiz Technical University, Turkey. Simulation results obtained from the MATLAB/Simulink environment demonstrate the effectiveness of the proposed algorithm in decreasing the customer's electricity bill.
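The following sketch illustrates the kind of rule-based source selection such a home energy management algorithm performs, using the battery SOC, grid availability and the current tariff. All thresholds, tariff values and function names are assumptions for illustration, not figures from the study.

```python
# Illustrative HEM decision rule: pick the supply source for the home load from
# battery SOC, grid availability and the current tariff; also decide whether to
# recharge the battery. Thresholds and tariff values are assumed, not from the paper.
def choose_source(soc, grid_available, tariff_per_kwh,
                  soc_min=0.3, soc_max=0.9, high_tariff=0.25):
    """Return (source supplying the load, whether to charge the battery)."""
    if not grid_available:
        return ("battery" if soc > soc_min else "shed_load", False)
    if tariff_per_kwh >= high_tariff and soc > soc_min:
        # Peak tariff: discharge the battery to cut the bill.
        return ("battery", False)
    # Off-peak: supply from the grid and recharge the battery if there is headroom.
    return ("grid", soc < soc_max)

print(choose_source(soc=0.8, grid_available=True, tariff_per_kwh=0.32))
# ('battery', False)
```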
Wearable augmented reality system using gaze interaction
Undisturbed interaction is essential to provide immersive AR environments. There have been many approaches to interacting with VEs (virtual environments) so far, especially using hand metaphors. When the user's hands are occupied with hand-based work such as maintenance and repair, the need for an alternative interaction technique arises. In recent research, hands-free gaze information has been adopted in AR to perform the original actions concurrently with interaction [3, 4]. There has been little progress in that research, which is still at the pilot-study stage in laboratory settings. In this paper, we introduce a simple WARS (wearable augmented reality system) equipped with an HMD, scene camera and eye tracker. We propose an 'Aging' technique improving traditional dwell-time selection, and demonstrate an AR gallery - a dynamic exhibition space with the wearable system.
The impact of frailty in older patients with non-ischaemic cardiomyopathy after implantation of cardiac resynchronization therapy defibrillator.
AIMS Frailty status impacts the prognosis in older patients with heart disease. However, the impact of frailty status is unknown in patients with non-ischaemic cardiomyopathy after cardiac resynchronization therapy (CRT). METHODS AND RESULTS Functional measures of baseline frailty and clinical data were collected for all patients with non-ischaemic cardiomyopathy before CRT defibrillator (CRT-D) implantation. The level of frailty was assessed using the Fried and Walston definition. Cox proportional hazard regression models were used to examine the association between baseline frailty and decompensated heart failure (HF) at the 12-month follow-up. The cohort study consisted of 102 patients with a mean age of 73 ± 4 years, 53% of whom were male. Twenty-nine patients (28%) were classified as frail before CRT-D implantation. Twenty-seven patients experienced decompensated HF after CRT-D implantation at the 12-month follow-up. In the non-frail group, 12 of 73 patients (16.4%) experienced episodes of decompensated HF. In contrast, 15 of 29 (55.6%) frail patients experienced decompensated HF, a higher proportion (P < 0.001). Patients who were frail (hazard ratio 4.55, 95% confidence interval 1.726-12.013) were at increased risk of decompensated HF (P for trend = 0.002) compared with those who were not frail. CONCLUSION Frailty is a strong predictor of adverse post-implantation outcome in patients with non-ischaemic cardiomyopathy undergoing CRT-D.
Quantifying Long-term Scientific Impact
The lack of predictability of citation-based measures frequently used to gauge impact, from impact factors to short-term citations, raises a fundamental question: Is there long-term predictability in citation patterns? Here, we derive a mechanistic model for the citation dynamics of individual papers, allowing us to collapse the citation histories of papers from different journals and disciplines into a single curve, indicating that all papers tend to follow the same universal temporal pattern. The observed patterns not only help us uncover basic mechanisms that govern scientific impact but also offer reliable measures of influence that may have potential policy implications.
Antagonism also Flows through Retweets: The Impact of Out-of-Context Quotes in Opinion Polarization Analysis
In this paper, we study the implications of the commonplace assumption that most social media studies make with respect to the nature of message shares (such as retweets) as a predominantly positive interaction. By analyzing two large longitudinal Brazilian Twitter datasets containing 5 years of conversations on two polarizing topics – Politics and Sports – we empirically demonstrate that groups holding antagonistic views can actually retweet each other more often than they retweet other groups. We show that treating retweets as endorsement interactions can lead to misleading conclusions with respect to the level of antagonism among social communities, and that this apparent paradox is explained in part by the use of retweets to quote the original content creator out of the message's original temporal context, for humor and criticism purposes. As a consequence, messages diffused on online media can have their polarity reversed over time, which poses challenges for social and computer scientists aiming to classify and track opinion groups on online media. On the other hand, we found that the time users take to retweet a message after it has been originally posted can be a useful signal to infer antagonism in social platforms, and that surges of out-of-context retweets correlate with sentiment drifts triggered by real-world events. We also discuss how such evidence can be embedded in sentiment analysis models.
Long-Range Battery-Free UHF RFID With a Single Wire Transmission Line
We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link, so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggests that a SWTL-based RFID system is capable of read ranges of over 70 m, assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.
Stacked Extreme Learning Machines
Extreme learning machine (ELM) has recently attracted many researchers' interest due to its very fast learning speed, good generalization ability, and ease of implementation. It provides a unified solution that can be used directly to solve regression, binary, and multiclass classification problems. In this paper, we propose a stacked ELMs (S-ELMs) that is specially designed for solving large and complex data problems. The S-ELMs divides a single large ELM network into multiple stacked small ELMs which are serially connected. The S-ELMs can approximate a very large ELM network with small memory requirement. To further improve the testing accuracy on big data problems, the ELM autoencoder can be implemented during each iteration of the S-ELMs algorithm. The simulation results show that the S-ELMs even with random hidden nodes can achieve similar testing accuracy to support vector machine (SVM) while having low memory requirements. With the help of ELM autoencoder, the S-ELMs can achieve much better testing accuracy than SVM and slightly better accuracy than deep belief network (DBN) with much faster training speed.
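A minimal numpy sketch of a single ELM, the building unit that the S-ELMs architecture stacks and serially connects: input weights are random and fixed, and only the output weights are solved, in closed form, by regularized least squares. Sizes, activation and regularization constant are illustrative assumptions.

```python
# Sketch of one ELM block: random hidden layer, closed-form output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=200, reg=1e-3):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Regularized least squares for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(300, 10))
Y = X[:, :1] ** 2 + X[:, 1:2]                     # toy regression target
params = elm_fit(X, Y)
pred = elm_predict(X, *params)
```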
Stabilization of slugging in oil production facilities with or without upstream pressure sensors
This paper presents methods for suppressing the slugging phenomenon occurring in multiphase flow. The considered systems include industrial oil production facilities such as gas-lifted wells and flowline risers with low-points. Given the difficulty of maintaining sensors in deep locations, a particular emphasis is put on observer-based control design. It appears that, without any upstream pressure sensor, such a strategy can stabilize the flow. Besides, given a measurement or estimate of the upstream pressure, we propose a control strategy alternative to the classical techniques. The efficiency of these methods is assessed through experiments on a mid-scale multiphase flow loop.
Measuring Urban Social Diversity Using Interconnected Geo-Social Networks
Large metropolitan cities bring together diverse individuals, creating opportunities for cultural and intellectual exchanges, which can ultimately lead to social and economic enrichment. In this work, we present a novel network perspective on the interconnected nature of people and places, allowing us to capture the social diversity of urban locations through the social network and mobility patterns of their visitors. We use a dataset of approximately 37K users and 42K venues in London to build a network of Foursquare places and the parallel Twitter social network of visitors through check-ins. We define four metrics of the social diversity of places which relate to their social brokerage role, their entropy, the homogeneity of their visitors and the amount of serendipitous encounters they are able to induce. This allows us to distinguish between places that bring together strangers versus those which tend to bring together friends, as well as places that attract diverse individuals as opposed to those which attract regulars. We correlate these properties with wellbeing indicators for London neighbourhoods and discover signals of gentrification in deprived areas with high entropy and brokerage, where an influx of more affluent and diverse visitors points to an overall improvement of their rank according to the UK Index of Multiple Deprivation for the area over the five-year census period. Our analysis sheds light on the relationship between the prosperity of people and places, distinguishing between different categories and urban geographies of consequence to the development of urban policy and the next generation of socially-aware location-based applications.
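As an illustration of one of the four metrics, the sketch below computes the entropy of a venue's visitor distribution from its check-ins; a venue visited by many distinct people scores higher than one dominated by a few regulars. The data and the exact definition here are illustrative assumptions, not the paper's.

```python
# Illustrative place-entropy metric over a venue's check-in history.
import math
from collections import Counter

def place_entropy(checkin_user_ids):
    counts = Counter(checkin_user_ids)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# Toy venues: a bar dominated by regulars vs. a station with many one-off visitors.
regulars_bar = ["ann"] * 20 + ["bob"] * 18 + ["cat"] * 2
train_station = [f"user{i}" for i in range(40)]
print(place_entropy(regulars_bar), place_entropy(train_station))
# the station scores far higher entropy than the neighbourhood bar
```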
Improving the Robustness of Neural Networks Using K-Support Norm Based Adversarial Training
It is important for any classification or recognition system that claims near-human or better performance to be immune to small perturbations in the dataset. Researchers have found that neural networks are not very robust to small perturbations and can easily be fooled into persistently misclassifying by adding a particular class of noise to the test data. This so-called adversarial noise severely deteriorates the performance of neural networks, which otherwise perform very well on the unperturbed dataset. It has recently been proposed that neural networks can be made robust against adversarial noise by training them on data corrupted with adversarial noise itself. Following this approach, in this paper we propose a new mechanism to generate a powerful adversarial noise model based on the K-support norm to train neural networks. We tested our approach on two benchmark datasets, namely MNIST and STL-10, using multi-layer perceptrons and convolutional neural networks. Experimental results demonstrate that neural networks trained with the proposed technique show significant improvement in robustness compared to state-of-the-art techniques.
Fast Collision Attack on MD5
We presented the first single-block collision attack on MD5 with a complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single-block collision attack in response to our challenge, with a complexity of 2 MD5 compressions. We really appreciate Stevens's hard work. However, it is a pity that he did not find even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping someone would find, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method for choosing the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes, strong conditions and weak conditions, by the degree of difficulty of condition satisfaction. Second, we prove that strong conditions exist in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of the MD5 compression function, namely difference inheriting and message expanding. Third, there should be no difference scaling after state word q25, so as to yield the least number of strong conditions in each differential path; in this way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the most free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.
Finding Clues for Your Secrets: Semantics-Driven, Learning-Based Privacy Discovery in Mobile Apps
A long-standing challenge in analyzing information leaks within mobile apps is to automatically identify the code operating on sensitive data. With all existing solutions relying on System APIs (e.g., IMEI, GPS location) or features of user interfaces (UI), the content from app servers, like the user's Facebook profile or payment history, falls through the cracks. Finding such content is important given the fact that most apps today are web applications, whose critical data are often on the server side. In the meantime, operations on the data within mobile apps are often hard to capture, since all server-side information is delivered to the app in the same way, sensitive or not. A unique observation of our research is that in modern apps, a program is essentially a semantics-rich documentation carrying meaningful program elements such as method names, variables and constants that reveal the sensitive data involved, even when the program is under moderate obfuscation. Leveraging this observation, we develop a novel semantics-driven solution for automatic discovery of sensitive user data, including those from the server side. Our approach utilizes natural language processing (NLP) to automatically locate the program elements (variables, methods, etc.) of interest, and then performs a learning-based program structure analysis to accurately identify those indeed carrying sensitive content. Using this new technique, we analyzed 445,668 popular apps, an unprecedented scale for this type of research. Our work brings to light the pervasiveness of information leaks, and the channels through which the leaks happen, including unintentional over-sharing across libraries and aggressive data acquisition behaviors. Further, we found that many high-profile apps and libraries are involved in such leaks. Our findings contribute to a better understanding of the privacy risk in mobile apps and also highlight the importance of data protection in today's software composition.
Hybrid elevation maps: 3D surface models for segmentation
This paper presents an algorithm for segmenting 3D point clouds. It extends terrain elevation models by incorporating two types of representations: (1) ground representations based on averaging the height in the point cloud, (2) object models based on a voxelisation of the point cloud. The approach is deployed on Riegl data (dense 3D laser data) acquired in a campus type of environment and compared against six other terrain models. Amongst elevation models, it is shown to provide the best fit to the data as well as being unique in the sense that it jointly performs ground extraction, overhang representation and 3D segmentation. We experimentally demonstrate that the resulting model is also applicable to path planning.
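A minimal sketch of the ground layer of such a model: bin the point cloud into a horizontal grid and average the height of the points falling in each cell. The voxel-based object layer and the joint segmentation step described in the paper are not shown; cell size and data are illustrative assumptions.

```python
# Sketch of the ground-elevation layer: per-cell mean height over an x-y grid.
import numpy as np

def ground_elevation_map(points, cell=0.5):
    """points: (N, 3) array of x, y, z. Returns {(ix, iy): mean height}."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    grid = {}
    for (ix, iy), z in zip(map(tuple, ij), points[:, 2]):
        s, n = grid.get((ix, iy), (0.0, 0))
        grid[(ix, iy)] = (s + z, n + 1)
    return {cell_id: s / n for cell_id, (s, n) in grid.items()}

# Toy "ground" cloud: a 10 m x 10 m patch with up to 30 cm of height variation.
cloud = np.random.default_rng(0).uniform([0, 0, 0], [10, 10, 0.3], size=(5000, 3))
elev = ground_elevation_map(cloud)
```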
Personalizing a Dialogue System with Transfer Learning
It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform transfer learning from the source to the target domain. Following this idea, we propose PETAL (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.
Strong and Robust RFID Authentication Enabling Perfect Ownership Transfer
RFID technology arouses great interests from both its advocates and opponents because of the promising but privacy-threatening nature of low-cost RFID tags. A main privacy concern in RFID systems results from clandestine scanning through which an adversary could conduct silent tracking and inventorying of persons carrying tagged objects. Thus, the most important security requirement in designing RFID protocols is to ensure untraceability of RFID tags by unauthorized parties (even with knowledge of a tag secret due to no physical security of low-cost RFID tags). Previous work in this direction mainly focuses on backward untraceability, requiring that compromise of a tag secret should not help identify the tag from past communication transcripts. However, in this paper, we argue that forward untraceability, i.e., untraceability of future events even with knowledge of a current tag secret, should be considered as an equally or even more important security property in RFID protocol designs. Furthermore, RFID tags may often change hands during their lifetime and thus the problem of tag ownership transfer should be dealt with as another key issue in RFID privacy problems; once ownership of a tag is transferred to another party, the old owner should not be able to read the tag any more. It is rather obvious that complete transfer of tag ownership is possible only if some degree of forward untraceability is provided. We propose a strong and robust RFID authentication protocol satisfying both forward and backward untraceability and enabling complete transfer of tag ownership.
Vision for spatial perception and vision for action: a dissociation between the left–right and near–far dimensions
Neuropsychological and psychophysical studies have suggested that two distinct visual sub-systems are responsible for perception and action. One of the main psychophysical arguments for this is based on visual illusion such as the Induced Roelofs Effect (IRE), where the location of a visual target presented with an off-centre frame is misperceived when evaluated verbally, but not with a reaching response. This dissociated effect suggests the existence of two independent representations of visual space devoted, respectively, to categorisation and to egocentric localisation of reachable objects. These "cognitive" and "sensorimotor" representations have been assumed to be produced through specific anatomical pathways stemming from the primary visual cortex (respectively, the ventral and dorsal streams). To account for the dissociation found with the IRE, it has been suggested that only the cognitive system is sensitive to contextual information. However this view has been challenged by recent psychophysical studies demonstrating the influence of environmental cues on distance perception and the guiding of movement. In the present study, the IRE is re-evaluated but the near-far and right-left dimensions were dissociated. In agreement with previous findings, our results showed that the IRE in the right-left dimension gives rise to a perceptual misperception of target position with no effect on motor performance. Conversely, when the IRE was induced in the near-far dimension a misperception of the target position affected both perceptual and motor responses. This dissociation indicates that the spatial constraints of the task, and not only the nature of the response, interfere with sensitivity to contextual information leading to visual illusions. It is thus likely that the action system (imputed to the dorsal stream) can be sensitive to contextual information, at least when depth processing is emphasised.
Ontology-based information extraction: An introduction and a survey of current approaches
Information Extraction aims to retrieve certain types of information from natural language text by processing it automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies, which provide formal and explicit specifications of conceptualizations, play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding of their operation. We also discuss the implementation details of these systems, including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify possible future directions for this field.
Longitudinal examination of predictors of smoking cessation in a national sample of U.S. adolescent and young adult smokers.
INTRODUCTION To better inform the development of smoking cessation programs for adolescents and young adults, a prospective study was employed to systematically examine behavioral, demographic, health, and psychosocial determinants of smoking cessation. METHODS Data from the 2003-2005 National Youth Smoking Cessation Survey were used. Of 2,582 smokers aged 16-24 years sampled, 1,354 provided complete baseline telephone interview data on the study variables, and their self-reported smoking status at 2-year follow-up was known (currently smoking vs. not smoking). Multivariable logistic regression analysis was employed to examine independent predictors of smoking status (outcome variable) at the 2-year follow-up period. RESULTS Four of 5 participants remained smokers after 2 years. Of the high nicotine dependence smokers, 90% remained smokers at follow-up; of the low nicotine dependence smokers, 77% remained smokers at follow-up. Higher nicotine dependence smokers started smoking earlier in life (13.2 vs. 14.3 years; p < .05). Similarly, those not smoking at the 2-year follow-up period started smoking later in life than those still smoking (14.5 vs. 13.7 years). Along with nicotine dependence, various psychosocial and demographic variables at baseline predicted smoking status at the 2-year follow-up period. CONCLUSIONS Identifiable demographic and psychosocial factors influence smoking behavior among U.S. adolescents and young adults. Even low nicotine dependence is a strong predictor of follow-up smoking behavior. This, coupled with the early smoking age of high nicotine dependence smokers, underscores the importance of early nicotine avoidance among youth.
Enhancements of V2X communication in support of cooperative autonomous driving
Two emerging technologies in the automotive domain are autonomous vehicles and V2X communication. Even though these technologies are usually considered separately, their combination enables two key cooperative features: sensing and maneuvering. Cooperative sensing allows vehicles to exchange information gathered from local sensors. Cooperative maneuvering permits inter-vehicle coordination of maneuvers. These features enable the creation of cooperative autonomous vehicles, which may greatly improve traffic safety, efficiency, and driver comfort. The first generation V2X communication systems with the corresponding standards, such as Release 1 from ETSI, have been designed mainly for driver warning applications in the context of road safety and traffic efficiency, and do not target use cases for autonomous driving. This article presents the design of core functionalities for cooperative autonomous driving and addresses the required evolution of communication standards in order to support a selected number of autonomous driving use cases. The article describes the targeted use cases, identifies their communication requirements, and analyzes the current V2X communication standards from ETSI for missing features. The result is a set of specifications for the amendment and extension of the standards in support of cooperative autonomous driving.
EAST: An Efficient and Accurate Scene Text Detector
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.
Cortical reorganization after motor imagery training in chronic stroke patients with severe motor impairment: a longitudinal fMRI study
Despite its clinical efficacy, few studies have examined the neural mechanisms of motor imagery training (MIT) in stroke. Our objective was to find the cortical reorganization patterns after MIT in chronic stroke patients. Twenty stroke patients with severe motor deficits were randomly assigned to the MIT or conventional rehabilitation therapy (CRT) group, but two lost in the follow-up. All 18 patients received CRT 5 days/week for 4 weeks. Nine subjects in the MIT group received 30-min MIT 5 days/week for 4 weeks. Before and after the interventions, the upper limb section of the Fugl–Meyer Scale (FM-UL) was blindly evaluated, and functional magnetic resonance imaging was administered while the patients executed a passive fist clutch task. Two cortical reorganization patterns were found. One pattern consisted of the growth in activation in the contralateral sensorimotor cortex (cSMC) for most patients (six in the MIT group, five in the CRT group), and the other consisted of focusing of the activation in the cSMC with increasing of the laterality index of the SMC for a small portion of patients (three in the MIT group, one in the CRT group). When we applied correlation analyses to the variables of relative ΔcSMC and ΔFM-UL in the 11 patients who experienced the first pattern, a positive relationship was detected. Our results indicate that different cortical reorganization patterns (increases in or focusing of recruitment to the cSMC region) exist in chronic stroke patients after interventions, and patients may choose efficient patterns to improve their motor function.
Global Explanations of Neural Networks: Mapping the Landscape of Predictions
A barrier to the wider adoption of neural networks is their lack of interpretability. While local explanation methods exist for one prediction, most global attributions still reduce neural network decisions to a single set of features. In response, we present an approach for generating global attributions called GAM, which explains the landscape of neural network predictions across subpopulations. GAM augments global explanations with the proportion of samples that each attribution best explains and specifies which samples are described by each attribution. Global explanations also have tunable granularity to detect more or fewer subpopulations. We demonstrate that GAM’s global explanations 1) yield the known feature importances of simulated data, 2) match feature weights of interpretable statistical models on real data, and 3) are intuitive to practitioners through user studies. With more transparent predictions, GAM can help ensure neural network decisions are generated for the right reasons.
Parameter identification for industrial robots with a fast and robust trajectory design approach
Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameter estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamic parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches.
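The estimation step common to this line of work can be sketched as follows: with the rigid-body dynamics written linearly in the unknown parameters, tau = Y(q, qdot, qddot) @ theta, stacking the samples collected along the excitation trajectory gives an ordinary least-squares problem. The regressor below is a random stand-in, not the Staubli TX-90 model, and the paper's trajectory-design criterion itself is not reproduced.

```python
# Illustrative least-squares estimation of dynamic parameters from a stacked
# regressor; the regressor and parameter values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 1000, 12
Y = rng.normal(size=(n_samples, n_params))                   # stacked regressor rows
theta_true = rng.normal(size=n_params)                       # "true" dynamic parameters
tau = Y @ theta_true + 0.01 * rng.normal(size=n_samples)     # measured joint torques

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
rms_error = np.sqrt(np.mean((Y @ theta_hat - tau) ** 2))     # torque prediction RMS error
print(rms_error)
```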
Face recognition techniques, their advantages, disadvantages and performance evaluation
A human brain can store and remember thousands of faces in a person's lifetime; however, it is very difficult for an automated system to reproduce the same results. Faces are complex and multidimensional, which makes extraction of facial features very challenging, yet it is imperative for our face recognition systems to be better than our brain's capabilities. The face, like many physiological biometrics including fingerprint, hand geometry, retina, iris and ear, uniquely identifies each individual. In this paper we focus mainly on face recognition techniques. This review looks at three types of recognition approaches, namely holistic, feature-based (geometric) and hybrid approaches. We also look at the challenges that are faced by these approaches.
A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation
Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.
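A minimal PyTorch sketch of a 3D convolutional autoencoder of the kind used to embed small subvolumes before grouping them; the encoder's bottleneck code is what would be clustered. Layer sizes and the 32³ input subvolume size are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a 3D convolutional autoencoder for subvolume (subtomogram) coding.
import torch
import torch.nn as nn

class SubvolumeAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                    # x: (batch, 1, 32, 32, 32)
        z = self.encoder(x)
        return self.decoder(z), z            # reconstruction + code for clustering

model = SubvolumeAE()
recon, codes = model(torch.randn(4, 1, 32, 32, 32))
```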
Baseline serum cholestanol as predictor of recurrent coronary events in subgroup of Scandinavian simvastatin survival study. Finnish 4S Investigators.
Objectives: To investigate whether baseline serum cholestanol:cholesterol ratio, which is negatively related to cholesterol synthesis, could predict reduction of coronary events in the Scandinavian simvastatin survival study. Design: Follow up of patients with coronary heart disease in whom baseline ratios were related to major coronary events. Setting: Four universities in Finland. Subjects: A subgroup of 868 patients with coronary heart disease selected from the Scandinavian simvastatin survival study. Intervention: Treatment with simvastatin or placebo. Main outcome measures: Serum concentrations of low density lipoprotein and high density lipoprotein cholesterol, total triglyceride concentration, and cholesterol:cholestanol ratio. Major coronary events. Results: With increasing baseline quarter of cholestanol distribution the reduction in relative risk increased gradually from 0.623 (95% confidence interval 0.395 to 0.982) to 1.166 (0.791 to 1.72). The risk of recurrence of major coronary events increased 2.2-fold (P < 0.01) by multiple logistic regression analysis between the lowest and highest quarter of cholestanol. The ratio of cholestanol was related inversely to the body mass index and directly to high density lipoprotein cholesterol and triglyceride concentrations but their quarters of distribution were not related to risk reduction. Conclusions: Measurement of serum cholestanol concentration revealed a subgroup of patients with coronary heart disease in whom coronary events were not reduced by simvastatin treatment. Thus, patients with high baseline synthesis of cholesterol seem to be responders whereas those with low synthesis of cholesterol are non-responders.
ADaPT: optimizing CNN inference on IoT and mobile devices using approximately separable 1-D kernels
Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract important information to help advance healthcare, make our cities smarter, and innovate in smart home technology. Deep convolutional neural networks, which are at the heart of many emerging Internet-of-Things (IoT) applications, achieve remarkable performance in audio and visual recognition tasks, at the expense of high computational complexity in convolutional layers, limiting their deployability. In this paper, we present an easy-to-implement acceleration scheme, named ADaPT, which can be applied to already available pre-trained networks. Our proposed technique exploits redundancy present in the convolutional layers to reduce computation and storage requirements. Additionally, we also decompose each convolution layer into two consecutive one-dimensional stages to make full use of the approximate model. This technique can easily be applied to existing low power processors, GPUs or new accelerators. We evaluated this technique using four diverse and widely used benchmarks, on hardware ranging from embedded CPUs to server GPUs. Our experiments show an average 3-5x speed-up in all deep models and a maximum 8-9x speed-up on many individual convolutional layers. We demonstrate that unlike iterative pruning based methodology, our approximation technique is mathematically well grounded, robust, does not require any time-consuming retraining, and still achieves speed-ups solely from convolutional layers with no loss in baseline accuracy.
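The core approximation can be sketched as follows: a K×K kernel is replaced by the rank-1 outer product of two 1-D kernels obtained from its SVD, so one 2-D convolution becomes two cheaper 1-D convolutions. This toy example only shows the per-kernel decomposition; the full scheme's exploitation of channel and filter redundancy is not reproduced, and all names and sizes are illustrative.

```python
# Rank-1 separable approximation of a 2-D convolution kernel via SVD.
import numpy as np
from scipy.signal import convolve2d

def separable_approx(kernel):
    U, S, Vt = np.linalg.svd(kernel)
    col = U[:, 0] * np.sqrt(S[0])          # vertical 1-D kernel (K x 1)
    row = Vt[0, :] * np.sqrt(S[0])         # horizontal 1-D kernel (1 x K)
    return col[:, None], row[None, :]

rng = np.random.default_rng(0)
kernel = rng.normal(size=(5, 5))
image = rng.normal(size=(64, 64))

col, row = separable_approx(kernel)
full = convolve2d(image, kernel, mode="same")
approx = convolve2d(convolve2d(image, col, mode="same"), row, mode="same")
print(np.abs(full - approx).mean())        # small if the kernel is near-separable
```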
Qrs Complex Detection of Ecg Signal Using Wavelet Transform
The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Generally, the recorded ECG signal is contaminated by noise. In order to extract useful information from noisy ECG signals, the raw ECG signal has to be processed. Baseline wandering is significant and can strongly affect ECG signal analysis. The detection of QRS complexes in an ECG signal provides information about the heart rate, the conduction velocity, the condition of tissues within the heart as well as various abnormalities. It supplies evidence for the diagnosis of cardiac diseases. An algorithm based on wavelet transforms (WTs) has been developed for detecting ECG characteristic points. The Discrete Wavelet Transform (DWT) has been used to extract relevant information from the ECG signal in order to perform classification. The QRS complex can be distinguished from high P or T waves, noise, baseline drift, and artifacts. Using this method, the detection rate of QRS complexes is above 99.8% for the MIT/BIH database, and the P and T waves can also be detected, even with serious baseline drift and noise. Keywords: Electrocardiogram, Wavelet Transform, QRS Complex, Filters, Threshold.
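A minimal sketch of the DWT-based detection idea, assuming the PyWavelets and SciPy libraries: decompose the ECG, suppress the approximation (baseline wander) and the finest detail (high-frequency noise), and threshold the reconstructed signal to locate beat candidates. The wavelet, decomposition level and threshold are illustrative choices, not the exact settings behind the reported 99.8% detection rate.

```python
# DWT-based QRS candidate detection sketch (illustrative parameters).
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_qrs(ecg, fs, wavelet="db4", level=4):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # Drop the approximation (baseline wander) and the finest detail (noise).
    coeffs[0] = np.zeros_like(coeffs[0])
    coeffs[-1] = np.zeros_like(coeffs[-1])
    detail = pywt.waverec(coeffs, wavelet)[: len(ecg)]
    env = np.abs(detail)
    thr = 0.5 * env.max()
    peaks, _ = find_peaks(env, height=thr, distance=int(0.25 * fs))
    return peaks  # sample indices of candidate QRS complexes

fs = 360                                      # e.g., MIT-BIH sampling rate
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21       # crude spiky surrogate signal
print(detect_qrs(ecg, fs)[:5])
```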
Topic Model for Identifying Suicidal Ideation in Chinese Microblog
Suicide is one of the major public health problems worldwide. Traditionally, suicidal ideation is assessed by surveys or interviews, which lack real-time assessment of a person's mental state. Online social networks, with their large amounts of user-generated data, offer opportunities to gain insights into suicide assessment and prevention. In this paper, we explore the potential to identify and monitor suicidal ideation expressed in microblogs on social networks. First, we identify users who have committed suicide and collect millions of microblogs from social networks. Second, we build a suicide psychological lexicon using psychological standards and a word embedding technique. Third, by leveraging both language styles and online behaviors, we employ a topic model and other machine learning algorithms to identify suicidal ideation. Our approach achieves the best results on topic-500, yielding an F1-measure of 80.0%, precision of 87.1%, recall of 73.9%, and accuracy of 93.2%. Furthermore, a prototype system for monitoring suicidal ideation on several social networks has been deployed.
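For readers who want a feel for the pipeline sketched above, the snippet below wires a bag-of-words representation, a latent Dirichlet allocation topic model and a linear classifier together with scikit-learn. It is a hedged stand-in: the paper's psychological lexicon, online-behaviour features and exact classifier are not reproduced, and `posts`/`labels` are hypothetical inputs.

```python
# Bag-of-words -> topic proportions -> binary suicidal-ideation classifier (sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_model(n_topics=500):
    return make_pipeline(
        CountVectorizer(max_features=20000),             # posts assumed pre-segmented text
        LatentDirichletAllocation(n_components=n_topics, random_state=0),
        LogisticRegression(max_iter=1000),
    )

# model = build_model(n_topics=500).fit(posts, labels)   # labels: 1 = suicidal ideation
# predictions = model.predict(new_posts)
```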
Binocular Camera Calibration Using Rectification Error
Reprojection error is a commonly used measure for comparing the quality of different camera calibrations, for example when choosing the best calibration from a set. While this measure is suitable for single cameras, we show that we can improve calibrations in a binocular or multi-camera setup by calibrating the cameras in pairs using a rectification error. The rectification error determines the mismatch in epipolar constraints between a pair of cameras, and it can be used to calibrate binocular camera setups more accurately than using the reprojection error. We provide a quantitative comparison of the reprojection and rectification errors, and also demonstrate our result with examples of binocular stereo reconstruction.
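The notion of rectification error used above can be illustrated with OpenCV: after rectifying a calibrated pair, corresponding points should share the same image row, so the residual vertical offset measures the epipolar mismatch. The sketch below is an assumed formulation for illustration (the matched point pairs, intrinsics and extrinsics are presumed to come from a prior stereo calibration), not the authors' exact metric.

```python
# Mean vertical mismatch of matched points after stereo rectification (sketch).
import cv2
import numpy as np

def rectification_error(pts_left, pts_right, K1, D1, K2, D2, R, T, image_size):
    # Rectifying rotations/projections for the calibrated pair.
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    # Map matched points into the rectified frames.
    rect_l = cv2.undistortPoints(pts_left.reshape(-1, 1, 2), K1, D1, R=R1, P=P1)
    rect_r = cv2.undistortPoints(pts_right.reshape(-1, 1, 2), K2, D2, R=R2, P=P2)
    # Epipolar mismatch: difference in y-coordinates of corresponding points.
    dy = rect_l[:, 0, 1] - rect_r[:, 0, 1]
    return np.sqrt(np.mean(dy ** 2))
```

A smaller returned value indicates a pair-wise calibration that better satisfies the epipolar constraints, which is the criterion the abstract argues should drive binocular calibration selection.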
Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.
Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems.
Biomechanics and muscle coordination of human walking: part II: lessons from dynamical simulations and clinical implications.
Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.
GPLAG: detection of software plagiarism by program dependence graph analysis
Along with the blossoming of open source projects comes greater convenience for software plagiarism. A company, if less self-disciplined, may be tempted to plagiarize some open source projects for its own products. Although current plagiarism detection tools appear sufficient for academic use, they nevertheless fall short of fighting serious plagiarists. For example, disguises like statement reordering and code insertion can effectively confuse these tools. In this paper, we develop a new plagiarism detection tool, called GPLAG, which detects plagiarism by mining program dependence graphs (PDGs). A PDG is a graphic representation of the data and control dependencies within a procedure. Because PDGs are nearly invariant during plagiarism, GPLAG is more effective than state-of-the-art tools for plagiarism detection. In order to make GPLAG scalable to large programs, a statistical lossy filter is proposed to prune the plagiarism search space. An experimental study shows that GPLAG is both effective and efficient: it detects plagiarism that easily slips past existing tools, and it usually takes only a few seconds to find (simulated) plagiarism in programs having thousands of lines of code.
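To make the PDG-mining idea tangible, the toy sketch below checks whether one procedure's dependence graph embeds in another's via exact subgraph isomorphism with networkx. It is a deliberately simplified, hypothetical illustration: GPLAG relaxes this to approximate matching and builds PDGs from real code, neither of which is shown here.

```python
# Toy PDG-matching step: does the original PDG embed in the suspect's PDG?
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher, categorical_node_match

def looks_plagiarized(pdg_original, pdg_suspect):
    # Node labels (statement kinds) must agree; edge structure must embed.
    matcher = DiGraphMatcher(pdg_suspect, pdg_original,
                             node_match=categorical_node_match("kind", None))
    return matcher.subgraph_is_isomorphic()

orig = nx.DiGraph()
orig.add_nodes_from([(0, {"kind": "assign"}), (1, {"kind": "loop"}), (2, {"kind": "call"})])
orig.add_edges_from([(0, 1), (1, 2)])           # data/control dependencies

suspect = orig.copy()                            # reordered/renamed code keeps the same PDG
suspect.add_node(3, kind="assign")               # plus an inserted, irrelevant statement
suspect.add_edge(3, 2)

print(looks_plagiarized(orig, suspect))          # True: the original PDG embeds in the suspect's
```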
Efficient Event-Driven Simulation of Large Networks of Spiking Neurons and Dynamical Synapses
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
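The following sketch conveys the core event-driven idea in the abstract: between spikes each neuron's state evolves by a closed-form exponential decay, so the simulator only touches a neuron when a spike event arrives, with events kept in a priority queue. It is a minimal leaky integrate-and-fire example with made-up parameters, not the paper's algorithm or its synaptic-plasticity model.

```python
# Minimal event-driven simulation of leaky integrate-and-fire neurons.
import heapq
import math

TAU, V_TH, V_RESET, DELAY, WEIGHT = 20.0, 1.0, 0.0, 1.5, 0.6   # ms / arbitrary units

class Neuron:
    def __init__(self):
        self.v, self.last_t = 0.0, 0.0
    def receive(self, t, w):
        # Deterministic decay from the last update to the event time, then a jump.
        self.v = self.v * math.exp(-(t - self.last_t) / TAU) + w
        self.last_t = t
        if self.v >= V_TH:
            self.v = V_RESET
            return True                       # the neuron fires
        return False

def simulate(n_neurons, connections, initial_spikes, t_max=100.0):
    neurons = [Neuron() for _ in range(n_neurons)]
    events = list(initial_spikes)             # (arrival time, presynaptic source index)
    heapq.heapify(events)
    fired = []
    while events:
        t, src = heapq.heappop(events)
        if t > t_max:
            break
        for dst in connections.get(src, []):
            if neurons[dst].receive(t, WEIGHT):
                fired.append((t, dst))
                heapq.heappush(events, (t + DELAY, dst))
    return fired

# A tiny chain 0 -> 1 -> 2 driven by two early spikes from neuron 0.
print(simulate(3, {0: [1], 1: [2]}, [(0.0, 0), (0.5, 0)]))   # -> [(0.5, 1)]
```

Because no clock ticks through silent intervals, the cost scales with the number of spike events rather than with simulated time, which is the essence of the computational-load argument made above.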
Osteotomies/spinal column resections in adult deformity
Osteotomies may be life-saving procedures for patients with rigid, severe spinal deformity. Several different types of osteotomies have been described by various authors. The ultimate goal of deformity correction by osteotomy is to achieve a balanced spine with a reasonable amount of correction. The choice of osteotomy is made by careful preoperative assessment of the patient and the deformity, and of the amount of correction needed to obtain a balanced spine. The patient's general medical status and the surgeon's level of experience are further factors in determining the ideal osteotomy type. There are different osteotomy options for correcting deformities, including the Smith-Petersen osteotomy (SPO), pedicle subtraction osteotomy (PSO), bone-disc-bone osteotomy (BDBO) and vertebral column resection (VCR), which provide correction of sagittal and multiplanar deformity. SPO refers to a posterior column osteotomy in which the posterior ligaments and facet joints are removed; a mobile anterior disc is required for correction. PSO is performed by removing the posterior elements and both pedicles, decancellating the vertebral body, and closing the osteotomy by hinging on the anterior cortex. BDBO is an osteotomy that aims to resect the disc with its adjacent endplate(s) in deformities with the disc space as the apex or center of the rotational axis (CORA). VCR provides the greatest amount of correction among the osteotomy types, with complete resection of one or more vertebral segments, including the posterior elements and the entire vertebral body with adjacent discs. It is also important to understand sagittal imbalance, and the surgeon must consider global spino-pelvic alignment for satisfactory long-term results. Vertebral osteotomies are technically challenging but effective procedures for the correction of severe adult deformity and should be performed by experienced surgeons to prevent catastrophic complications.
The Elderly as Victims of Crime, Abuse and Neglect
Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime, which has an adverse effect on their quality of life. This Trends and Issues paper puts the matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process.
Uncertainty, Action, and Interaction: In Pursuit of Mixed-Initiative Computing
Recent debate has highlighted differing views on the most promising opportunities for user-interface innovation.1 One group of investigators has expressed optimism about the potential for refining intelligent-interface agents, suggesting that research should focus on developing more powerful representations and inferential machinery for sensing a user's activity and taking automated actions.2-4 Other researchers have voiced concerns that efforts focused on automation might be better expended on tools and metaphors that enhance the abilities of users to directly manipulate and inspect objects and information.5 Rather than advocating one approach over the other, a creative integration of direct manipulation and automated services could provide fundamentally new kinds of user experiences, characterized by deeper, more natural collaborations between users and computers. In particular, there are rich opportunities for interweaving direct control and automation to create mixed-initiative systems and interfaces.