A variant of environmental adaptation method with real parameter encoding and its application in economic load dispatch problem
Environmental Adaptation Method (EAM) and Improved Environmental Adaptation Method (IEAM) were proposed to solve optimization problems with the biological theory of adaptation in mind. Both of these algorithms work with binary encoding, and their performance is comparable with other state-of-the-art algorithms. To further improve their performance, several major changes are incorporated into the proposed algorithm, IEAM-R. The proposed algorithm works with real-valued parameter encoding and, in order to maintain a high convergence rate together with population diversity, it keeps a balance between exploitation and exploration. The choice to explore or exploit a solution depends on the fitness of the individual. The performance of the proposed algorithm is compared with 17 state-of-the-art algorithms on 2-D, 3-D, 5-D, 10-D and 20-D problems using the COCO (COmparing Continuous Optimisers) framework with Black-Box Optimization Benchmarking (BBOB) functions. It outperforms all other algorithms in 3-D and 5-D, and its performance is comparable to the other algorithms in the remaining dimensions. In addition, IEAM-R has been applied to the real-world problem of economic load dispatch, where it achieves the lowest fuel cost compared to other algorithms in the different cases considered.
Opinion Mining on YouTube
This paper defines a systematic approach to Opinion Mining (OM) on YouTube comments by (i) modeling classifiers for predicting the opinion polarity and the type of comment and (ii) proposing robust shallow syntactic structures for improving model adaptability. We rely on the tree kernel technology to automatically extract and learn features with better generalization power than bag-of-words. An extensive empirical evaluation on our manually annotated YouTube comments corpus shows a high classification accuracy and highlights the benefits of structural models in a cross-domain setting.
N-alkylation of N-trimethylsilylimidazole
The reaction of N-trimethylsilylimidazole with alkyl chloroacetates is studied. This process yields a mixture of N-alkylation and quaternization products in a ratio that depends on the reaction conditions. The reaction mechanism is discussed. 1H NMR data show that the high melting point and low solubility of 1-imidazolylacetic acid in organic solvents are evidently caused by the formation of strong intermolecular hydrogen bonds, whereas in water zwitterionic structures with protonation of both nitrogen atoms are formed.
Intelligence Testing for Autonomous Vehicles: A New Approach
In this paper, we study how to test the intelligence of an autonomous vehicle. Comprehensive testing is crucial to both vehicle manufacturers and customers. Existing testing approaches can be categorized into two kinds: scenario-based testing and functionality-based testing. We first discuss the shortcomings of these two kinds of approaches, and then propose a new testing framework that combines their benefits. Based on the new semantic diagram definition for the intelligence of autonomous vehicles, we explain how to design a task for autonomous vehicle testing and how to evaluate test results. Experiments show that this new approach provides a quantitative way to test the intelligence of an autonomous vehicle.
Suicide by Cop.
A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. Both officers fired their weapons, killing the suspect. The rifle was not loaded.
A double blind, randomized, placebo controlled study of the efficacy and safety of 5-Loxin® for treatment of osteoarthritis of the knee
INTRODUCTION 5-Loxin is a novel Boswellia serrata extract enriched with 30% 3-O-acetyl-11-keto-beta-boswellic acid (AKBA), which exhibits potential anti-inflammatory properties by inhibiting the 5-lipoxygenase enzyme. A 90-day, double-blind, randomized, placebo-controlled study was conducted to evaluate the efficacy and safety of 5-Loxin in the treatment of osteoarthritis (OA) of the knee. METHODS Seventy-five OA patients were included in the study. The patients received either 100 mg (n = 25) or 250 mg (n = 25) of 5-Loxin daily or a placebo (n = 25) for 90 days. Each patient was evaluated for pain and physical functions by using the standard tools (visual analog scale, Lequesne's Functional Index, and Western Ontario and McMaster Universities Osteoarthritis Index) at the baseline (day 0), and at days 7, 30, 60 and 90. Additionally, the cartilage degrading enzyme matrix metalloproteinase-3 was also evaluated in synovial fluid from OA patients. Measurement of a battery of biochemical parameters in serum and haematological parameters, and urine analysis were performed to evaluate the safety of 5-Loxin in OA patients. RESULTS Seventy patients completed the study. At the end of the study, both doses of 5-Loxin conferred clinically and statistically significant improvements in pain scores and physical function scores in OA patients. Interestingly, significant improvements in pain score and functional ability were recorded in the treatment group supplemented with 250 mg 5-Loxin as early as 7 days after the start of treatment. Corroborating the improvements in pain scores in treatment groups, we also noted significant reduction in synovial fluid matrix metalloproteinase-3. In comparison with placebo, the safety parameters were almost unchanged in the treatment groups. CONCLUSION 5-Loxin reduces pain and improves physical functioning significantly in OA patients; and it is safe for human consumption. 5-Loxin may exert its beneficial effects by controlling inflammatory responses through reducing proinflammatory modulators, and it may improve joint health by reducing the enzymatic degradation of cartilage in OA patients. TRIAL REGISTRATION Clinical trial registration number: ISRCTN05212803.
Recently targeted kinases and their inhibitors-the path to clinical trials.
Protein kinases have emerged as one of the most important drug target families for the treatment of cancer. To date, 28 inhibitors with reported activity against one or multiple kinases have been approved for clinical use. However, the majority of new clinical trials focus on new subindications using already approved kinase inhibitors or target well-validated kinases with novel inhibitors. In contrast, relatively few clinical trials have been initiated using specific inhibitors that inhibit novel kinase targets, despite significant validation efforts in the public domain. Analysis of the target validation history of first-in-class kinase inhibitors revealed a long delay between initial disease association and development of inhibitors. As part of this analysis, we have investigated which first-in-class inhibitors entered phase I clinical trials over the last five years and considered which research approaches were used to validate them.
Clinical trial results: a clinical trial bazaar!
At The Oncologist, we launched the “Clinical Trial Results” section “motivated by the premise that every trial, regardless of outcome, can have a benefit to the research community.” But we envision a broader audience. One need only work in a tertiary referral center to see how “creative” community oncologists have become—often, trying new and off-label combinations of approved agents in an effort to help patients who have exhausted standard options, nearly all of which fail to benefit patients whose tumors are intractable. These novel combinations have likely also been tried in the context of a clinical trial but, given the negative outcome, the investigator likely felt the time and effort to report it would be wasted. Besides, what journal would publish a negative result in a small group of patients? The answer: The Oncologist. Two excellent submissions by Beverly Moy and her associates reporting combinations of bosutinib with the aromatase inhibitors exemestane and letrozole are exhibit A [1, 2]. Both combinations were deemed to have unfavorable risk-benefit ratios and now join the overwhelming majority of “targeted agent combinations” proven poorly tolerable. Why toxicity has limited the development of such combinations remains something of a mystery. Exhibit B is a combination of temsirolimus and bryostatin-1 [3]. Although bryostatin-1 development has been halted, studies such as this must be published to inform the future development of other protein kinase C inhibitors. Exhibit C is a combination of sorafenib with everolimus selected on the basis of molecular targets [4]; the reader can decide, is this progress or are we running in place? To be sure, at The Oncologist, we also welcome positive results because we aim to “share results, speed discovery, and inform.” Clinical Trial Results submissions have shown us how succinctly the salient features of a submission can be presented, with more in-depth information found online. The abbreviated format means a quick read—a nontrivial attribute, given the proliferation of medical literature. Garcia-Carbonero and colleagues report positive results with ramucirumab, a fully human monoclonal antibody targeting the vascular endothelial growth factor receptor 2 combined with modified FOLFOX-6 as first-line therapy for metastatic colorectal cancer (mCRC) [5]. Although the gains reported in this phase II trial come as no surprise, the study provides a first glimpse at tolerability, an attribute likely to emerge as important as ramucirumab competes in a crowded field. Eventually, we will want to know how ramucirumab compares with Avastin and aflibercept—and at Sloan-Kettering, they will want to know its price! (See the 2012 New York Times op-ed, “In Cancer Care, Cost Matters,” which discusses MSKCC’s decision not to prescribe aflibercept due to its cost [6].) But importantly, this phase II result, likely to be ratified in the ongoing phase III study, reminds us that although we have come far in mCRC, it is not far enough. Progress in mCRC will require targets other than vascular endothelial growth factor and strategies other than angiogenesis. In that spirit, we have the contribution by Stec et al., a prospective phase II trial of mitomycin C and high-dose 5-fluorouracil with folinic acid in heavily pretreated patients with mCRC [7]. Old fashioned? Yes, maybe even antediluvian, but with interesting activity and, as the authors argue, worthy of further investigation.
Finally, on the subject of tolerability, we have a randomized phase II trial from the North Japan Lung Cancer Group on the tolerability of carboplatin plus weekly paclitaxel compared with docetaxel in elderly patients with advanced non-small cell lung cancer [8]. In Japan, as in the U.S., investigators are finding the elderly tolerate chemotherapy much better than previously thought. Is age 60 indeed the new 50?
Bregmanized Nonlocal Regularization for Deconvolution and Sparse Reconstruction
We propose two algorithms based on Bregman iteration and an operator splitting technique for nonlocal total variation (TV) regularization problems. The convergence of the algorithms is analyzed, and applications to deconvolution and sparse reconstruction are presented.
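For readers unfamiliar with the scheme, the generic Bregman iteration for a constrained regularization problem takes the following form; this is the standard template (with a nonlocal TV functional standing in for \(J\)), not necessarily the exact splitting analyzed in the paper.

\[
\min_{u} \ J(u) \quad \text{s.t.} \ Au = f
\qquad\Longrightarrow\qquad
\begin{cases}
u^{k+1} = \arg\min_{u} \ J(u) + \dfrac{\mu}{2}\,\|Au - f^{k}\|_2^2,\\[4pt]
f^{k+1} = f^{k} + f - Au^{k+1},
\end{cases}
\]

with \(f^0 = f\). For nonlocal TV, \(J(u) = \|\nabla_{w} u\|_1\) with a weighted nonlocal gradient \(\nabla_{w}\), and the inner minimization in \(u\) is the step that an operator splitting technique is used to solve.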
Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback
This paper presents a visual-inertial odometry framework which tightly fuses inertial measurements with visual data from one or more cameras, by means of an iterated extended Kalman filter (IEKF). By employing image patches as landmark descriptors, a photometric error is derived, which is directly integrated as an innovation term in the filter update step. Consequently, the data association is an inherent part of the estimation process and no additional feature extraction or matching processes are required. Furthermore, it enables the tracking of non-corner shaped features, such as lines, and thereby increases the set of possible landmarks. The filter state is formulated in a fully robocentric fashion, which reduces errors related to nonlinearities. This also includes partitioning of a landmark’s location estimate into a bearing vector and distance and thereby allows an undelayed initialization of landmarks. Overall, this results in a compact approach which exhibits a high level of robustness with respect to low scene texture and motion blur. Furthermore, there is no time-consuming initialization procedure and pose estimates are available starting at the second image frame. We test the filter on different real datasets and compare it to other state-of-the-art visual-inertial frameworks. The experimental results show that robust localization with high accuracy can be achieved with this filter-based framework.
Reduced-order model of a half-bridge series resonant inverter for power control in domestic induction heating applications
When modeling resonant inverters using the harmonic balance method, the order of the obtained transfer functions is twice the number of state variables, because two components are considered for each state variable. In order to obtain a simpler transfer function model of a half-bridge series resonant inverter, different model order reduction techniques have been considered in this work. Thus, a reduced-order model has been obtained by residualization, providing much simpler analytical expressions than the original model. The proposed model has been validated both by simulation and experimentally. The validity range of the proposed model extends up to a tenth of the switching frequency. Taking into account the great load variability of induction heating applications, the proposed reduced-order model will allow the design of advanced controllers such as gain scheduling.
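As a reference for the reduction step, a generic residualization (singular perturbation approximation) of a partitioned state-space model can be written as follows; this is the textbook form of the technique, not necessarily the exact partitioning used by the authors.

\[
\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} =
\begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\end{bmatrix} +
\begin{bmatrix}B_1\\ B_2\end{bmatrix}u, \qquad
y = \begin{bmatrix}C_1 & C_2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + Du.
\]

Residualization sets \(\dot{x}_2 = 0\) (the fast states are assumed to reach quasi-steady state) and eliminates \(x_2\):

\[
\dot{x}_1 = \left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)x_1 + \left(B_1 - A_{12}A_{22}^{-1}B_2\right)u,\qquad
y = \left(C_1 - C_2A_{22}^{-1}A_{21}\right)x_1 + \left(D - C_2A_{22}^{-1}B_2\right)u,
\]

which preserves the DC gain of the original model while lowering the order.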
Network Intrusion Detection System using attack behavior classification
Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component of computer security systems and can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system, or Network-based Intrusion Detection System (NIDS), which protects a network of hosts and systems. This paper addresses probe (reconnaissance) attacks, which try to collect any possible relevant information about the network. Network probe attacks have two types: host sweep and port scan attacks. Host sweep attacks determine the hosts that exist in the network, while port scan attacks determine the services that are available in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a time-delay neural network (TDNN) structure. The proposed system consists of five modules: a packet capture engine, a preprocessor, pattern recognition, classification, and a monitoring and alert module. We have tested the system in a real environment, where it has shown good capability in detecting attacks. In addition, the system has been tested on the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time.
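The abstract does not give the network's exact topology, so the following is only a minimal sketch of a TDNN-style classifier over a sliding window of per-interval traffic features, expressed as 1-D convolutions over time; the feature count, window length, and class set are illustrative placeholders, not the configuration used in the paper.

```python
# Minimal TDNN-style sketch (hypothetical feature layout and class set).
# A time-delay neural network can be expressed as 1-D convolutions over time.
import torch
import torch.nn as nn

class ProbeTDNN(nn.Module):
    def __init__(self, n_features=12, n_classes=3):  # e.g. normal / host sweep / port scan
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5),  # local temporal context (delays)
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                   # pool over the whole window
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, n_features, time_steps)
        h = self.net(x).squeeze(-1)
        return self.classifier(h)  # class logits per window

# toy usage: 8 windows of 20 time steps with 12 features each
model = ProbeTDNN()
logits = model(torch.randn(8, 12, 20))
print(logits.shape)  # torch.Size([8, 3])
```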
Immunoglobulin G responses against human papillomavirus type 16 virus-like particles in a prospective nonintervention cohort study of women with cervical intraepithelial neoplasia.
BACKGROUND Infection with cancer-linked human papillomavirus (HPV) types such as HPV type 16 (HPV16) is the most important risk factor in the development of cervical cancer. It has been shown that immunoglobulin G (IgG) antibody responses against HPV16 virus-like particles (VLPs) are specifically associated with genital HPV16 infection. PURPOSE The aim of this study was to determine the temporal relationships between the presence of HPV16 VLP-specific IgGs, HPV16 infection patterns, and the course of premalignant cervical disease. METHODS Plasma samples from 133 women who had been diagnosed originally with mild to moderate cervical dyskaryosis and enrolled in a prospective non-intervention cohort study conducted in Amsterdam, The Netherlands, from 1991 through 1996 were analyzed for the presence of HPV16 VLP-specific IgGs by use of an enzyme-linked immunosorbent assay. A detailed analysis was performed on 43 women with different HPV16 infection patterns during a follow-up period of 10-34 months. Progression or regression of cervical intraepithelial neoplasia (CIN) lesions was monitored by cytologic and colposcopic testing at intervals of 3-4 months. HPV typing in cervical smears was performed by use of a polymerase chain reaction-based assay. Statistical analysis of the serologic data was performed by use of the Mann-Whitney U test or 2 x 2 table analyses. RESULTS The presence of HPV16 VLP-specific IgGs in the plasma of the patients was found to be associated with the presence of HPV16 DNA in the cervical smear. Significantly higher proportions of patients with persistent HPV16 infections (i.e., who were polymerase chain reaction positive in three to 11 consecutive tests) than of patients with cleared HPV16 infections were found to be positive for the presence of HPV16 VLP-specific IgGs (18 [69.2%] of 26 versus nine [28.1%] of 32, respectively; P = .003). HPV16 VLP-specific IgGs were consistently detected in all women (n = 11) who were persistently HPV16 DNA positive during follow-up and whose disease ultimately progressed to CIN III (histologically diagnosed severe dysplasia or carcinoma in situ). CONCLUSION HPV16 VLP-specific IgG responses are present in the plasma of a majority of patients with persistent HPV16 infections and histologically confirmed high-grade lesions but only in a smaller subset of patients with cleared HPV16 infections and either normal cervical histology or low-grade CIN lesions. IMPLICATIONS These results suggest that HPV16 VLP-specific antibodies are not responsible for the clearance of virally induced CIN lesions but that they might, in patients with persistent HPV16 infections, be indicative of an increased cervical cancer risk.
Country-wide rainfall maps from cellular communication networks.
Accurate and timely surface precipitation measurements are crucial for water resources management, agriculture, weather prediction, climate research, as well as ground validation of satellite-based precipitation estimates. However, the majority of the land surface of the earth lacks such data, and in many parts of the world the density of surface precipitation gauging networks is even rapidly declining. This development can potentially be counteracted by using received signal level data from the enormous number of microwave links used worldwide in commercial cellular communication networks. Along such links, radio signals propagate from a transmitting antenna at one base station to a receiving antenna at another base station. Rain-induced attenuation and, subsequently, path-averaged rainfall intensity can be retrieved from the signal's attenuation between transmitter and receiver. Here, we show how one such network can be used to retrieve the space-time dynamics of rainfall for an entire country (The Netherlands, ∼35,500 km²), based on an unprecedented number of links (∼2,400) and a rainfall retrieval algorithm that can be applied in real time. This demonstrates the potential of such networks for real-time rainfall monitoring, in particular in those parts of the world where networks of dedicated ground-based rainfall sensors are often virtually absent.
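As background for how a path-averaged rain rate can be retrieved from a link's attenuation, here is a minimal sketch of the standard k-R power-law inversion; the coefficients and signal levels below are illustrative placeholders (they depend on link frequency and polarization), not the values or the wet-antenna and baseline corrections used in the paper.

```python
# Minimal sketch of the k-R power-law retrieval used for microwave links.
# R = (k / a)**(1/b), with k the rain-induced specific attenuation in dB/km.
# The coefficients a, b below are illustrative placeholders.

def rain_rate_from_link(rx_level_dbm, baseline_dbm, length_km, a=0.33, b=1.0):
    """Path-averaged rain rate (mm/h) from a received signal level (dBm)."""
    attenuation_db = baseline_dbm - rx_level_dbm     # increase in path attenuation
    rain_atten_db = max(attenuation_db, 0.0)         # ignore negative (dry) fluctuations
    k = rain_atten_db / length_km                    # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)

# toy example: 3 dB of extra attenuation over a 3 km link
print(round(rain_rate_from_link(-43.0, -40.0, 3.0), 2))  # ~3.03 mm/h
```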
Causality in Quantiles and Dynamic Stock Return-Volume Relations
This paper investigates the causal relations between stock return and volume based on quantile regressions. We first define Granger non-causality in all quantiles and propose testing non-causality by a sup-Wald test. Such a test is consistent against any deviation from non-causality in distribution, as opposed to the existing tests that check only non-causality in a particular moment. This test is readily extended to test non-causality in different quantile ranges. In the empirical studies of three major stock market indices, we find that the causal effects of volume on return are usually heterogeneous across quantiles, whereas those of return on volume are more stable. In particular, the quantile causal effects of volume on return exhibit a spectrum of (symmetric) V-shaped relations, so that the dispersion of the return distribution increases with lagged volume. This is alternative evidence that volume has a positive effect on return volatility. Moreover, the inclusion of the squares of lagged returns in the model may weaken the quantile causal effects of volume on return but does not affect the causality per se. JEL Classification No: C12, G14
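For reference, Granger non-causality of volume with respect to return in all quantiles can be stated in terms of conditional quantiles; this is the generic form of the definition (notation mine), with the sup-Wald statistic testing it uniformly over a quantile range.

\[
Q_{r_t}\!\left(\tau \mid \mathcal{F}^{r}_{t-1}, \mathcal{F}^{v}_{t-1}\right)
= Q_{r_t}\!\left(\tau \mid \mathcal{F}^{r}_{t-1}\right)
\quad \text{for all } \tau \in (0,1),
\]

where \(r_t\) is the return, \(\mathcal{F}^{r}_{t-1}\) and \(\mathcal{F}^{v}_{t-1}\) are the information sets generated by past returns and past volumes, and \(Q_{r_t}(\tau \mid \cdot)\) denotes the \(\tau\)-th conditional quantile of \(r_t\). Rejecting the hypothesis only on some subinterval of \(\tau\) values indicates causality confined to the corresponding part of the return distribution.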
Discrimination of Word Senses with Hypernyms
Languages are inherently ambiguous. Four out of five words in English have more than one meaning. Nowadays there is a growing number of small proprietary thesauri used for knowledge management in different applications. In order to enable the usage of these thesauri for automatic text annotation, we introduce a robust method for discriminating word senses using hypernyms. The method uses collocations to induce word senses and to discriminate the thesaural sense from the other senses by utilizing hypernym entries taken from a thesaurus. The main novelty of this work is the usage of hypernyms already at the sense induction stage. The hypernyms enable us to cast the task as a binary scenario, namely teasing apart thesaural senses from all the rest. The introduced method outperforms the baseline and achieves an accuracy above 80%.
Inter-industry wage differentials and the gender wage gap: Evidence from European countries
This study analyses the interaction between inter-industry wage differentials and the gender wage gap in six European countries using a unique harmonised matched employer-employee data set, the 1995 European Structure of Earnings Survey. Findings show the existence of significant inter-industry wage differentials in all countries for both sexes. While their structure is quite similar for men and women and across countries, their dispersion is significantly larger in countries with decentralised bargaining. These differentials are significantly and positively correlated with industry profitability. The magnitude of this correlation, however, is lower in countries with centralised and coordinated collective bargaining. Further results show that in all countries more than 80% of the gender wage gaps within industries are statistically significant. Yet, industries having the highest and the lowest gender wage gaps vary substantially across European countries. Finally, results indicate that industry effects explain between 0 and 29% of the overall gender wage gap.
A multilayer substrate integrated 3 dB power divider with embedded thick film resistor
A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between the two. The divider is fabricated in a multilayer process using high frequency substrates. For the resistor, carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3 GHz at 20 GHz.
Bayesian Intermittent Demand Forecasting for Large Inventories
We present a scalable and robust Bayesian method for demand forecasting in the context of a large e-commerce platform, paying special attention to intermittent and bursty target statistics. Inference is approximated by the Newton-Raphson algorithm, reduced to linear-time Kalman smoothing, which allows us to operate on several orders of magnitude larger problems than previous related work. In a study on large real-world sales datasets, our method outperforms competing approaches on fast and medium moving items.
Bidirectional LSTM Recurrent Neural Network for Keyphrase Extraction
To achieve state-of-the-art performance, keyphrase extraction systems rely on domain-specific knowledge and sophisticated features. In this paper, we propose a neural network architecture based on a Bidirectional Long Short-Term Memory Recurrent Neural Network that is able to detect the main topics in the input documents without the need to define new hand-crafted features. A preliminary experimental evaluation on the well-known INSPEC dataset confirms the effectiveness of the proposed solution.
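The abstract does not spell out the architecture's dimensions, so the following is only a minimal sketch of a BiLSTM sequence tagger for keyphrase extraction; the vocabulary size, embedding size, and B/I/O tag scheme are assumptions for illustration.

```python
# Minimal BiLSTM keyphrase-tagger sketch (hypothetical sizes and B/I/O labels).
import torch
import torch.nn as nn

class BiLSTMKeyphraseTagger(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=100, hidden=128, n_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)   # B / I / O per token

    def forward(self, token_ids):                  # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))      # (batch, seq_len, 2*hidden)
        return self.out(h)                         # per-token tag logits

# toy usage: a batch of 4 documents, 50 tokens each
model = BiLSTMKeyphraseTagger()
logits = model(torch.randint(1, 20000, (4, 50)))
print(logits.shape)  # torch.Size([4, 50, 3])
```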
"Tang Our Contemporary: Twenty-First Century Adaptations of Peony Pavilion": Supplementary Materials (Video Excerpts and Photo Gallery) [Online]
These video excerpts and photographs complement the publication Ferrari, Rossella (2014) 'Tang Our Contemporary: Twenty-First Century Adaptations of Peony Pavilion.' In: Santangelo, Paolo and Tan, Tian Yuan (eds.), Passion, Romance, and Qing: The World of Emotions and States of Mind in Peony Pavilion (3 vols.). Leiden, Boston: Brill, pp. 1482-1522. (Emotions and States of Mind in East Asia).
Code Bits: An Inexpensive Tangible Computational Thinking Toolkit For K-12 Curriculum
Extensive research in the domain of computational thinking has identified it as one of the critical skills that need to be part of the regular K-12 curriculum. However, most tangible computational thinking toolkits being developed are too bulky and expensive to deploy in classroom environments. In this paper we present Code Bits, a paper-based tangible computational thinking toolkit that is inexpensive, portable and scalable. Students create programs using the tangible paper bits on any flat surface and use the Code Bits mobile application to process the code; the application runs on any Android device with a camera and uses augmented reality based games to improve the computational thinking skills of the students. The toolkit has been designed in a way that promotes collaboration among students.
MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects
Deep neural networks (DNN) have demonstrated highly promising results across computer vision and speech recognition, and are becoming foundational for ubiquitous AI. The computational complexity of these algorithms and a need for high energy-efficiency has led to a surge in research on hardware accelerators. To reduce the latency and energy costs of accessing DRAM, most DNN accelerators are spatial in nature, with hundreds of processing elements (PE) operating in parallel and communicating with each other directly. DNNs are evolving at a rapid rate, and it is common to have convolution, recurrent, pooling, and fully-connected layers with varying input and filter sizes in the most recent topologies. They may be dense or sparse. They can also be partitioned in myriad ways (within and across layers) to exploit data reuse (weights and intermediate outputs). All of the above can lead to different dataflow patterns within the accelerator substrate. Unfortunately, most DNN accelerators support only fixed dataflow patterns internally as they perform a careful co-design of the PEs and the network-on-chip (NoC). In fact, the majority of them are only optimized for traffic within a convolutional layer. This makes it challenging to map arbitrary dataflows on the fabric efficiently, and can lead to underutilization of the available compute resources. DNN accelerators need to be programmable to enable mass deployment. For them to be programmable, they need to be configurable internally to support the various dataflow patterns that could be mapped over them. To address this need, we present MAERI, which is a DNN accelerator built with a set of modular and configurable building blocks that can easily support myriad DNN partitions and mappings by appropriately configuring tiny switches. MAERI provides 8-459% better utilization across multiple dataflow mappings over baselines with rigid NoC fabrics.
Compact SIW 3×3 butler matrix for 5G mobile devices
This paper presents a compact SIW (substrate integrated waveguide) 3×3 Butler matrix (BM) for 5G mobile applications. The detailed structuring procedures and parameter determinations of each involved component are provided. To validate the 3×3 BM, a slot array is designed. The cascading simulations and prototype measurements are also carried out. The overall performance and dimensions show that it can be used for 5G mobile devices. The measured S-parameters agree well with the simulated ones. The measured gains are in the range of 8.1 dBi ∼ 11.1 dBi, 7.1 dBi ∼ 9.8 dBi and 8.9 dBi ∼ 11 dBi for port 1∼3 excitations.
The Third Rewrite Engines Competition
This paper presents the main results and conclusions of the Third Rewrite Engines Competition (REC III). This edition of the competition took place as part of the 8th Workshop on Rewriting Logic and its Applications (WRLA 2010), and the systems ASF+SDF, Maude, Stratego/XT, Tom, and TXL participated in it.
Stress, coping and satisfaction in nursing students.
AIM This article is a report of a study conducted to explore the relationship between sources of stress and psychological well-being and to consider how different sources of stress and coping resources might function as moderators and mediators on well-being. BACKGROUND In most research exploring sources of stress and coping in nursing students, stress has been construed as psychological distress. Sources of stress likely to enhance well-being and, by implication, learning have not been considered. METHOD A questionnaire was administered to 171 final year nursing students in 2008. Questions were asked to measure sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effects on well-being, operationalized using the General Health Questionnaire and measures of course and career satisfaction. FINDINGS Sources of stress likely to lead to distress were more often predictors of well-being than were sources of stress likely to lead to positive, eustress states, with the exception of clinical placement demands. Self-efficacy, dispositional control and support were important predictors, and avoidance coping was the strongest predictor of adverse well-being. Approach coping was not a predictor of well-being. The mere presence of support appeared beneficial as well as the utility of that support to help a student cope. CONCLUSION Initiatives to promote support and self-efficacy are likely to have immediate benefits for student well-being. In course reviews, nurse educators need to consider how students' experiences might contribute not just to potential distress, but to eustress as well.
A group intervention to reduce intimate partner violence among female drug users. Results from a randomized controlled pilot trial in a community substance-abuse center.
BACKGROUND A greater proportion of drug-dependent women are victims of intimate partner violence (IPV) than women in the general population; however, few interventions have been developed to reduce IPV among drug-dependent women. METHODS An adapted version of the Women's Wellness Treatment, addressing IPV and depressive symptoms, was piloted in a randomized controlled trial conducted in an outpatient treatment program in Barcelona, Spain, among 14 women receiving outpatient treatment for a drug use disorder who screened positive for IPV in the previous month. Participants were randomly assigned to receive the 10-session cognitive behavioral therapy (IPaViT-CBT) group intervention or treatment as usual. The frequency of IPV, depressive symptoms, substance use, quality of life and health status were assessed at baseline and 1, 3 and 12 months post intervention. Intention-to-treat analysis was performed. RESULTS Moderate effects of the intervention were found in reducing psychological maltreatment, increasing assertiveness in response to IPV and reducing aggressiveness in the partner relationship, and in reducing the frequency of drinking, up to 3 months post intervention. The intervention did not significantly reduce the likelihood of any IPV, depressive symptoms, quality of life or self-reported health status up to 12 months post intervention. CONCLUSION This pilot trial provides some initial support for the 10-session CBT group intervention among IPV victims receiving treatment for drug use. Study findings indicate that it is feasible to deliver the intervention in a community substance abuse center. An adequately powered trial is required to replicate these results.
Towards a theory of supply chain management : the constructs and measurements
Rising international cooperation, vertical disintegration, along with a focus on core activities have led to the notion that firms are links in a networked supply chain. This novel perspective has created the challenge of designing and managing a network of interdependent relationships developed and fostered through strategic collaboration. Although research interests in supply chain management (SCM) are growing, no research has been directed towards a systematic development of SCM instruments. This study identifies and consolidates various supply chain initiatives and factors to develop key SCM constructs conducive to advancing the field. To this end, we analyzed over 400 articles and synthesized the large, fragmented body of work dispersed across many disciplines. The result of this study, through successive stages of measurement analysis and refinement, is a set of reliable, valid, and unidimensional measurements that can be subsequently used in different contexts to refine or extend conceptualization and measurements or to test various theoretical models, paving the way for theory building in SCM. © 2004 Elsevier B.V. All rights reserved.
Fact Checking: Task definition and dataset construction
In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.
Plasma membrane aquaporins play a significant role during recovery from water deficit.
The role of plasma membrane aquaporins (PIPs) in water relations of Arabidopsis was studied by examining plants with reduced expression of PIP1 and PIP2 aquaporins, produced by crossing two different antisense lines. Compared with controls, the double antisense (dAS) plants had reduced amounts of PIP1 and PIP2 aquaporins, and the osmotic hydraulic conductivity of isolated root and leaf protoplasts was reduced 5- to 30-fold. The dAS plants had a 3-fold decrease in the root hydraulic conductivity expressed on a root dry mass basis, but a compensating 2.5-fold increase in the root to leaf dry mass ratio. The leaf hydraulic conductance expressed on a leaf area basis was similar for the dAS compared with the control plants. As a result, the hydraulic conductance of the whole plant was unchanged. Under sufficient and under water-deficient conditions, stomatal conductance, transpiration rate, plant hydraulic conductance, leaf water potential, osmotic pressure, and turgor pressure were similar for the dAS compared with the control plants. However, after 4 d of rewatering following 8 d of drying, the control plants recovered their hydraulic conductance and their transpiration rates faster than the dAS plants. Moreover, after rewatering, the leaf water potential was significantly higher for the control than for the dAS plants. From these results, we conclude that the PIPs play an important role in the recovery of Arabidopsis from the water-deficient condition.
Natural Language Processing with Python Steven Bird, Ewan Klein, and Edward Loper (University of Melbourne, University of Edinburgh, and BBN Technologies) Sebastopol, CA: O'Reilly Media, 2009, xx+482 pp; paperbound, ISBN 978-0-596-51649-9, $44.99; on-line free of charge at nltk.org/book
This book comes with “batteries included” (a reference to the phrase often used to explain the popularity of the Python programming language). It is the companion book to an impressive open-source software library called the Natural Language Toolkit (NLTK), written in Python. NLTK combines language processing tools (tokenizers, stemmers, taggers, syntactic parsers, semantic analyzers) and standard data sets (corpora and tools to access the corpora in an efficient and uniform manner). Although the book builds on the NLTK library, it covers only a relatively small part of what can be done with it. The combination of the book with NLTK, a growing system of carefully designed, maintained, and documented code libraries, is an extraordinary resource that will dramatically influence the way computational linguistics is taught. The book attempts to cater to a large audience: It is a textbook on computational linguistics for science and engineering students; it also serves as practical documentation for the NLTK library, and it finally attempts to provide an introduction to programming and algorithm design for humanities students. I have used the book and its earlier on-line versions to teach advanced undergraduate and graduate students in computer science in the past eight years. The book adopts the following approach:
A working, non-trivial, topically indifferent NLG System for 17 languages
We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.
Image processing and analysis - variational, PDE, wavelet, and stochastic methods
Neurotransmission and the contraction and relaxation of penile erectile tissues
The balance between contractant and relaxant factors controls the smooth muscle of the corpus cavernosum and determines the functional state of the penis (detumescence and flaccidity versus tumescence and erection). Noradrenaline contracts both the corpus cavernosum and penile vessels, mainly via stimulation of α1-adrenoceptors. Recent investigations have demonstrated the presence of several subtypes of α1-adrenoceptors (α1A, α1B, and α1D) in the human corpus cavernosum and also that the noradrenaline-induced contraction in this tissue is probably mediated by two or, possibly, three receptor subtypes. Even if much of the available in vitro information suggests that endothelins (ETs) may be of importance for mechanisms of detumescence and flaccidity, the role of the peptides in the control of penile smooth-muscle tone in vivo is unclear, as is the question as to whether they can contribute to erectile dysfunction. For further evaluation of the clinical importance of ETs in penile physiology and pathophysiology, clinical studies on ET-receptor antagonists would be of interest. Neurogenic nitric oxide (NO) has been considered the most important factor for relaxation of penile vessels and the corpus cavernosum, but recent studies in mice lacking neurogenic NO synthase (NOS) have shown these animals to have normal erections. This focuses interest on the role of endothelial NOS and on other agents released from nerves or endothelium. For the time being, the most effective means of inducing penile erection in men involves the intracavernous administration of prostaglandin E1 (PGE1). PGE1 may act partly by increasing intracellular concentrations of cyclic adenosine monophosphate (cAMP). Recent results obtained with the adenylate cyclase stimulator forskolin suggest that penile smooth-muscle relaxation leading to penile erection can be achieved through the cAMP pathway. Thus, transmitters and agents acting through this second-messenger system may significantly contribute to relaxation of penile smooth muscle and to erection.
An automatic irrigation system using ZigBee in wireless sensor network
Wireless sensing technology is widely used in the current scientific world. As the technology grows and changes rapidly, Wireless Sensor Networks (WSN) help to upgrade it. In the research field of wireless sensor networks, power efficiency over time is a major issue. This problem can be overcome by using ZigBee technology. The main idea is to understand how data travels through a wireless transmission medium using a wireless sensor network and a monitoring system. This paper designs an irrigation system that is automated using controllable parameters such as temperature, soil moisture and air humidity, because these are the important factors to be controlled in precision agriculture (PA).
White matter lesions as a feature of cognitive impairment, low vitality and other symptoms of geriatric syndrome in the elderly.
AIM White matter lesions (WML) are common findings on magnetic resonance imaging (MRI) in elderly persons. In this study, we analyzed the relation of WML with global cognitive function, depression, vitality/volition, and 19 symptoms of geriatric syndrome in Japanese elderly patients who attended three university geriatric outpatient clinics. METHODS Two hundred and eighty-six subjects (103 men and 183 women; mean +/- standard deviation age, 74.5 +/- 7.8 years) were included in this study. MRI scans were performed for the diagnosis of WML, and the severity of periventricular and deep white matter hyperintensities (PVH and DWMH) was rated semiquantitatively. Concurrently, all subjects underwent tests of cognitive function, depressive state and vitality, and were examined for 19 symptoms of geriatric syndrome. RESULTS The study subjects showed cognitive decline, depression and low vitality, all to a mild extent. Univariate linear regression analysis showed a negative correlation between the severity of WML and cognitive function or vitality. Multiple logistic analysis revealed that the severity of WML was a significant determinant of cognitive impairment and low vitality, after adjustment for confounding factors such as age, sex and concomitant diseases. PVH and/or DWMH score was significantly greater in subjects who exhibited 13 out of 19 symptoms of geriatric syndrome. Logistic regression analysis indicated that WML were associated with psychological disorders, gait disturbance, urinary problems and parkinsonism. CONCLUSION WML were associated with various symptoms of functional decline in older persons. Evaluating WML in relation to functional decline would be important for preventing disability in elderly people.
Convolutional Neural Network Architecture for Geometric Matching
We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.
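As an illustration of the matching component described above, here is a minimal sketch of a dense correlation layer between two CNN feature maps; the normalization and tensor shapes follow common practice for this kind of architecture and are not taken verbatim from the paper.

```python
# Minimal sketch of a dense correlation (matching) layer between two CNN
# feature maps, as commonly used for geometric matching. Shapes are illustrative.
import torch
import torch.nn.functional as F

def correlation_layer(feat_a, feat_b):
    """feat_a, feat_b: (batch, channels, h, w) -> correlation map (batch, h*w, h, w)."""
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a, dim=1).view(b, c, h * w)   # (b, c, h*w)
    fb = F.normalize(feat_b, dim=1).view(b, c, h * w)   # (b, c, h*w)
    corr = torch.bmm(fb.transpose(1, 2), fa)            # (b, h*w, h*w) pairwise similarities
    return corr.view(b, h * w, h, w)                    # per-position similarity scores

# toy usage: the resulting map could feed a small regressor predicting, e.g.,
# the six parameters of an affine transformation.
corr = correlation_layer(torch.randn(2, 64, 15, 15), torch.randn(2, 64, 15, 15))
print(corr.shape)  # torch.Size([2, 225, 15, 15])
```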
DREAM: Diabetic Retinopathy Analysis Using Machine Learning
This paper presents a computer-aided screening system (DREAM) that analyzes fundus images with varying illumination and fields of view, and generates a severity grade for diabetic retinopathy (DR) using machine learning. Classifiers such as the Gaussian Mixture model (GMM), k-nearest neighbor (kNN), support vector machine (SVM), and AdaBoost are analyzed for classifying retinopathy lesions from nonlesions. GMM and kNN classifiers are found to be the best classifiers for bright and red lesion classification, respectively. A main contribution of this paper is the reduction in the number of features used for lesion classification by feature ranking using AdaBoost, where the top 30 features are selected out of 78. A novel two-step hierarchical classification approach is proposed where the nonlesions or false positives are rejected in the first step. In the second step, the bright lesions are classified as hard exudates and cotton wool spots, and the red lesions are classified as hemorrhages and micro-aneurysms. This lesion classification problem deals with unbalanced datasets and SVM or combination classifiers derived from SVM using the Dempster-Shafer theory are found to incur more classification error than the GMM and kNN classifiers due to the data imbalance. The DR severity grading system is tested on 1200 images from the publicly available MESSIDOR dataset. The DREAM system achieves 100% sensitivity, 53.16% specificity, and 0.904 AUC, compared to the best reported 96% sensitivity, 51% specificity, and 0.875 AUC, for classifying images as with or without DR. The feature reduction further reduces the average computation time for DR severity per image from 59.54 to 3.46 s.
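The exact features and training protocol are described in the paper; below is only a minimal sketch of the two-step hierarchy (reject non-lesions first, then separate bright and red lesion subtypes with the classifiers named above), with random arrays standing in for the real lesion descriptors.

```python
# Minimal sketch of the two-step hierarchy (step 1: lesion vs. non-lesion gate;
# step 2: bright lesions via class-conditional GMMs, red lesions via kNN).
# Random arrays below are placeholders for the real lesion descriptors.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_bright = rng.normal(size=(200, 30)); y_bright = rng.integers(0, 2, 200)  # 0: hard exudate, 1: cotton wool
X_red = rng.normal(size=(200, 30));    y_red = rng.integers(0, 2, 200)     # 0: hemorrhage, 1: microaneurysm

# Step 2a: one GMM per bright-lesion class; predict by higher log-likelihood.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X_bright[y_bright == c]) for c in (0, 1)]
def classify_bright(x):
    return int(np.argmax([g.score_samples(x)[0] for g in gmms]))

# Step 2b: kNN for red lesions.
knn_red = KNeighborsClassifier(n_neighbors=5).fit(X_red, y_red)

candidate = rng.normal(size=(1, 30))
# Step 1 (placeholder): a first-stage classifier would reject non-lesions here.
print("bright subtype:", classify_bright(candidate))
print("red subtype:", int(knn_red.predict(candidate)[0]))
```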
Method engineering: engineering of information systems development methods and tools
This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e. the configuration of a project approach that is tuned to the project at hand. A language and support tool for the engineering of situational methods are discussed.
Dual-band circularly polarized microstrip antenna
A new dual-band circularly polarized microstrip antenna is presented and verified experimentally. In this paper, a new dual-band antenna design is obtained by using a dual radiator. The frequency ratio of this design is flexible and is greater than 1.4.
What Yelp Fake Review Filter Might Be Doing?
Online reviews have become a valuable resource for decision making. However, their usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is a trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors.
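To make the supervised setup concrete, here is a minimal sketch of training a linear classifier on word n-grams versus a few simple behavioral features; the reviews, labels, and behavioral feature names are illustrative toys, not the dataset or the exact feature set used in the study.

```python
# Minimal sketch: supervised fake-review detection with (a) word n-grams and
# (b) simple behavioral features. Labels and feature names are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = ["great food, will come back", "amazing place best ever!!!", "ok service, decent prices"]
labels = [0, 1, 0]  # 1 = filtered (assumed fake), 0 = unfiltered

# (a) linguistic model: unigrams + bigrams
linguistic = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
linguistic.fit(reviews, labels)

# (b) behavioral model: e.g. reviews per day, rating deviation, share of positive reviews
behavior = np.array([[1, 0.2, 0.6],
                     [9, 1.8, 1.0],
                     [2, 0.4, 0.7]])
behavioral = LinearSVC().fit(behavior, labels)

print(linguistic.predict(["best place ever!!!"]))
print(behavioral.predict([[8, 1.5, 0.9]]))
```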
MIMO-SAR: Opportunities and Pitfalls
This paper reviews advanced radar architectures that employ multiple transmit and multiple receive antennas to improve the performance of future synthetic aperture radar (SAR) systems. These advanced architectures have been dubbed multiple-input multiple-output SAR (MIMO-SAR) in analogy to MIMO communication systems. Considerable confusion arose, however, with regard to the selection of suitable waveforms for the simultaneous transmission via multiple channels. It is shown that the mere use of orthogonal waveforms is insufficient for the desired performance improvement in view of most SAR applications. As a solution to this fundamental MIMO-SAR challenge, a new class of short-term shift-orthogonal waveforms is introduced. The short-term shift orthogonality avoids mutual interferences from the radar echoes of closely spaced scatterers, while interferences from more distant scatterers are suppressed by digital beamforming on receive in elevation. Further insights can be gained by considering the data acquisition of a side-looking imaging radar in a 3-D information cube. It becomes evident that the suggested waveforms fill different subspaces that can be individually accessed by a multichannel receiver. For completeness, the new class of short-term shift-orthogonal waveforms is also compared to a recently proposed pair of orthogonal frequency-division multiplexing waveforms. It is shown that both sets of waveforms require essentially the same principle of range time to elevation angle conversion via a multichannel receiver in order to be applicable for MIMO-SAR imaging without interference.
Icariin protects against intestinal ischemia-reperfusion injury.
BACKGROUND This study investigated the role of the Sirtuin 1 (SIRT1)/forkhead box O3 (FOXO3) pathway, and a possible protective function for Icariin (ICA), in intestinal ischemia-reperfusion (I/R) injury and hypoxia-reoxygenation (H/R) injury. MATERIALS AND METHODS Male Sprague-Dawley rats were pretreated with different doses of ICA (30 and 60 mg/kg) or olive oil as control 1 h before intestinal I/R. Caco-2 cells were pretreated with different concentrations of ICA (25, 50, and 100 μg/mL) and then subjected to H/R-induced injury. RESULTS The in vivo results demonstrated that ICA pretreatment significantly improved I/R-induced tissue damage and decreased serum tumor necrosis factor α and interleukin-6 levels. Changes in manganese superoxide dismutase, Bcl-2, and Bim were also reversed by ICA, and apoptosis was reduced. Importantly, the protective effects of ICA were positively associated with SIRT1 activation. Increased SIRT1 expression, as well as decreased acetylated FOXO3 expression, was observed in Caco-2 cells pretreated with ICA. Additionally, the protective effects of ICA were abrogated in the presence of the SIRT1 inhibitor nicotinamide. This suggests that ICA exerts a protective effect upon H/R injury through activation of the SIRT1/FOXO3 signaling pathway. Accordingly, the SIRT1 activator resveratrol achieved a similar protective effect as ICA on H/R injury, whereas cellular damage resulting from H/R was exacerbated by SIRT1 knockdown and nicotinamide. CONCLUSIONS SIRT1, activated by ICA, protects intestinal epithelial cells from I/R injury by inducing FOXO3 deacetylation both in vivo and in vitro. These findings suggest that the SIRT1/FOXO3 pathway can be a target for therapeutic approaches intended to minimize injury resulting from intestinal dysfunction.
Impact Analysis of Start-Up Lost Time at Major Intersections on Sathorn Road Using a Synchro Optimization and a Microscopic SUMO Traffic Simulation
Start-up lost time is the time lost in the starting of the green time interval when a traffic signal phase changes from red to green and previously stopped vehicles in the curb line queue need time to accelerate to the desired speed. Actual traffic data analytics from newly installed loop coil detectors at all approaching upstream road segments of major intersections on Sathorn Road in Bangkok, Thailand, are used to confirm that the vehicle flow is obstructed considerably by the large start-up lost time. In this paper, the effect of a large start-up lost time is evaluated in terms of the travel time of passenger cars in a calibrated microscopic traffic simulation. The evaluation is based on the simulation of the urban mobility platform, while the traffic signal lights at major intersections are based on the standard Synchro optimization software. By the simulation, the average travel time per vehicle increases from 4% to 37% when the start-up lost time is varied from a baseline value of 1 s to the maximum value of 15 s, which potentially occurred in the actual traffic data collection in this paper. In addition, the optimal traffic signal green phase lengths increase from 2% to 42%, depending on the volume-to-capacity ratio. A similar increasing trend of optimal green time and average travel time per vehicle is observed using theoretical analysis based on M/M/1 and D/D/1 queues to support the results from Synchro. Findings of this paper are beneficial for understanding the impact of start-up lost time at signalized intersections.
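For intuition, the standard capacity relation at a signalized approach shows why a larger start-up lost time forces longer green phases; this is the textbook form of the relation, not a formula taken from the paper.

\[
c = s \cdot \frac{g - t_L}{C},
\]

where \(c\) is the approach capacity (veh/h), \(s\) the saturation flow rate (veh/h of green), \(g\) the displayed green time, \(t_L\) the start-up lost time, and \(C\) the cycle length. For a fixed demand \(v\), holding the volume-to-capacity ratio \(X = v/c\) constant as \(t_L\) grows from 1 s to 15 s requires increasing \(g\) by the same 14 s (or lengthening the cycle), which is why both the optimal green times and the average travel time per vehicle increase with the start-up lost time.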
A notion of rank in set theory without choice
Abstract. Starting from the definition of `amorphous set' in set theory without the axiom of choice, we propose a notion of rank (which will only make sense for, at most, the class of Dedekind finite sets), which is intended to be an analogue in this situation of Morley rank in model theory.
Optimal sizing of solar photovoltaic – Wind hybrid system
The increase in energy demand has made renewable resources more attractive. Additionally, the use of renewable energy sources reduces the combustion of fossil fuels and the consequent CO2 emissions, a principal cause of global warming. The concept of the photovoltaic-wind hybrid system is well known, and currently thousands of PV-wind based power systems are deployed worldwide to provide power to small, remote, grid-independent applications. This paper shows how to design a hybrid power system that targets remote users. It emphasizes a renewable hybrid power system that achieves a reliable autonomous system through optimization of component sizes and improvement of cost. The system can provide electricity to a remotely located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used in this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extraction of information about the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks make many assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model’s training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and consequently pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
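To make the relaxed threat model concrete, here is a minimal sketch, not the authors' implementation, of the simplest form of such an attack: a confidence-threshold membership test that needs no shadow models and no knowledge of the target architecture. The dataset, target model, and threshold are placeholders.

```python
# Sketch of a shadow-model-free membership inference test based on prediction
# confidence: training members tend to receive higher maximum posteriors than
# unseen samples. Dataset, target model, and threshold choice are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model trained only on the "member" half.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def max_posterior(model, samples):
    return model.predict_proba(samples).max(axis=1)   # attacker only sees output posteriors

threshold = 0.9                                       # would be tuned on attacker-side data
guess_in = max_posterior(target, X_in) >= threshold   # ground truth: members
guess_out = max_posterior(target, X_out) >= threshold # ground truth: non-members

# Membership advantage = true positive rate minus false positive rate.
advantage = guess_in.mean() - guess_out.mean()
print(f"attack advantage over random guessing: {advantage:.3f}")
```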
(Invited) Nanoimprinted perovskite for optoelectronics
Organic-inorganic hybrid perovskites have recently emerged as promising materials for optoelectronics. Here we show successful patterning of hybrid perovskite into nanostructures with cost-effective nanoimprint technology. Photodetectors are fabricated on nanoimprinted perovskite with improved responsivity. A nanoimprinted perovskite metasurface is formed with significantly enhanced photoluminescence. Lasing is expected on nanoimprinted perovskite with optimized cavity design and processing.
Co-Design of a CMOS Rectifier and Small Loop Antenna for Highly Sensitive RF Energy Harvesters
In this paper, a method for the co-design and integration of a CMOS rectifier and a small loop antenna is described. In order to improve the sensitivity, the antenna-rectifier interface is analyzed, as it plays a crucial role in the co-design optimization. Subsequently, a 5-stage cross-connected differential rectifier with a 7-bit binary-weighted capacitor bank is designed and fabricated in standard 90 nm CMOS technology. The rectifier is brought to resonance with a high-Q loop antenna by means of a control loop that compensates for any variation at the antenna-rectifier interface and passively boosts the antenna voltage to enhance the sensitivity. A complementary MOS diode is proposed to improve the harvester's ability to store and hold energy over a long period of time during which there is insufficient power for rectification. The chip is ESD protected and integrated on a compact loop antenna. Measurements in an anechoic chamber at 868 MHz demonstrate a -27 dBm sensitivity for 1 V output across a capacitive load and a 27 meter range for a 1.78 W RF source in an office corridor. The end-to-end power conversion efficiency equals 40% at -17 dBm.
Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed that the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
Will the Pedestrian Cross? Probabilistic Path Prediction Based on Learned Motion Features
Future vehicle systems for active pedestrian safety will not only require a high recognition performance, but also an accurate analysis of the developing traffic situation. In this paper, we present a system for pedestrian action classification (walking vs. stopping) and path prediction at short, sub-second time intervals. Apart from the use of positional cues, obtained by a pedestrian detector, we extract motion features from dense optical flow. These augmented features are used in a probabilistic trajectory matching and filtering framework. The vehicle-based system was tested in various traffic scenes. We compare its performance to that of a state-of-the-art IMM Kalman filter (IMM-KF), and for the action classification task, to that of human observers, as well. Results show that human performance is best, followed by that of the proposed system, which outperforms the IMM-KF and the simpler system variants.
Metformin and other antidiabetic agents in renal failure patients.
This review mainly focuses on metformin, and considers oral antidiabetic therapy in kidney transplant patients and the potential benefits and risks of antidiabetic agents other than metformin in patients with chronic kidney disease (CKD). In view of the debate concerning lactic acidosis associated with metformin, this review tries to solve a paradox: metformin should be prescribed more widely because of its beneficial effects, but also less widely because of the increasing prevalence of contraindications to metformin, such as reduced renal function. Lactic acidosis appears either as part of a number of clinical syndromes (i.e., unrelated to metformin), induced by metformin (involving an analysis of the drug's pharmacokinetics and mechanisms of action), or associated with metformin (a more complex situation, as lactic acidosis in a metformin-treated patient is not necessarily accompanied by metformin accumulation, nor does metformin accumulation necessarily lead to lactic acidosis). A critical analysis of guidelines and literature data on metformin therapy in patients with CKD is presented. Following the present focus on metformin, new paradoxical issues can be drawn up, in particular: (i) metformin is rarely the sole cause of lactic acidosis; (ii) lactic acidosis in patients receiving metformin therapy is erroneously still considered a single medical entity, as several different scenarios can be defined, with contrasting prognoses. The prognosis for severe lactic acidosis seems even better in metformin-treated patients than in non-metformin users.
Sleep apnoea related hypoxia is associated with cognitive disturbances in patients with tetraplegia
Sleep disordered breathing is common in patients with tetraplegia. Nocturnal arterial hypoxemia and sleep fragmentation due to sleep apnoea may be associated with cognitive dysfunction. We therefore studied the influence of sleep disordered breathing on neuropsychological function in 37 representative tetraplegic patients (mean age 34±9.7 years). Thirty percent (11 of 37 patients) had clinically significant sleep disordered breathing, defined as an apnoea plus hypopnoea index (AHI) greater than 15 per hour of sleep. Most apnoeas were obstructive in type. Seven patients (19%) desaturated to <80% during the night. Neuropsychological variables were significantly correlated with measures of sleep hypoxia, but not with the AHI or the frequency of sleep arousals. The neuropsychological functions most affected by nocturnal desaturation were: verbal attention and concentration, immediate and short-term memory, cognitive flexibility, internal scanning and working memory. There appeared to be a weak association between the presence of severe sleep hypoxia and visual perception, attention and concentration, but no association was found between sleep variables and depression scores. We concluded that sleep disordered breathing is common in patients with tetraplegia and may be accompanied by significant oxygen desaturation. The latter impairs daytime cognitive function in these patients, particularly attention, concentration, memory and learning skills. Cognitive disturbances resulting from sleep apnoea might adversely affect rehabilitation in patients with tetraplegia.
Sentiment Lexicon Expansion Based on Neural PU Learning, Double Dictionary Lookup, and Polarity Association
Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.
A 750 Mb/s, 12 pJ/b, 6-to-10 GHz CMOS IR-UWB Transmitter With Embedded On-Chip Antenna
This paper presents a novel impulse radio based ultra-wideband transmitter. The transmitter is designed in a 0.18 μm CMOS process, achieving extremely low complexity and low power. It exploits the 6-to-10 GHz band to generate short-duration bi-phase modulated UWB pulses with a center frequency of 8 GHz. No additional RF filtering circuits are required since the pulse generator circuit itself performs the pulse shaping. The generated pulses comply with the FCC spectral emission mask. Measured results show that the transmitter consumes 12 pJ/b to achieve a maximum pulse repetition rate of 750 Mb/s. An optional embedded on-chip antenna and a power amplifier operating in the 6-10 GHz band are also designed and investigated as a future low-cost solution for very short distance IR-UWB communications.
Insights into Alzheimer disease pathogenesis from studies in transgenic animal models
Alzheimer disease is the most common cause of dementia among the elderly, accounting for ~60-70% of all cases of dementia. The neuropathological hallmarks of Alzheimer disease are senile plaques (mainly containing β-amyloid peptide derived from amyloid precursor protein) and neurofibrillary tangles (containing hyperphosphorylated Tau protein), along with neuronal loss. At present there is no effective treatment for Alzheimer disease. Given the prevalence and poor prognosis of the disease, the development of animal models has been a research priority to understand pathogenic mechanisms and to test therapeutic strategies. Most cases of Alzheimer disease occur sporadically in people over 65 years old, and are not genetically inherited. Roughly 5% of patients with Alzheimer disease have familial Alzheimer disease--that is, related to a genetic predisposition, including mutations in the amyloid precursor protein, presenilin 1, and presenilin 2 genes. The discovery of genes for familial Alzheimer disease has allowed transgenic models to be generated through the overexpression of the amyloid precursor protein and/or presenilins harboring one or several mutations found in familial Alzheimer disease. Although none of these models fully replicates the human disease, they have provided valuable insights into disease mechanisms as well as opportunities to test therapeutic approaches. This review describes the main transgenic mouse models of Alzheimer disease which have been adopted in Alzheimer disease research, and discusses the insights into Alzheimer disease pathogenesis from studies in such models. In summary, Alzheimer disease mouse models have been key to understanding the roles of soluble β-amyloid oligomers in disease pathogenesis, as well as the relationship between β-amyloid and Tau pathologies.
Endoscopic clipping in video-assisted thoracoscopic sympathetic blockade for axillary hyperhidrosis
Endoscopic thoracic sympathectomy or sympathicotomy is the standard method for the treatment of axillary hyperhidrosis. But postoperative compensatory sweating may be troublesome in some patients. Therefore, we use endoclips to perform the T3 and T4 sympathetic blockade instead of permanently interrupting the transmission of nerve impulses from the sympathetic trunk. Between May 1997 and June 1998, a total of 26 patients with axillary hyperhidrosis underwent video-assisted thoracoscopic sympathetic blocking of the T3 and T4 ganglia at our hospital. There were 10 men and 16 women with a mean age of 31.7 years (range, 16–47). All patients were placed in a semi-sitting position under single-lumen intubated anesthesia. We performed the sympathetic blockade by clipping the T3 and T4 ganglia at the level of the third, fourth, and fifth rib beds using an 8-mm 0° thoracoscope. Bilateral T3 and T4 sympathetic blockade was achieved in all 26 patients. The operation was usually completed within 30 min (range, 20–42). Most patients were discharged within 4 h after the operation. Surgical complications were minimal, with only one case of segmental atelectasis (3.8%). There were no deaths. The mean postoperative follow-up period was 31.3 months (range, 24–37). Twenty-three patients (88.5%) developed compensatory sweating of the trunk and lower limbs. Twenty-four patients (92.3%) were satisfied with the results of the operation. Improvement of axillary hyperhidrosis was obtained in all patients. One patient underwent a reverse operation to remove the endoclips due to intolerable compensatory sweating; improvement was seen 25 days after removal of the clips. Video-assisted thoracoscopic T3 and T4 sympathetic blockade by clipping is a safe and effective method for the treatment of patients with axillary hyperhidrosis. Patients who experience excessive compensatory sweating may require a reverse operation for endoclip removal.
Style Augmentation: Data Augmentation via Style Randomization
We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of convolutional neural networks (CNNs) on both classification and regression tasks. During training, style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, sampling input style embeddings from a multivariate normal distribution instead of inferring them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain-agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating consistent improvements in generalization.
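The embedding-sampling step described above can be sketched as follows. This is an illustrative stand-in, not the authors' style transfer pipeline, and the embedding dimension, the bank of style codes, and the blend factor alpha are assumed placeholders.

```python
# Sketch: sampling a random style embedding for style augmentation.
# The 100-dim embedding size, the bank of "real" style embeddings, and the
# interpolation weight alpha are placeholders; a real pipeline would feed the
# sampled vector into an arbitrary-style-transfer network's decoder.
import numpy as np

rng = np.random.default_rng(0)
style_bank = rng.normal(size=(1000, 100))    # stand-in for embeddings of real style images

mu = style_bank.mean(axis=0)                 # fit a Gaussian to the observed embeddings
cov = np.cov(style_bank, rowvar=False)

def sample_style_embedding(alpha=0.5):
    """Draw a random style code; alpha=0 reproduces the mean style, alpha=1 full randomization."""
    z = rng.multivariate_normal(mu, cov)
    return alpha * z + (1.0 - alpha) * mu

aug_code = sample_style_embedding(alpha=0.7)
print(aug_code.shape)                        # (100,) -> passed to the style-transfer decoder
```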
Principles of critical discourse analysis
This paper discusses some principles of critical discourse analysis, such as the explicit sociopolitical stance of discourse analysts, and a focus on dominance relations by elite groups and institutions as they are being enacted, legitimated or otherwise reproduced by text and talk. One of the crucial elements of this analysis of the relations between power and discourse is the patterns of access to (public) discourse for different social groups. Theoretically it is shown that in order to be able to relate power and discourse in an explicit way, we need the cognitive interface of models, knowledge, attitudes, ideologies and other social representations of the social mind, which also relate the individual and the social, and the micro- and the macro-levels of social structure. Finally, the argument is illustrated with an analysis of parliamentary debates about ethnic affairs.
Existence of nonsingular closed 1-forms in a de Rham cohomology class
© Publications mathématiques de l'I.H.É.S., 1994, all rights reserved. Access to the archives of the journal "Publications mathématiques de l'I.H.É.S." (http://www.ihes.fr/IHES/Publications/Publications.html) implies agreement with the general conditions of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must contain this copyright notice.
Fast userspace packet processing
In recent years, we have witnessed the emergence of high speed packet I/O frameworks, bringing unprecedented network performance to userspace. Using the Click modular router, we first review and quantitatively compare several such packet I/O frameworks, showing their superiority to kernel-based forwarding. We then reconsider the issue of software packet processing in the context of modern commodity hardware with hardware multi-queues, multi-core processors and non-uniform memory access. Through a combination of existing techniques and improvements of our own, we derive modern general principles for the design of software packet processors. Our implementation of a fast packet processor framework, integrating a faster Click with both Netmap and DPDK, exhibits a speed-up of up to about 2.3x compared to other software implementations when used as an IP router.
Realistic Microwave Breast Models Through T1-Weighted 3-D MRI Data
In this paper we present an effective method for developing realistic numerical three-dimensional (3-D) microwave breast models of different shapes, sizes, and tissue densities. These models are especially convenient for microwave breast cancer imaging applications and for numerical analysis of human breast-microwave interactions. As in recent studies in this area, anatomical information about the breast tissue is collected from T1-weighted 3-D MRI data of different patients in the prone position. The method presented in this paper offers significant improvements, including efficient noise reduction and tissue segmentation, nonlinear mapping of electromagnetic properties, realistically asymmetric phantom shapes, and a realistic classification of breast phantoms. Our method contains a five-step approach in which each MRI voxel is classified and mapped to the appropriate dielectric properties. In the first step, the MRI data are denoised by estimating and removing the bias field from each slice, after which the voxels are segmented into two main tissues, fibro-glandular and adipose. Using the distribution of the voxel intensities in the MRI histogram, two nonlinear mapping functions are generated for the dielectric permittivity and conductivity profiles, which allow each MRI voxel to be mapped to its proper dielectric properties. The obtained dielectric profiles are then converted into 3-D numerical breast phantoms using several image processing techniques, including morphological operations and filtering. The resultant phantoms are classified according to their adipose content, which is a critical parameter that affects penetration depth during microwave breast imaging.
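As a rough illustration of the nonlinear mapping step, the sketch below pushes normalized voxel intensities through a sigmoid spanning assumed adipose and fibro-glandular permittivity bounds. The bounds and curve shape are placeholders, not the mapping functions fitted in the paper.

```python
# Sketch of an intensity-to-permittivity mapping: normalized T1 voxel intensities
# are mapped through a smooth nonlinear curve between illustrative adipose and
# fibro-glandular relative permittivities. All numeric values are placeholders.
import numpy as np

EPS_ADIPOSE, EPS_FIBRO = 5.0, 50.0         # illustrative relative permittivities

def intensity_to_permittivity(voxels, midpoint=0.5, steepness=10.0):
    """Map normalized T1 intensities in [0, 1] to relative permittivity."""
    v = (voxels - voxels.min()) / (np.ptp(voxels) + 1e-9)     # normalize the slice
    # In T1-weighted MRI adipose tissue is bright, so darker voxels get the
    # higher (fibro-glandular) permittivity.
    w = 1.0 / (1.0 + np.exp(steepness * (v - midpoint)))
    return EPS_ADIPOSE + w * (EPS_FIBRO - EPS_ADIPOSE)

slice_ = np.random.rand(128, 128)          # stand-in for one denoised MRI slice
eps_map = intensity_to_permittivity(slice_)
print(eps_map.min(), eps_map.max())
```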
Reexpression of hSNF5 in malignant rhabdoid tumor cell lines causes cell cycle arrest through a p21(CIP1/WAF1)-dependent mechanism.
Loss of hSNF5 function is usually observed in malignant rhabdoid tumor (MRT), a highly aggressive pediatric neoplasm. Previous studies have shown that reexpression of hSNF5 in MRT cell lines causes G1 cell cycle arrest with p16(INK4A), p21(CIP1/WAF1), and cyclin D1 playing key roles in MRT cell growth control. However, we have shown that reexpression of hSNF5 induced cell cycle arrest in the absence of p16(INK4A) expression. These results indicate that the mechanism of hSNF5-induced cell cycle arrest is context dependent. Here, we investigated the relationship between p21(CIP1/WAF1) and hSNF5 in the regulation of growth using several MRT cell lines. We found that G1 cell cycle arrest occurred concomitant with an increase in p21(CIP1/WAF1) mRNA and protein levels and preceded p16(INK4A) mRNA and protein upregulation. Chromatin immunoprecipitation data confirmed that hSNF5 appeared at both p21(CIP1/WAF1) and p16(INK4A) promoters after reexpression. We further showed that p21(CIP1/WAF1) induction showed both p53-dependent and p53-independent mechanisms. We also showed that reduction of p21(CIP1/WAF1) expression by RNAi significantly inhibited hSNF5-induced G(1) arrest. Our results show that both p21(CIP1/WAF1) and p16(INK4A) are targets for hSNF5 and that p21(CIP1/WAF1) upregulation during hSNF5-induced G(1) arrest precedes p16(INK4A) upregulation. These findings indicate that SNF5 mediates a temporally controlled program of cyclin-dependent kinase inhibition to restrict aberrant proliferation in MRT cells.
The prevalence of cirrhosis and hepatocellular carcinoma in patients with human immunodeficiency virus infection.
UNLABELLED Cirrhosis is a leading cause of death among patients infected with human immunodeficiency virus (HIV). We sought to determine risk factors for and time trends in the prevalence of cirrhosis, decompensated cirrhosis, and hepatocellular carcinoma (HCC) among patients diagnosed with HIV who received care in the Veterans Affairs (VA) health care system nationally between 1996 and 2009 (n = 24,040 in 2009). Among patients coinfected with HIV and hepatitis C virus (HCV), there was a dramatic increase in the prevalence of cirrhosis (3.5%-13.2%), decompensated cirrhosis (1.9%-5.8%), and HCC (0.07%-1.6%). Little increase was observed among patients without HCV coinfection in the prevalence of cirrhosis (1.7%-2.2%), decompensated cirrhosis (1.1%-1.2%), and HCC (0.03%-0.13%). In 2009, HCV infection was present in the majority of patients with HIV who had cirrhosis (66%), decompensated cirrhosis (62%), and HCC (80%). Independent risk factors for cirrhosis included HCV infection (adjusted odds ratio [AOR], 5.82; 95% confidence interval [CI], 5.0-6.7), hepatitis B virus (HBV) infection (AOR, 2.40; 95% CI, 2.0-2.9), age (AOR, 1.03; 95% CI, 1.02-1.04), Hispanic ethnicity (AOR, 1.76; 95% CI, 1.4-2.2), diabetes (AOR, 1.79; 95% CI, 1.6-2.1), and alcohol abuse (AOR, 1.78; 95% CI, 1.5-2.1), whereas black race (AOR, 0.56; 95% CI, 0.48-0.64) and successful eradication of HCV (AOR, 0.61; 95% CI, 0.4-0.9) were protective. Independent risk factors for HCC included HCV infection (AOR, 10.0; 95% CI, 6.1-16.4), HBV infection (AOR, 2.82; 95% CI, 1.7-4.7), age (AOR, 1.05; 95% CI, 1.03-1.08), and low CD4+ cell count (AOR, 2.36; 95% CI, 1.3-4.2). Among 5999 HIV/HCV-coinfected patients, 994 (18%) had ever received HCV antiviral treatment, of whom 165 (17%) achieved sustained virologic response. CONCLUSION The prevalence of cirrhosis and HCC has increased dramatically among HIV-infected patients driven primarily by the HCV epidemic. Potentially modifiable risk factors include HCV infection, HBV infection, diabetes, alcohol abuse, and low CD4+ cell count.
Austrian Moderate Altitude Study 2000 (AMAS 2000). The effects of moderate altitude (1,700 m) on cardiovascular and metabolic variables in patients with metabolic syndrome
We investigated the changes in the cardiovascular system [resting blood pressure (BP) and heart rate (HR), measured by means of a 24-h ambulatory BP and a holter-electrocardiogram (ECG)], glycemic parameters, and lipid metabolism of subjects suffering from metabolic syndrome during a 3-week sojourn at 1,700 m in the Austrian Alps. A total of 22 male subjects with metabolic syndrome were selected. Baseline investigations were performed at Innsbruck (500 m above sea level). During the 3-week altitude stay the participants simulated a holiday with moderate sports activities. Examinations were performed on days 1, 4, 9, and 19. After returning to Innsbruck, post-altitude examinations were conducted after 7–10 days and 6–7 weeks, respectively. The 24-h ambulatory BP and holter ECG revealed a decrease in average HR, BP, and rate pressure product (RPP: systolic blood pressure × HR) after 3 weeks of altitude exposure. In some patients, an increase in premature ventricular beats was observed at the end compared to the beginning of the exposure to moderate altitude. The ECG revealed no ischemic ST-segment changes. Maximal physical capacity as measured by symptom-limited maximal cycle ergometry tests remained unchanged during the study. Six weeks after the altitude exposure the blood pressure increased again and returned to pretest levels. The Homeostasis Model Assessment index, which is a measure of insulin resistance, decreased significantly and glucose concentrations obtained after an oral glucose tolerance test were significantly lower after the stay at altitude compared to the basal values. We conclude that after a 3-week exposure to moderate altitude, patients with metabolic syndrome (1) tolerated their sojourn without any physical problems, (2) exhibited short-term favorable effects on the cardiovascular system, and (3) had significant improvements in glycemic parameters that were paralleled by a significant increase in high-density-lipoprotein-cholesterol.
Deepr: A Convolutional Net for Medical Records
Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.
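A minimal sketch of a Deepr-style architecture is given below. It follows the description in the abstract (embed coded elements, convolve to detect local motifs, pool, predict risk) but is not the authors' code; the vocabulary size, dimensions, and filter width are assumptions.

```python
# Sketch (PyTorch) of a Deepr-style model: embed discrete medical codes
# (including special tokens for time gaps / transfers), run a 1-D convolution
# to pick up local "motifs", max-pool over the record, and predict risk.
import torch
import torch.nn as nn

class DeeprLike(nn.Module):
    def __init__(self, vocab_size=2000, emb_dim=64, n_filters=100, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=kernel_size // 2)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, codes):                      # codes: (batch, seq_len) integer tokens
        x = self.embed(codes).transpose(1, 2)      # -> (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))               # detect local clinical motifs
        x = x.max(dim=2).values                    # global max-pool over the record
        return torch.sigmoid(self.out(x)).squeeze(-1)  # readmission risk in [0, 1]

model = DeeprLike()
fake_records = torch.randint(1, 2000, (8, 120))    # 8 synthetic records of 120 tokens
print(model(fake_records).shape)                   # torch.Size([8])
```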
Motion Feature Network: Fixed Motion Filter for Action Recognition
Spatio-temporal representations of frame sequences play an important role in the task of action recognition. Previously, using optical flow as temporal information in combination with a set of RGB images that contain spatial information has shown great performance enhancement in action recognition tasks. However, it has an expensive computational cost and requires a two-stream (RGB and optical flow) framework. In this paper, we propose MFNet (Motion Feature Network), containing motion blocks which make it possible to encode spatio-temporal information between adjacent frames in a unified network that can be trained end-to-end. The motion block can be attached to any existing CNN-based action recognition framework with only a small additional cost. We evaluated our network on two action recognition datasets (Jester and Something-Something) and achieved competitive performance on both datasets by training the networks from scratch.
Charged lepton G-2 and constraints on new physics
A review of the theoretical and experimental values for the charged lepton (electron and muon) anomalous magnetic moment $a_l = (g_l-2)/2$ is presented. Employing the most accurate value for the fine structure constant, $\alpha^{-1} = 137.03599993(52)$ (0.0038 ppm), obtained \cite{Kin196} from the electron $(g-2)$, we find the new complete standard model prediction for the anomalous magnetic moment of the muon, $a^{th}_{\mu} = 116591595(67)\times 10^{-11}$. The comparison of this theoretical value with the precise experimental result \cite{Sch} yields the estimate for the difference $\Delta a_{\mu} = a^{exp}_{\mu} - a^{th}_{\mu}$ at the 95% confidence level: $-95 \times 10^{-10} \leq \Delta a_{\mu} \leq 236 \times 10^{-10}$. The expected increase in accuracy by a factor of about 20 in the forthcoming Brookhaven National Laboratory measurement of $a_{\mu}$ implies $-47 \times 10^{-11} \leq \Delta a_{\mu} \leq 118 \times 10^{-11}$ (95% C.L.). This interval is used to obtain constraints on "new physics". The one-loop contributions $a^{B_i}_l$ of different bosons predicted within extensions of the standard model and coupled to a charged lepton are discussed. The dependence of $a^{B_i}_l$ on the masses of the bosons and leptons in the vacuum polarization loops is investigated. Constraints on "new physics" are obtained by requiring that the new contributions $a^{B_i}_{\mu}$ to the muon anomalous magnetic moment lie within the latter interval for $\Delta a_{\mu}$.
Extended primary culture of human hepatocytes in a collagen gel sandwich system
To develop a strategy for extended primary culture of human hepatocytes, we placed human hepatocytes between two layers of collagen gel, called a “collagen gel sandwich.” Maintenance of hepatocellular functions in this system was compared with that of identical hepatocyte preparations cultured on dry-collagen coated dishes or co-cultured with rat liver epithelial cells. Human hepatocytes in a collagen gel sandwich (five separate cultures) survived for more than 4 wk, with the longest period of culture being 78 d. They maintained polygonal morphology with bile canaliculuslike structures and high levels of albumin secretion throughout the period of culture. In contrast, hepatocytes on dry-collagen became feature-less, and albumin secretion could not be detected after 14 d of culture. This loss of albumin secretion was partially recovered by overlaying one layer of collagen gel. Ethoxyresorufin O-deethylase activity, associated with cytochrome P450 1A2, was detected basally up to 29 d in collagen gel sandwich culture. These activities were induced four- to eightfold after induction with dibenz(a,h)anthracene. Cocultures also maintained basal activity up to 29 d. However, their inducibility was lower than that of hepatocytes in collagen gel sandwich. No ethoxyresorufin O-deethylase activity was detected in hepatocytes cultured on dry-collagen at 7 d. Thus, the collagen gel sandwich system preserves differentiated morphology and functions of human hepatocytes in primary culture for a prolonged period of time. This system is a promising model for studying human hepatocellular function, including protein synthesis and drug metabolism in vitro.
Domain-Specific Heuristics in Answer Set Programming
We introduce a general declarative framework for incorporating domain-specific heuristics into ASP solving. We accomplish this by extending the first-order modeling language of ASP with a distinguished heuristic predicate. The resulting heuristic information is processed as an equitable part of the logic program and subsequently exploited by the solver when it comes to non-deterministically assigning a truth value to an atom. We implemented our approach as a dedicated heuristic in the ASP solver clasp and demonstrate its promise in an empirical evaluation.
Albanian Part-of-Speech Tagging: Gold Standard and Evaluation
In this paper, we present a gold standard corpus for Albanian part-of-speech tagging and perform evaluation experiments with different statistical taggers. The corpus consists of more than 31,000 tokens and has been manually annotated with a medium-sized tagset that can adequately represent the syntagmatic aspects of the language. We provide mappings from the full tagset to both the original Google Universal Part-of-Speech Tags and the variant used in the Universal Dependencies project. We perform experiments with different taggers on the full tagset as well as on the coarser tagsets and achieve accuracies of up to 95.10%.
Improved Techniques for Training GANs
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
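Feature matching is one of the training techniques proposed in this line of work. The sketch below shows that objective in isolation, with toy networks standing in for the generator and an intermediate discriminator layer; it is an illustrative sketch, not the authors' training code.

```python
# Sketch (PyTorch) of the feature-matching objective: instead of maximizing the
# discriminator's output directly, the generator matches the mean intermediate
# discriminator features of real and generated batches. Networks are toy MLPs.
import torch
import torch.nn as nn

disc_features = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # intermediate D layer f(x)
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())

real = torch.rand(64, 784)                      # stand-in for a batch of real images
fake = generator(torch.randn(64, 100))          # generated batch

# L_FM = || E_x[f(x)] - E_z[f(G(z))] ||^2, real statistics treated as a fixed target.
real_stats = disc_features(real).mean(dim=0).detach()
fake_stats = disc_features(fake).mean(dim=0)
loss_fm = (real_stats - fake_stats).pow(2).sum()

loss_fm.backward()                              # gradients reach the generator parameters
print(float(loss_fm))
```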
Prosocial learning agents solve generalized Stag Hunts better than selfish ones
Real world interactions are full of coordination problems [2, 3, 8, 14, 15], and thus constructing agents that can solve them is an important problem for artificial intelligence research. One of the simplest, most heavily studied coordination problems is the matrix-form, two-player Stag Hunt. In the Stag Hunt, each player makes a choice between a risky action (hunt the stag) and a safe action (forage for mushrooms). Foraging for mushrooms always yields a safe payoff, while hunting yields a high payoff if the other player also hunts but a very low payoff if one shows up to hunt alone. This game has two important Nash equilibria: either both players show up to hunt (the payoff-dominant equilibrium) or both players stay home and forage (the risk-dominant equilibrium [7]). In the Stag Hunt, when the payoff to hunting alone is sufficiently low, dyads of learners as well as evolving populations converge to the risk-dominant (safe) equilibrium [6, 8, 10, 11]. The intuition here is that even a slight amount of doubt about whether one's partner will show up causes an agent to choose the safe action. This in turn causes partners to be less likely to hunt in the future, and the system trends to the inefficient equilibrium. We are interested in the problem of agent design: our task is to construct an agent that will go into an initially poorly understood environment and make decisions. Our agent must learn from its experiences to update its policy and maximize some scalar reward. However, there will also be other agents which we do not control. These agents will also learn from their experiences. We ask: if the environment has Stag Hunt-like properties, can we make changes to our agent's learning to improve its outcomes? We focus on reinforcement learning (RL); however, many of our results should generalize to other learning algorithms.
Predictors of Virological Response in 3,235 Chronic HCV Egyptian Patients Treated with Peginterferon Alpha-2a Compared with Peginterferon Alpha-2b Using Statistical Methods and Data Mining Techniques.
Despite the appearance of new oral antiviral drugs, pegylated interferon (PEG-IFN)/RBV may remain the standard of care therapy for some time, and several viral and host factors are reported to be correlated with therapeutic effects. This study aimed to reveal the independent variables associated with failure of sustained virological response (SVR) to PEG-IFN alpha-2a versus PEG-IFN alpha-2b in treatment of naive chronic hepatitis C virus (HCV) Egyptian patients using both statistical methods and data mining techniques. This retrospective cohort study included 3,235 chronic hepatitis C patients enrolled in a large Egyptian medical center: 1,728 patients had been treated with PEG-IFN alpha-2a plus ribavirin (RBV) and 1,507 patients with PEG-IFN alpha-2b plus RBV between 2007 and 2011. Both multivariate analysis and Reduced Error Pruning Tree (REPTree)-based model were used to reveal the independent variables associated with treatment response. In both treatment types, alpha-fetoprotein (AFP) >10 ng/mL and HCV viremia >600 × 10(3) IU/mL were the independent baseline variables associated with failure of SVR, while male gender, decreased hemoglobin, and thyroid-stimulating hormone were the independent variables associated with good response (P < 0.05). Using REPTree-based model showed that low AFP was the factor of initial split (best predictor) of response for either PEG-IFN alpha-2a or PEG-IFN alpha-2b (cutoff value 8.53, 4.89 ng/mL, AUROC = 0.68 and 0.61, P = 0.05). Serum AFP >10 ng/mL and viral load >600 × 10(3) IU/mL are variables associated with failure of response in both treatment types. REPTree-based model could be used to assess predictors of response.
Fitting Superellipses
In the literature, methods for fitting superellipses to data tend to be computationally expensive due to the non-linear nature of the problem. This paper describes and tests several fitting techniques which provide different trade-offs between efficiency and accuracy. In addition, we describe various alternative error-of-fit (EOF) measures that can be applied by most superellipse fitting methods.
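A minimal sketch of one such fit is given below: recovering the axis lengths and exponent of an axis-aligned, centered superellipse by least squares on the algebraic (implicit-equation) residual. It illustrates the kind of error-of-fit discussed above rather than reproducing any specific method from the paper; the synthetic data and initial guess are placeholders.

```python
# Sketch: fit an axis-aligned, centered superellipse |x/a|^n + |y/b|^n = 1 to
# noisy 2-D points by minimizing the algebraic residual (the simplest EOF).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
a_true, b_true, n_true = 3.0, 2.0, 4.0
t = rng.uniform(0, 2 * np.pi, 300)
# Parametric superellipse: x = a*sgn(cos t)*|cos t|^(2/n), y = b*sgn(sin t)*|sin t|^(2/n)
x = a_true * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2 / n_true) + rng.normal(0, 0.02, t.size)
y = b_true * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2 / n_true) + rng.normal(0, 0.02, t.size)

def residual(params):
    a, b, n = params
    return np.abs(x / a) ** n + np.abs(y / b) ** n - 1.0   # algebraic error of fit

fit = least_squares(residual, x0=[1.0, 1.0, 2.0], bounds=([0.1, 0.1, 0.5], [10, 10, 20]))
print("estimated (a, b, n):", np.round(fit.x, 2))
```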
Green Tea and Its Extracts in Cancer Prevention and Treatment
Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We performed a PubMed literature search for green tea consumption and its correlation to the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. We additionally included selected mechanistic studies for a possible mode of action. The comparability between studies was limited due to major differences in study outlines; a meta-analysis was thus not possible and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophageal and stomach cancer, green tea increased cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country; for lung cancer, Chinese studies found a protective effect, but studies from outside China did not. Epidemiological studies thus do not support a cancer-protective effect. GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.
Circumcision with the Plastibell device: a long-term follow-up
Indications for operation, immediate postoperative morbidity and complications were recorded in 43 patients circumcised with the Plastibell device. Questionnaires were used to record late postoperative morbidity and complications during the mean observation period of 29 months, and were followed by a clinical and cosmetic assessment. No serious complications were encountered. Compared to classical dissection techniques, dysuria is a prominent feature when using the Plastibell device. The Plastibell method leaves a varying amount of foreskin intact, which could well explain why meatal ulcers/stenosis are not seen when employing this method. In areas with low hygienic standards we cannot recommend the method, since the ability to retain smegma is still present. Used on medical grounds, the method is preferable, as it leaves some of the foreskin intact and is quick and simple to perform.
Forgotten Siblings: Unifying Attacks on Machine Learning and Digital Watermarking
Machine learning is increasingly used in security-critical applications, such as autonomous driving, face recognition, and malware detection. Most learning methods, however, have not been designed with security in mind and thus are vulnerable to different types of attacks. This problem has motivated the research field of adversarial machine learning that is concerned with attacking and defending learning methods. Concurrently, a separate line of research has tackled a very similar problem: in digital watermarking, a pattern is embedded in a signal in the presence of an adversary. As a consequence, this research field has also extensively studied techniques for attacking and defending watermarking methods. The two research communities have worked in parallel so far, unwittingly developing similar attack and defense strategies. This paper is a first effort to bring these communities together. To this end, we present a unified notation of black-box attacks against machine learning and watermarking. To demonstrate its efficacy, we apply concepts from watermarking to machine learning and vice versa. We show that countermeasures from watermarking can mitigate recent model-extraction attacks and, similarly, that techniques for hardening machine learning can fend off oracle attacks against watermarks. We further demonstrate a novel threat for watermarking schemes based on recent deep learning attacks from adversarial learning. Our work provides a conceptual link between two research fields and thereby opens novel directions for improving the security of both machine learning and digital watermarking.
Magnetic resonance electrical impedance tomography (MREIT) for high-resolution conductivity imaging.
Cross-sectional imaging of an electrical conductivity distribution inside the human body has been an active research goal in impedance imaging. By injecting current into an electrically conducting object through surface electrodes, we induce current density and voltage distributions. Based on the fact that these are determined by the conductivity distribution as well as the geometry of the object and the adopted electrode configuration, electrical impedance tomography (EIT) reconstructs cross-sectional conductivity images using measured current-voltage data on the surface. Unfortunately, there exist inherent technical difficulties in EIT. First, the relationship between the boundary current-voltage data and the internal conductivity distribution bears a nonlinearity and low sensitivity, and hence the inverse problem of recovering the conductivity distribution is ill posed. Second, it is difficult to obtain accurate information on the boundary geometry and electrode positions in practice, and the inverse problem is sensitive to these modeling errors as well as measurement artifacts and noise. These result in EIT images with a poor spatial resolution. In order to produce high-resolution conductivity images, magnetic resonance electrical impedance tomography (MREIT) has been lately developed. Noting that injection current produces a magnetic as well as electric field inside the imaging object, we can measure the induced internal magnetic flux density data using an MRI scanner. Utilization of the internal magnetic flux density is the key idea of MREIT to overcome the technical difficulties in EIT. Following original ideas on MREIT in early 1990s, there has been a rapid progress in its theory, algorithm and experimental techniques. The technique has now advanced to the stage of human experiments. Though it is still a few steps away from routine clinical use, its potential is high as a new impedance imaging modality providing conductivity images with a spatial resolution of a few millimeters or less. This paper reviews MREIT from the basics to the most recent research outcomes. Focusing on measurement techniques and experimental methods rather than mathematical issues, we summarize what has been done and what needs to be done. Suggestions for future research directions, possible applications in biomedicine, biology, chemistry and material science are discussed.
Role of the gut microbiota in immunity and inflammatory disease
The mammalian intestine is colonized by trillions of microorganisms, most of which are bacteria that have co-evolved with the host in a symbiotic relationship. The collection of microbial populations that reside on and in the host is commonly referred to as the microbiota. A principal function of the microbiota is to protect the intestine against colonization by exogenous pathogens and potentially harmful indigenous microorganisms via several mechanisms, which include direct competition for limited nutrients and the modulation of host immune responses. Conversely, pathogens have developed strategies to promote their replication in the presence of competing microbiota. Breakdown of the normal microbial community increases the risk of pathogen infection, the overgrowth of harmful pathobionts and inflammatory disease. Understanding the interaction of the microbiota with pathogens and the host might provide new insights into the pathogenesis of disease, as well as novel avenues for preventing and treating intestinal and systemic disorders.
The use of optical flow for road navigation
This paper describes procedures for obtaining a reliable and dense optical flow from image sequences taken by a television (TV) camera mounted on a car moving in typical outdoor scenarios. The optical flow can be computed from these image sequences using several techniques. Differential techniques for computing the optical flow do not provide adequate results, because of poor texture in the images and the presence of shocks and vibrations experienced by the TV camera during image acquisition. By using correlation-based techniques and by correcting the optical flows for shocks and vibrations, useful sequences of optical flows can be obtained. When the car is moving along a flat road and the optical axis of the TV camera is parallel to the ground, the motion field is expected to be almost quadratic and to have a specific structure. As a consequence, the egomotion can be estimated from this optical flow, and information on the speed and the angular velocity of the moving vehicle is obtained. By analyzing the optical flow it is also possible to recover a coarse segmentation of the flow, in which objects moving with a different speed are identified. By combining information from intensity edges, a better localization of motion boundaries is obtained. These results suggest that the optical flow can be successfully used by a vision system for assisting a driver of a vehicle moving on ordinary streets and motorways.
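The claim that the motion field over a flat road is almost quadratic suggests a simple least-squares summary; the sketch below fits a quadratic model to synthetic flow samples. The parametrization and data are illustrative assumptions, not the paper's exact egomotion model.

```python
# Sketch: least-squares fit of a quadratic motion-field model to sparse optical
# flow samples, of the kind that can summarize egomotion over a flat road.
import numpy as np

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)      # normalized image coordinates

# Synthetic "observed" horizontal flow with noise (true coefficients chosen arbitrarily).
u_obs = 0.2 + 0.5 * xs - 0.1 * ys + 0.3 * xs**2 + 0.05 * xs * ys + rng.normal(0, 0.01, xs.size)

# Design matrix for u(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
coeffs, *_ = np.linalg.lstsq(A, u_obs, rcond=None)
print(np.round(coeffs, 3))          # recovered quadratic flow coefficients
```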
How to read a paper. Statistics for the non-statistician. I: Different types of data need different statistical tests.
As medicine leans increasingly on mathematics, no clinician can afford to leave the statistical aspects of a paper to the "experts." If you are numerate, try the "Basic Statistics for Clinicians" series in the Canadian Medical Association Journal [1-4] or a more mainstream statistical textbook [5]. If, on the other hand, you find statistics impossibly difficult, this article and the next in this series give a checklist of preliminary questions to help you appraise the statistical validity of a paper.
Nucleons and nuclei in the context of low-energy QCD
This presentation reports on recent developments concerning basic aspects of low-energy QCD as they relate to the understanding of the nucleon mass and the nuclear many-body problem.
From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0
This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific--it is what it is by how it differs from alternative experiences; integration says that it is unified--irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as "differences that make a difference" within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true "zombies"--unconscious feed-forward systems that are functionally equivalent to conscious complexes.
A machine learning approach for fingerprint based gender identification
This paper deals with the problem of gender classification using fingerprint images. Our approach to gender identification uses machine learning to determine the differences between fingerprint images. Each image in the database was represented by a feature vector consisting of the ridge thickness to valley thickness ratio (RTVTR) and the ridge density. By using a support vector machine trained on a set of 150 male and 125 female images, we obtain a robust classifying function for male and female feature vector patterns.
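The classification step can be sketched as follows with scikit-learn; the two-dimensional feature vectors below are synthetic placeholders rather than the RTVTR and ridge density values measured in the paper.

```python
# Sketch of the classification step: an SVM trained on two-dimensional feature
# vectors (RTVTR, ridge density). Feature values are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical distributions: females drawn with higher RTVTR / ridge density here.
male = np.column_stack([rng.normal(0.45, 0.08, 150), rng.normal(11.0, 1.5, 150)])
female = np.column_stack([rng.normal(0.60, 0.08, 125), rng.normal(13.5, 1.5, 125)])
X = np.vstack([male, female])
y = np.array([0] * 150 + [1] * 125)            # 0 = male, 1 = female

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```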
Completely Distributed Power Allocation using Deep Neural Network for Device to Device communication Underlaying LTE
Device-to-device (D2D) communication underlaying LTE can be used to distribute the traffic load of eNBs. However, a conventional D2D link is controlled by an eNB, which still places a burden on the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the set of powers, decided independently by each D2D node, can optimize the overall cell throughput. We suggest a distributed deep learning architecture in which the devices are trained as a group but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to the eNB. The proposed scheme, implemented using TensorFlow, can provide the same throughput as the conventional method even though it operates in a completely distributed manner.
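A minimal sketch of the per-device idea is given below: a small network that maps a device's local observation to a bounded transmit power, with no signaling to the eNB. The observation layout, layer sizes, and power cap are assumptions, and the joint training against the cell-throughput objective is omitted.

```python
# Sketch (PyTorch) of a per-D2D-transmitter policy network: each device maps its
# own local observation (e.g., measured channel gains) to a transmit power in
# [0, P_MAX] without help from the eNB or other devices. All sizes are placeholders.
import torch
import torch.nn as nn

P_MAX = 0.2                                     # watts, hypothetical power cap

class PowerPolicy(nn.Module):
    def __init__(self, obs_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs):
        return P_MAX * torch.sigmoid(self.net(obs))   # bounded transmit power

policy = PowerPolicy()
local_obs = torch.randn(1, 6)                   # one device's local measurements
print(float(policy(local_obs)))                 # chosen power, decided fully locally
```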
A Direct Least-Squares (DLS) method for PnP
In this work, we present a Direct Least-Squares (DLS) method for computing all solutions of the perspective-n-point camera pose determination (PnP) problem in the general case (n ≥ 3). Specifically, based on the camera measurement equations, we formulate a nonlinear least-squares cost function whose optimality conditions constitute a system of three third-order polynomials. Subsequently, we employ the multiplication matrix to determine all the roots of the system analytically, and hence all minima of the LS, without requiring iterations or an initial guess of the parameters. A key advantage of our method is scalability, since the order of the polynomial system that we solve is independent of the number of points. We compare the performance of our algorithm with the leading PnP approaches, both in simulation and experimentally, and demonstrate that DLS consistently achieves accuracy close to the Maximum-Likelihood Estimator (MLE).
RL²: Fast Reinforcement Learning via Slow Reinforcement Learning
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL², the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL² experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL² is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL² on a vision-based navigation task and show that it scales up to high-dimensional problems.
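The interface described above can be sketched as follows: a recurrent policy that consumes the observation, previous action, reward, and termination flag at each step and keeps its hidden state across episode boundaries within one MDP. Dimensions are placeholders and the outer slow RL training loop is omitted; this is not the authors' implementation.

```python
# Sketch (PyTorch) of an RL^2-style interface: a GRU policy receives
# (observation, previous action, reward, done) every step and carries its hidden
# state across episodes of the same MDP, so "fast" learning lives in the activations.
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim + n_actions + 2, hidden)   # + reward + done
        self.logits = nn.Linear(hidden, n_actions)
        self.n_actions = n_actions

    def forward(self, obs, prev_action, reward, done, h):
        a_onehot = torch.nn.functional.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([obs, a_onehot, reward.unsqueeze(-1), done.unsqueeze(-1)], dim=-1)
        h = self.gru(x, h)                    # hidden state is NOT reset when done=1
        return torch.distributions.Categorical(logits=self.logits(h)), h

policy = RL2Policy()
h = torch.zeros(1, 64)                        # persists across episodes of one MDP
obs = torch.zeros(1, 4)
action, reward, done = torch.zeros(1, dtype=torch.long), torch.zeros(1), torch.zeros(1)
for step in range(5):                         # dummy rollout against a placeholder MDP
    dist, h = policy(obs, action, reward, done, h)
    action = dist.sample()
```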
Pest Detection and Extraction Using Image Processing Techniques
Detection of pests in paddy fields is a major challenge in the field of agriculture; therefore, effective measures should be developed to fight the infestation while minimizing the use of pesticides. Techniques of image analysis are extensively applied to agricultural science and provide maximum protection to crops, which can ultimately lead to better crop management and production. Monitoring of pest infestation relies on manpower; however, automatic monitoring has been advancing in order to minimize human efforts and errors. This study extends the implementation of different image processing techniques to detect and extract insect pests by establishing an automated detection and extraction system for estimating pest densities in paddy fields. Experimental results show that the proposed system provides a simple, efficient and fast solution for detecting pests in rice fields.
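One plausible minimal version of the detect-and-extract step is sketched below using OpenCV 4 (grayscale conversion, Otsu thresholding, contour extraction and counting); the input path and minimum blob area are placeholders, and the paper's exact processing chain may differ.

```python
# Sketch of a simple detect-and-count pipeline: grayscale conversion, Otsu
# thresholding, and contour extraction to isolate dark pest blobs against the
# paddy background. Image path and minimum blob area are placeholders.
import cv2

img = cv2.imread("paddy_field.jpg")                       # hypothetical input image
if img is None:
    raise SystemExit("place an input image at paddy_field.jpg first")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)                  # suppress sensor noise

# Otsu's method picks the binarization threshold automatically; pests are assumed
# to appear darker than the background, hence THRESH_BINARY_INV.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pests = [c for c in contours if cv2.contourArea(c) > 50]  # drop tiny speckles

print(f"estimated pest count: {len(pests)}")
cv2.drawContours(img, pests, -1, (0, 0, 255), 2)          # overlay detections in red
cv2.imwrite("pests_detected.jpg", img)
```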
Neural Open Information Extraction
Conventional Open Information Extraction (Open IE) systems are usually built on hand-crafted patterns from other NLP tools such as syntactic parsing, yet they face problems of error propagation. In this paper, we propose a neural Open IE approach with an encoder-decoder framework. Distinct from existing methods, the neural Open IE approach learns highly confident arguments and relation tuples bootstrapped from a state-of-the-art Open IE system. An empirical study on a large benchmark dataset shows that the neural Open IE system significantly outperforms several baselines, while maintaining comparable computational efficiency.
The acetylcholine releaser linopirdine increases parietal regional cerebral blood flow in Alzheimer’s disease
Centrally acting cholinergic drugs have been reported to increase regional cerebral blood flow (rCBF) as measured by single photon emission computed tomography (SPECT) in brain regions affected by Alzheimer’s disease (AD). We studied the effects of the acetylcholine releaser linopirdine (LPD) on SPECT rCBF in patients with probable AD. Twenty-four AD patients (12 M, 12 F; mean age ± SD = 68.9 ± 8.2 years) and 13 healthy controls (8 M, 5 F; 68.4 ± 8.0 years) participated. AD patients were scanned with 20 mCi of Tc-99m-ECD at baseline and following 4 weeks of treatment with LPD 40 mg TID (n = 15) or placebo TID (n = 9) in a double-blind trial. Healthy subjects were scanned for comparison with baseline AD scans. Cortical/cerebellar rCBF ratios were derived for nine cortical structures. The combined parietal association cortex showed a 20.6% rCBF reduction in patients relative to controls. Patients treated with LPD showed an increase in parietal rCBF of 4.1 ± 5.8%, whereas those treated with placebo showed a change of −2.0 ± 7.4% (F = 5.13; df = 1, 22; P = 0.03). These data support the conclusion that rCBF abnormalities in AD are, in part, truly “functional” and can be selectively altered with pharmacological interventions. The parietal activation seen with LPD and other cholinergic AD drug therapies suggests the importance of measuring parietal lobe neuropsychological function when evaluating these drugs.
The role of autophagy in modulation of neuroinflammation in microglia
Microglia have multiple functions in regulating homeostasis in the central nervous system (CNS), and microglial inflammation is thought to play a role in the etiology of neurodegenerative diseases. When endogenous or exogenous stimuli disturb microenvironmental homeostasis in the CNS, microglia critically determine the fate of other neural cells. Recently, autophagy has been reported to influence inflammation and activation of microglia. Although the interaction between autophagy and macrophages has been reported and reviewed at length, the role of autophagy in microglia has yet to be reviewed. Herein, we highlight recent advances on the emerging role of autophagy in microglia, focusing on the regulation of autophagy during microglial inflammation and on the possible mechanisms involved.
Molten salt synthesis and characterization of Li4Ti5−xMnxO12 (x = 0.0, 0.05 and 0.1) as anodes for Li-ion batteries
Sub-micrometer sized Li4Ti5−xMnxO12 (x = 0.0, 0.05 and 0.1) particles were synthesized by a single-step molten salt method using LiCl–KCl as a flux. The synthesized material was structurally characterized by X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. The XRD analysis revealed that the particles are highly crystalline and have a face-centered cubic spinel structure. The presence of the expected functional groups was confirmed through FTIR analysis. Field-emission scanning electron microscopy (FE-SEM) images showed the particles to be polyhedral in shape with a uniform size distribution, and revealed a reduction in particle size due to the Mn4+ dopant ions. Electrochemical studies performed using cyclic voltammetry (CV), charge–discharge measurements, and electrochemical impedance spectroscopy (EIS) indicate that Li4Ti4.9Mn0.1O12 possesses a better discharge capacity (305 mAh/g), cycling stability, and charge-carrier conductivity than both Li4Ti4.95Mn0.05O12 (265 mAh/g) and Li4Ti5O12 (240 mAh/g). Cycling tests show acceptable capacity fading even after the 20th cycle. These results suggest that Li4Ti4.9Mn0.1O12 could be utilized as a suitable anode material for Li-ion batteries.
I get it already! The influence of ChairBot motion gestures on bystander response
How could a rearranging chair convince you to let it by? This paper explores how robotic chairs might negotiate passage in shared spaces with people, using motion as an expressive cue. The user study evaluates the efficacy of three gestures at convincing a busy participant to let the chair by. This within-participants study consisted of three subsequent trials, in which a person completing a puzzle at a standing desk is approached by a robotic chair trying to squeeze by. The measure was whether participants moved out of the robot's way or not. Because they were engaged in the activity, people deferred to the robot in slightly less than half the trials. The main finding, however, is that over-communication elicits more blocking behavior, perhaps because it is annoying or because people want chairs to know their place (socially speaking). The Forward-Back gesture, which was most effective at negotiating passage in the first trial, was least effective in the second and third trials. The more subtle Pause gesture and the slightly louder but less aggressive Side-to-Side gesture were much more likely to be deferred to in later trials, but not a single participant deferred to them in the first trial. The results demonstrate that the Forward-Back gesture was the clearest way to communicate the robot's intent; however, they also give evidence of a communicative trade-off between clarity and politeness, particularly when direct communication is associated with aggression. The takeaway for robot design is: be informative initially, but avoid over-communicating later.
Bayesian super-resolution of text image sequences from low resolution observations
This paper deals with the problem of reconstructing high-resolution text images from an incomplete set of undersampled, blurred, and noisy images shifted with subpixel displacements. We derive mathematical expressions for the calculation of the maximum a posteriori estimate of the high-resolution image and for the estimation of the parameters involved in the model. The method is tested on real text images and car plates, examining the impact of blurring and of the number of available low-resolution images on the final estimate.
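The abstract does not reproduce the observation model, so the following is a standard Bayesian super-resolution formulation of the kind from which such a MAP estimate is typically derived; the operators D (downsampling), H_k (blur), C(s_k) (subpixel shift) and the precisions α, β are generic notation rather than necessarily the paper's.

\[
y_k \;=\; D\, H_k\, C(s_k)\, x + n_k, \qquad n_k \sim \mathcal{N}\!\left(0,\ \beta^{-1} I\right),
\]
\[
\hat{x} \;=\; \arg\max_{x}\ p\!\left(x \mid \{y_k\}\right)
        \;=\; \arg\min_{x}\ \frac{\beta}{2}\sum_{k}\bigl\|y_k - D\, H_k\, C(s_k)\, x\bigr\|^{2} \;+\; \frac{\alpha}{2}\,\|Q x\|^{2},
\]

where x is the high-resolution image, y_k the k-th low-resolution observation, and Q a high-pass operator encoding a smoothness prior; the hyperparameters α and β are the model parameters estimated alongside the image.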
Adding procalcitonin to the MASCC risk-index score could improve risk stratification of patients with febrile neutropenia
Infectious complications can be life-threatening in patients with chemotherapy-induced febrile neutropenia (FN). The Multinational Association of Supportive Care in Cancer (MASCC) risk-index score is used to predict the complications of these patients, with a focus on identifying low-risk patients who may be candidates for outpatient management. In this study, we evaluated procalcitonin (PCT) and the MASCC score for predicting bacteremia and septic shock in patients with FN. From November 2010 to October 2011, 355 patients with FN were prospectively enrolled. Clinical and laboratory findings, including procalcitonin, and the MASCC score were analyzed and correlated with the infectious complications of FN. Of the 355 patients, 35 (9.9 %) had bacteremia, and 25 (7.0 %) developed septic shock. PCT ≥0.5 ng/mL (OR 3.96, 95 % CI 1.51–10.40), platelet count <100 × 10³/mm³ (OR 2.50, 95 % CI 1.10–5.66), and MASCC score <21 (OR 2.45, 95 % CI 1.03–5.85) were independently predictive of bacteremia, while PCT ≥1.5 ng/mL (OR 29.78, 95 % CI 9.10–97.39) and MASCC score <21 (OR 9.46, 95 % CI 3.23–27.72) were independent predictors of septic shock. Among the 306 patients classified as low-risk FN by the MASCC score, 52 had PCT ≥0.5 ng/mL and 31 had PCT ≥1.5 ng/mL. Of the 52 patients with PCT ≥0.5 ng/mL, 12 (23.1 %) had bacteremia, and of the 31 patients with PCT ≥1.5 ng/mL, 7 (22.6 %) developed septic shock. Adopting PCT as a routine test in clinical practice alongside the MASCC score could improve risk stratification of patients with FN.
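A minimal sketch of the stratification idea, combining the MASCC cutoff with the PCT thresholds reported in the abstract (≥0.5 ng/mL for bacteremia risk, ≥1.5 ng/mL for septic-shock risk); the function, its labels and the management suggestions are illustrative assumptions, not a validated clinical tool.

```python
def stratify_fn(mascc_score: int, pct_ng_ml: float) -> str:
    """Toy risk label from MASCC score and procalcitonin, per the cutoffs in the abstract."""
    high_risk_mascc = mascc_score < 21          # MASCC < 21 marks high risk
    if high_risk_mascc or pct_ng_ml >= 1.5:
        return "high risk (consider inpatient management; watch for septic shock)"
    if pct_ng_ml >= 0.5:
        return "intermediate risk (MASCC low-risk but elevated PCT; bacteremia possible)"
    return "low risk (candidate for outpatient management)"

print(stratify_fn(mascc_score=23, pct_ng_ml=0.8))   # MASCC low-risk, PCT elevated
```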
Building health informatics skills for health professionals: results from the Australian Health Informatics Skill Needs Survey.
OBJECTIVE To ascertain health professionals' perceptions of the health informatics skills required in their roles. DESIGN A paper-based survey with a stratified random sample of Australian health professionals and a web-based survey open to all Australian health professionals were conducted. MEASUREMENT A questionnaire on health professionals' perceived degree of competency required for a total of 69 specific skills in five skill categories, based on the International Medical Informatics Association's (IMIA) set of recommendations on education and IMIA's scientific map. RESULTS 462 health professionals responded to the paper-based questionnaire, and 167 responded to the Internet questionnaire. Internet respondents reported higher required degrees of competency for specific health informatics and information technology skills than paper respondents, while paper respondents valued clinical skills more highly than the Internet respondents. CONCLUSION Health professionals increasingly use information technology (IT), and some also deploy, research or develop health care IT. Consequently, they need to be adequately educated for their specific roles in health informatics. Our results inform developers of educational programs while acknowledging the diversity of roles in health informatics and the diversity of pathways towards a professional health informatics qualification.
Random Cross-Sectional Determination of the Level of Awareness Among Female Saudi Patients About Breast Cancer
The purpose of this study was to determine, in a random sample of female Saudi patients, the level of awareness and knowledge of the risk factors and symptoms of breast cancer, as well as awareness of breast self-examination practices. A random cross-sectional survey was conducted over 4 months (December 2013 to March 2014) at two private medical clinics in Riyadh, Saudi Arabia. The survey instrument was a self-explanatory, user-friendly questionnaire. The study subjects were 174 randomly selected Saudi female patients with no medical history of breast cancer who visited these private clinics for medical advice or consultation on problems unrelated to the breast. Only 47.1 % of participants recognized early menses as a risk factor. The most commonly known risk factor was a family history of breast cancer (84 %). The most widely recognized symptoms of breast cancer were breast lumps (86.2 %) and breast pain (93.7 %). Overall awareness of breast self-examination (BSE) was 81.6 %. Many participants were aware that proper, assisted knowledge of BSE can help in the early detection of breast cancer, and that BSE is the most widely used method of screening for breast cancer in clinics and hospitals. All the participants showed sufficient knowledge about the risk factors and symptoms of breast cancer. These baseline findings encourage providing more self-explanatory information to patients and guide health authorities in developing effective breast health care programs across the entire Kingdom for the female population as a whole, not only for patients visiting health care clinics for advice on other medical issues.
Intestinal mucosal dysfunction and infection during remission-induction therapy for acute myeloid leukaemia
Intestinal barrier function was prospectively examined in the course of a clinical trial evaluating the efficacy and safety of lisofylline for reducing infectious morbidity related to cytotoxic therapy-induced intestinal epithelial damage in patients receiving standard remission-induction therapy for acute myeloid leukaemia. The absorption and permeation of oral D-xylose, lactulose and mannitol were measured weekly from baseline until marrow recovery in adult recipients of idarubicin plus cytarabine for untreated acute myeloid leukaemia. These studies were correlated with non-haematologic chemotherapy-related toxicities reflecting mucosal damage, including nausea, vomiting, stomatitis, diarrhoea, abdominal pain and systemic infection. D-xylose absorption decreased, and the lactulose:mannitol ratio reflecting intestinal permeability increased, from baseline until the second and third week after the start of treatment, followed by recovery. These measures correlated with infection rates, nausea, vomiting, diarrhoea and increased blood product utilization. Lisofylline was associated with increased intestinal permeability, nausea, vomiting and infection-related morbidity despite a reduction in the duration of neutropaenia. These surrogates of intestinal barrier function correlated well with clinically important outcomes despite the failure to demonstrate reduced morbidity with lisofylline, and they represent useful objective outcome measures for future clinical trials of products for ameliorating the effects of cytotoxic therapy on the intestinal mucosa.
Experimental results for artificial intelligence-based self-organized 5G networks
To date, mobile networks are still managed in a manual or semi-automatic manner, which is costly and time-consuming. For the forthcoming Fifth Generation (5G) system, the large-scale, heterogeneous, software-defined and virtualized infrastructure simply becomes unmanageable if no innovative management paradigm is applied. Recently, artificial intelligence has been proposed for the 5G system to realize intelligent management and highly self-organized networks. In this paper, proof-of-concept experiments, including the setup of an intelligent 5G testbed, its closed-loop control and the enabling algorithms, are presented. The experimental results reveal that applying artificial intelligence to wireless network management is both feasible and effective.