title | abstract
---|---
Bad Vibrations: The History of the Idea of Music as a Cause of Disease | Contents: Introduction: musical orders and disorders; From sensibility to pathology: nervous music, 1700-1850; Modern music and nervous modernity: Wagnerism as a disease of civilization, 1850-1914; Pathological music, politics and race: Germany and the United States, 1900-45; Music as mind control, music as weapon: pathological music since 1945; Bibliography; Index. |
Occipito-atlanto-axial Hypermobility: Clinical Features and Dynamic Analysis of Cranial Settling and Posterior Gliding of Occipital Condyle: Part 1: Findings in Patients with Hereditary Disorders of Connective Tissue and Ehlers-Danlos Syndrome | Object: To investigate hereditary disorders of connective tissue (HDCT) and Ehlers-Danlos syndrome (EDS), which can present with lower brain stem symptoms attributable to occipito-atlanto-axial hypermobility and cranial settling, and their relationship to Chiari malformation type I (CMI). Methods: The diagnostic criteria for EDS and related HDCT were prospectively met by 155 patients. Osseous structures comprising the craniocervical junction were investigated morphometrically using reconstructed 2D-CT and plain x-ray images in 135 patients with HDCT/EDS, and the results were compared with those in normal controls (n=55). Results: CMI was present in 124 (80%) of the HDCT/EDS cases. HDCT/EDS patients with CMI had a greater incidence of lower brain stem symptoms and signs. The measured basion-dens interval (BDI), basion-atlas interval (BAI), atlas-dens interval (ADI), dens-atlas interval (DAI), clivus-atlas angle (CAA), clivus-axis angle (CXA), and atlas-axis angle (AXA) were the same in the supine and upright positions in normal controls. In the HDCT/EDS cohort, assumption of the upright position produced reduction of the BDI (3.3 mm), enlargement of the BAI (2.8 mm), and reduction of the CXA (10.8°), CAA (5.8°, p<0.001), and AXA (12.3°). These changes were reducible by cervical traction. Conclusions: Morphometric evidence of cranial settling and posterior gliding of the occipital condyles in the HDCT/EDS cohort suggests hypermobility of the atlanto-occipital and atlanto-axial joints. This hypermobility induces more prominent brain stem symptoms in patients with CMI, and patients with CMI have greater hypermobility of the occipito-atlanto-axial joints. (Received: December 29, 2008; accepted: January 26, 2009) |
Chapter 1 Non-negative Matrix Factorizations for Clustering: A | Recently there has been significant development in the use of non-negative matrix factorization (NMF) methods for various clustering tasks. NMF factorizes an input non-negative matrix into two non-negative matrices of lower rank. Although NMF can be used for conventional data analysis, the recent overwhelming interest in NMF is due to the newly discovered ability of NMF to solve challenging data mining and machine learning problems. In particular, NMF with the sum of squared error cost function is equivalent to a relaxed K-means clustering, the most widely used unsupervised learning algorithm. In addition, NMF with the I-divergence cost function is equivalent to probabilistic latent semantic indexing, another unsupervised learning method popularly used in text analysis. Many other data mining and machine learning problems can be reformulated as an NMF problem. This chapter aims to provide a comprehensive review of non-negative matrix factorization methods for clustering. In particular, we outline the theoretical foundations of NMF for clustering, provide an overview of different variants of NMF formulations, and examine several practical issues in NMF algorithms. We also summarize recent advances in using NMF-based methods for solving many other clustering problems, including co-clustering, semi-supervised clustering, and consensus clustering, and discuss some future research directions. |
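A minimal sketch of the clustering interpretation described in this abstract, using scikit-learn's NMF on toy non-negative data; the data, rank, and solver settings are illustrative assumptions, not the chapter's own experiments. X is factorized as X ≈ WH, and each row of W is read as a soft cluster membership, mirroring the relaxed K-means equivalence.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 20)))  # toy non-negative data matrix

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # n_samples x k "membership" matrix
H = model.components_        # k x n_features basis matrix

labels = W.argmax(axis=1)    # hard cluster assignment per sample
print(labels[:10])
```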
Route Designer: A Retrosynthetic Analysis Tool Utilizing Automated Retrosynthetic Rule Generation | Route Designer, version 1.0, is a new retrosynthetic analysis package that generates complete synthetic routes for target molecules starting from readily available starting materials. Rules describing retrosynthetic transformations are automatically generated from reaction databases, which ensure that the rules can be easily updated to reflect the latest reaction literature. These rules are used to carry out an exhaustive retrosynthetic analysis of the target molecule, in which heuristics are used to mitigate the combinatorial explosion. Proposed routes are prioritized by an empirical rating algorithm to present a diverse profile of the most promising solutions. The program runs on a server with a web-based user interface. An overview of the system is presented together with examples that illustrate Route Designer's utility. |
RolX: structural role extraction & mining in large graphs | Given a network, intuitively two nodes belong to the same role if they have similar structural behavior. Roles should be automatically determined from the data, and could be, for example, "clique-members," "periphery-nodes," etc. Roles enable numerous novel and useful network-mining tasks, such as sense-making, searching for similar nodes, and node classification. This paper addresses the question: Given a graph, how can we automatically discover roles for nodes? We propose RolX (Role eXtraction), a scalable (linear in the number of edges), unsupervised learning approach for automatically extracting structural roles from general network data. We demonstrate the effectiveness of RolX on several network-mining tasks: from exploratory data analysis to network transfer learning. Moreover, we compare network role discovery with network community discovery. We highlight fundamental differences between the two (e.g., roles generalize across disconnected networks, communities do not) and show that the two approaches are complementary in nature. |
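RolX's actual pipeline (recursive feature extraction and model selection for the number of roles) is not spelled out in the abstract, so the following is only a simplified stand-in for the underlying idea: build a node-by-feature matrix from local structural features, then factorize it with NMF so each node gets a mixed membership over latent roles. The feature choices and role count below are assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

G = nx.karate_club_graph()
nodes = list(G.nodes())
feats = np.array([
    [G.degree(n),
     nx.clustering(G, n),
     np.mean([G.degree(m) for m in G.neighbors(n)])]  # mean neighbor degree
    for n in nodes
])

V = feats - feats.min(axis=0)                # ensure non-negativity per column
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
node_roles = model.fit_transform(V)          # n x r role-membership matrix
roles = node_roles.argmax(axis=1)            # dominant role per node
print(dict(zip(nodes[:5], roles[:5])))
```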
Online Misinformation: Challenges and Future Directions | Misinformation has become a common part of our digital media environments and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected the decision making processes in many domains, including economy, health, environment, and elections, among others. Misinformation and its generation, propagation, impact, and management is being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.) since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussions on the future design and development of algorithms, methodologies, and applications. |
An Adaptive OFDMA Platform for IEEE 802.22 Based on Cognitive Radio | Cognitive radio (CR) is the key technology that will enable flexible, efficient, and reliable spectrum use by adapting the radio's characteristics to the real-time conditions of the environment. In this paper, an adaptive OFDMA platform is presented to satisfy the requirements of IEEE 802.22, the first standard based on CR. In the system, two channel types are supported so that different AMC methods can be applied according to user channel environments. Simulation results are also provided to find a better way of constructing diversity subchannels when AMC is performed with the average SNR over the whole set of subcarriers. |
Development and initial validation of a 15-item informant version of the Geriatric Depression Scale. | OBJECTIVE
To develop a brief informant version of the Geriatric Depression Scale for use in screening for depression in older adults.
DESIGN
A scale development and validation study.
SETTING
Internal medicine and geriatric outpatient clinics located at the James A. Haley Veterans' Medical Center and the University of South Florida Medical Center, Tampa, Florida.
PARTICIPANTS
A total of 147 patients (81 females and 66 males) and their adult informants.
MEASUREMENTS
Self and informant versions of the 30-item Geriatric Depression Scale, NEO-FFI, and a health behaviors questionnaire.
RESULTS
The 15-item informant version of the GDS was found to have sufficient internal consistency reliability (alpha = 0.86) and retest reliability (r = 0.81) to support its use as a clinical instrument. Construct validity was demonstrated by a pattern of correlations with external demographic and personality variables consistent with those of other versions of the GDS, as well as substantive correlations with these other versions. Efficacy of the GDSI-15 was found to be as good as that for the full 30-item informant version of the GDS.
CONCLUSIONS
The GDSI-15 may be a useful adjunct or alternative to standard screening methods in assessing patients in outpatient settings. |
Environmental Applications of Semiconductor Photocatalysis | A tremendous set of environmental problems relates to the remediation of hazardous wastes, contaminated groundwaters, and the control of toxic air contaminants. For example, the slow pace of hazardous waste remediation at military installations around the world is causing a serious delay in conversion of many of these facilities to civilian uses. Over the last 10 years, problems related to hazardous waste remediation have emerged as a high national and international priority. Problems with hazardous wastes at military installations are related in part to the disposal of chemical wastes in lagoons, underground storage tanks, and dump sites. As a consequence of these disposal practices, the surrounding soil and underlying groundwater aquifers have become contaminated with a variety of hazardous (i.e., toxic) chemicals. Typical wastes of concern include heavy metals, aviation fuel, military-vehicle fuel, solvents and degreasing agents, and chemical byproducts from weapons manufacturing. The projected costs for cleanup at more than 1800 military installations in the United States have been put at $30 billion; the time required for cleanup has been estimated to be more than 10 years. In the civilian sector, the elimination of toxic and hazardous chemical substances such as the halogenated hydrocarbons from waste effluents and previously contaminated sites has become a major concern. More than 540 million metric tons of hazardous solid and liquid waste are generated annually by more than 14,000 installations in the United States. A significant fraction of these wastes is disposed of on the land each year. Some of these wastes eventually contaminate groundwater and surface water. Groundwater contamination is likely to be the primary source of human contact with toxic chemicals emanating from more than 70% of the hazardous waste sites in the United States. General classes of compounds of concern include solvents, volatile organics, chlorinated volatile organics, dioxins, dibenzofurans, pesticides, PCBs, chlorophenols, asbestos, heavy metals, and arsenic compounds. Some specific compounds of interest are 4-chlorophenol, pentachlorophenol, trichloroethylene (TCE), perchloroethylene (PCE), CCl4, CHCl3, CH2Cl2, ethylene dibromide, vinyl chloride, ethylene dichloride, methyl chloroform, p-chlorobenzene, and hexachlorocyclopentadiene. The occurrence of TCE, PCE, CFC-113 (i.e., Freon-113) and other grease-cutting agents in soils and groundwaters is widespread. In order to address this significant problem, extensive research is underway to develop advanced analytical, biochemical, and physicochemical methods for the characterization and elimination of hazardous chemical compounds from air, soil, and water. Advanced physicochemical processes such as semiconductor photocatalysis are the subject of this review. |
Using Smartphones to Collect Behavioral Data in Psychological Science: Opportunities, Practical Considerations, and Challenges. | Smartphones now offer the promise of collecting behavioral data unobtrusively, in situ, as it unfolds in the course of daily life. Data can be collected from the onboard sensors and other phone logs embedded in today's off-the-shelf smartphone devices. These data permit fine-grained, continuous collection of people's social interactions (e.g., speaking rates in conversation, size of social groups, calls, and text messages), daily activities (e.g., physical activity and sleep), and mobility patterns (e.g., frequency and duration of time spent at various locations). In this article, we have drawn on the lessons from the first wave of smartphone-sensing research to highlight areas of opportunity for psychological research, present practical considerations for designing smartphone studies, and discuss the ongoing methodological and ethical challenges associated with research in this domain. It is our hope that these practical guidelines will facilitate the use of smartphones as a behavioral observation tool in psychological science. |
Design of Hough transform hardware accelerator for lane detection | The Hough transform is a well-known straight-line detection algorithm and has been widely used in many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time. Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees, which is sufficient for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using a Xilinx Virtex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6 Kbyte of block memory, giving a performance of 10,000 fps on VGA images (5,000 edge points). The proposed hardware on FPGA (0.1 ms) is 450 times faster than the software implementation on an ARM Cortex-A9 at 1.4 GHz (45 ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system). |
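A software model of the accelerator's central trick, with hypothetical resolutions and image size: restricting theta to (-20, 20) degrees shrinks the accumulator, which on an FPGA translates directly into less logic area, and the per-angle votes are independent, so the hardware computes them in parallel (vectorized here).

```python
import numpy as np

def hough_lines(edge_points, width, height, theta_res=1.0, rho_res=1.0):
    thetas = np.deg2rad(np.arange(-20.0, 20.0 + theta_res, theta_res))
    rho_max = float(width + height)            # loose bound on |rho|
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t            # vote for all angles at once
        idx = ((rho + rho_max) / rho_res).astype(int)
        acc[idx, np.arange(len(thetas))] += 1
    return acc, rhos, thetas

# toy usage: edge points on the vertical line x = 100 in a VGA image
pts = [(100, y) for y in range(0, 480, 4)]
acc, rhos, thetas = hough_lines(pts, 640, 480)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(round(rhos[r], 1), round(np.rad2deg(thetas[t]), 1))  # ~100.0, ~0.0
```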
Evaluating business intelligence platforms: a case study | The paper examines some common platforms supporting Business Intelligence activities in order to establish evaluation criteria for system choice. The evaluation applies a software measurement method based on analysis of the functional complexity of the platforms. The study has been performed on an academic data warehouse that uses historical data available in legacy databases. Experimental results are reported which show the advantages and the drawbacks of each considered system. Key-Words: Data warehouse, Data mart, OLAP system, Functional size measurement. |
Intelligent Adaptive Curiosity: a source of Self-Development | This paper presents the mechanism of Intelligent Adaptive Curiosity. This is a drive which pushes the robot towards situations in which it maximizes its learning progress. It makes the robot focus on situations which are neither too predictable nor too unpredictable. This mechanism is a source of self-development for the robot: the complexity of its activity autonomously increases. Indeed, we show that it first spends time in situations which are easy to learn, then progressively shifts its attention to situations of increasing difficulty, avoiding situations in which nothing can be learnt. 1. An engine for self-development. Development involves the progressive increase of the complexity of the activities of an agent with an associated increase of its capabilities. This process is now recognized to be crucial for the formation of non-trivial intelligence (Brooks and Steins, 1994; Weng et al., 2001). Thus, one of the goals of developmental robotics is to build robots which develop. Several papers have already been written showing how one could increase the efficiency of a robot's learning by putting it in situations in which the task to be solved is of increasing difficulty (e.g. Elman, 1993; Nagai et al., 2002). Yet, in all of them to our knowledge, the building of learning situations of progressive complexity was done manually by a human. As a consequence, the robots were not developing autonomously: they were involved in passive development. The goal of this paper is to present a mechanism which enables a robot to autonomously develop in a process that we call self-development (active development may also be used). This means that the robot is put in a complex, continuous, dynamic environment, and that it will be able to figure out by itself, without prior knowledge, which situations in this environment have a complexity suited for efficient learning at a given moment of its development. This mechanism is called Intelligent Adaptive Curiosity (IAC): • it is a drive in the same sense as food level maintenance or heat maintenance are drives, but it is much more complex. Instead of being about the maintenance of a physical variable, the IAC drive is about the maintenance of an abstract dynamic cognitive variable: the learning progress, which must be kept maximal. • it is called curiosity because maximizing the learning progress pushes the robot towards novel situations in which things can be learnt. • it is adaptive because the situations that are attractive change over time: indeed, once something is learnt, it will not provide learning progress anymore. • it is called intelligent because it keeps the robot away both from situations which are too predictable and from situations which are too unpredictable (i.e. the edge of order and chaos in the cognitive dynamics). Indeed, naive curiosity algorithms like "go to the most novel or most unpredictable situations" might lead the robot to completely chaotic situations which can be dangerous and where nothing can be learnt (e.g. bumping very fast into walls and bouncing against them). IAC is related to emotions and value systems (Sporns, 2000): indeed, it tags all situations as positive or negative with real numbers according to their learning progress potential, and makes the robot act so that it often finds itself in positive situations, which bring internal positive rewards. The next section will explain in detail, in a practical case, the concept of Intelligent Adaptive Curiosity. This new system is more general and much more robust than the initial system we proposed in (Kaplan and Oudeyer, 2003), as well as the system of (Schmidhuber, 1991). |
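A toy sketch of the IAC drive as described: track a recent prediction-error history per situation, define learning progress as the drop in error, and preferentially sample the situation whose progress is currently highest. The three synthetic "situations" and the windowing scheme are illustrative assumptions, not the paper's implementation.

```python
import random

class Region:
    def __init__(self):
        self.errors = []

    def progress(self, window=5):
        e = self.errors
        if len(e) < 2 * window:
            return 0.0
        older = sum(e[-2 * window:-window]) / window
        recent = sum(e[-window:]) / window
        return older - recent          # positive when error is dropping

# three situations: trivially learnable, slowly learnable, pure noise
def observe(kind, t):
    if kind == "easy":   return 1.0 / (1 + t)         # mastered almost at once
    if kind == "medium": return 1.0 / (1 + 0.05 * t)  # slow, steady progress
    return random.random()                             # unlearnable noise

regions = {k: Region() for k in ("easy", "medium", "noise")}
for step in range(300):
    # epsilon-greedy choice of the highest-learning-progress region
    if random.random() < 0.2:
        k = random.choice(list(regions))
    else:
        k = max(regions, key=lambda k: regions[k].progress())
    regions[k].errors.append(observe(k, len(regions[k].errors)))

# the agent should end up spending most time on "medium", not "noise"
print({k: len(r.errors) for k, r in regions.items()})
```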
The Large Scale Machine Learning in an Artificial Society: Prediction of the Ebola Outbreak in Beijing | Ebola virus disease (EVD) is distinguished by its high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola; however, it is hard to predict possible epidemic situations in practice. Fortunately, in recent years computational experiments based on artificial societies have appeared, providing a new approach to studying the propagation of EVD and analyzing the corresponding interventions. The rationality of the artificial society is therefore the key to the accuracy and reliability of the experimental results. Individuals' behaviors, along with travel modes, directly affect propagation among individuals. First, an artificial Beijing is reconstructed based on geodemographics, and machine learning is used to optimize individuals' behaviors. Meanwhile, an Ebola course model and a propagation model are built according to parameters observed in West Africa. Subsequently, the propagation mechanism of EVD is analyzed, epidemic scenarios are predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of the Chinese government, the conclusion is drawn that a large-scale Ebola outbreak is impossible in the city of Beijing. |
The Affordance Template ROS package for robot task programming | This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers [1] by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic JSON description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms, including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the Affordance Template package are introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications. |
Treating chronic worry: Psychological and physiological effects of a training programme based on mindfulness. | The present study examines psychological and physiological indices of emotional regulation in non-clinical high worriers after a mindfulness-based training programme aimed at reducing worry. Thirty-six female university students with high Penn State Worry Questionnaire scores were split into two equal intervention groups: (a) mindfulness, and (b) progressive muscle relaxation plus self-instruction to postpone worrying to a specific time of the day. Assessment included clinical questionnaires, daily self-report of number/duration of worry episodes and indices of emotional meta-cognition. A set of somatic and autonomic measures was recorded (a) during resting, mindfulness/relaxation and worrying periods, and (b) during cued and non-cued affective modulation of defence reactions (cardiac defence and eye-blink startle). Both groups showed equal post-treatment improvement in the clinical and daily self-report measures. However, mindfulness participants reported better emotional meta-cognition (emotional comprehension) and showed improved indices of somatic and autonomic regulation (reduced breathing pattern and increased vagal reactivity during evocation of cardiac defence). These findings suggest that mindfulness reduces chronic worry by promoting emotional and physiological regulatory mechanisms contrary to those maintaining chronic worry. |
Potential protective effect of honey against paracetamol-induced hepatotoxicity. | BACKGROUND
Paracetamol overdose causes severe hepatotoxicity that leads to liver failure in both humans and experimental animals. The present study investigates the protective effect of honey against paracetamol-induced hepatotoxicity in Wistar albino rats. We have used silymarin as a standard reference hepatoprotective drug.
METHODS
Hepatoprotective activity was assessed by measuring biochemical parameters such as the liver function enzymes, serum alanine aminotransferase (ALT) and serum aspartate aminotransferase (AST). Equally, comparative effects of honey on oxidative stress biomarkers such as malondialdehyde (MDA), reduced glutathione (GSH) and glutathione peroxidase (GPx) were also evaluated in the rat liver homogenates. We estimated the effect of honey on serum levels and hepatic content of interleukin-1beta (IL-1β) because the initial event in paracetamol-induced hepatotoxicity has been shown to be a toxic-metabolic injury that leads to hepatocyte death, activation of the innate immune response and upregulation of inflammatory cytokines.
RESULTS
Paracetamol caused marked liver damage as noted by significantly increased activities of serum AST and ALT as well as the level of IL-1β. Paracetamol also resulted in a significant decrease in liver GSH content and GPx activity, which paralleled an increase in IL-1β and MDA levels. Pretreatment with honey and silymarin prior to the administration of paracetamol significantly prevented the increase in the serum levels of hepatic enzyme markers, and reduced both oxidative stress and inflammatory cytokines. Histopathological evaluation of the livers also revealed that honey reduced the incidence of paracetamol-induced liver lesions.
CONCLUSION
Honey can be used as an effective hepatoprotective agent against paracetamol-induced liver damage. |
Austrian Economics and Public Choice | In this introduction, we will first discuss the methodological affinities between the market process and public choice approaches to political economy, and then suggest that because of these affinities market process scholars should feel at home using public choice analysis to study politics, and public choice scholars should feel at home using market process analysis to study the economy. In a fundamental sense, public choice theory refers to the application of the economic way of thinking to the study of the political process. The economic way of thinking deals with individual decision-making and exchange relationships in a variety of social settings. Mises is arguably the first scholar to champion a unification of the social sciences by way of a common rational choice model. And Hayek should be recognized as one of the forerunners of the economics of politics with his The Road to Serfdom (1945), and of constitutional political economy with his The Constitution of Liberty (1960). Furthermore, Buchanan and Tullock's contributions to modern political economy touch on these themes: rational choice, catallactics or exchange, and constitutional construction. The papers in this special issue of The Review of Austrian Economics by Buchanan and Vanberg, Levy, Foldvary, and Naka point to the strong methodological and theoretical affinities between public choice and market process economists. The papers by Holcombe, Sutter, Benson, and López move from the methodological and theoretical level to the realm of applied theory and empirical work. |
A Low-Power Text-Dependent Speaker Verification System with Narrow-Band Feature Pre-Selection and Weighted Dynamic Time Warping | To fully enable voice interaction in wearable devices, a system requires low-power, customizable voice-authenticated wake-up. Existing speaker-verification (SV) methods have shortcomings relating to power consumption and noise susceptibility. To meet the application requirements, we propose a low-power, text-dependent SV system comprising a sparse spectral feature extraction front-end showing improved noise robustness and accuracy at low power, and a back-end running an improved dynamic time warping (DTW) algorithm that preserves signal envelope while reducing misalignments. Without background noise, the proposed system achieves an equal-error-rate (EER) of 1.1%, compared to 1.4% with a conventional Mel-frequency cepstral coefficients (MFCC)+DTW system and 2.6% with a Gaussian mixture model-universal background model (GMM-UBM) based system. At 3 dB signal-to-noise ratio (SNR), the proposed system achieves an EER of 5.7%, compared to 13% with a conventional MFCC+DTW system and 6.8% with a GMM-UBM based system. The proposed system enables simple, low-power implementation such that the power consumption of the end-to-end system, which includes a voice activity detector, feature extraction front-end, and back-end decision unit, is under 380 μW. |
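The paper's weighted, envelope-preserving DTW variant is not specified in the abstract, so the sketch below shows only the plain DTW back-end it improves upon; the per-frame feature dimension and acceptance threshold are hypothetical.

```python
import numpy as np

def dtw_distance(A, B):
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized alignment cost

# usage: accept the utterance if its distance to the enrolled template
# falls below a tuned threshold (a genuine trial scores low)
rng = np.random.default_rng(0)
template = rng.normal(size=(80, 13))                    # 13-dim frame features
utterance = template + 0.05 * rng.normal(size=template.shape)
print(dtw_distance(template, utterance))
```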
Functional neuroimaging in epilepsy: FDG PET and ictal SPECT. | Epileptogenic zones can be localized by F-18 fluorodeoxyglucose positron emission tomography (FDG PET) and ictal single-photon emission computed tomography (SPECT). In medial temporal lobe epilepsy, the diagnostic sensitivity of FDG PET or ictal SPECT is excellent; however, the sensitivity of MRI is so high that the incremental sensitivity of FDG PET or ictal SPECT has yet to be proven. When MRI findings are ambiguous or normal, or discordant with those of ictal EEG, FDG PET and ictal SPECT are helpful for localization without the need for invasive ictal EEG. In neocortical epilepsy, the sensitivities of FDG PET and ictal SPECT are fair. However, because almost half of the patients are normal on MRI, FDG PET and ictal SPECT are helpful for localization, or at least for lateralization, in these non-lesional epilepsies in order to guide the subdural insertion of electrodes. Interpretation of FDG PET has recently been advanced by voxel-based analysis and automatic volume-of-interest analysis based on a population template. Both analytical methods confirmed the performance of previous visual interpretation results. Ictal SPECT has been analyzed using subtraction methods (coregistered to MRI) and voxel-based analysis. The rapidity of tracer injection, HMPAO versus ECD, and repeated ictal SPECT, which remain technical issues of ictal SPECT, are discussed in detail. |
Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors | Memory isolation is a key property of a reliable and secure computing system--an access to one memory address should not have unintended side effects on data stored in other addresses. However, as DRAM process technology scales down to smaller dimensions, it becomes more difficult to prevent DRAM cells from electrically interacting with each other. In this paper, we expose the vulnerability of commodity DRAM chips to disturbance errors. By reading from the same address in DRAM, we show that it is possible to corrupt data in nearby addresses. More specifically, activating the same row in DRAM corrupts data in nearby rows. We demonstrate this phenomenon on Intel and AMD systems using a malicious program that generates many DRAM accesses. We induce errors in most DRAM modules (110 out of 129) from three major DRAM manufacturers. From this we conclude that many deployed systems are likely to be at risk. We identify the root cause of disturbance errors as the repeated toggling of a DRAM row's wordline, which stresses inter-cell coupling effects that accelerate charge leakage from nearby rows. We provide an extensive characterization study of disturbance errors and their behavior using an FPGA-based testing platform. Among our key findings, we show that (i) it takes as few as 139K accesses to induce an error and (ii) up to one in every 1.7K cells is susceptible to errors. After examining various potential ways of addressing the problem, we propose a low-overhead solution to prevent the errors. |
Content-Based Collaborative Filtering for News Topic Recommendation | News recommendation has become a big attraction with which major Web search portals retain their users. Content-based Filtering and Collaborative Filtering are two effective methods, each serving a specific recommendation scenario. The Content-based Filtering approaches inspect rich contexts of the recommended items, while the Collaborative Filtering approaches predict the interests of long-tail users by collaboratively learning from interests of related users. We have observed empirically that, for the problem of news topic displaying, both the rich context of news topics and the long-tail users exist. Therefore, in this paper, we propose a Content-based Collaborative Filtering approach (CCF) to bring both Content-based Filtering and Collaborative Filtering approaches together. We found that combining the two is not an easy task, but the benefits of CCF are impressive. On one hand, CCF makes recommendations based on the rich contexts of the news. On the other hand, CCF collaboratively analyzes the scarce feedbacks from the long-tail users. We tailored this CCF approach for the news topic displaying on the Bing front page and demonstrated great gains in attracting users. In the experiments and analyses part of this paper, we discuss the performance gains and insights in news topic recommendation in Bing. |
Efficient segmentation and surface classification of range images | Derivation of geometric structures from point clouds is an important step towards scene understanding for mobile robots. In this paper, we present a novel method for segmentation and surface classification of ordered point clouds. Data from RGB-D cameras are used as input. Normal based region growing segments the cloud and point feature descriptors classify each segment. Not only planar segments can be described but also curved surfaces. In an evaluation on indoor scenes we show the performance of our approach as well as give a comparison to state of the art methods. |
A Multi-Resolution Pyramid for Outdoor Robot Terrain Perception | This paper addresses the problem of outdoor terrain modeling for the purposes of mobile robot navigation. We propose an approach in which a robot acquires a set of terrain models at differing resolutions. Our approach addresses one of the major shortcomings of Bayesian reasoning when applied to terrain modeling, namely artifacts that arise from the limited spatial resolution of robot perception. Limited spatial resolution causes small obstacles to be detectable only at close range. Hence, a Bayes filter estimating the state of terrain segments must consider the ranges at which that terrain is observed. We develop a multi-resolution approach that maintains multiple navigation maps, and derive rational arguments for the number of layers and their resolutions. We show that our approach yields significantly better results in a practical robot system, capable of acquiring detailed 3-D maps in large-scale outdoor environments. |
Presurgical evaluation of epilepsy. | An overview of the following six cortical zones that have been defined in the presurgical evaluation of candidates for epilepsy surgery is given: the symptomatogenic zone; the irritative zone; the seizure onset zone; the epileptogenic lesion; the epileptogenic zone; and the eloquent cortex. The stepwise historical evolution of these different zones is described. The current diagnostic techniques used in the definition of these cortical zones, such as video-EEG monitoring, MRI and ictal single photon emission computed tomography, are discussed. Established diagnostic tests are set apart from procedures that should still be regarded as experimental, such as magnetoencephalography, dipole source localization and spike-triggered functional MRI. Possible future developments that might lead to a more direct definition of the epileptogenic zone are presented. |
Evaluation of the influence of ayurvedic formulation (Ayushman-15) on psychopathology, heart rate variability and stress hormonal level in major depression (Vishada). | INTRODUCTION
Ayurveda (Indian complementary and alternative medicine) is still the most sought-after system of medicine in India and has promising potential in the management of Vishada [major depressive disorder (MDD)]. However, systematic research is lacking. In this study we evaluated the influence of ayurvedic treatment (Panchakarma and Ayushman-15) on psychopathology, heart rate variability (HRV) and endocrine parameters in patients with major depression.
METHODS
81 drug-naive patients, diagnosed with Vishada by an ayurvedic physician and with MDD according to DSM-IV-TR, were given the ayurvedic Virechana module (therapeutic purgation) and were randomized into two groups. Patients in group A (n=41) received Ayushman-15A while group B (n=40) received Ayushman-15B for two months, along with Shirodhara (forehead oil-pouring therapy). Patients were assessed with the Hamilton Depression Rating Scale (HDRS), the Montgomery-Asberg Depression Rating Scale (MADRS), and heart rate variability (HRV). Cortisol and adrenocorticotropic hormone (ACTH) were estimated at baseline and after ayurvedic therapy. HRV and endocrine parameters were compared with those of age- and gender-matched healthy volunteers.
RESULTS
HRV parameters showed significant sympathetic dominance in patients compared to healthy volunteers. Two months of ayurvedic treatment significantly decreased psychopathology, showed increase in vagal tone, decrease in sympathetic tone and reduced cortisol levels. However, there was no significant difference between groups receiving Ayushman A and B.
CONCLUSION
This study provides evidence for an antidepressant, cardiac (HRV) and beneficial neuroendocrine modulatory influence of Ayurveda therapy in patients with Vishada (MDD). Further studies are needed to confirm these findings. Greater insight into the neurobiology behind this therapy might provide valuable information about newer drug targets. |
BODY SENSOR NETWORK – A WIRELESS SENSOR PLATFORM FOR PERVASIVE HEALTHCARE MONITORING | With recent advances in wireless sensor networks and embedded computing technologies, miniaturized pervasive health monitoring devices have become practically feasible. In addition to providing continuous monitoring and analysis of physiological parameters, the recently proposed Body Sensor Network (BSN) incorporates context-aware sensing for increased sensitivity and specificity. To facilitate research and development in BSN and multi-sensor data fusion, a BSN hardware development platform is presented. With its low-power, flexible and compact design, the BSN nodes provide a versatile environment for wireless sensing research and development. |
A pneumatic tactile alerting system for the driving environment | Sensory overloaded environments present an opportunity for innovative design in the area of Human-Machine Interaction. In this paper we study the usefulness of a tactile display in the automobile environment. Our approach uses a simple pneumatic pump to produce pulsations of varying frequencies on the driver's hands through a car steering wheel fitted with inflatable pads. The goal of the project is to evaluate the effectiveness of such a system in alerting the driver of a possible problem, when it is used to augment the visual display presently used in automobiles. A steering wheel that provides haptic feedback using pneumatic pockets was developed to test our hypothesis. The steering wheel can pulsate at different frequencies. The system was tested in a simple multitasking paradigm on several subjects and their reaction times to different stimuli were measured and analyzed. For these experiments, we found that using a tactile feedback device lowers reaction time significantly and that modulating frequency of vibration provides extra information that can reduce the time necessary to identify a problem. |
Relationship between postural instability and subcortical volume loss in Alzheimer's disease | The relationship between postural instability and subcortical structure in AD has received relatively little attention. The aims of this study were to assess whether there are differences in the ability to control balance between Alzheimer's disease (AD) patients and controls, and to investigate the association between subcortical gray matter volumes and postural instability in AD. We enrolled 107 consecutive AD patients and 37 controls. All participants underwent detailed neuropsychological evaluations, T1-weighted MRI at 3 T, and posture assessment using computerized dynamic posturography. We segmented the volumes of six subcortical structures (the amygdala, thalamus, caudate nucleus, putamen, globus pallidus and nucleus accumbens) and of the hippocampus, using FMRIB's Integrated Registration and Segmentation Tool. All subcortical structures, except for the globus pallidus, were smaller in AD compared with controls after adjusting for age and gender. Falling frequencies in the unilateral stance test (UST) and composite scores in the sensory organization test (SOT) were worse in AD than in controls. The motor control test did not reveal any differences between groups. On subgroup analyses in AD, the groups with poor performance in the UST or SOT exhibited significantly reduced nucleus accumbens and putamen volumes, and nucleus accumbens volume, respectively. A smaller volume of the nucleus accumbens was associated with postural instability in AD (OR [95% CI] 17.847 [2.59-122.80] for UST, 42.827 [6.06-302.47] for SOT, all P < .05). AD patients exhibited a reduced ability to control balance compared with controls, and this postural instability was associated with nucleus accumbens volume loss. Furthermore, cognitive dysfunction was more prominent in the group with severe postural instability. |
Metric-space analysis of spike trains: theory, algorithms, and application | We present the mathematical basis of a new approach to the analysis of temporal coding. The foundation of the approach is the construction of several families of novel distances (metrics) between neuronal impulse trains. In contrast to most previous approaches to the analysis of temporal coding, the present approach does not attempt to embed impulse trains in a vector space, and does not assume a Euclidean notion of distance. Rather, the proposed metrics formalize physiologically based hypotheses for those aspects of the firing pattern that might be stimulus dependent, and make essential use of the point-process nature of neural discharges. We show that these families of metrics endow the space of impulse trains with related but inequivalent topological structures. We demonstrate how these metrics can be used to determine whether a set of observed responses has a stimulus-dependent temporal structure without a vector-space embedding. We show how multidimensional scaling can be used to assess the similarity of these metrics to Euclidean distances. For two of these families of metrics (one based on spike times and one based on spike intervals), we present highly efficient computational algorithms for calculating the distances. We illustrate these ideas by application to artificial data sets and to recordings from auditory and visual cortex. |
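The abstract mentions highly efficient algorithms for the spike-time and spike-interval metric families. The sketch below is only the textbook O(nm) dynamic program for a spike-time distance in the Victor-Purpura style: inserting or deleting a spike costs 1, and moving a spike by dt costs q·|dt|. The q values in the usage lines are illustrative.

```python
import numpy as np

def spike_time_distance(t1, t2, q):
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)          # delete all spikes of train 1
    G[0, :] = np.arange(m + 1)          # insert all spikes of train 2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                       # delete
                          G[i, j - 1] + 1,                       # insert
                          G[i - 1, j - 1]                        # shift
                          + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

# q -> 0: the distance reduces to the spike-count difference;
# large q: spikes must nearly coincide to be "moved" rather than re-created
a, b = [0.1, 0.5, 1.2], [0.12, 0.48, 1.5]
print(spike_time_distance(a, b, q=1.0), spike_time_distance(a, b, q=100.0))
```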
Usability of Commercially Available Mobile Applications for Diverse Patients | Mobile applications or ‘apps’ intended to help people manage their health and chronic conditions are widespread and gaining in popularity. However, little is known about their acceptability and usability for low-income, racially/ethnically diverse populations who experience a disproportionate burden of chronic disease and its complications. The objective of this study was to investigate the usability of existing mobile health applications (“apps”) for diabetes, depression, and caregiving, in order to facilitate development and tailoring of patient-facing apps for diverse populations. Usability testing, a mixed-methods approach that includes interviewing and direct observation of participant technology use, was conducted with participants (n = 9 caregivers; n = 10 patients with depression; and n = 10 patients with diabetes) on a total of 11 of the most popular health apps (four diabetes apps, four depression apps, and three caregiver apps) on both iPad and Android tablets. The participants were diverse: 15 (58 %) African Americans, seven (27 %) Whites, two (8 %) Asians, two (8 %) Latinos with either diabetes, depression, or who were caregivers. Participants were given condition-specific tasks, such as entering a blood glucose value into a diabetes app. Participant interviews were video recorded and were coded using standard methods to evaluate attempts and completions of tasks. We performed inductive coding of participant comments to identify emergent themes. Participants completed 79 of 185 (43 %) tasks across 11 apps without assistance. Three themes emerged from participant comments: lack of confidence with technology, frustration with design features and navigation, and interest in having technology to support their self-management. App developers should employ participatory design strategies in order to have an impact on chronic conditions such as diabetes and depression that disproportionately affect vulnerable populations. While patients express interest in using technologies for self-management, current tools are not consistently usable for diverse patients. |
DESIGN OF A NOVEL SUPER WIDE BAND CIRCULAR-HEXAGONAL FRACTAL ANTENNA | In this paper, a novel circular-hexagonal fractal antenna is investigated for super wide band applications. The proposed antenna is made of iterations of a hexagonal slot inside a circular metallic patch with a transmission line. A partial ground plane and an asymmetrical patch toward the substrate are used in designing the antenna to achieve a super wide bandwidth ranging from 2.18 GHz to 44.5 GHz with a bandwidth ratio of 20.4:1. The impedance bandwidth and gain of the proposed antenna are much better than those of recently reported super wideband antennas, which makes it appropriate for many wireless communication systems such as ISM, Wi-Fi, GPS, Bluetooth, WLAN and UWB. Details of the proposed antenna design are presented and discussed. |
Managerial Stock Ownership and the Maturity Structure of Corporate Debt | This study documents that managerial stock ownership plays an important role in determining corporate debt maturity. Controlling for previously identified determinants of debt maturity, and modeling leverage and debt maturity as jointly endogenous, we document a significant and robust inverse relation between managerial stock ownership and corporate debt maturity. We also show that managerial stock ownership influences the relation between credit quality and debt maturity and between growth opportunities and debt maturity. THE IMPORTANCE OF LEVERAGE AND DEBT MATURITY STRUCTURE CHOICE in alleviating manager-shareholder agency conflicts is well recognized in the finance literature. These vital decisions are at the discretion of top managers, who are expected to make optimal (value-maximizing) financing choices on behalf of the shareholders. However, given the separation of ownership and control, managers cannot be expected to voluntarily choose the optimal debt maturity structure or leverage and self-impose monitoring unless there is an incentive mechanism to align managerial and shareholder interests. It is clear, therefore, that these decisions themselves are subject to an agency problem of managerial discretion. Managerial stock ownership can be effective in aligning the interests of managers and shareholders to mitigate such agency problems (see, e.g., Jensen and Meckling (1976)). We argue that the conflict between managers and shareholders over the maturity structure of debt arises from the inherent preference of self-interested managers for less monitoring. Our study adds a new dimension to the recently growing body of literature on capital structure choice in the presence of agency conflicts. By examining how managerial stock ownership determines corporate debt maturity structure, we provide evidence on an important, yet unaddressed, issue that is at the confluence of the capital structure and corporate governance literatures. Earlier capital structure studies emphasize the role of debt in reducing agency problems between managers and shareholders (see, e.g., Jensen and Meckling (1976)). |
Duplex Generative Adversarial Network for Unsupervised Domain Adaptation | Domain adaptation attempts to transfer the knowledge obtained from the source domain to the target domain, i.e., the domain where the testing data are. The main challenge lies in the distribution discrepancy between the source and target domains. Most existing works endeavor to learn a domain-invariant representation, usually by minimizing a distribution distance, e.g., MMD, or the discriminator in the recently proposed generative adversarial network (GAN). Following the similar idea of GAN, this work proposes a novel GAN architecture with duplex adversarial discriminators (referred to as DupGAN), which can achieve domain-invariant representation and domain transformation. Specifically, our proposed network consists of three parts: an encoder, a generator, and two discriminators. The encoder embeds samples from both domains into the latent representation, and the generator decodes the latent representation to both the source and target domains, conditioned on a domain code, i.e., it achieves domain transformation. The generator is pitted against the duplex discriminators, one for the source domain and the other for the target, to ensure the reality of the domain transformation, keep the latent representation domain-invariant, and preserve its category information. Our proposed work achieves state-of-the-art performance on unsupervised domain adaptation for digit classification and object recognition. |
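A minimal structural sketch of the three-part architecture as described, in PyTorch; layer sizes are illustrative and the training losses are omitted. One encoder, one generator conditioned on a binary domain code, and duplex discriminators whose outputs carry both a real/fake score and class logits (so category information can be enforced).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_in=784, d_z=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                 nn.Linear(256, d_z))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, d_z=64, d_out=784):
        super().__init__()
        # +1 input for the domain code (0 = source, 1 = target)
        self.net = nn.Sequential(nn.Linear(d_z + 1, 256), nn.ReLU(),
                                 nn.Linear(256, d_out), nn.Tanh())
    def forward(self, z, domain):
        code = torch.full((z.size(0), 1), float(domain))
        return self.net(torch.cat([z, code], dim=1))

def make_discriminator(d_in=784, n_classes=10):
    # one real/fake score plus class logits per sample
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                         nn.Linear(256, 1 + n_classes))

E, G = Encoder(), Generator()
D_src, D_tgt = make_discriminator(), make_discriminator()
x = torch.randn(8, 784)
z = E(x)                                 # shared latent representation
x_as_src, x_as_tgt = G(z, 0), G(z, 1)    # domain transformation both ways
print(x_as_src.shape, D_tgt(x_as_tgt).shape)
```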
Bayesian image segmentation using wavelet-based priors | This paper introduces a formulation which allows using wavelet-based priors for image segmentation. This formulation can be used in supervised, unsupervised, or semi-supervised modes, and with any probabilistic observation model (intensity, multispectral, texture). Our main goal is to exploit the well-known ability of wavelet-based priors to model piecewise smoothness (which underlies state-of-the-art methods for denoising, coding, and restoration) and the availability of fast algorithms for wavelet-based processing. The main obstacle to using wavelet-based priors for segmentation is that they are aimed at representing real values, rather than the discrete labels needed for segmentation. This difficulty is sidestepped by the introduction of real-valued hidden fields, to which the labels are probabilistically related. These hidden fields, being unconstrained and real-valued, can be given any type of spatial prior, such as one based on wavelets. Under this model, Bayesian MAP segmentation is carried out by a (generalized) EM algorithm. Experiments on synthetic and real data testify to the adequacy of the approach. |
A Survey on Multiple-Antenna Techniques for Physical Layer Security | As a complement to high-layer encryption techniques, physical layer security has been widely recognized as a promising way to enhance wireless security by exploiting the characteristics of wireless channels, including fading, noise, and interference. In order to enhance the received signal power at legitimate receivers and impair the received signal quality at eavesdroppers simultaneously, multiple-antenna techniques have been proposed for physical layer security to improve secrecy performance via exploiting spatial degrees of freedom. This paper provides a comprehensive survey on various multiple-antenna techniques in physical layer security, with an emphasis on transmit beamforming designs for multiple-antenna nodes. Specifically, we provide a detailed investigation on multiple-antenna techniques for guaranteeing secure communications in point-to-point systems, dual-hop relaying systems, multiuser systems, and heterogeneous networks. Finally, future research directions and challenges are identified. |
A note on a theorem of Erdős & Gallai | We show that the Erdős-Gallai condition characterizing graphical degree sequences of length p needs to be checked only for as many n as there are distinct terms in the sequence, not for all n with 1 ≤ n ≤ p. |
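A short sketch of the resulting test, using the standard Erdős-Gallai inequalities on a non-increasingly sorted sequence d: the k-th inequality is evaluated only at the last index of each run of equal terms, i.e., once per distinct term, rather than for every k.

```python
def is_graphical(d):
    d = sorted(d, reverse=True)
    p = len(d)
    if sum(d) % 2 == 1:
        return False
    # one checkpoint per distinct term: the last index k of each equal run
    checkpoints = [k for k in range(1, p + 1)
                   if k == p or d[k] != d[k - 1]]
    for k in checkpoints:
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphical([3, 3, 2, 2, 1, 1]))  # True: realizable as a graph
print(is_graphical([4, 4, 4, 1, 1]))     # False: fails at k = 3
```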
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection | To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection. However, its security against well-crafted attacks has not only been recently questioned, but it has been shown that machine learning exhibits inherent vulnerabilities that can be exploited to evade detection at test time. In other words, machine learning itself can be the weakest link in a security system. In this paper, we rely upon a previously-proposed attack framework to categorize potential attack scenarios against learning-based malware detection tools, by modeling attackers with different skills and capabilities. We then define and implement a set of corresponding evasion attacks to thoroughly assess the security of Drebin, an Android malware detector. The main contribution of this work is the proposal of a simple and scalable secure-learning paradigm that mitigates the impact of evasion attacks, while only slightly worsening the detection rate in the absence of attack. We finally argue that our secure-learning approach can also be readily applied to other malware detection tasks. |
Autonomous driving in a multi-level parking structure | Recently, the problem of autonomous navigation of automobiles has gained substantial interest in the robotics community. Especially during the two recent DARPA grand challenges, autonomous cars have been shown to robustly navigate over extended periods of time through complex desert courses or through dynamic urban traffic environments. In these tasks, the robots typically relied on GPS traces to follow pre-defined trajectories so that only local planners were required. In this paper, we present an approach for autonomous navigation of cars in indoor structures such as parking garages. Our approach utilizes multi-level surface maps of the corresponding environments to calculate the path of the vehicle and to localize it based on laser data in the absence of sufficiently accurate GPS information. It furthermore utilizes a local path planner for controlling the vehicle. In a practical experiment carried out with an autonomous car in a real parking garage we demonstrate that our approach allows the car to autonomously park itself in a large-scale multi-level structure. |
Fused Matrix Factorization with Geographical and Social Influence in Location-Based Social Networks | Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, have attracted millions of users to share their social friendships and their locations via check-ins. The available check-in information makes it possible to mine users' preferences for locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., the multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multi-center Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. |
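A sketch of the Multi-center Gaussian Model idea as summarized above: the check-in probability at a location is a frequency-weighted mixture of Gaussians around a user's activity centers. The bandwidth and weights below are illustrative choices, not the paper's fitted parameters.

```python
import numpy as np

def mgm_prob(loc, centers, freqs, sigma=1.0):
    loc = np.asarray(loc, dtype=float)
    w = np.asarray(freqs, dtype=float)
    w = w / w.sum()                        # normalize center visit frequencies
    p = 0.0
    for c, wi in zip(centers, w):
        d2 = np.sum((loc - np.asarray(c)) ** 2)
        # isotropic 2-D Gaussian density around each center
        p += wi * np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return p

# a user with a "home" and a "work" center; home visited twice as often
centers = [(0.0, 0.0), (5.0, 5.0)]
freqs = [40, 20]
print(mgm_prob((0.5, 0.2), centers, freqs))   # near home  -> high probability
print(mgm_prob((9.0, 9.0), centers, freqs))   # far from both -> low
```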
Wearable Smart System for Visually Impaired People | In this paper, we present a wearable smart system to help visually impaired persons (VIPs) walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations on the wrist, which is helpful when the user has hearing loss or is in a noisy environment. In addition, the system alerts people in the surroundings when the user stumbles over or requires assistance, and the alert, along with the system location, is sent as a phone message to registered mobile phones of family members and caregivers. In addition, the registered phones can be used to retrieve the system location whenever required and activate real-time tracking of the VIP. We tested the system prototype and verified its functionality and effectiveness. The proposed system has more features than other similar systems. We expect it to be a useful tool to improve the quality of life of VIPs. |
Convergence Analysis of MAP Based Blur Kernel Estimation | One popular approach for blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternatingly estimate the blur kernel and the latent image. While several successful MAP based methods have been proposed, there has been much controversy and confusion about their convergence, because sparsity priors have been shown to prefer blurry images to sharp natural images. In this paper, we revisit this problem and provide an analysis on the convergence of MAP based approaches. We first introduce a slight modification to a conventional joint energy function for blind deconvolution. The reformulated energy function yields the same alternating estimation process, but more clearly reveals how blind deconvolution works. We then show the energy function can actually favor the right solution instead of the no-blur solution under certain conditions, which explains the success of previous MAP based approaches. The reformulated energy function and our conditions for the convergence also provide a way to compare the qualities of different blur kernels, and we demonstrate its applicability to automatic blur kernel size selection, blur kernel estimation using light streaks, and defocus estimation. |
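The alternating scheme described above can be sketched in one dimension. The toy implementation below is our reading, not the paper's exact energy: it alternates an x-step that uses a smooth l2 gradient surrogate in place of the sparse prior, and a k-step that fits the kernel by constrained least squares; all parameter values are illustrative.

```python
import numpy as np

def convmat(k, n):
    """Dense 'same' convolution matrix for kernel k acting on length-n signals."""
    K = np.zeros((n, n))
    r = len(k) // 2
    for i in range(n):
        for j, kv in enumerate(k):
            col = i + j - r
            if 0 <= col < n:
                K[i, col] = kv
    return K

def alternating_map(y, ksize=5, lam=0.01, n_iter=30):
    """Alternate an x-step (deconvolution with an l2 gradient surrogate standing
    in for the sparse prior) and a k-step (non-negative, sum-to-one kernel fit)."""
    n = len(y)
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # finite-difference operator
    k = np.zeros(ksize); k[ksize // 2] = 1.0        # start from the no-blur kernel
    x = y.copy()
    r = ksize // 2
    for _ in range(n_iter):
        K = convmat(k, n)
        x = np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ y)            # x-step
        X = np.column_stack([np.roll(x, -s) for s in range(-r, r + 1)])  # circular shifts for brevity
        k, *_ = np.linalg.lstsq(X, y, rcond=None)                        # k-step
        k = np.clip(k, 0, None); k /= k.sum()                            # kernel constraints
    return x, k
```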
Bilingual dictionaries for all EU languages | Bilingual dictionaries can be automatically generated using the GIZA++ tool. However, these dictionaries contain a lot of noise, which negatively affects the output quality of tools that rely on them. In this work, we present three different methods for removing noise from automatically generated bilingual dictionaries: approaches based on LLR (log-likelihood ratio), pivoting, and transliteration. We have applied these approaches to the GIZA++ dictionaries – dictionaries covering official EU languages – in order to remove noise. Our evaluation showed that all methods help to reduce noise; however, the best performance is achieved using the transliteration-based approach. We provide all bilingual dictionaries (the original GIZA++ dictionaries and the cleaned ones), as well as the cleaning tools and scripts, free for download. |
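As an illustration of the LLR-based cleaning, the sketch below scores each dictionary pair with Dunning's log-likelihood ratio computed from 2x2 co-occurrence counts and keeps pairs above a cutoff; the counting interface and the threshold value are our assumptions, not details from the paper.

```python
import math

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 co-occurrence table:
    k11 = sentence pairs containing both source and target word,
    k12 / k21 = pairs containing only one of them, k22 = neither."""
    def H(*ks):  # unnormalized entropy term: sum k * log(k / N)
        n = sum(ks)
        return sum(k * math.log(k / n) for k in ks if k > 0)
    return 2 * (H(k11, k12, k21, k22) - H(k11 + k12, k21 + k22) - H(k11 + k21, k12 + k22))

def clean_dictionary(entries, cooc, threshold=10.83):
    """Keep only pairs whose LLR exceeds a threshold (10.83 ~ chi-square
    p<0.001 with 1 d.o.f.; the actual cutoff is a tuning choice).
    `cooc[(src, tgt)]` is assumed to return the four 2x2 counts."""
    return [(s, t) for (s, t) in entries if llr(*cooc[(s, t)]) > threshold]
```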
Detecting Trojan horses based on system behavior using machine learning method | Research on detecting malware using machine learning methods has attracted much attention in recent years. However, most research has focused on signature-based code analysis or on the analysis of system call sequences in Linux environments. Obviously, all methods have their strengths and weaknesses. In this paper, we concentrate on detecting Trojan horses from operating system information in the Windows environment using data mining techniques. Our main contributions are as follows. First, we collect Trojan horse samples in a real network environment and classify them with a scanner. Second, we collect operating system behavior features under infected and clean conditions separately using WMI management tools. Then, several classic classification algorithms are applied and a performance comparison is given. Feature selection methods are applied to obtain a ranked feature list that reflects the relevance of each system feature to Trojan horse activity; we believe this list is instructive. Finally, a feature combination method is applied in which features from different groups are combined according to their characteristics to achieve high classification performance. Experimental results demonstrate that detecting Trojan horses from system behavior information is feasible and effective. |
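A minimal sketch of the classification stage using scikit-learn is shown below; the random placeholder features stand in for real WMI-derived system counters, and the random forest with its importance ranking is one plausible choice among the "classic classification algorithms" the abstract mentions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: rows = snapshots of WMI-style system counters (hypothetical features such as
# process count, handle count, open connections); y: 1 = infected, 0 = clean.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # placeholder data standing in for real traces
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature ranking analogous to the paper's "feature order list":
clf.fit(X, y)
order = np.argsort(clf.feature_importances_)[::-1]
print("features ranked by importance:", order)
```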
Customer Service on Social Media: The Effect of Customer Popularity and Sentiment on Airline Response | Many companies now provide customer service through social media, helping and engaging their customers on a real-time basis. To study this increasingly popular practice, we examine how major airlines respond to customer comments on Twitter by exploiting a large data set containing all Twitter exchanges between customers and four major airlines from June 2013 to August 2014. We find that these airlines pay significantly more attention to Twitter users with more followers, suggesting that companies literally discriminate among customers based on their social influence. Moreover, our findings suggest that companies in the digital age are increasingly sensitive to the need to answer both customer complaints and customer compliments. |
PMSC: PatchMatch-Based Superpixel Cut for Accurate Stereo Matching | Estimating the disparity and normal direction of one pixel simultaneously, instead of only the disparity, also known as 3D label methods, can achieve much higher subpixel accuracy in the stereo matching problem. However, it is extremely difficult to assign an appropriate 3D label to each pixel from the continuous label space $\mathbb{R}^{3}$ while maintaining global consistency because of the infinite parameter space. In this paper, we propose a novel algorithm called PatchMatch-based superpixel cut to assign 3D labels of an image more accurately. In order to achieve robust and precise stereo matching between local windows, we develop a bilayer matching cost, where a bottom-up scheme is exploited to design the two layers. The bottom layer is employed to measure the similarity between small square patches locally by exploiting a pretrained convolutional neural network, and then the top layer is developed to assemble the local matching costs in large irregular windows induced by the tangent planes of object surfaces. To optimize the spatial smoothness of local assignments, we propose a novel strategy to update 3D labels. In the procedure of optimization, both segmentation information and the random refinement of PatchMatch are exploited to update the candidate 3D label set for each pixel with a high probability of achieving lower loss. Since the pairwise energy of general candidate label sets violates the submodular property of graph cut, we propose a novel multilayer superpixel structure to group candidate label sets into candidate assignments, which can thereby be efficiently fused by $\alpha$-expansion graph cut. Extensive experiments demonstrate that our method achieves higher subpixel accuracy on different data sets, and it currently ranks first on the new challenging Middlebury 3.0 benchmark among all existing methods. |
Detection of Occluding Contours and Occlusion by Active Binocular Stereo | We propose a reliable method to detect occluding contours and occlusion by active binocular stereo with an occluding contour model that describes the correspondence between the contour of curved objects and its projected image. Applying the image flow generated by moving one camera to the model, we can restrict possible matched points seen by the other camera. We detect occluding contours by fitting these matched points to the model. This method can find occluding contours and occlusion more reliably than conventional ones because stereo matching is performed by using the geometric constraint based on the occluding contour model instead of heuristic constraints. Experimental results of the proposed method applied to the actual scene are presented. |
Integrated physiologic assessment of ischemic heart disease in real-world practice using index of microcirculatory resistance and fractional flow reserve: insights from the International Index of Microcirculatory Resistance Registry. | BACKGROUND
The index of microcirculatory resistance (IMR) is a quantitative and specific index for coronary microcirculation. However, the distribution and determinants of IMR have not been fully investigated in patients with ischemic heart disease (IHD).
METHODS AND RESULTS
Consecutive patients who underwent elective measurement of both fractional flow reserve (FFR) and IMR were enrolled from 8 centers in 5 countries. Patients with acute myocardial infarction were excluded. To adjust for the influence of collateral flow, IMR values were corrected with Yong's formula (IMRcorr). High IMR was defined as greater than the 75th percentile in each of the major coronary arteries. FFR≤0.80 was defined as an ischemic value. 1096 patients with 1452 coronary arteries were analyzed (mean age 61.1, male 71.2%). Mean FFR was 0.84 and median IMRcorr was 16.6 U (Q1, Q3 12.4 U, 23.0 U). There was no correlation between IMRcorr and FFR values (r=0.01, P=0.62), and the categorical agreement of FFR and IMRcorr was low (kappa value=-0.04, P=0.10). There was no correlation between IMRcorr and angiographic % diameter stenosis (r=-0.03, P=0.25). Determinants of high IMR were previous myocardial infarction (odds ratio [OR] 2.16, 95% confidence interval [CI] 1.24-3.74, P=0.01), right coronary artery (OR 2.09, 95% CI 1.54-2.84, P<0.01), female (OR 1.67, 95% CI 1.18-2.38, P<0.01), and obesity (OR 1.80, 95% CI 1.31-2.49, P<0.01). Determinants of FFR ≤0.80 were left anterior descending coronary artery (OR 4.31, 95% CI 2.92-6.36, P<0.01), angiographic diameter stenosis ≥50% (OR 5.16, 95% CI 3.66-7.28, P<0.01), male (OR 2.15, 95% CI 1.38-3.35, P<0.01), and age (per 10 years, OR 1.21, 95% CI 1.01-1.46, P=0.04).
CONCLUSIONS
IMR showed no correlation with FFR and angiographic lesion severity, and the predictors of high IMR value were different from those for ischemic FFR value. Therefore, integration of IMR into FFR measurement may provide additional insights regarding the relative contribution of macro- and microvascular disease in patients with ischemic heart disease.
CLINICAL TRIAL REGISTRATION
URL: http://www.clinicaltrials.gov. Unique identifier: NCT02186093. |
Impaired muscle strength is associated with fractures in hemodialysis patients | Fractures are extremely common among hemodialysis (HD) patients. To assess if bone mineral density (BMD) and/or tests of muscle strength were associated with fractures, we studied 37 men and 15 women, 50 years and older, on HD for at least 1 year. We excluded subjects with prior renal transplants and women taking hormone replacement therapy. We inquired about low-trauma fractures since starting dialysis. Subjects underwent BMD testing with a Lunar DPX-L densitometer. Tests of muscle strength included: timed up and go (TUG), 6-min walk, functional reach, and grip strength. Lateral and thoracic radiographs of the spine were obtained and reviewed for prevalent vertebral fractures. We used logistic regression to examine associations between fracture (prevalent vertebral, self-reported low trauma since starting dialysis and/or both) and BMD, and fracture and muscle-strength tests. Analyses were adjusted for age, weight, and gender. Mean age was 66±9.0 years, mean weight was 72.9±15.2 kg, and most (35 of 52) participants were Caucasian. Average duration of dialysis was 40.2 (interquartile range: 24–61.2) months. The most common cause of renal failure was diabetes (16 subjects). There were no differences by gender or fracture. Of the 52 subjects, 27 had either a vertebral fracture or low trauma fracture. There was no association between fractures, hip or spine BMD, or grip strength. In contrast, greater functional reach [odds ratio (OR) per standard deviation (SD) increase: 0.29; 95% CI: 0.13–0.69), quicker TUG (OR per SD decrease: 0.14; 95% CI: 0.11–0.23), and a greater distance walked in 6 min (OR per SD increase: 0.10; 95% CI: 0.03–0.36) were all associated with a reduced risk of fracture. Impaired neuromuscular function is associated with fracture in hemodialysis patients. |
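For readers unfamiliar with "OR per SD" reporting, the sketch below shows one way such an adjusted estimate could be computed with statsmodels; the variable layout is hypothetical and does not reproduce the study's data.

```python
import numpy as np
import statsmodels.api as sm

def or_per_sd(y, predictor, covariates):
    """Odds ratio per one-SD increase in `predictor` (e.g. functional reach),
    adjusted for `covariates` (e.g. age, weight, gender), mirroring the kind
    of logistic model described above. y is a 0/1 fracture indicator."""
    z = (predictor - predictor.mean()) / predictor.std()   # standardize to SD units
    X = sm.add_constant(np.column_stack([z, covariates]))
    fit = sm.Logit(y, X).fit(disp=0)
    beta = fit.params[1]
    lo, hi = fit.conf_int()[1]
    return np.exp(beta), (np.exp(lo), np.exp(hi))          # OR and 95% CI
```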
Product aspect ranking using sentiment analysis and TOPSIS | The explosive growth of customer reviews on e-commerce websites has inspired many researchers to explore the problem of identifying the product aspects that have been mentioned in online reviews. Most of the conducted research studies extract the product aspects based on three main criteria: 1) extracting the aspects that have been commented on repeatedly in online reviews, 2) determining important aspects as those that have been described positively and negatively by many customers in their reviews, and 3) the association between the domain product aspect (like `camera') and the other aspects contained in a specific product review. However, a lacuna remains as to how to efficiently investigate online reviews to identify the most important product aspects by considering all three criteria jointly. In response, this paper proposes a novel product aspect ranking framework using sentiment analysis and TOPSIS (Technique for Order Performance by Similarity to Ideal Solution). The proposed work is decomposed into two stages: aspect extraction and aspect ranking. In the aspect extraction stage, sentiment analysis is used to identify the product aspects from customer reviews in an unsupervised manner based on the three extraction criteria. In the second stage, the extracted product aspects are fed simultaneously into TOPSIS to produce a ranked list of the most representative product aspects. The empirical evaluation of the proposed work using online reviews of four products shows its effectiveness in finding representative aspects. |
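A compact sketch of the TOPSIS ranking step is given below, with vector normalization and an ideal/anti-ideal closeness score; the example criteria (frequency, positive mentions, negative mentions) and the weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def topsis(M, weights, benefit):
    """Rank alternatives (rows of M) over criteria (columns).
    `benefit[j]` is True when larger values of criterion j are better."""
    R = M / np.linalg.norm(M, axis=0)          # vector normalization
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti  = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti,  axis=1)  # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(closeness)[::-1]         # best-first ranking of row indices

# e.g. three aspects scored on frequency, positive mentions, negative mentions:
M = np.array([[40, 30, 5], [25, 10, 2], [60, 20, 15]], dtype=float)
print(topsis(M, weights=np.array([0.4, 0.4, 0.2]), benefit=np.array([True, True, False])))
```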
Learning to Generate Market Comments from Stock Prices | This paper presents a novel encoder-decoder model for automatically generating market comments from stock prices. The model first encodes both short- and long-term series of stock prices so that it can mention short- and long-term changes in stock prices. In the decoding phase, our model can also generate a numerical value by selecting an appropriate arithmetic operation, such as subtraction or rounding, and applying it to the input stock prices. Empirical experiments show that our best model generates market comments whose fluency and informativeness approach those of human-generated reference texts. |
Electronic transport properties of individual chemically reduced graphene oxide sheets. | Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate. |
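The variable-range-hopping interpretation can be checked against conductivity data with a simple fit; the sketch below assumes the 2D Mott form with a 1/3 exponent (the 3D form would use 1/4) and uses synthetic data purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def vrh(T, sigma0, T0):
    """2D Mott variable-range-hopping conductivity; the 1/3 exponent assumes
    two-dimensional hopping between graphene islands (1/4 would be the 3D form)."""
    return sigma0 * np.exp(-(T0 / T) ** (1.0 / 3.0))

# Illustrative data only (temperature in K, conductivity in S/cm):
T = np.linspace(80, 300, 12)
sigma = vrh(T, 2.0, 5e4) * (1 + 0.05 * np.random.default_rng(1).normal(size=T.size))

(sig0, T0), _ = curve_fit(vrh, T, sigma, p0=(1.0, 1e4))
print(f"sigma0 = {sig0:.2f} S/cm, T0 = {T0:.3g} K")
```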
Information security awareness and behavior: a theory-based literature review | Benedikt Lebek, Jörg Uffen, Markus Neumann, Bernd Hohler and Michael H. Breitner (2014), "Information security awareness and behavior: a theory-based literature review", Management Research Review, Vol. 37, Issue 12, pp. 1049-1092, https://doi.org/10.1108/MRR-04-2013-0085 |
Reduction of plasma levels of soluble tumor necrosis factor and interleukin-2 receptors by means of a novel immunoadsorption column. | This non-randomized open clinical study investigated the safety and efficacy of extracorporeal fractionated plasma adsorption using the Oncosorb immune adsorption column. The column selectively binds soluble tumor necrosis factor receptors 1 and 2 (sTNF-R1 and sTNF-R2) and soluble interleukin-2 receptors (sIL2-R), thereby lowering their plasma levels in patients with metastatic cancer. Nine patients (three men and six women; mean age 48 years, range 41-68 years) with metastatic cancer received at least 12 immune adsorption procedures. Thrice-weekly immune adsorption separated a low molecular weight (150 000 D) plasma fraction from the patients' blood, passing the plasma fraction through the column in an extracorporeal plasma perfusion circuit. Respective plasma receptor mean concentrations before and after 167 procedures were: sTNF-R1: 1936 +/- 788 pg/mL before treatment vs. 1312 +/- 989 pg/mL after treatment; sTNF-R2: 3140 +/- 1173 pg/mL before treatment vs. 1816 +/- 1677 pg/mL after treatment (P < 0.01 for each inhibitor). Mean reductions in sTNF-R1 (48%), sTNF-R2 (55%), and sIL2-R levels (72%) were observed for treatments 3-12 (P < 0.001). Clinical findings indicated tumor inflammation and necrosis in most patients. Side-effects were low-grade fever, flu-like symptoms, tumor pain, and redness, warmth, tenderness, and edema. The column demonstrated safety and efficacy in lowering plasma sTNF-R1, sTNF-R2, and sIL2-R levels. Minor clinical adverse effects common to the use of extracorporeal devices were seen. |
Counterfactual Analysis in Macroeconometrics: An Empirical Investigation into the Effects of Quantitative Easing | This paper is concerned with ex ante and ex post counterfactual analyses in the case of macroeconometric applications where a single unit is observed before and after a given policy intervention. It distinguishes between cases where the policy change affects the model's parameters and where it does not. It is argued that for ex post policy evaluation it is important that outcomes are conditioned on ex post realized variables that are invariant to the policy change but nevertheless influence the outcomes. The effects of the control variables that are determined endogenously with the policy outcomes can be solved out for the policy evaluation exercise. An ex post policy ineffectiveness test statistic is proposed. The analysis is applied to the evaluation of the effects of the quantitative easing (QE) in the UK after March 2009. It is estimated that a 100 basis points reduction in the spread due to QE has an impact effect on output growth of about one percentage point, but the policy impact is very quickly reversed with no statistically significant effects remaining within 9-12 months of the policy intervention. JEL Classification: C18, C54, E65 |
COSSACS (Continue or Stop post-Stroke Antihypertensives Collaborative Study): rationale and design. | RATIONALE
Up to 40% of acute stroke patients are already taking antihypertensive therapy on hospital admission, and most will develop elevated blood pressure levels as an acute complication of the stroke. However, no clear data exist as to whether antihypertensive therapy should be continued or discontinued in the acute situation. Surveys of clinical practice reveal significant physician variability and no clear guidelines exist.
OBJECTIVES
The primary aim of the Continue or Stop post-Stroke Antihypertensives Collaborative Study (COSSACS) is to assess whether existing antihypertensive therapy should be continued or discontinued, starting within the first 24 h of onset, for the first 2 weeks following acute ischaemic and haemorrhagic stroke.
DESIGN
COSSACS is a multi-centre, prospective, randomized, open, blinded-endpoint study, in which patients on pre-existing antihypertensive therapy, admitted to hospital within 24 h of onset of suspected stroke, and within 36 h of their last dose of antihypertensive medication, are randomized to continue or stop current antihypertensive therapy.
SETTING
Acute Stroke Units/Medical Units of at least 25 UK Teaching and District General Hospitals.
PATIENTS
The study will involve 2900 patients with suspected stroke without specific indication to continue or stop their antihypertensive medication in the opinion of their treating clinician.
STUDY OUTCOMES
The primary outcome for COSSACS is the proportion of patients who are dead or dependent (defined by a modified Rankin score > 2) at 14 days post-stroke. Secondary outcomes include blood pressure changes, and neurological and functional status at 2 weeks and 6 months post-ictus. |
In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition | The rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult to evaluate the latest literature in a timely manner. That, along with a push for automated evaluation of clinical reports, presents opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task. |
Photochemical Tyrosine Oxidation in the Structurally Well-Defined α3Y Protein: Proton-Coupled Electron Transfer and a Long-Lived Tyrosine Radical | Tyrosine oxidation-reduction involves proton-coupled electron transfer (PCET) and a reactive radical state. These properties are effectively controlled in enzymes that use tyrosine as a high-potential, one-electron redox cofactor. The α3Y model protein contains Y32, which can be reversibly oxidized and reduced in voltammetry measurements. Structural and kinetic properties of α3Y are presented. A solution NMR structural analysis reveals that Y32 is the most deeply buried residue in α3Y. Time-resolved spectroscopy using a soluble flash-quench generated [Ru(2,2'-bipyridine)3](3+) oxidant provides high-quality Y32-O• absorption spectra. The rate constant of Y32 oxidation (kPCET) is pH dependent: 1.4 × 10(4) M(-1) s(-1) (pH 5.5), 1.8 × 10(5) M(-1) s(-1) (pH 8.5), 5.4 × 10(3) M(-1) s(-1) (pD 5.5), and 4.0 × 10(4) M(-1) s(-1) (pD 8.5). k(H)/k(D) of Y32 oxidation is 2.5 ± 0.5 and 4.5 ± 0.9 at pH(D) 5.5 and 8.5, respectively. These pH and isotope characteristics suggest a concerted or stepwise, proton-first Y32 oxidation mechanism. The photochemical yield of Y32-O• is 28-58% versus the concentration of [Ru(2,2'-bipyridine)3](3+). Y32-O• decays slowly, t1/2 in the range of 2-10 s, at both pH 5.5 and 8.5, via radical-radical dimerization as shown by second-order kinetics and fluorescence data. The high stability of Y32-O• is discussed relative to the structural properties of the Y32 site. Finally, the static α3Y NMR structure cannot explain (i) how the phenolic proton released upon oxidation is removed or (ii) how two Y32-O• come together to form dityrosine. These observations suggest that the dynamic properties of the protein ensemble may play an essential role in controlling the PCET and radical decay characteristics of α3Y. |
An Implementation of ITIL Guidelines for IT Support Process in a Service Organization | Service level management (SLM) is a challenge in a distributed systems environment because all processes should provide consistent, reliable, and predictable service delivery. In the early 1990s, most organizations established a few service-level agreements (SLAs) as key performance indicators (KPIs), but these were difficult to measure or monitor in a distributed systems environment. The strength of the Information Technology Infrastructure Library (ITIL) is its approach of integrating SLM with the support processes at strategic, tactical, and operational levels. This paper focuses on implementing ITIL guidelines at an operational level for the service desk and for incident, problem, and change management. The ITIL framework only provides guidelines, so a service organization needs a methodology for evaluating existing service support processes and implementing ITIL guidelines for improvement. To this end, we investigate how to apply the ITIL framework to the reengineering of an IT service support process in an organization. The approach was implemented at a dental-care service provider with ten dental clinics connected in a Wide Area Network (WAN), with the data collected into a central server in the main dental center. We first started with process mapping and then moved towards reengineering following ITIL guidelines, collecting the results in key performance areas before and after the process reengineering. We used questionnaires, document reviews, archival records, and observation techniques for collecting the data. The results demonstrated improvements of differing magnitudes. Statistical analyses such as mean and variance, together with t-value distributions and null-hypothesis testing, were also carried out to assess the quality of the results. The paper is cautious about the limited scope of the audience and the questionnaire, which may compromise the results; however, the approach is useful in obtaining significant improvements when a company invests in integrating ITIL guidelines into its service support processes. |
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain | This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm. |
Key management systems for sensor networks in the context of the Internet of Things | If a wireless sensor network (WSN) is to be completely integrated into the Internet as part of the Internet of Things (IoT), it is necessary to consider various security challenges, such as the creation of a secure channel between an Internet host and a sensor node. In order to create such a channel, it is necessary to provide key management mechanisms that allow two remote devices to negotiate certain security credentials (e.g. secret keys) that will be used to protect the information flow. In this paper we will analyse not only the applicability of existing mechanisms such as public key cryptography and pre-shared keys for sensor nodes in the IoT context, but also the applicability of those link-layer oriented key management systems (KMS) whose original purpose is to provide shared keys for sensor nodes belonging to the same WSN. |
Comprehensive microbiological findings in peri-implantitis and periodontitis. | AIM
The microbial differences between peri-implantitis and periodontitis in the same subjects were examined using 16S rRNA gene clone library analysis and real-time polymerase chain reaction.
MATERIALS AND METHODS
Subgingival plaque samples were taken from the deepest pockets of peri-implantitis and periodontitis sites in six subjects. The prevalence of bacteria was analysed using a 16S rRNA gene clone library and real-time polymerase chain reaction.
RESULTS
A total of 333 different taxa were identified from 799 sequenced clones; 231 (69%) were uncultivated phylotypes, of which 75 were novel. The numbers of bacterial taxa identified at the sites of peri-implantitis and periodontitis were 192 and 148 respectively. The microbial composition of peri-implantitis was more diverse when compared with that of periodontitis. Fusobacterium spp. and Streptococcus spp. were predominant in both peri-implantitis and periodontitis, while bacteria such as Parvimonas micra were only detected in peri-implantitis. The prevalence of periodontopathic bacteria was not high, while quantitative evaluation revealed that, in most cases, prevalence was higher at peri-implantitis sites than at periodontitis sites.
CONCLUSIONS
The biofilm in peri-implantitis showed a more complex microbial composition when compared with periodontitis. Common periodontopathic bacteria showed low prevalence, and several bacteria were identified as candidate pathogens in peri-implantitis. |
Workflow Management: Models, Methods, and Systems | This book offers a comprehensive introduction to workflow management, the management of business processes with information technology. By defining, analyzing, and redesigning an organization’s resources and operations, workflow management systems ensure that the right information reaches the right person or computer application at the right time. The book provides a basic overview of workflow terminology and organization, as well as detailed coverage of workflow modeling with Petri nets. Because Petri nets make definitions easier to understand for nonexperts, they facilitate communication between designers and users. The book includes a chapter of case studies, review exercises, and a glossary. |
Vessel wall contrast enhancement: a diagnostic sign of cerebral vasculitis. | PURPOSE
Inflammatory stenoses of cerebral arteries cause stroke in patients with florid vasculitis. However, diagnosis is often difficult even with digital subtraction angiography (DSA) and biopsy. The purpose of this study was to establish the value of contrast-enhanced MRI, proven to be sensitive to extradural arteritis, for the identification of intracranial vessel wall inflammation.
PATIENTS AND METHODS
Twenty-seven patients with a diagnosis of cerebral vasculitis affecting large brain vessels were retrieved from the files: 8 children (2-10 years, 7 female, 1 male) and 19 adults (16-76 years, 10 female, 9 male). Diagnosis was based on histological or serological proof of vasculitis or on clinical and imaging criteria. All MRI examinations included diffusion-weighted imaging, time-of-flight magnetic resonance angiography (TOF-MRA) and contrast-enhanced scans. MRI scans were assessed for the presence of ischemic brain lesions, arterial stenoses, vessel wall thickening and contrast uptake.
RESULTS
Ischemic changes of the brain tissue were seen in 24/27 patients and restricted diffusion suggestive of recent ischemia in 17/27; 25/27 patients had uni- or multifocal stenoses of intracranial arteries on TOF-MRA and 5/6 had stenoses on DSA. Vessel wall thickening was identified in 25/27, wall enhancement in 23/27 patients.
CONCLUSION
Wall thickening and intramural contrast uptake are frequent findings in patients with active cerebral vasculitis affecting large brain arteries. Further prospective studies are required to determine the specificity of this finding. |
Finding and Fixing Vehicle NVH Problems with Transfer Path Analysis | This article discusses the use of experimental transfer path analysis (TPA) to find optimized solutions to NVH problems remaining late in vehicle development stages. After a short review of established TPA methods, four practical case histories are discussed to illustrate how TPA, FE models, and practical experiments can supplement each other efficiently for finding optimum and attribute-balanced solutions to complex NVH issues late in the development process. Experimental transfer path analysis (TPA) is a fairly well established technique,1,2 for estimating and ranking individual low-frequency noise or vibration contributions via the different structural transmission paths from point-coupled powertrain or wheel suspensions to the vehicle body. TPA is also used to analyze the transmission paths into vibration-isolated truck or tractor cabs. TPA can also be used at higher frequencies (above 150-200 Hz) in road vehicles, although it may be reasonable to introduce a somewhat different formulation based on the response statistics of multimodal vibro-acoustic systems with strong modal overlap.3 When NVH problems still remain close to start of production (SOP), experimental TPA is often a favored technique to investigate further possibilities to fine-tune the rubber components of the engine or wheel suspension with respect to NVH. The aim is to further improve NVH with minimal negative impact on other vehicle attributes, such as ride comfort, handling, drivability, durability, etc. The only design parameters that can directly be changed in a "what if?" study based purely on experimental TPA are the dynamic properties of the rubber elements connecting the source and the receiving structure. Also, any reduction of transfer path contributions to noise or vibration in that case will be the result of reducing some of the dynamic stiffnesses of the connecting elements. To take any other design changes into account, additional measurements are normally necessary. Each degree of freedom (DOF) acting at interface points between a vibration source system and a receiving, passive vibro-acoustic system is a transfer path in TPA. TPA can also be performed analytically, using FE models or specialized system analysis software.4 The experimental TPA method involves: 1) an indirect measurement procedure for estimating operating force components acting at the coupled DOFs; 2) the direct or reciprocal measurement of all transfer frequency response functions (FRFs) between the response in points of interest (e.g., at the driver's ear) and the points where these forces act. The FRFs are measured with the receiving subsystem disconnected at all … |
INTEGRATION OF MUNICIPAL SOLID WASTE MANAGEMENT IN ACCRA (GHANA): BIOREACTOR TREATMENT TECHNOLOGY AS AN INTEGRAL PART OF THE MANAGEMENT PROCESS | The increasing solid waste generated in the Accra metropolis has not been accompanied by adequate sanitation facilities and management programs. Notable among the waste management problems is inadequate operational funding from the municipality's budget allocation for the collection and disposal processes. The disposal methods mostly depend on obsolete dumping, with the associated environmental and social risks. Recycling of waste, which reduces waste dumping, has also not been effective. Given the problem of inadequate operational funding, it is possible that the sanitary landfill under construction by the AMA may revert to a dump. This project, however, is an opportunity to introduce bioreactor waste treatment technology. This technology converts only degradable organic matter into useful fuel and reduces the environmental impact of organic wastes; therefore, not all the solid waste components are treated. The integration of this technology with collection and supply for recycling and composting is therefore important in order to cater for the other components. Three levels of integration are proposed here. One level is to link the collection of materials for recycling and composting with available markets. The waste collection is also to be integrated with the bioreactor treatment process. The third level is to bring the biological treatment facility, collection, recycling, and composting programs into co-operation with the stakeholders (the management authorities, the public, and the private investors). The underlying purpose of this study is to develop a conceptual methodology for MSWM strategies in Accra that demonstrates the ability to include social, environmental, and economic compatibilities as the dimensions of a sustainable waste management system. |
Differential Evolution: A Survey of the State-of-the-Art | Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE. |
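For reference, a minimal DE/rand/1/bin loop, the canonical variant the survey builds on, can be written as follows; the population size and the F and CR values are conventional defaults, not recommendations from the paper.

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=40, F=0.8, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin: mutate with the scaled difference of two random
    members added to a third, crossover binomially, keep the better vector."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = lo + rng.random((NP, D)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(NP):
            a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True           # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                        # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# usage: minimize the sphere function in 5 dimensions
best, val = de_rand_1_bin(lambda x: np.sum(x**2), [(-5, 5)] * 5)
```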
Government Control of the Media | We present a formal model of government control of the media to illuminate variation in media freedom across countries and over time. The extent of government control depends critically on two variables: the mobilizing character of the government and the size of the advertising market. Media bias is greater and state ownership of the media more likely when governments seek to mobilize populations through biased reporting; however, the distinction between state and private media is smaller. Large advertising markets reduce media bias in both state and private media, but increase the incentive for the government to nationalize private media. We illustrate these arguments with a case study of media freedom in postcommunist Russia, where media bias has responded to the mobilizing needs of the Kremlin and government control over the media has grown in tandem with the size of the advertising market. |
Privacy, Risk Perception, and Expert Online Behavior: An Exploratory Study of Household End Users | Advances in online technologies have raised new concerns about privacy. A sample of expert household end users was surveyed concerning privacy, risk perceptions, and online behavior intentions. A new e-privacy typology consisting of privacy-aware, privacy-suspicious, and privacy-active types was developed from a principal component factor analysis. Results suggest the presence of a privacy hierarchy of effects where awareness leads to suspicion, which subsequently leads to active behavior. An important finding was that privacy-active behavior that was hypothesized to increase the likelihood of online subscription and purchasing was not found to be significant. A further finding was that perceived risk had a strong negative influence on the extent to which respondents participated in online subscription and purchasing. Based on these results, a number of implications for managers and directions for future research are discussed. |
Concerning Lev Shestov's conception of ethics | Anyone acquainted with Shestov's ideas knows that this philosopher regards ethics, together with reason, as being the enemies of man. Usually readers experience great difficulty in accepting his position on this matter, and it frightens them. It is as if Shestov is pushing his readers right out of their depth. In order to put right 'damage' of that kind, Professor George Kline has taken the step of amending Shestov's use of the word 'ethics' by adding the word 'rational' and thus arriving at the term 'rational ethics'. Doing so suggests that in Shestov's scheme of things there is another ethics, which is a somehow pure, good, authentic ethics, one that saves the philosopher from pronouncing an absolute condemnation of ethics as such. Reading Shestov is rendered bearable and acceptable again. Be that as it may, I do not think it permissible to proceed in that manner, for Shestov himself never speaks of two different kinds of ethics. Here I wish to examine two important matters: on one hand, to probe the question regarding what we fear if we are obliged to reject ethics; on the other hand, to consider where Shestov's antipathy to ethics springs from. I am struck by the fact that almost all his readers react alike: "He is a very beguiling thinker, but...." This "but" could equally well be conveyed by the words "But of course it is necessary to purify and correct his texts because it is plain that he exaggerates. What he writes could even be dangerous." Didn't Filosofov and Frank reproach him for "corrupting young people," that same notorious reproach which, as we know, cost Socrates his life? This means that Shestov's works do conceal some kind of threat, just as do the works of Nietzsche and even those of Dostoevskij, though people have always tried, and continue trying, to ascribe holy and prophetic
WiFi-Nano: reclaiming WiFi efficiency through 800 ns slots | The increase in WiFi physical layer transmission speeds from 1 Mbps to 1 Gbps has reduced transmission times for a 1500 byte packet from 12 ms to 12 us. However, WiFi MAC overheads such as channel access and acks have not seen similar reductions and cumulatively contribute about 150 us on average per packet. Thus, the efficiency of WiFi has deteriorated from over 80% at 1 Mbps to under 10% at 1 Gbps.
In this paper, we propose WiFi-Nano, a system that uses 800 ns slots to significantly improve WiFi efficiency. Reducing slot time from 9 us to 800 ns makes backoffs efficient, but clear channel assessment can no longer be completed in one slot since preamble detection can now take multiple slots. Instead of waiting for multiple slots for detecting preambles, nodes speculatively transmit preambles as their backoff counters expire, while continuing to detect preambles using self-interference cancellation. Upon detection of preambles from other transmitters, nodes simply abort their own preamble transmissions, thereby allowing the earliest transmitter to succeed. Further, receivers speculatively transmit their ack preambles at the end of packet reception, thereby reducing ack overhead. We validate the effectiveness of WiFi-Nano through implementation on an FPGA-based software defined radio platform, and through extensive simulations, demonstrate efficiency gains of up to 100%.
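The efficiency figures quoted above follow from simple airtime arithmetic; the sketch below reproduces the "under 10% at 1 Gbps" claim using the ~150 us average per-packet overhead stated in the abstract (overheads at 1 Mbps differed, so the 80% figure is not reproduced by the same constant).

```python
def wifi_efficiency(rate_bps, payload_bytes=1500, overhead_s=150e-6):
    """Fraction of airtime spent on payload, given a fixed per-packet
    MAC overhead (channel access, acks, etc.)."""
    tx = payload_bytes * 8 / rate_bps   # payload transmission time in seconds
    return tx / (tx + overhead_s)

print(f"{wifi_efficiency(1e9):.1%}")    # ~7.4% at 1 Gbps, consistent with 'under 10%'
```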
Dynamic Performance of a SCARA Robot Manipulator With Uncertainty Using Polynomial Chaos Theory | This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It incorporates a simple linear control algorithm into the formulation to show the capability of the PCT framework. |
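As a flavor of how PCT propagates parameter uncertainty, the sketch below expands a scalar response in probabilists' Hermite polynomials for a single Gaussian-uncertain parameter (e.g., a link mass) and reads off the mean and variance from the coefficients; the example response is hypothetical, and the paper's 4-DOF formulation is far richer.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, order=4, quad_deg=20):
    """Project response f(xi), xi ~ N(0,1), onto probabilists' Hermite
    polynomials: c_n = E[f(xi) He_n(xi)] / n!  (Gauss-Hermite quadrature)."""
    x, w = He.hermegauss(quad_deg)
    w = w / w.sum()                      # normalize weights to a probability measure
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, np.eye(order + 1)[n]))
                     / math.factorial(n) for n in range(order + 1)])

# e.g. a response in which an uncertain link mass m = 1 + 0.1*xi enters as 1/m:
c = pce_coeffs(lambda xi: 1.0 / (1.0 + 0.1 * xi))
mean = c[0]
var = sum(math.factorial(n) * c[n] ** 2 for n in range(1, len(c)))
print(mean, var)   # statistics of the response, recovered from the expansion
```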
Extracellular synthesis of silver and gold nanoparticles by Sporosarcina koreensis DC4 and their biological applications. | The present study highlights the efficient microbial synthesis of silver and gold nanoparticles by the Sporosarcina koreensis DC4 strain. The synthesized nanoparticles were characterized by ultraviolet-visible spectrophotometry, which displayed maximum absorbance at 424 nm and 531 nm for silver and gold nanoparticles, respectively. The spherical shape of the nanoparticles was characterized by field emission transmission electron microscopy. Energy dispersive X-ray spectroscopy and elemental mapping displayed the purity and the maximum elemental distribution of silver and gold in the respective nanoproducts. The X-ray diffraction spectroscopy results demonstrated the crystalline nature of the synthesized nanoparticles. Particle size analysis demonstrated the nanoparticle distribution with respect to intensity, volume, and number of nanoparticles. For biological applications, the silver nanoparticles were explored in terms of MIC and MBC against pathogenic microorganisms such as Vibrio parahaemolyticus, Escherichia coli, Salmonella enterica, Bacillus anthracis, Bacillus cereus, and Staphylococcus aureus. Moreover, the silver nanoparticles in combination with commercial antibiotics, such as vancomycin, rifampicin, oleandomycin, penicillin G, novobiocin, and lincomycin, were explored for the enhancement of antibacterial activity; the results showed that a 3 μg concentration of silver nanoparticles sufficiently enhances the antimicrobial efficacy of commercial antibiotics against pathogenic microorganisms. Furthermore, the potential of the silver nanoparticles for inhibiting biofilm formation by S. aureus, Pseudomonas aeruginosa, and E. coli was examined, and the results revealed sufficient activity at a 6 μg concentration. In addition, the gold nanoparticles were applied to catalysis, for the reduction of 4-nitrophenol to 4-aminophenol using sodium borohydride, and positive results were attained. |
A genetic analysis of the validity of the Hypomanic Personality Scale. | OBJECTIVES
Studies of mania risk have increasingly relied on measures of subsyndromal tendencies to experience manic symptoms. The measures of mania risk employed in those studies have been shown to predict manic onset, to show familial associations, and to demonstrate expected correlations with psychosocial variables related to bipolar disorder. However, little work has been conducted to validate such measures against biologically relevant indices, or to consider whether early adversity, which has been shown to be highly elevated among those with bipolar disorder, is related to higher scores on mania risk measures. This study tested whether a well-used, self-report measure of vulnerability to mania is associated with several candidate genes that have previously been linked with bipolar disorder or with early adversity. Interactions of genes with early adversity in the prediction of mania vulnerability were also tested.
METHODS
Undergraduate students from the University of Miami (Coral Gables, FL, USA) (N = 305) completed the Hypomanic Personality Scale and the Risky Families Scale, and provided blood for genotyping.
RESULTS
Findings indicated that the Hypomanic Personality Scale was related to a number of dopamine-relevant polymorphisms and with early adversity. A polymorphism of ANKK1 appeared to specifically increase mania risk in the context of early adversity.
CONCLUSIONS
These results provide additional support for the validity of the Hypomanic Personality Scale. |
Smart cities and the Internet of Things | The smart city concept represents a compelling platform for IT-enabled service innovation. It offers a view of the city where service providers use information technologies to engage with citizens to create more effective urban organizations and systems that can improve the quality of life. The emerging Internet of Things (IoT) model is foundational to the development of smart cities. Integrated cloud-oriented architecture of networks, software, sensors, human interfaces, and data analytics are essential for value creation. IoT smart-connected products and the services they provision will become essential for the future development of smart cities. This paper will explore the smart city concept and propose a strategy development model for the implementation of IoT systems in a smart city context. |
Biomedical Waste Management: A study of knowledge, attitude and practice among health care personnel at tertiary care hospital in Rajkot | According to the Bio-Medical Waste (Management and Handling) Rules, 1998 of India, Bio-Medical Waste (BMW) means any solid, fluid, or liquid waste, including its containers and any intermediate product, which is generated during the diagnosis, treatment, or immunization of human beings or animals, or in research activities pertaining thereto, or in the production or testing of biologicals, and includes ten categories of the same [1]. The majority of waste (75-90%) produced by healthcare providers is non-risk or general waste. |
A Randomized Algorithm for CCA | We present RandomizedCCA, a randomized algorithm for computing canonical correlation analysis (CCA), suitable for large datasets stored either out of core or on a distributed file system. Accurate results can be obtained in as few as two data passes, which is relevant for distributed processing frameworks in which iteration is expensive (e.g., Hadoop). The strategy also provides an excellent initializer for standard iterative solutions. |
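The core linear-algebra step behind any CCA solver can be sketched as follows; this is a plain two-view CCA via the whitened cross-covariance SVD, with a comment on where a randomized sketch would enter. It is our reading of the general approach, not the paper's exact algorithm.

```python
import numpy as np

def cca_via_svd(X, Y, k=2, reg=1e-6):
    """CCA for centered data matrices X (n x p) and Y (n x q): the SVD of
    Cxx^{-1/2} Cxy Cyy^{-1/2} yields canonical directions and correlations."""
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]     # directions and correlations

# To emulate the randomized, few-pass flavor, one could first sketch X and Y
# with a random projection (e.g. X @ G for a Gaussian G) and run the same SVD
# on the reduced matrices; that detail is an assumption on our part.
```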
Development of polyvinyl alcohol-sodium alginate gel-matrix-based wound dressing system containing nitrofurazone. | Polyvinyl alcohol (PVA)/sodium alginate (SA) hydrogel-matrix-based wound dressing systems containing nitrofurazone (NFZ), a topical anti-infective drug, were developed using a freeze-thawing method. Aqueous solutions of nitrofurazone and PVA/SA mixtures in different weight ratios were mixed homogeneously, placed in petri dishes, frozen at -20 degrees C for 18 h, and thawed at room temperature for 6 h, for three consecutive cycles, and evaluated for swelling ratio, tensile strength, elongation, and thermal stability of the hydrogel. Furthermore, drug release from this nitrofurazone-loaded hydrogel, an in vitro protein adsorption test, and in vivo wound healing observations in rats were performed. Increased SA concentration decreased the gelation%, maximum strength, and break elongation, but resulted in an increase in the swelling ability, elasticity, and thermal stability of the hydrogel film. However, SA had an insignificant effect on the release of nitrofurazone. The amount of protein adsorbed on the hydrogel increased with increasing sodium alginate ratio, indicating reduced blood compatibility. In vivo experiments showed that this hydrogel improved the healing rate of artificial wounds in rats. Thus, PVA/SA hydrogel-matrix-based wound dressing systems containing nitrofurazone could be a novel approach in wound care. |
Automated Evaluation of Scientific Writing: AESW Shared Task Proposal | The goal of the Automated Evaluation of Scientific Writing (AESW) Shared Task is to analyze the linguistic characteristics of scientific writing to promote the development of automated writing evaluation tools that can assist authors in writing scientific papers. The proposed task is to predict whether a given sentence requires editing to ensure its "fit" with the scientific writing genre. We describe the proposed task, training, development, and test data sets, and evaluation metrics. |
Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention | In this paper, we present a novel approach for the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem. We generate NL explanations comprising the evidence to support the answer to a question asked about an image using two sources of information: (a) annotations of entities in an image (e.g., object labels, region descriptions, relation phrases) generated from the scene graph of the image, and (b) the attention map generated by a VQA model when answering the question. We show how combining the visual attention map with the NL representation of relevant scene graph entities, carefully selected using a language model, can give reasonable textual explanations without the need for any additionally collected data (explanation captions, etc.). We run our algorithms on the Visual Genome (VG) dataset and conduct internal user studies to demonstrate the efficacy of our approach over a strong baseline. We have also released a live web demo showcasing our VQA and textual explanation generation using scene graphs and visual attention. |
A cross-platform model for secure Electronic Health Record communication | During the past decade, there have been many regional, national, and European projects focused on the development of platforms for secure access and sharing of distributed patient information. A platform is needed because present local or enterprise-wide information systems are typically not intended for cross-organisational secure access to patient data. Most of the present secure platforms are local or regional. Commonly used platform types in the health care environment vary from secure point-to-point communication systems to internet-based portals. This paper defines an enhanced cross-security platform which makes it possible for different kinds of local, regional, and national health information systems to communicate in a secure way. The proposed evolutionary model interconnects regional or national security domains with the help of a cross-platform zone. A more revolutionary model based on peer-to-peer Grid-like networks and dynamic security credentials is also discussed. The proposed evolutionary model uses cross-domain security and interoperability services to ensure secure communication and interoperability between different security domains. The platform supports both communication defined beforehand and ad hoc dynamic access to distributed electronic health records (EHRs). The internet is proposed as the "glue" between different regional or national security domains. |
In depth performance evaluation of LTE-M for M2M communications | The Internet of Things (IoT) represents the next wave in networking and communication, which will bring by 2020 tens of billions of Machine-to-Machine (M2M) devices connected through the internet. Hence, this rapid increase in Machine Type Communications (MTC) poses a challenge for cellular operators to support M2M communications without hindering the existing Quality of Service of already established Human-to-Human (H2H) communications. LTE-M is one of the candidates to support M2M communications in Long Term Evolution (LTE) cellular networks. In this paper, we present an in-depth performance evaluation of LTE-M based on cross-layer network metrics. Compared with LTE Category 0, previously released by 3GPP for MTC, simulation results show that LTE-M offers additional advantages to meet M2M communication needs in terms of wider coverage, lower throughput, and a larger number of machines connected through the LTE network. However, we show that LTE-M is not yet able to meet future application requirements regarding near-zero latency and advanced Quality of Service (QoS) for this massive number of connected Machine Type Devices (MTDs). |
Human Resource Information Systems: A Current Assessment | Human resource information systems (HRIS) have become a major MIS subfunction within the personnel areas of many large corporations. This article traces the development of HRIS as an entity independent of centralized MIS, assesses its current operation and technological base, and considers its future role in the firm, especially its relationship to the centralized MIS function. The results of a survey of HRIS professionals from 171 U.S. corporations are described in order to provide an overview of the current design, operation, and effectiveness of HRIS. The findings of the survey are discussed in terms of their implications for management of human resource information systems. |
Global Downscaling of Remotely-Sensed Soil Moisture using Neural Networks | Characterizing soil moisture at spatio-temporal scales relevant to land surface processes (i.e., of the order of a kilometer) is necessary in order to quantify its role in regional feedbacks between the land surface and the atmospheric boundary layer. Moreover, several applications such as agricultural management can benefit from soil moisture information at fine spatial scales. Soil moisture estimates from current satellite missions have a reasonably good temporal revisit over the globe (2-3 days repeat time); however, their finest spatial resolution is 9 km. NASA's Soil Moisture Active Passive (SMAP) satellite has estimated soil moisture at two spatial scales of 36 km and 9 km since April 2015. In this study, we develop a neural-network-based downscaling algorithm using SMAP observations and disaggregate soil moisture to 2.25 km spatial resolution. Our approach uses mean monthly Normalized Difference Vegetation Index (NDVI) as ancillary data to quantify sub-pixel heterogeneity of soil moisture. Evaluation of the downscaled soil moisture estimates against in situ observations shows that their accuracy is better than or equal to that of the SMAP 9 km soil moisture estimates. |
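A toy version of such a downscaling network is sketched below with scikit-learn; the synthetic relation between coarse soil moisture, NDVI, and fine-scale soil moisture is invented purely to make the example runnable, and the real training targets would come from models or airborne observations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: each row pairs a coarse-pixel soil moisture value
# with the fine-pixel monthly-mean NDVI; the target is fine-scale soil moisture.
rng = np.random.default_rng(0)
coarse_sm = rng.uniform(0.05, 0.45, 5000)
ndvi = rng.uniform(0.1, 0.9, 5000)
fine_sm = coarse_sm * (1.1 - 0.3 * ndvi) + 0.01 * rng.normal(size=5000)  # synthetic truth

X = np.column_stack([coarse_sm, ndvi])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, fine_sm)

# Downscale: broadcast one coarse retrieval over its fine NDVI pixels.
print(net.predict(np.column_stack([np.full(4, 0.30), [0.2, 0.4, 0.6, 0.8]])))
```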
Deep learning in neural networks: An overview | In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. |
Optimal convection volume for improving patient outcomes in an international incident dialysis cohort treated with online hemodiafiltration | Online hemodiafiltration (OL-HDF), the most efficient renal replacement therapy, enables enhanced removal of small and large uremic toxins by combining diffusive and convective solute transport. Randomized controlled trials on prevalent chronic kidney disease (CKD) patients showed improved patient survival with high-volume OL-HDF, underlining the effect of convection volume (CV). This retrospective international study was conducted in a large cohort of incident CKD patients to determine the CV threshold and range associated with a survival advantage. Data were extracted from a cohort of adult CKD patients treated by post-dilution OL-HDF over a 101-month period. In total, 2293 patients with a minimum of 2 years of follow-up were analyzed using advanced statistical tools, including cubic spline analyses for determination of the CV range over which a survival increase was observed. The relative survival rate of OL-HDF patients, adjusted for age, gender, comorbidities, vascular access, albumin, C-reactive protein, and dialysis dose, was found to increase at about 55 l/week of CV and to stay increased up to about 75 l/week. Similar analysis of pre-dialysis β2-microglobulin (a marker of middle-molecule uremic toxins) concentrations found a nearly linear decrease in marker concentration as CV increased from 40 to 75 l/week. Analysis of log C-reactive protein levels showed a decrease over the same CV range. Thus, a convection dose target based on convection volume should be considered, and needs to be confirmed by prospective trials, as a new determinant of dialysis adequacy. |
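To make the spline step concrete, here is a toy version of locating the convection-volume range associated with lower risk using a cubic smoothing spline. The log hazard-ratio values are fabricated stand-ins shaped to echo the reported 55-75 l/week window; they are not data from the study.

```python
# Locate the CV range with log(hazard ratio) < 0 via a cubic smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

cv = np.array([40, 45, 50, 55, 60, 65, 70, 75, 80])            # l/week
log_hr = np.array([0.15, 0.08, 0.02, -0.05, -0.09, -0.10, -0.08, -0.04, 0.01])

spline = UnivariateSpline(cv, log_hr, k=3, s=0.005)            # cubic, light smoothing
grid = np.linspace(40, 80, 401)
benefit = grid[spline(grid) < 0]                               # CVs with apparent advantage
print(f"survival advantage roughly between {benefit.min():.0f} "
      f"and {benefit.max():.0f} l/week")
```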
Frequency doubling perimetry in patients with mild and moderate pituitary tumor-associated visual field defects detected by conventional perimetry. | PURPOSE
To test the ability of frequency doubling technology (FDT) perimetry to identify pituitary tumor-associated visual field defects.
METHODS
Twenty-three eyes from patients with pituitary tumor (PT) and 28 normal eyes were studied. Subjects were eligible when presenting with normal visual acuity and mild or moderate temporal field loss in at least one eye on Goldmann and standard automated perimetry (SAP). FDT testing was performed using the C-20-5 screening and the C-20 full-threshold test. The sensitivity and specificity for identification of the field defect were determined and the groups were compared with regard to several parameters. Finally, we compared the ability of FDT and SAP to detect the characteristic pattern of temporal hemianopia/quadrantanopia.
RESULTS
In the screening test, sensitivity ranged from 21.74% to 43.48% while specificity was 100%. In the threshold test, sensitivity ranged from 52.17% to 82.61% (specificities of 85.71% and 60.71%, respectively), according to total deviation analysis, and from 30.43% to 73.91% (specificities of 96.42% and 64.28%, respectively), according to the pattern deviation analysis. Patients with PT presented a significantly higher number of abnormal points in the temporal hemifield. In 12 eyes FDT and SAP were both able to identify the characteristic pattern of visual field defect; in 4 eyes FDT performed better than SAP; in 4 eyes, SAP performed better than FDT, while in 3 neither test was able to determine the pattern of visual field defect correctly.
CONCLUSIONS
Threshold FDT perimetry seems to be a sensitive instrument for identifying abnormality in eyes with chiasmal compression-induced field defects detected on conventional perimetry. |
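For readers who want to reproduce the arithmetic, sensitivity and specificity are simple ratios; the counts below are reconstructed from the percentages reported above (e.g., 19 of the 23 affected eyes yields the 82.61% figure) rather than taken from the raw data.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity(true_pos: int, diseased: int) -> float:
    return true_pos / diseased

def specificity(true_neg: int, healthy: int) -> float:
    return true_neg / healthy

# 19 of 23 affected eyes flagged -> 82.61%; 17 of 28 normal eyes correctly
# passed -> 60.71%, matching the threshold-test figures above.
print(f"sensitivity: {sensitivity(19, 23):.2%}")
print(f"specificity: {specificity(17, 28):.2%}")
```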
Need for human direction of data mining; Cross-Industry Standard Process (CRISP-DM); analyzing automobile warranty claims | Chapter outline: What is data mining? Why data mining? Need for human direction of data mining. Cross-Industry Standard Process: CRISP-DM. Case study 1: Analyzing automobile warranty claims (the CRISP-DM industry standard process in action). Fallacies of data mining. What tasks can data mining accomplish? Case study 2: Predicting abnormal stock market returns using neural networks. Case study 3: Mining association rules from legal databases. Case study 4: Predicting corporate bankruptcies using decision trees. Case study 5: Profiling the tourism market using k-means clustering analysis. |
A modulated model predictive control scheme for a two-level voltage source inverter | Traditional finite-set model predictive control (FS-MPC) techniques are characterized by a variable switching frequency, which causes noise as well as large voltage and current ripple. In this paper a novel predictive control strategy with a fixed switching frequency, called modulated model predictive control (M2PC), is proposed for a voltage source inverter, with the aim of obtaining a modulated waveform at the output of the converter. The feasibility of this strategy is evaluated using simulation results to demonstrate the advantages of predictive control, such as fast dynamic response and the easy inclusion of nonlinearities. Finally, a modified strategy is proposed in order to naturally reduce the common-mode voltage. The constraints of the system are maintained, but the performance of the system in terms of power quality is improved when compared to FS-MPC. |
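A minimal sketch of the M2PC duty-cycle idea for one switching period, under common assumptions: a one-step current prediction for an RL-type load and duty cycles inversely proportional to the cost of each applied vector. The load model, parameter values, and cost weighting are illustrative, not the paper's exact formulation.

```python
# Toy M2PC step for a two-level VSI: evaluate the cost of a zero vector and
# two adjacent active vectors, then split the period among them so that
# lower-cost vectors get more time.
import numpy as np

L, Ts, Vdc = 10e-3, 100e-6, 400.0            # inductance, period, dc-link (assumed)
i_ref, i_meas, v_back = 10 + 0j, 9 + 0.5j, 50 + 0j

def cost(v: complex) -> float:
    i_pred = i_meas + Ts / L * (v - v_back)   # one-step Euler current prediction
    return abs(i_ref - i_pred) ** 2 + 1e-9    # epsilon avoids division by zero

v0 = 0j                                       # zero vector
v1 = 2 / 3 * Vdc                              # active vector V1
v2 = 2 / 3 * Vdc * np.exp(1j * np.pi / 3)     # adjacent active vector V2

g0, g1, g2 = cost(v0), cost(v1), cost(v2)
K = 1.0 / (1 / g0 + 1 / g1 + 1 / g2)          # normalization so duties sum to 1
d0, d1, d2 = K / g0, K / g1, K / g2
print(f"d0={d0:.3f}  d1={d1:.3f}  d2={d2:.3f}  (sum={d0 + d1 + d2:.3f})")
```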
An Exploratory Analysis of Market Orientation of Small and Medium-Sized Businesses (SMEs) in Peru | Traditional marketing knowledge holds that a market-oriented firm provides its employees with a better understanding of its customers, competitors, and environment. This knowledge, in turn, enables the organization to deliver greater customer satisfaction and achieve stronger firm performance. The objective of this research was to assess the market orientations of small business managers in the retail environment in Peru and to provide empirical information about the characteristics of market-oriented and non-market-oriented managers. A concerted effort was made to determine the differences in the perceived importance of different elements of the marketing function. |
Development of automatic vehicle plate detection system | This paper presents the development of an automatic vehicle plate detection system using image processing techniques, commonly known as Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection is widely used in safety and security systems, especially in car parking areas. Besides the safety aspect, such systems are applied to monitor road traffic, for example to measure vehicle speed and identify a vehicle's owner, and can assist the authorities in identifying stolen vehicles, motorcycles as well as cars. Optical Character Recognition (OCR) is the prominent technique employed by researchers to analyse vehicle plate images, but on its own it often fails to convert the text accurately, and the characters, background, and size of vehicle plates vary from one country to another. Hence, this project proposes a combination of image processing techniques and OCR to obtain accurate plate recognition for vehicles in Malaysia. The outcome of this study is a system capable of accurately detecting the characters and numbers of vehicle plates against different backgrounds (black and white). This study also includes the development of a Graphical User Interface (GUI) to help users recognize the characters and numbers on vehicle or license plates. |
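A hedged sketch of such a pipeline: OpenCV preprocessing followed by Tesseract OCR. The file name, threshold strategy, and character whitelist are assumptions for illustration, not the paper's exact settings.

```python
# Plate reading sketch: binarize the plate crop, normalize polarity, then OCR.
import cv2
import pytesseract

img = cv2.imread("plate.jpg")                 # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu thresholding handles both black-on-white and white-on-black plates
# once the polarity is normalized.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
if binary.mean() < 127:                       # white-on-black plate: invert
    binary = cv2.bitwise_not(binary)

# Restrict Tesseract to uppercase letters and digits; psm 7 treats the
# image as a single line of text.
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHJKLMNPQRSTUVWXYZ0123456789"
text = pytesseract.image_to_string(binary, config=config)
print(text.strip())
```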
Visually guided landing of an unmanned aerial vehicle | We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad), and navigation to the target, from an arbitrary initial position and orientation. We use vision for precise target detection and recognition, and a combination of vision and GPS for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate, robust and repeatable. |
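As a rough illustration of the vision step, the sketch below segments a dark helipad marking from an aerial frame and recovers its image centroid from contour moments; moment invariants of the kind computed at the end are commonly used to recognize the helipad shape. The image file and threshold are placeholders, not the paper's parameters.

```python
# Segment the darkest blob in an aerial frame and locate its centroid.
import cv2

frame = cv2.imread("aerial_frame.jpg")              # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark = target

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
target = max(contours, key=cv2.contourArea)          # largest dark blob
m = cv2.moments(target)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]    # centroid in pixels
hu = cv2.HuMoments(m).flatten()     # scale/rotation-invariant shape descriptors
print(f"candidate helipad centroid: ({cx:.0f}, {cy:.0f})")
```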
Information Security Awareness: Literature Review and Integrative Framework | Individuals’ information security awareness (ISA) plays a critical role in determining their security-related behavior in both organizational and private contexts. Understanding this relationship has important implications for individuals and organizations alike who continuously struggle to protect their information security. Despite much research on ISA, there is a lack of an overarching picture of the concept of ISA and its relationship with other constructs. By reviewing 40 studies, this study synthesizes the relationship between ISA and its antecedents and consequences. In particular, we (1) examine definitions of ISA; (2) categorize antecedents of ISA according to their level of origin; and (3) identify consequences of ISA in terms of changes in beliefs, attitudes, intentions, and actual security-related behaviors. A framework illustrating the relationships between the constructs is provided and areas for future research are identified. |
An evolutionary trust game for the sharing economy | In this paper, we present an evolutionary trust game to investigate the formation of trust in the so-called sharing economy from a population perspective. To the best of our knowledge, this is the first attempt to model trust in the sharing economy using the evolutionary game theory framework. Our sharing economy trust model consists of four types of players: a trustworthy provider, an untrustworthy provider, a trustworthy consumer, and an untrustworthy consumer. Through systematic simulation experiments, five different scenarios with varying proportions and types of providers and consumers were considered. Our results show that each type of players influences the existence and survival of other types of players, and untrustworthy players do not necessarily dominate the population even when the temptation to defect (i.e., to be untrustworthy) is high. Our findings may have important implications for understanding the emergence of trust in the context of sharing economy transactions. |
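A toy version of such a population model, using discrete-time (exponential) replicator updates over the four strategy types. The payoff matrix below is invented for illustration and is not the paper's parameterization.

```python
# Four-strategy replicator dynamic for a sharing-economy trust game (toy).
import numpy as np

# Strategy indices: 0 trustworthy provider, 1 untrustworthy provider,
# 2 trustworthy consumer, 3 untrustworthy consumer.
payoff = np.array([
    [0.0,  0.0, 3.0, 1.0],
    [0.0,  0.0, 4.0, 0.5],
    [3.0, -1.0, 0.0, 0.0],
    [4.0, -2.0, 0.0, 0.0],
])

x = np.full(4, 0.25)                 # start from a uniform population
for _ in range(500):
    f = payoff @ x                   # expected payoff of each strategy
    x = x * np.exp(0.1 * f)          # exponential replicator step (stays positive)
    x /= x.sum()

print("final strategy shares:", x.round(3))
```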
Incremental diagnostic value of preoperative 99mTc-MIBI SPECT in patients with a parathyroid adenoma. | UNLABELLED
The purpose of this prospective study was to evaluate the diagnostic value of early parathyroid SPECT combined with quantitative analysis as compared with planar imaging in patients undergoing minimally invasive radioguided surgery.
METHODS
A total of 52 consecutive patients with primary hyperparathyroidism underwent planar and SPECT parathyroid scintigraphy 2-5 d before surgery. Each patient was studied with a single-tracer dual-phase technique using (99m)Tc-methoxyisobutylisonitrile ((99m)Tc-MIBI) and a double-tracer subtraction technique using a delayed (99m)Tc-pertechnetate scan. Immediately after the first (99m)Tc-MIBI planar image, a SPECT study was acquired. Before radioguided parathyroidectomy, each patient was reinjected with (99m)Tc-MIBI. Serum calcium levels were available for all patients before surgery and at 8 and 24 h after surgery. Serum parathyroid hormone (PTH) levels were also available for all patients. Quantitative analysis was performed using the average count ratio of parathyroid to left thyroid lobe, right thyroid lobe, and maximum thyroid activity. All patients had histopathologic examination of the removed glands.
RESULTS
The average time for radioguided surgery was 30 min (range, 20-40 min). Postsurgical calcium levels correlated significantly with adenoma weight (r = 0.5; P = 0.016). Combined planar scintigraphy correctly identified 41 adenomas (79%). SPECT increased the sensitivity to 96%. SPECT was superior to planar imaging in 9 patients, mainly those with ectopic adenomas or multinodular goiters. Gland size did not significantly affect the detectability of SPECT. (99m)Tc-MIBI retention was noted in only 31 adenomas (60%). The average uptake ratios of parathyroid counts to the left lobe, right lobe, and maximum thyroid activity were 1.20 +/- 0.42, 1.29 +/- 0.45, and 0.84 +/- 0.35, respectively. The latter ratio was significantly correlated with PTH levels before surgery (r = 0.408; P = 0.04).
CONCLUSION
Our data indicate that early preoperative SPECT in patients with primary hyperparathyroidism is essential for accurate localization of parathyroid adenomas and for the selection of patients who are candidates for minimally invasive radioguided surgery. Planar parathyroid imaging is less sensitive compared with SPECT, and washout kinetics of (99m)Tc-MIBI are unreliable in the dual-phase technique. Patients with higher presurgical PTH levels may especially benefit from radioguided surgery. |
Molecular targeting of glioblastoma: Drug discovery and therapies. | Despite advances in treatment for glioblastoma multiforme (GBM), patient prognosis remains poor. Although there is growing evidence that molecular targeting could translate into better survival for GBM, current clinical data show limited impact on survival. Recent progress in GBM genomics implicate several activated pathways and numerous mutated genes. This molecular diversity can partially explain therapeutic resistance and several approaches have been postulated to target molecular changes. Furthermore, most drugs are unable to reach effective concentrations within the tumor owing to elevated intratumoral pressure, restrictive vasculature and other limiting factors. Here, we describe the preclinical and clinical developments in treatment strategies of GBM. We review the current clinical trials for GBM and discuss the challenges and future directions of targeted therapies. |
Sentribute: image sentiment analysis from a mid-level perspective | Visual content analysis has always been important yet challenging. Thanks to the popularity of social networks, images have become a convenient carrier for information diffusion among online users. To understand the diffusion patterns and different aspects of social images, we need to interpret the images first. Similar to textual content, images also carry different levels of sentiment to their viewers. However, unlike text, where sentiment analysis can use easily accessible semantic and context information, how to extract and interpret the sentiment of an image remains quite challenging. In this paper, we propose an image sentiment prediction framework which leverages the mid-level attributes of an image to predict its sentiment. This makes the sentiment classification results more interpretable than directly using the low-level features of an image. To obtain better performance on images containing faces, we introduce eigenface-based facial expression detection as an additional mid-level attribute. An empirical study of the proposed framework shows improved performance in terms of prediction accuracy. More importantly, by inspecting the prediction results, we are able to discover interesting relationships between mid-level attributes and image sentiment. |
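A minimal sketch of the eigenface idea as a mid-level attribute: project face crops onto their leading principal components (eigenfaces) and train a linear classifier on the projections. The random arrays stand in for real aligned face images; dimensions and labels are assumptions, not the paper's data.

```python
# Eigenface-style expression attribute: PCA projection + linear classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
faces = rng.uniform(size=(200, 64 * 64))    # stand-in for aligned face crops
labels = rng.integers(0, 2, size=200)       # e.g. 0 = neutral, 1 = smiling

# PCA projects faces onto their leading eigenfaces; the SVM then predicts the
# expression attribute, which can feed a sentiment model as one mid-level feature.
clf = make_pipeline(PCA(n_components=40, whiten=True), LinearSVC())
clf.fit(faces, labels)
print(clf.predict(faces[:5]))
```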
Identifying Sarcasm in Twitter: A Closer Look | Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well. |
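To make the machine-learning setup concrete, here is a minimal lexical baseline of the kind such studies compare against: TF-IDF n-grams with a linear classifier. The four example tweets and their labels are invented; the paper's corpus is author-labeled and far larger.

```python
# Tiny lexical baseline for sarcasm detection: TF-IDF unigrams/bigrams
# feeding a logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["oh great, another monday", "i love sunny days",
          "wow, what a fantastic traffic jam", "this concert is amazing"]
labels = [1, 0, 1, 0]                     # 1 = sarcastic, 0 = literal

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["what a wonderful delay"]))
```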
AN EVALUATION OF THE 360° PROJECT MANAGEMENT COMPETENCY ASSESSMENT QUESTIONNAIRE | The primary purpose of this study was to evaluate a 360° project management competency questionnaire relevant to a chemical engineering environment. The competency questionnaire was developed using the input of the employees who took part in the appraisal. The secondary purpose of this study was to determine if significant differences existed between the multi-rater competency evaluations of different rater groups. Eighty technically qualified employees within a technology development environment were each evaluated by a number of raters, including themselves, their managers, customers and peers. In the case of both the Importance and the Performance Scales, single factors were extracted with internal reliabilities of 0.943 and 0.941 respectively. No significant differences were obtained on paired t-tests between the various rater groups. These findings and their implications are further discussed. |
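For illustration, a paired t-test comparing two rater groups on the same employees can be run as below; the ratings are fabricated and the six-employee sample is far smaller than the study's eighty.

```python
# Paired t-test between self-ratings and manager-ratings (fabricated data).
import numpy as np
from scipy import stats

self_ratings = np.array([4.1, 3.8, 4.5, 3.9, 4.2, 4.0])
manager_ratings = np.array([4.0, 3.9, 4.3, 4.1, 4.1, 3.9])

t, p = stats.ttest_rel(self_ratings, manager_ratings)
print(f"t = {t:.2f}, p = {p:.3f}")   # p > 0.05 would indicate no significant difference
```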
Average Reward Reinforcement Learning: Foundations, Algorithms, and Empirical Results | This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced, and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies, and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains. |
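A compact sketch of the core R-learning update on a made-up two-state task (not the paper's grid world or robot testbeds): the relative action values and the average-reward estimate rho are learned together, with rho adjusted only on greedy steps.

```python
# R-learning: the average-reward analogue of Q-learning, on a toy 2-state MDP.
import numpy as np

n_states, n_actions = 2, 2
R = np.zeros((n_states, n_actions))   # relative action values R(s, a)
rho = 0.0                             # running estimate of the average reward
alpha, beta, eps = 0.1, 0.01, 0.1     # value step, gain step, exploration rate
rng = np.random.default_rng(0)

def step(s: int, a: int):
    # Toy dynamics: in state 0, action 1 pays 1.0 and moves to state 1;
    # every other transition pays 0.1.
    return (s + a) % n_states, (1.0 if (s == 0 and a == 1) else 0.1)

s = 0
for _ in range(20000):
    greedy = rng.random() >= eps
    a = int(np.argmax(R[s])) if greedy else int(rng.integers(n_actions))
    s2, r = step(s, a)
    R[s, a] += alpha * (r - rho + R[s2].max() - R[s, a])
    if greedy:                        # update rho only on non-exploratory steps
        rho += beta * (r - rho + R[s2].max() - R[s].max())
    s = s2

print(f"estimated average reward rho = {rho:.3f}")  # optimal cycle averages 0.55
```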