Title | Abstract |
---|---|
Consumer perceptual mapping towards e-banking channel: (A study of bank BRI customer in Indonesia) | Implementing an e-banking service system offers a bank several advantages: cost and time efficiency, the ability to create differentiation, and the ability to target market segments at low cost. Determining the underlying motives that lead customers to select and prefer one channel over another requires a systematic exploration of customer perception. This study aims to understand customers' perceptions of the various e-banking channels and their motives in choosing among them. By understanding these perceptions, banks in Indonesia should be better positioned to introduce e-banking. A convenience sampling technique was used to select the 234 customers surveyed in this study. Findings suggest that the Automatic Teller Machine was perceived as low cost, low complexity, and the most useful. EFT was perceived much like the ATM, rated low on cost and complexity, but also low on security. Internet banking was perceived as secure, useful, and high on privacy, while SMS banking was perceived as easy to access and also high on privacy. Phone banking was perceived as the most expensive and least accurate. Future studies could apply other multivariate techniques and add further, more influential attributes. |
Measuring IT Service Management Capability: Scale Development and Empirical Validation | This paper conceptualizes IT service management (ITSM) capability, a key competence of today’s IT provider organizations, and presents a survey instrument to facilitate the measurement of an ITSM capability for research and practice. Based on the review of four existing ITSM maturity models (CMMISVC, COBIT 4.1, SPICE, ITIL v3), we first develop a multi-attributive scale to assess maturity on an ITSM process level. We then use this scale in a survey with 205 ITSM key informants who assessed IT provider organizations along a set of 26 established ITSM processes. Our exploratory factor analysis and measurement model assessment results support the validity of an operationalization of ITSM capability as a second-order construct formed by ITSM processes that span three dimensions: service planning, service transition, and service operation. The practical utility of our survey instrument and avenues for future research on ITSM capability are outlined. |
False Positive and False Negative Results in Urine Drug Screening Tests: Tampering Methods and Specimen Integrity Tests | Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drug-therapy compliance. Compliance testing is necessary for patients taking controlled drugs. Ordering and interpreting these tests requires knowledge of testing modalities, drug kinetics, and the various causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred initial test for urine drug screening. This method reliably detects commonly abused drugs such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs and drug metabolites, false-negative and false-positive results may occur in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. Careful attention to urine collection methods and performance of specimen integrity tests can identify some attempts by patients to produce false-negative test results. |
Brief cognitive-behavioral depression prevention program for high-risk adolescents outperforms two alternative interventions: a randomized efficacy trial. | In this depression prevention trial, 341 high-risk adolescents (mean age = 15.6 years, SD = 1.2) with elevated depressive symptoms were randomized to a brief group cognitive-behavioral (CB) intervention, group supportive-expressive intervention, bibliotherapy, or assessment-only control condition. CB participants showed significantly greater reductions in depressive symptoms than did supportive-expressive, bibliotherapy, and assessment-only participants at posttest, though only the difference compared with assessment controls was significant at 6-month follow-up. CB participants showed significantly greater improvements in social adjustment and reductions in substance use at posttest and 6-month follow-up than did participants in all 3 other conditions. Supportive-expressive and bibliotherapy participants showed greater reductions in depressive symptoms than did assessment-only controls at certain follow-up assessments but produced no effects for social adjustment and substance use. CB, supportive-expressive, and bibliotherapy participants showed a significantly lower risk for major depression onset over the 6-month follow-up than did assessment-only controls. The evidence that this brief CB intervention reduced risk for future depression onset and outperformed alternative interventions for certain ecologically important outcomes suggests that this intervention may have clinical utility. |
Brand relationships and brand equity in franchising | |
On Self-Collapsing Wavefunctions and the Fine Tuning of the Universe | A new variation on the Copenhagen interpretation of quantum mechanics is introduced, and its effects on the evolution of the Universe are reviewed. It is demonstrated that this modified form of quantum mechanics will produce a habitable Universe with no required tuning of the parameters, and without requiring multiple Universes or external creators. |
End-to-End 3D Face Reconstruction with Deep Neural Networks | Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction. |
Flattened Convolutional Neural Networks for Feedforward Acceleration | We present flattened convolutional neural networks that are designed for fast feedforward execution. The redundancy of the parameters, especially the weights of the convolutional filters in convolutional neural networks, has been extensively studied, and different heuristics have been proposed to construct a low-rank basis of the filters after training. In this work, we train flattened networks that consist of consecutive sequences of one-dimensional filters across all directions in 3D space, obtaining performance comparable to conventional convolutional networks. We tested the flattened model on different datasets and found that the flattened layers can effectively substitute for the 3D filters without loss of accuracy. The flattened convolution pipelines provide around a two-times speed-up during the feedforward pass compared to the baseline model, due to the significant reduction in learnable parameters. Furthermore, the proposed method requires no manual tuning or post-processing once the model is trained. |
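The separability idea in the abstract above can be checked numerically: a rank-1 3D filter (the outer product of three 1D filters) applied as three consecutive one-dimensional passes produces exactly the same response as the full 3D convolution. The following NumPy sketch is illustrative only, not the authors' code; the naive loops and the rank-1 filter construction are assumptions for demonstration.

```python
import numpy as np
from itertools import product

def conv3d_valid(x, k):
    """Naive 'valid'-mode 3D cross-correlation with a full 3D kernel."""
    d, h, w = k.shape
    out_shape = tuple(s - f + 1 for s, f in zip(x.shape, k.shape))
    out = np.zeros(out_shape)
    for i, j, l in product(*map(range, out_shape)):
        out[i, j, l] = np.sum(x[i:i+d, j:j+h, l:l+w] * k)
    return out

def corr1d_along(x, v, axis):
    """Apply a 1D filter along one axis of a 3D volume ('valid' mode).
    The kernel is reversed so np.convolve computes a cross-correlation."""
    return np.apply_along_axis(
        lambda s: np.convolve(s, v[::-1], mode='valid'), axis, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6, 6))
a, b, c = (rng.standard_normal(3) for _ in range(3))
k = np.einsum('i,j,k->ijk', a, b, c)      # rank-1 3D filter

full = conv3d_valid(x, k)                 # one 3D filtering pass
flat = corr1d_along(corr1d_along(corr1d_along(x, a, 0), b, 1), c, 2)
print(np.allclose(full, flat))            # the flattened pipeline matches
```

For a trained network the 3D filters are generally not exactly rank-1, which is why the paper trains the 1D factors directly rather than factorizing after the fact.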
Inhaled Therapy in Respiratory Disease: The Complex Interplay of Pulmonary Kinetic Processes | The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs. |
Single Image Dehazing via Multi-scale Convolutional Neural Networks | The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multiscale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed. |
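The transmission maps the network above learns plug into the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)): given an estimated transmission map t and airlight A, the clean scene J is recovered by inverting the model. A minimal NumPy sketch follows (illustrative, not the authors' network; the lower clamp `t_min` is a conventional assumption to avoid division blow-up).

```python
import numpy as np

def hazy(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def dehaze(I, t, A, t_min=0.1):
    """Invert the model given an estimated transmission map t and airlight A."""
    t_c = np.clip(t, t_min, 1.0)[..., None]   # clamp to avoid dividing by ~0
    return (I - A) / t_c + A

rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, (4, 4, 3))      # clean scene radiance (toy image)
t = rng.uniform(0.3, 0.9, (4, 4))         # per-pixel transmission
A = np.array([0.8, 0.8, 0.8])             # global atmospheric light
I = hazy(J, t, A)                         # synthesize a hazy observation
J_rec = dehaze(I, t, A)                   # recover the scene from I, t, A
```

This round trip is also how the paper's training pairs can be synthesized: render hazy images from clean images plus depth-derived transmission maps.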
Increased α-linolenic acid intake lowers C-reactive protein, but has no effect on markers of atherosclerosis | Objective: To investigate the effects of increased alpha-linolenic acid (ALA) intake on intima–media thickness (IMT), oxidized low-density lipoprotein (LDL) antibodies, soluble intercellular adhesion molecule-1 (sICAM-1), C-reactive protein (CRP), and interleukins 6 and 10. Design: Randomized double-blind placebo-controlled trial. Subjects: Moderately hypercholesterolaemic men and women (55±10 y) with two other cardiovascular risk factors (n=103). Intervention: Participants were assigned to a margarine enriched with ALA (fatty acid composition 46% LA, 15% ALA) or linoleic acid (LA) (58% LA, 0.3% ALA) for 2 y. Results: Dietary ALA intake was 2.3 en% among ALA users, and 0.4 en% among LA users. The 2-y progression rate of the mean carotid IMT (ALA and LA: +0.05 mm) and femoral IMT (ALA: +0.05 mm; LA: +0.04 mm) was similar, when adjusted for confounding variables. After 1 and 2 y, ALA users had a lower CRP level than LA users (net differences −0.53 and −0.56 mg/l, respectively, P<0.05). No significant effects were observed on oxidized LDL antibodies, or on levels of sICAM-1 and interleukins 6 and 10. Conclusions: A six-fold increased ALA intake lowers CRP, when compared to a control diet high in LA. The present study found no effects on markers for atherosclerosis. Sponsorship: The Dutch ‘Praeventiefonds’. |
The Crowdfunding Game | The recent success of crowdfunding for supporting new and innovative products has been overwhelming, with over 34 billion dollars raised in 2015. In many crowdfunding platforms, firms set a campaign threshold and contributions are collected only if this threshold is reached. During the campaign, consumers are uncertain about the ex-post value of the product, the viability of the business model, and the seller's reliability. A consumer who commits to a contribution therefore gambles, and this gamble is affected by the campaign's threshold: contributions to campaigns with higher thresholds are collected only if a greater number of agents find the offering acceptable. A high threshold therefore serves as a form of social insurance, and potential contributors to high-threshold campaigns feel more at ease contributing. We introduce the crowdfunding game and explore the contributor's dilemma in the context of experience goods. We discuss equilibrium existence and the related social welfare, information aggregation, and revenue implications. |
Universal Style Transfer via Feature Transforms | Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect a direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures via simple feature coloring. |
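The whitening and coloring transforms described above can be written down directly: whiten the content features so their covariance becomes the identity, then color them with the symmetric square root of the style covariance. A NumPy sketch under the usual conventions (features as a channels × positions matrix; the eigenvalue floor `eps` is an illustrative numerical choice, not a value from the paper):

```python
import numpy as np

def _cov_sqrt(F, inverse=False, eps=1e-8):
    """Symmetric (inverse) square root of the covariance of centered features F (C x N)."""
    cov = F @ F.T / (F.shape[1] - 1)
    w, V = np.linalg.eigh(cov)
    w = np.clip(w, eps, None)                 # guard against tiny/negative eigenvalues
    p = -0.5 if inverse else 0.5
    return V @ np.diag(w ** p) @ V.T

def wct(Fc, Fs):
    """Whitening-coloring transform: match content feature covariance to the style's."""
    mc, ms = Fc.mean(1, keepdims=True), Fs.mean(1, keepdims=True)
    Fc_w = _cov_sqrt(Fc - mc, inverse=True) @ (Fc - mc)   # whiten content features
    return _cov_sqrt(Fs - ms) @ Fc_w + ms                 # color with style statistics

rng = np.random.default_rng(2)
Fc = rng.standard_normal((8, 500))            # content features (channels x positions)
Fs = rng.standard_normal((8, 500)) * 2.0 + 1.0
Fcs = wct(Fc, Fs)                             # covariance now matches the style's
```

In the full method these transformed features are fed to a pretrained decoder; here the covariance match itself can be verified directly.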
Effect of exercise and cognitive activity on self-reported sleep quality in community-dwelling older adults with cognitive complaints: a randomized controlled trial. | OBJECTIVES
To compare the effects of different types of physical and mental activity on self-reported sleep quality over 12 weeks in older adults with cognitive and sleep complaints.
DESIGN
Randomized controlled trial.
SETTING
General community.
PARTICIPANTS
Seventy-two inactive community-dwelling older adults with self-reported sleep and cognitive problems (mean age 73.3 ± 6.1; 60% women).
INTERVENTION
Random allocation to four arms using a two-by-two factorial design: aerobic+cognitive training, aerobic+educational DVD, stretching+cognitive training, and stretching+educational DVD arms (60 min/d, 3 d/wk for physical and mental activity for 12 weeks).
MEASUREMENTS
Change in sleep quality using seven questions from the Sleep Disorders Questionnaire on the 2005 to 2006 National Health and Nutrition Examination Survey (range 0-28, with higher scores reflecting worse sleep quality). Analyses used intention-to-treat methods.
RESULTS
Sleep quality scores did not differ at baseline, but there was a significant difference between the study arms in change in sleep quality over time (P < .005). Mean sleep quality scores improved significantly more in the stretching+educational DVD arm (5.1 points) than in the stretching+cognitive training (1.2 points), aerobic+educational DVD (1.1 points), or aerobic+cognitive training (0.25 points) arms (all P < .05, corrected for multiple comparisons). Differences between arms were strongest for waking at night (P = .02) and taking sleep medications (P = .004).
CONCLUSION
Self-reported sleep quality improved significantly more with low-intensity physical and mental activities than with moderate- or high-intensity activities in older adults with self-reported cognitive and sleep difficulties. Future longer-term studies with objective sleep measures are needed to corroborate these results. |
Unmet care demands as perceived by stroke patients: deficits in health care? | OBJECTIVES
To describe unmet care demands as perceived by stroke patients and to identify sociodemographic and health characteristics associated with these unmet demands to investigate the appropriateness of health care.
SETTING
Sample of patients who participated in a multicentre study (23 hospitals) on quality of care in The Netherlands.
PATIENTS
Non-institutionalised patients who had been admitted to hospital because of stroke. Patients were interviewed six months (n = 382) and five years (n = 224) after stroke.
DESIGN
Six months after stroke data were collected on: (a) sociodemographic characteristics in terms of age, sex, living arrangement, educational level, and regional level of urbanisation; (b) health characteristics in terms of cognitive function, disability, emotional distress, and general health perception; (c) utilisation of professional care; and (d) unmet care demands as perceived by patients. Data on utilisation of care and unmet demands were also collected five years after stroke. Data were collected from June 1991 until December 1996.
RESULTS
The percentage of unmet care demands was highest at six months after stroke (n = 120, 31%). Multiple logistic regression analyses showed that disabled patients were more likely to be unmet demanders for therapy, (I)ADL care and aids (range odds ratio (OR) = 3.5 to 7.9) than to be no demanders, whereas emotionally distressed patients were more likely to be unmet demanders for psychosocial support (OR = 3.8). When comparing unmet demanders with care users only for (instrumental) activities of daily living (I)ADL care differences were found: men (OR = 3.8), disabled patients (OR = 3.0), and emotionally distressed patients (OR = 6.5) were more likely to be users.
CONCLUSIONS
Patients who perceived an unmet care demand do appear genuinely to have an unmet care need as supported by assessment of their health status: (a) types of unmet care demands correspond with types of health problems and (b) unmet demanders were in general unhealthier than no demanders and more comparable with care users for health characteristics.
IMPLICATIONS
To improve an equitable distribution of healthcare services, guidelines for indicating and allocating health care have to be developed and should be based on scientific evidence and consensus meetings including professionals' and patients' perspectives. |
Decreased expression of megalin and cubilin and altered mitochondrial activity in tenofovir nephrotoxicity. | Tenofovir disoproxil fumarate (TDF) is a commonly used antiretroviral drug for HIV, rarely causing Fanconi syndrome and acute kidney injury. We retrospectively analyzed the clinicopathological presentation of 20 cases of tenofovir-induced tubulopathy, and investigated the renal expression of the megalin and cubilin proteins, as well as the mitochondrial respiratory chain activity. Estimated glomerular filtration rate (eGFR) before TDF exposure was 92 ml/min/1.73m2, decreasing to 27.5 ml/min/1.73m2 at the time of biopsy, with 30% of patients requiring renal replacement therapy. Proximal tubular expression of megalin and cubilin was altered in 19 and 18 cases, respectively, whereas it was preserved in patients exposed to TDF without proximal tubular dysfunction and in HIV-negative patients with acute tubular necrosis. Loss of megalin/cubilin was correlated with low eGFR and high urine retinol binding protein at the time of biopsy, low eGFR at last follow-up, and was more severe in patients with multifactorial toxicity. Patients with additional nephrotoxic conditions promoting tenofovir accumulation showed a lower eGFR at presentation and at last follow-up, and more severe lesions of acute tubular necrosis, than those with isolated tenofovir toxicity. Altered mitochondrial COX activity in proximal tubules was observed and may be an early cellular alteration in tenofovir nephrotoxicity. In conclusion, altered megalin/cubilin expression represents a distinctive feature of tenofovir-induced tubulopathy; its severity is correlated with urine retinol binding protein loss and is associated with a poor renal prognosis. Concomitant exposure to other nephrotoxic conditions severely impacts the renal presentation and outcome. |
McRT-STM: a high performance software transactional memory system for a multi-core runtime | Applications need to become more concurrent to take advantage of the increased computational power provided by chip level multiprocessing. Programmers have traditionally managed this concurrency using locks (mutex based synchronization). Unfortunately, lock based synchronization often leads to deadlocks, makes fine-grained synchronization difficult, hinders composition of atomic primitives, and provides no support for error recovery. Transactions avoid many of these problems, and therefore, promise to ease concurrent programming. We describe a software transactional memory (STM) system that is part of McRT, an experimental Multi-Core RunTime. The McRT-STM implementation uses a number of novel algorithms, and supports advanced features such as nested transactions with partial aborts, conditional signaling within a transaction, and object based conflict detection for C/C++ applications. The McRT-STM exports interfaces that can be used from C/C++ programs directly or as a target for compilers translating higher level linguistic constructs. We present a detailed performance analysis of various STM design tradeoffs such as pessimistic versus optimistic concurrency, undo logging versus write buffering, and cache line based versus object based conflict detection. We also show a MCAS implementation that works on arbitrary values, coexists with the STM, and can be used as a more efficient form of transactional memory. To provide a baseline we compare the performance of the STM with that of fine-grained and coarse-grained locking using a number of concurrent data structures on a 16-processor SMP system. We also show our STM performance on a non-synthetic workload -- the Linux sendmail application. |
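Two of the design points the abstract weighs, optimistic concurrency and write buffering, can be illustrated with a toy STM: reads record per-location versions, writes are buffered privately, and commit validates the read set before publishing. This is a deliberately simplified Python sketch, not McRT-STM; it uses one coarse commit-time lock and per-key versions, whereas the real system uses far finer-grained mechanisms.

```python
import threading

class STM:
    """Toy optimistic STM with write buffering and commit-time validation."""
    def __init__(self):
        self.data, self.version = {}, {}
        self.lock = threading.Lock()   # coarse commit lock; real STMs are finer-grained

    def atomically(self, fn):
        while True:                    # retry the whole transaction on conflict
            tx = _Tx(self)
            result = fn(tx)
            if tx.commit():
                return result

class _Tx:
    def __init__(self, stm):
        self.stm, self.reads, self.writes = stm, {}, {}

    def read(self, key):
        if key in self.writes:                         # read-your-own-writes
            return self.writes[key]
        self.reads[key] = self.stm.version.get(key, 0) # record version for validation
        return self.stm.data.get(key)

    def write(self, key, value):
        self.writes[key] = value       # buffered; invisible until commit

    def commit(self):
        with self.stm.lock:
            for key, seen in self.reads.items():       # validate the read set
                if self.stm.version.get(key, 0) != seen:
                    return False                       # conflict: abort and retry
            for key, value in self.writes.items():     # publish the write buffer
                self.stm.data[key] = value
                self.stm.version[key] = self.stm.version.get(key, 0) + 1
            return True

stm = STM()
stm.atomically(lambda tx: tx.write('x', 0))

def incr(tx):
    tx.write('x', tx.read('x') + 1)

threads = [threading.Thread(target=lambda: [stm.atomically(incr) for _ in range(100)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(stm.data['x'])  # 400: no lost updates despite optimistic concurrency
```

Swapping the write buffer for in-place updates plus an undo log, or the commit-time validation for pessimistic locking, yields the other corners of the design space the paper benchmarks.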
Correlates and consequences of exposure to video game violence: hostile personality, empathy, and aggressive behavior. | Research has shown that exposure to violent video games causes increases in aggression, but the mechanisms of this effect have remained elusive. Also, potential differences in short-term and long-term exposure are not well understood. An initial correlational study shows that video game violence exposure (VVE) is positively correlated with self-reports of aggressive behavior and that this relation is robust to controlling for multiple aspects of personality. A lab experiment showed that individuals low in VVE behave more aggressively after playing a violent video game than after a nonviolent game but that those high in VVE display relatively high levels of aggression regardless of game content. Mediational analyses show that trait hostility, empathy, and hostile perceptions partially account for the VVE effect on aggression. These findings suggest that repeated exposure to video game violence increases aggressive behavior in part via changes in cognitive and personality factors associated with desensitization. |
Estradiol therapy combined with progesterone and endothelium-dependent vasodilation in postmenopausal women. | BACKGROUND
Epidemiological studies indicate that estrogen replacement therapy decreases the risk of cardiovascular events in postmenopausal women. Estrogen may confer cardiovascular protection by improving endothelial function because it increases endothelium-dependent vasodilation. It is not known whether progesterone attenuates the beneficial effects of estrogen on endothelial function.
METHODS AND RESULTS
Seventeen postmenopausal women with mild hypercholesterolemia were enrolled in a placebo-controlled, crossover trial to evaluate the effect of transdermal estradiol, with and without vaginal micronized progesterone, on endothelium-dependent vasodilation in a peripheral conduit artery. Brachial artery diameter was measured with high-resolution B-mode ultrasonography. To assess endothelium-dependent vasodilation, brachial artery diameter was determined at baseline and after a flow stimulus induced by reactive hyperemia. To assess endothelium-independent vasodilation, brachial artery diameter was measured after administration of sublingual nitroglycerin. During estradiol therapy, reactive hyperemia caused an 11.1+/-1.0% change in brachial artery diameter compared with 4.7+/-0.6% during placebo therapy (P<0.001). Progesterone did not significantly attenuate this improvement. During combined estrogen and progesterone therapy, flow-mediated vasodilation of the brachial artery was 9.6+/-0.8% (P=NS versus estradiol alone). Endothelium-independent vasodilation was not altered by estradiol therapy, either with or without progesterone, compared with placebo. There was a modest decrease in total and LDL cholesterol during treatment both with estradiol alone and when estradiol was combined with progesterone (all P<0.001 versus placebo). In a multivariate analysis that included serum estradiol, progesterone, total and LDL cholesterol concentrations, blood pressure, and heart rate, only the estradiol level was a significant predictor of endothelium-dependent vasodilation.
CONCLUSIONS
The addition of micronized progesterone does not attenuate the favorable effect of estradiol on endothelium-dependent vasodilation. The vasoprotective effect of hormone replacement therapy may extend beyond its beneficial actions on lipids. |
Eight weeks of treatment with long-acting GLP-1 analog taspoglutide improves postprandial insulin secretion and sensitivity in metformin-treated patients with type 2 diabetes. | OBJECTIVE
Loss of pancreatic function is pivotal to the deterioration of fasting and postprandial glycemic control in type 2 diabetes (T2D). We evaluated the effects of a long-acting, human glucagon-like peptide-1 analog, taspoglutide, added to metformin, on pancreatic function and peripheral insulin sensitivity.
MATERIALS/METHODS
We studied 80 T2D patients inadequately controlled [glycosylated hemoglobin (HbA1c), 7.0%-9.5%] receiving stable metformin for ≥12 weeks. They were a subset of participants in a phase 2 trial who also received a 240-min mixed-meal tolerance test (MTT) at baseline and study end. Patients received a once-weekly (QW) sc injection of taspoglutide 5, 10, or 20 mg (n=21, 19, or 19), or placebo (n=21), plus metformin, for 8 weeks. We measured postprandial plasma glucose (PPG) and insulin profiles, insulin secretion rate (ISR), the oral glucose insulin sensitivity (OGIS) index, β-cell glucose sensitivity, glucagon/glucose and insulin/glucagon ratios, and the insulin sensitivity-to-insulin resistance (or disposition) index.
RESULTS
After 8 weeks of treatment, taspoglutide 5, 10, and 20mg QW doses vs. placebo improved mean PPG0-240 min (relative change from baseline: -22.1%, -25.9%, and -22.9% vs. -8.1%; P<0.005) and mean postprandial ISR0-240 min (+14%, +18%, and +23% vs. +1%; P<0.005 vs dose). Taspoglutide at 20mg QW dose also resulted in improvements from baseline in OGIS, β-cell glucose sensitivity, glucagon/glucose and insulin/glucagon ratios and the disposition index during the MTT.
CONCLUSION
Taspoglutide QW significantly improved pancreatic function in patients with T2D treated with metformin. |
Digital CMOS neuromorphic processor design featuring unsupervised online learning | The compute-intensive and power-efficient brain has been a source of inspiration for a broad range of neural networks to solve recognition and classification tasks. Compared to the supervised deep neural networks (DNNs) that have been very successful on well-defined labeled datasets, bio-plausible spiking neural networks (SNNs) with unsupervised learning rules could be well-suited for training and learning representations from the massive amount of unlabeled data. To design dense and low-power hardware for such unsupervised SNNs, we employ digital CMOS circuits for neuromorphic processors, which can exploit transistor scaling and dynamic voltage scaling to the utmost. As exemplary works, we present two neuromorphic processor designs. First, a 45nm neuromorphic chip is designed for a small-scale network of spiking neurons. Through tight integration of memory (64k SRAM synapses) and computation (256 digital neurons), the chip demonstrates on-chip learning on pattern recognition tasks down to 0.53V supply. Secondly, a 65nm neuromorphic processor that performs unsupervised on-line spike-clustering for brain sensing applications is implemented with 1.2k digital neurons and 4.7k latch-based synapses. The processor exhibits a power consumption of 9.3μW/ch at 0.3V supply. Synapse hardware precision, efficient synapse memory array access, overfitting, and voltage scaling will be discussed for dense and power-efficient on-chip learning for CMOS spiking neural networks. |
Disease Prediction from Electronic Health Records Using Generative Adversarial Networks | Electronic health records (EHRs) have computerized patient records so that they can be used not only for efficient and systematic medical services, but also for data science research. In this paper, we compared the disease prediction performance of generative adversarial networks (GANs) and conventional learning algorithms in combination with missing-value prediction methods. The highest accuracy, 98.05%, was obtained using a stacked autoencoder as the missing-value prediction method and auxiliary classifier GANs (AC-GANs) as the disease prediction method. Results show that the combination of a stacked autoencoder and AC-GANs significantly outperforms existing algorithms on disease prediction problems in which missing values and class imbalance exist. |
Psychological and behavioral correlates of family violence in child witnesses and victims. | Abused and nonabused child witnesses to parental violence temporarily residing in a battered women's shelter were compared to children from a similar economic background on measures of self-esteem, anxiety, depression, and behavior problems, using mothers' and self-reports. Results indicated significantly more distress in the abused-witness children than in the comparison group, with nonabused witness children's scores falling between the two. Age of child and types of violence were mediating factors. Implications of the findings are discussed. |
Optimization Using Simulation of Traffic Light Signal Timings | Traffic congestion has become a great challenge and a heavy burden on both governments and citizens in densely populated cities. The problem, driven by several factors, continues to threaten the stability of modern cities and the livelihood of their inhabitants. Improving the control strategies that govern traffic operations is therefore a powerful way to relieve congestion, and such improvements can be achieved by enhancing traffic control performance through adjusting the traffic signal timings. This paper focuses on finding solutions to this problem through optimization using simulation of traffic signal timings under oversaturated conditions. The system under study is an actual road network in Alexandria, Egypt, where numerous data were collected over different time intervals. A series of computer simulation models representing the actual system as well as the proposed solutions was developed using the ExtendSim simulation environment. Furthermore, an evolutionary optimizer is utilized to attain a set of optimum/near-optimum signal timings that minimize the vehicles' total time in the system, resulting in improved performance of the road network. Analysis of the experimental results shows that the adopted methodology optimizes vehicular flow in a multiple-junction urban traffic network. |
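The optimization-using-simulation loop in the abstract above can be miniaturized: simulate a signal plan against a traffic model, score the resulting delay, and search over green splits. The fluid-queue model, the demand and saturation rates, and the brute-force search (standing in for the evolutionary optimizer and the ExtendSim models) are all illustrative assumptions, not the paper's setup.

```python
def sim_delay(split, cycle=60, demand=(0.30, 0.20), sat=0.5, horizon=3600):
    """Fluid-queue simulation of one two-phase signal: vehicles arrive at
    `demand` veh/s per approach and discharge at `sat` veh/s while their
    phase is green. Returns total vehicle-seconds of delay over the horizon."""
    green = (split, cycle - split)       # seconds of green per phase
    queue = [0.0, 0.0]
    delay = 0.0
    for t in range(horizon):
        phase = 0 if (t % cycle) < green[0] else 1
        for i in (0, 1):
            queue[i] += demand[i]                     # arrivals this second
            if i == phase:
                queue[i] = max(0.0, queue[i] - sat)   # discharge on green
            delay += queue[i]                         # queued vehicle-seconds
    return delay

# Exhaustive search over green splits stands in for the evolutionary optimizer.
best = min(range(5, 56), key=sim_delay)
print(best)  # 36: the split that serves both demands at exactly their capacity
```

With these rates, approach capacities are `sat * green_i / cycle`; a 36/24 split is the unique integer plan under which neither queue grows without bound, which is why the search lands there.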
Prosocial behavior: multilevel perspectives. | Current research on prosocial behavior covers a broad and diverse range of phenomena. We argue that this large research literature can be best organized and understood from a multilevel perspective. We identify three levels of analysis of prosocial behavior: (a) the "meso" level--the study of helper-recipient dyads in the context of a specific situation; (b) the micro level--the study of the origins of prosocial tendencies and the sources of variation in these tendencies; and (c) the macro level--the study of prosocial actions that occur within the context of groups and large organizations. We present research at each level and discuss similarities and differences across levels. Finally, we consider ways in which theory and research at these three levels of analysis might be combined in future intra- and interdisciplinary research on prosocial behavior. |
Resilient overlay networks | A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today's wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON's routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems. |
Skyfire: Data-Driven Seed Generation for Fuzzing | Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically; most such inputs are rejected at the early syntax parsing stage. In contrast, generation-based fuzzing generates inputs from a specification (e.g., a grammar) and can quickly carry the fuzzing beyond the syntax parsing stage. However, most of these inputs fail to pass the semantic checking (e.g., they violate semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. 
Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs. |
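The grammar-based seed generation at the heart of this approach can be illustrated with a much-simplified sketch: sampling strings from a probabilistic grammar. Skyfire learns a probabilistic context-sensitive grammar from a parsed corpus; the toy below instead uses a hand-set context-free grammar over a made-up XML-like language, purely to convey the generation step.

```python
import random

# Hypothetical toy grammar; in Skyfire the production probabilities
# would be learned from a corpus rather than hand-set.
RULES = {
    "DOC":  [(1.0, ["ELEM"])],
    "ELEM": [(0.6, ["<a>", "BODY", "</a>"]),
             (0.4, ["<b>", "BODY", "</b>"])],
    "BODY": [(0.5, ["text"]),
             (0.3, ["ELEM"]),
             (0.2, ["text", "ELEM"])],
}

def sample(symbol, depth=0, max_depth=8):
    """Expand a nonterminal by sampling productions by probability.
    Depth-bounded so every derivation terminates, a crude stand-in
    for seed-length control in generation-based fuzzing."""
    if symbol not in RULES:
        return [symbol]  # terminal token
    choices = RULES[symbol]
    if depth >= max_depth:
        # cut recursion: BODY is forced to a terminal-only expansion
        choices = [(1.0, ["text"])] if symbol == "BODY" else [choices[0]]
    r, acc = random.random(), 0.0
    for p, rhs in choices:
        acc += p
        if r <= acc:
            break
    out = []
    for s in rhs:
        out.extend(sample(s, depth + 1, max_depth))
    return out

random.seed(0)
seed_input = "".join(sample("DOC"))
print(seed_input)
```

Every sampled seed is syntactically well formed by construction (tags always come in matched pairs), which is exactly what lets generation-based seeds survive the syntax parsing stage.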
An All-SiC Three-Phase Buck Rectifier for High-Efficiency Data Center Power Supplies | The low power losses of silicon carbide (SiC) devices provide new opportunities to implement an ultra high-efficiency front-end rectifier for data center power supplies based on a 400-Vdc power distribution architecture, which requires high conversion efficiency in each power conversion stage. This paper presents a 7.5-kW high-efficiency three-phase buck rectifier with 480-Vac,rms input line-to-line voltage and 400-Vdc output voltage using SiC MOSFETs and Schottky diodes. To estimate power devices' losses, which are the dominant portion of total loss, the method of device evaluation and loss calculation is proposed based on a current source topology. This method simulates the current commutation process and estimates devices' losses during switching transients considering devices with and without switching actions in buck rectifier operation. Moreover, the power losses of buck rectifiers based on different combinations of 1200-V power devices are compared. The investigation and comparison demonstrate the benefits of each combination, and the lowest total loss in the all-SiC rectifier is clearly shown. A 7.5-kW prototype of the all-SiC three-phase buck rectifier using liquid cooling is fabricated and tested, with filter design and switching frequency chosen based on loss minimization. A full-load efficiency value greater than 98.5% is achieved. |
FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS | Driven by the rising interest in traffic surveillance for simulations and decision management, many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters, wide-area traffic monitoring on demand is needed, which can only be provided by airborne systems. This implies a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters, it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The detected vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Based on their distance and correlation, the features are assigned pairwise with respect to their relative global positioning. Choosing only the best correlating assignments, it is possible to compute reliable values for the average velocities. |
Resting EEG Discrimination of Early Stage Alzheimer’s Disease from Normal Aging Using Inter-Channel Coherence Network Graphs | Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD. |
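The core measurement above, pairwise inter-channel magnitude-squared coherence thresholded into a network graph, can be sketched as follows. The four-channel synthetic signal, single 10 Hz bin, and 0.5 threshold are illustrative stand-ins, not the study's 48-subject EEG data or band definitions.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence averaged over non-overlapping
    segments: |mean(X conj(Y))|^2 / (mean|X|^2 * mean|Y|^2) per bin."""
    n = len(x) // nperseg
    X = np.fft.rfft(x[:n * nperseg].reshape(n, nperseg), axis=1)
    Y = np.fft.rfft(y[:n * nperseg].reshape(n, nperseg), axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)
    sxx = (np.abs(X) ** 2).mean(axis=0)
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy)

# Toy 4-channel "EEG" at fs = 128 Hz: channels 0 and 1 share a 10 Hz
# source; channels 2 and 3 are independent noise.
rng = np.random.default_rng(0)
fs = 128
t = np.arange(0, 32, 1 / fs)
src = np.sin(2 * np.pi * 10 * t)
chans = rng.standard_normal((4, t.size))
chans[0] += 3 * src
chans[1] += 3 * src

# Coherence at the 10 Hz bin, thresholded into a binary graph; node
# degree then serves as one simple network feature for a classifier.
idx = 10 * 256 // fs  # FFT bin of 10 Hz for 256-sample segments
A = np.zeros((4, 4))
for i in range(4):
    for j in range(i + 1, 4):
        A[i, j] = A[j, i] = msc(chans[i], chans[j])[idx]
degree = (A > 0.5).sum(axis=0)
print(degree)
```

The coupled pair (channels 0 and 1) ends up connected in the graph while the noise channels stay isolated, which is the kind of structure the graph features feed to the support vector machine.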
Dominant forces in protein folding. | The purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature, no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (1911) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. 
By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was |
Empathy Training for Resident Physicians: A Randomized Controlled Trial of a Neuroscience-Informed Curriculum | Physician empathy is an essential attribute of the patient–physician relationship and is associated with better outcomes, greater patient safety and fewer malpractice claims. We tested whether an innovative empathy training protocol grounded in neuroscience could improve physician empathy as rated by patients. Randomized controlled trial. We randomly assigned residents and fellows from surgery, medicine, anesthesiology, psychiatry, ophthalmology, and orthopedics (N = 99, 52% female, mean age 30.6 ± 3.6) to receive standard post-graduate medical education or education augmented with three 60-minute empathy training modules. Patient ratings of physician empathy were assessed within one-month pre-training and between 1–2 months post-training with the use of the Consultation and Relational Empathy (CARE) measure. Each physician was rated by multiple patients (pre-mean = 4.6 ± 3.1; post-mean 4.9 ± 2.5), who were blinded to physician randomization. The primary outcome was change score on the patient-rated CARE. The empathy training group showed greater changes in patient-rated CARE scores than the control (difference 2.2; P = 0.04). Trained physicians also showed greater changes in knowledge of the neurobiology of empathy (difference 1.8; P < 0.001) and in ability to decode facial expressions of emotion (difference 1.9; P < 0.001). A brief intervention grounded in the neurobiology of empathy significantly improved physician empathy as rated by patients, suggesting that the quality of care in medicine could be improved by integrating the neuroscience of empathy into medical education. |
Stretchable Heater Using Ligand-Exchanged Silver Nanowire Nanocomposite for Wearable Articular Thermotherapy. | Thermal therapy is one of the most popular physiotherapies and it is particularly useful for treating joint injuries. Conventional devices adapted for thermal therapy including heat packs and wraps have often caused discomfort to their wearers because of their rigidity and heavy weight. In our study, we developed a soft, thin, and stretchable heater by using a nanocomposite of silver nanowires and a thermoplastic elastomer. A ligand exchange reaction enabled the formation of a highly conductive and homogeneous nanocomposite. By patterning the nanocomposite with serpentine-mesh structures, conformal lamination of devices on curvilinear joints and effective heat transfer even during motion were achieved. The combination of homogeneous conductive elastomer, stretchable design, and a custom-designed electronic band created a novel wearable system for long-term, continuous articular thermotherapy. |
Automatic Extraction of Hypernyms and Hyponyms from Russian Texts | The paper describes a rule-based approach for hypernym and hyponym extraction from Russian texts. For this task we employ finite state transducers (FSTs). We developed six finite state transducers that encode six lexicosyntactic patterns, which show good precision on Russian DBpedia: 79.5% of the matched contexts are correct. |
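The lexicosyntactic patterns encoded by such transducers are in the Hearst-pattern family. A minimal sketch using regular expressions in place of FSTs, and English in place of Russian for readability, conveys the idea:

```python
import re

# Two Hearst-style lexicosyntactic patterns (illustrative only; the
# paper's six transducers operate on Russian text, not English).
SUCH_AS = re.compile(r"(\w+) such as ((?:\w+, )*\w+(?: and \w+)?)")
AND_OTHER = re.compile(r"(\w+) and other (\w+)")

def extract(text):
    """Return (hypernym, hyponym) pairs matched by the two patterns."""
    pairs = set()
    m = SUCH_AS.search(text)
    if m:
        # "X such as Y1, Y2 and Y3" -> X is a hypernym of each Y
        for hypo in re.split(r", | and ", m.group(2)):
            pairs.add((m.group(1), hypo))
    m = AND_OTHER.search(text)
    if m:
        # "Y and other X" -> X is a hypernym of Y
        pairs.add((m.group(2), m.group(1)))
    return pairs

print(extract("We sell fruits such as apples, pears and plums."))
print(extract("Sparrows and other birds visit the feeder."))
```

Real FSTs generalize this by matching over morphologically analyzed tokens rather than raw surface strings, which matters for a highly inflected language like Russian.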
Characteristics, toxicity, and source apportionment of polycyclic aromatic hydrocarbons (PAHs) in road dust of Ulsan, Korea. | This study identified concentrations, molecular distributions, toxicities, and sources of polycyclic aromatic hydrocarbons (PAHs) in road dust from different areas of Ulsan, the largest industrial city in Korea. The total PAH concentrations in industrial areas were dependent on industrial emissions and vehicular exhaust, while those in urban areas were mainly dependent on traffic density, sampling site location, and accumulation of pollutants or road dust. The PAH concentration of each particle size group increased with decreasing particle size. This may be because of the higher surface area available for deposition or coating of PAHs in road dust with smaller particle sizes. The molecular distributions of PAHs among the sites in the petrochemical area and heavy traffic area were similar because of the similarities in their emission sources. The toxic equivalent concentrations (TEQs) of PAHs in the road dust ranged from 0.93 microg/g to 16.74 microg/g in industrial areas and from 4.37 microg/g to 68.84 microg/g in urban areas. The correlation coefficient of total PAH concentration and TEQ in urban areas was 0.98, which was much higher than that in industrial areas where it was 0.75. Principal component analysis showed that PAHs in road dust from Ulsan originate from four main sources: diesel vehicular emissions, oil combustion, gasoline vehicular emissions, and coal combustion. |
Exploring end-user smartphone security awareness within a South African context | International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security. |
Neurobiological correlates of social functioning in autism. | Although autism is defined by deficits in three areas of functioning (social, communicative, and behavioral), impairments in social interest and restricted behavioral repertoires are central to the disorder. As a result, a detailed understanding of the neurobiological systems subserving social behavior may have implications for prevention, early identification, and intervention for affected families. In this paper, we review a number of potential neurobiological mechanisms--across several levels of analysis--that subserve normative social functioning. These include neural networks, neurotransmitters, and hormone systems. After describing the typical functioning of each system, we review available empirical findings specific to autism. Among the most promising potential mechanisms of social behavioral deficits in autism are those involving neural networks including the amygdala, the mesocorticolimbic dopamine system, and the oxytocin system. Particularly compelling are explanatory models that integrate mechanisms across biological systems, such as those linking dopamine and oxytocin with brain regions critical to reward processing. |
Teachers of the Gifted: A Comparison of Students' Perspectives in Australia, Austria and the United States | What characteristics of teachers are most appreciated by their gifted students? The current study sought the views of gifted adolescents through the administration of a questionnaire, the Preferred Instructor Characteristics Scale (Krumboltz & Farquhar, 1957). The scale required students to select between a personal characteristic and an intellectual characteristic on each item in order to determine which group of characteristics was more highly valued. Three additional open-ended questions were included in the questionnaire to elicit students' views about the qualities that are evident in effective teachers. The questionnaire was administered to secondary students in academically selective schools in Australia (n = 387), Austria (n = 142) and the United States (n = 328). All three cohorts produced similar results with the means indicating a strong preference for personal characteristics over intellectual characteristics of teachers. Grade and gender differences were also noted. The open-ended questions r... |
Soy isoflavones in conjunction with radiation therapy in patients with prostate cancer. | Soy isoflavones sensitize prostate cancer cells to radiation therapy by inhibiting cell survival pathways activated by radiation. At the same time, soy isoflavones have significant antioxidant and anti-inflammatory activity, which may help prevent the side effects of radiation. Therefore, we hypothesized that soy isoflavones could be useful when given in conjunction with curative radiation therapy in patients with localized prostate cancer. In addition to enhancing the efficacy of radiation therapy, soy isoflavones could prevent the adverse effects of radiation. We conducted a pilot study to investigate the effects of soy isoflavone supplementation on acute and subacute toxicity (≤6 mo) of external beam radiation therapy in patients with localized prostate cancer. Forty-two patients with prostate cancer were randomly assigned to receive 200 mg soy isoflavone (Group 1) or placebo (Group 2) daily for 6 mo beginning with the first day of radiation therapy, which was administered in 1.8 to 2.5 Gy fractions for a total of 73.8 to 77.5 Gy. Adverse effects of radiation therapy on bladder, bowel, and sexual function were assessed by a self-administered quality of life questionnaire at 3 and 6 mo. Only 26 and 27 patients returned completed questionnaires at 3 and 6 mo, respectively. At each time point, urinary, bowel, and sexual adverse symptoms induced by radiation therapy were decreased in the soy isoflavone group compared to placebo group. At 3 mo, soy-treated patients had less urinary incontinence, less urgency, and better erectile function as compared to the placebo group. At 6 mo, the symptoms in soy-treated patients were further improved as compared to the placebo group. These patients had less dripping/leakage of urine (7.7% in Group 1 vs. 28.4% in Group 2), less rectal cramping/diarrhea (7.7% vs. 21.4%), and less pain with bowel movements (0% vs. 
14.8%) than placebo-treated patients. There was also a higher overall ability to have erections (77% vs. 57.1%). The results suggest that soy isoflavones taken in conjunction with radiation therapy could reduce the urinary, intestinal, and sexual adverse effects in patients with prostate cancer. |
In Strange Company: The Puzzle of Private Investment in State-Controlled Firms | A large legal and economic literature describes how state-owned enterprises (SOEs) suffer from a variety of agency and political problems. Less theory and evidence, however, have been generated about the reasons why state-owned enterprises listed in stock markets manage to attract investors to buy their shares (and bonds). In this Article, we examine this apparent puzzle and develop a theory of how legal and extralegal constraints allow mixed enterprises to solve some of these problems. We then use three detailed case studies of state-owned oil companies – Brazil’s Petrobras, Norway’s Statoil, and Mexico’s Pemex – to examine how our theory fares in practice. Overall, we show how mixed enterprises have made progress to solve some of their agency problems, even as government intervention persists as the biggest threat to private minority shareholders in these firms. |
Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition | This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics, as a subroutine hierarchy, and a declarative semantics, as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning. © 2000 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved. |
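The decomposition itself, Q(i, s, a) = V(a, s) + C(i, s, a) with V defined recursively over the task hierarchy, can be sketched with hand-set tables. The two-subtask hierarchy and all values below are hypothetical illustration data; MAXQ-Q would learn these tables online.

```python
# Hypothetical two-level hierarchy: Root invokes subtask GoA or GoB,
# which invoke primitive actions "left"/"right". Primitive values
# V_prim[a][s] and completion functions C[task][s][a] are hand-set.
V_prim = {"left": {0: -1, 1: -1}, "right": {0: -1, 1: -1}}
C = {
    "GoA":  {0: {"left": 0, "right": -2}, 1: {"left": -1, "right": 0}},
    "GoB":  {0: {"left": -2, "right": 0}, 1: {"left": 0, "right": -1}},
    "Root": {0: {"GoA": 0, "GoB": -3},    1: {"GoA": -3, "GoB": 0}},
}
CHILDREN = {"Root": ["GoA", "GoB"], "GoA": ["left", "right"],
            "GoB": ["left", "right"]}

def V(task, s):
    """V(i, s): the stored one-step value for primitives,
    max over child actions of Q(i, s, a) for composite tasks."""
    if task in V_prim:
        return V_prim[task][s]
    return max(Q(task, s, a) for a in CHILDREN[task])

def Q(task, s, a):
    """MAXQ decomposition: Q(i, s, a) = V(a, s) + C(i, s, a)."""
    return V(a, s) + C[task][s][a]

print(Q("Root", 0, "GoA"), V("Root", 0))
```

The recursion makes the additive structure explicit: the value of invoking a subtask is the subtask's own value plus the completion value of finishing the parent afterwards, which is what lets each subtask's table ignore state variables irrelevant to it.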
Modelling Radiocesium in Lakes and Coastal Areas: New Approaches for Ecosystem Modellers | Prologue. 1. Modelling radiocesium in lakes. 2. Modelling radiocesium in coastal areas. 3. Epilogue. 4. Literature references. 5. Appendix. Subject index. |
Primipaternity and duration of exposure to sperm antigens as risk factors for pre-eclampsia. | OBJECTIVE
To establish whether primipaternity and duration of unprotected sexual cohabitation is associated with an increased risk of pre-eclampsia.
METHOD
At a tertiary referral center, the study had a case and control group of 60 multigravid women each, as well as a case and control group of 50 primigravid women each. Information was compiled by means of a confidential questionnaire.
RESULT
After multiple logistic regression analysis using age, smoking, hypertension in previous pregnancies, change of paternity and duration of unprotected sexual cohabitation as predictors, the regression coefficients for change of paternity and sexual cohabitation of longer than 6 months in multigravid women were -0.4 (P = 0.15) and -1.4 (P = 0.03), respectively.
CONCLUSION
Multigravid women with a period of unprotected sexual cohabitation of longer than 6 months had a decreased risk of pre-eclampsia. Primipaternity was not a significant risk factor for pre-eclampsia. |
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups | We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting the sparsity of inter-layer filter dependencies. We validate our approach by using it to train more efficient variants of state-of-the-art CNN architectures, evaluated on the CIFAR10 and ILSVRC datasets. Our results show similar or higher accuracy than the baseline architectures with much less computation, as measured by CPU and GPU timings. For example, for ResNet 50, our model has 40% fewer parameters, 45% fewer floating point operations, and is 31% (12%) faster on a CPU (GPU). For the deeper ResNet 200 our model has 48% fewer parameters and 27% fewer floating point operations, while maintaining state-of-the-art accuracy. For GoogLeNet, our model has 7% fewer parameters and is 21% (16%) faster on a CPU (GPU). |
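The parameter saving from filter groups follows from simple arithmetic: each group convolves only a fraction of the input channels. A short sketch of the count (layer sizes chosen for illustration, not taken from the paper's architectures):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k conv layer with filter groups: each of
    the `groups` groups sees only c_in // groups input channels and
    produces c_out // groups output channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

# A 3x3 layer mapping 256 -> 256 channels, dense vs. 8 filter groups.
dense = conv_params(256, 256, 3)
rooted = conv_params(256, 256, 3, groups=8)
print(dense, rooted, dense // rooted)  # grouping gives 8x fewer weights
```

The same factor applies to the multiply-accumulate count, which is why grouped layers cut both parameters and floating point operations; the root-like topology interleaves such grouped layers with dense 1x1 layers so that information still mixes across groups.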
Synthesis of Behavioral Models from Scenarios | Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMSCs; it also allows them to introduce additional domain-specific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. 
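PLACEHOLDER-REMOVED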
Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis. |
Light propagation in a birefringent plate with topological charge. | We calculated the Fresnel paraxial propagator in a birefringent plate having topological charge q at its center, named "q-plate." We studied the change of the beam transverse profile when it traverses the plate. An analytical closed form of the beam profile propagating in the q-plate can be found for many important specific input beam profiles. We paid particular attention to the plate having a topological unit charge and found that if small losses due to reflection, absorption, and scattering are neglected, the plate can convert the photon spin into orbital angular momentum with up to 100% efficiency provided the thickness of the plate is less than the Rayleigh range of the incident beam. |
Deciphering Signatures of Mutational Processes Operative in Human Cancer | The genome of a cancer cell carries somatic mutations that are the cumulative consequences of the DNA damage and repair processes operative during the cellular lineage between the fertilized egg and the cancer cell. Remarkably, these mutational processes are poorly characterized. Global sequencing initiatives are yielding catalogs of somatic mutations from thousands of cancers, thus providing the unique opportunity to decipher the signatures of mutational processes operative in human cancer. However, until now there have been no theoretical models describing the signatures of mutational processes operative in cancer genomes and no systematic computational approaches are available to decipher these mutational signatures. Here, by modeling mutational processes as a blind source separation problem, we introduce a computational framework that effectively addresses these questions. Our approach provides a basis for characterizing mutational signatures from cancer-derived somatic mutational catalogs, paving the way to insights into the pathogenetic mechanism underlying all cancers. |
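Framing signature extraction as blind source separation suggests a non-negative factorization of the mutation-catalog matrix into signatures and per-tumour exposures. The sketch below factorizes a simulated 96-type-by-30-tumour catalog with plain multiplicative-update NMF; this only illustrates the matrix-decomposition idea, not the paper's actual framework, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated catalog: 96 mutation types x 30 tumours generated from 3
# hidden signatures (synthetic data, not real cancer genomes).
true_sigs = rng.dirichlet(np.ones(96) * 0.3, size=3).T   # 96 x 3
exposures = rng.gamma(2.0, 50.0, size=(3, 30))           # 3 x 30
M = rng.poisson(true_sigs @ exposures).astype(float)     # 96 x 30

def nmf(M, k, iters=500, eps=1e-9):
    """Blind source separation via NMF multiplicative updates,
    minimising the Frobenius error ||M - W H||_F."""
    W = rng.random((M.shape[0], k))
    H = rng.random((k, M.shape[1]))
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(M, k=3)
signatures = W / W.sum(axis=0)   # columns sum to 1, like a signature
rel_err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In practice the number of signatures k is unknown and must itself be estimated, typically by trading off reconstruction error against the stability of the recovered factors across repeated runs.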
Avoiding, finding and fixing spreadsheet errors - A survey of automated approaches for spreadsheet QA | Spreadsheet programs can be found everywhere in organizations and they are used for a variety of purposes, including financial calculations, planning, data aggregation and decision making tasks. A number of research surveys have however shown that such programs are particularly prone to errors. Some reasons for the error-proneness of spreadsheets are that spreadsheets are developed by end users and that standard software quality assurance processes are mostly not applied. Correspondingly, during the last two decades, researchers have proposed a number of techniques and automated tools aimed at supporting the end user in the development of error-free spreadsheets. In this paper, we provide a review of the research literature and develop a classification of automated spreadsheet quality assurance (QA) approaches, which range from spreadsheet visualization, static analysis and quality reports, over testing and support to model-based spreadsheet development. Based on this review, we outline possible opportunities for future work in the area of automated spreadsheet QA. |
Classifying Single Trial EEG: Towards Brain Computer Interfacing | Driven by the progress in the field of single-trial analysis of EEG, there is a growing interest in brain computer interfaces (BCIs), i.e., systems that enable human subjects to control a computer only by means of their brain signals. In a pseudo-online simulation our BCI detects upcoming finger movements in a natural keyboard typing condition and predicts their laterality. This can be done on average 100–230 ms before the respective key is actually pressed, i.e., long before the onset of EMG. Our approach is appealing for its short response time and high classification accuracy (>96%) in a binary decision where no human training is involved. We compare discriminative classifiers like Support Vector Machines (SVMs) and different variants of Fisher Discriminant that possess favorable regularization properties for dealing with high-noise cases (inter-trial variability). |
The Whale Optimization Algorithm | This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called the Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested on 29 mathematical optimization problems and 6 structural design problems. The optimization results show that the WOA algorithm is very competitive with state-of-the-art meta-heuristic algorithms as well as conventional methods. The source code of the WOA algorithm is publicly available at http://www.alimirjalili.com/WOA.html |
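The paper's own source code is linked above; as a language-neutral illustration, a minimal Python sketch of the three WOA moves (encircling prey, bubble-net spiral, random search) might look as follows. Coefficient names (a, A, C, l) follow the usual WOA description; the per-dimension treatment of the |A| < 1 test and the greedy best-update are simplifications of ours, not the paper's.

```python
import numpy as np

def woa(f, dim, n=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal WOA sketch: encircling prey, bubble-net spiral, random search."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)            # a decreases linearly from 2 to 0
        for i in range(n):
            A = 2.0 * a * rng.random(dim) - a  # exploration/exploitation coefficient
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):      # exploit: shrink toward the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                          # explore: move relative to a random whale
                    rand = X[rng.integers(n)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                              # bubble-net: logarithmic spiral around best
                D = np.abs(best - X[i])
                l = rng.uniform(-1.0, 1.0)
                X[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

best, val = woa(lambda x: float(np.sum(x * x)), dim=5)
print(val)
```

On the 5-dimensional sphere function this sketch converges to near zero, consistent with WOA's reported behavior on smooth benchmarks.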
Unramified extensions and geometric $\mathbb{Z}_p$-extensions of global function fields | We study finite unramified extensions of global function fields (function fields of one variable over a finite field). We show two results. One is an extension of Perret's result on the ideal class group problem; the other is a construction of a geometric $\mathbb{Z}_p$-extension having a certain property. |
Visual Analytics for MOOC Data | With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers. |
TOWARD A THEORY OF PARADOX : A DYNAMIC EQUILIBRIUM MODEL OF ORGANIZING | As organizational environments become more global, dynamic, and competitive, contradictory demands intensify. To understand and explain such tensions, academics and practitioners are increasingly adopting a paradox lens. We review the paradox literature, categorizing types and highlighting fundamental debates. We then present a dynamic equilibrium model of organizing, which depicts how cyclical responses to paradoxical tensions enable sustainability—peak performance in the present that enables success in the future. This review and the model provide the foundation of a theory of paradox. |
Generative Knowledge Transfer for Neural Language Models | In this paper, we propose a generative knowledge transfer technique that trains an RNN-based language model (the student network) using text and output probabilities generated from a previously trained RNN (the teacher network). The text generation can be conducted by either the teacher or the student network. We can also improve the performance by taking the ensemble of soft labels obtained from multiple teacher networks. This method can be used for privacy-conscious language model adaptation because no user data is directly used for training. In particular, when the soft labels of multiple devices are aggregated via a trusted third party, we can expect very strong privacy protection. |
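The soft-label aggregation step above can be made concrete with a small numpy sketch. The temperature T, the 4-word vocabulary, and the three teachers are illustrative assumptions; the essential points are that soft labels from several teachers can simply be averaged and that the student is trained against the averaged distribution with a soft cross-entropy:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_probs, T=2.0):
    """Soft cross-entropy of the student's tempered predictions
    against the (possibly aggregated) teacher soft labels."""
    p = softmax(student_logits, T)
    return float(-np.mean(np.sum(teacher_probs * np.log(p + 1e-12), axis=-1)))

# soft labels from three hypothetical teacher LMs over a 4-word vocabulary,
# for 5 contexts; averaging is the aggregation a trusted third party could do
teachers = [softmax(np.random.default_rng(s).normal(size=(5, 4))) for s in range(3)]
ensemble = np.mean(teachers, axis=0)
student_logits = np.zeros((5, 4))          # untrained student: uniform output
loss = distill_loss(student_logits, ensemble)
print(loss)
```

For an untrained (uniform) student the loss equals log 4, the ceiling for a 4-word vocabulary; gradient training of the student lowers it toward the entropy of the averaged teacher distribution.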
Cancer preventive properties of ginger: a brief review. | Ginger, the rhizome of Zingiber officinale, one of the most widely used species of the ginger family, is a common condiment for various foods and beverages. Ginger has a long history of medicinal use dating back 2500 years: it has traditionally been used in different parts of the globe for varied human ailments, to aid digestion and treat stomach upset, diarrhoea, and nausea. Some pungent constituents present in ginger and other zingiberaceous plants have potent antioxidant and anti-inflammatory activities, and some of them exhibit cancer preventive activity in experimental carcinogenesis. The anticancer properties of ginger are attributed to the presence of certain pungent vanilloids, viz. [6]-gingerol and [6]-paradol, as well as some other constituents like shogaols and zingerone. A number of mechanisms that may be involved in the chemopreventive effects of ginger and its components have been reported from laboratory studies in a wide range of experimental models. |
A Beginner's Course in Legal Translation: the Case of Culture-bound Terms | The translation course consists of an introductory lecture followed by twelve hours of seminars held over one semester. It is complemented by a nine-hour course that presents an overview of the English legal system and some aspects of comparative law. Authentic English-language documents (contracts, guarantees, judgments, summonses etc.) are studied and subsequently used as models for translation into English. |
An Experimental Evaluation of the Best-of-Many Christofides’ Algorithm for the Traveling Salesman Problem | Recent papers on approximation algorithms for the traveling salesman problem (TSP) have given a new variant of the well-known Christofides’ algorithm for the TSP, called the Best-of-Many Christofides’ algorithm. The algorithm involves sampling a spanning tree from the solution to the standard LP relaxation of the TSP, subject to the condition that each edge is sampled with probability at most its value in the LP relaxation. One then runs Christofides’ algorithm on the tree by computing a minimum-cost matching on the odd-degree vertices in the tree, and shortcutting a traversal of the resulting Eulerian graph to a tour. In this paper we perform an experimental evaluation of the Best-of-Many Christofides’ algorithm to see if there are empirical reasons to believe its performance is better than that of Christofides’ algorithm. Furthermore, several different sampling schemes have been proposed; we implement several different schemes to determine which ones might be the most promising for obtaining improved performance guarantees over that of Christofides’ algorithm. In our experiments, all of the implemented methods perform significantly better than Christofides’ algorithm; an algorithm that samples from a maximum entropy distribution over spanning trees seems to be particularly good, though there are others that perform almost as well. |
Leg orientation as a clinical sign for pusher syndrome | BACKGROUND
Effective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.
METHODS
A sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non-brain-damaged subjects.
RESULTS
Compared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range, of a magnitude that was observed in the non-pusher subjects only when they were tilted by about 15 degrees in the ipsiversive direction.
CONCLUSION
The observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients. |
Augmented Reality Fashion Apparel Simulation using a Magic Mirror | Digital technology innovations have led to significant changes in everyday life, made possible by the widespread use of computers and continuous developments in information technology (IT). Based on the utilization of systems applying 3D (three-dimensional) technology, as well as virtual and augmented reality techniques, IT has become the basis for a new fashion industry model, featuring consumer-centered service and production methods. Because of rising wages and production costs, the fashion industry's international market power has been significantly weakened in recent years. To overcome this situation, new markets must be established by building a new knowledge- and technology-intensive fashion industry. Development of virtual clothing simulation software, which has played an important role in the fashion industry's IT-based digitalization, has led to continuous technological improvements for systems that can virtually adapt existing 2D (two-dimensional) design work to 3D design work. Such adaptations have greatly influenced the fashion industry by increasing profits. Both at home and abroad, studies have been conducted to support the development of consumer-centered, high value-added clothing and fashion products by employing digital technology. This study proposes a system that uses a depth camera to capture the figure of a user standing in front of a large display screen. The display can show fashion concepts and various outfits to the user, coordinated to his or her body, producing a "magic mirror" effect. Magic mirror-based fashion apparel simulation can support total fashion coordination for accessories and outfits automatically, and does not require computer or fashion expertise. This system can provide convenience for users by assuming the role of a professional fashion coordinator giving an appearance presentation. It can also be widely used to support a customized method for clothes shopping. |
Experimental estimation of a 5-axis active control type bearingless canned motor pump | Bearingless motors are characterized by the integration of electrical motors and magnetic bearings. These motors can realize magnetic suspension of rotor shafts without mechanical contact. If a bearingless motor is applied to a canned motor pump, it is possible to levitate and rotate the impeller and rotor shaft of a centrifugal pump. Therefore, the bearingless canned motor pump can realize the following advantages: (1) high power and high head, (2) clean liquid transfer, (3) maintenance-free operation, (4) longer operating life, and (5) high chemical resistance. On the other hand, the rotor shaft and the stator must be covered with thick bulkheads, because fluid flows into the gap between the rotor shaft and the stator. It becomes difficult to generate sufficient suspension force for magnetic suspension, as the magnetic gap length is increased by the thick bulkheads. Accordingly, a novel structure of a 5-axis active control type bearingless motor, which can generate sufficient suspension force under this wide-gap condition, has been proposed. This paper presents pump characteristics obtained from experiments using a prototype of the proposed novel structure. |
A Dataset of Operator-client Dialogues Aligned with Database Queries for End-to-end Training | This paper presents a novel dataset for training end-to-end task-oriented conversational agents. The dataset contains conversations between an operator (a task expert) and a client who seeks information about the task. Along with the conversation transcriptions, we record the database API calls performed by the operator, which capture a distilled meaning of the user query. We expect that the easy-to-get supervision of database calls will allow us to train end-to-end dialogue agents with significantly less training data. The dataset is collected using crowdsourcing and the conversations cover the well-known restaurant domain. Quality of the data is enforced by mutual control among contributors. The dataset is available for download under the Creative Commons 4.0 BY-SA license. |
Covfefe: A Computer Vision Approach For Estimating Force Exertion | Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers' efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels during human activity are necessary for workers' well-being. However, measuring force exertion levels is challenging, and the existing techniques pose a great challenge as they are either intrusive, interfere with the human-machine interface, and/or are subjective in nature, and thus do not scale to all workers. In this work, we use face videos and photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including the standard deviation of the movement of different facial landmarks and the distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, yielding an efficient classification algorithm for force exertion levels that uses face videos alone. Based on data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classifying the 100% dataset against the combination of the 0% and 50% datasets. Further combining the PPG signals gives 81.7% accuracy. The approach is also shown to be robust, correctly identifying the force level when the person is talking, even though such datasets are not included in the training. |
A time-efficient image processing algorithm for multicore/manycore parallel computing | Traditional methods for processing large images are extremely time intensive. Also, conventional image processing methods do not take advantage of available computing resources such as multicore central processing unit (CPU) and manycore general purpose graphics processing unit (GP-GPU). Studies suggest that applying parallel programming techniques to various image filters should improve the overall performance without compromising the existing resources. Recent studies also suggest that parallel implementation of image processing on compute unified device architecture (CUDA)-accelerated CPU/GPU system has potential to process the image very fast. In this paper, we introduce a CUDA-accelerated image processing method suitable for multicore/manycore systems. Using a bitmap file, we implement image processing and filtering through traditional sequential C and newly introduced parallel CUDA/C programs. A key step of the proposed algorithm is to load the pixel's bytes in a one dimensional array with length equal to matrix width * matrix height * bytes per pixel. This is done to process the image concurrently in parallel. According to experimental results, the proposed CUDA-accelerated parallel image processing algorithm provides benefit with a speedup factor up to 365 for an image with 8,192×8,192 pixels. |
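The key indexing step described in the abstract (loading pixel bytes into a one-dimensional array of length width * height * bytes-per-pixel so pixels can be processed concurrently) can be sketched without a GPU. The Python below mimics what each CUDA thread would do for one pixel; the invert filter and the image size are stand-ins of ours, not the paper's:

```python
import numpy as np

WIDTH, HEIGHT, BPP = 4, 3, 3   # a tiny stand-in RGB bitmap
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=WIDTH * HEIGHT * BPP, dtype=np.uint8)

def flat_index(x, y, channel, width=WIDTH, bpp=BPP):
    """Map (x, y, channel) to the 1-D byte array the abstract describes:
    length = width * height * bytes-per-pixel."""
    return (y * width + x) * bpp + channel

def invert_pixel(flat, tid, bpp=BPP):
    """Kernel body for one 'thread': invert all channels of pixel tid."""
    base = tid * bpp
    flat[base:base + bpp] = 255 - flat[base:base + bpp]

out = pixels.copy()
for tid in range(WIDTH * HEIGHT):  # on the GPU each tid runs concurrently
    invert_pixel(out, tid)
print(out[:BPP], pixels[:BPP])
```

In the CUDA version the loop over tid disappears: each thread derives its own tid from blockIdx/threadIdx and runs the kernel body once, which is what allows the reported speedups on large images.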
Abnormal magnetic-resonance scans of the lumbar spine in asymptomatic subjects. A prospective investigation. | We performed magnetic resonance imaging on sixty-seven individuals who had never had low-back pain, sciatica, or neurogenic claudication. The scans were interpreted independently by three neuro-radiologists who had no knowledge about the presence or absence of clinical symptoms in the subjects. About one-third of the subjects were found to have a substantial abnormality. Of those who were less than sixty years old, 20 per cent had a herniated nucleus pulposus and one had spinal stenosis. In the group that was sixty years old or older, the findings were abnormal on about 57 per cent of the scans: 36 per cent of the subjects had a herniated nucleus pulposus and 21 per cent had spinal stenosis. There was degeneration or bulging of a disc at at least one lumbar level in 35 per cent of the subjects between twenty and thirty-nine years old and in all but one of the sixty to eighty-year-old subjects. In view of these findings in asymptomatic subjects, we concluded that abnormalities on magnetic resonance images must be strictly correlated with age and any clinical signs and symptoms before operative treatment is contemplated. |
Patent document categorization based on semantic structural information | The number of patent documents is currently rising rapidly worldwide, creating the need for an automatic categorization system to replace time-consuming and labor-intensive manual categorization. Because accurate patent classification is crucial for searching for relevant existing patents in a certain field, patent categorization is a very important and useful task. As patent documents are structured documents with their own characteristics distinguishing them from general documents, these unique traits should be considered in the patent categorization process. In this paper, we categorize Japanese patent documents automatically, focusing on their characteristics: patents are structured by claims, purposes, effects, embodiments of the invention, and so on. We propose a patent document categorization method that uses the k-NN (k-Nearest Neighbour) approach. In order to retrieve similar documents from a training document set, some specific components denoting so-called semantic elements, such as claim, purpose, and application field, are compared instead of the whole texts. Because those specific components are identified by various user-defined tags, all of the components are first clustered into several semantic elements. Such semantically clustered structural components are the basic features for patent categorization. We achieve a 74% improvement in categorization performance over a baseline system that does not use the structural information of the patent. |
PPP-RTK : Precise Point Positioning Using State-Space Representation in RTK Networks | The concept of precise point positioning (PPP) is currently associated with global networks. Precise orbit and clock solutions are used to enable absolute positioning of a single receiver. However, it is restricted in ambiguity resolution, in convergence time and in accuracy. Precise point positioning based on RTK networks (PPP-RTK) as presented overcomes these limitations and gives centimeter accuracy in a few seconds. The primary task in RTK networks using the Geo++ GNSMART software is the precise monitoring and representation of all individual GNSS error components using state-space modeling. The advantages of state-space modeling are well known for PPP applications. It is much closer to the physical error sources and can thus better represent the error characteristics. It allows the various error sources to be better separated to improve performance, and can lead to much lower bandwidth requirements for transmission. With RTK networks based on GNSMART it is possible to apply the PPP concept with high accuracy. Ambiguity resolution within the RTK network is mandatory and allows the precise modeling of the system state. Since the integer nature of the carrier phase ambiguities is maintained, all error components can be consistently modeled and give full accuracy in an ambiguity-fixing GNSS application. For today's realtime applications, observations of a reference station together with network-derived parameters describing distance-dependent errors, or a virtual reference station, are transmitted to GNSS users in the field using the RTCM standards. This can be termed representation in observation space (Observation Space Representation: OSR). In contrast to this, the actual state-space data can also be used for the representation of the complete GNSS state (State Space Representation: SSR). Hence, precise absolute positioning based on an RTK network (PPP-RTK) using state-space data is a practicable concept. In principle, the concept can be applied to small, regional and global networks. A reference station separation of several 100 km to achieve ambiguity resolution, and therefore the key issue for PPP-RTK, is already possible with GNSMART. The complete transition from observation space to state space requires the definition of adequate formats and standardized models to provide the state-space data for GNSS applications. A single receiver can then position itself with centimeter accuracy within a few seconds in post-processing and realtime applications. In the meantime, state-space data can still be used to generate data in observation space, e.g. in RTCM or RINEX format, through a conversion algorithm. The state-space concept and prerequisites are discussed, and the benefits of state-space representation of GNSS errors and their applications are pointed out. (Presented at the 18th International Technical Meeting, ION GNSS-05, September 13-16, 2005, Long Beach, California.) |
Both aerobic endurance and strength training programmes improve cardiovascular health in obese adults. | Regular exercise training is recognized as a powerful tool to improve work capacity, endothelial function and the cardiovascular risk profile in obesity, but it is unknown which of high-intensity aerobic exercise, moderate-intensity aerobic exercise or strength training is the optimal mode of exercise. In the present study, a total of 40 subjects were randomized to high-intensity interval aerobic training, continuous moderate-intensity aerobic training or maximal strength training programmes for 12 weeks, three times/week. The high-intensity group performed aerobic interval walking/running at 85-95% of maximal heart rate, whereas the moderate-intensity group exercised continuously at 60-70% of maximal heart rate; protocols were isocaloric. The strength training group performed 'high-intensity' leg press, abdominal and back strength training. Maximal oxygen uptake and endothelial function improved in all groups; the greatest improvement was observed after high-intensity training, and an equal improvement was observed after moderate-intensity aerobic training and strength training. High-intensity aerobic training and strength training were associated with increased PGC-1alpha (peroxisome-proliferator-activated receptor gamma co-activator 1alpha) levels and improved Ca(2+) transport in the skeletal muscle, whereas only strength training improved antioxidant status. Both strength training and moderate-intensity aerobic training decreased oxidized LDL (low-density lipoprotein) levels. Only aerobic training decreased body weight and diastolic blood pressure. In conclusion, high-intensity aerobic interval training was better than moderate-intensity aerobic training in improving aerobic work capacity and endothelial function. 
An important contribution towards improved aerobic work capacity, endothelial function and cardiovascular health originates from strength training, which may serve as a substitute when whole-body aerobic exercise is contra-indicated or difficult to perform. |
Audio-haptic feedback in mobile phones | A new breed of mobile phones has been designed to enable concurrent vibration and audio stimulation, or audio-haptics. This paper aims to share techniques for creating and optimizing audio-haptic effects to enhance the user interface. The authors present audio manipulation techniques specific to multifunction transducer (MFT) technology. In particular, two techniques, the Haptic Inheritance and the Synthesis and Matching methods, are discussed. These two methods of haptic media generation allow simple creation of vibration content, and also allow for compatibility with non-haptic mobile devices. The authors present preliminary results of an evaluation with 42 participants comparing audio-based haptic user interface (UI) feedback with audio-only feedback. The results show that users were receptive to audio-haptic UI feedback. The results also suggest that audio-haptics seems to enhance the perception of audio quality. |
Design and dynamic analysis of a Transformable Hovering Rotorcraft (THOR) | This paper describes the Transformable HOvering Rotorcraft (THOR), a prototype Unmanned Aerial Vehicle (UAV) that explores a novel approach in combining the range and speed of a horizontal flying platform with the hovering and maneuverability of a rotor-wing. This is achieved by integrating a tailless flying wing configuration with a single-axis rotor, or monocopter. By maintaining full utilization of all aerodynamic surfaces and propulsion sources in both flight modes, this method represents the most structurally efficient approach to achieving a cruising mode and a hovering mode on the same frame. Using a dual servo and motor configuration, we propose an under-actuated system that is able to achieve controllability in 4 degrees of freedom while in its horizontal cruising mode and in 5 degrees of freedom while in its hovering mode. In both indoor and outdoor experiments, the UAV is able to transition between either flight modes seamlessly and repeatedly without the need for any additional mechanisms and actuators. |
Saint Marx, Literalism and American Academic Revolutionary Marxism | It would be churlish to be other than pleased when someone as prominent as Richard A. Brosio – the doyen of American revolutionary Marxism – should go to the trouble of reviewing my book and even to call it “worthwhile” along with a number of other praiseworthy epithets. And if the title of my response is a little facile and facetious the message is anything but. The points I wish to make have been accumulating over my short career and now fall out like so many lost coins to be re-found, counted and reinvested. A warning then: if I sound a little miffed Richard Brosio should not take it to heart entirely as the criticisms I am about to raise in this response are more of a broadcast directed at those, especially American academics, who appropriate the label “revolutionary Marxists” and apply it to themselves self-consciously. The process of canonisation in academia is a vital part of “disciple” and “discipline” probably to be uncovered in the scholastic ruins of the medieval university. And the popular saying “imitation is the sincerest form of flattery” goes some way to explaining the importance of imitation and copying behaviour not only as a form of pedagogy but also as a form of academic self-stylization. There are many different aspects to this process from the unconscious adoption of mannerisms and habits to the more sophisticated modelling of one’s writing and thinking. So we have (unashamedly) little Wittgensteinians, little Heideggerians and little Marxians. (I was a little Wittgensteinian when I was growing up.) We need them! This is, in part, social reproduction in the academy. We hear of it with students who are educated by professor so-and-so, or examined by professor such-and-such. Academic genealogies. After all many of us after the PhD never attain the same levels of scholarship again. 
Increasingly, government and state pressures are such that they direct us and our students to “useful knowledge” defined for us by national research agendas and funding patterns. Yet there is a sense where we should question our relationships to those thinkers we value, to examine the nature |
Relationship between oxidative stress, semen characteristics, and clinical diagnosis in men undergoing infertility investigation. | OBJECTIVE
To determine whether particular semen characteristics in various clinical diagnoses of infertility are associated with high oxidative stress and whether any group of infertile men is more likely to have high seminal oxidative stress. Reactive oxygen species (ROS) play an important role in sperm physiological functions, but elevated levels of ROS or oxidative stress are related to male infertility.
DESIGN
Measurement of sperm concentration, motility, morphology, seminal ROS, and total antioxidant capacity (TAC) in patients seeking infertility treatment and controls.
SETTING
Male infertility clinic of a tertiary care center.
PATIENT(S)
One hundred sixty-seven infertile patients and 19 controls.
INTERVENTION(S)
None.
MAIN OUTCOME MEASURE(S)
Semen characteristics, seminal ROS, and TAC in samples from patients with various clinical diagnoses and controls.
RESULT(S)
Fifteen patients (9.0%) were Endtz positive and 152 (91.0%) Endtz negative. Sperm concentration, motility, and morphology were significantly reduced in all groups compared with the controls (P = .02), except in the varicocele-associated-with-infection group. Mean (±SD) ROS levels in the patient groups ranged from 2.2 ± 0.13 to 3.2 ± 0.35, significantly higher than in controls (1.3 ± 0.3; P < .005). The patient groups had a significantly lower mean (±SD) TAC, ranging from 1014.75 ± 79.22 to 1173.05 ± 58.07, than controls (1653 ± 115.28; P < .001), except for the vasectomy reversal group (1532.02 ± 74.24). Sperm concentration was negatively correlated with ROS both overall and within all groups (P ≤ .007), with the exception of idiopathic infertility.
CONCLUSION(S)
Irrespective of the clinical diagnosis and semen characteristics, the presence of seminal oxidative stress in infertile men suggests its role in the pathophysiology of infertility. Medical or surgical treatments for infertility in these men should include strategies to reduce oxidative stress. |
Organisational Change Management : A Critical Review | It can be argued that the successful management of change is crucial to any organisation in order to survive and succeed in the present highly competitive and continuously evolving business environment. However, theories and approaches to change management currently available to academics and practitioners are often contradictory, mostly lacking empirical evidence and supported by unchallenged hypotheses concerning the nature of contemporary organisational change management. The purpose of this article is, therefore, to provide a critical review of some of the main theories and approaches to organisational change management as an important first step towards constructing a new framework for managing change. The article concludes with recommendations for further research. |
3D analysis of brace treatment in idiopathic scoliosis | We have evaluated the effect of bracing in scoliosis on coronal alignment in a cohort of patients. The current literature has not described the specific effect of bracing on the 3D shape of scoliotic curves. The purpose of this study was to analyze the variability of the 3D effect of bracing on idiopathic scoliosis. The spines of 30 patients with adolescent idiopathic scoliosis were reconstructed using biplanar stereoradiography with and without the brace. The Cobb angle, sagittal and pelvic parameters, and transverse plane parameters were calculated. The variability and mean values of each parameter, with and without the brace, were analyzed and compared using Student's t test. The Cobb angle improved in 50 % of patients but remained unchanged in the other 50 %. In 90 % of cases lordosis was decreased. Thoracic kyphosis was decreased in 26 % of cases, unchanged in 57 % and increased in 17 %. The AVR was improved (>5°) in 26 % of cases, worsened in 23 % and unchanged in 50 %. Only the differences in Cobb angle and lordosis were statistically significant. The global statistics of this study concur with the literature: the Cobb angle was significantly improved, and a significant hypolordotic effect was also observed. However, the results showed high variability of the brace treatment effect in almost every parameter. Analysis of this variability by means of 3D reconstructions instead of global statistics should help characterize the mechanisms of correction of brace treatment. |
Analyzing Classifiers: Fisher Vectors and Deep Neural Networks | Fisher vector (FV) classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems. However, both are generally considered 'black box' predictors as the non-linear transformations involved have so far prevented transparent and interpretable reasoning. Recently, a principled technique, Layer-wise Relevance Propagation (LRP), has been developed in order to better comprehend the inherent structured reasoning of complex nonlinear classification models such as Bag of Feature models or DNNs. In this paper we (1) extend the LRP framework also for Fisher vector classifiers and then use it as analysis tool to (2) quantify the importance of context for classification, (3) qualitatively compare DNNs against FV classifiers in terms of important image regions and (4) detect potential flaws and biases in data. All experiments are performed on the PASCAL VOC 2007 and ILSVRC 2012 data sets. |
A Graph Degeneracy-based Approach to Keyword Extraction | We operate a change of paradigm and hypothesize that keywords are more likely to be found among the influential nodes of a graph-of-words rather than among its nodes that rank high on eigenvector-related centrality measures. To test this hypothesis, we introduce unsupervised techniques that capitalize on graph degeneracy. Our methods strongly and significantly outperform all baselines on two datasets (short and medium-size documents), and reach best performance on the third one (long documents). |
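Read as a standalone idea, the degeneracy hypothesis above can be sketched in a few lines: build a graph-of-words from windowed co-occurrence, peel it to compute core numbers, and return the main-core nodes as keywords. This is an illustrative sketch only, not the authors' exact method; the tokenization, window size, and tie-breaking are assumptions.

```python
from collections import defaultdict

def graph_of_words(tokens, window=2):
    """Link each term to the terms appearing within `window` tokens after it."""
    adj = defaultdict(set)
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + window]:
            if u != w:
                adj[w].add(u)
                adj[u].add(w)
    return adj

def core_numbers(adj):
    """Matula-Beck peeling: repeatedly remove a minimum-degree node."""
    adj = {v: set(ns) for v, ns in adj.items()}
    core, k = {}, 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        k = max(k, len(adj[v]))   # core number never decreases along the peel
        core[v] = k
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return core

def keywords(tokens, window=2):
    """Keywords = nodes of the main core (highest core number)."""
    core = core_numbers(graph_of_words(tokens, window))
    top = max(core.values())
    return {v for v, c in core.items() if c == top}
```

On real text one would lowercase, filter stopwords, and tune `window`; the peel here is the simple quadratic variant rather than the bucket-based linear-time one.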
Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media | Many current Natural Language Processing (NLP) techniques work well assuming a large context of text as input data. However, they become ineffective when applied to short texts such as Twitter feeds. To overcome the issue, we want to find a related newswire document for a given tweet to provide contextual support for NLP tasks. This requires robust modeling and understanding of the semantics of short texts. The contribution of the paper is two-fold: (1) we introduce the Linking-Tweets-to-News task as well as a dataset of linked tweet-news pairs, which can benefit many NLP applications; (2) in contrast to previous research, which focuses on lexical features within the short texts (text-to-word information), we propose a graph-based latent variable model that models the inter-short-text correlations (text-to-text information). This is motivated by the observation that a tweet usually covers only one aspect of an event. We show that by using tweet-specific features (hashtags) and news-specific features (named entities) as well as temporal constraints, we are able to extract text-to-text correlations, and thus complete the semantic picture of a short text. Our experiments show significant improvement of our new model over baselines with three evaluation metrics in the new task. |
Self-Guiding Multimodal LSTM - when we do not have a perfect training dataset for image captioning | In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle an uncontrolled, imbalanced real-world image-sentence dataset. We collect the FlickrNYC dataset from Flickr as our testbed, with 306,165 images, and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically, ranging from short term-descriptions to long paragraph-descriptions, and can describe any visual aspects, or even refer to objects that are not depicted. To deal with this imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted using a multimodal LSTM (m-LSTM) model. Training of the m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the rest of the training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images. |
Evolutionary agglomeration theory: increasing returns, diminishing returns, and the industry life cycle | According to Marshall's agglomeration theory, Krugman's New Economic Geography models, and Porter's cluster policies, firms should receive increasing returns from a trinity of agglomeration economies: a local pool of skilled labour, local supplier linkages, and local knowledge spillovers. Recent evolutionary theories suggest that whether agglomeration economies generate increasing returns or diminishing returns depends on time, and especially the evolution of the industry life cycle. At the start of the 21st century, we re-examine Marshall's trinity of agglomeration economies in the city-region where he discovered them. The econometric results from our multivariate regression models are the polar opposite of Marshall's. During the later stages of the industry life cycle, Marshall's agglomeration economies decrease the economic performance of firms and create widespread diminishing returns for the economic development of the city-region, which has evolved to become one of the poorest city-regions in Europe. |
APISan: Sanitizing API Usages through Semantic Cross-Checking | API misuse is a well-known source of bugs. Some misuses (e.g., incorrect use of the SSL API, or integer overflow of the memory allocation size) can cause serious security vulnerabilities (e.g., man-in-the-middle (MITM) attacks or privilege escalation). Moreover, modern APIs, which are large, complex, and fast evolving, are error-prone. However, existing techniques that help find such bugs require manual effort by developers (e.g., providing a specification or model) or are not scalable to large real-world software comprising millions of lines of code. In this paper, we present APISAN, a tool that automatically infers correct API usages from source code without manual effort. The key idea in APISAN is to extract likely correct usage patterns in four different aspects (e.g., causal relations and semantic relations on arguments) by considering semantic constraints. APISAN is tailored to check various properties with security implications. We applied APISAN to 92 million lines of code, including the Linux kernel and OpenSSL, found 76 previously unknown bugs, and provided patches for all the bugs. |
Robust Adaptive Segmentation of Range Images | We propose a novel image segmentation technique using the robust, adaptive least k-th order squares (ALKS) estimator, which minimizes the k-th order statistic of the squared residuals. The optimal value of k is determined from the data, and the procedure detects the homogeneous surface patch representing the relative majority of the pixels. ALKS shows a better tolerance to structured outliers than other recently proposed similar techniques: Minimize the Probability of Randomness (MINPRAN) and Residual Consensus (RESC). The performance of the new, fully autonomous, range image segmentation algorithm is compared to several other methods. Index Terms: robust methods, range image segmentation, surface fitting |
Homeward Bound? Interest, Identity, and Investor Behavior in a Third World Export | Are indigenous investors in Third World export platforms more stable than their allegedly “footloose” foreign rivals? While mainstream economists hold that investor behavior is independent of investor identity and therefore call for the parallel treatment of domestic and foreign firms, their critics hold that indigenous investors are more dedicated than their foreign counterparts and therefore call for industrial policies designed to cultivate and defend native entrepreneurs. Who is correct? The author uses a unique combination of qualitative and quantitative data to document and account for the relative stability and dynamism of indigenous investors in the Dominican Republic’s largest export processing zone and thereby brings economic sociology back from the margins of development theory—and development theory back into the heartland of sociology. |
Cybercriminals, cyberattacks and cybercrime | Nowadays, cybercrime is growing rapidly around the world, as new technologies, applications and networks emerge. In addition, the Deep Web has contributed to the growth of illegal activities in cyberspace. As a result, cybercriminals are taking advantage of system vulnerabilities for their own benefit. This article presents the history and conceptualization of cybercrime, explores different categorizations of cybercriminals and cyberattacks, and sets forth our exhaustive cyberattack typology, or taxonomy. Common categories include where the computer is the target to commit the crime, where the computer is used as a tool to perpetrate the felony, or where a digital device is an incidental condition to the execution of a crime. We conclude our study by analyzing lessons learned and future actions that can be undertaken to tackle cybercrime and harden cybersecurity at all levels. |
Lack of association between CAG repeat polymorphism in the androgen receptor gene and the outcome of rheumatoid arthritis treatment with leflunomide | Leflunomide (LEF) is a disease-modifying antirheumatic drug used for treating rheumatoid arthritis (RA) and the action of which may be modified by sex hormones. The aim of this study was to examine the association between CAG repeat polymorphism in the androgen receptor (AR) gene and the response to treatment with LEF in women with RA. We studied 114 women diagnosed with RA and treated with LEF (20 mg daily). Follow-up was 12 months. CAG repeat polymorphism was determined using polymerase chain reaction (PCR) and subsequent fragment analysis by capillary electrophoresis. Analysis revealed no statistically significant associations between CAG repeat polymorphism in the AR gene and improvement of disease activity parameters: erythrocyte sedimentation rate, serum C-reactive protein, patient’s global assessment of disease activity on a visual analog scale (VAS), disease activity score of 28 joints (DAS28), and swollen and tender joint count. Our results suggest no correlation between CAG repeat polymorphism in the AR gene and response to treatment with LEF in women with RA. |
Generalized space shift keying modulation for MIMO channels | A fundamental component of spatial modulation (SM), termed generalized space shift keying (GSSK), is presented. GSSK modulation inherently exploits fading in wireless communication to provide better performance over conventional amplitude/phase modulation (APM) techniques. In GSSK, only the antenna indices, and not the symbols themselves (as in the case of SM and APM), relay information. We exploit GSSK's degrees of freedom to achieve better performance, which is done by formulating its constellation in an optimal manner. To support our results, we also derive upper bounds on GSSK's bit error probability, where the source of GSSK's strength is made clear. Analytical and simulation results show performance gains (1.5-3 dB) over popular multiple-antenna APM systems (including Bell Laboratories layered space-time (BLAST) and maximum ratio combining (MRC) schemes), making GSSK an excellent candidate for future wireless applications. |
Antioxidant activity of water-soluble chitosan derivatives. | Water-soluble chitosan derivatives were prepared by graft copolymerization of sodium maleate onto hydroxypropyl chitosan and sodium carboxymethyl chitosan. Their scavenging activities against the hydroxyl radical •OH were investigated by the chemiluminescence technique. They exhibit IC50 values ranging from 246 to 498 μg/mL, which should be attributed to their different contents of hydroxyl and amino groups and different substituting groups. |
3D Action Recognition Using Data Visualization and Convolutional Neural Networks | It remains a challenge to efficiently represent spatial-temporal data for 3D action recognition. To solve this problem, this paper presents a new skeleton-based action representation using data visualization and convolutional neural networks, which contains four main stages. First, skeletons from an action sequence are mapped as a set of five-dimensional points, containing three dimensions of location, one dimension of time label and one dimension of joint label. Second, these points are encoded as a series of color images, by visualizing points as RGB pixels. Third, convolutional neural networks are adopted to extract deep features from color images. Finally, the action class score is calculated by fusing selected deep features. Extensive experiments on three benchmark datasets show that our method achieves state-of-the-art results. |
Piecewise 3D Euler spirals | 3D Euler spirals are visually pleasing, due to their property of having their curvature and their torsion change linearly with arc-length. This paper presents a novel algorithm for fitting piecewise 3D Euler spirals to 3D curves with G2 continuity and torsion continuity. The algorithm can also handle sharp corners. Our piecewise representation is invariant to similarity transformations and it is close to the input curves up to an error tolerance. |
Raltegravir in HIV-1-Infected Pregnant Women: Pharmacokinetics, Safety, and Efficacy. | BACKGROUND
The use of raltegravir in human immunodeficiency virus (HIV)-infected pregnant women is important in the prevention of mother-to-child HIV transmission, especially in circumstances when a rapid decline of HIV RNA load is warranted or when preferred antiretroviral agents cannot be used. Physiological changes during pregnancy can reduce antiretroviral drug exposure. We studied the effect of pregnancy on the pharmacokinetics of raltegravir and its safety and efficacy in HIV-infected pregnant women.
METHODS
An open-label, multicenter, phase 4 study in HIV-infected pregnant women receiving raltegravir 400 mg twice daily was performed (Pharmacokinetics of Newly Developed Antiretroviral Agents in HIV-Infected Pregnant Women Network). Steady-state pharmacokinetic profiles were obtained in the third trimester and postpartum along with cord and maternal delivery concentrations. Safety and virologic efficacy were evaluated.
RESULTS
Twenty-two patients were included, of whom 68% started raltegravir during pregnancy. Approaching delivery, 86% of the patients had an undetectable viral load (<50 copies/mL). None of the children were HIV-infected. Exposure to raltegravir was highly variable. Overall, the area under the plasma concentration-time curve (AUC0-12h) and the plasma concentration at 12 hours after intake (C12h) in the third trimester were on average 29% and 36% lower, respectively, than postpartum: geometric mean ratios (90% confidence interval) were 0.71 (.53-.96) for AUC0-12h and 0.64 (.34-1.22) for C12h. The median ratio of raltegravir cord to maternal blood was 1.21 (interquartile range, 1.02-2.17; n = 9).
CONCLUSIONS
Raltegravir was well tolerated during pregnancy. The pharmacokinetics of raltegravir showed extensive variability. The observed mean decrease in exposure to raltegravir during third trimester compared to postpartum is not considered to be of clinical importance. Raltegravir can be used in standard dosages in HIV-infected pregnant women.
CLINICAL TRIALS REGISTRATION
NCT00825929. |
Design of a Biped Robot Using DSP and FPGA | A biped robot should be designed to have an effective mechanical structure and a smaller hardware system if it is to be a stand-alone structure. This paper shows the design methodology of a biped robot controller using an FPGA (Field Programmable Gate Array). The hardware system consists of a DSP (Digital Signal Processor) as the main CPU and an FPGA as the motor controller. By using the FPGA, a more flexible hardware system has been achieved, and a more compact and simple controller has been designed. |
The Stable Marriage Problem | The original work of Gale and Shapley on an assignment method using the stable marriage criterion has been extended to find all the stable marriage assignments. The algorithm derived for finding all the stable marriage assignments is proved to satisfy all the conditions of the problem. Algorithm 411 applies to this paper. |
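The Gale and Shapley deferred-acceptance procedure that the abstract above extends can be sketched as follows; the dictionary-based preference format is an assumption for illustration.

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: return the man-optimal stable matching {man: woman}."""
    # rank[w][m] = position of m on w's preference list (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(ps)} for w, ps in women_prefs.items()}
    free = list(men_prefs)             # men with no partner yet
    nxt = {m: 0 for m in men_prefs}    # next preference index each man proposes to
    fiance = {}                        # woman -> currently engaged man
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in fiance:
            fiance[w] = m              # w was free: accept provisionally
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])     # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)             # w rejects m; he proposes further down his list
    return {m: w for w, m in fiance.items()}
```

Every man proposes to each woman at most once, so the loop terminates after at most n² proposals, and the resulting matching admits no blocking pair.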
Syslog processing for switch failure diagnosis and prediction in datacenter networks | Syslogs on switches are a rich source of information for both post-mortem diagnosis and proactive prediction of switch failures in a datacenter network. However, such information can be effectively extracted only through proper processing of syslogs, e.g., using suitable machine learning techniques. A common approach to syslog processing is to extract (i.e., build) templates from historical syslog messages and then match syslog messages to these templates. However, existing template extraction techniques either have low accuracy in learning the “correct” set of templates, or do not support incremental learning, in the sense that the entire set of templates has to be rebuilt (by processing all historical syslog messages again) when a new template is to be added, which is prohibitively expensive computationally if used for a large datacenter network. To address these two problems, we propose a frequent template tree (FT-tree) model in which frequent combinations of (syslog) words are identified and then used as message templates. FT-tree empirically extracts message templates more accurately than existing approaches, and naturally supports incremental learning. To compare the performance of FT-tree and three other template learning techniques, we evaluated them on two years' worth of failure tickets and syslogs collected from switches deployed across 10+ datacenters of a tier-1 cloud service provider. The experiments demonstrated that FT-tree improved the estimation/prediction accuracy (as measured by F1) by 155% to 188%, and the computational efficiency by 117 to 730 times. |
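The FT-tree of the abstract above organizes frequent word combinations in a tree; as a flat, minimal approximation of the same intuition (not the paper's algorithm), one can keep corpus-frequent words as the template skeleton and wildcard the rare words, which tend to be parameters such as port numbers or addresses. The count threshold and whitespace tokenization are assumptions for illustration.

```python
from collections import Counter

def extract_templates(messages, min_count=2):
    """Keep corpus-frequent words; replace rare words (likely parameters) with '*'."""
    freq = Counter(w for msg in messages for w in msg.split())
    return {
        " ".join(w if freq[w] >= min_count else "*" for w in msg.split())
        for msg in messages
    }
```

Two messages that differ only in their parameter fields collapse into one template, which is the property template matching for failure prediction relies on.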
3D Mesh Labeling via Deep Convolutional Neural Networks | This article presents a novel approach for 3D mesh labeling by using deep Convolutional Neural Networks (CNNs). Many previous methods on 3D mesh labeling achieve impressive performances by using predefined geometric features. However, the generalization abilities of such low-level features, which are heuristically designed to process specific meshes, are often insufficient to handle all types of meshes. To address this problem, we propose to learn a robust mesh representation that can adapt to various 3D meshes by using CNNs. In our approach, CNNs are first trained in a supervised manner by using a large pool of classical geometric features. In the training process, these low-level features are nonlinearly combined and hierarchically compressed to generate a compact and effective representation for each triangle on the mesh. Based on the trained CNNs and the mesh representations, a label vector is initialized for each triangle to indicate its probabilities of belonging to various object parts. Eventually, a graph-based mesh-labeling algorithm is adopted to optimize the labels of triangles by considering the label consistencies. Experimental results on several public benchmarks show that the proposed approach is robust for various 3D meshes, and outperforms state-of-the-art approaches as well as classic learning algorithms in recognizing mesh labels. |
Coupled Bayesian Sets Algorithm for Semi-supervised Learning and Information Extraction | Our inspiration comes from NELL (Never-Ending Language Learning), a computer program running at Carnegie Mellon University to extract structured information from unstructured web pages. We consider a semi-supervised learning approach to extract category instances (e.g. country(USA), city(New York)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised approaches using a small number of labeled examples together with many unlabeled examples are often unreliable, as they frequently produce an internally consistent, but nevertheless incorrect, set of extractions. We believe that this problem can be overcome by simultaneously learning independent classifiers in a new approach named the Coupled Bayesian Sets algorithm, based on Bayesian Sets, for many different categories and relations (in the presence of an ontology defining constraints that couple the training of these classifiers). Experimental results show that simultaneously learning a coupled collection of classifiers for 11 random categories resulted in much more accurate extractions than training classifiers through the original Bayesian Sets algorithm, Naive Bayes, BaS-all and the Coupled Pattern Learner (the category extractor used in NELL). |
Rectenna Design and Signal Optimization for Electromagnetic Energy Harvesting and Wireless Power Transfer | This work addresses two key topics in the field of energy harvesting and wireless power transfer. The first is the optimum signal design for improved RF-DC conversion efficiency in rectifier circuits by using time-varying envelope signals. The second is the design of rectifiers that present reduced sensitivity to input power and output load variations by introducing resistance compression network (RCN) structures. Keywords: rectenna, energy harvesting, wireless power transfer, resistance compression network, rectifier |
Deep Learning for Detecting Robotic Grasps | We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast, as well as robust, we present a two-step cascaded structure with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs well, for which we present a method to apply structured regularization on the weights based on multimodal group regularization. We demonstrate that our method outperforms the previous state-of-the-art methods in robotic grasp detection, and can be used to successfully execute grasps on a Baxter robot. |
Automated scenario generation: toward tailored and optimized military training in virtual environments | Scenario-based training exemplifies the learning-by-doing approach to human performance improvement. In this paper, we enumerate the advantages of incorporating automated scenario generation technologies into the traditional scenario development pipeline. An automated scenario generator is a system that creates training scenarios from scratch, augmenting human authoring to rapidly develop new scenarios, providing a richer diversity of tailored training opportunities, and delivering training scenarios on demand. We introduce a combinatorial optimization approach to scenario generation to deliver the requisite diversity and quality of scenarios while tailoring the scenarios to a particular learner's needs and abilities. We propose a set of evaluation metrics appropriate to scenario generation technologies and present preliminary evidence for the suitability of our approach compared to other scenario generation approaches. |
Impact of ABCB1 variants on neutrophil depression: a pharmacogenomic study of paclitaxel in 92 women with ovarian cancer. | The standard treatment for ovarian cancer in advanced stages is post-surgery treatment with taxane-platin chemotherapy. Despite an initial high response rate, most patients eventually relapse. The dose-limiting toxicities of paclitaxel are neutropenia and neuropathy, but the inter-individual variability is large. The aim of this prospective study was to investigate the impact of genetic variants in key drug metabolizing/transporter genes on toxicity and compliance. CYP2C8*3 and three ABCB1 polymorphisms were chosen for primary analysis, and a host of other candidate genes was explored in 92 prospectively recruited Scandinavian Caucasian women with primary ovarian cancer who were treated with paclitaxel and carboplatin. A single investigator assessed the clinical toxicity in 97% of the patients. Patients carrying variant alleles of ABCB1 C3435T experienced more pronounced neutrophil decrease (63%, 72% and 80% for 3435CC, CT and TT, respectively; p-value 0.03). A similar association was found for G2677T/A, p-value 0.02. For C1236T, there was a trend with p-value 0.06. No statistically significant correlations were found for paclitaxel compliance and sensory neuropathy in the primary analysis. Variants in the drug transporter ABCB1 gene are possibly associated with the neutrophil suppressing effect of paclitaxel in patients with ovarian cancer. This finding has implications for the understanding of bone marrow suppression and future tailored chemotherapy. |
Integration of scheduling and advanced process control in semiconductor manufacturing: review and outlook | Scheduling optimization in semiconductor manufacturing is a crucial task for production performance indicators such as machine utilization, cycle time and delivery times. With the increasing complexity of fabrication techniques and scales, scheduling and control activities are inevitably confronted with each other and should be integrated accordingly. In particular, scheduling and control are mutually dependent, as control requires decisions from schedules, and scheduling should take control information into account. Based on a literature survey, we propose a general review and an outlook of the expected improvements related to binding scheduling decisions and information/constraints coming from Advanced Process Control systems in semiconductor manufacturing. Potential issues and tasks concerning this integration are addressed in this paper in order to stimulate more work in research and industrial practices. |
Designing Wide-band Transformers for HF and VHF Power Amplifiers | In the design of RF power amplifiers, wide-band transformers play an important role in the quality of the amplifier, as they are fundamental in determining the input and output impedances, gain flatness, linearity, power efficiency and other performance characteristics. The three forms of transformers that are encountered, unbalanced-to-unbalanced (unun), balanced-to-balanced (balbal), and balanced-to-unbalanced (balun), are used in various combinations to accomplish the desired goals. Careful consideration needs to be given when making choices of the magnetic materials (if any are to be used), the conductors, and the method of construction, as the choices made weigh significantly in the overall performance of the transformer. The type and length of the conductors and the permeability of the magnetic material are the primary factors that determine the coupling, which in turn determines the transmission loss and the low-frequency cutoff. The type and length of the conductor used and the loss characteristics of the magnetic material also affect the coupling, and further influence the parasitic reactances that affect the high-frequency performance. |
Intensive cholesterol reduction lowers blood pressure and large artery stiffness in isolated systolic hypertension. | OBJECTIVES
We sought to investigate the effects of intensive cholesterol reduction on large artery stiffness and blood pressure in normolipidemic patients with isolated systolic hypertension (ISH).
BACKGROUND
Isolated systolic hypertension is associated with elevated cardiovascular morbidity and mortality and is primarily due to large artery stiffening, which has been independently related to cardiovascular mortality. Cholesterol-lowering therapy has been efficacious in reducing arterial stiffness in patients with hypercholesterolemia, and thus may be beneficial in ISH.
METHODS
In a randomized, double-blinded, cross-over study design, 22 patients with stage I ISH received three months of atorvastatin therapy (80 mg/day) and three months of placebo treatment. Systemic arterial compliance was measured noninvasively using carotid applanation tonometry and Doppler velocimetry of the ascending aorta.
RESULTS
Atorvastatin treatment reduced total and low-density lipoprotein cholesterol and triglyceride levels by 36 +/- 2% (p < 0.001), 48 +/- 3% (p < 0.001) and 23 +/- 5% (p = 0.003), respectively, and increased high density lipoprotein cholesterol by 7 +/- 3% (p = 0.03). Systemic arterial compliance was higher after treatment (placebo vs. atorvastatin: 0.36 +/- 0.03 vs. 0.43 +/- 0.05 ml/mm Hg, p = 0.03). Brachial systolic blood pressure was lower after atorvastatin treatment (154 +/- 3 vs. 148 +/- 2 mm Hg, p = 0.03), as were mean (111 +/- 2 vs. 107 +/- 2 mm Hg, p = 0.04) and diastolic blood pressures (83 +/- 1 vs. 81 +/- 2 mm Hg, p = 0.04). There was a trend toward a reduction in pulse pressure (71 +/- 3 vs. 67 +/- 2 mm Hg, p = 0.08).
CONCLUSIONS
Intensive cholesterol reduction may be beneficial in the treatment of patients with ISH and normal lipid levels, through a reduction in large artery stiffness. |