Digit Ratio (2D:4D) and Cattell's Personality Traits
The ratio between the lengths of the second and fourth fingers (2D:4D) is sexually dimorphic; it is lower in men than in women. Studies using broad personality domains have yielded correlations of 2D:4D with neuroticism, extraversion, or agreeableness, but the obtained results have been inconsistent. We correlated the 2D:4D of 184 women and 101 men with their scores on Cattell's 16 Personality Factor (16PF) Questionnaire. We found that women with a higher (more 'feminine') right-hand 2D:4D scored lower in emotional stability and social boldness and higher in privateness. Mediator analysis suggested that emotional stability is primarily correlated with 2D:4D and acts as a mediator between 2D:4D and social boldness. Privateness appears to be mediated by an even more complex path. We discuss the usefulness of primary-level personality questionnaires and mediator analyses in the investigation of psycho-morphological associations. Copyright © 2007 John Wiley & Sons, Ltd.
Video2GIF: Automatic Generation of Animated GIFs from Video
We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping videos with no sound, a perfect combination of image and video that captures our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user-generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as a GIF. We train our model to learn what visual content is often selected for GIFs by using over 100K user-generated GIFs and their corresponding video sources. We deal effectively with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns that are frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we show the advantage of our approach over several state-of-the-art methods.
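The robust ranking idea can be illustrated with a minimal sketch. This is not the authors' implementation: the scalar Huber form, the threshold `delta`, and the pairwise hinge formulation below are illustrative assumptions about how a Huber loss might plug into a ranking objective.

```python
def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so outlier
    pairs (e.g. noisy web labels) are penalized less harshly than under
    a squared loss."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)

def pairwise_ranking_loss(score_gif, score_other, margin=1.0, delta=1.0):
    """Hinge-style pairwise loss: the GIF-worthy segment should score
    higher than a non-selected segment by at least `margin`; the margin
    violation is passed through the robust Huber loss."""
    violation = max(0.0, margin - (score_gif - score_other))
    return huber(violation, delta)
```

A well-ranked pair (GIF segment scoring well above the other) incurs zero loss, while a badly mis-ranked pair is penalized only linearly, which is the sense in which the loss is robust to outliers.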
Developing a successful SemEval task in sentiment analysis of Twitter and other social media texts
We present the development and evaluation of a semantic analysis task that lies at the intersection of two very trendy lines of research in contemporary computational linguistics: (i) sentiment analysis, and (ii) natural language processing of social media text. The task was part of SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval. The task ran in 2013 and 2014, attracting the highest number of participating teams at SemEval in both years, and there is an ongoing edition in 2015. The task included the creation of a large contextual and message-level polarity corpus consisting of tweets, SMS messages, LiveJournal messages, and a special test set of sarcastic tweets. The evaluation attracted 44 teams in 2013 and 46 in 2014, who used a variety of approaches. The best teams were able to outperform several baselines by sizable margins, with improvement across the two years the task has been run. We hope that the long-lasting role of this task and the accompanying datasets will be to serve as a test bed for comparing different approaches, thus facilitating research.
Safety critical computer systems: An information management perspective on their development
The growing use of computers to control critical functions of complex systems brings with it concerns over dependability of the software. Engineers involved in developing such systems use a range of techniques for hazard and risk assessment, each offering a different safety perspective. Confidence in the system, and ultimately its certification, is (partially) dependent on confidence in consistency and traceability between results of the various analyses, and between each set of results and the system design. That is, software safety may be viewed from one perspective as an information management problem. Clearly, tool support is essential. However, current tool-sets are incapable of achieving consistency at the fine level of granularity required to justify such confidence. This paper presents one approach towards achieving that objective.
Mining Association Rules with Item Constraints
The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a post-processing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that are boolean expressions over the presence or absence of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs.
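The benefit of pushing a constraint into the mining loop rather than post-filtering can be shown with a toy sketch. The naive candidate enumeration and the predicate interface below are illustrative assumptions, not the paper's three algorithms.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, constraint=lambda s: True):
    """Naive frequent-itemset miner that applies a boolean item
    constraint (a predicate over candidate itemsets) inside the
    candidate loop, skipping support counting for itemsets that
    cannot satisfy it, instead of filtering the full result."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            if not constraint(set(cand)):
                continue  # constraint pushed into the mining loop
            support = sum(set(cand) <= t for t in transactions) / len(transactions)
            if support >= min_support:
                result[cand] = support
    return result
```

For example, the constraint `lambda s: 'a' in s` restricts mining to itemsets containing item `'a'`, so support is never counted for the others.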
The Trouble with Quantum Bit Commitment
In a recent paper, Lo and Chau explain how to break a family of quantum bit commitment schemes, and they claim that their attack applies to the 1993 protocol of Brassard, Crépeau, Jozsa and Langlois (BCJL). The intuition behind their attack is correct, and indeed they expose a weakness common to all proposals of a certain kind, but the BCJL protocol does not fall into this category. Nevertheless, it is true that the BCJL protocol is insecure, but the required attack is more subtle. Here we provide the first complete proof that the BCJL protocol is insecure.
Mucocele of the upper lip: case report of an uncommon presentation and its differential diagnosis.
This report describes a lesion of the upper lip that was definitively diagnosed by histologic examination as a mucocele or mucus retention phenomenon. The usual location of mucoceles is the lower lip. This case illustrates an uncommon presentation of mucocele with respect to symptoms, location and duration. The features of a variety of oral lesions are discussed and compared, to help clinicians in establishing an appropriate differential diagnosis.
The Foundations of Cost-Sensitive Learning
This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier.

1 Making decisions based on a cost matrix

Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let the (i, j) entry in a cost matrix C be the cost of predicting class i when the true class is j. If i = j then the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes

L(x, i) = Σ_j P(j|x) C(i, j).   (1)

Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x.
In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j|x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate.

1.1 Cost matrix properties

A cost matrix C always has the following structure when there are only two classes:

                   actual negative     actual positive
predict negative   C(0,0) = c00        C(0,1) = c01
predict positive   C(1,0) = c10        C(1,1) = c11

Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the "reasonableness" conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j).
In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:
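The expected-cost decision rule of Equation (1) and the row-dominance criterion can be sketched directly in a few lines. The credit-card cost values below are invented for illustration, not taken from the paper.

```python
def expected_cost(probs, cost_row):
    """L(x, i) = sum_j P(j|x) * C(i, j) for one candidate prediction i."""
    return sum(p * c for p, c in zip(probs, cost_row))

def optimal_prediction(probs, C):
    """The class i minimizing expected cost, per Equation (1)."""
    return min(range(len(C)), key=lambda i: expected_cost(probs, C[i]))

def never_predicted(C):
    """Rows m that dominate some other row n (C[m][j] >= C[n][j] for
    all j) are never chosen by the optimal policy."""
    return [m for m in range(len(C))
            if any(all(cm >= cn for cm, cn in zip(C[m], C[n]))
                   for n in range(len(C)) if n != m)]

# Hypothetical credit-card example: rows = predict legit/fraud,
# columns = actual legit/fraud. A missed fraud (c01 = 50) is far
# costlier than a wrongly declined transaction (c10 = 1).
C = [[0, 50],
     [1, 0]]
# Even though "legit" is more probable (P = 0.9), declining is optimal:
print(optimal_prediction([0.9, 0.1], C))  # prints 1
```

This is the essence of cost-sensitive decision-making described above: the chosen class minimizes expected cost, not error probability.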
The Impact of Performance-Contingent Rewards on Perceived Autonomy and Competence
Two studies examined the impact of performance-contingent rewards on perceived autonomy, competence, and intrinsic motivation. Autonomy was measured in terms of both decisional and affective reports. The first study revealed an undermining effect of performance-contingent rewards on affective reports of autonomy among university students, and an increase in reports of competence. Decisional autonomy judgements were unaffected by rewards. The second study replicated this pattern of findings among elementary school children. These results help resolve Cognitive Evaluation Theory’s (E. L. Deci & R. M. Ryan, 1985; R. M. Ryan, V. Mims, & R. Koestner, 1983) and Eisenberger, Rhoades, et al.’s (R. Eisenberger, L. Rhoades, & J. Cameron, 1999) divergent positions on the impact of performance-contingent rewards on autonomy. The studies also included measures of intrinsic motivation.
Laparoendoscopic single-site totally extraperitoneal adult inguinal hernia repair: initial 100 patients
This report aims to describe the authors' initial experience with laparoendoscopic single-site (LESS) totally extraperitoneal (TEP) inguinal hernia repair in 100 patients. Patients who underwent an elective LESS TEP inguinal hernia repair between December 2008 and September 2010 in a single center were enrolled prospectively in this study. Patient demographic data, hernia characteristics, and operative and postoperative outcomes were analyzed. An Alexis wound retractor was placed through the 2-cm subumbilical incision as a homemade transumbilical access platform after the preperitoneal space was created by a balloon dissector. All standard TEP procedures were completed using conventional straight laparoscopic instruments. Of the 100 patients in this study, 2 underwent conversion to LESS transabdominal preperitoneal (TAPP) repair. The remaining 98 patients received successful LESS TEP inguinal hernia repair by a single surgeon. No patient required open or conventional laparoscopic conversion. However, one patient did experience recurrence. The mean operative time was 64.2 min, and the hospital stay was 1.54 days. One patient with a history of bladder surgery had a minor intraoperative bladder injury. No major postoperative complication occurred, but 11 patients had seroma or hematoma, 2 had epididymitis, 2 had urinary tract infection, 1 had wound dehiscence, 1 had wound infection, and 1 had urinary retention. This single-arm observational study was limited by the absence of a control cohort. Based on our experience, in the hands of experienced laparoscopic surgeons, LESS TEP repair for adult inguinal hernia using the homemade port as an access platform is feasible and safe and provides acceptable operative outcomes.
Consistent sparsification for graph optimization
In a standard pose-graph formulation of simultaneous localization and mapping (SLAM), due to the continuously increasing numbers of nodes (states) and edges (measurements), the graph may grow prohibitively too large for long-term navigation. This motivates us to systematically reduce the pose graph amenable to available processing and memory resources. In particular, in this paper we introduce a consistent graph sparsification scheme: (i) sparsifying nodes via marginalization of old nodes, while retaining all the information (consistent relative constraints) - which is conveyed in the discarded measurements - about the remaining nodes after marginalization; and (ii) sparsifying edges by formulating and solving a consistent ℓ1-regularized minimization problem, which automatically promotes the sparsity of the graph. The proposed approach is validated on both synthetic and real data.
Role of affective self-regulatory efficacy in diverse spheres of psychosocial functioning.
This prospective study with 464 older adolescents (14 to 19 years at Time 1; 16 to 21 years at Time 2) tested the structural paths of influence through which perceived self-efficacy for affect regulation operates in concert with perceived behavioral efficacy in governing diverse spheres of psychosocial functioning. Self-efficacy to regulate positive and negative affect is accompanied by high efficacy to manage one's academic development, to resist social pressures for antisocial activities, and to engage oneself with empathy in others' emotional experiences. Perceived self-efficacy for affect regulation essentially operated mediationally through the latter behavioral forms of self-efficacy rather than directly on prosocial behavior, delinquent conduct, and depression. Perceived empathic self-efficacy functioned as a generalized contributor to psychosocial functioning. It was accompanied by prosocial behavior and low involvement in delinquency but increased vulnerability to depression in adolescent females.
Identifying Where to Focus in Reading Comprehension for Neural Question Generation
A first step in the task of automatically generating questions for testing reading comprehension is to identify question-worthy sentences, i.e. sentences in a text passage that humans find worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves state-of-the-art performance for paragraph-level question generation for reading comprehension.
RFID-based techniques for human-activity detection
The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.
Auction Mechanisms in Cloud/Fog Computing Resource Allocation for Public Blockchain Networks
As an emerging decentralized secure data management platform, blockchain has gained much popularity recently. To maintain a canonical state of the blockchain data record, proof-of-work based consensus protocols provide the nodes, referred to as miners, in the network with incentives for confirming new blocks of transactions through a process of “block mining” by solving a cryptographic puzzle. Under the circumstance of limited local computing resources, e.g., mobile devices, it is natural for rational miners, i.e., consensus nodes, to offload computational tasks for proof of work to the cloud/fog computing servers. Therefore, we focus on the trading between the cloud/fog computing service provider and miners, and propose an auction-based market model for efficient computing resource allocation. In particular, we consider a proof-of-work based blockchain network, which is constrained by the computation resource and deployed as an infrastructure for decentralized data management applications. Due to the competition among miners in the blockchain network, the allocative externalities are particularly taken into account when designing the auction mechanisms. Specifically, we consider two bidding schemes: the constant-demand scheme where each miner bids for a fixed quantity of resources, and the multi-demand scheme where the miners can submit their preferable demands and bids. For the constant-demand bidding scheme, we propose an auction mechanism that achieves optimal social welfare. In the multi-demand bidding scheme, the social welfare maximization problem is NP-hard. Therefore, we design an approximate algorithm which guarantees truthfulness, individual rationality and computational efficiency. Through extensive simulations, we show that our proposed auction mechanisms with the two bidding schemes can efficiently maximize the social welfare of the blockchain network and provide effective strategies for the cloud/fog computing service provider.
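For intuition on why truthful bidding can be a dominant strategy in a constant-demand setting, here is a toy uniform-price auction sketch. This is a generic textbook mechanism, not the paper's proposed one, and the bid values in the usage example are invented.

```python
def uniform_price_auction(bids, capacity):
    """Sell `capacity` identical units of computing power to the
    highest bidders; every winner pays the highest losing bid.
    For single-unit (constant-demand) bidders this pricing makes
    truthful bidding a dominant strategy, since a winner's payment
    does not depend on her own bid."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = sorted(order[:capacity])
    price = bids[order[capacity]] if len(bids) > capacity else 0.0
    return winners, price
```

With bids [5, 1, 4, 3] and two units, bidders 0 and 2 win and each pays 3, the highest losing bid.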
Yutao Jiao, Ping Wang, Dusit Niyato and Kongrath Suankaewmanee are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. This work has been submitted to the IEEE for possible publication. arXiv:1804.09961v2 [cs.GT] 28 Apr 2018
A psychological perspective on augmented reality in the mathematics classroom
Physical objects and virtual information are used as teaching aids in classrooms everywhere, and until recently, merging these two worlds has been difficult at best. Augmented reality offers the combination of physical and virtual, drawing on the strengths of each. We consider this technology in the realm of the mathematics classroom, and offer theoretical underpinnings for understanding the benefits and limitations of AR learning experiences. The paper presents a framework for understanding AR learning from three perspectives: physical, cognitive, and contextual. On the physical dimension, we argue that physical manipulation affords natural interactions, thus encouraging the creation of embodied representations for educational concepts. On the cognitive dimension, we discuss how spatiotemporal alignment of information through AR experiences can aid students' symbolic understanding by scaffolding the progression of learning, resulting in improved understanding of abstract concepts. Finally, on the contextual dimension, we argue that AR creates possibilities for collaborative learning around virtual content and in non-traditional environments, ultimately facilitating personally meaningful experiences. In the process of discussing these dimensions, we discuss examples from existing AR applications and provide guidelines for future AR learning experiences, while considering the pragmatic and technological concerns facing the widespread implementation of augmented reality inside and outside the classroom. © 2013 Elsevier Ltd. All rights reserved.
Building strong e-democracy: the role of technology in developing democracy for the information age
The idea of democracy had come a long way before it was given its first modern forms in the liberal ideas of the 17th and 18th centuries. Now the premises of this hierarchical and representative political system are crumbling, and we must seriously consider the need to revitalize democracy. This article aims at clarifying the overall preconditions for the revitalization of democracy, and demonstrates how to build a comprehensive framework for a multidimensional institutional design in which the potentials of ICTs are made to serve relevant democratic purposes. The functioning of any contemporary democratic system is conditioned by contextual factors such as increased global interdependency, extended use of market-based mechanisms, significant impacts of media and ICTs, new forms of governance, and individualism in its various forms. One of the most burning issues is how to develop new democracy in such a complex setting so that it accords with people's ways of thinking and acting. To ensure this, citizens with all their collective actions and willingness to influence public affairs must be placed in the overall framework of e-transformation in politics [11]. This implies that we go beyond the dichotomous discourse that suggests that we have a choice to make between democracy-as-usual and direct e-democracy [9].
Improving the Transformer Translation Model with Document-Level Context
Although the Transformer translation model (Vaswani et al., 2017) has achieved state-of-the-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge. In this work, we extend the Transformer model with a new context encoder to represent document-level context, which is then incorporated into the original encoder and decoder. As large-scale document-level parallel corpora are usually not available, we introduce a two-step training method to take full advantage of abundant sentence-level parallel corpora and limited document-level parallel corpora. Experiments on the NIST Chinese-English datasets and the IWSLT French-English datasets show that our approach improves over Transformer significantly.
Microstrip Bandpass Filter Using Degenerate Modes of a Novel Meander Loop Resonator
A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.
L-Tryptophan administered to chronic sleep-onset insomniacs: Late-appearing reduction of sleep latency
The effects of 3 g l-tryptophan on sleep, performance, arousal threshold, and brain electrical activity during sleep were assessed in 20 male, chronic sleep-onset insomniacs (mean age 20.3±2.4 years). Following a sleep laboratory screening night, all subjects received placebo for 3 consecutive nights (single-blind), ten subjects received l-tryptophan, and ten received placebo for 6 nights (double-blind). All subjects received placebo on 2 withdrawal nights (single-blind). There was no effect of l-tryptophan on sleep latency during the first 3 nights of administration. On nights 4–6 of administration, sleep latency was significantly reduced. Unlike benzodiazepine hypnotics, l-tryptophan did not alter sleep stages, impair performance, elevate arousal threshold, or alter brain electrical activity during sleep.
Local anaesthetic wound infiltration and abdominal nerves block during caesarean section for postoperative pain relief.
BACKGROUND Caesarean section delivery is becoming more frequent. Childbirth is an emotion-filled event and the mother needs to bond with her newborn baby as early as possible. Any intervention that leads to improvement in pain relief is worthy of investigation. Local anaesthetics, either on their own or in combination with opioids or nonsteroidal anti-inflammatory drugs, have been employed as an adjunct to other postoperative pain relief strategies. Conflicting reports have been noted. OBJECTIVES To assess the effects of local anaesthetic agent wound infiltration/irrigation and/or abdominal nerve blocks on post-caesarean section pain and the mother's well-being and interaction with her baby. SEARCH STRATEGY We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (April 2009). SELECTION CRITERIA Randomised controlled trials of pre-emptive local analgesia during caesarean section. DATA COLLECTION AND ANALYSIS One author extracted data. The second author checked the data. MAIN RESULTS Twenty studies (1150 women) were included. Women who had caesarean section performed under regional analgesia and had wound infiltration had a decrease in morphine consumption at 24 hours (SMD -1.70; 95% confidence interval (CI) -2.75 to -0.94) compared to placebo. In women under general anaesthesia, with caesarean section wound infiltration and peritoneal spraying with local anaesthetic (one study, 100 participants), the need for opioid rescue was reduced (risk ratio (RR) 0.51; 95% CI 0.38 to 0.69). The numerical pain score (0 to 10) within the first hour was also reduced (mean difference (MD) -1.46; 95% CI -2.60 to -0.32). Women with regional analgesia who had local anaesthetic and non-steroidal anti-inflammatory cocktail wound infiltration consumed less morphine (one study, 60 participants; MD -7.40 mg; 95% CI -9.58 to -5.22) compared to local anaesthetic control. Women who had regional analgesia with abdominal nerves blocked had decreased opioid consumption (four studies, 175 participants; MD -25.80 mg; 95% CI -50.39 to -5.37). For the outcome of visual analogue scale 0 to 10 over 24 hours, no advantage was demonstrated in the single study of 50 participants who had the wound infiltrated with a mixture of local analgesia and narcotics versus local analgesia. Addition of ketamine to the local analgesia in women who had regional analgesia does not confer any advantage. AUTHORS' CONCLUSIONS Local analgesia infiltration and abdominal nerve blocks as adjuncts to regional analgesia and general anaesthesia are of benefit in caesarean section by reducing opioid consumption. Nonsteroidal anti-inflammatory drugs as an adjuvant may confer additional pain relief.
Learning Structured Output Representation using Deep Conditional Generative Models
Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and a multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complementary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.
An Exploration of the Canon of Hausa Prose Fiction in Hausa Language and Translation: The Literary Contest of 1933 as a Historical Reference
The paper's premise is that the literary contest of 1933 has been a very important historical reference and a determining condition for the emergence of the canon of Hausa prose fiction (Furniss, 1991, 1996). As a consequence, my exploration is directed first toward the role of the British colonial administration as the organizer of the literary contest, and then toward the role played by the university critics in transforming the winning essays into a corpus of literary masterpieces, thus creating some of the conditions for the emergence and consolidation of the canon of Hausa prose fiction in both the Hausa language and in translation into English.
Walking in Facebook: A Case Study of Unbiased Sampling of OSNs
With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the "ground-truth" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.
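The Metropolis-Hastings random walk (MHRW) used above can be sketched in a few lines. The adjacency-list format and acceptance rule below are the standard textbook MHRW targeting the uniform distribution over nodes, not the authors' crawler implementation.

```python
import random

def mhrw(graph, start, steps, rng=None):
    """Metropolis-Hastings random walk targeting the uniform
    distribution over nodes: propose a uniform neighbor w of the
    current node v, accept with probability min(1, deg(v)/deg(w)).
    This cancels the degree bias of a simple random walk, which
    otherwise oversamples high-degree users."""
    rng = rng or random.Random(0)
    v, samples = start, []
    for _ in range(steps):
        w = rng.choice(graph[v])
        if rng.random() <= len(graph[v]) / len(graph[w]):
            v = w  # accept the move; otherwise stay at v (self-loop)
        samples.append(v)
    return samples
```

On a star graph, for instance, a simple random walk returns to the hub every other step, while MHRW rejects most leaf-to-hub moves and so spends roughly equal time at every node in the long run.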
Best Practices for Measuring Students’ Attitudes toward Learning Science
Science educators often characterize the degree to which tests measure different facets of college students' learning, such as knowing, applying, and problem solving. A casual survey of scholarship of teaching and learning research studies reveals that many educators also measure how students' attitudes influence their learning. Students' science attitudes refer to their positive or negative feelings and predispositions to learn science. Science educators use attitude measures, in conjunction with learning measures, to inform the conclusions they draw about the efficacy of their instructional interventions. The measurement of students' attitudes poses challenges similar to, yet distinct from, those of measuring learning, such as determining the validity and reliability of instruments and selecting appropriate methods for conducting statistical analyses. In this review, we describe techniques commonly used to quantify students' attitudes toward science. We also discuss best practices for the analysis and interpretation of attitude data.
Cobalt-salen complex-catalyzed oxidative generation of alkyl radicals from aldehydes for the preparation of hydroperoxides.
Catalytic generation of alkyl radicals from aldehydes via oxidative deformylation was realized using a cobalt-salen complex with H2O2. The deformylation was thought to proceed through homolytic cleavage of peroxohemiacetal intermediates to provide even primary alkyl radicals under mild conditions. Variously substituted and functionalized hydroperoxides were obtained from corresponding aldehydes in good yield.
Virtual spatial registration of stand-alone fNIRS data to MNI space
The registration of functional brain data to common stereotaxic brain space facilitates data sharing and integration across different subjects, studies, and even imaging modalities. Thus, we previously described a method for the probabilistic registration of functional near-infrared spectroscopy (fNIRS) data onto Montreal Neurological Institute (MNI) coordinate space that can be used even when magnetic resonance images of the subjects are not available. This method, however, requires the careful measurement of scalp landmarks and fNIRS optode positions using a 3D-digitizer. Here we present a novel registration method, based on simulations in place of physical measurements for optode positioning. First, we constructed a holder deformation algorithm and examined its validity by comparing virtual and actual deformation of holders on spherical phantoms and real head surfaces. The discrepancies were negligible. Next, we registered virtual holders on synthetic heads and brains that represent size and shape variations among the population. The registered positions were normalized to MNI space. By repeating this process across synthetic heads and brains, we statistically estimated the most probable MNI coordinate values, and clarified errors, which were in the order of several millimeters across the scalp, associated with this estimation. In essence, the current method allowed the spatial registration of completely stand-alone fNIRS data onto MNI space without the use of supplementary measurements. This method will not only provide a practical solution to the spatial registration issues in fNIRS studies, but will also enhance cross-modal communications within the neuroimaging community.
Extended Target Tracking Using Gaussian Processes with High-Resolution Automotive Radar
In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on two sets of real data.
Learning Temporal Dynamics for Video Super-Resolution: A Deep Learning Approach
Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relations within one single image, the key challenge in video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames in addition to the spatial relation. This remains challenging because complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve state-of-the-art SR results in terms of not only spatial consistency but also temporal coherence on public video data sets.
More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.
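The adaptive aggregation idea, combining responses from filters at several temporal scales with data-dependent weights, can be imitated in a toy NumPy sketch. Here the weights are a heuristic (agreement with the center frame) rather than learned, and `adaptive_temporal_aggregate` is an assumed name, not the paper's network.

```python
import numpy as np

def adaptive_temporal_aggregate(frames, scales=(1, 3, 5)):
    """Toy temporal-adaptive aggregation over a (T, H, W) frame stack.

    For each scale s, average the s central frames; then combine the
    per-scale responses with softmax weights derived from how well each
    response agrees with the center frame, so scales corrupted by large
    motion contribute less.
    """
    T, H, W = frames.shape
    c = T // 2
    responses = np.stack(
        [frames[c - s // 2 : c + s // 2 + 1].mean(axis=0) for s in scales]
    )  # (len(scales), H, W)
    # Negative mean squared deviation from the center frame per scale.
    err = -((responses - frames[c]) ** 2).reshape(len(scales), -1).mean(axis=1)
    w = np.exp(err - err.max())
    w /= w.sum()
    # Weighted combination of the multi-scale responses.
    return np.tensordot(w, responses, axes=1)  # (H, W)
```

For a static clip the three scales agree, the weights are uniform, and the aggregate reproduces the center frame exactly.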
Savoring the Past: Positive Memories Evoke Value Representations in the Striatum
Reminders of happy memories can bring back pleasant feelings tied to the original experience, suggesting an intrinsic value in reminiscing about the positive past. However, the neural circuitry underlying the rewarding aspects of autobiographical memory is poorly understood. Using fMRI, we observed enhanced activity during the recall of positive relative to neutral autobiographical memories in corticostriatal circuits that also responded to monetary reward. Enhanced activity in the striatum and medial prefrontal cortex was associated with increases in positive emotion during recall, and striatal engagement further correlated with individual measures of resiliency. Striatal response to the recall of positive memories was greater in individuals whose mood improved after the task. Notably, participants were willing to sacrifice a more tangible reward, money, in order to reminisce about positive past experiences. Our findings suggest that recalling positive autobiographical memories is intrinsically valuable, which may be adaptive for regulating positive emotion and promoting better well-being.
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods very successfully solve a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows us to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.
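The pixel-wise decomposition rests on simple relevance-redistribution rules applied layer by layer. A minimal sketch of the epsilon-stabilized variant of such a rule for one linear layer follows; the function name and epsilon handling are illustrative, not the paper's exact formulation.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One epsilon-LRP backward step through a linear layer.

    weights: (n_in, n_out); activations: (n_in,) layer input;
    relevance_out: (n_out,) relevance arriving at the layer's output.
    Each input unit receives relevance in proportion to its
    contribution z_ij = a_i * w_ij to each output pre-activation;
    eps stabilizes near-zero denominators.
    """
    z = activations[:, None] * weights                 # (n_in, n_out)
    col = z.sum(axis=0)
    denom = col + eps * np.sign(col)
    return (z / denom) @ relevance_out                 # (n_in,)
```

A key property is (approximate) conservation: the relevance redistributed to the inputs sums to the relevance that entered the layer, so heatmap mass is preserved as the decomposition proceeds toward the pixels.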
Transnational Corporations as ‘Keystone Actors’ in Marine Ecosystems
Keystone species have a disproportionate influence on the structure and function of ecosystems. Here we analyze whether a keystone-like pattern can be observed in the relationship between transnational corporations and marine ecosystems globally. We show how thirteen corporations control 11-16% of the global marine catch (9-13 million tons) and 19-40% of the largest and most valuable stocks, including species that play important roles in their respective ecosystems. They dominate all segments of seafood production, operate through an extensive global network of subsidiaries and are profoundly involved in fisheries and aquaculture decision-making. Based on our findings, we define these companies as keystone actors of the Anthropocene. The phenomenon of keystone actors represents an increasingly important feature of the human-dominated world. Sustainable leadership by keystone actors could result in cascading effects throughout the entire seafood industry and enable a critical transition towards improved management of marine living resources and ecosystems.
THE NEXT PARADIGM SHIFT: FROM VEHICULAR NETWORKS TO VEHICULAR CLOUDS
The past decade has witnessed a growing interest in vehicular networking and its vast array of potential applications. Increased wireless accessibility of the Internet from vehicles has triggered the emergence of vehicular safety applications, location-specific applications, and multimedia applications. Recently, Professor Olariu and his coworkers have promoted the vision of Vehicular Clouds (VCs), a non-trivial extension, along several dimensions, of conventional Cloud Computing. In a VC, the under-utilized vehicular resources including computing power, storage and Internet connectivity can be shared between drivers or rented out over the Internet to various customers, very much as conventional cloud resources are. The goal of this chapter is to introduce and review the challenges and opportunities offered by what promises to be the Next Paradigm Shift: From Vehicular Networks to Vehicular Clouds. Specifically, the chapter introduces VCs and discusses some of their distinguishing characteristics and a number of fundamental research challenges. To illustrate the huge array of possible applications of the powerful VC concept, a number of possible application scenarios are presented and discussed. As the adoption and success of the vehicular cloud concept is inextricably related to security and privacy issues, a number of security and privacy issues specific to vehicular clouds are discussed as well. Additionally, data aggregation and empirical results are presented. Mobile Ad Hoc Networking: Cutting Edge Directions, Second Edition. Edited by Stefano Basagni, Marco Conti, Silvia Giordano, and Ivan Stojmenovic. © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.
BIM in facilities management applications: a case study of a large university complex
Purpose – Building information modelling (BIM) in facilities management (FM) applications is an emerging area of research based on the theoretical proposition that BIM information, generated and captured during the lifecycle of a facility, can improve its management. Using this proposition as a starting point, the purpose of this paper is to investigate the value of BIM and the challenges affecting its adoption in FM applications. Design/methodology/approach – Two inter-related research methods are utilised. The literature is utilised to identify the application areas, value and challenges of BIM in FM. Due to the lack of case studies identified in the literature review, and to provide empirical evidence of the value and challenges of BIM in FM, a case study of Northumbria University's city campus is used to empirically explore the value and challenges of BIM in FM. Findings – The results demonstrated that BIM value in FM stems from improvements to the current manual processes of information handover, to the accuracy and accessibility of FM data, and to the efficiency of work order execution. The main challenges were the lack of methodologies that demonstrate the tangible benefits of BIM in FM, the limited knowledge of implementation requirements including BIM-for-FM modelling requirements, the interoperability between BIM and FM technologies, the presence of disparate operational systems managing the same building and, finally, the shortage of BIM skills in the FM industry. Originality/value – There is a lack of real-life cases on BIM in FM, especially for existing assets, despite new constructions representing only 1-2 per cent of the total building stock in a typical year. The originality of this paper stems from both adding a real-life case study of BIM in FM and providing empirical evidence of both the value and challenges of BIM in FM applications.
Reinforcement Learning with Parameterized Actions
We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions—discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goalscoring and Platform domains.
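On a single-state problem, the alternating structure behind Q-PAMDP, value updates for the discrete action choice interleaved with policy search over each action's continuous parameter, can be sketched as follows. This is a heavily simplified, hypothetical illustration (a bandit-style Q update and hill-climbing in place of the paper's policy search), not the algorithm as published.

```python
import random

def q_pamdp_sketch(reward_fn, n_actions, episodes=2000, alpha=0.1,
                   sigma=0.2, rng=random.Random(0)):
    """Toy alternating scheme for parameterized actions.

    reward_fn(a, x) -> float: reward for discrete action a run with
    continuous parameter x. Discrete Q-values pick which parameterized
    action to run (epsilon-greedy); a hill-climbing step adapts the
    chosen action's continuous parameter.
    """
    q = [0.0] * n_actions
    theta = [0.0] * n_actions        # one continuous parameter per action
    for _ in range(episodes):
        if rng.random() < 0.1:       # explore
            a = rng.randrange(n_actions)
        else:                        # exploit current Q-values
            a = max(range(n_actions), key=lambda i: q[i])
        # local policy search: keep a perturbed parameter if it is better
        cand = theta[a] + rng.gauss(0, sigma)
        if reward_fn(a, cand) > reward_fn(a, theta[a]):
            theta[a] = cand
        r = reward_fn(a, theta[a])
        q[a] += alpha * (r - q[a])   # bandit-style Q update
    return q, theta
```

With a reward that favors one action at a nonzero parameter value, the sketch both identifies the better discrete action and tunes its parameter toward the optimum.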
Association of Second and Third Trimester Weight Gain in Pregnancy with Maternal and Fetal Outcomes
OBJECTIVE To investigate the association between weekly weight gain, during the second and third trimesters, classified according to the 2009 Institute of Medicine (IOM/NRC) recommendations, and maternal and fetal outcomes. METHODS Gestational weight gain was evaluated in 2,244 pregnant women of the Brazilian Study of Gestational Diabetes (Estudo Brasileiro do Diabetes Gestacional--EBDG). Outcomes were cesarean delivery, preterm birth and small or large for gestational age birth (SGA, LGA). Associations between inadequate weight gain and outcomes were estimated using robust Poisson regression adjusting for pre-pregnancy body mass index, trimester-specific weight gain, age, height, skin color, parity, education, smoking, alcohol consumption, gestational diabetes and hypertensive disorders in pregnancy. RESULTS In fully adjusted models, in the second trimester, insufficient weight gain was associated with SGA (relative risk [RR] 1.72, 95% confidence interval [CI] 1.26-2.33), and excessive weight gain with LGA (RR 1.64, 95% CI 1.16-2.31); in the third trimester, excessive weight gain was associated with preterm birth (RR 1.70, 95% CI 1.08-2.70) and cesarean delivery (RR 1.21, 95% CI 1.03-1.44). Women with less than recommended gestational weight gain in the 2nd trimester had a lower risk of cesarean delivery (RR 0.82, 95% CI 0.71-0.96) than women with adequate gestational weight gain in this trimester. CONCLUSION Though insufficient weight gain in the 3rd trimester was not associated with adverse outcomes, other deviations from recommended weight gain during the second and third trimesters were associated with adverse pregnancy outcomes. These findings support, in part, the 2009 IOM/NRC recommendations for nutritional monitoring during pregnancy.
Ankle syndesmosis: a qualitative and quantitative anatomic analysis.
BACKGROUND Syndesmosis sprains can contribute to chronic pain and instability, which are often indications for surgical intervention. The literature lacks sufficient objective data detailing the complex anatomy and localized osseous landmarks essential for current surgical techniques. PURPOSE To qualitatively and quantitatively analyze the anatomy of the 3 syndesmotic ligaments with respect to surgically identifiable bony landmarks. STUDY DESIGN Descriptive laboratory study. METHODS Sixteen ankle specimens were dissected to identify the anterior inferior tibiofibular ligament (AITFL), posterior inferior tibiofibular ligament (PITFL), interosseous tibiofibular ligament (ITFL), and bony anatomy. Ligament lengths, footprints, and orientations were measured in reference to bony landmarks by use of an anatomically based coordinate system and a 3-dimensional coordinate measuring device. RESULTS The syndesmotic ligaments were identified in all specimens. The pyramidal-shaped ITFL was the broadest, originating from the distal interosseous membrane expansion, extending distally, and terminating 9.3 mm (95% CI, 8.3-10.2 mm) proximal to the central plafond. The tibial cartilage extended 3.6 mm (95% CI, 2.8-4.4 mm) above the plafond, a subset of which articulated directly with the fibular cartilage located 5.2 mm (95% CI, 4.6-5.8 mm) posterior to the anterolateral corner of the tibial plafond. The primary AITFL band(s) originated from the tibia 9.3 mm (95% CI, 8.6-10.0 mm) superior and medial to the anterolateral corner of the tibial plafond and inserted on the fibula 30.5 mm (95% CI, 28.5-32.4 mm) proximal and anterior to the inferior tip of the lateral malleolus. 
Superficial fibers of the PITFL originated along the distolateral border of the posterolateral tubercle of the tibia 8.0 mm (95% CI, 7.5-8.4 mm) proximal and medial to the posterolateral corner of the plafond and inserted along the medial border of the peroneal groove 26.3 mm (95% CI, 24.5-28.1 mm) superior and posterior to the inferior tip of the lateral malleolus. CONCLUSION The qualitative and quantitative anatomy of the syndesmotic ligaments was reproducibly described and defined with respect to surgically identifiable bony prominences. CLINICAL RELEVANCE Data regarding anatomic attachment sites and distances to bony prominences can optimize current surgical fixation techniques, improve anatomic restoration, and reduce the risk of iatrogenic injury from malreduction or misplaced implants. Quantitative data also provide the consistency required for the development of anatomic reconstructions.
Sentiment analysis: From binary to multi-class classification: A pattern-based approach for multi-class sentiment analysis in Twitter
Most of the state-of-the-art work on automatic sentiment analysis and opinion mining of texts collected from social networks and microblogging websites is oriented towards the classification of texts into positive and negative. In this paper, we propose a pattern-based approach that goes deeper in the classification of texts collected from Twitter (i.e., tweets). We classify the tweets into 7 different classes; however, the approach can be configured to classify into more classes. Experiments show that our approach reaches a classification accuracy of 56.9% and a precision on sentimental tweets (other than neutral and sarcastic) of 72.58%. Nevertheless, the approach proves to be very accurate in binary classification (i.e., classification into "positive" and "negative") and ternary classification (i.e., classification into "positive", "negative" and "neutral"): in the former case, we reach an accuracy of 87.5% for the same dataset after removing neutral tweets, and in the latter case, we reach a classification accuracy of 83.0%.
Update on nasopharyngeal carcinoma.
The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial, with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese populations, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although it remains higher than in the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. Environmental carcinogens, especially those that contain high levels of volatile nitrosamines, are also important in the etiology of NPC. Chinese populations consume salted fish, specifically Cantonese-style salted fish, especially during early life. Perhaps early-life (weaning period) exposure is important in the ''two-hit'' hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC.
While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).
Purine Metabolism and Inhibition of Xanthine Oxidase in Severely Hypoxic Neonates Going onto Extracorporeal Membrane Oxygenation
The effect of allopurinol on inhibiting purine metabolism via the xanthine oxidase pathway in neonates with severe, progressive hypoxemia during rescue and reperfusion with extracorporeal membrane oxygenation (ECMO) was examined. Twenty-five term infants meeting ECMO criteria were randomized in a double-blinded, placebo-controlled trial. Fourteen did not receive allopurinol, whereas 11 were treated with 10 mg/kg after meeting criteria and before cannulation, in addition to a 20-mg/kg priming dose to the ECMO circuit. Infant plasma samples before cannulation, and at 15, 30, 60, and 90 min, and 3, 6, 9, and 12 h on bypass were analyzed (HPLC) for allopurinol, oxypurinol, hypoxanthine, xanthine, and uric acid concentrations. Urine samples were similarly evaluated for purine excretion. Hypoxanthine concentrations in isolated blood-primed ECMO circuits were separately measured. Hypoxanthine, xanthine, and uric acid levels were similar in both groups before ECMO. Hypoxanthine was higher in allopurinol-treated infants during the time of bypass studied (p = 0.022). Xanthine was also elevated (p < 0.001), and uric acid was decreased (p = 0.005) in infants receiving allopurinol. Similarly, urinary elimination of xanthine increased (p < 0.001), and of uric acid decreased (p = 0.04) in treated infants. No allopurinol toxicity was observed. Hypoxanthine concentrations were significantly higher in isolated ECMO circuits and increased over time during bypass (p < 0.001). This study demonstrates that allopurinol given before cannulation for and during ECMO significantly inhibits purine degradation and uric acid production, and may reduce the production of oxygen free radicals during reoxygenation and reperfusion of hypoxic neonates recovered on bypass.
Knowledge of the abortion legislation among South African women: a cross-sectional study
BACKGROUND In order to ensure that legalized abortion in South Africa improves reproductive health, women must know that abortion is a legal option in the case of unwanted pregnancy. This study investigated knowledge of abortion legislation eight years after the introduction of legal abortion services in one province of South Africa. METHODS In 2004/2005, we conducted a cross-sectional study among 831 sexually-active women attending 26 public health clinics in one urban and one rural health region of the Western Cape Province. RESULTS Thirty-two percent of women did not know that abortion is currently legal. Among those who knew of legal abortion, few had knowledge of the time restrictions involved. CONCLUSION In South Africa there is an unmet need among women for information on abortion. Strategies should be developed to address this gap so that women are fully informed of their rights to a safe and legal termination of pregnancy.
77 GHz SiGe based bipolar transceivers for automotive radar applications — An industrial perspective
Radar sensors operating in the 76–81 GHz range are considered key for Advanced Driver Assistance Systems (ADAS) like adaptive cruise control (ACC), collision mitigation and avoidance systems (CMS) or lane change assist (LCA). These applications are the next wave in automotive safety systems and have thus generated increased interest in lower-cost solutions, especially for the mm-wave front-end (FE) section. Today, most of the radar sensors in this frequency range use GaAs based FEs. These multi-chip GaAs FEs are a main cost driver in current radar sensors due to their low integration level. The step towards monolithic microwave integrated circuits (MMIC) based on a 200 GHz fT silicon-germanium (SiGe) technology integrating all needed RF building blocks (mixers, VCOs, dividers, buffers, PAs) on a single die not only leads to cost reductions but also benefits the testability of these MMICs. This is especially important in the light of upcoming functional safety standards like ASIL-D and ISO 26262.
Relationships between epistaxis, migraines, and triggers in hereditary hemorrhagic telangiectasia.
OBJECTIVES/HYPOTHESIS To identify whether relationships exist between epistaxis and migraines in hereditary hemorrhagic telangiectasia (HHT), to potentially provide further preventative and therapeutic options for the debilitating nosebleeds that are often very difficult to manage in clinical practice. STUDY DESIGN Study participants were recruited from a UK specialist service, and online following advertisement by the HHT Foundation International. They completed a nonbiased questionnaire in which paired questions on nosebleeds and migraines were separated by at least 17 other questions. METHODS Migraines were defined as headaches with associated autonomic and/or neurological features. The reported frequencies and precipitants of epistaxis and migraines were compared using numerical scales applied equally for each condition. RESULTS The 220 HHT-affected respondents reported frequent nosebleeds, 153 (69.5%) used iron tablets, and 39 (17.7%) had received at least 10 blood transfusions. Migraines displaying typical features were reported by 51 (23.2%), and were more common with pulmonary or cerebral arteriovenous malformations. Thirty of 51 (58.8%) migraine sufferers reported that nosebleeds occurred at the same time as their migraines. More frequent migraines were reported by patients with more frequent nosebleeds (r2=15%, P=.007), or transfusions (r2=16.9%, P=.004). In menstrual, lifestyle, and dietary analyses, consistency was observed between factors having no effect, and those provoking both nosebleeds and migraines in multiple patients (premenses; activity; lack of sleep; stress; caffeine, cheese, alcohol, and chocolate). CONCLUSIONS We demonstrate an unexpected and provocative association between nosebleeds and migraines in HHT patients. Evaluation of whether antimigraine approaches limit HHT nosebleeds may be appropriate. LEVEL OF EVIDENCE 4.
MAP kinase phosphatase-1, a critical negative regulator of the innate immune response.
Mitogen-activated protein (MAP) kinase cascades are crucial signal transduction pathways in the regulation of the host inflammatory response to infection. MAP kinase phosphatase (MKP)-1, an archetypal member of the MKP family, plays a pivotal role in the deactivation of p38 and JNK. In vitro studies using cultured macrophages have provided compelling evidence for a central role of MKP-1 in the restraint of pro-inflammatory cytokine biosynthesis. Studies using MKP-1 knockout mice have strengthened the findings from in vitro studies and defined the critical importance of MKP-1 in the regulation of pro-inflammatory cytokine synthesis in vivo during the host response to bacterial cell wall components. Upon challenge with Toll-like receptor ligands MKP-1 knockout mice produced dramatically greater amounts of inflammatory cytokines, developed severe hypotension and multi-organ failure, and exhibited a remarkable increase in mortality. More recent investigations using intact bacteria confirmed these observations and further revealed novel functions of MKP-1 in host defense against bacterial infection. These studies demonstrate that MKP-1 is an essential feedback regulator of the innate immune response, and that it plays a critical role in preventing septic shock and multi-organ dysfunction during pathogenic infection. In this review, we will summarize the studies on the function of MKP-1 in innate immune responses and discuss the regulation of this novel protein phosphatase.
Is There a Relationship between Baseline and Treatment-Associated Changes in [3H]-IMI Platelet Binding and Clinical Response in Major Depression?
A peripheral model for the central 5-HT neuron is the characterization of platelet imipramine binding. We studied an outpatient major depressive cohort who fulfilled Research Diagnostic Criteria for agitation. After a 1-week placebo lead-in, subjects were blindly randomized to either imipramine (IMI) or fluoxetine (FLU) during an 8-week, double-blind study period. Thirty-three subjects (15 IMI, 18 FLU) provided both baseline and endpoint samples for the platelet [3H]-IMI assay. Depression efficacy was comparable across the two treatments, whereas FLU was significantly more effective in reducing secondary anxiolysis (p = .023). Discontinuations due to an adverse event were significantly more frequent with IMI than FLU (p < .01). Baseline affinity (KD) was mildly predictive of change in the HAMD (r = -.22; p = .07). Whereas baseline to endpoint density (Bmax) changes (Δ) were similar for IMI (183 ± 329 fmol/mg) and FLU (196 ± 402 fmol/mg), a statistically significant treatment difference in ΔKD emerged (IMI -0.005 ± 0.010 pmol/ml versus FLU 0.08 ± 0.013 at p = .004). Moreover, the changes in KD and HAMD17 trended to a positive correlation among only the FLU-treated subjects (r = 0.406, p = .095). The clinical effects of 5-HT-based selective antidepressants may be reflected by dynamic changes in the platelet 5-HT uptake apparatus. These data suggest that the baseline conformational status of the [3H]-IMI:5-HT transporter may reflect a "capacity" for a treatment response.
Improving Cross-Topic Authorship Attribution: The Role of Pre-Processing
The effectiveness of character n-gram features for representing the stylistic properties of a text has been demonstrated in various independent Authorship Attribution (AA) studies. Moreover, it has been shown that some categories of character n-grams perform better than others both under single and cross-topic AA conditions. In this work, we present an improved algorithm for cross-topic AA. We demonstrate that the effectiveness of character n-grams representation can be significantly enhanced by performing simple pre-processing steps and appropriately tuning the number of features, especially in cross-topic conditions.
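The kind of pre-processing the paper advocates, normalizations that strip topic-bearing content before extracting character n-grams, can be sketched as follows; the digit-masking choice and the function name are illustrative assumptions, not the paper's exact pipeline.

```python
import re
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram profile with light, style-preserving pre-processing.

    Lowercasing and masking topic-bearing tokens (digits here) reduce the
    influence of content on the representation, so that authorial style
    dominates -- the property that matters under cross-topic conditions.
    """
    text = text.lower()
    text = re.sub(r"\d", "#", text)      # mask digits (topic markers)
    text = re.sub(r"\s+", " ", text)     # normalize whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))
```

The resulting `Counter` can feed any standard classifier; the point of the masking step is that two texts by the same author on different topics now share more n-grams than they would with raw characters.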
Security Threats and Countermeasures for Intra-vehicle Networks
Controller Area Network (CAN) is the leading serial bus system for embedded control. More than two billion CAN nodes have been sold since the protocol's development in the early 1980s. CAN is a mainstream network and was internationally standardized (ISO 11898–1) in 1993. This paper describes an approach to implementing security services on top of a higher level Controller Area Network (CAN) protocol, in particular, CANopen. Since the CAN network is an open, unsecured network, every node has access to all data on the bus. A system which produces and consumes sensitive data is not well suited for this environment. Therefore, a general-purpose security solution is needed which will allow secure nodes access to the basic security services such as authentication, integrity, and confidentiality.
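As a hedged illustration of one of the basic security services mentioned (authentication), and not the paper's actual CANopen protocol, a common pattern is to compute a message authentication code over the frame identifier, payload, and a freshness counter, truncated to fit CAN's small payload. The function below is a hypothetical sketch using HMAC-SHA-256:

```python
import hmac
import hashlib

def authenticate_frame(key, can_id, data, counter):
    # MAC over the CAN identifier, payload bytes, and a monotonically
    # increasing counter (the counter prevents replay of old frames).
    msg = can_id.to_bytes(4, "big") + data + counter.to_bytes(4, "big")
    # Truncate the tag to 4 bytes so it can share a frame with the payload.
    return hmac.new(key, msg, hashlib.sha256).digest()[:4]
```

A receiver holding the same key recomputes the tag and compares it (ideally with a constant-time comparison such as `hmac.compare_digest`); tag truncation and counter width are assumptions for illustration.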
Impact of atrial fibrillation on early complications and one year-survival after cardioverter defibrillator implantation: results from the German DEVICE registry.
AIMS AND OBJECTIVE Outcome data of patients with implantable cardioverter defibrillators (ICD) and atrial fibrillation (AF) are conflicting. The German DEVICE registry aims to add further information on this particular cohort. METHODS AND RESULTS The German DEVICE registry is a nationwide prospective multicenter database of ICD implantations. A total of 3261 patients were included (81% males; 2701 (82.8%) first ICD implantations, 560 (17.2%) ICD replacements). Cardiac resynchronization therapy (CRT-D) was performed in 882 patients (27.0%). Sinus rhythm (SR) was present in 2654 patients (81.4%) and atrial fibrillation in 607 (18.6%). Left ventricular ejection fraction (LVEF) did not differ between groups (SR 32.3%, AF 30.4%; p = 0.09). AF patients were older (AF 70.9 versus SR 63.9 years; p < 0.0001) and presented with more co-morbidities (diabetes, hypertension, chronic kidney disease; all p < 0.001). In-hospital complications were not significantly different between groups (p = 0.58). Follow-up information after one year was available in 2967 patients (91%). One-year overall mortality after first ICD implantation was 4.9% for SR and 11.2% for AF patients (p < 0.0001); mortality one year after ICD replacement was 8.4% for SR and 12.0% for AF (p = 0.34). In terms of one-year mortality, no statistically significant difference between SR and AF patients receiving a CRT device was observed (SR 6.9%, AF 10.7%; p = 0.16). CONCLUSION The German DEVICE registry demonstrates that patients with AF who receive ICD devices are older and have more co-morbidity and more severe heart failure. AF carries an independent 1.39-fold risk (95% CI 1.02-1.89) of death after one year, but only in patients with a first ICD implantation.
A Speaker Adaptive DNN Training Approach for Speaker-Independent Acoustic Inversion
We address the speaker-independent acoustic inversion (AI) problem, also referred to as acoustic-to-articulatory mapping. The scarce availability of multi-speaker articulatory data makes it difficult to learn a mapping which generalizes from a limited number of training speakers and reliably reconstructs the articulatory movements of unseen speakers. In this paper, we propose a Multi-task Learning (MTL)-based approach that explicitly separates the modeling of each training speaker AI peculiarities from the modeling of AI characteristics that are shared by all speakers. Our approach stems from the well known Regularized MTL approach and extends it to feed-forward deep neural networks (DNNs). Given multiple training speakers, we learn for each an acoustic-to-articulatory mapping represented by a DNN. Then, through an iterative procedure, we search for a canonical speaker-independent DNN that is “similar” to all speaker-dependent DNNs. The degree of similarity is controlled by a regularization parameter. We report experiments on the University of Wisconsin X-ray Microbeam Database under different training/testing experimental settings. The results obtained indicate that our MTL-trained canonical DNN largely outperforms a standardly trained (i.e., single task learning-based) speaker independent DNN.
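To make the regularization idea concrete, the toy sketch below (illustrative code, not the authors' DNN implementation) treats each speaker model as a plain weight vector: the canonical model is pulled toward all speaker models, and each speaker update is penalized for drifting away from the canonical one. The parameter `lam` plays the role of the paper's regularization parameter; all names and values are assumptions.

```python
def update_canonical(speaker_weights):
    # Canonical model as the average of speaker-dependent weight vectors,
    # one simple way of pulling a shared model toward all tasks.
    n = len(speaker_weights)
    dim = len(speaker_weights[0])
    return [sum(w[j] for w in speaker_weights) / n for j in range(dim)]

def regularized_step(w, w_canon, grad, lr=0.1, lam=0.5):
    # One gradient step on: task_loss + lam * ||w - w_canon||^2.
    # The penalty term keeps each speaker model close to the canonical one.
    return [wi - lr * (gi + 2 * lam * (wi - ci))
            for wi, gi, ci in zip(w, grad, w_canon)]
```

Alternating these two updates mimics, at vector scale, the iterative search for a canonical speaker-independent model that is "similar" to all speaker-dependent models.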
Principles of Stator DC Winding Excited Vernier Reluctance Machines
Stator dc winding excited Vernier reluctance machines (DC-VRMs) are a novel kind of Vernier reluctance machine with a doubly salient structure and additional dc field windings in the stator to generate the exciting field. Their advantages include a wide speed range, owing to the flexible exciting field provided by the dc winding, and a robust rotor structure without permanent magnets or windings. In this paper, the nature and principles of DC-VRMs are first illustrated theoretically with winding function and harmonic theories. First, by considering the permeance modulation function, the equations and harmonics of the exciting field are obtained. Next, based on these results, the stator/rotor pole combinations and armature winding configuration methods are proposed. Additionally, the expressions for the self-inductance, the mutual inductance, and the back electromotive force (back-EMF) of the armature windings are summarized with the winding function theories. Also, the effects of permeance, field, and armature winding harmonics on inductance harmonics are analyzed. The equation for electromagnetic torque is also given, and the design parameters that may influence the machine's torque are provided. Finally, the inductances and torque in the synchronous reference frame are analyzed. All the analytical results are validated by finite element analyses, and some experimental results are also given to validate the theoretical analysis.
Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing
In this paper, we address two complex issues: 1) text frame classification and 2) multi-oriented text detection in video text frames. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moments with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of features with k-means clustering over a sliding window running through the blocks to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block, based on the observation that the pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame, and then all text candidates are mapped onto a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG), an iterative algorithm based on a nearest neighbor concept. APBG is then applied to the text representatives to fix the bounding boxes for multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets, such as non-horizontal, horizontal, publicly available data (Hua's data) and ICDAR-03 competition data (camera images), show that the proposed method outperforms existing methods proposed for video as well as state-of-the-art methods for scene text.
Massively Parallel NUMA-aware Hash Joins
Driven by two main hardware trends of the past few years, increasing main memory capacity and massively parallel multi-core processing, there has been much research effort in parallelizing well-known join algorithms. However, the non-uniform memory access (NUMA) of these architectures to main memory has gained only limited attention in the design of these algorithms. We study recent proposals of main memory hash join implementations and identify their major performance problems on NUMA architectures. We then develop a NUMA-aware hash join for massively parallel environments, and show how the specific implementation details affect performance on a NUMA system. Our experimental evaluation shows that a carefully engineered hash join implementation outperforms previous high-performance hash joins by a factor of more than two, resulting in an unprecedented throughput of 3/4 billion join argument tuples per second.
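The core pattern such algorithms build on can be sketched in a few lines: hash-partition both inputs so that each partition pair can later be processed from (ideally NUMA-local) memory, then build and probe a hash table per partition. The single-threaded Python below illustrates only this partitioning logic; the paper's contribution lies in how these phases are parallelized and placed across NUMA nodes, which this sketch does not attempt.

```python
def partitioned_hash_join(R, S, keyR, keyS, n_parts=4):
    # Phase 1: partition both inputs by hash of the join key, so that
    # matching tuples always land in the same partition pair.
    partsR = [[] for _ in range(n_parts)]
    partsS = [[] for _ in range(n_parts)]
    for r in R:
        partsR[hash(keyR(r)) % n_parts].append(r)
    for s in S:
        partsS[hash(keyS(s)) % n_parts].append(s)
    # Phase 2: per partition pair, build a hash table on R and probe with S.
    out = []
    for pr, ps in zip(partsR, partsS):
        table = {}
        for r in pr:
            table.setdefault(keyR(r), []).append(r)
        for s in ps:
            for r in table.get(keyS(s), []):
                out.append((r, s))
    return out
```

In a NUMA-aware implementation, each partition pair would be assigned to a worker thread whose memory allocations are local to its NUMA node.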
Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies
Purpose – To provide insight into enterprise resource planning (ERP) adoption, highlighting contact points and significant differences between the ways small to medium-sized enterprises (SMEs) and large companies approach such a task. Design/methodology/approach – The research is based on a wide literature review, focused on the identification of a taxonomy of business and organizational factors influencing ERP adoption. The resulting research model was incorporated in a questionnaire that was preliminarily tested and finally administered to a sample of 366 companies of any size. Responses were collected through personal interviews with a top manager, conducted by a dedicated team. Findings – The analysis of the empirical data shows that business complexity, as a composed factor, is a weak predictor of ERP adoption, whereas company size alone turns out to be a very good one. In other words, companies seem to be disregarding ERP systems as an answer to their business complexity. Unexpectedly, SMEs disregard financial constraints as the main cause for ERP system non-adoption, suggesting structural and organizational reasons as the major ones. This pattern is partially different from what was observed in large organizations, where the first reason for not adopting an ERP system is organizational. Moreover, the decision process regarding the adoption of ERP systems within SMEs is still more affected by exogenous reasons or the "opportunity of the moment" than by business-related factors, contrary to large companies, which are more interested in managing process integration and data redundancy/inconsistency through ERP implementation. Research limitations/implications – The research model is based on the assumption that business complexity and organizational change are the most relevant variables influencing ERP adoption, and such variables are explained through a set of factors inherently limited by the results of the literature review.
Practical implications – The results of the empirical research provide indication to SMEs willing to take into consideration the adoption of an ERP system. The same outcomes could be incorporated into the development strategies of ERP software houses. Originality/value – This paper contributes to enhancing the understanding of the factors influencing the evolution of information systems within SMEs with respect to large companies.
A Bistatic SAR Raw Data Simulator Based on Inverse ω-k Algorithm
A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.
Large-Margin Classification in Hyperbolic Space
Representing data in hyperbolic space can effectively capture latent hierarchical relationships. With the goal of enabling accurate classification of points in hyperbolic space while respecting their hyperbolic geometry, we introduce hyperbolic SVM, a hyperbolic formulation of support vector machine classifiers, and elucidate through new theoretical work its connection to the Euclidean counterpart. We demonstrate the performance improvement of hyperbolic SVM for multi-class prediction tasks on real-world complex networks as well as simulated datasets. Our work allows analytic pipelines that take the inherent hyperbolic geometry of the data into account in an end-to-end fashion without resorting to ill-fitting tools developed for Euclidean space.
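For intuition, classification in hyperbolic space replaces Euclidean distances with hyperbolic geodesic distances. The sketch below computes the geodesic distance in the Poincaré ball model, one common model of hyperbolic space; this is illustrative background only, not the authors' SVM formulation (which works with hyperbolic decision hyperplanes rather than raw distances).

```python
import math

def poincare_dist(u, v):
    # Geodesic distance between points u, v inside the unit Poincare ball:
    # d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))).
    su = sum(x * x for x in u)
    sv = sum(x * x for x in v)
    sd = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1 + 2 * sd / ((1 - su) * (1 - sv)))
```

Distances blow up as points approach the boundary of the ball, which is exactly the property that lets shallow hyperbolic embeddings encode deep hierarchies.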
Image ranking and retrieval based on multi-attribute queries
We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries; moreover, these approaches ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query for ranking/retrieval. Furthermore, we integrate ranking and retrieval within the same formulation by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild (LFW), FaceTracer and PASCAL VOC datasets shows that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.
A Cost-Based Security Analysis of Symmetric and Asymmetric Key Lengths RSA Labs bulletin
Recently, there has been considerable debate concerning key sizes for public key based cryptographic methods. Included in the debate have been considerations about equivalent key sizes for different methods and considerations about the minimum required key size for different methods. In this paper we propose a method of analyzing key sizes based upon the value of the data being protected and the cost of breaking keys. I. Introduction. A. Why is key size important? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question, as one is now left with the task of defining 'infeasible'. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key-breaking effort makes a key vulnerable, such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts, or does one only need to be concerned about possible private, surreptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper. (RSA Laboratories Bulletin, Number 13, April 2000.)
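A toy version of a cost-based analysis converts key length into expected search time at an assumed key-testing rate; the bulletin's actual analysis goes further, weighing attack hardware cost against the value of the protected data. The rate parameter below is a made-up assumption for illustration only.

```python
def brute_force_years(bits, keys_per_second):
    # Expected work for exhaustive key search is half the keyspace,
    # i.e. 2**(bits - 1) trial decryptions on average.
    ops = 2 ** (bits - 1)
    seconds = ops / keys_per_second
    return seconds / (365 * 24 * 3600)
```

Even at an (optimistic, assumed) rate of 10^18 keys per second, a 128-bit symmetric key remains far beyond reach, while a 56-bit key falls in well under a year, which is the kind of comparison this style of analysis makes precise.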
External root resorption after orthodontic treatment: a study of contributing factors
PURPOSE The purpose of this study was to examine the patient- and treatment-related etiologic factors of external root resorption. MATERIALS AND METHODS This study consisted of 163 patients who had completed orthodontic treatments and taken the pre- and post-treatment panoramic and lateral cephalometric radiographs. The length of tooth was measured from the tooth apex to the incisal edge or cusp tip on the panoramic radiograph. Overbite and overjet were measured from the pre- and post-treatment lateral cephalometric radiographs. The root resorption of each tooth and the factors of malocclusion were analyzed with an analysis of variance. A paired t test was performed to compare the mean amount of root resorption between male and female, between extraction and non-extraction cases, and between surgery and non-surgery groups. Correlation coefficients were measured to assess the relationship between the amount of root resorption and the age in which the orthodontic treatment started, the degree of changes in overbite and overjet, and the duration of treatment. RESULTS Maxillary central incisor was the most resorbed tooth, followed by the maxillary lateral incisor, the mandibular central incisor, and the mandibular lateral incisor. The history of tooth extraction was significantly associated with the root resorption. The duration of orthodontic treatment was positively correlated with the amount of root resorption. CONCLUSION These findings show that orthodontic treatment should be carefully performed in patients who need the treatment for a long period and with a pre-treatment extraction of teeth.
Drug–Drug Interaction Studies: Regulatory Guidance and An Industry Perspective
Recently, the US Food and Drug Administration and European Medicines Agency have issued new guidance for industry on drug interaction studies, which outline comprehensive recommendations on a broad range of in vitro and in vivo studies to evaluate drug–drug interaction (DDI) potential. This paper aims to provide an overview of these new recommendations and an in-depth scientifically based perspective on issues surrounding some of the recommended approaches in emerging areas, particularly, transporters and complex DDIs. We present a number of theoretical considerations and several case examples to demonstrate complexities in applying (1) the proposed transporter decision trees and associated criteria for studying a broad spectrum of transporters to derive actionable information and (2) the recommended model-based approaches at an early stage of drug development to prospectively predict DDIs involving time-dependent inhibition and mixed inhibition/induction of drug metabolizing enzymes. We hope to convey the need for conducting DDI studies on a case-by-case basis using a holistic scientifically based interrogative approach and to communicate the need for additional research to fill in knowledge gaps in these areas where the science is rapidly evolving to better ensure the safety and efficacy of new therapeutic agents.
THE RELATION BETWEEN IMPLICIT AND EXPLICIT LEARNING : Evidence for Parallel Development
Much research has focused on the separability of implicit and explicit learning, but less has focused on how they might interact. A recent model suggests that in the motor-skill domain, explicit knowledge can guide movement, and the implicit system learns in parallel, based on these movements. Functional imaging studies do not support that contention, however; they indicate that learning is exclusively implicit or explicit. In the experiment reported here, participants learned a motor sequencing task either implicitly or explicitly. At transfer, most of the stimuli were random, but the sequence occasionally appeared; thus, it was not obvious that explicit knowledge could be applied to the task. Nevertheless, participants with explicit training showed sequence knowledge equivalent to those with implicit training, implying that implicit knowledge had been acquired in parallel with explicit knowledge. This result has implications for the development of automaticity and of motor-skill learning. Much of the research on learning and memory over the past two decades has focused on the neural and cognitive separability of implicit and explicit learning (for a number of perspectives, see Schacter & Tulving, 1994). Less research has examined the relationship between implicit and explicit learning. Their relationship is of particular interest in motor-skill learning because it holds stark examples of the importance of both implicit and explicit learning. At advanced levels of skill, explicit access to the knowledge supporting the skill becomes unnecessary or even difficult (Fitts, 1964); an advanced tennis player need not consciously direct the movements of a serve, and in fact may be unable to describe the movement components. Indeed, some components of motor skills remain completely implicit throughout training. 
For example, in the serial response time (SRT) task, participants perform a four-choice response time task in which the stimuli appear in a repeating 12-unit sequence. Participants may demonstrate implicit learning of the sequence through faster response times even though they never learn it explicitly (Willingham, Nissen, & Bullemer, 1989). Nevertheless, conscious explicit processes are also important in motor-skill learning. Participants can use explicit knowledge of the sequence to support skilled performance in the SRT task (Curran & Keele, 1993). Thus, there is evidence that both implicit and explicit knowledge are useful in producing skilled behavior. Do they interact, and if so, how? One of us (Willingham, 1998) recently proposed a model, COBALT, that posits that implicit motor-skill learning takes place in parallel with explicit learning, so long as physical responses to the stimuli are made. For example, in the SRT task, if participants were pushing buttons while they learned the sequence explicitly, they should simultaneously learn the sequence implicitly. However, studies using positron emission tomography (PET; Grafton, Hazeltine, & Ivry, 1995; Hazeltine, Grafton, & Ivry, 1997; Rauch et al., 1995) support separate, not parallel, implicit and explicit learning. Grafton and colleagues reported that implicit learning was associated with metabolic changes in primary and supplementary motor cortices and the putamen, whereas explicit learning caused changes in prefrontal and premotor cortices (there was no simultaneous activity in the areas associated with implicit learning). Rauch and colleagues also reported separate sites supporting implicit learning (premotor cortex, caudate, and thalamus) and explicit learning (primary visual cortex, perisylvian cortex, and cerebellar vermis). In the experiment reported here, we sought to test whether implicit knowledge is acquired in parallel with explicit knowledge in a motor-skill task.
We trained participants to explicitly learn the sequence in the SRT task. We then administered an implicit test of the sequence to see if implicit knowledge had been acquired in parallel.
Randomized clinical trial of oral and intravenous versus intravenous antibiotic prophylaxis for laparoscopic colorectal resection.
BACKGROUND The use of oral prophylactic antibiotics for the prevention of surgical-site infection (SSI) in patients undergoing laparoscopic surgery for colorectal cancer is controversial. The aim of this RCT was to evaluate whether intravenous perioperative antibiotics are inferior to combined preoperative oral and perioperative intravenous antibiotics in this setting. METHODS Patients undergoing elective laparoscopic colorectal resection in a single cancer centre were assigned randomly to combined preoperative oral antibiotics (metronidazole and kanamycin) and perioperative intravenous antibiotics (cefmetazole) (oral/IV group) or to perioperative intravenous antibiotics (cefmetazole) alone (IV-only group). Patients were stratified for the analyses based on type of operation (colonic surgery, anterior resection or abdominoperineal resection), preoperative use of mechanical bowel preparation, preoperative chemoradiotherapy and the presence of diabetes mellitus. The primary endpoint was the overall rate of SSI. Secondary endpoints were the rates of incisional site infection, organ/space infection, anastomotic leakage, intra-abdominal abscess, adverse events and postoperative complications. RESULTS Of 540 patients offered participation in the trial in 2013-2014, 515 agreed to take part and were randomized. Some 256 patients in the IV-only group and 255 in the oral/IV group completed the treatment per protocol. The overall rate of SSI was 7·8 per cent (20 of 256) in the IV-only group and 7·8 per cent (20 of 255) in the oral/IV group, confirming that perioperative administration of intravenous antibiotics alone was not inferior to the combined regimen (P = 0·017). There were no differences in rates of incisional site infection (5·5 versus 5·9 per cent respectively), organ/space infection (2·3 versus 2·0 per cent) or other secondary endpoints between the two groups. 
CONCLUSION Intravenous perioperative antimicrobial prophylaxis alone is not inferior to combined preoperative oral and intravenous perioperative prophylaxis with regard to SSI in patients with colorectal cancer undergoing elective laparoscopic resection. Registration number: UMIN000019339 ( http://www.umin.ac.jp/ctr/).
Electrostatic separation of metals and plastics from granular industrial wastes
The paper presents the main conclusions of authors' 15-year involvement in theoretical study, laboratory experimentation, equipment development and industry application of electrostatic separation technologies. The roll-type separator with combined corona-electrostatic field is the most advantageous technical solution when the purpose is to isolate conductive particles from nonconductive ones. The theoretical analysis and the laboratory tests of electrostatic separation point out the multitude of factors which influence the efficiency of this equipment: the configuration of the electric field, the size and the shape of the processed particles, the speed and the radius of the rotating roll electrode and the environmental conditions. The optimum electrode arrangement and operating parameters (high-voltage level, roll speed, feed rate) differ from one application to another. In-plant tests are recommended for the fine tuning of the electrostatic separation technology.
Inverse molecular design using machine learning: Generative models for matter engineering
The discovery of new materials can bring enormous societal and technological progress. In this context, completely exploring the large space of potential materials is computationally intractable. Here, we review methods for achieving inverse design, which aims to discover tailored materials from the starting point of a particular desired functionality. Recent advances from the rapidly growing field of artificial intelligence, mostly from the subfield of machine learning, have resulted in a fertile exchange of ideas, where approaches to inverse molecular design are being proposed and employed at a rapid pace. Among these, deep generative models have been applied to numerous classes of materials: rational design of prospective drugs, synthetic routes to organic compounds, and optimization of photovoltaics and redox flow batteries, as well as a variety of other solid-state materials.
Vitamin D: An overview of vitamin D status and intake in Europe
In recent years, there have been reports suggesting a high prevalence of low vitamin D intakes and vitamin D deficiency or inadequate vitamin D status in Europe. Coupled with growing concern about the health risks associated with low vitamin D status, this has resulted in increased interest in the topic of vitamin D from healthcare professionals, the media and the public. Adequate vitamin D status has a key role in skeletal health. Prevention of the well-described vitamin D deficiency disorders of rickets and osteomalacia are clearly important, but there may also be an implication of low vitamin D status in bone loss, muscle weakness and falls and fragility fractures in older people, and these are highly significant public health issues in terms of morbidity, quality of life and costs to health services in Europe. Although there is no agreement on optimal plasma levels of vitamin D, it is apparent that blood 25-hydroxyvitamin D [25(OH)D] levels are often below recommended ranges for the general population and are particularly low in some subgroups of the population, such as those in institutions or who are housebound and non-Western immigrants. Reported estimates of vitamin D status within different European countries show large variation. However, comparison of studies across Europe is limited by their use of different methodologies. The prevalence of vitamin D deficiency [often defined as plasma 25(OH)D <25 nmol/l] may be more common in populations with a higher proportion of at-risk groups, and/or that have low consumption of foods rich in vitamin D (naturally rich or fortified) and low use of vitamin D supplements. The definition of an adequate or optimal vitamin D status is key in determining recommendations for a vitamin D intake that will enable satisfactory status to be maintained all year round, including the winter months. In most European countries, there seems to be a shortfall in achieving current vitamin D recommendations. 
An exception is Finland, where dietary survey data indicate that recent national policies that include fortification and supplementation, coupled with a high habitual intake of oil-rich fish, have resulted in an increase in vitamin D intakes, but this may not be a suitable strategy for all European populations. The ongoing standardisation of measurements in vitamin D research will facilitate a stronger evidence base on which policies can be determined. These policies may include promotion of dietary recommendations, food fortification, vitamin D supplementation and judicious sun exposure, but should take into account national, cultural and dietary habits. For European nations with supplementation policies, it is important that relevant parties ensure satisfactory uptake of these particularly in the most vulnerable groups of the population.
A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants
This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the h index variants hardly provide added information over the h index. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N = 9005; on average, each study had a sample size of n = 257. The results of a three-level cross-classified mixed-effects meta-analysis show a high correlation between the h index and its variants: depending on the model, the mean correlation coefficient varies between .8 and .9. This means that there is redundancy between most of the h index variants and the h index. There is, however, statistically significant study-to-study variation in the correlation coefficients. The lowest correlation coefficients with the h index are found for the h index variants MII and m index; hence, these variants make a non-redundant contribution beyond the h index.
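For reference, the h index and one of the variants discussed in this literature, the g index, can be computed from a citation list as follows (these are the standard definitions, not code from the paper):

```python
def h_index(citations):
    # h = largest h such that at least h papers have >= h citations each.
    c = sorted(citations, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def g_index(citations):
    # g = largest g such that the top g papers together have >= g^2 citations.
    c = sorted(citations, reverse=True)
    g, total = 0, 0
    for i, x in enumerate(c, start=1):
        total += x
        if total >= i * i:
            g = i
    return g
```

Because both indices are monotone functions of the same sorted citation counts, high correlations between them across authors are unsurprising, which is the redundancy the meta-analysis quantifies.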
Automatic Hashtag Recommendation for Microblogs using Topic-Specific Translation Model
Microblogging services continue to grow in popularity, and users publish massive numbers of instant messages through them every day. Many tweets are marked with hashtags, which usually represent groups or topics of tweets. Hashtags may provide valuable information for many applications, such as retrieval, opinion mining, classification, and so on. However, since hashtags must be manually annotated, only 14.6% of tweets contain them (Wang et al., 2011). In this paper, we adopt a topic-specific translation model (TSTM) to suggest hashtags for microblogs. It combines the advantages of both topic models and translation models. Experimental results on a dataset crawled from a real-world microblogging service demonstrate that the proposed method can outperform some state-of-the-art methods.
Results of provisional stenting with a Sirolimus-eluting stent for bifurcation lesion: multicenter study in Japan.
BACKGROUND Treatment of bifurcation lesion with a drug-eluting stent (DES) remains problematic. The purpose of this study was to investigate an appropriate treatment strategy for bifurcation lesion with a Sirolimus-eluting stent (SES). METHOD One-hundred-forty-one patients with 169 bifurcation lesions were treated at three centers in Japan using a Sirolimus-eluting stent. Forty-six lesions (39 patients) were treated on side branches, and provisional stenting was performed in these cases. We evaluated the angiographic results and clinical outcomes with this strategy. Patients with acute myocardial infarction were excluded. RESULT After a follow-up period of 184 +/- 65 days, there were no deaths or myocardial infarction (MI), and only one (2.0%) target lesion revascularization (TLR). The strategies used for side-branch treatment were balloon only (83.7%) and T or Modified T stent (16.3%). The final kissing balloon technique was performed on 53.4% overall. In patients with a 6-month follow-up angiogram who had 25 bifurcation lesions (including 5 LMT bifurcation lesions, 6 LCX-OM lesions, 13 LAD-Dx lesions, and 1 RCA lesion) that were treated with balloon only, the percent diameter stenosis (%DS) of the side branch at follow-up was similar to that after the procedure (47.2 +/- 34.4% vs. 46.4 +/- 24.1%). CONCLUSIONS In the treatment of bifurcation lesions using a SES, the results of provisional stenting for the side branch are acceptable. Percent DS of the side branch remained unchanged over time after PCI.
Debugging the Internet of Things: The Case of Wireless Sensor Networks
The Internet of Things (IoT) has the strong potential to support a human society interacting more symbiotically with its physical environment. Indeed, the emergence of tiny devices that sense environmental cues and trigger actuators after consulting logic and human preferences promises a more environmentally aware and less wasteful society. However, the IoT inherently challenges software development processes, particularly techniques for ensuring software reliability. Researchers have developed debugging tools for wireless sensor networks (WSNs), which can be viewed as the enablers of perception in the IoT. These tools gather run-time information on individual sensor node executions and node interactions and then compress that information.
THE ARENA PRODUCT FAMILY : ENTERPRISE MODELING SOLUTIONS
Organizations throughout the world are quickly moving to adopt process modeling and simulation as an integral part of their business decision-making and continuous improvement initiatives. With wider acceptance of simulation, these consumers are demanding tools that support a breadth of applications, scale to fit different needs through a project life cycle, and integrate with corporate modeling and database systems. Systems Modeling fulfills these needs in the Arena product family, encompassing the Arena Business, Standard and Professional Editions for mapping processes and simulating discrete and continuous systems; Call$im for call-center analysis; and HiSpeed$im for high-speed production-line modeling. These products complement each other in meeting the various needs for simulation in an enterprise via a common software interface and compatible features, providing a natural growth path as simulation needs expand. This paper introduces the Arena suite of products for modeling and simulation, highlighting product architecture and technology features that are targeted toward successful deployment of simulation and Arena throughout the enterprise.
Modular inverse reinforcement learning for visuomotor behavior
In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general-purpose mathematical models may successfully capture properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to using the framework of reinforcement learning, which additionally provides well-established algorithms for learning visuomotor task solutions. To quantify the agent’s goals as rewards implicit in the observed behavior, we propose to use inverse reinforcement learning. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained already with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations.
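The core assumption, that the total reward is a weighted sum of per-module rewards whose weights can be recovered from observations, can be sketched with a toy least-squares fit. Real modular IRL fits observed action choices, not returns; the trajectory features and generating weights below are invented.

```python
# Toy sketch: assume each observed trajectory's return is a weighted
# sum of two module features (path following, obstacle avoidance),
# and recover the weights via the normal equations.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Each row: (path-following feature, obstacle-avoidance feature);
# observed returns generated with hidden weights (2.0, 0.5).
X = [(1.0, 3.0), (2.0, 1.0), (0.5, 2.0)]
y = [2.0 * p + 0.5 * o for p, o in X]

# Normal equations X^T X w = X^T y for the two weights.
a = sum(p * p for p, _ in X)
b = sum(p * o for p, o in X)
d = sum(o * o for _, o in X)
e = sum(p * t for (p, _), t in zip(X, y))
f = sum(o * t for (_, o), t in zip(X, y))
w_path, w_obstacle = solve_2x2(a, b, b, d, e, f)
```

Because the toy returns are exactly linear in the features, the fit recovers the generating weights.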
Seamless patches for GPU-based terrain rendering
In this paper we present a novel approach for interactive rendering of large terrain datasets. Our approach is based on subdividing a terrain into rectangular patches at different resolutions. Each patch is represented by four triangular tiles that are selected from different resolutions, and four strips which are used to stitch the four tiles in a seamless manner. Such a scheme maintains resolution changes within patches through the stitching strips, and not across patches. At runtime, these patches are used to construct a level-of-detail representation of the input terrain based on view parameters. A selected level of detail only includes the layout of the patches and the resolutions of their boundary edges. The layout includes the location and dimension of each patch. Within the graphics hardware, the GPU generates the meshes of the patches by using scaled instances of cached tiles and assigns elevation for each vertex from cached textures. Since adjacent rectangular patches agree on the resolution of the common edges, the resulting mesh does not include cracks or degenerate triangles. Our algorithm manages to achieve quality images at high frame rates while providing seamless transition between different levels of detail.
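The crack-free invariant can be sketched in a few lines: if every patch derives the resolution of a boundary edge from the same view-dependent quantity (here, distance from the eye to the shared edge midpoint), two neighbors necessarily agree on their common edge. The distance thresholds are invented, not taken from the paper.

```python
import math

def edge_level(edge_midpoint, eye, max_level=4):
    """Pick an LOD level for a patch edge from view distance.

    Detail drops one level each time the distance doubles past a
    base distance of 10 units (an invented threshold).
    """
    d = math.dist(edge_midpoint, eye)
    return max(0, min(max_level, int(math.log2(max(d, 10.0) / 10.0))))

eye = (0.0, 0.0)
shared_edge_mid = (35.0, 0.0)
# Two adjacent patches evaluate their common edge identically,
# so the stitched mesh cannot contain a crack along it:
left_patch_level = edge_level(shared_edge_mid, eye)
right_patch_level = edge_level(shared_edge_mid, eye)
```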
Surveillance of transmitted HIV drug resistance among women attending antenatal clinics in Dar es Salaam, Tanzania.
BACKGROUND In resource-limited settings where antiretroviral treatment (ART) access is being scaled-up, the World Health Organization (WHO) recommends surveillance of transmitted HIV drug resistance (HIVDR). We used the WHO HIVDR threshold survey method to assess transmitted HIVDR in Dar es Salaam where ART was introduced in 1995 and where approximately 11,000 people are currently on ART. METHODS From November 2005 to February 2006, dried blood spot (DBS) specimens were made from remnant specimens collected during the national HIV serosurvey from 60 primigravidas <25 years old attending six antenatal clinics for routine syphilis testing. Genotyping was performed at the Centers for Disease Control and Prevention, Atlanta, Georgia, USA. Protease and reverse transcriptase drug resistance mutations were identified using the Stanford University HIV drug resistance database. We used the National Institutes of Health genotyping tool for HIV-1 subtyping. HIVDR prevalence categorization was based on the WHO threshold survey binomial sequential sampling method. RESULTS Among the 60 eligible specimens collected, 50 DBS were successfully amplified using RT-PCR. Sequencing was performed on the first 39 specimens: 13 (33.3%) were subtype A1, 13 (33.3%) subtype C, and 4 (10.3%) subtype D, the remainder differed in the closest subtype based on protease versus reverse transcriptase. No resistance mutations were seen; HIVDR to all drug classes was categorized as <5%. CONCLUSIONS Our survey indicates that prevalence of transmitted HIVDR among recently infected pregnant women in Dar es Salaam is low (<5%). The survey should be repeated during the next HIV sentinel survey in Dar es Salaam and extended to other regions where ART is being scaled up.
An Introduction to Subtitling: Challenges and Strategies
This study attempts to identify the main challenges translators face during the subtitling process and the strategies proposed by theorists to overcome them. To give a full understanding of subtitling, it begins by defining subtitling and its types and classifications, and then explores the challenges translators encounter during the subtitling process. It then shows briefly how subtitling relates to Skopos functional theory. Finally, the strategies suggested in the field of subtitling to overcome these challenges, such as those of Gottlieb (1992) and Schjoldager (2008), are discussed along with related concepts.
Factors influencing healthy eating habits among college students: an application of the health belief model.
Poor eating habits are an important public health issue that has large health and economic implications. Many food preferences are established early, but because people make more and more independent eating decisions as they move through adolescence, the transition to independent living during the university days is an important event. To study the phenomenon of food selection, the health belief model was applied to predict the likelihood of healthy eating among university students. Structural equation modeling was used to investigate the validity of the health belief model (HBM) among 194 students, followed by gender-based analyses. The data strongly supported the HBM. Social change campaign implications are discussed.
The Orchid MADS-Box Genes Controlling Floral Morphogenesis
Orchids are known for both their floral diversity and ecological strategies. The versatility and specialization in orchid floral morphology, structure, and physiological properties have fascinated botanists for centuries. In floral studies, MADS-box genes contributing to the now famous ABCDE model of floral organ identity control have dominated conceptual thinking. The sophisticated orchid floral organization offers an opportunity to discover new variant genes and add different levels of complexity to the ABCDE model. Recently, several remarkable studies of orchid MADS-box genes have revealed their important roles in orchid floral development. Knowledge about the MADS-box genes encoding ABCDE functions in orchids will give insights into the highly evolved floral morphogenetic networks of orchids.
Development of nanocomposite lead-free electronic solders
Inert, hybrid inorganic/organic, nano-structured chemicals can be incorporated into low-melting metallic materials, such as lead-free electronic solders, to achieve a desired level of performance. The nano-structured materials technology of polyhedral oligomeric silsesquioxanes (POSS), with appropriate organic groups, can provide suitable means to promote bonding between nano-reinforcements and the metallic matrix. The microstructures of lead-free solder with surface-active POSS tri-silanols were evaluated using scanning electron microscopy (SEM). Wettability of POSS-containing lead-free solders to a copper substrate was also examined. Steady-state deformation of solder joints made of eutectic Sn-Ag solder with varying weight fractions of POSS of different chemical moieties was evaluated at a range of temperatures (25°C, 100°C, and 150°C) using a Rheometric Solids Analyzer (RSA-III). Mechanical properties such as shear stress versus simple shear-strain relationships, and peak shear stress as a function of the rate of simple shear-strain and testing temperature, were reported. The service reliability of joints made with these newly formulated nanocomposite solders was evaluated using a realistic thermomechanical fatigue (TMF) profile. Evolution of microstructures and residual mechanical properties at different extents of TMF cycling were compared with joints made of standard, un-reinforced eutectic Sn-Ag solder.
Semantic trajectory-based high utility item recommendation system
Jia-Ching Ying, Huan-Sheng Chen, Kawuu W. Lin, Eric Hsueh-Chan Lu, Vincent S. Tseng, Huan-Wen Tsai, Kuang Hung Cheng, Shun-Chieh Lin
Low x phenomena
We review recent developments in the application of perturbative QCD to phenomena at small x.
Quantum Information Set Decoding Algorithms
The security of code-based cryptosystems such as the McEliece cryptosystem relies primarily on the difficulty of decoding random linear codes. The best decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoding techniques. It is also important to assess the security of such cryptosystems against a quantum computer. This research thread started in [22], and the best algorithm to date has been Bernstein’s quantisation [5] of the simplest information set decoding algorithm, namely Prange’s algorithm. It consists in applying Grover’s quantum search to obtain a quadratic speed-up of Prange’s algorithm. In this paper, we quantise other information set decoding algorithms by using quantum walk techniques which were devised for the subset-sum problem in [6]. This results in an improvement, exponential in the codelength n, over the worst-case complexity of Bernstein’s algorithm with the best algorithm presented here.
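The quadratic speed-up from Grover search means the cost exponent is halved: a classical search cost of 2**(a*n) becomes roughly 2**(a*n/2). A tiny numeric check, with an invented constant a (the paper's actual exponents are not reproduced here):

```python
import math

def classical_cost(a, n):
    """Classical search cost 2**(a*n) for an exponent constant a."""
    return 2.0 ** (a * n)

def grover_cost(a, n):
    """Grover search cost: the square root, i.e. 2**(a*n/2)."""
    return 2.0 ** (a * n / 2)

a, n = 0.12, 1000  # a is illustrative only
speedup_is_quadratic = math.isclose(grover_cost(a, n) ** 2,
                                    classical_cost(a, n))
```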
Alert: An Architecture for Transforming a Passive DBMS into an Active DBMS
Alert is an extension architecture designed for transforming a passive SQL DBMS into an active DBMS. The salient features of the design of Alert are reusing, to the extent possible, the passive DBMS technology, and making minimal changes to the language and implementation of the passive DBMS. Alert provides a layered architecture that allows the semantics of a variety of production rule languages to be supported on top. Rules may be specified on user-defined as well as built-in operations. Both synchronous and asynchronous event monitoring are possible. This paper presents the design of Alert and its implementation in the Starburst extensible DBMS.
Avoidance of Information Technology Threats: A Theoretical Perspective
This paper describes the development of the technology threat avoidance theory (TTAT), which explains individual IT users’ behavior of avoiding the threat of malicious information technologies. We articulate that avoidance and adoption are two qualitatively different phenomena and contend that technology acceptance theories provide a valuable, but incomplete, understanding of users’ IT threat avoidance behavior. Drawing from cybernetic theory and coping theory, TTAT delineates the avoidance behavior as a dynamic positive feedback loop in which users go through two cognitive processes, threat appraisal and coping appraisal, to decide how to cope with IT threats. In the threat appraisal, users will perceive an IT threat if they believe that they are susceptible to malicious IT and that the negative consequences are severe. The threat perception leads to coping appraisal, in which users assess the degree to which the IT threat can be avoided by taking safeguarding measures based on perceived effectiveness and costs of the safeguarding measure and self-efficacy of taking the safeguarding measure. TTAT posits that users are motivated to avoid malicious IT when they perceive a threat and believe that the threat is avoidable by taking safeguarding measures; if users believe that the threat cannot be fully avoided by taking safeguarding measures, they would engage in emotion-focused coping. Integrating process theory and variance theory, TTAT enhances our understanding of human behavior under IT threats and makes an important contribution to IT security research and practice.
The impact of transoral robotic surgery on the overall treatment of oropharyngeal cancer patients.
OBJECTIVES/HYPOTHESIS To assess adjuvant therapy in patients undergoing surgical management of oropharyngeal squamous cell carcinoma (OPSCCA) with transoral robotic surgery (TORS) and neck dissection. STUDY DESIGN A prospective, nonrandomized, consecutive patient series from two separate protocols in a tertiary academic medical center. METHODS Patients undergoing treatment for OPSCCA were selected from a prospective protocol evaluating functional and oncologic outcomes following TORS with a comparator group of OPSCCA patients receiving definitive chemoradiotherapy (CRT) participating in a separate prospective protocol. RESULTS Forty-two patients represented the TORS group and 38 the CRT group. Twenty (48%) of the TORS patients received surgery only, whereas nine (21%) underwent adjuvant radiotherapy and 13 (31%) adjuvant CRT. Adjuvant therapy patients had a higher overall T (P =.0007) and N (P < .0001) stage than the TORS-only group. Surgery resulted in stage changes in 18 (43%) patients, leading to alteration in therapy for nine (21%) patients. The 3-year overall survival (OS), disease-specific survival (DSS), and locoregional control was 74% versus 90% (P = .30), 94% versus 94% (P = .91), and 72% versus 91% (P = .19) for the TORS-alone versus TORS plus adjuvant therapy groups, respectively. Comparison with the CRT group revealed a survival benefit in the TORS group approaching significance, with a 3-year OS of 83% versus 57% (P = .06) and DSS of 94% versus 85% (P = .08), respectively. CONCLUSIONS Primary surgical management of OPSCCA with TORS and neck dissection provides accurate staging information, which can lead to the appropriate selection of subsequent therapy. This approach does not compromise survival and warrants additional investigation. LEVEL OF EVIDENCE 3b.
Towards Photo Watercolorization with Artistic Verisimilitude
We present a novel artistic-verisimilitude driven system for watercolor rendering of images and photos. Our system achieves realistic simulation of a set of important characteristics of watercolor paintings that have not been well implemented before. Specifically, we designed several image filters to achieve: 1) watercolor-specified color transferring; 2) saliency-based level-of-detail drawing; 3) hand tremor effect due to human neural noise; and 4) an artistically controlled wet-in-wet effect in the border regions of different wet pigments. A user study indicates that our method can produce watercolor results of artistic verisimilitude better than previous filter-based or physical-based methods. Furthermore, our algorithm is efficient and can easily be parallelized, making it suitable for interactive image watercolorization.
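Of the four effects listed, the hand tremor one has a particularly simple core idea: perturb a stroke with small random displacements to mimic neural noise. The sketch below illustrates only that idea on a polyline; the paper itself implements tremor as an image filter, and the amplitude here is invented.

```python
import random

def add_tremor(points, amplitude=0.5, seed=42):
    """Jitter each stroke point by a small random offset.

    A crude stand-in for neural-noise tremor: each coordinate is
    displaced uniformly within +/- amplitude.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [(x + rng.uniform(-amplitude, amplitude),
             y + rng.uniform(-amplitude, amplitude))
            for x, y in points]

stroke = [(float(i), 0.0) for i in range(5)]  # a straight stroke
wobbly = add_tremor(stroke)
```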
Recovering Semantic Traceability Links between APIs and Security Vulnerabilities: An Ontological Modeling Approach
Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities being now shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services, to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study, to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
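The tracing idea can be sketched without any Semantic Web tooling: knowledge is stored as subject-predicate-object triples, and impact analysis is a join over "affects" and "uses" edges, which is what an ontology reasoner would do at scale. All project, API, and vulnerability identifiers below are invented.

```python
# A minimal triple store linking projects, APIs, and a vulnerability.
triples = {
    ("ProjectA", "uses", "libfoo:parse"),
    ("ProjectB", "uses", "libfoo:parse"),
    ("ProjectC", "uses", "libbar:render"),
    ("CVE-0000-0001", "affects", "libfoo:parse"),
}

def impacted_projects(vuln):
    """Trace a vulnerability to projects via the APIs it affects."""
    vulnerable_apis = {o for s, p, o in triples
                       if s == vuln and p == "affects"}
    return sorted(s for s, p, o in triples
                  if p == "uses" and o in vulnerable_apis)

hit = impacted_projects("CVE-0000-0001")
```

The same join crosses project boundaries for free: any project that declares a "uses" edge to the affected API is found, regardless of ecosystem.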
Casino gaming and local employment trends
Casino gaming has become a major industry in the United States over the past two decades. Prior to the late 1980s, casino gaming was legal only in Nevada and Atlantic City, New Jersey. Today, casino gaming is available in 29 states. As a consequence, annual gaming revenue has grown from $9 billion in 1991 to over $40 billion in 2001. Americans spend more money in casinos than individually on golf, on-screen movies, CDs and sound equipment, and cable TV. The casino industry consists of two major parties—Indian tribes and publicly traded private corporations such as Harrah’s Entertainment and Trump Hotels and Casino Resorts. The Indian Gaming Regulatory Act (Public Law 100-497, passed in 1988) allows Indian tribes to own and operate casinos on their reservations. Tribal gaming is available in 25 states and generates nearly $13 billion in annual revenue. Corporate casino gaming is available in 10 states and generated over $27 billion in revenue in 2001. Table 1 provides a listing of these states. While tribal gaming is available in more states, corporate casino gaming has traditionally been perceived as a more appropriate tool for fostering general economic development through increased employment and tax revenue. The primary reason for this perception is that states have no power to tax Indian casino revenue because Indian reservations are sovereign entities, distinct from the state. While states and Indian tribes do cooperate on regulation and security issues (dictated by state-tribal gaming compacts), the relationship between a tribe and a state is very similar to the relationship between two states—one state generally cannot legally dictate what another state can do. Corporate casinos, however, are private industries that are taxed and regulated by the state. These casinos generate much more revenue and hire more labor from the general labor market than Indian casinos.
The impact of corporate casino gaming on local employment is a major issue in the debate over legalized casino gaming. As a result, the issue has received careful study. This paper explores how corporate casinos affect employment in six Midwestern counties using various employment data and forecasting models. Changes in both household and payroll employment are examined to separate the effects on the residents and businesses in counties with casinos. Payroll employment changes may allude to possible interindustry substitution resulting from casino gaming. Also, both urban and rural “casino counties” are used in the analysis to
Impact of extracorporeal blood flow rate on blood pressure, pulse rate and cardiac output during haemodialysis.
BACKGROUND If blood pressure (BP) falls during haemodialysis (HD) [intradialytic hypotension (IDH)] a common clinical practice is to reduce the extracorporeal blood flow rate (EBFR). Consequently the efficacy of the HD (Kt/V) is reduced. However, only very limited knowledge on the effect of reducing EBFR on BP exists and data are conflicting. The aim of this study was to evaluate the effect and the potential mechanism(s) involved by investigating the impact of changes in EBFR on BP, pulse rate (PR) and cardiac output (CO) in HD patients with arteriovenous fistulas (AV-fistulas). METHODS We performed a randomized, crossover trial in 22 haemodynamically stable HD patients with AV-fistula. After a conventional HD session each patient was examined during EBFR of 200, 300 and 400 mL/min in random order. After 15 min, when steady state was achieved, CO, BP and PR were measured at each EBFR. RESULTS Mean (SD) age was 71 (11) years. Systolic BP was significantly higher at an EBFR of 200 mL/min as compared with 300 mL/min [133 (23) versus 128 (24) mmHg; P < 0.05], but not as compared with 400 mL/min [133 (23) versus 130 (19) mmHg; P = 0.20]. At EBFR of 200, 300 and 400 mL/min diastolic BP, mean arterial pressure, PR and CO remained unchanged. CONCLUSION Our study does not show any consistent trend in BP changes by a reduction in EBFR. Reduction in EBFR if BP falls during IDH is thus not supported. However, none of the patients experienced IDH. Further studies are required to evaluate the impact of changes in EBFR on BP during IDH.
Simplified Optimal Trajectory Control (SOTC) for LLC Resonant Converters
In this paper, a simplified optimal trajectory control (SOTC) for the LLC resonant converter is proposed. During the steady state, a linear compensator such as proportional-integral or proportional-integral-derivative (PID) is used, controlling the switching frequency fs to eliminate the steady-state error. However, during load transients, the SOTC method takes over, immediately changing the pulse widths of the gate-driving signals. Using the state-plane analysis, the pulse widths are estimated, letting the state variables track the optimal trajectory locus in the minimum period of time. The proposed solution is implemented in a digital controller, and experimental results show that while the digital logic requirement is very small, the performance improvement is significant.
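The control structure described, a linear compensator at steady state with a trajectory-based override during load transients, can be sketched at a behavioral level. Gains, the nominal frequency, and the step threshold below are invented, and the real SOTC computes gate pulse widths from state-plane analysis rather than returning a mode label.

```python
class SotcSketch:
    """Behavioral sketch: PI at steady state, override on load steps."""

    def __init__(self, f_nominal=100e3, kp=50.0, ki=5.0,
                 step_threshold=0.2):
        self.f = f_nominal          # switching frequency (Hz)
        self.integral = 0.0
        self.kp, self.ki = kp, ki
        self.threshold = step_threshold
        self.prev_load = None

    def update(self, v_error, load):
        if (self.prev_load is not None
                and abs(load - self.prev_load) > self.threshold):
            mode = "trajectory"     # transient: override pulse widths
        else:
            mode = "pi"             # steady state: trim frequency
            self.integral += v_error
            self.f += self.kp * v_error + self.ki * self.integral
        self.prev_load = load
        return mode

ctrl = SotcSketch()
modes = [ctrl.update(0.0, 1.0), ctrl.update(0.0, 1.0),
         ctrl.update(0.0, 0.5)]
```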
Are Choice-Making Opportunities Needed in the Classroom? Using Self-Determination Theory to Consider Student Motivation and Learner Empowerment.
Self-determination theory (SDT) underpins research on learner empowerment, but it is rarely discussed in empowerment-related literature. In addition, a motivational measure stemming from SDT has received little visibility in communication research. To address these concerns, this study focuses on motivational theory and measurement in an attempt to tease out the relationship between motivation and learner empowerment as well as how these constructs are related to students’ choice-making opportunities in the classroom. In essence, this study aims to offer a strong synthesis of the literature related to these constructs, and also to make methodological and practical advancements in understanding student motivation, learner empowerment, and how freedom in the college classroom shapes students’ enthusiasm for learning.
DC Arc Flash Studies for Solar Photovoltaic Systems: Challenges and Recommendations
A dc arc flash hazard exists in solar photovoltaic (PV) power systems, but there is no widely accepted methodology for characterizing the severity of the hazard. Calculation methods have been proposed, and most rely on the nameplate I-V characteristic of the PV modules at standard test conditions to determine the worst case incident energy. This paper proposes to consider other factors in performing a dc arc flash hazard analysis, including possible weather conditions and variations of PV module characteristics from the datasheet ratings. It is recommended to consider two conditions when determining the worst case incident energy from a PV system: 1) the failure of all protective devices to trip within 2 s due to insufficient current and 2) the array output power exceeding the nameplate rating due to technological and environmental factors.
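The point about field conditions exceeding nameplate ratings can be made concrete with the standard scaling of PV short-circuit current with irradiance and temperature. The module current, temperature coefficient, and irradiance value below are invented but typical in form; this is an illustration, not the paper's calculation method.

```python
def short_circuit_current(isc_stc, irradiance, cell_temp_c,
                          alpha=0.0005):
    """Isc at a given irradiance (W/m^2) and cell temperature (degC).

    Scales the standard-test-condition (STC) current linearly with
    irradiance and applies a relative temperature coefficient alpha
    (per degC above 25 degC).
    """
    return (isc_stc * (irradiance / 1000.0)
            * (1 + alpha * (cell_temp_c - 25.0)))

# Cloud-edge enhancement can push irradiance above the 1000 W/m^2
# used at STC, so the available fault current exceeds the nameplate:
isc_field = short_circuit_current(9.0, 1100.0, 60.0)
exceeds_nameplate = isc_field > 9.0
```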
Identifying Partisan Slant in News Articles and Twitter during Political Crises
In this paper, we are interested in understanding the interrelationships between mainstream and social media in forming public opinion during mass crises, specifically in regards to how events are framed in the mainstream news and on social networks and to how the language used in those frames may allow one to infer political slant and partisanship. We study the lingual choices for political agenda setting in mainstream and social media by analyzing a dataset of more than 40M tweets and more than 4M news articles from the mass protests in Ukraine during 2013-2014 — known as "Euromaidan" — and the post-Euromaidan conflict between Russian, pro-Russian and Ukrainian forces in eastern Ukraine and Crimea. We design a natural language processing algorithm to analyze at scale the linguistic markers which point to a particular political leaning in online media and show that political slant in news articles and Twitter posts can be inferred with a high level of accuracy. These findings allow us to better understand the dynamics of partisan opinion formation during mass crises and the interplay between mainstream and social media in such circumstances.
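One standard way to turn lingual choices into a slant score is smoothed per-word log-odds between two camps' corpora. The word counts below are invented toy data, and the paper's actual pipeline is considerably more involved than this sketch.

```python
import math

# Invented word counts from two hypothetical partisan corpora.
counts_a = {"junta": 8, "protesters": 2, "crisis": 5}
counts_b = {"junta": 1, "patriots": 6, "crisis": 5}

def log_odds(word, k=1.0):
    """Smoothed log-odds that `word` signals camp A over camp B."""
    a = counts_a.get(word, 0) + k   # add-k smoothing
    b = counts_b.get(word, 0) + k
    return math.log(a / b)

def slant(text_words):
    """Label a text by the sign of its summed word log-odds."""
    score = sum(log_odds(w) for w in text_words)
    return "A" if score > 0 else "B"

label = slant(["junta", "crisis"])
```

Words used equally by both camps (like "crisis" here) contribute nothing, so the score is driven by the marked lexical choices.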
Institutionalization of African traditional medicine in health care systems in Africa.
In recent times, the phrase "traditional medicine" has become a catchword among the peoples of all countries in Africa. This is partly because the use of herbal remedies has gained popularity worldwide and the exploitation of these remedies has become a multimillion-dollar industry. The term "African traditional medicine" is not synonymous with "alternative and complementary medicine", a misnomer that is sometimes used. African traditional medicine is the African indigenous system of health care and therefore cannot be an alternative. In Africa, there is an important reason why African traditional medicine has become increasingly popular. The high cost of allopathic medical health care and expensive pharmaceutical products has made them unavailable to a majority of people. Naturally, the many centuries-old alternative sources of health care have become handy, often in desperate situations. In fact, the frequently quoted statement that 85 per cent of the people in Africa use traditional medicine is an understatement, because this figure is much higher and continues to increase. At the Alma-Ata Declaration of 1978, it was resolved that traditional medicine had to be incorporated in the health care systems of developing countries if the objective of "Health for All by the Year 2000" was to be realized. Notwithstanding this strategy, African countries did not come near the objective at the end of the 20th century. Therefore, the Member States of the WHO African Region adopted a resolution in 2000 called "Promoting the role of traditional medicine in health care systems: A strategy for the African Region". This strategy provides for the institutionalization of traditional medicine in the health care systems of the member states of the WHO African Region. Furthermore, the OAU (African Union) Heads of State and Government declared the period 2000 - 2010 as the African Decade on African Traditional Medicine.
In addition, the Director General of the World Health Organization also declared 31st August every year as African Traditional Medicine Day. All these declarations signify the importance and the approval by Governments and international institutions of the need to institutionalize African traditional medicine in health care. Therefore the mechanisms for institutionalization have to be developed to make these resolutions a reality. In view of the complexity and heterogeneity of African traditional medicine, a system of incorporation in the current health care systems has to be developed. During the last four years the WHO Regional Director for Africa and his Secretariat took up the challenge and have developed model guidelines that the Member States can adapt or adopt as may be appropriate in the respective Member States. Some of the relevant guidelines include the following: 1. Guidelines for the formulation, implementation, monitoring and evaluation of a National Traditional Medicine Policy 2. Model legal framework for the practice of traditional medicine: The Traditional Health Practitioners Bill; 3. Model Codes of Ethics for Traditional Health Practitioners 4. A Regional framework for the registration of traditional medicines in the WHO African Region; 5. A regulatory framework for the protection of intellectual property rights (IPR) and indigenous knowledge of traditional medicines in the WHO African Region. These guidelines and others provide a basis for the incorporation of African traditional medicine in a manner that would best suit a particular country. The WHO Regional Director for Africa also appointed a Regional Expert Committee on Traditional Medicine which assists in the development of these guidelines. It is important to emphasize that as more and more people use this traditional health care facility, there is an urgent need for the appropriate systems of quality control in the practice as well as in the production and use of the medicines. 
Such systems will protect the public and also ensure that the best practices and the most useful medicines are made available in the most affordable manner. Every country in the African region would be expected to adopt a method of incorporation that would be suitable: integrative, inclusive, or tolerant, as the case may be. It is an undeniable fact that we cannot afford to sit on the fence. All the stakeholders stand to gain a great deal in the development and promotion of African traditional medicine. In particular, all the practitioners in the present allopathic health care system will gain professionally as well as economically as they will have access to an additional culture-friendly system with which to provide services to the people. All the stakeholders must join hands in the effort to institutionalize the appropriate African traditional medicine in the health care systems in order to provide the health services that are urgently needed in the communities.
Colchicine reduces postoperative atrial fibrillation: results of the Colchicine for the Prevention of the Postpericardiotomy Syndrome (COPPS) atrial fibrillation substudy.
BACKGROUND Inflammation and pericarditis may be contributing factors for postoperative atrial fibrillation (POAF), and both are potentially affected by anti-inflammatory drugs and by colchicine, which has been shown to be safe and efficacious for the prevention of pericarditis and the postpericardiotomy syndrome (PPS). The aim of the Colchicine for the Prevention of the Postpericardiotomy Syndrome (COPPS) POAF substudy was to test the efficacy and safety of colchicine for the prevention of POAF after cardiac surgery. METHODS AND RESULTS The COPPS POAF substudy included 336 patients (mean age, 65.7±12.3 years; 69% male) of the COPPS trial, a multicenter, double-blind, randomized trial. Substudy patients were in sinus rhythm before starting the intervention (placebo or colchicine 1.0 mg twice daily starting on postoperative day 3, followed by a maintenance dose of 0.5 mg twice daily for 1 month in patients ≥70 kg; doses were halved for patients <70 kg or intolerant to the highest dose). The substudy primary end point was the incidence of POAF on intervention at 1 month. Despite well-balanced baseline characteristics, patients on colchicine had a reduced incidence of POAF (12.0% versus 22.0%; P=0.021; relative risk reduction, 45%; number needed to treat, 11), with a shorter in-hospital stay (9.4±3.7 versus 10.3±4.3 days; P=0.040) and rehabilitation stay (12.1±6.1 versus 13.9±6.5 days; P=0.009). Side effects were similar in the study groups. CONCLUSION Colchicine appears safe and efficacious in reducing POAF, with the potential to halve this complication and to shorten the hospital stay.
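The effect-size figures quoted in the abstract follow from the two event rates; a minimal sketch of the standard arithmetic (using the rounded percentages given above, which yield a number needed to treat of 10 rather than the published 11, presumably computed from unrounded event counts):

```python
def effect_sizes(rate_control, rate_treatment):
    """Return (absolute risk reduction, relative risk reduction, NNT)
    from two event rates expressed as proportions."""
    arr = rate_control - rate_treatment   # absolute risk reduction
    rrr = arr / rate_control              # relative risk reduction
    nnt = 1.0 / arr                       # number needed to treat
    return arr, rrr, nnt

# POAF incidence: 22.0% on placebo versus 12.0% on colchicine
arr, rrr, nnt = effect_sizes(0.22, 0.12)
print(f"ARR = {arr:.2f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

The 45% relative risk reduction in the abstract matches this calculation exactly.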
Recent Advances in Optimal Reliability Allocation
Reliability has become a greater concern in recent years because high-tech industrial processes with ever-increasing levels of sophistication comprise most engineering systems today. To keep pace with this rapidly developing field, this paper provides a broad overview of recent research on reliability optimization problems and their solution methodologies. In particular, we address issues related to: 1) universal generating-function-based optimal multistate system design; 2) percentile life employed as a system performance measure; 3) multiobjective optimization of reliability systems, especially with uncertain component-reliability estimates; and 4) innovation and improvement in traditional reliability optimization problems, such as fault-tolerance mechanisms and cold-standby redundancy-involved system design. New developments in optimization techniques are also emphasized, especially the methods of ant colony optimization and hybrid optimization. We believe that the interesting problems reviewed here deserve more attention in the literature. To that end, this paper concludes with a discussion of future challenges related to reliability optimization.
Evolutionary genetics: A ring of species
A long-standing model illustrating how new species can arise through ‘circular overlap’, without interruption of gene flow through intervening populations, has now been confirmed among passerine birds (Irwin et al., 2005). The new study of the Eurasian greenish warbler (Phylloscopus trochiloides) complex shows that reproductive isolation, the hallmark of biological species, can arise through ‘isolation-by-distance’ (Slatkin, 1993), such that large distances themselves restrict gene flow. The question of how species arise, the ‘mystery of mysteries’ as Darwin put it, has received much recent attention (Coyne and Orr, 2004). Birds have featured prominently as subjects in speciation research, in particular, illustrating the classic allopatric model of speciation through geographic separation. The recently deceased Ernst Mayr, one of the most outspoken proponents of the allopatric speciation model, was also the first to outline clearly a nonallopatric alternative, which he called ‘speciation by circular overlap’ (see Figure 1). This intuitively plausible idea actually goes back to the German biologists Bernhard Rensch and Geyr von Schweppenburg and was based on their observations of the Great Tit (Parus major) complex in Eurasia and the circumpolar Herring Gull (Larus argentatus) group, respectively. However, when these and other potential ‘ring species’ like the Ensatina salamanders of western North America were studied in detail (Wake, 1997; Kvist et al., 2003; Liebers et al., 2004), the allopatric divergence model seemed to explain the genetic data better than the ring species model did. In most cases, fragmentation of ranges, and thus divergence in geographic isolation, has happened at some point before the circular range around an uninhabitable barrier closed. This is not to say that isolation-by-distance was unimportant, but the crucial question remained whether it is sufficient to lead to reproductive isolation. 
This question has now been answered positively by Irwin and colleagues’ (2005) study of the greenish warbler complex, which provides the most convincing case to date in support of the ring species model. Greenish warblers are small, drab-looking songbirds living in forested habitats across much of Eurasia and around the Himalayan mountain system. It had been known for some time that north of the Tibetan plateau, a huge treeless desert uninhabitable to a forest-dwelling warbler, there are two forms of greenish warbler that differ in plumage and song, but do not interbreed (viridanus in the west, plumbeitarsus in the east). These populations coexist as distinct biological species in the Siberian lowlands, but are connected through a breeding range, until recently contiguous, all around the Central Asian mountains. Using amplified fragment length polymorphism markers to survey a large number of anonymous nuclear gene loci, Irwin and co-workers show that there is a genetic continuum throughout this ring-shaped range, except in the zone of overlap in Siberia, where genetic divergence reaches its maximum. The physical distance around the ring is about 9000 km, orders of magnitude larger than the distance an average young greenish warbler will travel from its birthplace to its later breeding territory. Genetic distance is strongly correlated with geographic distance throughout the ring of intergrading populations, except in the zone of overlap, where this correlation breaks down abruptly. Historically, this situation most likely arose when ancestors of today’s greenish warblers spread northwards from a refugium somewhere south of the Himalayas on both sides of the mountain range. Separated by larger and larger intervening distances, the peripheral populations diverged as they were exposed to different selection pressures, leading to differences in morphology, plumage and song. 
All conditions specified by the ring species model are met: a contiguous ring-shaped range with genetic continuity throughout, but a sharp genetic break where the ring has closed. Two aspects are important in the warbler example that might distinguish it from other cases of potential ring species that have since been discredited. First, sexual selection is important in these songbirds, and it seems to be song characteristics that are most strongly selected by females when choosing a mate. This opens the door for a Fisherian run-away process in which different song parameters may be selected between geographically distant populations. True enough, song characteristics best distinguish the sympatric (overlapping) populations of viridanus and plumbeitarsus warblers in Siberia, although variation in song parameters, just like the genetic make-up of populations, was found to be continuous as one travels from the overlap zone
Antibody 10-1074 suppresses viremia in HIV-1-infected individuals
Monoclonal antibody 10-1074 targets the V3 glycan supersite on the HIV-1 envelope (Env) protein. It is among the most potent anti-HIV-1 neutralizing antibodies isolated so far. Here we report on its safety and activity in 33 individuals who received a single intravenous infusion of the antibody. 10-1074 was well tolerated and had a half-life of 24.0 d in participants without HIV-1 infection and 12.8 d in individuals with HIV-1 infection. Thirteen individuals with viremia received the highest dose of 30 mg/kg 10-1074. Eleven of these participants were 10-1074-sensitive and showed a rapid decline in viremia by a mean of 1.52 log10 copies/ml. Virologic analysis revealed the emergence of multiple independent 10-1074-resistant viruses in the first weeks after infusion. Emerging escape variants were generally resistant to the related V3-specific antibody PGT121, but remained sensitive to antibodies targeting nonoverlapping epitopes, such as the anti-CD4-binding-site antibodies 3BNC117 and VRC01. The results demonstrate the safety and activity of 10-1074 in humans and support the idea that antibodies targeting the V3 glycan supersite might be useful for the treatment and prevention of HIV-1 infection.
A Multiband CPW-Fed Slot Antenna with Fractal Stub and Parasitic Line
This paper presents a multiband CPW-fed slot antenna with a fractal stub and a parasitic line. A conventional wideband slot antenna with a fractal stub is modified by inserting a parasitic line surrounding the stub, which converts the response into a multiband operation suitable for several applications in wireless communication systems. The parasitic line surrounding the fractal stub generates dual notched frequencies that can be controlled by varying the parameters of the parasitic structure: the lengths of the slit and stub on either side of the parasitic line control the lower and higher notched frequencies, respectively. Additionally, the prototype of the proposed antenna can operate over and cover the bands of the DCS 1800, WiMAX IEEE 802.16, WLAN IEEE 802.11a/b/g, and IMT-Advanced systems.
Menstrual and reproductive characteristics and breast density in young women
Breast density is strongly related to breast cancer risk, but determinants of breast density in young women remain largely unknown. Associations of reproductive and menstrual characteristics with breast density measured by magnetic resonance imaging were evaluated in a cross-sectional study of 176 healthy women, 25–29 years old, using linear mixed effects models. Parity was significantly inversely associated with breast density. In multivariable adjusted models that included non-reproductive variables, mean percent dense breast volume (%DBV) decreased from 20.5 % in nulliparous women to 16.0 % in parous women, while mean absolute dense breast volume (ADBV) decreased from 85.3 to 62.5 cm3. Breast density also was significantly inversely associated with the age women started using hormonal contraceptives, whereas it was significantly positively associated with duration of hormonal contraceptive use. In adjusted models, mean %DBV decreased from 21.7 % in women who started using hormones at 12–17 years of age to 14.7 % in those who started using hormones at 22–28 years of age, while mean ADBV decreased from 86.2 to 53.7 cm3. The age at which women started using hormonal contraceptives and duration of hormone use were inversely correlated, and mean %DBV increased from 15.8 % in women who used hormones for not more than 2.0 years to 22.0 % in women who used hormones for more than 8 years, while mean ADBV increased from 61.9 to 90.4 cm3 over this interval. Breast density in young women is inversely associated with parity and the age women started using hormonal contraceptives but positively associated with duration of hormone use.
Introduction to Geography: People, Places, and Environment
1. Introduction to Geography 2. Weather and Climate 3. Landforms 4. Biogeochemical Cycles and the Biosphere 5. Population, Population Increase, and Migration 6. Cultural Geography 7. The Geography of Languages and Religions 8. The Human Food Supply 9. Earth's Resources and Environmental Protection 10. Cities and Urbanization 11. A World of States 12. National Paths to Economic Growth 13. Political Regionalization and Globalization
Sedative hypnotic use in Alberta.
OBJECTIVE Benzodiazepines and similar sedative-hypnotics (BDZ-SSHs) are associated with both beneficial and adverse effects. Pharmacoepidemiologic data describing the use of these medications in contemporary Canadian populations have not been readily available. Our objective was to examine the hypothesis that increasing use of antidepressant medications for anxiety and mood disorders during the past decade led to less frequent use of BDZ-SSH medications. METHOD We used data from an Alberta Mental Health Survey to describe the pattern of BDZ-SSH use and to estimate provincial and health region frequencies of use. We supplemented the data with pharmacy dispensing data from IMS Health. RESULTS The frequency of use was comparable to that reported in previous studies. Unexpectedly, in the survey data, we observed trends suggesting regional variation in both the frequency and the pattern of use. Examination of prescription dispensing data confirmed this pattern. Clinical factors, including the use of other psychotropic medications and psychiatric diagnoses, were strongly associated with BDZ-SSH use. Among the drugs examined, zopiclone had the highest frequency of use. Prescription dispensing data confirmed that the frequency of zopiclone use in Alberta is higher than that in most other provinces. CONCLUSIONS This descriptive study generates several new research questions and provides benchmarks for future pharmacoepidemiologic monitoring.
Sentinel: occupancy based HVAC actuation using existing WiFi infrastructure within commercial buildings
Commercial buildings contribute to 19% of the primary energy consumption in the US, with HVAC systems accounting for 39.6% of this usage. To reduce HVAC energy use, prior studies have proposed using wireless occupancy sensors or even cameras for occupancy based actuation, showing energy savings of up to 42%. However, most of these solutions require these sensors and the associated network to be designed, deployed, tested and maintained within existing buildings, which is significantly costly. We present Sentinel, a system that leverages the existing WiFi infrastructure in commercial buildings, along with WiFi-enabled smartphones carried by building occupants, to provide fine-grained occupancy based HVAC actuation. We have implemented Sentinel on top of RESTful web services, and demonstrate that it is scalable and compatible with legacy building management systems. We show that Sentinel accurately determines the occupancy in office spaces 86% of the time, with 6.2% false negative errors. We highlight the reasons for the inaccuracies, mostly attributed to aggressive power management by smartphones. Finally, we actuate 23% of the HVAC zones within a commercial building using Sentinel for one day and measure HVAC electrical energy savings of 17.8%.