Calcium supplementation during pregnancy for preventing hypertensive disorders is not associated with changes in platelet count, urate, and urinary protein: a randomized controlled trial.
OBJECTIVE To test the hypothesis that calcium supplementation inhibits the underlying pathological processes in women with preeclampsia. METHODS Seven hundred and eight nulliparous women enrolled in a WHO randomized double-blind trial received 1.5 g of calcium or placebo from 20 weeks of pregnancy or earlier. Platelet count, serum urate, and urinary protein/creatinine ratio were measured at or near 35 gestational weeks. RESULTS No difference was detected in rates of abnormal platelet count (relative risk [RR], 1.18; 95% confidence interval [CI], 0.63 to 2.18), serum urate level (RR, 1.00; 95% CI, 0.64 to 1.57), or urine protein/creatinine ratio (RR, 1.01; 95% CI, 0.76 to 1.34). This was consistent with the main trial finding of no difference in the incidence of 'dipstick' proteinuria between women receiving calcium and those receiving placebo (8312 women; RR, 1.01; 95% CI, 0.88 to 1.15). CONCLUSIONS An effect of calcium supplementation in the second half of pregnancy on the rate of abnormal laboratory measures associated with preeclampsia was not demonstrated.
Multi-label automatic GrabCut for image segmentation
This paper presents a multi-label automatic GrabCut technique for image segmentation. GrabCut is considered a binary-label segmentation technique because it is based on the well-known s/t graph-cut minimization technique for image segmentation. This paper extends the automatic binary-label GrabCut to a multi-label technique that can segment a given image into its natural segments without user intervention. Since multi-label segmentation is an NP-hard problem, the proposed algorithm converts the segmentation problem into multiple iterative piecewise binary-label GrabCut segmentations, separating one segment from the image under consideration per iteration. In this way, the proposed algorithm retains the powerful advantage of GrabCut: obtaining the optimal solution for each binary segmentation problem. Evaluation of the segmentation results was carried out using different accuracy metrics from the literature. The evaluations were conducted against human ground-truth segmentations from the Berkeley benchmark dataset of natural images. Although human segmentations are semantically more meaningful, experiments showed that the proposed multi-label GrabCut produced segmentations matching those of individual humans with acceptable accuracy.
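To make the peel-one-segment-per-iteration idea concrete, here is a minimal sketch built on OpenCV's binary cv2.grabCut; the rectangle initialization, stopping rule, and segment-removal step are illustrative assumptions, not the paper's automatic procedure.

```python
# Sketch of iterative multi-label segmentation built on binary GrabCut.
# The init rectangle and stopping rule are illustrative placeholders.
import cv2
import numpy as np

def iterative_grabcut(img, max_segments=5, min_region_frac=0.02):
    remaining = np.ones(img.shape[:2], dtype=bool)   # pixels not yet labeled
    labels = np.zeros(img.shape[:2], dtype=np.int32)
    for seg_id in range(1, max_segments + 1):
        mask = np.zeros(img.shape[:2], dtype=np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)  # placeholder
        cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)) & remaining
        if fg.sum() < min_region_frac * fg.size:
            break                        # no meaningful segment left to peel
        labels[fg] = seg_id
        remaining &= ~fg
        img = img.copy()
        img[fg] = 0                      # crude removal of extracted segment
    return labels
```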
Applying deep learning to classify pornographic images and videos
It is no secret that pornographic material is now one click away from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require considerable expertise to design a classifier around one or more of the popular computer vision feature descriptors. We propose to build a classifier based on one of the recently flourishing deep learning techniques. Convolutional neural networks contain many layers for both automatic feature extraction and classification. The benefit is an easier system to build (no need for hand-crafted features and classifiers). Additionally, our experiments show that it is even more accurate than state-of-the-art methods on the most recent benchmark dataset.
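As a sketch of the kind of end-to-end classifier the abstract describes, here is a small Keras CNN for binary adult-content classification; the layer sizes and input resolution are illustrative assumptions, not the paper's architecture.

```python
# Minimal CNN sketch for binary (adult vs. benign) image classification;
# layer sizes are illustrative, not the paper's exact network.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(image is pornographic)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```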
Design and Experimental Results of a 300-kHz Synthetic Aperture Sonar Optimized for Shallow-Water Operations
The design of a shallow-water synthetic aperture sonar (SAS) requires an understanding of key system and environmental issues. The main factors that limit SAS performance are as follows: micronavigation accuracy, where micronavigation is defined as the problem of estimating the acoustic path lengths to allow the focusing of the aperture; multipath effects; and target view angle changes. All of them can degrade shadow classification performance. Micronavigation accuracy is successfully addressed by the gyrostabilized displaced phase center antenna technique, which combines data-driven motion estimates with external attitude sensors. Multipath effects in shallow water are effectively countered by narrow vertical beams. Shadow blur induced by view angle changes is mitigated by increasing the center frequency, to reduce the SAS integration length while still maintaining the desired resolution, and by designing the system with a minimum grazing angle of about 6° to reduce shadow length. The combination of these factors led to the choice of a 300-kHz center frequency and of a multipath mitigation scheme that uses multiple vertical beams. Experimental results obtained with a sonar incorporating these features have produced SAS images with 1.6 cm × 5 cm resolution in range × cross-range and shadow contrast in excess of 5 dB, at ranges of up to 170 m in 20 m of water.
Large-scale image classification: Fast feature extraction and SVM training
Most research efforts on image classification so far have been focused on medium-scale datasets, often defined as datasets that can fit into the memory of a desktop machine (typically 4 GB to 48 GB). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of the ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification, mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper shows how we address this challenge using the ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions in the hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast: typically five epochs are sufficient. As a result, we achieve state-of-the-art performance on ImageNet 1000-class classification, i.e., 52.9% classification accuracy and a 71.8% top-5 hit rate.
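A minimal single-machine sketch of averaging SGD for one one-against-all linear SVM (hinge loss, L2 regularization, Pegasos-style step size); the paper's parallel, terabyte-scale variant is not reproduced, and all hyperparameters are illustrative.

```python
# Sketch of averaged SGD for a one-vs-all linear SVM (hinge loss,
# L2 regularization); hyperparameters are illustrative, not the paper's.
import numpy as np

def asgd_svm(X, y, epochs=5, lam=1e-4):
    """y must be +1/-1; returns the averaged weight vector."""
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # Pegasos-style step size
            margin = y[i] * X[i].dot(w)
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0)
            w -= eta * grad
            w_avg += (w - w_avg) / t        # running average of iterates
    return w_avg
```

The averaging of iterates is what lets plain SGD tolerate noisy per-example gradients at this scale; five passes over the data roughly matches the convergence behavior the abstract reports.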
Enzymatic conversion of sunflower oil to biodiesel in a solvent-free system: process optimization and the immobilized system stability.
The feasibility of using the commercial immobilized lipase from Candida antarctica (Novozyme 435) to synthesize biodiesel from sunflower oil in a solvent-free system has been demonstrated. Using methanol as an acyl acceptor and response surface methodology as an optimization technique, the optimal conditions for the transesterification have been found to be: 45 °C, 3% enzyme based on oil weight, a 3:1 methanol-to-oil molar ratio, and no added water in the system. Under these conditions, >99% conversion of oil to fatty acid methyl ester (FAME) has been achieved after 50 h of reaction, but the activity of the immobilized lipase decreased markedly over the course of repeated runs. In order to improve the enzyme stability, several alternative acyl acceptors have been tested for biodiesel production under solvent-free conditions. The use of methyl acetate seems to be of great interest, resulting in a high FAME yield (95.65%) and increasing the half-life of the immobilized lipase by about 20.1 times compared with methanol. The reaction has also been verified in industrially feasible reaction systems, including both a batch stirred-tank reactor and a packed-bed reactor. Although satisfactory performance in the batch stirred-tank reactor has been achieved, the kinetics in the packed-bed reactor system seem to have a slightly better profile (93.6 ± 3.75% FAME yield after 8-10 h), corresponding to a volumetric productivity of 48.5 g/(dm³·h). The packed-bed reactor has operated for up to 72 h with almost no loss in productivity, implying that the proposed process and the immobilized system could provide a promising solution for biodiesel synthesis at the industrial scale.
Real-Time Strategy Game Competitions
RTS games — such as StarCraft by Blizzard Entertainment and Command and Conquer by Electronic Arts — are popular video games that can be described as real-time war simulations in which players delegate units under their command to gather resources, build structures, combat and support units, scout opponent locations, and attack. The winner of an RTS game usually is the player or team that destroys the opponents’ structures first. Unlike abstract board games like chess and go, moves in RTS games are executed simultaneously at a rate of at least eight frames per second. In addition, individual moves in RTS games can consist of issuing simultaneous orders to hundreds of units at any given time. If this wasn’t creating enough complexity already, RTS game maps are also usually large and states are only partially observable, with vision restricted to small areas around friendly units and structures. Complexity by itself, of course, is not a convincing motivation for studying RTS games and building AI systems for them. What makes them attractive research subjects is the fact that, despite the perceived complexity, humans are able to outplay machines by means of spatial and temporal reasoning, long-range adversarial planning and plan
Baseline Detection in Historical Documents Using Convolutional U-Nets
Baseline detection is still a challenging task for heterogeneous collections of historical documents. We present a novel approach to baseline extraction in such settings, which turned out to be the winning entry of the ICDAR 2017 Competition on Baseline Detection (cBAD). It utilizes deep convolutional nets (CNNs) both for the actual extraction of baselines and for a simple form of layout analysis in a pre-processing step. To the best of our knowledge, it is the first CNN-based system for baseline extraction that applies a U-Net architecture with sliding-window detection, profiting from the high local accuracy of the extracted candidate lines. Final baseline post-processing complements our approach, compensating for inaccuracies mainly due to missing context information during sliding-window detection. We experimentally evaluate the components of our system individually on the cBAD dataset. Moreover, we investigate how it generalizes to different data by means of the dataset used for the baseline extraction task of the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts (HisDoc). A comparison with the results reported for HisDoc shows that it also outperforms the contestants of the latter.
Dietary flaxseed alters tumor biological markers in postmenopausal breast cancer.
PURPOSE Flaxseed, the richest source of mammalian lignan precursors, has previously been shown to reduce the growth of tumors in rats. This study examined, in a randomized double-blind placebo-controlled clinical trial, the effects of dietary flaxseed on tumor biological markers and urinary lignan excretion in postmenopausal patients with newly diagnosed breast cancer. EXPERIMENTAL DESIGN Patients were randomized to daily intake of either a 25 g flaxseed-containing muffin (n = 19) or a control (placebo) muffin (n = 13). At the time of diagnosis and again at definitive surgery, tumor tissue was analyzed for the rate of tumor cell proliferation (Ki-67 labeling index, primary end point), apoptosis, c-erbB2 expression, and estrogen and progesterone receptor levels. Twenty-four-hour urine samples were analyzed for lignans, and 3-day diet records were evaluated for macronutrient and caloric intake. Mean treatment times were 39 and 32 days in the placebo and flaxseed groups, respectively. RESULTS Reductions in Ki-67 labeling index (34.2%; P = 0.001) and in c-erbB2 expression (71.0%; P = 0.003) and an increase in apoptosis (30.7%; P = 0.007) were observed in the flaxseed group, but not in the placebo group. No significant differences in caloric and macronutrient intake were seen between groups or between pre- and posttreatment periods. A significant increase in mean urinary lignan excretion was observed in the flaxseed group (1,300%; P < 0.01) compared with placebo controls. The total intake of flaxseed was correlated with changes in c-erbB2 score (r = -0.373; P = 0.036) and apoptotic index (r = 0.495; P < 0.004). CONCLUSION Dietary flaxseed has the potential to reduce tumor growth in patients with breast cancer.
Effective emotion recognition in movie audio tracks
This paper addresses the problem of speech emotion recognition from movie audio tracks. The recently collected Acted Facial Expressions in the Wild 5.0 database is used. The aim is to discriminate among angry, happy, and neutral speech. We extract a relatively small number of features, a subset of which is not commonly used for the emotion recognition task. These features are fed as input to an ensemble classifier that combines random forests with support vector machines. An accuracy of 65.63% is reported, outperforming a baseline system that uses the K-nearest neighbor classifier and has an accuracy of 56.88%. To verify the suitability of the exploited features, the same ensemble classification scheme is applied to a feature set similar to that employed in the Audio/Visual Emotion Challenge 2011. In the latter case, an accuracy of 61.25% is achieved using a large set of 1582 features, as opposed to just 86 features in our case, which corresponds to a relative improvement of 7.15% in accuracy.
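A minimal scikit-learn sketch of the random-forest-plus-SVM ensemble idea; the soft-voting fusion rule and hyperparameters are assumptions, and the 86-dimensional feature extraction is left outside the sketch.

```python
# Sketch of a random-forest + SVM ensemble with scikit-learn; the exact
# fusion rule and feature extraction in the paper may differ.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_ensemble():
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(probability=True))
    # Soft voting averages class probabilities from both models.
    return VotingClassifier([("rf", rf), ("svm", svm)], voting="soft")

# clf = build_ensemble(); clf.fit(X_train, y_train)  # X: 86-dim features
```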
Mining a corpus of biographical texts using keywords
Using statistically derived keywords to characterize texts has become an important research method for digital humanists and corpus linguists in areas such as literary analysis and the exploration of genre difference. Keywords—and the associated concepts of ‘keyness’ and ‘key-keyness’—have inspired conferences and workshops, many and varied research papers, and are central to several modern corpus processing tools. In this article, we present evidence that (at least for the task of biographical sentence classification) frequent words characterize texts better than keywords or key-keywords. Using the naïve Bayes learning algorithm in conjunction with frequency-, keyword-, and key-keyword-based text representations to classify a corpus of biographical sentences, we discovered that the use of frequent words alone provided a classification accuracy better than either the keyword or key-keyword representations at a statistically significant level. This result suggests that (for the biographical sentence classification task at least) frequent words characterize texts better than keywords derived using more computationally intensive methods.
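A minimal sketch of the comparison described here, using scikit-learn: naive Bayes over the N most frequent words versus the same learner restricted to a precomputed keyword vocabulary; the corpus, labels, and keyword list are assumed inputs.

```python
# Sketch: naive Bayes over frequent-word features vs. a restricted
# keyword vocabulary; corpora and keyword lists are assumed inputs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def accuracy(sentences, labels, vocabulary=None, max_features=None):
    vec = CountVectorizer(vocabulary=vocabulary, max_features=max_features)
    X = vec.fit_transform(sentences)
    return cross_val_score(MultinomialNB(), X, labels, cv=10).mean()

# Frequent-word representation: the N most frequent corpus words.
# acc_freq = accuracy(sentences, labels, max_features=500)
# Keyword representation: a precomputed statistically derived keyword list.
# acc_key = accuracy(sentences, labels, vocabulary=keyword_list)
```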
The detection of epileptic seizure signals based on fuzzy entropy
BACKGROUND Entropy is a nonlinear index that can reflect the degree of chaos within a system. It is often used to analyze epileptic electroencephalograms (EEG) to detect whether there is an epileptic attack. Much research into the state inspection of epileptic seizures has been conducted based on sample entropy (SampEn). However, the study of epileptic seizures based on fuzzy entropy (FuzzyEn) has lagged behind. NEW METHODS We propose a method for the state inspection of epileptic seizures based on FuzzyEn. The method first calculates the FuzzyEn of EEG signals from different epileptic states, and then feature selection is conducted to obtain classification features. Finally, we use the acquired classification features and a grid optimization method to train support vector machines (SVM). RESULTS The results on two open EEG datasets from epileptic patients show that there are major differences between seizure and non-seizure states, such that FuzzyEn can be used to detect epilepsy, and our method obtains better classification performance (accuracy, sensitivity, and specificity of 98.31%, 98.27%, and 98.36% on CHB-MIT, and 100%, 100%, and 100% on the Bonn dataset, respectively). COMPARISON WITH EXISTING METHOD(S) To verify the performance of the proposed method, a comparison of the classification performance for epileptic seizures using FuzzyEn and SampEn was conducted. Our method obtains better classification performance, superior to the SampEn-based methods currently in use. CONCLUSIONS The results indicate that FuzzyEn is a better index for detecting epileptic seizures effectively. The FuzzyEn-based method is preferable, exhibiting potential for desirable applications in medical treatment.
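For concreteness, a small NumPy sketch of fuzzy entropy under the common definition (exponential membership over Chebyshev distances between mean-removed embedded vectors); the parameters m, r, and n are illustrative defaults, not the paper's settings.

```python
# Sketch of fuzzy entropy (FuzzyEn) for a 1-D EEG segment, following the
# common definition with exponential membership; parameters illustrative.
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    x = np.asarray(x, dtype=float)
    r *= x.std()                        # tolerance scaled by signal std

    def phi(dim):
        # Embed and remove each vector's own mean (baseline removal).
        N = len(x) - dim
        vecs = np.array([x[i:i + dim] for i in range(N + 1)])
        vecs -= vecs.mean(axis=1, keepdims=True)
        # Chebyshev distances between all pairs of embedded vectors.
        d = np.abs(vecs[:, None, :] - vecs[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)     # fuzzy membership degree
        np.fill_diagonal(sim, 0)
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```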
Aerodynamic Design Optimization Studies of a Blended-Wing-Body Aircraft
The blended-wing body is an aircraft configuration that has the potential to be more efficient than conventional large transport aircraft configurations with the same capability. However, the design of the blended-wing body is challenging due to the tight coupling between aerodynamic performance, trim, and stability. Other design challenges include the nature and number of the design variables involved, and the transonic flow conditions. To address these issues, we perform a series of aerodynamic shape optimization studies using Reynolds-averaged Navier–Stokes computational fluid dynamics with a Spalart–Allmaras turbulence model. A gradient-based optimization algorithm is used in conjunction with a discrete adjoint method that computes the derivatives of the aerodynamic forces. A total of 273 design variables—twist, airfoil shape, sweep, chord, and span—are considered. The drag coefficient at the cruise condition is minimized subject to lift, trim, static margin, and center plane bending moment constraints. The studies investigate the impact of the various constraints and design variables on optimized blended-wing-body configurations. The lowest drag among the trimmed and stable configurations is obtained by enforcing a 1% static margin constraint, resulting in a nearly elliptical spanwise lift distribution. Trim and static stability are investigated at both on- and off-design flight conditions. The single-point designs are relatively robust to the flight conditions, but further robustness is achieved through multi-point optimization.
Satellites measure recent rates of groundwater depletion in California's Central Valley
In highly productive agricultural areas such as California's Central Valley, where groundwater often supplies the bulk of the water required for irrigation, quantifying rates of groundwater depletion remains a challenge owing to a lack of monitoring infrastructure and the absence of water use reporting requirements. Here we use 78 months (October 2003 to March 2010) of data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission to estimate water storage changes in California's Sacramento and San Joaquin River Basins. We find that the basins are losing water at a rate of 31.0 ± 2.7 mm yr⁻¹ equivalent water height, equal to a volume of 30.9 km³ for the study period, or nearly the capacity of Lake Mead, the largest reservoir in the United States. We use additional observations and hydrological model information to determine that the majority of these losses are due to groundwater depletion in the Central Valley. Our results show that the Central Valley lost 20.4 ± 3.9 mm yr⁻¹ of groundwater during the 78-month period, or 20.3 km³ in volume. Continued groundwater depletion at this rate may well be unsustainable, with potentially dire consequences for the economic and food security of the United States.
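As a quick consistency check of the reported numbers (units restored above), the rate and duration imply a combined basin area of roughly 1.5 × 10⁵ km²; the area is inferred here, not stated in the abstract.

```latex
% Consistency check of the reported rate and volume (basin area inferred).
\[
\Delta h = 31.0\ \mathrm{mm\,yr^{-1}} \times 6.5\ \mathrm{yr} \approx 0.202\ \mathrm{m},
\qquad
A \approx \frac{V}{\Delta h}
  = \frac{30.9\times 10^{9}\ \mathrm{m^{3}}}{0.202\ \mathrm{m}}
  \approx 1.53\times 10^{5}\ \mathrm{km^{2}}.
\]
```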
Recurrent ATP2A2 p.(Pro602Leu) mutation differentiates Acrokeratosis verruciformis of Hopf from the allelic condition Darier disease.
Darier disease and acrokeratosis verruciformis of Hopf (AKV) are rare disorders of keratinization with autosomal dominant inheritance and very distinct clinical pictures. Both have been shown to be caused by mutations in ATP2A2 (ATPase, Ca²⁺ transporting, cardiac muscle, slow-twitch), a gene encoding one of the SERCA (sarcoplasmic/endoplasmic reticulum calcium ATPase) intracellular pumps, which has a crucial role in cell-to-cell adhesion in both skin and heart. While hundreds of different missense and nonsense mutations cause Darier disease, only one missense mutation, p.(Pro602Leu), has been identified in families with AKV. We report a family with AKV due to the p.(Pro602Leu) mutation and discuss the implications of this recurrent mutation for our knowledge of ATP2A2 structure and function.
A framework for automated measurement of the intensity of non-posed Facial Action Units
This paper presents a framework to automatically measure the intensity of naturally occurring facial actions. Naturalistic expressions are non-posed, spontaneous actions. The Facial Action Coding System (FACS) is the gold-standard technique for describing facial expressions, which are parsed as comprehensive, nonoverlapping action units (AUs). AUs have intensities ranging from absent to maximal on a six-point metric (i.e., 0 to 5). Despite the efforts in recognizing the presence of non-posed action units, measuring their intensity has not been studied comprehensively. In this paper, we develop a framework to measure the intensity of AU12 (lip corner puller) and AU6 (cheek raising) in videos captured from live face-to-face communications between infants and mothers. AU12 and AU6 are among the most challenging cases of infant expressions (e.g., due to the low facial texture of an infant's face). One of the problems in facial image analysis is the large dimensionality of the visual data. Our approach to solving this problem is to utilize the spectral regression technique to project high-dimensional facial images into a low-dimensional space. Facial images represented in the low-dimensional space are used to train support vector machine classifiers to predict the intensity of action units. Analysis of 18 minutes of captured video of non-posed facial expressions of several infants and mothers shows significant agreement between a human FACS coder and our approach, making it an efficient approach for automated measurement of the intensity of non-posed facial action units.
Comparison between linear and daily undulating periodized resistance training to increase strength.
To determine the most effective periodization model for strength and hypertrophy is an important step for strength and conditioning professionals. The aim of this study was to compare the effects of linear (LP) and daily undulating periodized (DUP) resistance training on body composition and maximal strength levels. Forty men aged 21.5 ± 8.3 years and with a minimum of 1 year of strength training experience were assigned to an LP (n = 20) or DUP (n = 20) group. Subjects were tested for maximal strength in the bench press, 45° leg press, and arm curl (1 repetition maximum [RM]) at baseline (T1), after 8 weeks (T2), and after 12 weeks of training (T3). Increases of 18.2% and 25.08% in bench press 1RM were observed for the LP and DUP groups, respectively, at T3 compared with T1 (p ≤ 0.05). In the 45° leg press, the LP group exhibited an increase of 24.71% and the DUP group of 40.61% at T3 compared with T1. Additionally, DUP showed an increase of 12.23% at T2 compared with T1 and of 25.48% at T3 compared with T2. For the arm curl exercise, the LP group increased 14.15% and DUP 23.53% at T3 compared with T1. An increase of 20% was also found at T2 compared with T1 for DUP. Although the DUP group increased strength the most in all exercises, no statistical differences were found between groups. In conclusion, undulating periodized strength training induced higher increases in maximal strength than the linear model in strength-trained men. For maximizing strength increases, daily intensity and volume variations were more effective than weekly variations.
Provably secure ciphertext policy ABE
In ciphertext policy attribute-based encryption (CP-ABE), every secret key is associated with a set of attributes, and every ciphertext is associated with an access structure on attributes. Decryption is enabled if and only if the user's attribute set satisfies the ciphertext access structure. This provides fine-grained access control on shared data in many practical settings, e.g., secure database and IP multicast. In this paper, we study CP-ABE schemes in which access structures are AND gates on positive and negative attributes. Our basic scheme is proven to be chosen plaintext (CPA) secure under the decisional bilinear Diffie-Hellman (DBDH) assumption. We then apply the Canetti-Halevi-Katz technique to obtain a chosen ciphertext (CCA) secure extension using one-time signatures. The security proof is a reduction to the DBDH assumption and the strong existential unforgeability of the signature primitive. In addition, we introduce hierarchical attributes to optimize our basic scheme - reducing both ciphertext size and encryption/decryption time while maintaining CPA security. We conclude with a discussion of practical applications of CP-ABE.
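The access structures studied here are AND gates on positive and negative attribute literals; a tiny sketch of the satisfaction test (the policy logic only, not the cryptography) might look like this, with the example policy being our own illustration.

```python
# Sketch of the AND-gate-on-literals access structure: a ciphertext
# policy is a set of positive/negative attribute literals, and a key's
# attribute set satisfies it iff every literal holds. Policy logic only;
# the pairing-based cryptography is not modeled here.
def satisfies(user_attrs: set, policy: dict) -> bool:
    """policy maps attribute name -> required presence (True = positive)."""
    return all((attr in user_attrs) == positive
               for attr, positive in policy.items())

# Example: policy "doctor AND cardiology AND NOT intern"
policy = {"doctor": True, "cardiology": True, "intern": False}
assert satisfies({"doctor", "cardiology"}, policy)
assert not satisfies({"doctor", "cardiology", "intern"}, policy)
```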
Performance Analysis of FlexRay-based ECU Networks
It is now widely believed that FlexRay will emerge as the predominant protocol for in-vehicle automotive communication systems. As a result, there has been a lot of recent interest in timing and predictability analysis techniques that are specifically targeted towards FlexRay. In this paper we propose a compositional performance analysis framework for a network of electronic control units (ECUs) that communicate via a FlexRay bus. Given a specification of the tasks running on the different ECUs, the scheduling policy used at each ECU, and a specification of the FlexRay bus (e.g. slot sizes and message priorities), our framework can answer questions related to the maximum end-to-end delay experienced by any message, the amount of buffer required at each communication controller and the utilization of the different ECUs and the bus. In contrast to previous timing analysis techniques which analyze the FlexRay bus in isolation, our framework is fully compositional and allows the modeling of the schedulers at the ECUs and the FlexRay protocol in a seamless manner. As a result, it can be used to analyze large systems and does not involve any computationally expensive step like solving an ILP (which previous approaches require). We illustrate our framework using detailed examples and also present results from a Matlab-based implementation.
Weighted partition consensus via kernels
The combination of multiple clustering results (clustering ensemble) has emerged as an important procedure for improving the quality of clustering solutions. In this paper we propose a new cluster ensemble method based on kernel functions, which introduces a Partition Relevance Analysis step. This step has the goal of analyzing the set of partitions in the cluster ensemble and extracting valuable information that can improve the quality of the combination process. In addition, we propose a new similarity measure between partitions, proving that it is a kernel function. A new consensus function is introduced using this similarity measure, based on the idea of finding the median partition. Related to this consensus function, some theoretical results that endorse the suitability of our method are proven. Finally, we conduct numerical experiments to show the behavior of our method on several databases, comparing it with simple clustering algorithms as well as with other cluster ensemble methods.
Historical and philosophical issues in the conservation of cultural heritage
This first volume of the Getty Conservation Institute's "Readings in Conservation" series presents a comprehensive collection of texts on the conservation of art and architecture. Designed for students of art history as well as conservation, the book consists of 46 texts, many originally published in obscure or foreign journals. The 30 art historians and scholars represented raise questions such as when to restore, what to preserve, and how to maintain aesthetic character. Excerpts have been selected from the following books and essays: John Ruskin, "The Seven Lamps of Architecture"; Bernard Berenson, "Aesthetics and History in the Visual Arts"; Clive Bell, "The Aesthetic Hypothesis"; Cesare Brandi, "Theory of Restoration"; Kenneth Clark, "Looking at Pictures"; Erwin Panofsky, "The History of Art as a Humanistic Discipline"; E.H. Gombrich, "Art and Illusion"; Marie Cl. Berducou, "The Conservation of Archaeology"; and Paul Philippot, "Restoration from the Perspective of the Social Sciences".
Association and linkage of anxiety-related traits with a functional polymorphism of the serotonin transporter gene regulatory region in Israeli sibling pairs
A functional polymorphism in the regulatory region of the serotonin transporter gene (5-HTTLPR) has been reported to be both associated and linked to anxiety-related personality measures, although other studies have not replicated these findings. The current study examines both association and linkage of the gene to two major anxiety-related personality measures, the harm avoidance scale on the Tridimensional Personality Questionnaire and the neuroticism scale of the NEO-PI-R, in a sample of 148 Israeli subjects comprising 74 same-sex sibling pairs. We replicated the reported association between the short allele and higher scores on the TPQ harm avoidance scale (P = 0.03), including the subscale of shyness (P = 0.02), and also found association in the same direction between the short allele and the NEO-PI-R neuroticism subscales of anxiety (P = 0.03) and depression (P = 0.04). Sib-pair linkage analysis, using the regression method, further supported a role of the 5-HTTLPR in anxiety-related personality traits.
When does self-esteem relate to deviant behavior? The role of contingencies of self-worth.
Researchers have assumed that low self-esteem predicts deviance, but empirical results have been mixed. This article draws upon recent theoretical developments regarding contingencies of self-worth to clarify the self-esteem/deviance relation. It was predicted that self-esteem level would relate to deviance only when self-esteem was not contingent on workplace performance. In this manner, contingent self-esteem is a boundary condition for self-consistency/behavioral plasticity theory predictions. Using multisource data collected from 123 employees over 6 months, the authors examined the interaction between level (high/low) and type (contingent/noncontingent) of self-esteem in predicting workplace deviance. Results support the hypothesized moderating effects of contingent self-esteem; implications for self-esteem theories are discussed.
Antibacterial Activity of Tetrapleura tetraptera Taub. Pod Extracts
The phytochemical composition and antibacterial activity of ethanolic and water extracts of Tetrapleura tetraptera were studied. Four known human bacterial pathogens were used: Escherichia coli, Staphylococcus aureus, Salmonella typhi and Pseudomonas aeruginosa. The test plant yielded 2.322% of extract with water and 3.180% of extract with ethanol. Phytochemical analysis revealed the presence of tannin (0.36%), saponin (0.54%), flavonoid (0.84%), alkaloid (1.28%), phenol (0.42%) and hydrocyanic acid (HCN, 9.86 mg/kg). Both the water and ethanol extracts showed strong antibacterial activity. The water extract gave inhibition zones of diameter 14.1 mm, 11.0 mm, 12.9 mm and 20.4 mm against Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa and Salmonella typhi respectively, while the ethanol extract gave 20.1 mm, 26.4 mm, 24.8 mm and 14.0 mm against the same test organisms. It was recorded that the ethanolic extract was more potent against the test organisms than the water extract. The established antibacterial activities were attributed to the presence of the phytochemicals, and continued use of the test plant was highly recommended.
Failover mechanisms for distributed SDN controllers
Distributed SDN controllers have been proposed to address performance and resilience issues. While approaches for datacenters are built on strongly consistent state sharing among controllers, others for WANs and constrained networks rely on a loosely consistent distributed state. In this paper, we address the problem of failover for distributed SDN controllers by proposing two strategies for neighboring active controllers to take over the control of orphan OpenFlow switches: (1) greedy incorporation and (2) pre-partitioning among controllers. We built a prototype with distributed Floodlight controllers to evaluate these strategies. The results show that the failover duration with the greedy approach is proportional to the number of orphan switches, while the pre-partitioning approach, which introduces only a very small amount of additional control traffic, enables a quicker reaction, in less than 200 ms.
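A minimal sketch of what greedy incorporation could look like; the claiming rule (least-loaded neighbor first) is our assumption, as the abstract does not specify it.

```python
# Sketch of a greedy incorporation strategy: when a controller fails,
# alive neighbor controllers claim its orphan switches one by one.
# The least-loaded-first rule is an illustrative assumption.
def greedy_incorporation(orphans, controllers, load):
    """orphans: switch ids; controllers: alive controller ids;
    load: dict controller -> current switch count (mutated in place)."""
    assignment = {}
    for sw in orphans:                   # one takeover per orphan switch
        target = min(controllers, key=lambda c: load[c])
        assignment[sw] = target
        load[target] += 1
    return assignment
```

Because each orphan is claimed individually, total failover time grows linearly with the number of orphan switches, matching the proportionality the abstract reports.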
Using Pseudowords for Algorithm Comparison: An Evaluation Framework for Graph-based Word Sense Induction
In this paper we define two parallel data sets based on pseudowords, extracted from the same corpus. They both consist of word-centered graphs for each of 1225 different pseudowords, and use respectively first-order co-occurrences and second-order semantic similarities. We propose an evaluation framework on these data sets for graph-based Word Sense Induction (WSI) focused on the case of coarse-grained homonymy: we compare different WSI clustering algorithms by measuring how well their outputs agree with the a priori known ground-truth decomposition of a pseudoword. We perform this evaluation for four different clustering algorithms: the Markov cluster algorithm, Chinese Whispers, MaxMax, and a gangplank-based clustering algorithm. To further improve the comparison between these algorithms and the analysis of their behaviours, we also define a new specific evaluation measure. As far as we know, this is the first large-scale systematic pseudoword evaluation dedicated to the induction of coarse-grained homonymous word senses.
Students' learning styles and their effects on the use of social media technology for learning
Students with different learning styles have different approaches to learning, and with the rise of social media technologies, it has become pertinent to investigate whether these styles affect students' intentions to use social media for learning. This study explored the factors affecting students' intentions to use social media for learning based on their learning styles (i.e., participatory, collaborative, and independent), using the Social Media Acceptance Model. Convenience sampling resulted in the recruitment of 300 Malaysian students via an online survey (N_participatory = 116; N_independent = 97; N_collaborative = 87). The survey was prepared by drawing on the Social Media Acceptance Model and was piloted before the final data collection, which was carried out in August 2013. The students' demographic details were analyzed using the Statistical Package for the Social Sciences 21, while path modeling and multivariate analyses were conducted using SmartPLS 2.0. Results revealed that Self and Performance significantly affected students' intentions to use social media regardless of their learning styles. A pair-wise comparison showed that Self was significantly more important to participatory-style students than to collaborative students. Effort was found to be the least significant factor, suggesting widespread use of, and familiarity with, social media among students. Understanding the different factors that drive students with different learning styles to use social media will help educators use this technology to assist learning more effectively.
Approaches for a tether-guided landing of an autonomous helicopter
In this paper, we address the design of an autopilot for autonomous landing of a helicopter on a ship deck rocking due to rough seas. A tether is used for landing and securing the helicopter to the deck of the ship in rough weather. A detailed nonlinear dynamic model of the helicopter is used. This model is underactuated, with the rotational motion coupling into the translation. This property is used to design controllers that separate the time scales of rotation and translation. It is shown that the tether tension can be used to couple the translation of the helicopter to the rotation. Two controllers are proposed in this paper. In the first, the rotation time scale is chosen much shorter than the translation time scale, and the rotation reference signals are created to achieve a desired controlled behavior of the translation. In the second, owing to the coupling of the helicopter's translation to its rotation through the tether, the translation reference rates are created to achieve a desired controlled behavior of the attitude and altitude. Controller A is proposed for use when the helicopter is far away from the goal, while Controller B is for the case when the helicopter is close to the ship. The proposed control schemes are proved to be robust to the tracking errors of their internal loops and result in local exponential stability. The performance of the control system is demonstrated by computer simulations. Currently, work is in progress to implement the algorithm using an instrumented model of a helicopter with a tether.
Moving Beyond Neighborhood: Activity Spaces and Ecological Networks As Contexts for Youth Development.
Many scholars, policy analysts, and practitioners agree that neighborhoods are important contexts for urban youth. Yet, despite decades of research, our knowledge of why and how neighborhoods influence the day-to-day lives of youth is still emerging. Theories about neighborhood effects largely assume that neighborhoods operate to influence youth through exposure-based mechanisms. Extant theoretical approaches, however, have neglected the processes by which neighborhood socioeconomic contexts influence the routine spatial exposures-or activity spaces-of urban residents. In this article, we argue that exposure to organizations, institutions, and other settings that characterize individual activity spaces is a key mechanism through which neighborhoods influence youth outcomes. Moreover, we hypothesize that aggregate patterns of shared local exposure-captured by the concept of ecological networks-are influenced by neighborhood socioeconomic characteristics and are independently consequential for neighborhood youth. Neighborhoods in which residents intersect in space more extensively as a result of routine conventional activities will exhibit higher levels of social capital relevant to youth well-being, including (1) familiarity, (2) beneficial (weak) social ties, (3) trust, (4) shared expectations for pro-social youth behavior (collective efficacy), and (5) the capacity for consistent monitoring of public space. We then consider the implications of ecological networks for understanding the complexities of contextual exposure. We specifically discuss the role of embeddedness in ecological communities-that is, clusters of actors and locations that intersect at higher rates-for understanding contextual influences that are inadequately captured by geographically defined neighborhoods. We conclude with an overview of new approaches to data collection that incorporate insights from an activity-space and ecological-network perspective on neighborhood and contextual influences on youth. Our approach offers (1) a new theoretical approach to understanding the links between neighborhood socioeconomic characteristics and youth-relevant dimensions of neighborhood social capital; (2) a basis for conceptualizing contextual influences that vary within, or extend beyond, traditionally understood geographic neighborhoods; and (3) a suite of methodological tools and resources to address the mechanisms of contextual influence more precisely. Research into the causes and consequences of urban neighborhood routine activity structures will illuminate the social processes accounting for compromised youth outcomes in disadvantaged neighborhoods and enhance the capacity for effective youth-oriented interventions.
Botnet Detection in the Internet of Things using Deep Learning Approaches
The recent growth of the Internet of Things (IoT) has resulted in a rise in IoT-based DDoS attacks. This paper presents a solution for the detection of botnet activity within consumer IoT devices and networks. A novel application of deep learning is used to develop a detection model based on a Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN). Word embedding is used for text recognition and conversion of attack packets into a tokenised integer format. The developed BLSTM-RNN detection model is compared to an LSTM-RNN for detecting four attack vectors used by the Mirai botnet, and evaluated for accuracy and loss. The paper demonstrates that although the bidirectional approach adds overhead to each epoch and increases processing time, it proves to be a better progressive model over time. A labelled dataset was generated as part of this research and is available upon request.
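A minimal Keras sketch of the tokenised-packet BLSTM classifier described above; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a tokenised-packet BLSTM classifier with Keras; vocabulary
# size, sequence length, and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_blstm(vocab_size=1000, seq_len=100):
    return models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 32),       # word-embedded packet tokens
        layers.Bidirectional(layers.LSTM(64)),  # reads the sequence both ways
        layers.Dense(1, activation="sigmoid"),  # P(traffic is botnet attack)
    ])

model = build_blstm()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```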
The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion
It is essential for a marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships in a specific time span in order to effectively resolve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship's current position from a number of doubled (redundant) autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is practically used on board ships.
Issues in long-term protein delivery using biodegradable microparticles.
Recently, a variety of bioactive protein drugs have become available in large quantities as a result of advances in biotechnology. Such availability has prompted the development of long-term protein delivery systems. Biodegradable microparticulate systems have been used widely for the controlled release of protein drugs over days and months. The most widely used biodegradable polymer has been poly(d,l-lactic-co-glycolic acid) (PLGA). Protein-containing microparticles are usually prepared by the water/oil/water (W/O/W) double emulsion method, and variations of this method, such as solid/oil/water (S/O/W) and water/oil/oil (W/O/O), have also been used. Other methods of preparation include spray drying, ultrasonic atomization, and electrospray methods. The important factors in developing biodegradable microparticles for protein drug delivery are the protein release profile (including burst release, duration of release, and extent of release), microparticle size, protein loading, encapsulation efficiency, and bioactivity of the released protein. Many studies used albumin as a model protein, and thus the bioactivity of the released protein was not examined. Other studies, which utilized enzymes, insulin, erythropoietin, and growth factors, have suggested that the right formulation to preserve the bioactivity of the loaded protein drug during the processing and storage steps is important. The protein release profiles from various microparticle formulations can be classified into four distinct categories (Types A, B, C, and D), based on the magnitude of burst release, the extent of protein release, and the protein release kinetics following the burst release. The protein loading (i.e., the total amount of protein loaded divided by the total weight of microparticles) in various microparticles is 6.7 ± 4.6%, ranging from 0.5% to 20.0%. The development of clinically successful long-term protein delivery systems based on biodegradable microparticles requires improvement in drug loading efficiency, control of the initial burst release, and the ability to control the protein release kinetics.
A family's request for complementary medicine after patient brain death.
A 19-year-old woman living with relatives in the United States who was admitted for elective cranial surgery for complications related to a congenital disorder developed an acute intracranial hemorrhage 10 days after surgery. The patient was declared dead following repeat negative apnea tests. The patient's father requested that the treating team administer an unverified traditional medicinal substance to the patient. Because of the unusual nature of this request, the treating team called an ethics consultation. The present article reviews this case and discusses other cases that share key features to determine whether and when it is appropriate to accommodate requests for interventions on patients who have been declared dead.
Semantic Role Labeling Via Integer Linear Programming Inference
We present a system for the semantic role labeling task. The system combines a machine learning technique with an inference procedure based on integer linear programming that supports the incorporation of linguistic and structural constraints into the decision process. The system is tested on the data provided in the CoNLL-2004 shared task on semantic role labeling and achieves very competitive results.
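A small PuLP sketch of ILP inference over scored candidate argument spans, with two typical structural constraints (no overlapping arguments, each core role used at most once); the scores and candidates are assumed to come from the learned classifier, and this constraint set is illustrative rather than the paper's exact one.

```python
# Sketch of ILP inference for SRL: choose the highest-scoring consistent
# subset of candidate (span, label) arguments under structural constraints.
import pulp

def ilp_decode(candidates, scores, overlaps, core_labels=("A0", "A1")):
    """candidates: list of (span, label) tuples; scores: dict candidate
    -> float; overlaps: set of index pairs whose spans overlap."""
    prob = pulp.LpProblem("srl", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary")
         for i in range(len(candidates))]
    prob += pulp.lpSum(scores[c] * x[i] for i, c in enumerate(candidates))
    for i, j in overlaps:                      # no two arguments overlap
        prob += x[i] + x[j] <= 1
    for lab in core_labels:                    # each core role at most once
        prob += pulp.lpSum(x[i] for i, (_, l) in enumerate(candidates)
                           if l == lab) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [c for i, c in enumerate(candidates) if x[i].value() == 1]
```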
Derivation and validation of REASON: a risk score identifying candidates to screen for peripheral arterial disease using ankle brachial index.
BACKGROUND The recommendation of screening with ankle brachial index (ABI) in asymptomatic individuals is controversial. The aims of the present study were to develop and validate a pre-screening test to select candidates for ABI measurement in the Spanish population 50-79 years old, and to compare its predictive capacity to current Inter-Society Consensus (ISC) screening criteria. METHODS AND RESULTS Two population-based cross-sectional studies were used to develop (n = 4046) and validate (n = 3285) a regression model to predict ABI < 0.9. The validation dataset was also used to compare the model's predictive capacity to that of ISC screening criteria. The best model to predict ABI < 0.9 included age, sex, smoking, pulse pressure and diabetes. Assessment of discrimination and calibration in the validation dataset demonstrated a good fit (AUC: 0.76 [95% CI 0.73-0.79]; Hosmer-Lemeshow test: χ²(6) = 10.73, p = 0.097). Predictions (probability cut-off value of 4.1) presented better specificity and positive likelihood ratio than the ABI screening criteria of the ISC guidelines, and similar sensitivity. This resulted in fewer patients screened per diagnosis of ABI < 0.9 (10.6 vs. 8.75) and a lower proportion of the population aged 50-79 years candidate to ABI screening (63.3% vs. 55.0%). CONCLUSION This model provides accurate ABI < 0.9 risk estimates for ages 50-79, with a better predictive capacity than that of ISC criteria. Its use could reduce possible harms and unnecessary work-ups of ABI screening as a risk stratification strategy in primary prevention of peripheral vascular disease.
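For illustration only, a logistic-score sketch of the form such a pre-screening model takes over the five reported predictors; the coefficients, and our reading of the 4.1 cut-off as a probability percentage, are assumptions rather than the published REASON weights.

```python
# Sketch of a logistic pre-screening score over the five predictors the
# abstract names; coefficients are placeholders, NOT the REASON weights.
import math

def reason_risk(age, male, smoker, pulse_pressure, diabetic,
                coef, intercept):
    """Returns the estimated probability of ABI < 0.9."""
    z = (intercept + coef["age"] * age + coef["male"] * male
         + coef["smoker"] * smoker + coef["pp"] * pulse_pressure
         + coef["dm"] * diabetic)
    return 1.0 / (1.0 + math.exp(-z))

# Screening rule (our interpretation): refer for ABI measurement when
# the predicted probability exceeds the reported cut-off of 4.1 percent.
```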
Learning Cross-Domain Landmarks for Heterogeneous Domain Adaptation
While domain adaptation (DA) aims to associate learning tasks across data domains, heterogeneous domain adaptation (HDA) particularly deals with learning from cross-domain data that have different types of features. In other words, for HDA, data from the source and target domains are observed in separate feature spaces and thus exhibit distinct distributions. In this paper, we propose a novel learning algorithm, Cross-Domain Landmark Selection (CDLS), for solving the above task. With the goal of deriving a domain-invariant feature subspace for HDA, our CDLS is able to identify representative cross-domain data, including unlabeled instances in the target domain, for performing adaptation. In addition, the adaptation capabilities of such cross-domain landmarks can be determined accordingly. This is why our CDLS achieves promising HDA performance when compared with state-of-the-art HDA methods. We conduct classification experiments using data across different features, domains, and modalities. The effectiveness of our proposed method is successfully verified.
On the power efficiency of cascode compensation over Miller compensation in two-stage operational amplifiers
Optimization of power consumption is one of the main design challenges in today's low-power, high-speed analog integrated circuits. In this paper, two popular techniques for stabilizing two-stage operational amplifiers, namely Miller and cascode compensation, are compared from a power point of view. To accomplish this, the cascode-compensated structure is analyzed to derive the equations required for the comparison. The results show that, for the same specifications, cascode compensation is more power efficient than Miller compensation, especially for heavy capacitive loads. This has been confirmed by circuit-level simulations in a 0.25 μm CMOS technology.
Delay in presentation of symptomatic referrals to a breast clinic: patient and system factors
We attempted to identify factors associated with delay in presentation and assessment of women with breast symptoms who attended a London breast clinic. A total of 692 consecutive symptomatic referrals, aged 40–75 years, were studied. Patient delay, assessed prior to diagnosis, was defined as the time elapsing between symptom discovery and first presentation to a medical provider. This was studied in relation to: reasons for delaying, beliefs and attitudes, socio-demographic and clinical variables, psychiatric morbidity and subsequent diagnosis. Thirty-five per cent of the cohort delayed presentation 4 weeks or more (median 13 days). The most common reason given was that they thought their symptom was not serious (odds ratio (OR) = 3.6–8.0). Others thought their symptom would go away (OR = 3.73, 95% CI 2.2–6.4) or delayed because they were scared (OR = 4.61, 95% CI 2.1–10.0). Delay was associated with psychiatric morbidity but not age. Patients who turned out to have cancer tended to delay less (median 7 days), but not significantly. Median system delay – the time between first medical consultation and first clinic visit – was 18 days. Patients who thought they had cancer and those so diagnosed were seen more promptly (median 14 days). Most factors, including socio-economic status and ethnicity, were non-contributory. Beliefs about breast symptoms and their attribution are the most important factors determining when women present. Health education messages should aim to convince symptomatic women that their condition requires urgent evaluation, without engendering fear in them.
Pilot wave quantum model for the stock market
We use methods of classical and quantum mechanics for the mathematical modeling of price dynamics in the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by "hard" conditions (natural resources, industrial production, services, and so on). These conditions, as well as "hard" relations between traders at the financial market, are mathematically described by the classical financial potential. In the real financial market, "hard" conditions are not the only source of price changes. Information exchange and market psychology play an important (and sometimes determining) role in price dynamics. We propose to describe these "soft" financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take market psychology into account. The real trajectories of prices are determined (by the financial analogue of Newton's second law) by two financial potentials: a classical-like one ("hard" market conditions) and a quantum-like one ("soft" market conditions).
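For readers unfamiliar with the pilot-wave formalism the abstract invokes, the standard Bohmian equations (with the price q in place of a particle coordinate) look as follows; this restates textbook Bohmian mechanics, not the paper's specific financial potentials.

```latex
% Writing the wave function as \psi = R\,e^{iS/\hbar}, the price q obeys
% a Newton-type equation with an extra "quantum" (soft-conditions)
% potential Q alongside the classical ("hard") potential V.
\[
m\,\ddot{q} \;=\; -\frac{\partial V}{\partial q}
              \;-\; \frac{\partial Q}{\partial q},
\qquad
Q(q,t) \;=\; -\frac{\hbar^{2}}{2m}\,
             \frac{\partial^{2} R/\partial q^{2}}{R}.
\]
```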
DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation
Cryptography is the science not only of applying complex mathematics and logic to design strong methods for hiding data, called encryption, but also of retrieving the original data, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm but also a strong key and a strong concept for the encryption and decryption process. We introduce a concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of a DNA sequence and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not exceed the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for implementing encryption and decryption based on DNA computing, using the biological operations of transcription, translation, DNA sequencing, and deep learning.
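A small sketch of the base-level encoding step: mapping each plaintext byte to four DNA bases at 2 bits per base; this is a common convention, not necessarily the exact mapping used in the paper.

```python
# Sketch of digital DNA encoding: 2 bits per base, 4 bases per byte.
# The specific bit-to-base mapping is a common convention, assumed here.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def to_dna(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def from_dna(seq: str) -> str:
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

assert from_dna(to_dna("key")) == "key"   # round-trip check
```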
Early detection: The case for early detection
Early detection represents one of the most promising approaches to reducing the growing cancer burden. It already has a key role in the management of cervical and breast cancer, and is likely to become more important in the control of colorectal, prostate and lung cancer. Early-detection research has recently been revitalized by the advent of novel molecular technologies that can identify cellular changes at the level of the genome or proteome, but how can we harness these new technologies to develop effective and practical screening tests?
A climbing autonomous robot for inspection applications in 3D complex environments
Inspection and maintenance work often involves a large number of highly dangerous manual operations, especially within industrial fields such as shipbuilding and construction. This paper deals with an autonomous climbing robot that uses the "caterpillar" concept to climb complex 3D metallic-based structures. During its motion, the robot generates path and grasp plans in real time in order to ensure stable self-support, avoid obstacles in the environment, and optimize the robot's energy consumption during the inspection. The control and monitoring of the robot are achieved through an advanced graphical user interface that allows effective and user-friendly operation of the robot. The experiments confirm its advantages in executing inspection operations.
Joint Placement of Controllers and Gateways in SDN-Enabled 5G-Satellite Integrated Network
Leveraging the concept of the software-defined network (SDN), the integration of terrestrial 5G and satellite networks brings many benefits. The placement of controllers and satellite gateways is of fundamental importance for the design of such an SDN-enabled integrated network, especially for network reliability and latency, since different placement schemes produce different network performance. To the best of our knowledge, this is an entirely new problem. Toward this end, in this paper we first explore the satellite gateway placement problem to obtain the minimum average latency. A simulated annealing based approximate solution (SAA) is developed for this problem, which is able to achieve near-optimal latency. Based on the analysis of latency, we further investigate a more challenging problem, i.e., the joint placement of controllers and gateways, for maximum network reliability while satisfying a latency constraint. A simulated annealing and clustering hybrid algorithm (SACA) is proposed to solve this problem. Extensive experiments based on real-world online network topologies have been conducted and, as validated by our numerical results, enumeration algorithms are able to produce optimal results but have extremely long running times, while SAA and SACA achieve approximately optimal performance with much lower computational complexity.
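A generic simulated-annealing sketch of the SAA-style search over gateway placements; the latency() objective, neighbor move, and cooling schedule stand in for the paper's exact formulation.

```python
# Sketch of simulated annealing over gateway placements: minimize an
# assumed latency() objective by swapping one gateway at a time.
import math
import random

def anneal(nodes, k, latency, T=1.0, cooling=0.995, steps=20000):
    """Pick k gateway nodes (from `nodes`) minimizing latency(placement)."""
    current = random.sample(nodes, k)
    best, best_cost = list(current), latency(current)
    cost = best_cost
    for _ in range(steps):
        cand = list(current)
        cand[random.randrange(k)] = random.choice(
            [n for n in nodes if n not in current])   # swap one gateway
        c = latency(cand)
        # Accept improvements always; accept worse moves with a
        # temperature-dependent probability (Metropolis criterion).
        if c < cost or random.random() < math.exp((cost - c) / T):
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
        T *= cooling                                  # geometric cooling
    return best, best_cost
```

The occasional acceptance of worse placements is what lets the search escape local minima, which is why such schemes approach the enumeration optimum at a fraction of its running time.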
A 2-kW Single-Phase Seven-Level Flying Capacitor Multilevel Inverter With an Active Energy Buffer
High-efficiency and compact single-phase inverters are desirable in many applications such as solar energy harvesting and electric vehicle chargers. This paper presents a 2-kW, 60-Hz, 450-VDC-to-240-VAC power inverter, designed and tested subject to the specifications of the Google/IEEE Little Box Challenge. The inverter features a seven-level flying capacitor multilevel converter, with low-voltage GaN switches operating at 120 kHz. The inverter also includes an active buffer for twice-line-frequency power pulsation decoupling, which reduces the required capacitance by a factor of 8 compared to conventional passive decoupling capacitors, while maintaining an efficiency above 99%. The inverter prototype is a self-contained box that achieves a high power density of 216 W/in³ and a peak overall efficiency of 97.6%, while meeting constraints including input current ripple, load transient, thermal, and FCC Class B EMC specifications.
An Analytical Solution to the Stance Dynamics of Passive Spring-Loaded Inverted Pendulum with Damping
The Spring-Loaded Inverted Pendulum (SLIP) model has been established both as a very accurate descriptive tool and as a good basis for the design and control of running robots. In particular, approximate analytic solutions to the otherwise nonintegrable dynamics of this model provide principled ways in which gait controllers can be built, yielding invaluable insight into their stability properties. However, most existing work on the SLIP model completely disregards the effects of damping, which often cannot be neglected for physical robot platforms. In this paper, we introduce a new approximate analytical solution to the dynamics of this system that also takes into account viscous damping in the leg. We compare both the predictive performance of our approximation and the tracking performance of an associated deadbeat gait controller to similar existing methods in the literature and show that it significantly outperforms them in the presence of damping in the leg.
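The stance dynamics in question can be written down and integrated numerically; the following sketch uses the standard polar-coordinate SLIP equations with a viscous damping term in the leg, with illustrative parameter values rather than those of any specific robot.

```python
# Numerical sketch of damped-SLIP stance dynamics in polar leg coordinates
# (r = leg length, th = leg angle from vertical, toe at the origin). The
# parameter values are illustrative assumptions, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m, k, c, l0, g = 80.0, 20_000.0, 100.0, 1.0, 9.81

def stance(t, s):
    r, dr, th, dth = s
    ddr = r * dth**2 - g * np.cos(th) + (k * (l0 - r) - c * dr) / m
    ddth = (g * np.sin(th) - 2.0 * dr * dth) / r
    return [dr, ddr, dth, ddth]

# touchdown at leg angle 0.3 rad with downward radial velocity
sol = solve_ivp(stance, (0.0, 0.5), [l0, -1.0, 0.3, -3.0], max_step=1e-3)
print("max leg compression:", l0 - sol.y[0].min())
```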
Altmetric: enriching scholarly content with article-level discussion and metrics
Scholarly content is increasingly being discussed, shared and bookmarked online by researchers. Altmetric is a start-up that focuses on tracking, collecting and measuring this activity on behalf of publishers, and here we describe our approach and general philosophy. Over the past year we've seen sharing and discussion activity around approximately 750k articles. The average number of articles shared each day grows by 5-10% a month. We look at examples of how people are interacting with papers online and at how publishers can collect and present the resulting data to deliver real value to their authors and readers.

Introduction. Scholars are increasingly visible on the web and social media [1]. While the majority of their online activities may not be directly related to their research, they are nevertheless discussing, sharing and bookmarking scholarly articles online in large numbers. We know this because our job at Altmetric is to track the attention paid to papers online. Founded in January 2011 and with investment from Digital Science, we're a London-based start-up that identifies, tracks and collects article-level metrics on behalf of publishers. Article-level metrics are quantitative or qualitative indicators of the impact that a single article has had. Examples of the former would be a count of the number of times the article has been downloaded, or shared on Twitter. Examples of the latter would be media coverage or a blog post from somebody well respected in the field.

Tracking the conversations around papers. Encouraging audiences to engage with articles online isn't anything new for many publishers. The Public Library of Science (PLoS), BioMed Central, Cell Press and Nature Publishing Group have all tried encouraging users to leave comments on papers with varying degrees of success, but the response from users has generally been poor, with only a small fraction of papers ever receiving notable attention [2]. A larger proportion of papers are discussed in some depth on academic blogs, and a larger still proportion shared on social networks like Twitter, Facebook and Google+. Scholars seem to feel more comfortable sharing or discussing content in more informal environments tied to their personal identity and where
Prevention and treatment of oral mucositis in patients receiving chemotherapy
Oral mucositis is one of the most common side effects of cancer treatment (chemotherapy and/or radiotherapy). It is an inflammatory process that affects the mucosa of the oral cavity, giving rise to erythematous areas in combination with ulcers that can reach a large size. The true importance of oral mucositis lies in the complications it causes, fundamentally intense pain associated with the oral ulcers and the risk of superinfection. This in turn may require reduction or even suspension of the antineoplastic treatment, with the risk of seriously worsening the patient's prognosis. This points to the importance of establishing therapeutic tools of use in the prevention and/or treatment of mucositis. The present study offers a literature review of the articles published over the last 10 years on the prevention and/or treatment of oral mucositis associated with chemotherapy. Key words: Oral mucositis, management, prevention, treatment, chemotherapy.
Diversity-multiplexing tradeoff in multiple-access channels
In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple-access situation, multiple receive antennas can also be used to spatially separate signals from different users (multiple-access gain). Recent work has characterized the fundamental tradeoff between diversity and multiplexing gains in the point-to-point scenario. In this paper, we extend the results to a multiple-access fading channel. Our results characterize the fundamental tradeoff between the three types of gain and provide insights on the capabilities of multiple antennas in a network context.
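For reference, the point-to-point tradeoff curve established by that recent work (Zheng and Tse) can be stated compactly; it is quoted here from the literature for context, not taken from this paper's abstract.

```latex
% Optimal diversity-multiplexing tradeoff for an M x N point-to-point
% channel (sufficiently long block length): d*(r) is the piecewise-linear
% curve connecting the integer points
\[
  d^*(r) = (M - r)(N - r), \qquad r = 0, 1, \dots, \min(M, N).
\]
```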
Bird's-Eye View Vision System for Vehicle Surrounding Monitoring
Blind spots usually make it difficult for drivers to maneuver their vehicles in complicated environments, such as garages, parking spaces, or narrow alleys. This paper presents a vision system which can assist drivers by providing a panoramic image of the vehicle surroundings in a bird's-eye view. In the proposed system, six fisheye cameras are mounted around a vehicle so that their views cover the whole surrounding area. The parameters of these fisheye cameras were calibrated beforehand so that the captured images can be dewarped into perspective views for integration. Instead of error-prone stereo matching, overlapping regions of adjacent views are stitched together by aligning along a seam with a dynamic programming method, followed by propagating the deformation field of the alignment with Wendland functions. In this way the six fisheye images can be integrated into a single, panoramic, seamless image from a look-down viewpoint. Our experiments clearly demonstrate the effectiveness of the proposed image-stitching method for providing a bird's-eye view for vehicle surrounding monitoring.
Control of DFIG-based wind farms for power network frequency support
This paper investigates a frequency control scheme for doubly fed induction generator (DFIG)-based wind farms to provide frequency support for steady power system operation. Due to the decoupling of the rotor speed from the grid frequency by the connected converters, a DFIG-based wind farm rarely contributes to the effective network inertia. This is particularly true in small isolated power systems with low system inertia. A novel method of controlling the DFIG active power and rotor speed is proposed to enable DFIG-based wind farms to participate in network frequency support. A typical network that combines synchronous generator-based power plants and DFIG-based wind farms is modeled in MATLAB/Simulink to assess the system dynamic performance. The control strategy of the DFIG-based wind farm for power network frequency regulation is validated by the simulation studies. With the proposed strategy, the DFIG-based wind farm can provide useful network support like a conventional power plant, thereby improving power network operation and reliability.
Consumption of a fermented dairy product containing the probiotic Lactobacillus casei DN-114001 reduces the duration of respiratory infections in the elderly in a randomised controlled trial.
Common infectious diseases (CID) of the airways and the gastrointestinal tract are still a considerable cause of morbidity and mortality in the elderly. The present study examined the beneficial effect of a dairy product containing the probiotic strain Lactobacillus casei DN-114 001 (fermented product) on the resistance of free-living elderly people to CID. The study was multicentric, double-blind and controlled, involving 1072 volunteers (median age = 76.0 years) randomised to consumption of either 200 g/d of the fermented product (n 537) or a control (non-fermented) dairy product (n 535) for 3 months, followed by an additional 1 month of follow-up. The results showed that, when considering all CID, the fermented product significantly reduced the average duration per episode of CID (6.5 v. 8 d in the control group; P = 0.008) and the cumulative duration of CID (7 v. 8 d in the control group; P = 0.009). The reduction in both episode and cumulative durations was also significant for all upper respiratory tract infections (URTI; P < 0.001) and for rhinopharyngitis (P < 0.001). This was accompanied by an increase of L. casei species in stools throughout the fermented product consumption (2-3.8 × 10^7 equivalents of colony-forming units/g of stools, P < 0.001). The cumulative number of CID (primary outcome) was not different between groups, nor were CID severity, fever, pathogen occurrence, medication use, immune blood parameters or quality of life. The fermented product was safe and well tolerated. In conclusion, consumption of a fermented dairy product containing the probiotic strain L. casei DN-114 001 in the elderly was associated with a decreased duration of CID in comparison with the control group, especially for URTI such as rhinopharyngitis.
The Cubli: A reaction wheel based 3D inverted pendulum
The Cubli is a 15×15×15 cm cube with reaction wheels mounted on three of its faces. By applying controlled torques to the reaction wheels the Cubli is able to balance on its corner or edge. This paper presents the development of the Cubli. First, the mechatronic design of the Cubli is presented. Then the multi-body system dynamics are derived. The parameters of the nonlinear system are identified using a frequency domain based approach while the Cubli is balancing on its edge with a nominal controller. Finally, the corner balancing using a linear feedback controller is presented along with experimental results.
The Pragmatics of Word Meaning
In this paper, we explore the interaction between lexical semantics and pragmatics. Linguistic processing is informationally encapsulated and utilises relatively simple 'taxonomic' lexical semantic knowledge. On this basis, defeasible lexical generalisations deliver defeasible parts of logical form. In contrast, pragmatics is open-ended and involves arbitrary knowledge. Two axioms specify when pragmatic defaults override lexical ones. We demonstrate that modelling this interaction allows us to achieve a more refined interpretation of words in a discourse context than either the lexicon or pragmatics could do on their own.
JET-SHAPE OBSERVABLES
Studies of jet-shape observables in hard processes are summarized, together with future developments.
Rapid image retrieval for mobile location recognition
Recognizing the location and orientation of a mobile device from captured images is a promising application of image retrieval algorithms. Matching the query images to an existing georeferenced database like Google Street View enables mobile search for location related media, products, and services. Due to the rapidly changing field of view of the mobile device caused by constantly changing user attention, very low retrieval times are essential. These can be significantly reduced by performing the feature quantization on the handheld and transferring compressed Bag-of-Feature vectors to the server. To cope with the limited processing capabilities of handhelds, the quantization of high dimensional feature descriptors has to be performed at very low complexity. To this end, we introduce in this paper the novel Multiple Hypothesis Vocabulary Tree (MHVT) as a step towards real-time mobile location recognition. The MHVT increases the probability of assigning matching feature descriptors to the same visual word by introducing an overlapping buffer around the separating hyperplanes to allow for a soft quantization and an adaptive clustering approach. Further, a novel framework is introduced that allows us to integrate the probability of correct quantization in the distance calculation using an inverted file scheme. Our experiments demonstrate that our approach achieves query times reduced by up to a factor of 10 when compared to the state-of-the-art.
Design of outer rotor IPM type PMSM for 3 wheel electric vehicle
In this paper, the optimization design of an outer rotor IPM type PMSM was carried out for traction of a three-wheel electric vehicle. A half-round magnet shape is proposed to reduce the cogging torque arising from the IPM machine characteristics. The object of the optimization is to find the most effective factors for reducing the torque ripple, which may cause noise, vibration and an uncomfortable ride in the EV. In order to achieve low cogging torque, design parameters such as the air gap, the numbers of slots and poles, and the slot opening size were evaluated. From the simulation results of the outer rotor type IPM, we find that the combination of slot opening and magnet depth is the most important factor for reducing the cogging torque and for high performance in the low-speed operating region. Moreover, a prototype of the designed outer rotor IPM type PMSM was built and its properties were verified.
Rules for Synthesizing Quantum Boolean Circuits Using Minimized Nearest-Neighbour Templates
Quantum Boolean circuit (QBC) synthesis issues are becoming a key area of research in the domain of quantum computing. While minterm-gate-based synthesis and Reed-Muller based canonical decomposition techniques are commonly adopted, the nearest-neighbour synthesis technique for QBC utilizes quantum logic gates involving only adjacent target and control qubits for a given quantum network. Instead of quantum Boolean circuit synthesis using CNOT gates, we have chosen the template-based technique for the synthesis of QBC. This work defines new minimization rules using nearest-neighbour templates, which result in a reduced number of quantum gates and circuit levels. The need for proper relative placement of the quantum gates in order to achieve the minimum gate configuration is also discussed.
A Persona-Based Neural Conversation Model
We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges.
The Disengaged Immigrant: Mapping the Francophone Caribbean Metropolis
As an exemplary locus of crosscultural contact, the city offers fertile ground for theories of hybridity, transnationalism, and globalization. Today's metropolis signals the transformative and transforming identifications, the new axes of belonging and exclusion, and the reconfiguring of national and regional borders occurring across the globe, while at the same time occupying a privileged role as the quintessential space of transnationalism. Recent work on postcolonial urban spaces has foregrounded the incidence of the immigrant in these transformations: emblem of the era's mass migrations, this cultural outsider is paradoxically of the new global city, fundamentally changing the fabric of the metropolis through the crosscultural exchange his or her inscription in urban daily life embodies [1]. The productive encounters generated by these intersections thus reveal a dependence on the figure of the immigrant, a need for both the presence and the dialogical engagement of the exogenous city dweller in the construction of a transnational metropolitanism. This essay seeks to examine the particular terms of this engagement through a consideration of two recent novels of Francophone Caribbean migration: Marie-Célie Agnant's La Dot de Sara (1995) and Gisèle Pineau's L'Exil selon Julia (1996). As contributions to the growing canon on Francophone Caribbean exile in Western metropolitan settings, these novels recall inaugural voices such as Césaire's and Fanon's as well as later works by Michèle Lacrosil, André and Simone Schwarz-Bart, Myriam Warner-Vieyra, and Joseph Zobel [2]. While Pineau's L'Exil selon Julia reflects a recurrent preoccupation with the autobiographically inflected subject of French metropolitan exile, La Dot de Sara is Agnant's first novel and effects a Haitian-Canadian displacement of this Parisian setting in the tradition of Dany Laferrière and Émile Ollivier [3]. Amongst these myr
Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on FPGAs
Convolutional neural networks (CNNs) find applications in a variety of computer vision tasks ranging from object recognition and detection to scene understanding, owing to their exceptional accuracy. Different algorithms exist for computing CNNs. In this paper, we explore the conventional convolution algorithm alongside a faster algorithm based on Winograd's minimal filtering theory for efficient FPGA implementation. Distinct from the conventional convolution algorithm, the Winograd algorithm uses fewer computing resources but puts more pressure on the memory bandwidth. We first propose a fusion architecture that can naturally fuse multiple layers in CNNs, reusing the intermediate data. Based on this fusion architecture, we explore heterogeneous algorithms to maximize the throughput of a CNN. We design an optimal algorithm to determine the fusion and algorithm strategy for each layer. We also develop an automated toolchain to ease the mapping from a Caffe model to an FPGA bitstream using Vivado HLS. Experiments using the widely used VGG and AlexNet networks demonstrate that our design achieves up to 1.99X performance speedup compared to the prior fusion-based FPGA accelerator for CNNs.
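The minimal-filtering saving is easiest to see in the one-dimensional F(2,3) case, which computes two outputs of a 3-tap filter with four multiplications instead of six; the sketch below verifies the textbook transform against direct convolution.

```python
# Winograd minimal filtering F(2,3) for 1D convolution: 2 outputs of a
# 3-tap filter using 4 multiplications instead of 6. This is the textbook
# transform underlying fast CNN convolution, shown here for illustration.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2.0
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2.0
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d, g = np.random.rand(4), np.random.rand(3)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```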
A mobile application framework for the geospatial web
In this paper we present an application framework that leverages geospatial content on the World Wide Web by enabling innovative modes of interaction and novel types of user interfaces on advanced mobile phones and PDAs. We discuss the current development steps involved in building mobile geospatial Web applications and derive three technological pre-requisites for our framework: spatial query operations based on visibility and field of view, a 2.5D environment model, and a presentation-independent data exchange format for geospatial query results. We propose the Local Visibility Model as a suitable XML-based candidate and present a prototype implementation.
Jensen Type Inequalities Involving Homogeneous Polynomials
By means of algebraic, analytical and majorization theories, and under the proper hypotheses, we establish several Jensen type inequalities involving homogeneous polynomials and display their applications.
Supplementation with Red Palm Oil Increases β-Carotene and Vitamin A Blood Levels in Patients with Cystic Fibrosis
Patients with cystic fibrosis (CF) show decreased plasma concentrations of antioxidants due to malabsorption of lipid-soluble vitamins and consumption by chronic pulmonary inflammation. β-Carotene is a major source of retinol and is therefore of particular significance in CF. The aim of this study was to investigate the effect of daily intake of red palm oil (RPO), containing high amounts of β-carotene, on antioxidant levels in CF patients. Sixteen subjects were recruited and instructed to enrich their food with 2 to 3 tablespoons of RPO (~1.5 mg of β-carotene) daily over 8 weeks. Carotenoids, retinol, and α-tocopherol were measured in plasma at baseline and after the intervention. In addition, β-carotene, lycopene, α-tocopherol, and vitamin C were measured in buccal mucosa cells (BMC) to determine the influence of RPO on antioxidant tissue levels. Eleven subjects completed the study properly. Plasma β-carotene, retinol, and α-carotene of these patients increased, but plasma concentrations of other carotenoids and α-tocopherol, as well as concentrations of β-carotene, lycopene, α-tocopherol, and vitamin C in BMC, remained unchanged. Since daily RPO did not show negative side effects, the data suggest that RPO may be used to elevate plasma β-carotene in CF.
Blood diseases detection using data mining techniques
In healthcare systems, a huge amount of medical data is collected from the many medical tests conducted across many domains. Much research has been done to generate knowledge from medical data using data mining techniques. However, there is still a need to extract hidden information from medical data, which can help in detecting diseases at an early stage or even before they occur. In this study, we apply three data mining classifiers (Decision Tree, Rule Induction, and Naïve Bayes) to a blood test dataset collected from Europe Gaza Hospital, Gaza Strip. The classifiers use complete blood count (CBC) characteristics to predict possible blood diseases at an early stage, which may improve the chances of a cure. Three experiments are conducted on the blood test dataset, which covers three classes of blood disease: Hematology Adult, Hematology Children and Tumor. The results show that the Naïve Bayes classifier predicts the Tumor class better than the other two classifiers, with an accuracy of 56%; the Rule Induction classifier gives better results in predicting Hematology (Adult, Children), with accuracies of 57%-67% respectively; while the Decision Tree has the lowest accuracy for detecting the three types of diseases in our dataset.
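A minimal sketch of such an experiment is given below using scikit-learn stand-ins on synthetic CBC-style features; the hospital dataset is not public, the feature names are assumptions, and scikit-learn has no direct rule-induction classifier, so only two of the three methods are shown.

```python
# Minimal sketch comparing classifiers of the kinds named above on
# CBC-style features; synthetic data stands in for the (non-public)
# hospital dataset, and the feature list is an illustrative assumption.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))          # e.g. WBC, RBC, HGB, PLT, HCT
y = rng.integers(0, 3, size=300)       # 3 disease classes, as in the study

for model in (GaussianNB(), DecisionTreeClassifier(max_depth=5)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, scores.mean().round(3))
```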
A new compact wide band 8-way SIW power divider at X-band
Ultra-wideband components have been developed using SIW technology. Various components, including a GCPW transition with less than 0.4 dB insertion loss, are developed. In addition, T- and Y-junctions are optimized with relatively wide bandwidths of greater than 63% and 40%, respectively, with less than 0.6 dB insertion loss. The developed transition was utilized to design an X-band 8-way power divider that demonstrated excellent performance over a 5 GHz bandwidth with less than ±4° phase and ±0.9 dB amplitude imbalance, respectively. The developed SIW power divider has a low profile and is particularly suitable for circuit integration.
Systematic Characterizations of Text Similarity in Full Text Biomedical Publications
BACKGROUND Computational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central. METHODOLOGY/PRINCIPAL FINDINGS 72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively). CONCLUSION/SIGNIFICANCE While quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when establishing ethical guidelines for scientific publications.
Generative Partition Networks for Multi-Person Pose Estimation
This paper proposes a new framework, named Generative Partition Network (GPN), for addressing the challenging multi-person pose estimation problem. Different from existing pure top-down and bottom-up solutions, the proposed GPN models the multi-person partition detection as a generative process from joint candidates and infers joint configurations for person instances from each person partition locally, resulting in both low joint detection and joint partition complexities. In particular, GPN designs a generative model based on the Generalized Hough Transform framework to detect person partitions via votes from joint candidates in the Hough space, parameterized by centroids of persons. Such generative model produces joint candidates and their corresponding person partitions by performing only one pass of joint detection. In addition, GPN formulates the inference procedure for joint configurations of human poses as a graph partition problem and optimizes it locally. Inspired by recent success of deep learning techniques for human pose estimation, GPN designs a multi-stage convolutional neural network with feature pyramid branch to jointly learn joint confidence maps and Hough transformation maps. Extensive experiments on two benchmarks demonstrate the efficiency and effectiveness of the proposed GPN.
A Compromise Principle in Deep Monocular Depth Estimation
Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an ill-posed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this "compromise principle", we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the main classification branch to benefit from the auxiliary regression branch. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves competitive or state-of-the-art results on three challenging benchmarks, including NYU Depth V2 [1], KITTI [2], and Make3D [3].
On-chip optical isolation in monolithically integrated non-reciprocal optical resonators
Non-reciprocal photonic devices, including optical isolators and circulators, are indispensable components in optical communication systems. However, the integration of such devices on semiconductor platforms has been challenging because of material incompatibilities between semiconductors and magneto-optical materials that necessitate wafer bonding, and because of the large footprint of isolator designs. Here, we report the first monolithically integrated magneto-optical isolator on silicon. Using a non-reciprocal optical resonator on a silicon-on-insulator substrate, we demonstrate unidirectional optical transmission with an isolation ratio up to 19.5 dB near the 1,550 nm telecommunication wavelength in a homogeneous external magnetic field. Our device has a small footprint, 290 μm in length, significantly smaller than a conventional integrated optical isolator on a single-crystal garnet substrate. This monolithically integrated non-reciprocal optical resonator may serve as a fundamental building block in a variety of ultracompact silicon photonic devices, including optical isolators and circulators, enabling future low-cost, large-scale integration.

Non-reciprocal photonic devices that break the time-reversal symmetry of light propagation provide critical functionalities such as optical isolation and circulation in photonic systems. Although widely used in optical communications, such devices are still lacking in semiconductor integrated photonic systems [1,2] because of challenges in both materials integration and device design. On the materials side, magneto-optical garnets used in discrete non-reciprocal photonic devices show large lattice and thermal mismatch with semiconductor substrates, making it difficult to achieve monolithic integration of garnets with phase purity, high Faraday rotation and low transmission loss [3,4], and requiring wafer bonding to incorporate them on a semiconductor platform. On the device side, non-reciprocal mode conversion (NRMC) and non-reciprocal phase shift (NRPS) integrated optical isolators have large footprints with length scales from millimetres to centimetres [5,6], which severely limits the feasibility of large-scale and low-cost integration. Efforts have been pursued both in the monolithic integration of iron garnet and in the exploration of other magneto-optical materials with better semiconductor compatibility. Polycrystalline Y3Fe5O12 (YIG) films [3], epitaxial Sr(Ti1-xFex)O3-δ (ref. 7), Sr(Ti1-xCox)O3-δ (ref. 8) and Fe-doped InP films [9] have been demonstrated to have promising magneto-optical performance at a wavelength of 1,550 nm. In relation to device design, several monolithic non-reciprocal photonic devices capitalizing on optical resonance effects (for example, magneto-optical photonic crystals [10], garnet thin-film based optical resonators [11], silicon ring resonators with magneto-optical polymer cladding [12] and modulated ring resonators using non-reciprocal photonic transitions [1]) have been theoretically analysed with a view to reducing the device footprint. However, the experimental realization of monolithic integrated devices on silicon has not been demonstrated so far due to material and fabrication difficulties. Currently, the only experimentally demonstrated optical isolators on silicon still rely on wafer bonding of Ce-doped yttrium iron garnet films grown on garnet single crystals to a silicon-on-insulator (SOI) Mach-Zehnder structure [13] or to an SOI ring resonator [14].

The hybrid integrated Mach-Zehnder device has a transverse-magnetic (TM) mode isolation ratio of 21 dB and insertion loss of 8 dB (ref. 13) (with a device length of 2 mm), and a very recently reported ring resonator has an isolation ratio of 9 dB and insertion loss of 18 dB with a resonator diameter of 1.8 mm (ref. 14). Both devices featured non-uniform magnetic fields provided by bulk magnets. Compared to the hybrid solution, on-chip monolithic integration of non-reciprocal photonic devices offers high throughput, high yield, low cost and large scale, and thus has been sought for integrated photonic platforms for many years. In this Letter, we report the first monolithically integrated optical isolator on an SOI platform. The device operates under a homogeneously applied magnetic field, and uses a design based on a patterned non-reciprocal optical resonator to significantly reduce the footprint [15]. This device combines three essential characteristics of an on-chip isolator: a monolithically integrated design, small footprint and good isolation performance. The device structure is shown in Fig. 1a. The isolator consists of a single-mode silicon racetrack resonator fabricated on an SOI wafer with a top cladding of 1-μm-thick SiO2. Part of the SiO2 top cladding is etched to form a 'window', which directly exposes the underlying silicon resonator waveguide surface. A magneto-optical film is subsequently deposited on the entire sample area without the need for etching. In this work, the film consisted of a polycrystalline garnet bilayer, (Ce1Y2)Fe5O12 (80 nm)/Y3Fe5O12 (20 nm). By using a two-step deposition method (see Methods) with a thin YIG buffer layer, we successfully obtained phase-pure polycrystalline (Ce1Y2)Fe5O12 (Ce:YIG) films on silicon [16] (Supplementary Fig. S3) in which no crystalline material other than the garnet phase was present. The values of Faraday rotation for the polycrystalline YIG and Ce:YIG films were +100° cm⁻¹ and -1,263° cm⁻¹, the saturation magnetizations were 130 e.m.u. cm⁻³ and 120 e.m.u. cm⁻³, and the saturation fields were 100 Oe and 200 Oe, respectively. Because of the SiO2 cladding, the optical mode interacts with the magneto-optical film only at the window region, where the silicon channel waveguide with a garnet top-cladding layer provides strong NRPS of the TM-polarized mode due to the large index contrast between silicon (3.48) and magnetic garnets (~2.2) at 1,550 nm (ref. 17). Figure 1b shows a cross-sectional scanning electron microscopy (SEM) image of the fabricated device at the window section. When an in-plane homogeneous magnetic field is applied perpendicular to
Customers ’ perceptions of online retailing service quality and their satisfaction
Online service quality is one of the key determinants of the success of online retailers. This exploratory study revealed some important findings about online service quality. First, the study identified six key online retailing service quality dimensions as perceived by online customers: reliable/prompt responses, access, ease of use, attentiveness, security, and credibility. Second, of the six, three dimensions, notably reliable/prompt responses, attentiveness, and ease of use, had significant impacts on both customers’ perceived overall service quality and their satisfaction. Third, the access dimension had a significant effect on overall service quality, but not on satisfaction. Finally, this study discovered a significantly positive relationship between overall service quality and satisfaction. Important managerial implications and recommendations are also presented.
Ensemble Deep Learning for Biomedical Time Series Classification
Ensemble learning has been proven, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages over some well-known ensemble methods, such as Bagging and AdaBoost.
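The Simple Average combination rule mentioned above is straightforward; the sketch below applies it to the class-probability outputs of hypothetical base networks, which stand in for the paper's models.

```python
# Minimal sketch of the "Simple Average" ensemble rule: class-probability
# outputs of several base networks are averaged and the argmax is taken.
# The base-model outputs here are placeholders, not the paper's models.
import numpy as np

def simple_average(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# three hypothetical base classifiers' softmax outputs for 4 samples
p1 = np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8], [0.6, 0.4]])
p2 = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7], [0.7, 0.3]])
p3 = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.5, 0.5]])
print(simple_average([p1, p2, p3]))   # -> ensemble class predictions
```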
Big Data – BigData 2018
Traditional time series similarity search based on relevance feedback combines the initial, positive and negative relevant series directly to create a new query sequence for the next search; it cannot make full use of the negative relevant sequences, and in some cases even yields inaccurate query results due to excessive adjustment of the query sequence. In this paper, time series similarity search based on separate relevance feedback is proposed: each round of querying includes a positive query and a negative query, and the results of the two are combined to generate the query results of that round. For a given data sequence, the positive query evaluates its similarity to the initial and positive relevant sequences, and the negative query evaluates its similarity to the negative relevant sequences. The final similar sequences should not only be close to the positive relevant series but also far away from the negative relevant series. Experiments on the UCR data sets showed that, compared with the retrieval method without feedback and a commonly used feedback algorithm, the proposed method can improve the accuracy of similarity search on some data sets.
Macro Malware Detection using Machine Learning Techniques - A New Approach
A macro malware (also called a "macro virus") is code that exploits the macro functionality of office documents (especially Microsoft Office's Excel and Word) to carry out malicious actions against the systems of the victims that open the file. This type of malware was very popular during the late 90s and early 2000s. After its resurgence as a propagation method for other malware in 2014, macro viruses continue to pose a threat to users that is far from being controlled. This paper studies the possibility of improving macro malware detection via machine learning techniques applied to properties of the code.
Design and Optimization of OpenFOAM-based CFD Applications for Modern Hybrid and Heterogeneous HPC Platforms
The progress of high performance computing platforms is dramatic, and most of the simulations carried out on these platforms result in improvements on one level, yet expose shortcomings of current CFD package capabilities. Therefore, hardware-aware design and optimizations are crucial towards exploiting modern computing resources. This thesis proposes optimizations aimed at accelerating numerical simulations, which are illustrated in OpenFOAM solvers. A hybrid MPI and GPGPU parallel conjugate gradient linear solver has been designed and implemented to solve the sparse linear algebraic kernel that derives from two CFD solvers: icoFoam, which is an incompressible flow solver, and laplacianFoam, which solves the Poisson equation for, e.g., thermal diffusion. A load-balancing step is applied using heterogeneous decomposition, which decomposes the computations taking into account the performance of each computing device and seeking to minimize communication. In addition, we implemented the recently developed pipelined conjugate gradient as an algorithmic improvement, and parallelized it using MPI, GPGPU, and a hybrid technique. While many questions of ultimately attainable per-node performance and multi-node scaling remain, the experimental results show that the hybrid implementation of both solvers significantly outperforms state-of-the-art implementations of a widely used open source package.
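The kernel being parallelized is the classical conjugate gradient iteration, sketched below in plain numpy; the pipelined variant mentioned above reorders these steps so that the two dot-product reductions can be overlapped with other work.

```python
# Textbook (unpreconditioned) conjugate gradient for SPD systems, the
# kernel that the thesis parallelizes; shown serially for illustration.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # first reduction (dot product)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r            # second reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
print(conjugate_gradient(A, np.array([1.0, 2.0])))
```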
The Role of Customer Gratitude in Relationship Marketing
Most theories of relationship marketing emphasize the role of trust and commitment in affecting performance outcomes; however, a recent meta-analysis indicates that other mediating mechanisms are at work. Data from two studies—a laboratory experiment and a dyadic longitudinal field survey—demonstrate that gratitude also mediates the influence of a seller’s relationship marketing investments on performance outcomes. Specifically, relationship marketing investments generate short-term feelings of gratitude that drive long-lasting performance benefits based on gratitude-related reciprocal behaviors. The authors identify a set of managerially relevant factors and test their power to alter customer perceptions of relationship marketing investments to increase customer gratitude, which can make relationship marketing programs more effective. Overall, the research empirically demonstrates that gratitude plays an important role in understanding how relationship marketing investments increase purchase intentions, sales growth, and share of wallet.
A Convenient Multicamera Self-Calibration for Virtual Environments
Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.
Deep Learning for Depression Detection of Twitter Users
Mental illness detection in social media can be considered a complex task, mainly due to the complicated nature of mental disorders. In recent years, this research area has started to evolve with the continuous increase in popularity of social media platforms, which have become an integral part of people's lives. This close relationship between social media platforms and their users has made these platforms reflect the users' personal lives on many levels. In such an environment, researchers are presented with a wealth of information regarding one's life. In addition to the complexity of identifying mental illnesses through social media platforms, supervised machine learning approaches such as deep neural networks have not been widely adopted due to the difficulty of obtaining sufficient amounts of annotated training data. For these reasons, we try to identify the most effective deep neural network architecture among a few selected architectures that have been successfully used in natural language processing tasks. The chosen architectures are used to detect users with signs of mental illness (depression in our case) given limited unstructured text data extracted from the Twitter social media platform.
Chest pathology detection using deep learning with non-medical training
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiographs. Convolutional neural network (CNN) deep-architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN trained on a non-medical dataset to identify different types of pathologies in chest x-rays. We tested our algorithm on a 433-image dataset. The best performance was achieved using CNN and GIST features. We obtained an area under the curve (AUC) of 0.87-0.94 for the different pathologies. The results demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. This is a first-of-its-kind experiment showing that deep learning with ImageNet, a large-scale non-medical image database, may be a good substitute for domain-specific representations, which are not yet available, for general medical image recognition tasks.
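The transfer idea can be sketched as follows: features from an ImageNet-trained network feed a simple classifier. The original work used a Decaf-style network with an SVM; the torchvision ResNet-18 below is an assumed modern stand-in, and the x-ray tensors and labels are placeholders.

```python
# Hedged sketch of the paper's idea: off-the-shelf non-medical CNN
# features feeding a simple classifier. ResNet-18 is an assumed stand-in
# for the ImageNet-trained network; the data below are placeholders.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()     # drop the classification head
backbone.eval()

def features(batch):                  # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(batch).numpy()

# hypothetical preprocessed chest x-rays and pathology labels
xrays, labels = torch.randn(16, 3, 224, 224), [0, 1] * 8
clf = SVC(probability=True).fit(features(xrays), labels)
```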
Oral lichenoid drug reaction associated with antihypertensive and hypoglycemic drugs.
Oral lichen planus and oral lichenoid drug reactions have similar clinical and histologic findings. The onset of oral lichenoid drug reactions appears to correspond to the administration of medications, especially antihypertensive drugs, oral hypoglycemic drugs, antimalarial drugs, gold salts, penicillamine and others. The author reports the case of a 58-year-old male patient with an oral lichenoid drug reaction, hypertension and diabetes mellitus. The oral manifestation showed radiating white lines with erythematous and erosive areas. The patient experienced pain and a burning sensation when eating spicy food. A tissue biopsy was carried out and revealed the characteristics of lichen planus. The patient was treated with 0.1% fluocinolone acetonide in an orabase, together with the replacement of the oral hypoglycemic and antihypertensive agents. The lesions improved and the burning sensation disappeared two weeks after treatment. No recurrence was observed at the three-month follow-up.
Differentially Private Random Forest with High Utility
Privacy-preserving data mining has become an active focus of the research community in the domains where data are sensitive and personal in nature. For example, highly sensitive digital repositories of medical or financial records offer enormous values for risk prediction and decision making. However, prediction models derived from such repositories should maintain strict privacy of individuals. We propose a novel random forest algorithm under the framework of differential privacy. Unlike previous works that strictly follow differential privacy and keep the complete data distribution approximately invariant to change in one data instance, we only keep the necessary statistics (e.g. variance of the estimate) invariant. This relaxation results in significantly higher utility. To realize our approach, we propose a novel differentially private decision tree induction algorithm and use them to create an ensemble of decision trees. We also propose feasible adversary models to infer about the attribute and class label of unknown data in presence of the knowledge of all other data. Under these adversary models, we derive bounds on the maximum number of trees that are allowed in the ensemble while maintaining privacy. We focus on binary classification problem and demonstrate our approach on four real-world datasets. Compared to the existing privacy preserving approaches we achieve significantly higher utility.
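The basic primitive behind such differentially private induction is noise calibrated to query sensitivity; the sketch below shows the Laplace mechanism for a sensitivity-1 count query, illustrating the building block rather than the authors' full random-forest algorithm.

```python
# Minimal sketch of the Laplace mechanism underlying differentially
# private tree induction: a count query of sensitivity 1 is released
# with Lap(1/epsilon) noise. This is the generic privacy primitive,
# not the paper's complete algorithm.
import numpy as np

def private_count(values, predicate, epsilon, rng=np.random.default_rng()):
    true_count = sum(predicate(v) for v in values)
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [34, 51, 29, 62, 45]           # toy sensitive attribute
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```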
Detecting perinatal common mental disorders in Ethiopia: validation of the self-reporting questionnaire and Edinburgh Postnatal Depression Scale.
BACKGROUND The cultural validity of instruments to detect perinatal common mental disorders (CMD) in rural, community settings has been little-investigated in developing countries. METHODS Semantic, content, technical, criterion and construct validity of the Edinburgh Postnatal Depression Scale (EPDS) and Self-Reporting Questionnaire (SRQ) were evaluated in perinatal women in rural Ethiopia. Gold-standard measure of CMD was psychiatric assessment using the Comprehensive Psychopathological Rating Scale (CPRS). Community-based, convenience sampling was used. An initial validation study (n=101) evaluated both EPDS and SRQ. Subsequent validation was of SRQ alone (n=119). RESULTS EPDS exhibited poor validity; area under the receiver operating characteristic (AUROC) curve of 0.62 (95%CI 0.49 to 0.76). SRQ-20 showed better validity as a dimensional scale, with AUROC of 0.82 (95%CI 0.68 to 0.96) and 0.70 (95%CI 0.57 to 0.83) in the two studies. The utility of SRQ in detecting 'cases' of CMD was not established, with differing estimates of optimal cut-off score: three and above in Study 1 (sensitivity 85.7%, specificity 75.6%); seven and above in Study 2 (sensitivity 68.4%, specificity 62%). High convergent validity of SRQ as a dimensional measure was demonstrated in a community survey of 1065 pregnant women. LIMITATIONS Estimation of optimal cut-off scores and validity coefficients for detecting CMD was limited by sample size. CONCLUSIONS EPDS demonstrated limited clinical utility as a screen for perinatal CMD in this rural, low-income setting. The SRQ-20 was superior to EPDS across all domains for evaluating cultural equivalence and showed validity as a dimensional measure of perinatal CMD.
Opsin genes, cone photopigments, color vision, and color blindness
In this chapter, we introduce the molecular structure of the genes encoding the human cone photopigments and their expression in photoreceptor cells. We also consider the consequences that alterations in those genes have on the spectral sensitivity of the photopigments, the cone photoreceptor mosaic, and the perceptual worlds of the color normal and color blind individuals who possess them. Throughout, we highlight areas in which our knowledge is still incomplete.
Enhancement of optic cup to disc ratio detection in glaucoma diagnosis
Glaucoma is a major global cause of blindness. An approach to automatically extract the main features in color fundus images is proposed in this paper. The optic cup-to-disc ratio (CDR) in retinal fundus images is one of the principal physiological characteristics used in the diagnosis of glaucoma. A least-squares fitting algorithm is used to improve the accuracy of the boundary estimation. The technique used here is a core component of ARGALI (Automatic cup-to-disc Ratio measurement system for Glaucoma detection and AnaLysIs), a system for automated glaucoma risk assessment. The algorithm's effectiveness is demonstrated on manually segmented retinal fundus images. By comparing the automatic cup height measurement to ground truth, we found that the method accurately detected the neuro-retinal cup height. This work improves the efficiency of clinical interpretation of glaucoma in fundus images of the eye. The tool used to accomplish the objective is MATLAB 7.5.
A CFD-Based Tool for Studying Temperature in Rack-Mounted Servers
Temperature-aware computing is becoming more important in the design of computer systems as power densities increase and high operating temperatures result in higher component failure rates and increased demand for cooling capability. Computer architects and system software designers need to understand the thermal consequences of their proposals, and develop techniques to lower operating temperatures to reduce both transient and permanent component failures. Recognizing the need for thermal modeling tools to support such research, there has been work on modeling processor temperatures at the micro-architectural level, which can be easily understood and employed by computer architects for processor designs. However, there is a dearth of such tools in the academic/research community for undertaking architectural/systems studies beyond a processor: a server box, a rack, or even a machine room. In this paper we present a detailed 3-dimensional computational fluid dynamics based thermal modeling tool, called ThermoStat, for rack-mounted server systems. We conduct several experiments with this tool to show how different load conditions affect the thermal profile, and also illustrate how this tool can help design dynamic thermal management techniques. We propose reactive and proactive thermal management for rack-mounted servers and isothermal workload distribution across the rack.
Mechanical Design of a 6-DOF Aerial Manipulator for assembling bar structures using UAVs
The aim of this paper is to present a methodology for the mechanical design of a 6-DOF lightweight manipulator for assembling bar structures using a rotary-wing UAV. The architecture of the aerial manipulator is based on a comprehensive performance analysis, a manipulability study of the different options, and a preliminary evaluation of the required motorization. The manipulator design consists of a base attached to the UAV landing gear, a 6-DOF robotic arm, and a gripper-style end effector specifically developed for grasping bars as a result of this study. An analytical expression of the manipulator's kinematic model is obtained.
Intellectual disability and its relationship to autism spectrum disorders.
Intellectual disability (ID) and autism spectrum disorders (ASDs) covary at very high rates. Similarly, greater severity of one of these two disorders appears to have effects on the other disorder on a host of factors. A good deal of research has appeared on the topic with respect to nosology, prevalence, adaptive functioning, challenging behaviors, and comorbid psychopathology. The purpose of this paper was to provide a critical review and status report on the research published on these topics. Current status and future directions for better understanding these two covarying disorders was reviewed along with a discussion of relevant strengths and weaknesses of the current body of research.
Repeated labeling using multiple noisy labelers
This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction of predictive models. With the outsourcing of small tasks becoming easier, for example via Amazon’s Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a set of robust techniques that combine different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.
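The core intuition, that majority voting over independent noisy labels raises label quality, can be checked numerically from the binomial tail, as in the sketch below.

```python
# Small numeric check of the repeated-labeling intuition discussed above:
# with n independent labelers of accuracy p > 0.5, majority-vote label
# quality follows the binomial tail and rises with more labels.
from math import comb

def majority_quality(p, n):
    """P(majority of n independent labels is correct), n odd."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range((n // 2) + 1, n + 1))

for n in (1, 3, 5, 11):
    print(n, round(majority_quality(0.7, n), 3))
# 1 -> 0.7, 3 -> 0.784, 5 -> 0.837, 11 -> 0.922
```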
A Framework for Out of Memory SVD Algorithms
Many important applications – from big data analytics to information retrieval, gene expression analysis, and numerical weather prediction – require the solution of large dense singular value decompositions (SVD). In many cases the problems are too large to fit into the computer’s main memory, and thus require specialized out-of-core algorithms that use disk storage. In this paper, we analyze the SVD communications, as related to hierarchical memories, and design a class of algorithms that minimizes them. This class includes out-of-core SVDs but can also be applied between other consecutive levels of the memory hierarchy, e.g., GPU SVD using the CPU memory for large problems. We call these out-of-memory (OOM) algorithms. To design OOM SVDs, we first study the communications for both classical one-stage blocked SVD and two-stage tiled SVD. We present the theoretical analysis and strategies to design, as well as implement, these communication avoiding OOM SVD algorithms. We show performance results for multicore architecture that illustrate our theoretical findings and match our performance models.
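One communication-avoiding idea for tall problems can be sketched concisely: stream row blocks, reduce each to a small triangular factor by QR, and take the SVD of the accumulated factor. The block partitioning below is illustrative; a real OOM solver would read the blocks from disk and tile the updates.

```python
# Sketch of a communication-avoiding idea behind out-of-memory SVD for
# tall matrices: stream row blocks, fold each into an R factor via QR,
# then take the SVD of the small accumulated R. Illustrative only.
import numpy as np

def streamed_tall_svd(row_blocks):
    """row_blocks: iterable of (b, n) arrays; returns singular values."""
    R = None
    for block in row_blocks:               # each block could come from disk
        stack = block if R is None else np.vstack([R, block])
        R = np.linalg.qr(stack, mode="r")  # keep only the small R factor
    return np.linalg.svd(R, compute_uv=False)

A = np.random.rand(10_000, 50)
blocks = np.array_split(A, 20)             # simulate out-of-core row blocks
assert np.allclose(streamed_tall_svd(blocks),
                   np.linalg.svd(A, compute_uv=False))
```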
Comparison of Modulation Techniques Used in WCDMA
Wideband Code Division Multiple Access (WCDMA) was introduced to provide higher data rates in mobile communication, enabling multimedia-rich applications such as video streaming and high-resolution pictures. To enhance the performance of this technology, it is necessary to determine a suitable modulation technique, and suitable error-correcting mechanisms must be implemented to support these services. Analysis of these techniques is crucial to improving system performance. We consider two modulation techniques used in WCDMA systems: Quadrature Amplitude Modulation (QAM) and Quadrature Phase Shift Keying (QPSK). We study the performance of these two techniques and compare them using eye-pattern parameters; this analysis helps determine a suitable modulation technique for WCDMA. MATLAB was used to simulate the WCDMA transmitter section. Keywords: WCDMA, UMTS, TDMA, FDMA, CDMA, BER, SNR, orthogonality, LOS, DSSS.
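As a self-contained illustration of this kind of comparison (a baseband Monte-Carlo sketch in Python rather than the paper's MATLAB transmitter simulation), the snippet below estimates the bit error rate of Gray-coded QPSK over an AWGN channel and prints the standard theoretical BER approximations for QPSK and 16-QAM side by side. The constellation normalization and Eb/N0 bookkeeping follow the usual textbook conventions.

```python
import numpy as np
from math import erfc, sqrt

def qpsk_ber_sim(ebn0_db, n_bits=200_000):
    """Monte-Carlo BER of Gray-coded QPSK over AWGN (unit symbol energy)."""
    bits = np.random.randint(0, 2, n_bits).reshape(-1, 2)
    sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = np.sqrt(1 / (4 * ebn0))  # per dimension; 2 bits/symbol
    rx = sym + noise_std * (np.random.randn(len(sym))
                            + 1j * np.random.randn(len(sym)))
    bits_hat = np.column_stack([rx.real > 0, rx.imag > 0]).astype(int)
    return np.mean(bits_hat != bits)

for db in [0, 2, 4, 6, 8]:
    g = 10 ** (db / 10)
    qpsk_th = 0.5 * erfc(sqrt(g))            # exact for Gray-coded QPSK
    qam16_th = 0.375 * erfc(sqrt(0.4 * g))   # standard 16-QAM approximation
    print(f"Eb/N0 = {db} dB: QPSK sim {qpsk_ber_sim(db):.4f}, "
          f"QPSK theory {qpsk_th:.4f}, 16-QAM theory {qam16_th:.4f}")
```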
Comparative efficacy and safety of different doses of ergocalciferol supplementation in patients with metabolic syndrome
Background Vitamin D deficiency is a common problem worldwide. Several studies have shown an association between vitamin D deficiency and an increased risk of metabolic syndrome. No previous study has compared the efficacy and safety of ergocalciferol at 40,000 versus 20,000 IU/week in patients with metabolic syndrome. Objective To evaluate the efficacy of ergocalciferol supplementation on serum 25-hydroxyvitamin D [25(OH)D] concentrations and to examine safety parameters in patients with metabolic syndrome. Setting Outpatient department of Phramongkutklao Hospital, Bangkok, Thailand. Method A randomized, double-blinded, parallel study was conducted in metabolic syndrome patients with vitamin D deficiency [25(OH)D <20 ng/mL]. Ninety patients were randomly assigned to three groups of 30 patients each. Group 1 received two capsules of placebo/week, group 2 received ergocalciferol 20,000 IU/week, and group 3 received ergocalciferol 40,000 IU/week for 8 weeks. Main outcome measures Serum 25(OH)D concentrations, serum calcium, safety parameters, and corrected QT (QTc) interval. Results Of the 90 patients enrolled, 84 completed the study. At the end of the study, the mean serum 25(OH)D in groups 2 and 3 had increased significantly from baseline (from 15.1 and 14.3 to 26.8 and 30.0 ng/mL, respectively). The increases in serum 25(OH)D in groups 2 and 3 were comparable and significantly greater than in the placebo group. The percentages of patients achieving normal vitamin D levels in groups 1, 2, and 3 were 3.3, 33.3, and 60.0%, respectively, a significant difference between groups (p < 0.001). Adverse reactions in both ergocalciferol treatment groups did not differ from the placebo group (p > 0.05). Serum calcium levels did not change within or between treatment groups. No significant change in QTc was observed in any patient. Conclusions Both 20,000 and 40,000 IU/week of ergocalciferol supplementation for 8 weeks significantly increased serum 25(OH)D concentrations. However, more patients in the 40,000 IU/week group achieved a normal serum 25(OH)D level than in the 20,000 IU/week group. These findings can inform clinicians' choice of ergocalciferol dosing regimen in patients with metabolic syndrome.
Machine Learning-based Sentiment Analysis of Automatic Indonesian Translations of English Movie Reviews
Sentiment analysis is the automatic classification of the overall opinion conveyed by a text towards its subject matter. This paper discusses an experiment in the sentiment analysis of a collection of movie reviews that have been automatically translated into Indonesian. Following [1], we employ three well-known classification techniques: naive Bayes, maximum entropy, and support vector machines, using unigram presence and frequency values as features. The translation is achieved through machine translation and simple word substitutions based on a bilingual dictionary constructed from various online resources. Analysis of the Indonesian translations yielded an accuracy of up to 78.82%, still short of the accuracy for the English documents (80.09%), but satisfactorily high given the simple translation approach.
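A minimal sketch of one of the three classifiers, assuming a hypothetical handful of translated review snippets: a naive Bayes model over unigram presence features, where `binary=True` switches the vectorizer from frequency counts to the presence indicators discussed above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Hypothetical tiny sample of machine-translated Indonesian review snippets.
docs = ["film ini sangat bagus dan menghibur",
        "cerita membosankan dan akting buruk",
        "sangat menyenangkan, saya suka film ini",
        "buang-buang waktu, sangat buruk"]
labels = ["pos", "neg", "pos", "neg"]

# binary=True gives unigram *presence* rather than frequency features.
model = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
model.fit(docs, labels)
print(model.predict(["film yang bagus"]))
```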
From road distraction to safe driving: Evaluating the effects of boredom and gamification on driving behaviour, physiological arousal, and subjective experience
Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as 'gamification'), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a high-fidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety-critical tasks.
Improving the outcome of fractional CO2 laser resurfacing using a probiotic skin cream: Preliminary clinical evaluation
Fractional CO2 resurfacing treatments are known to be more effective against signs of aging than non-ablative treatments, but post-operative redness and swelling prolong the overall downtime, sometimes requiring steroid administration to reduce these local symptoms. In recent years, increasing interest has focused on the possible use of probiotics for treating inflammatory and allergic conditions, suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the authors report their experience with fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated with laser treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients treated with a fractional CO2 laser. All patients applied the cream twice a day for 2 weeks. Outcomes were graded according to an outcome scale. The efficacy of the DermaACB cream was evaluated by comparing the rate at which post-operative signs resolved against a control group of 20 patients treated topically with an antibiotic cream and a hyaluronic acid based cream. Results with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream required an average of 14.3 days for erythema resolution and 9.3 days for resolution of swelling. Post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling compared to a standard treatment.
Aesthetic quality assessment of consumer photos with faces
Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.
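As a hedged illustration of face-centric feature extraction (simple proxies, not the paper's actual technical, perceptual, and social relationship features), the sketch below detects the largest face with an OpenCV Haar cascade and computes a few plausible aesthetics cues from the face region: its relative area, sharpness, and brightness relative to the whole photo.

```python
import cv2

def face_aesthetic_features(image_path):
    """Toy face-centric features for one photo (assumes the path exists)."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    face = gray[y:y + h, x:x + w]
    return {
        "face_area_ratio": (w * h) / (gray.shape[0] * gray.shape[1]),
        "face_sharpness": cv2.Laplacian(face, cv2.CV_64F).var(),
        "face_brightness_ratio": float(face.mean() / (gray.mean() + 1e-6)),
    }
```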
Bus detection device for the blind using RFID application
This paper outlines a bus detection mechanism to help blind people travel from one place to another. To travel independently, the blind rely on auditory and tactile cues such as a walking stick or white cane. The limitation of the walking stick is that a blind person must come into close proximity with their surroundings to determine the location of an obstacle. On that basis, various devices have been developed, such as the Sonicguide, the Mowat sensor, the Laser cane, and the Navbelt [4]. However, these devices can only assist the blind at a pedestrian crossing. This project therefore aims to develop a bus detection prototype for the blind using Radio Frequency Identification (RFID). The paper gives a brief overview of blindness and of RFID systems, reviews relevant papers, and summarizes the current research. The review of RFID systems compares the members of the auto-ID technology family and covers the basic principle of RFID, the types of RFID tagging, and antenna characteristics. The summary of current research discusses the development of the prototype, the database system, the output mechanism, and the integration of hardware and software. Database management will be provided, including information such as bus route, final destination, and bus number. The paper also describes the future work intended to be done.
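A minimal sketch of the detection-and-announcement step described above, with hypothetical tag IDs, database entries, and helper names: the reader reports a tag, the database maps it to a bus, and the result is announced audibly to the user.

```python
# `on_tag_detected` stands in for the callback an RFID reader driver would invoke.
BUS_DATABASE = {
    "04A2B9C1": {"number": "42", "route": "Central - Airport", "final": "Airport"},
    "07F3D210": {"number": "7", "route": "Station - Harbour", "final": "Harbour"},
}

def announce(text):
    # In a real device this would drive a speaker or text-to-speech unit.
    print("AUDIO:", text)

def on_tag_detected(tag_id):
    bus = BUS_DATABASE.get(tag_id)
    if bus is None:
        announce("Unknown vehicle approaching.")
    else:
        announce(f"Bus {bus['number']} to {bus['final']} "
                 f"({bus['route']}) is arriving.")

on_tag_detected("04A2B9C1")
```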
Exact algorithms for routing problems under vehicle capacity constraints
The solution of a vehicle routing problem calls for the determination of a set of routes, each performed by a single vehicle which starts and ends at its own depot, such that all the requirements of the customers are fulfilled and the global transportation cost is minimized. The routes have to satisfy several operational constraints which depend on the nature of the transported goods, on the quality of the service level, and on the characteristics of the customers and of the vehicles. One of the most common operational constraints addressed in the scientific literature is that the vehicle fleet is capacitated and the total load transported by a vehicle cannot exceed its capacity. This paper provides a review of the most recent developments that have had a major impact on the current state of the art of exact algorithms for vehicle routing problems under capacity constraints, with a focus on the basic Capacitated Vehicle Routing Problem (CVRP) and on heterogeneous vehicle routing problems. The most important mathematical formulations for the problem together with various relaxations are reviewed. The paper also describes the recent exact methods and reports a comparison of their computational performances.
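For reference, a standard two-index vehicle-flow formulation of the CVRP with rounded capacity inequalities, in the form commonly given in this literature (notation: depot 0, customer set N, V = N ∪ {0}, arc set A with costs c_ij, total demand d(S) of a customer set S, vehicle capacity Q, fleet size K):

```latex
\begin{align}
\min\ & \sum_{(i,j) \in A} c_{ij} x_{ij} \\
\text{s.t.}\ & \sum_{i \in V} x_{ij} = 1 && \forall j \in N, \\
& \sum_{j \in V} x_{ij} = 1 && \forall i \in N, \\
& \sum_{j \in N} x_{0j} = K, \qquad \sum_{i \in N} x_{i0} = K, \\
& \sum_{i \in S} \sum_{j \notin S} x_{ij} \ge \left\lceil d(S)/Q \right\rceil
  && \forall S \subseteq N,\ S \neq \emptyset, \\
& x_{ij} \in \{0, 1\} && \forall (i,j) \in A.
\end{align}
```

The exponential family of capacity constraints is what exact methods relax and separate dynamically, which is the starting point for the branch-and-cut and branch-cut-and-price algorithms surveyed in the paper.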
Access control management in a distributed environment supporting dynamic collaboration
Ensuring secure and authorized access to remote services and information resources in a dynamic collaborative environment is a challenging task. Two major issues that need to be addressed in this regard are: specification of access control requirements and trust management. Specification of access control requirements for dynamic collaboration is challenging mainly because of the limited or lack of knowledge about remote users' identities and affiliations. The access control policies and constraints defining users' authorization over remote resources and services need to be specified in terms of the attributes and properties of the users. Moreover, the criteria for validating the attributes of the users should also be specified as part of access control requirements. Trust management, in the context of dynamic collaboration, involves validation of user's attributes for secure interaction and prevention of unauthorized disclosure of policies and attributes. The paper discusses these issues in detail and presents a framework for access control and trust management in a distributed collaborative environment.
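A toy sketch of attribute-based authorization in this spirit (hypothetical policies, attributes, and validator; the paper's framework is considerably richer): policies are predicates over user attributes rather than identities, and access is granted only if the attributes first pass a trust-management check.

```python
# Policies keyed by action; each is a predicate over user attributes,
# suiting collaborations where remote users are known only by attributes.
POLICIES = {
    "read:shared-dataset": lambda u: u.get("role") == "researcher"
                                     and u.get("org") in {"LabA", "LabB"},
    "admin:compute-cluster": lambda u: u.get("role") == "admin"
                                       and u.get("clearance", 0) >= 3,
}

def is_authorized(user_attrs, action, attr_validator):
    """Grant access only if the attributes validate (trust management)
    and the action's policy accepts them (access control)."""
    if not attr_validator(user_attrs):
        return False  # e.g., the attribute certificate's issuer is untrusted
    policy = POLICIES.get(action)
    return bool(policy and policy(user_attrs))

# Toy validator: trust attributes only if they name a known issuer.
validator = lambda attrs: attrs.get("issuer") in {"TrustedCA"}
user = {"role": "researcher", "org": "LabA", "issuer": "TrustedCA"}
print(is_authorized(user, "read:shared-dataset", validator))  # True
```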
A Broadband CPW-to-Waveguide Transition Using Quasi-Yagi Antenna
A novel CPW-to-waveguide transition utilizing a uniplanar quasi-Yagi antenna is presented. The X-band back-to-back transition demonstrates 33% bandwidth with return loss better than -10 dB. This transition should find a wide variety of applications due to its high MIC/MMIC compatibility and low cost.
Feature selection for ranking using boosted trees
Modern search engines have to be fast to satisfy users, so there are hard back-end latency requirements. The set of features useful for search ranking functions, though, continues to grow, making feature computation a latency bottleneck. As a result, not all available features can be used for ranking, and in fact, much of the time, only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. To this end, we explore different feature selection methods using boosted regression trees, including greedy approaches (selecting the features with the highest relative importance as computed by boosted trees, and discounting importance by feature similarity) and a randomized approach. We evaluate and compare these approaches using data from a commercial search engine. The experimental results show that the proposed randomized feature selection with feature-importance-based backward elimination outperforms the greedy approaches and achieves comparable relevance with 30 features to a full-feature model trained with 419 features and the same modeling parameters.
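A simplified sketch of the importance-based backward-elimination idea (greedy, without the paper's randomization or its similarity-based discounting), on synthetic data: fit boosted trees, drop the least important features, and repeat until the latency budget's feature count is reached.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def backward_elimination(X, y, target_n_features, drop_per_round=5):
    """Repeatedly fit boosted trees and drop the least important features
    until only `target_n_features` remain."""
    kept = np.arange(X.shape[1])
    while len(kept) > target_n_features:
        model = GradientBoostingRegressor(n_estimators=100).fit(X[:, kept], y)
        order = np.argsort(model.feature_importances_)  # ascending importance
        n_drop = min(drop_per_round, len(kept) - target_n_features)
        kept = np.delete(kept, order[:n_drop])
    return kept

X, y = make_regression(n_samples=500, n_features=50, n_informative=10,
                       random_state=0)
selected = backward_elimination(X, y, target_n_features=10)
model = GradientBoostingRegressor(n_estimators=100)
print("CV score with 10 features:",
      cross_val_score(model, X[:, selected], y, cv=3).mean())
```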
LANTERN: a randomized study of QVA149 versus salmeterol/fluticasone combination in patients with COPD
BACKGROUND The current Global initiative for chronic Obstructive Lung Disease (GOLD) treatment strategy recommends the use of one or more bronchodilators according to the patient's airflow limitation, history of exacerbations, and symptoms. The LANTERN study evaluated the effect of the long-acting β2-agonist (LABA)/long-acting muscarinic antagonist (LAMA) dual bronchodilator QVA149 (indacaterol/glycopyrronium), as compared with the LABA/inhaled corticosteroid combination salmeterol/fluticasone (SFC), in patients with moderate-to-severe COPD and a history of ≤1 exacerbation in the previous year. METHODS In this double-blind, double-dummy, parallel-group study, 744 patients with moderate-to-severe COPD and a history of ≤1 exacerbation in the previous year were randomized (1:1) to QVA149 110/50 μg once daily or SFC 50/500 μg twice daily for 26 weeks. The primary endpoint was noninferiority of QVA149 versus SFC for trough forced expiratory volume in 1 second (FEV1) at week 26. RESULTS Overall, 676 patients completed the study. The primary objective of noninferiority between QVA149 and SFC in trough FEV1 at week 26 was met. QVA149 demonstrated statistically significant superiority to SFC for trough FEV1 (treatment difference [Δ]=75 mL; P<0.001). QVA149 also demonstrated a statistically significant improvement versus SFC in the standardized area under the curve from 0 to 4 hours for FEV1 (FEV1 AUC0-4h) at week 26 (Δ=122 mL; P<0.001). QVA149 and SFC produced similar improvements in transition dyspnea index focal score, St George's Respiratory Questionnaire total score, and rescue medication use. However, QVA149 significantly reduced the rate of moderate or severe exacerbations by 31% (P=0.048) compared with SFC. Overall, the incidence of adverse events was comparable between QVA149 (40.1%) and SFC (47.4%). The incidence of pneumonia was threefold lower with QVA149 (0.8%) versus SFC (2.7%). CONCLUSION These findings support the use of the LABA/LAMA combination QVA149 as an alternative to LABA/inhaled corticosteroid treatment in the management of patients with moderate-to-severe COPD (GOLD B and GOLD D) and a history of ≤1 exacerbation in the previous year.