title: string (8-300 characters)
abstract: string (0-10k characters)
SIN2: Stealth infection on neural network — A low-cost agile neural Trojan attack methodology
Deep Neural Network (DNN) has recently become the “de facto” technique driving the artificial intelligence (AI) industry. However, many security issues have also emerged as DNN-based intelligent systems become increasingly prevalent. Existing DNN security studies, such as adversarial attacks and poisoning attacks, are usually narrowly conducted at the software algorithm level, with misclassification as their primary goal. The more realistic system-level attacks introduced by the emerging intelligent service supply chain, e.g. the third-party cloud-based machine learning as a service (MLaaS) along with the portable DNN computing engine, have never been discussed. In this work, we propose a low-cost modular methodology, Stealth Infection on Neural Network (“SIN2”), to demonstrate novel and practical neural Trojan attacks triggered through the intelligent supply chain. “SIN2” leverages the attack opportunities offered by both the static neural network model and the underlying dynamic runtime system of the neural computing framework through a set of neural Trojaning techniques. We implement a variety of neural Trojan attacks in a Linux sandbox by following the proposed “SIN2” methodology. Experimental results show that our modular design can rapidly produce and trigger various Trojan attacks that easily evade existing defenses.
Modeling User Session and Intent with an Attention-based Encoder-Decoder Architecture
We propose an encoder-decoder neural architecture to model user session and intent using browsing and purchasing data from a large e-commerce company. We begin by identifying the source-target transition pairs between items within each session. Then, the set of source items is passed through an encoder, whose learned representation is used by the decoder to estimate the sequence of target items. Because this process is performed pair-wise, we hypothesize that the model can capture the transition regularities in a more fine-grained way. Additionally, our model incorporates an attention mechanism to explicitly learn the more expressive portions of the sequences in order to improve performance. Besides modeling the user sessions, we also extend the original architecture by attaching a second decoder that is jointly trained to predict the purchasing intent of the user in each session. With this, we want to explore to what extent the model can capture inter-session dependencies. We performed an empirical study comparing against several baselines on a large real-world dataset, showing that our approach is competitive in both item and intent prediction.
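As an illustration of the kind of architecture described above, the following is a minimal sketch (not the authors' code) of a GRU encoder over source items, an attention-based decoder over target items, and a second head predicting session-level purchasing intent; all layer sizes, module names and the dot-product attention variant are assumptions made for illustration.

```python
# Illustrative sketch only: encoder-decoder over item sequences with attention,
# plus a second (intent) head. Vocabulary size and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SessionEncoderDecoder(nn.Module):
    def __init__(self, n_items, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(n_items, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.attn_out = nn.Linear(2 * hid_dim, hid_dim)
        self.item_head = nn.Linear(hid_dim, n_items)   # target-item scores
        self.intent_head = nn.Linear(hid_dim, 1)       # purchase-intent logit

    def forward(self, src, tgt_in):
        enc_out, h = self.encoder(self.emb(src))              # (B, S, H)
        dec_out, _ = self.decoder(self.emb(tgt_in), h)        # (B, T, H)
        # Dot-product attention of each decoder step over the encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))  # (B, T, S)
        ctx = torch.bmm(F.softmax(scores, dim=-1), enc_out)   # (B, T, H)
        fused = torch.tanh(self.attn_out(torch.cat([dec_out, ctx], dim=-1)))
        item_logits = self.item_head(fused)                   # per-step item prediction
        intent_logit = self.intent_head(h[-1]).squeeze(-1)    # one intent per session
        return item_logits, intent_logit

model = SessionEncoderDecoder(n_items=1000)
items, intent = model(torch.randint(0, 1000, (4, 6)), torch.randint(0, 1000, (4, 5)))
print(items.shape, intent.shape)   # torch.Size([4, 5, 1000]) torch.Size([4])
```

Joint training would then combine a cross-entropy loss on the item logits with a binary cross-entropy loss on the intent logit.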
Judiciaries in Comparative Perspective: Table of statutes
Part I: 1. Judicial independence and accountability: core values in liberal democracies Shimon Shetreet Part II: 2. Appointment, discipline and removal of judges in Australia H. P. Lee 3. Appointment, discipline and removal of judges in Canada Martin Friedland 4. Appointment, discipline and removal of judges in New Zealand Philip Joseph 5. Appointment, discipline and removal of judges in South Africa Hugh Corder 6. Appointment, discipline and removal of judges - fundamental reforms in the United Kingdom Kate Malleson 7. Judicial selection, removal and discipline in the United States Mark Tushnet Part III: 8. Judges' freedom of speech: Australia John Williams 9. Judges and free speech in Canada Kent Roach 10. Judges and free speech in New Zealand The Hon. Grant Hammond 11. The judiciary and freedom of speech in South Africa Iain Currie 12. Judges and free speech in the United Kingdom Keith Ewing 13. The criticism and speech of judges in the United States Charles Gardner Geyh Part IV: 14. Judges, bias and recusal in Australia Colin Campbell 15. Judges, bias and recusal in Canada Lorne Soissin 16. Judicial recusal in New Zealand Gerard McCoy 17. Judges, bias and recusal in South Africa The Hon. Kate O'Regan and The Hon. Edwin Cameron 18. Judges, bias and recusal in the United Kingdom Christopher Forsyth 19. Bias, the appearance of bias, and judicial disqualification in the United States W. W. Hodes Part V: 20. Judges and non-judicial functions in Australia Patrick Emerton and H. P. Lee 21. The impact of extra-judicial service on the Canadian judiciary: the need for reform Patrick Monahan and Byron Shaw 22. Judges and the non-judicial function in New Zealand Sir Geoffrey Palmer 23. Judges and non-judicial functions in South Africa Cora Hoexter 24. Judges and non-judicial functions in the United Kingdom Abimbola Olowofoyeku 25. Judges and non-judicial functions in the United States Jeffrey M. Shaman Part VI: 26. The judiciary - a comparative conspectus H. P. Lee.
The outcast-lash-out effect in youth: alienation increases aggression following peer rejection.
Although there are good theoretical reasons to believe that youth who are high in alienation (i.e., estranged from society, significant others, and themselves) are prone to behave aggressively, empirical evidence is lacking. The present experiment tested whether alienation moderates the effects of acute peer rejection on aggression in youth. Participants (N = 121; mean age = 11.5 years) completed a personal profile (e.g., "How do you describe yourself?") that was allegedly evaluated online by a panel of peer judges. After randomly receiving negative or positive feedback from peer judges, participants were given the opportunity to aggress against them (i.e., by reducing their monetary reward and by posting negative comments about them online). As predicted, alienation increased participants' aggression against peers who had rejected them, but not against peers who had praised them, even after controlling for peer-nominated chronic rejection and peer-nominated aggression. Thus, alienated youth are more aggressive than others when they experience acute peer rejection.
STTM: A Tool for Short Text Topic Modeling
Along with the emergence and popularity of social communications on the Internet, topic discovery from short texts becomes fundamental to many applications that require semantic understanding of textual content. As a rising research field, short text topic modeling presents a new and complementary algorithmic methodology that supplements regular text topic modeling, especially targeting the limited word co-occurrence information in short texts. This paper presents the first comprehensive open-source package, called STTM, for use in Java that integrates the state-of-the-art models of short text topic modeling algorithms, benchmark datasets, and abundant functions for model inference and evaluation. The package is designed to facilitate the expansion of new methods in this research field and to make evaluations between new approaches and existing ones accessible. STTM is open-sourced at https://github.com/qiang2100/STTM.
Survey of graph database models
Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.
Automatic speech recognition An evaluation of Google Speech
The use of speech recognition is increasing rapidly and is now available in smart TVs, desktop computers, every new smartphone, etc., allowing us to talk to computers naturally. With its use in home appliances, education and even in surgical procedures, accuracy and speed become very important. This thesis aims to give an introduction to speech recognition and discuss its use in robotics. An evaluation of Google Speech, using Google’s speech API, with regard to word error rate and translation speed, as well as a comparison between Google Speech and Pocketsphinx, is made. The results show that Google Speech presented lower error rates on general purpose sentences but, due to the high average translation speed and the inability to specify vocabulary, was not suitable for voice-controlled moving robots.
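For reference, the word error rate used in such evaluations can be computed with a word-level edit distance, as in the minimal sketch below; it assumes whitespace tokenization, and the example sentences are invented rather than taken from the thesis data.

```python
# Minimal sketch of word error rate (WER):
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed with a word-level edit-distance dynamic program.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented example: one deletion ("the") and one substitution ("two" -> "to").
print(wer("move the robot forward two meters", "move robot forward to meters"))  # ~0.33
```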
Particle-size distributions and seasonal diversity of allergenic and pathogenic fungi in outdoor air
Fungi are ubiquitous in outdoor air, and their concentration, aerodynamic diameters and taxonomic composition have potentially important implications for human health. Although exposure to fungal allergens is considered a strong risk factor for asthma prevalence and severity, limitations in tracking fungal diversity in air have thus far prevented a clear understanding of their human pathogenic properties. This study used a cascade impactor for sampling, and quantitative real-time PCR plus 454 pyrosequencing for analysis to investigate seasonal, size-resolved fungal communities in outdoor air in an urban setting in the northeastern United States. From the 20 libraries produced with an average of ∼800 internal transcribed spacer (ITS) sequences (total 15 326 reads), 12 864 and 11 280 sequences were determined to the genus and species levels, respectively, and 558 different genera and 1172 different species were identified, including allergens and infectious pathogens. These analyses revealed strong relationships between fungal aerodynamic diameters and features of taxonomic compositions. The relative abundance of airborne allergenic fungi ranged from 2.8% to 10.7% of total airborne fungal taxa, peaked in the fall, and increased with increasing aerodynamic diameter. Fungi that can cause invasive fungal infections peaked in the spring, comprised 0.1–1.6% of fungal taxa and typically increased in relative abundance with decreasing aerodynamic diameter. Atmospheric fungal ecology is a strong function of aerodynamic diameter, whereby through physical processes, the size influences the diversity of airborne fungi that deposit in human airways and the efficiencies with which specific groups of fungi partition from outdoor air to indoor environments.
Intracranial EEG and human brain mapping
This review is an attempt to highlight the value of human intracranial recordings (intracranial electro-encephalography, iEEG) for human brain mapping, based on their technical characteristics and based on the corpus of results they have already yielded. The advantages and limitations of iEEG recordings are introduced in detail, with an estimation of their spatial and temporal resolution for both monopolar and bipolar recordings. The contribution of iEEG studies to the general field of human brain mapping is discussed through a review of the effects observed in the iEEG while patients perform cognitive tasks. Those effects range from the generation of well-localized evoked potentials to the formation of large-scale interactions between distributed brain structures, via long-range synchrony in particular. A framework is introduced to organize those iEEG studies according to the level of complexity of the spatio-temporal patterns of neural activity found to correlate with cognition. This review emphasizes the value of iEEG for the study of large-scale interactions, and describes in detail the few studies that have already addressed this point.
Solar Atmospheric Neutrinos: A New Neutrino Floor for Dark Matter Searches
Kenny C. Y. Ng, John F. Beacom, Annika H. G. Peter, and Carsten Rott. Affiliations: Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot, Israel; Center for Cosmology and AstroParticle Physics (CCAPP), Ohio State University, Columbus, OH 43210; Department of Physics, Ohio State University, Columbus, OH 43210; Department of Astronomy, Ohio State University, Columbus, OH 43210; Department of Physics, Sungkyunkwan University, Suwon 440-746, Korea. (Dated: 30 March 2017)
Hydrogen-bonded cubanes and ladder fragments by analogy with the inorganic solid state.
By application of an analogy with the structures of alkali-metal phosphinimides, two hydrogen-bonded cubanes, [Ph(3)CNH(3)(+)X(-)](4) (X = Cl, Br), and one unprecedented four-rung ladder fragment, [Ph(3)CNH(3)(+)I(-)](4), have been prepared in the organic solid state.
Ulnar-sided wrist pain. II. Clinical imaging and treatment
Pain at the ulnar aspect of the wrist is a diagnostic challenge for hand surgeons and radiologists due to the small and complex anatomical structures involved. In this article, imaging modalities including radiography, arthrography, ultrasound (US), computed tomography (CT), CT arthrography, magnetic resonance (MR) imaging, and MR arthrography are compared with regard to differential diagnosis. Clinical imaging findings are reviewed for a more comprehensive understanding of this disorder. Treatments for the common diseases that cause the ulnar-sided wrist pain including extensor carpi ulnaris (ECU) tendonitis, flexor carpi ulnaris (FCU) tendonitis, pisotriquetral arthritis, triangular fibrocartilage complex (TFCC) lesions, ulnar impaction, lunotriquetral (LT) instability, and distal radioulnar joint (DRUJ) instability are reviewed.
Supervised Hashing with Deep Neural Networks
In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively “shallow” networks, limited by the issues arising in backpropagation (e.g. vanishing gradients) as well as by computational efficiency. We propose a novel and efficient training algorithm inspired by the alternating direction method of multipliers (ADMM) that overcomes some of these limitations. Our method decomposes the training process into independent layer-wise local updates through auxiliary variables. Empirically we observe that our training algorithm always converges and that its computational complexity is linearly proportional to the number of edges in the network. In practice we manage to train DNNs with 64 hidden layers and 1024 nodes per layer for supervised hashing in about 3 hours using a single GPU. Our proposed very deep supervised hashing (VDSH) method significantly outperforms the state-of-the-art on several benchmark datasets.
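To make the splitting idea concrete, below is a deliberately simplified penalty-method sketch of training via layer-local updates with auxiliary activation variables; the paper's actual ADMM-based VDSH algorithm and its hashing losses differ, and every shape, penalty weight and step size here is an illustrative assumption.

```python
# Simplified sketch (not the VDSH algorithm): a two-layer regression net trained
# by alternating local updates on an auxiliary-variable split of the objective
#   ||A @ W2 - Y||^2 + rho * ||A - relu(X @ W1)||^2
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # inputs
Y = rng.normal(size=(200, 8))                   # targets (e.g. relaxed code bits)
W1 = rng.normal(scale=0.1, size=(16, 32))
W2 = rng.normal(scale=0.1, size=(32, 8))
A = np.maximum(X @ W1, 0.0)                     # auxiliary hidden activations
rho, lr = 1.0, 1e-3

for _ in range(200):
    H = np.maximum(X @ W1, 0.0)
    # (1) auxiliary-variable update: the split objective is quadratic in A
    A = (Y @ W2.T + rho * H) @ np.linalg.inv(W2 @ W2.T + rho * np.eye(W2.shape[0]))
    # (2) output-layer update: ridge regression of Y on A (a local least squares)
    W2 = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ Y)
    # (3) first-layer update: one gradient step on its own penalty term only
    mask = (X @ W1 > 0.0).astype(float)
    grad_W1 = -2.0 * rho * X.T @ ((A - np.maximum(X @ W1, 0.0)) * mask)
    W1 -= lr * grad_W1

mse = np.mean((np.maximum(X @ W1, 0.0) @ W2 - Y) ** 2)
print(f"final mse: {mse:.4f}")
```

Each step touches only one block of variables, which is the property that makes such schemes attractive for very deep networks where end-to-end backpropagation becomes difficult.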
Stress and the brain: from adaptation to disease
In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.
A 5.8nW, 45ppm/°C on-chip CMOS wake-up timer using a constant charge subtraction scheme
This work presents an ultra-low power oscillator designed for wake-up timers in compact wireless sensors. A constant charge subtraction scheme removes continuous comparator delay from the oscillation period, which is the source of temperature dependence in conventional RC relaxation oscillators. This relaxes comparator design constraints, enabling low power operation. In 0.18μm CMOS, the oscillator consumes 5.8nW at room temperature with a temperature stability of 45ppm/°C (-10°C to 90°C) and a line sensitivity of 1%/V.
Greed is good: algorithmic results for sparse approximation
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
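For readers unfamiliar with the algorithm analyzed above, the following is a minimal NumPy sketch of orthogonal matching pursuit on a synthetic dictionary: greedily select the atom most correlated with the current residual, then re-fit all selected atoms by least squares. The dictionary, signal and sparsity level are illustrative assumptions.

```python
# Minimal OMP sketch on a synthetic unit-norm Gaussian dictionary.
import numpy as np

def omp(D, y, k):
    """Approximate y with k columns (atoms) of dictionary D."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # atom whose correlation with the residual is largest in magnitude
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # orthogonal projection: least-squares re-fit on all selected atoms
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x_s
    coeffs[support] = x_s
    return coeffs, support

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
true_support = [3, 70, 200]
y = D[:, true_support] @ np.array([1.5, -2.0, 0.7])
coeffs, support = omp(D, y, k=3)
print(sorted(support), np.round(coeffs[true_support], 3))
```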
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224×224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102× faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
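The core pooling idea can be sketched in a few lines. The snippet below is illustrative only (the convolutional stem, channel count and pyramid levels are assumptions, not the paper's exact configuration); it shows how pooling at fixed pyramid levels yields the same output length for inputs of different sizes.

```python
# Illustrative spatial pyramid pooling: adaptive pooling at several grid sizes,
# concatenated into one fixed-length vector regardless of input image size.
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(4, 2, 1)):
    """feat: (N, C, H, W) -> (N, C * sum(l*l for l in levels))"""
    pooled = [F.adaptive_max_pool2d(feat, output_size=l).flatten(start_dim=1)
              for l in levels]
    return torch.cat(pooled, dim=1)

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)   # toy convolutional stem
for h, w in [(224, 224), (180, 300)]:                     # two different input sizes
    x = torch.randn(1, 3, h, w)
    v = spatial_pyramid_pool(conv(x))
    print(h, w, v.shape)                                  # both: torch.Size([1, 168])
```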
Hemispatial neglect and rehabilitation in acute stroke.
OBJECTIVES To compare 2 methods for determining neglect in patients within 2 days of stroke, and to investigate whether early neglect was related to rehabilitation practice, and whether this relationship was affected by an early, intensive mobilization intervention. DESIGN Data were collected from patients participating in a phase II randomized controlled trial of early rehabilitation after stroke. SETTING Acute hospital stroke unit. PARTICIPANTS Stroke patients (N=71). INTERVENTION The 2 arms of the trial were very early mobilization (VEM) and standard care (SC). MAIN OUTCOME MEASURES Neglect was assessed using the Star Cancellation Test and the National Institutes of Health Stroke Scale (NIHSS) inattention item within 48 hours of stroke onset, and therapy details were recorded during the hospital stay. RESULTS Assessing neglect so acutely after stroke was difficult: 29 of the 71 patients were unable to complete the Star Cancellation Test, and agreement between this test and the NIHSS measure was only .42. Presence of neglect did not preclude early mobilization. SC group patients with neglect had longer hospital stays (median, 11d) than those without neglect (median, 4d); there was no difference in length of stay between patients with and without neglect in the VEM group (median, 6d in both). CONCLUSION Early mobilization of patients with neglect was feasible and may contribute to a shorter acute hospital stay.
MLC Toolbox: A MATLAB/OCTAVE Library for Multi-Label Classification
Multi-Label Classification toolbox is a MATLAB/OCTAVE library for Multi-Label Classification (MLC). There exist a few Java libraries for MLC, but no MATLAB/OCTAVE library that covers various methods. This toolbox offers an environment for evaluation, comparison and visualization of MLC results. One attraction of this toolbox is that it enables us to try many combinations of feature space dimension reduction, sample clustering, label space dimension reduction, ensembles, and so on.
Molecular targets of non-steroidal anti-inflammatory drugs in neurodegenerative diseases
During the last decade, interest has grown in the beneficial effects of non-steroidal anti-inflammatory drugs (NSAIDs) in neurodegeneration, particularly in pathologies such as Alzheimer’s (AD) and Parkinson’s (PD) disease. Evidence from epidemiological studies has indicated a decreased risk for AD and PD in patients with a history of chronic NSAID use. However, clinical trials with NSAIDs in AD patients have yielded conflicting results, suggesting that these drugs may be beneficial only when used as preventive therapy or in early stages of the disease. NSAIDs may also have salutary effects in other neurodegenerative diseases with an inflammatory component, such as multiple sclerosis and amyotrophic lateral sclerosis. In this review we analyze the molecular (cyclooxygenases, secretases, NF-κB, PPAR, or Rho-GTPases) and cellular (neurons, microglia, astrocytes or endothelial cells) targets of NSAIDs that may mediate the therapeutic function of these drugs in neurodegeneration.
Peptide and protein transdermal drug delivery.
Peptides and proteins are gaining increasing importance as therapeutic agents. Oral delivery of these molecules is not feasible due to gastrointestinal degradation. Parenteral delivery leads to poor patient compliance especially because the short half-life of peptides necessitates repeated administration. Passive transdermal delivery is not feasible but active transdermal delivery of peptides and proteins is a promising alternative route of administration which can bypass gastrointestinal degradation and offer patient compliance. This review discusses active transdermal technologies for delivery of peptides and proteins.
A controlled study of community-based exercise training in patients with moderate COPD
BACKGROUND The effectiveness of clinic-based pulmonary rehabilitation in advanced COPD is well established, but few data exist for less severe patients treated in alternative settings. The purpose of this study was to investigate whether a novel, community-based exercise program (CBE) was feasible and effective for patients with moderate COPD. METHODS Nineteen patients with moderate COPD (mean FEV1 62%) and self-reported exercise impairment were randomized to 12-weeks of progressive endurance and strength training at a local health club under the guidance of a certified personal trainer, or to continuation of unsupervised habitual physical activity. Outcomes assessed at baseline and 12 weeks included session compliance, intensity adherence, treadmill endurance time, muscle strength, dyspnea, and health status. RESULTS Compliance was 94% and adherence was 83%. Comparisons between CBE and control groups yielded the following mean (SEM) differences in favor of CBE: endurance time 134 (74) seconds versus -59 (49) seconds (P=0.041) and TDI 5.1 (0.8) versus -0.2 (0.5) (P<0.001). The CBE group increased muscle strength (weight lifted) by 11.8 kilograms per subject per week of training (P<0.001). SGRQ was not significantly changed. CONCLUSIONS We demonstrated the feasibility and effectiveness of a novel community-based exercise program involving health clubs and personal trainers for patients with moderate COPD. TRIAL REGISTRATION ClinicalTrials.gov Identifier NCT01985529.
Essays on obligation
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 2007.
Uncountable Limits and the lambda Calculus
In this paper we address the problem of solving recursive domain equations using uncountable limits of domains. These arise, for instance, when dealing with the ω1-continuous function-space constructor and are used in the denotational semantics of programming languages which feature unbounded choice constructs. Surprisingly, the category of cpo's and ω1-continuous embeddings is not ω0-cocomplete. Hence the standard technique for solving reflexive domain equations fails. We give two alternative methods. We also discuss the issue of completeness of the λβη-calculus w.r.t. reflexive domain models. We show that among the reflexive domain models in the category of cpo's and ω0-continuous functions there is one which has a minimal theory. We give a reflexive domain model in the category of cpo's and ω1-continuous functions whose theory is precisely the λβη theory. So ω1-continuous λ-models are complete for the λβη-calculus.
Simulator sickness in virtual display gaming: a comparison of stereoscopic and non-stereoscopic situations
In this paper we compare simulator sickness symptoms produced by a racing game in three different conditions. In the first experiment the participants played the Need for Speed car racing game with an ordinary 17" display, and in the second and third experiments they used a head-worn virtual display for game playing. The difference between experiments 2 and 3 was the use of stereoscopy, as in the third experiment the car racing game was seen in stereoscopic three dimensions. Our results indicated that there were no significant differences in sickness symptoms when we compared the ordinary display and the virtual display in non-stereoscopic mode. In the stereoscopic condition the eye strain and disorientation symptoms were significantly elevated compared to the ordinary display. We conclude that using a virtual display as an accessory to a mobile device is a viable alternative, because the non-stereoscopic virtual display did not produce significantly more sickness symptoms compared to ordinary game playing.
Selective cache digest exchange mechanism for group based Content Delivery Networks
Position estimation and tracking of an autonomous mobile sensor using received signal strength
In this paper, an algorithm for approximating the path of a moving autonomous mobile sensor with an unknown position using Received Signal Strength (RSS) measurements is proposed. Using a Least Squares (LS) estimate as an input, a Maximum-Likelihood (ML) approach is used to determine the location of the unknown mobile sensor. For the mobile sensor case, as the sensor changes position the characteristics of the RSS measurements also change; therefore the proposed method adapts the RSS measurement model by dynamically changing the path loss exponent α to aid position estimation. Secondly, a Recursive Least-Squares (RLS) algorithm is used to estimate the path of the moving mobile sensor using the Maximum-Likelihood position estimate as an input. The performance of the proposed algorithm is evaluated via simulation, and it is shown that this method can accurately determine the position of the mobile sensor and can efficiently track the position of the mobile sensor during motion.
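As a rough illustration of the measurement model involved, the sketch below generates RSS values from a log-distance path-loss model and recovers a static position by a grid-searched maximum-likelihood (least-squares) fit; the anchor layout, α and noise level are assumptions, and the paper's adaptive-α and RLS tracking steps are not reproduced here.

```python
# Toy RSS localization: log-distance path-loss model P(d) = P0 - 10*alpha*log10(d/d0),
# position recovered by grid-searched ML (= least squares under Gaussian noise).
import numpy as np

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, d0, alpha, sigma = -40.0, 1.0, 3.0, 1.0       # dBm at reference distance, exponent
true_pos = np.array([3.0, 7.0])

def model_rss(pos, anchors):
    d = np.linalg.norm(anchors - pos, axis=1)
    return P0 - 10.0 * alpha * np.log10(np.maximum(d, d0) / d0)

rss_meas = model_rss(true_pos, anchors) + rng.normal(0.0, sigma, len(anchors))

# Grid search over candidate positions for the best fit to the measured RSS.
xs = ys = np.linspace(0.0, 10.0, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        err = np.sum((rss_meas - model_rss(np.array([x, y]), anchors)) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print("true:", true_pos, "estimate:", np.round(best, 2))
```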
The neurophysiology of memory.
How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.
A Categorical Treatment of Ornaments
Ornaments aim at taming the multiplication of special-purpose data types in dependently typed programming languages. In type theory, purpose is logic. By presenting data types as the combination of a structure and a logic, ornaments relate these special-purpose data types through their common structure. In the original presentation, the concept of ornament was introduced concretely for an example universe of inductive families in type theory, but it was clear that the notion was more general. This paper digs out the abstract notion of ornaments in the form of a categorical model. As a necessary first step, we abstract the universe of data types using the theory of polynomial functors. We are then able to characterise ornaments as cartesian morphisms between polynomial functors. We thus gain access to powerful mathematical tools that shall help us understand and develop ornaments. We shall also illustrate the adequacy of our model. Firstly, we rephrase the standard ornamental constructions into our framework. Thanks to its conciseness, we gain a deeper understanding of the structures at play. Secondly, we develop new ornamental constructions, by translating categorical structures into type theoretic artefacts.
A Gedanken Experiment For Gravitomagnetism
A gedanken experiment implies the existence of gravitomagnetism and raises a question about what we know about the weak-field limit of the gravitomagnetic field of General Relativity.
Evaluation of Haar Cascade Classifiers Designed for Face Detection
In the past years a lot of effort has been made in the field of face detection. The human face contains important features that can be used by vision-based automated systems in order to identify and recognize individuals. Face location, the primary step of vision-based automated systems, finds the face area in the input image. An accurate location of the face is still a challenging task. The Viola-Jones framework has been widely used by researchers in order to detect the location of faces and objects in a given image. Face detection classifiers are shared by public communities, such as OpenCV. An evaluation of these classifiers will help researchers to choose the best classifier for their particular need. This work focuses on the evaluation of face detection classifiers with respect to facial landmarks. Keywords: face datasets, face detection, facial landmarking, Haar wavelets, Viola-Jones detectors.
Perfusion deficits detected by arterial spin-labeling in patients with TIA with negative diffusion and vascular imaging.
BACKGROUND AND PURPOSE A substantial portion of clinically diagnosed TIA cases is imaging-negative. The purpose of the current study is to determine if arterial spin-labeling is helpful in detecting perfusion abnormalities in patients presenting clinically with TIA. MATERIALS AND METHODS Pseudocontinuous arterial spin-labeling with 3D background-suppressed gradient and spin-echo was acquired on 49 patients suspected of TIA within 24 hours of symptom onset. All patients were free of stroke history and had no lesion-specific findings on general MR, DWI, and MRA sequences. The calculated arterial spin-labeling CBF maps were scored from 1-3 on the basis of presence and severity of perfusion disturbance by 3 independent observers blinded to patient history. An age-matched cohort of 36 patients diagnosed with no cerebrovascular events was evaluated as a control. Interobserver agreement was assessed by use of the Kendall concordance test. RESULTS Scoring of perfusion abnormalities on arterial spin-labeling scans of the TIA cohort was highly concordant among the 3 observers (W = 0.812). The sensitivity and specificity of arterial spin-labeling in the diagnosis of perfusion abnormalities in TIA was 55.8% and 90.7%, respectively. In 93.3% (70/75) of the arterial spin-labeling CBF map readings with positive scores (≥2), the brain regions where perfusion abnormalities were identified by 3 observers matched with the neurologic deficits at TIA onset. CONCLUSIONS In this preliminary study, arterial spin-labeling showed promise in the detection of perfusion abnormalities that correlated with clinically diagnosed TIA in patients with otherwise normal neuroimaging results.
Embedding Structured Contour and Location Prior in Siamesed Fully Convolutional Networks for Road Detection
Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods have sprung up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of the road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which are able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) The location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergence speed of training the s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and a one-class road detection data set, and achieves a competitive result with the state of the art.
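A toy sketch of the two-stream idea with a location prior appended as an extra feature channel is given below; the channel counts, the prior map and all module names are illustrative assumptions rather than the paper's s-FCN-loc configuration.

```python
# Illustrative two-stream road segmenter: RGB stream + contour stream fused,
# with a fixed location-prior channel appended before the per-pixel classifier.
import torch
import torch.nn as nn

class TwoStreamRoadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.contour_stream = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(16 + 1, 2, kernel_size=1)    # +1 prior channel

    def forward(self, rgb, contour, location_prior):
        fused = self.rgb_stream(rgb) + self.contour_stream(contour)
        fused = torch.cat([fused, location_prior], dim=1)        # append prior map
        return self.classifier(fused)                            # per-pixel logits

h, w = 64, 128
# Toy location prior: road pixels are more likely near the bottom of the image.
prior = torch.linspace(0, 1, h).view(1, 1, h, 1).expand(1, 1, h, w)
net = TwoStreamRoadNet()
logits = net(torch.randn(1, 3, h, w), torch.randn(1, 1, h, w), prior)
print(logits.shape)   # torch.Size([1, 2, 64, 128])
```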
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present. Where other approaches use a static camera position or fixed data collection routines, our Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions. In trials of grasping 20 objects from clutter, our MVP controller achieves 80% grasp success, outperforming a single-viewpoint grasp detector by 12%. We also show that our approach is both more accurate and more efficient than approaches which consider multiple fixed viewpoints.
Combining classifiers to identify online databases
We address the problem of identifying the domain of online databases. More precisely, given a set F of Web forms automatically gathered by a focused crawler and an online database domain D, our goal is to select from F only the forms that are entry points to databases in D. Having a set of Web forms that serve as entry points to similar online databases is a requirement for many applications and techniques that aim to extract and integrate hidden-Web information, such as meta-searchers, online database directories, hidden-Web crawlers, and form-schema matching and merging. We propose a new strategy that automatically and accurately classifies online databases based on features that can be easily extracted from Web forms. By judiciously partitioning the space of form features, this strategy allows the use of simpler classifiers that can be constructed using learning techniques that are better suited for the features of each partition. Experiments using real Web data in a representative set of domains show that the use of different classifiers leads to high accuracy, precision and recall. This indicates that our modular classifier composition provides an effective and scalable solution for classifying online databases.
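The partition-and-combine strategy can be illustrated with a small, hedged sketch: structural form features feed one simple learner, textual features another, and their probability estimates are averaged. The features, learners, combination rule and data below are assumptions for illustration, not the paper's exact design.

```python
# Toy partition-specific classifiers combined by averaging class probabilities.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(3)
n = 400
X_struct = rng.integers(0, 10, size=(n, 5))           # e.g. counts of fields, buttons
X_text = rng.integers(0, 4, size=(n, 50))             # e.g. bag-of-words of form labels
y = (X_struct[:, 0] + X_text[:, 0] > 6).astype(int)   # synthetic "in-domain" label

tree = DecisionTreeClassifier(max_depth=4).fit(X_struct[:300], y[:300])
nb = MultinomialNB().fit(X_text[:300], y[:300])

# Combine the partition-specific classifiers on the held-out forms.
proba = 0.5 * tree.predict_proba(X_struct[300:]) + 0.5 * nb.predict_proba(X_text[300:])
pred = proba.argmax(axis=1)
print("held-out accuracy:", (pred == y[300:]).mean())
```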
IRRIGATION WITH MAGNETIZED WATER, A NOVEL TOOL FOR IMPROVING CROP PRODUCTION IN EGYPT
Agricultural sciences take an interest not only in the common and valued crop-forming factors, but also in those that are less expensive, environmentally safe and generally underestimated. The technology of magnetized water has been developed and subsequently used widely in the field of agriculture in many countries such as Australia, the USA, China and Japan. Despite its importance, it has not yet been explored in Egypt. Therefore, the present work was carried out to study the response of some food crops to irrigation with magnetized water under greenhouse conditions. Monocotyledonous crops such as wheat and flax and dicotyledonous crops such as chickpea and lentil were selected for the present study. The results of our experiments revealed that all plants irrigated with magnetized water exhibited a remarkable increase in vegetative growth and biochemical constituents. Further, the results indicated that the number of protein bands increased in plants treated with magnetized water compared to untreated control plants. Moreover, the magnetized water treatment increased the yield and yield component traits of all crops. Over two seasons, the increases in seed yield per plant in the monocotyledonous crops reached 9.10% and 31.3% for flax and wheat, respectively, and in the dicotyledonous crops reached 24.9% and 38.5% for lentil and chickpea, respectively, compared with crops irrigated with tap water. It appears that this preliminary study on the utilization of magnetized water can lead to improved quantity and quality of crop production under Egyptian conditions.
Are expressive suppression and cognitive reappraisal associated with stress-related symptoms?
Emotion dysregulation is thought to be critical to the development of negative psychological outcomes. Gross (1998b) conceptualized the timing of regulation strategies as key to this relationship, with response-focused strategies, such as expressive suppression, as less effective and more detrimental compared to antecedent-focused ones, such as cognitive reappraisal. In the current study, we examined the relationship between reappraisal and expressive suppression and measures of psychopathology, particularly for stress-related reactions, in both undergraduate and trauma-exposed community samples of women. Generally, expressive suppression was associated with higher, and reappraisal with lower, self-reported stress-related symptoms. In particular, expressive suppression was associated with PTSD, anxiety, and depression symptoms in the trauma-exposed community sample, with rumination partially mediating this association. Finally, based on factor analysis, expressive suppression and cognitive reappraisal appear to be independent constructs. Overall, expressive suppression, much more so than cognitive reappraisal, may play an important role in the experience of stress-related symptoms. Further, given their independence, there are potentially relevant clinical implications, as interventions that shift one of these emotion regulation strategies may not lead to changes in the other.
Advanced illumination techniques for GPU volume raycasting
Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry. The lecture starts off with an in-depth introduction to the concepts behind GPU-based raycasting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate physically-based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion and simple Monte-Carlo-based approaches to global illumination, including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model. For rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualizations for science magazines may now work on tomographic scans directly, without the necessity of falling back to creating polygonal models of anatomical structures.
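For orientation, the following CPU-side sketch shows the core loop that such techniques build on: marching rays through a scalar volume, mapping samples to color and opacity with a transfer function, and compositing front-to-back with early ray termination. The volume, transfer function and axis-aligned camera are toy assumptions; a GPU implementation would run the same per-ray loop in a fragment or compute shader.

```python
# Minimal direct volume raycasting: front-to-back compositing along +z rays.
import numpy as np

# Synthetic 64^3 volume: a soft sphere of higher density in the center.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
vol = np.exp(-(((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2) / (2 * (n/6)**2)))

def transfer(sample):
    """Map a scalar sample to (grayscale color, opacity)."""
    opacity = np.clip((sample - 0.3) * 2.0, 0.0, 1.0) * 0.1
    return sample, opacity

image = np.zeros((n, n))
for iy in range(n):
    for ix in range(n):
        color_acc, alpha_acc = 0.0, 0.0
        for iz in range(n):                      # ray marches along +z
            c, a = transfer(vol[iz, iy, ix])
            color_acc += (1.0 - alpha_acc) * a * c
            alpha_acc += (1.0 - alpha_acc) * a
            if alpha_acc > 0.99:                 # early ray termination
                break
        image[iy, ix] = color_acc
print(image.shape, float(image.max()))
```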
FDG PET imaging for grading and prediction of outcome in chondrosarcoma patients
The aims of this study were to assess the potential of fluorine-18 fluorodeoxyglucose positron emission tomography (FDG PET) for tumor grading in chondrosarcoma patients and to evaluate the role of standardized uptake value (SUV) as a parameter for prediction of patient outcome. FDG PET imaging was performed in 31 patients with chondrosarcoma prior to therapy. SUV was calculated for each tumor and correlated to tumor grade and size, and to patient outcome in terms of local relapse or metastatic disease with a mean follow-up period of 48 months. Chondrosarcomas were detectable in all patients. Tumor SUV was 3.38±1.61 for grade I (n=15), 5.44±3.06 for grade II (n=13), and 7.10±2.61 for grade III (n=3). Significant differences were found between patients with and without disease progression: SUV was 6.42±2.70 (n=10) in patients developing recurrent or metastatic disease compared with 3.74±2.22 in patients without relapse (P=0.015). Using a cut-off of 4 for SUV, sensitivity, specificity, and positive and negative predictive values for a relapse were 90%, 76%, 64%, and 94%, respectively. Combining tumor grade and SUV, these parameters improved to 90%, 95%, 90%, and 95%, respectively. Pretherapeutic tumor SUV obtained by FDG PET imaging was a useful parameter for tumor grading and prediction of outcome in chondrosarcoma patients. The combination of SUV and histopathologic tumor grade further improved prediction of outcome substantially, allowing identification of patients at high risk for local relapse or metastatic disease.
Antirheumatic activity of fenclofenac.
This study has set out to establish whether fenclofenac has an antirheumatic effect in addition to its anti-inflammatory and analgesic activity. The results show that during the course of a 6-month study the drug improved clinical parameters, including the articular index, early morning stiffness, ring sizes, and grip strength, and produced changes in laboratory measurements such as the levels of C-reactive protein, IgG, and rheumatoid factor.
Interactive hemodynamic effects of dipeptidyl peptidase-IV inhibition and angiotensin-converting enzyme inhibition in humans.
Dipeptidyl peptidase-IV inhibitors improve glucose homeostasis in type 2 diabetics by inhibiting degradation of the incretin hormones. Dipeptidyl peptidase-IV inhibition also prevents the breakdown of the vasoconstrictor neuropeptide Y and, when angiotensin-converting enzyme (ACE) is inhibited, substance P. This study tested the hypothesis that dipeptidyl peptidase-IV inhibition would enhance the blood pressure response to acute ACE inhibition. Subjects with the metabolic syndrome were treated with 0 mg of enalapril (n=9), 5 mg of enalapril (n=8), or 10 mg enalapril (n=7) after treatment with sitagliptin (100 mg/day for 5 days and matching placebo for 5 days) in a randomized, cross-over fashion. Sitagliptin decreased serum dipeptidyl peptidase-IV activity (13.08±1.45 versus 30.28±1.76 nmol/mL/min during placebo; P≤0.001) and fasting blood glucose. Enalapril decreased ACE activity in a dose-dependent manner (P<0.001). Sitagliptin lowered blood pressure during enalapril (0 mg; P=0.02) and augmented the hypotensive response to 5 mg of enalapril (P=0.05). In contrast, sitagliptin attenuated the hypotensive response to 10 mg of enalapril (P=0.02). During sitagliptin, but not during placebo, 10 mg of enalapril significantly increased heart rate and plasma norepinephrine concentrations. There was no effect of 0 or 5 mg of enalapril on heart rate or norepinephrine after treatment with either sitagliptin or placebo. Sitagliptin enhanced the dose-dependent effect of enalapril on renal blood flow. In summary, sitagliptin lowers blood pressure during placebo or submaximal ACE inhibition; sitagliptin activates the sympathetic nervous system to diminish hypotension when ACE is maximally inhibited. This study provides the first evidence for an interactive hemodynamic effect of dipeptidyl peptidase-IV and ACE inhibition in humans.
International Sentiment Analysis for News and Blogs
There is a growing interest in mining opinions using sentiment analysis methods from sources such as news, blogs and product reviews. Most of these methods have been developed for English and are difficult to generalize to other languages. We explore an approach utilizing state-of-the-art machine translation technology and perform sentiment analysis on the English translation of a foreign language text. Our experiments indicate that (a) entity sentiment scores obtained by our method are statistically significantly correlated across nine languages of news sources and five languages of a parallel corpus; (b) the quality of our sentiment analysis method is largely translator independent; (c) after applying certain normalization techniques, our entity sentiment scores can be used to perform meaningful cross-cultural comparisons. Introduction: There is considerable and rapidly-growing interest in using sentiment analysis methods to mine opinion from news and blogs (Yi et al. 2003; Pang, Lee, & Vaithyanathan 2002; Pang & Lee 2004; Wiebe 2000; Yi & Niblack 2005). Applications include product reviews, market research, public relations, and financial modeling. Almost all existing sentiment analysis systems are designed to work in a single language, usually English. But effectively mining international sentiment requires text analysis in a variety of local languages. Although in principle sentiment analysis systems specific to each language can be built, such approaches are inherently labor intensive and complicated by the lack of linguistic resources comparable to WordNet for many languages. An attractive alternative to this approach uses existing translation programs and simply translates source documents to English before passing them to a sentiment analysis system. The primary difficulty here concerns the loss of nuance incurred during the translation process. Even state-of-the-art language translation programs fail to translate substantial amounts of text, make serious errors on what they do translate, and reduce well-formed texts to sentence fragments. Still, we believe that translated texts are sufficient to accurately capture sentiment, particularly in sentiment analysis systems (such as ours) which aggregate sentiment from multiple documents. In particular, we have generalized the Lydia sentiment analysis system to monitor international opinion on a country-by-country basis by aggregating daily news data from roughly 200 international English-language papers and over 400 sources partitioned among eight other languages. Maps illustrating the results of our analysis are shown in Figure 1. From these maps we see that George Bush is mentioned the most positively in newspapers from Australia, France and Germany, and negatively in most other sources. Vladimir Putin, on the other hand, has positive sentiment in most countries, except Canada and Bolivia. Additional examples of such analysis appear on our website, www.textmap.com. Such maps are interesting to study and quite provocative, but beg the question of how meaningful the results are. Here we provide a rigorous and careful analysis of the extent to which sentiment survives the brutal process of automatic translation. Our assessment is complicated by the lack of a “gold standard” for international news sentiment. Instead, we rely on measuring the consistency of sentiment scores for given entities across different language sources.
Previous work (Godbole, Srinivasaiah, & Skiena 2007) has demonstrated that the Lydia sentiment analysis system accurately captures notions of sentiment in English. The degree to which these judgments correlate with opinions originating from related foreign-language sources will either validate or reject our translation approach to sentiment analysis. In this paper we provide: • Cross-language analysis across news streams – We demonstrate that statistically significant entity sentiment analysis can be performed using as little as ten days of newspapers for each of the eight foreign languages we studied (Arabic, Chinese, French, German, Italian, Japanese, Korean, and Spanish). • Cross-language analysis across parallel corpora – Some of the differences in observed entity sentiment across news sources reflect the effects of differing content and opinion rather than interpretation error. To isolate the effects of news source variance, we performed translation analysis of a parallel corpus of European Union law. As expected, these show greater entity frequency conservation
On Orogeny and Epeirogeny in the Study of Phanerozoic and Archean Rocks
The retention of the original, literal meanings of "orogeny" and "epeirogeny" is recommended. Orogeny is not a short-lived, chronostratigraphic feature, nor is it a synonym for rock deformation or for widespread isotopic events. The term is applicable to tectonic deformation in belts that produced chains of mountains in Phanerozoic (and late Proterozoic) rocks. Therefore, it may be less appropriate in Archean rocks of the shield areas, where broadly uniform isotopic events may be more appropriately considered epeirogenetic phenomena.
Intraoperative Assessment of Left Atrial Diverticulum and Remnant Stump after Left Atrial Appendage Epicardial Occlusion.
OBJECTIVES Epicardial left atrial appendage (LAA) closure with the use of an occluder is an emerging technique. Absence of a remnant LAA stump is a major criterion of successful obliteration. The aim of the study was to assess the early success rate of epicardial LAA closure. METHODS Fifteen patients with persistent AF and coronary artery disease underwent off-pump coronary revascularization with concomitant ablation and LAA epicardial occlusion using two types of occluders. Before incision and after appendage closure, TEE was performed to assess the LAA anatomy, the diameter of the left atrial ridge, and the remnant LAA stump after occlusion. RESULTS In 80% (12) of patients, formation of a left atrial diverticulum was observed, with the left atrial ridge forming the superior boundary. In 5 patients (33%), a minimal remnant LAA stump was found, none exceeding 1 cm (average length: 1.5 ± 2.3 mm). In all patients, blood flow in the LAA cavity distal to the occluder was absent. There was no significant difference in LAA type, average left atrial diameter, LAA orifice, LAA length, left atrial ridge, or size of occluder used between patients with and without a remnant LAA stump. Occurrence of a remnant LAA stump correlated significantly with unfavorable anatomy (LAA orifice < 20 mm and LA ridge > 5 mm; r = 0.5774, P = 0.02). CONCLUSION The early success of epicardial LAA occlusion is not dependent on LAA morphologic type or the occluder used. A minimal remnant LAA stump not exceeding 1 cm in length, without distal blood flow, was observed in one-third of the cases.
Affective Factors That Influence Chemistry Achievement (Attitude and Self-Efficacy) and the Power of These Factors to Predict Chemistry Achievement
In this research, our aim was to determine students' levels of attitude and self-efficacy towards chemistry and to examine the effects of these variables on chemistry achievement (in other words, to determine how well chemistry achievement is predicted by these variables). To this end, the research was conducted with 1000 students studying in the 1st, 2nd and 3rd grades of 10 high schools located in the city center of Mersin. To address the research problems, data were analyzed via descriptive, correlation, linear and multiple regression statistical analyses. As a result, it was determined that the 2nd-grade group had the highest attitude scores and that attitude towards the chemistry course, on its own, is a significant predictor of chemistry achievement. It was also determined that the 2nd-grade group had the highest self-efficacy scores and that self-efficacy towards the chemistry course, on its own, is a significant predictor of chemistry achievement.
Photonic analog-to-digital converters.
This paper reviews over 30 years of work on photonic analog-to-digital converters. The review is limited to systems in which the input is a radio-frequency (RF) signal in the electronic domain and the output is a digital version of that signal also in the electronic domain, and thus the review excludes photonic systems directed towards digitizing images or optical communication signals. The state of the art in electronic ADCs, basic properties of ADCs and properties of analog optical links, which are found in many photonic ADCs, are reviewed as background information for understanding photonic ADCs. Then four classes of photonic ADCs are reviewed: 1) photonic assisted ADC in which a photonic device is added to an electronic ADC to improve performance, 2) photonic sampling and electronic quantizing ADC, 3) electronic sampling and photonic quantizing ADC, and 4) photonic sampling and quantizing ADC. It is noted, however, that all 4 classes of "photonic ADC" require some electronic sampling and quantization. After reviewing all known photonic ADCs in the four classes, the review concludes with a discussion of the potential for photonic ADCs in the future.
Longitudinal melanonychia.
BACKGROUND Ungual melanoma is the most serious disease affecting the nail. The majority start with a longitudinal brown streak in the nail. OBJECTIVE To outline the different nail pigmentations, their differential diagnoses, treatment, and prognosis. METHOD Clinical and histologic evaluation of dark nail pigmentations. CONCLUSION Brown to black nail pigmentation may be due to different coloring substances of exogenous and endogenous origin. Exogenous pigmentations usually are not streaky or do not present as a stripe of even width with regular borders. Bacterial pigmentation, most commonly due to Pseudomonas aeruginosa or Proteus spp., have a greenish or grayish hue and the discoloration is often confined to the lateral edge of the nail. Subungual hematoma may result from a single heavy trauma or repeated microtrauma which often escapes notice. The latter is usually found on the medial aspect of the great toe. Although oval in shape, it commonly does not form a neat streak. Melanin pigmentation in the form of a longitudinal streak in the nail is due to a pigment-producing focus of melanocytes in the matrix. Neither the color intensity nor the age of the patient are proof of benignity or malignancy although subungual melanomas are very rare in children and malignant longitudinal melanonychia is usually wider than 5 mm. Hutchinson's melanotic whitlow, nail dystrophy, and a bleeding mass strongly suggest malignancy. Treatment is as conservative as possible in order to keep the tip of the digit; once the melanoma is completely removed, amputations have not been shown to prolong the disease-free survival time.
Female Genital Mutilation/Cutting in the United States: Updated Estimates of Women and Girls at Risk, 2012.
OBJECTIVES In 1996, the U.S. Congress passed legislation making female genital mutilation/cutting (FGM/C) illegal in the United States. CDC published the first estimates of the number of women and girls at risk for FGM/C in 1997. Since 2012, various constituencies have again raised concerns about the practice in the United States. We updated an earlier estimate of the number of women and girls in the United States who were at risk for FGM/C or its consequences. METHODS We estimated the number of women and girls who were at risk for undergoing FGM/C or its consequences in 2012 by applying country-specific prevalence of FGM/C to the estimated number of women and girls living in the United States who were born in that country or who lived with a parent born in that country. RESULTS Approximately 513,000 women and girls in the United States were at risk for FGM/C or its consequences in 2012, which was more than three times higher than the earlier estimate, based on 1990 data. The increase in the number of women and girls younger than 18 years of age at risk for FGM/C was more than four times that of previous estimates. CONCLUSION The estimated increase was wholly a result of rapid growth in the number of immigrants from FGM/C-practicing countries living in the United States and not from increases in FGM/C prevalence in those countries. Scientifically valid information regarding whether women or their daughters have actually undergone FGM/C and related information that can contribute to efforts to prevent the practice in the United States and provide needed health services to women who have undergone FGM/C are needed.
Soy food consumption does not lower LDL cholesterol in either equol or nonequol producers.
BACKGROUND Health claims link soy protein (SP) consumption, through plasma cholesterol reduction, to a decreased risk of heart disease. Soy isoflavones (ISOs), particularly in individuals who produce equol, might also contribute to lipid lowering and thus reduce SP requirements. OBJECTIVE The objective was to examine the contributions of SP, ISOs, and equol to the hypocholesterolemic effects of soy foods. DESIGN Nonsoy consumers (33 men, 58 women) with a plasma total cholesterol (TChol) concentration >5.5 mmol/L participated in a double-blind, placebo-controlled, crossover intervention trial. The subjects consumed 3 diets for 6 wk each in random order, which consisted of foods providing a daily dose of 1) 24 g SP and 70-80 mg ISOs (diet S); 2) 12 g SP, 12 g dairy protein (DP), and 70-80 mg ISOs (diet SD); and 3) 24 g DP without ISOs (diet D). Fasting plasma TChol, LDL cholesterol, HDL cholesterol, and triglycerides (TGs) were measured after each diet. RESULTS TChol was 3% lower with the S diet (-0.17 +/- 0.06 mmol/L; P < 0.05) than with the D diet, and TGs were 4% lower with both the S (-0.14 +/- 0.05 mmol/L; P < 0.05) and SD (-0.12 +/- 0.05 mmol/L; P < 0.05) diets. There were no significant effects on LDL cholesterol, HDL cholesterol, or the TChol:HDL cholesterol ratio. On the basis of urinary ISOs, 30 subjects were equol producers. Lipids were not affected significantly by equol production. CONCLUSIONS Regular consumption of foods providing 24 g SP/d from ISOs had no significant effect on plasma LDL cholesterol in mildly hypercholesterolemic subjects, regardless of equol-producing status.
A systematic comparative evaluation of biclustering techniques
Biclustering techniques are capable of simultaneously clustering rows and columns of a data matrix. These techniques became very popular for the analysis of gene expression data, since a gene can take part in multiple biological pathways, which in turn can be active only under specific experimental conditions. Several biclustering algorithms have been developed in recent years. In order to provide guidance regarding their choice, a few comparative studies were conducted and reported in the literature. In these studies, however, the performances of the methods were evaluated through external measures that have more recently been shown to have undesirable properties. Furthermore, they considered a limited number of algorithms and datasets. We conducted a broader comparative study involving seventeen algorithms, which were run on three synthetic data collections and two real data collections with a more representative number of datasets. For the experiments with synthetic data, five different experimental scenarios were studied: different levels of noise, different numbers of implanted biclusters, different levels of symmetric bicluster overlap, different levels of asymmetric bicluster overlap and different bicluster sizes, for which the results were assessed with more suitable external measures. For the experiments with real datasets, the results were assessed by gene set enrichment and clustering accuracy. We observed that each algorithm achieved satisfactory results in part of the biclustering tasks in which it was investigated. The choice of the best algorithm for a given application thus depends on the task at hand and the types of patterns that one wants to detect.
Regional Perspectives and Domestic Imperatives
To understand China's Taiwan policy, one must understand Beijing's current perspective on the global political and economic situation. According to the official Chinese viewpoint, since the end of the Cold War, the world has moved towards multi‐polarity (duojihua) and relations among states have become more relaxed. Whether it reflects reality or not, this belief in multi‐polarity is a prime determinant of China's foreign policy. This essay will explore the conceptual dimensions of China's multi‐polar worldview, and the implications for China's foreign policy in general and its Taiwan policy in particular.
Bonding of monocarboxylic acids, monophenols and nonpolar compounds onto goethite.
Adsorption of a diverse set of chemicals onto goethite was evaluated by column chromatography. The pH of the effluents was 4.7-5.2. Van der Waals forces dominate the exothermic adsorption of 8 nonpolar compounds (e.g., PAHs and chlorobenzenes). H-bonding is responsible for the adsorption of 32 monocarboxylic acids (i.e., benzoic acids, naphthoic acids and acidic pharmaceuticals) and their adsorption tends to be endothermic. Steric effects significantly decreased the bonding of monocarboxylic acids with ortho-substitutions. Exothermic adsorption of 10 monophenols is controlled by weak H-bonding. Bonding of these 50 solutes onto goethite is totally reversible. In contrast, inner-sphere complexation of phthalic acid and chlortetracycline with goethite occurred according to their low desorption ratio (1.1%-54.4%). Polyparameter linear free energy relationship (PP-LFER) models were established to provide acceptable fitting results of the goethite-solute distribution coefficients (RMSE = 0.32 and 0.30 at 25 °C and 5 °C, respectively). It is worth noting that steric effects must be considered to get a better prediction for compounds with ortho-substitutions.
Weighted-to-Spherically-Uniform Quality Evaluation for Omnidirectional Video
Omnidirectional video records a scene in all directions around one central position. It allows users to select viewing content freely in all directions. Assuming that viewing directions are uniformly distributed, the isotropic observation space can be regarded as a sphere. Omnidirectional video is commonly represented by different projection formats with one or multiple planes. To measure objective quality of omnidirectional video in observation space more accurately, a weighted-to-spherically-uniform quality evaluation method is proposed in this letter. The error of each pixel on projection planes is multiplied by a weight to ensure the equivalent spherical area in observation space, in which pixels with equal mapped spherical area have the same influence on distortion measurement. Our method makes the quality evaluation results more accurate and reliable since it avoids error propagation caused by the conversion from resampling representation space to observation space. As an example of such quality evaluation method, weighted-to-spherically-uniform peak signal-to-noise ratio is described and its performance is experimentally analyzed.
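To make the weighting concrete, the sketch below computes a weighted-to-spherically-uniform PSNR for the common equirectangular case, where each pixel's squared error is weighted by the spherical area its sample covers (proportional to the cosine of latitude). The function name, the grayscale-only input and the equirectangular weight are illustrative assumptions, not the letter's released implementation.

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-to-spherically-uniform PSNR for a grayscale equirectangular frame.

    Each pixel's squared error is weighted in proportion to the spherical area
    it maps to, so rows near the poles (oversampled on the projection plane)
    contribute less than rows near the equator.
    """
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    h, w = ref.shape
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2   # latitude of each row (radians)
    weights = np.tile(np.cos(lat)[:, None], (1, w))
    weights /= weights.sum()                              # normalise: weights sum to 1
    wmse = np.sum(weights * (ref - dist) ** 2)            # weighted mean squared error
    return 10.0 * np.log10(max_val ** 2 / wmse)
```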
Probe reconstruction for holographic X-ray imaging
In X-ray holographic near-field imaging the resolution and image quality depend sensitively on the beam. Artifacts are often encountered due to the strong focusing required to reach high resolution. Here, two schemes for reconstructing the complex-valued and extended wavefront of X-ray nano-probes, primarily in the planes relevant for imaging (i.e. focus, sample and detection plane), are presented and compared. Firstly, near-field ptychography is used, based on scanning a test pattern laterally as well as longitudinally along the optical axis. Secondly, any test pattern is dispensed with and the wavefront reconstructed only from data recorded for different longitudinal translations of the detector. For this purpose, an optimized multi-plane projection algorithm is presented, which can cope with the numerically very challenging setting of a divergent wavefront emanating from a hard X-ray nanoprobe. The results of both schemes are in very good agreement. The probe retrieval can be used as a tool for optics alignment, in particular at X-ray nanoprobe beamlines. Combining probe retrieval and object reconstruction is also shown to improve the image quality of holographic near-field imaging.
Becoming syntactic.
Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.
A synchronous DRAM controller for an H.264/AVC encoder
In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.
Long-Term Trends in the Public Perception of Artificial Intelligence
Analyses of text corpora over time can reveal trends in beliefs, interest, and sentiment about a topic. We focus on views expressed about artificial intelligence (AI) in the New York Times over a 30-year period. General interest, awareness, and discussion about AI has waxed and waned since the field was founded in 1956. We present a set of measures that captures levels of engagement, measures of pessimism and optimism, the prevalence of specific hopes and concerns, and topics that are linked to discussions about AI over decades. We find that discussion of AI has increased sharply since 2009, and that these discussions have been consistently more optimistic than pessimistic. However, when we examine specific concerns, we find that worries of loss of control of AI, ethical concerns for AI, and the negative impact of AI on work have grown in recent years. We also find that hopes for AI in healthcare and education have increased over time.
The return of the chatbots
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what’s new, and what’s different this time? And is there an opportunity here to improve how our industry does technology transfer? 1 The year of interacting conversationally This year’s most hyped language technology is the intelligent virtual assistant. Whether you call these things digital assistants, conversational interfaces or just chatbots, the basic concept is the same: achieve some result by conversing with a machine in a dialogic fashion, using natural language. Most visible at the forefront of the technology, we have the voice-driven digital assistants from the Big Four: Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google’s new Assistant. Following up behind, we have many thousands of text-based chatbots that target specific functionalities, enabled by tools that let you build bots for a number of widely used messaging platforms. Many see this technology as heralding a revolution in how we interact with devices, websites and apps. The MIT Technology Review lists conversational interfaces as one of the ten breakthrough technologies of 2016. In January of this year, Uber’s Chris Messina wrote an influential blog piece declaring 2016 the year of conversational commerce. In March, Microsoft CEO Satya Nadella announced that chatbots were the next big thing, on a par with the graphical user interface, the web browser 1 https://www.technologyreview.com/s/600766/10-breakthrough-technologies2016-conversational-interfaces 2 https://medium.com/chris-messina/2016-will-be-the-year-of-conversationalcommerce-1586e85e3991
Combining multispectral aerial imagery and digital surface models to extract urban buildings
This paper presents an automated classification of buildings in Coleraine, Northern Ireland. The classification was generated using very high spatial resolution data (10 cm) from a Digital Mapping Camera (DMC) for March 2009. The visible to near infrared (VNIR) bands of the DMC enabled a supervised classification to be performed to extract buildings from vegetation. A Digital Surface Model (DSM) was also created from the image to differentiate between buildings and other land classes with similar spectral profiles, such as roads. The supervised classification had the lowest classification accuracy (50%) while the DSM had an accuracy of 81%. The combination of the DSM and the supervised classification achieved an overall classification accuracy of 95%. Two spatial metrics (percentage of the landscape and number of patches) were also used to test the level of agreement between the classification and digitised building data. The results suggest that fine resolution multispectral aerial imagery can automatically detect buildings to a very high level of accuracy. Current space borne sensors, such as IKONOS and QuickBird, lag behind airborne sensors with VNIR bands provided at a much coarser spatial resolution (4m and 2.4m respectively). Techniques must be developed from current airborne sensors that can be applied to new space borne sensors in the future. The ability to generate DSMs from high resolution aerial imagery will afford new insights into the three-dimensional aspects of urban areas which will in turn inform future urban planning.
Aspect extraction in sentiment analysis: comparative analysis and survey
Sentiment analysis (SA) has become one of the most active and progressively popular areas in information retrieval and text mining due to the expansion of the World Wide Web (WWW). SA deals with the computational treatment and classification of users' sentiments, opinions and emotions hidden within text. Aspect extraction is the most vital and extensively explored phase of SA for carrying out the classification of sentiments in a precise manner. During the last decade, an enormous amount of research has focused on identifying and extracting aspects. Therefore, in this survey, a comprehensive overview of different aspect extraction techniques and approaches is provided. These techniques are categorized in accordance with the adopted approach. Beyond being a traditional survey, a comprehensive comparative analysis is conducted among different approaches to aspect extraction, which not only elaborates the performance of each technique but also guides the reader in comparing its accuracy with other state-of-the-art and recent approaches.
Clinical profile of homozygous JAK2 617V>F mutation in patients with polycythemia vera or essential thrombocythemia.
JAK2 617V>F mutation occurs in a homozygous state in 25% to 30% of patients with polycythemia vera (PV) and 2% to 4% with essential thrombocythemia (ET). Whether homozygosity associates with distinct clinical phenotypes is still under debate. This retrospective multicenter study considered 118 JAK2 617V>F homozygous patients (104 PV, 14 ET) whose clinical characteristics were compared with those of 587 heterozygous and 257 wild-type patients. Irrespective of their clinical diagnosis, homozygous patients were older, displayed a higher leukocyte count and hematocrit value at diagnosis, and presented larger spleen volume. Aquagenic pruritus was significantly more common among homozygous PV patients. JAK2 617V>F homozygosity associated with more frequent evolution into secondary myelofibrosis in both PV and ET. After adjustment for sex, age, leukocyte count, and previous thrombosis in a multivariate analysis, homozygous ET patients displayed a significantly higher risk of cardiovascular events (hazard ratio [HR] 3.97, 95% confidence interval [CI] 1.34-11.7; P = .013) than wild-type (HR = 1.0) or heterozygous patients (HR = 1.49). No significant association of JAK2 617V>F homozygosity with thrombosis risk was observed in PV. Finally, JAK2 617V>F homozygous patients were more likely to receive chemotherapy for control of disease. We conclude that JAK2 617V>F homozygosity identifies PV or ET patients with a more symptomatic myeloproliferative disorder and is associated with a higher risk of major cardiovascular events in patients with ET.
MMP-2, TIMP-1, E-cadherin, and beta-catenin expression in endometrial serous carcinoma compared with low-grade endometrial endometrioid carcinoma and proliferative endometrium.
OBJECTIVE To investigate the expression of MMP-2, TIMP-1, E-cadherin and beta-catenin in endometrial serous carcinoma (ESC), low-grade endometrial endometrioid carcinoma (EEC), and proliferative endometrium. METHODS We performed an immunohistochemical study on 14 cases of ESC, 15 cases of low-grade EEC, and 10 cases of proliferative endometrium. RESULTS Compared with low-grade EEC, ESC showed significantly increased MMP-2 and TIMP-1 expression, as well as decreased membranous beta-catenin staining. E-cadherin expression was significantly lower in ESC and EEC as compared with proliferative endometrium. CONCLUSIONS We suggest that MMP-2 and TIMP-1 expression and loss of beta-catenin have a role in the pathogenesis and progression of ESC. Decreased E-cadherin may have an important role in the development of both ESC and EEC. Furthermore, the dissimilarities in MMP-2, TIMP-1, E-cadherin and beta-catenin expressions in ESC compared with EEC may be responsible, along with other factors, for their different biological behavior.
A Fast Two-Dimensional Median Filtering Algorithm
Abstract-We present a fast algorithm for two-dimensional median filtering. It is based on storing and updating the gray level histogram of the picture elements in the window. The algorithm is much faster than conventional sorting methods. For a window size of m × n, the computer time required is O(n).
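A compact sketch of the histogram-updating idea for 8-bit grayscale images follows; the variable names and edge padding are assumptions, and for brevity the median is re-scanned from the 256-bin histogram at every step rather than tracked with the running count the full algorithm maintains.

```python
import numpy as np

def _median_from_hist(hist, rank):
    """Return the smallest gray level whose cumulative count reaches `rank`."""
    cum = 0
    for g in range(256):
        cum += hist[g]
        if cum >= rank:
            return g

def median_filter_histogram(img, m, n):
    """Running median filter for an 8-bit grayscale image using a sliding
    gray-level histogram: when the m x n window moves one column to the right,
    only the leaving and entering columns update the histogram."""
    h, w = img.shape
    rm, rn = m // 2, n // 2
    pad = np.pad(img, ((rm, rm), (rn, rn)), mode='edge')
    out = np.empty_like(img)
    rank = m * n // 2 + 1                       # position of the median in the window
    for i in range(h):
        hist = np.zeros(256, dtype=int)
        for v in pad[i:i + m, 0:n].ravel():     # build histogram for the first window
            hist[v] += 1
        out[i, 0] = _median_from_hist(hist, rank)
        for j in range(1, w):
            for v in pad[i:i + m, j - 1]:       # column leaving the window
                hist[v] -= 1
            for v in pad[i:i + m, j + n - 1]:   # column entering the window
                hist[v] += 1
            out[i, j] = _median_from_hist(hist, rank)
    return out
```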
A self-deployable origami structure with locking mechanism induced by buckling effect
One of the major problems in utilizing origami structures is ensuring variable stiffness; a deployable structure needs to become stiff or flexible according to the requirements of its use in an application. In this study, we present a self-deploying tubular origami mechanism that switches between two distinctive states: small and flexible at its normal state and rigid and stiffened at a locked state. By embedding compact torsional SMA actuators into the mechanism in a novel way through stitching, the process of deploying from the normal to the locked state proceeds in a simple and low-profile manner. With global heating, the torsional SMA wires activate a buckling effect that draws a radical change of folding line from one diagonal to another in every unit tile of the tube, creating axial stiffness. The activated structure, which weighs only 2.9 g, can endure a load of 2.7 kg or more. Additionally, since it does not require bulky actuators, this origami structure can be highly mobile and small in size. This novel origami mechanism is expected to be useful in a wide variety of applications, such as aerospace equipment, mobile architecture, and medical devices, especially those used in minimally invasive surgery (MIS).
Deep learning for network analysis: Problems, approaches and challenges
The analysis of social, communication and information networks for identifying patterns, evolutionary characteristics and anomalies is a key problem for the military, for instance in the Intelligence community. Current techniques do not have the ability to discern unusual features or patterns that are not a priori known. We investigate the use of deep learning for network analysis. Over the last few years, deep learning has had unprecedented success in areas such as image classification, speech recognition, etc. However, research on the use of deep learning to network or graph analysis is limited. We present three preliminary techniques that we have developed as part of the ARL Network Science CTA program: (a) unsupervised classification using a very highly trained image recognizer, namely Caffe; (b) supervised classification using a variant of convolutional neural networks on node features such as degree and assortativity; and (c) a framework called node2vec for learning representations of nodes in a network using a mapping to natural language processing.
Wireless Energy Transfer Using Magnetic Resonance
In 1899, Nikola Tesla, who had devised a type of resonant transformer called the Tesla coil, achieved a major breakthrough in his work by transmitting 100 million volts of electric power wirelessly over a distance of 26 miles to light up a bank of 200 light bulbs and run one electric motor. Tesla claimed to have achieved 95% efficiency, but the technology had to be shelved because the effects of transmitting such high voltages in electric arcs would have been disastrous to humans and electrical equipment in the vicinity. This technology has been languishing in obscurity for a number of years, but the advent of portable devices such as mobiles, laptops, smartphones, MP3 players, etc warrants another look at the technology. We propose the use of a new technology, based on strongly coupled magnetic resonance. It consists of a transmitter, a current carrying copper coil, which acts as an electromagnetic resonator and a receiver, another copper coil of similar dimensions to which the device to be powered is attached. The transmitter emits a non-radiative magnetic field resonating at MHz frequencies, and the receiving unit resonates in that field. The resonant nature of the process ensures a strong interaction between the sending and receiving unit, while interaction with rest of the environment is weak.
Blechnum Orientale Linn - a fern with potential as antioxidant, anticancer and antibacterial agent
BACKGROUND Blechnum orientale Linn. (Blechnaceae) is used ethnomedicinally for the treatment of various skin diseases, stomach pain, urinary bladder complaints and sterilization of women. The aim of the study was to evaluate antioxidant, anticancer and antibacterial activity of five solvent fractions obtained from the methanol extract of the leaves of Blechnum orientale Linn. METHODS Five solvent fractions were obtained from the methanol extract of B. orientale through successive partitioning with petroleum ether, chloroform, ethyl acetate, butanol and water. Total phenolic content was assessed using Folin-Ciocalteu's method. The antioxidant activity was determined by measuring the scavenging activity of DPPH radicals. Cytotoxic activity was tested against four cancer cell lines and a non-malignant cell using MTT assay. Antibacterial activity was assessed using the disc diffusion and broth microdilution assays. Standard phytochemical screening tests for saponins, tannins, terpenoids, flavonoids and alkaloids were also conducted. RESULTS The ethyl acetate, butanol and water fractions possessed strong radical scavenging activity (IC50 8.6-13.0 microg/ml) and cytotoxic activity towards human colon cancer cell HT-29 (IC50 27.5-42.8 microg/ml). The three extracts were also effective against all Gram-positive bacteria tested: Bacillus cereus, Micrococcus luteus, methicillin-susceptible Staphylococcus aureus (MSSA), methicillin-resistant Staphylococcus aureus (MRSA) and Staphylococcus epidermidis (minimum inhibitory concentration MIC 15.6-250 microg/ml; minimum bactericidal concentration MBC 15.6-250 microg/ml). Phytochemical analysis revealed the presence of flavonoids, terpenoids and tannins. Ethyl acetate and butanol fractions showed highest total phenolic content (675-804 mg gallic acid equivalent/g). CONCLUSIONS The results indicate that this fern is a potential candidate to be used as an antioxidant agent, for colon cancer therapy and for treatment of MRSA infections and other MSSA/Gram-positive bacterial infectious diseases.
Genome-wide association study of toxic metals and trace elements reveals novel associations
The accumulation of toxic metals in the human body is influenced by exposure and mechanisms involved in metabolism, some of which may be under genetic control. This is the first genome-wide association study to investigate variants associated with whole blood levels of a range of toxic metals. Eleven toxic metals and trace elements (aluminium, cadmium, cobalt, copper, chromium, mercury, manganese, molybdenum, nickel, lead and zinc) were assayed in a cohort of 949 individuals using mass spectrometry. DNA samples were genotyped on the Infinium Omni Express bead microarray and imputed up to reference panels from the 1000 Genomes Project. Analyses revealed two regions associated with manganese level at genome-wide significance, mapping to 4q24 and 1q41. The lead single nucleotide polymorphism (SNP) in the 4q24 locus was rs13107325 (P-value = 5.1 × 10(-11), β = -0.77), located in an exon of SLC39A8, which encodes a protein involved in manganese and zinc transport. The lead SNP in the 1q41 locus is rs1776029 (P-value = 2.2 × 10(-14), β = -0.46). The SNP lies within the intronic region of SLC30A10, another transporter protein. Among other metals, the loci 6q14.1 and 3q26.32 were associated with cadmium and mercury levels (P = 1.4 × 10(-10), β = -1.2 and P = 1.8 × 10(-9), β = -1.8, respectively). Whole blood measurements of toxic metals are associated with genetic variants in metal transporter genes and others. This is relevant in inferring metabolic pathways of metals and identifying subsets of individuals who may be more susceptible to metal toxicity.
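As an illustration of the kind of single-SNP test behind the reported betas and P-values, the sketch below regresses a (suitably transformed) blood metal level on imputed allele dosage with optional covariates. The function name and data layout are assumptions, not the study's actual pipeline, which would also involve imputation-quality filtering and the usual genome-wide significance threshold of roughly P < 5 × 10(-8).

```python
import numpy as np
from scipy import stats

def snp_association(phenotype, dosage, covariates=None):
    """Single-SNP linear regression: returns the dosage effect size (beta)
    and its two-sided P-value, adjusting for any supplied covariates."""
    y = np.asarray(phenotype, dtype=float)
    d = np.asarray(dosage, dtype=float)
    cols = [np.ones(len(y)), d]
    if covariates is not None:
        cols.extend(np.asarray(covariates, dtype=float).T)
    X = np.column_stack(cols)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                    # residual variance estimate
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t = beta[1] / se
    p = 2 * stats.t.sf(abs(t), dof)
    return beta[1], p
```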
A Reinforcement Learning Based Approach for Automated Lane Change Maneuvers
Lane change is a crucial vehicle maneuver which needs coordination with surrounding vehicles. Automated lane changing functions built on rule-based models may perform well under pre-defined operating conditions, but they may be prone to failure when unexpected situations are encountered. In our study, we proposed a Reinforcement Learning based approach to train the vehicle agent to learn an automated lane change behavior such that it can intelligently make a lane change under diverse and even unforeseen scenarios. Particularly, we treated both state space and action space as continuous, and designed a Q-function approximator that has a closed-form greedy policy, which contributes to the computation efficiency of our deep Q-learning algorithm. Extensive simulations are conducted for training the algorithm, and the results illustrate that the Reinforcement Learning based vehicle agent is capable of learning a smooth and efficient driving policy for lane change maneuvers.
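One standard way to obtain a Q-function with a closed-form greedy policy over a continuous action is a quadratic, advantage-style parameterisation whose maximiser is available analytically. The sketch below uses simple linear functions of the state purely for illustration; it conveys the general idea and is not the authors' actual approximator or training setup.

```python
import numpy as np

class QuadraticQ:
    """Q(s, a) = V(s) - p(s) * (a - mu(s))^2 for a scalar continuous action,
    so argmax_a Q(s, a) = mu(s) with no search over actions."""

    def __init__(self, state_dim, lr=1e-3, gamma=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.w_v = rng.normal(scale=0.1, size=state_dim)   # value head
        self.w_mu = rng.normal(scale=0.1, size=state_dim)  # greedy-action head
        self.w_p = rng.normal(scale=0.1, size=state_dim)   # curvature head
        self.lr, self.gamma = lr, gamma

    def _parts(self, s):
        v, mu, z = self.w_v @ s, self.w_mu @ s, self.w_p @ s
        return v, mu, np.log1p(np.exp(z)), z                # softplus keeps curvature > 0

    def greedy_action(self, s):
        return self.w_mu @ s                                # the closed-form maximiser

    def td_update(self, s, a, r, s_next, done):
        v, mu, p, z = self._parts(s)
        # Under this parameterisation, max_a' Q(s', a') is simply V(s').
        target = r if done else r + self.gamma * (self.w_v @ s_next)
        delta = (v - p * (a - mu) ** 2) - target            # one-step TD error
        sig = 1.0 / (1.0 + np.exp(-z))                      # derivative of softplus
        self.w_v -= self.lr * delta * s
        self.w_mu -= self.lr * delta * (2.0 * p * (a - mu)) * s
        self.w_p -= self.lr * delta * (-(a - mu) ** 2) * sig * s
        return delta
```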
Progress in Mechanizing Sesame in the US Through Breeding
OVERVIEW Sesame (Sesamum indicum L. Pedaliaceae) is one of the oldest crops known to humans. There are archeological remnants of sesame dating to 5,500 BP in the Harappa valley in the Indian subcontinent (Bedigian and Harlan 1986). Assyrian tablets from 4,300 BP in a British museum describe how before the gods battled to restore order to the universe, they ate bread and drank sesame wine together (Weiss 1971). Most people remember the words “Open sesame” from Ali Baba and the 40 Thieves used to open a cave full of riches. It is similar to the sesame capsules in that their opening produced great riches. Sesame was a major oilseed in the ancient world because of its ease of extraction, its great stability, and its drought resistance. In India today, almost as in olden days, a farmer can take his crop to an expeller that consists of grinding mortar and pestle stones driven by a bullock. He can place the oil in a vessel, take it back to his home and have cooking oil for a year without the oil going rancid (S.S. Rajan, pers. commun.). The origins of sesame are still debated. Kobayashi (1986) suggested that sesame originated in Africa, but Bedigian et al. (1985) concluded that sesame originated on the Indian subcontinent. Ashri (1998) felt that settling the debate on the origin of sesame will require more detailed cytogenetic and suitable DNA comparisons. From whatever origin, sesame spread into Africa, the Mediterranean and into the Far East. In the Middle East a tremendous amount of sesame is consumed as tahini (sesame butter or sesame paste). Tahini mixed with ground chickpea kernels becomes hummus. In China, Japan, and Korea, sesame is used widely as a cooking oil, and it is consumed for its medicinal qualities. In these countries, grandmothers advise, “Eat sesame for health.” (M. Namiki, T. Osawa, C.W. Kang, pers. commun.). In recent years the Japanese have been identifying and quantifying the medicinal benefits of sesame. In vitro and animal studies have verified several antioxidant properties (Namiki 1995), and initial unpublished results in human studies further verify that stories passed down through generations have merit (M. Namiki, pers. commun.). In the West, sesame is primarily used in the confectionary trade in rolls and crackers. Throughout the world, sesame seeds or paste are mixed into sweets, esp. halva. Sesame oil use in the cosmetic industry continues to expand. In India sesame is used in many religious ceremonies (Joshi 1961). There are many publications that provide an overview of sesame and breeding strategies, and most of that information will not be repeated in this paper. D.G. Langham and Rodriguez (1945) provided much of the initial thinking on mechanization. Weiss (1971, 1983, 2000) provided a good introduction to sesame; the latter two publications have more recent information, but the 1971 publication is longer and has more detail that is still pertinent. Ashri (1998) summarized most of the recent breeding and genetics work and includes information from the discussions at many FAO, IAEA, and IDRC sesame conferences. Beech (1985) and Beech and Imrie (2001) have been working on introducing sesame in Australia and their discussions on his ideal plant type are very relevant to cultivar development in general whether it be for manual or mechanized harvest. Mazzani (1999) reviewed many of the aspects of the Venezuelan sesame development program. 
The present paper will provide some updated information and will primarily address the characters that are important in the machine/plant interface as elucidated in the extensive breeding program of Sesaco Corporation (San Antonio, Texas).
On the Analysis and Detection of Mobile Botnet Applications
The mobile botnet phenomenon is gaining popularity among malware writers seeking to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and can launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intentions. Initially, we identify critical features by observing the code behavior of a few known malware binaries having C&C features. Then, we compare the identified features with those of the malicious and benign applications of the Drebin dataset. The comparative analysis shows that 35% of the malicious applications in the Drebin dataset qualify as botnets. Upon closer examination, 90% of these potential botnets are confirmed as botnets. Similarly, in a comparative analysis against benign applications having C&C features, DeDroid achieves adequate detection accuracy. In addition, DeDroid achieves high accuracy with a negligible false positive rate when making decisions for state-of-the-art malicious applications.
The Computation of All 4R Serial Spherical Wrists With an Isotropic Architecture
A spherical wrist of the serial type is said to be isotropic if it can attain a posture whereby the singular values of its Jacobian matrix are all identical and nonzero. What isotropy brings about is robustness to manufacturing, assembly, and measurement errors, thereby guaranteeing a maximum orientation accuracy. In this paper we investigate the existence of redundant isotropic architectures, which should add to the dexterity of the wrist under design by virtue of its extra degree of freedom. The problem formulation leads to a system of eight quadratic equations with eight unknowns. The Bezout number of this system is thus 2^8 = 256, its BKK bound being 192. However, the actual number of solutions is shown to be 32. We list all solutions of the foregoing algebraic problem. All these solutions are real, but distinct solutions do not necessarily lead to distinct manipulators. Upon discarding those algebraic solutions that yield no new wrists, we end up with exactly eight distinct architectures, the eight corresponding manipulators being displayed at their isotropic posture.
3D Texture Recognition Using Bidirectional Feature Histograms
Textured surfaces are an inherent constituent of the natural surroundings, therefore efficient real-world applications of computer vision algorithms require precise surface descriptors. Often textured surfaces present not only variations of color or reflectance, but also local height variations. This type of surface is referred to as a 3D texture. As the lighting and viewing conditions are varied, effects such as shadowing, foreshortening and occlusions, give rise to significant changes in texture appearance. Accounting for the variation of texture appearance due to changes in imaging parameters is a key issue in developing accurate 3D texture models. The bidirectional texture function (BTF) is observed image texture as a function of viewing and illumination directions. In this work, we construct a BTF-based surface model which captures the variation of the underlying statistical distribution of local structural image features, as the viewing and illumination conditions are changed. This 3D texture representation is called the bidirectional feature histogram (BFH). Based on the BFH, we design a 3D texture recognition method which employs the BFH as the surface model, and classifies surfaces based on a single novel texture image of unknown imaging parameters. Also, we develop a computational method for quantitatively evaluating the relative significance of texture images within the BTF. The performance of our methods is evaluated by employing over 6200 texture images corresponding to 40 real-world surface samples from the CUReT (Columbia-Utrecht reflectance and texture) database. Our experiments produce excellent classification results, which validate the strong descriptive properties of the BFH as a 3D texture representation.
Prospective study of community needlestick injuries.
Fifty three children were referred following community needlestick injuries, August 1995 to September 2003. Twenty five attended for serology six months later. None were positive for HIV, or hepatitis B or C. Routine follow up after community needlestick injury is unnecessary. HIV post-exposure prophylaxis should only be considered in high risk children.
Creation of a highly detailed, dynamic, global model and map of science
The majority of the effort in metrics research has addressed research evaluation. Far less research has been done to address the unique problems of research planning. Models and maps of science that can address the detailed problems associated with research planning are needed. This article reports on the creation of an article-level model and map of science covering 16 years and nearly 20 million articles using co-citation-based techniques. The map is then used to define discipline-like structures consisting of natural groupings of articles and clusters of articles. This combination of detail and high-level structure can be used to address planning-related problems such as identification of emerging topics, and the identification of which areas of science and technology are innovative and which are simply persisting. In addition to presenting the model and map, several process improvements that result in higher accuracy structures are detailed, including a bibliographic coupling approach for assigning current papers to co-citation clusters, and a sequentially hybrid approach to producing visual maps from models. Introduction The majority of the effort in metrics (sciento-, biblio-, infor-, alt-) studies has been aimed at research evaluation (B. R. Martin, Nightingale, & Yegros-Yegros, 2012). Examples of evaluation-related topics include impact factors, the h-index and other related indices, university rankings and national level science indicators. The 40+ year history of the use of document-based indicators for research evaluation includes publication of handbooks (cf., Moed, Glänzel, & Schmoch, 2004), the introduction of new journals (e.g., Scientometrics, Journal of Informetrics) and several annual or biannual conferences (e.g., ISSI, Collnet, STI/ENID) specifically aimed at reporting on these activities. The evaluation of research using scientific and technical documents is a well-established area of research. Far less effort in metrics research has been aimed at the unique problems of research planning. Planning-related questions (Börner, Boyack, Milojević, & Morris, 2012) that are asked by funders, administrators and researchers are different from evaluation-related questions. As such, they require different models than those that are used for evaluation. For example, planning requires a model that can predict emerging topics in science and technology. Funders need models that can help them identify the most innovative and promising proposals and researchers. Administrators, particularly those in industry, need models to help them best allocate their internal research funds, including knowing which existing areas to cut. To enable detailed planning, document-based models of science and technology need to be highly granular, and while based on retrospective data, must be robust enough to enable forecasting. Overall, the technical requirements of a model that can be used for planning are unique. Research and development of such models is an under-developed area in metrics research. To that end, this article reports on a co-citation-based model and map of science comprised of nearly 20 million articles over 16 years that has the potential to be used to answer planning-related questions.
Although the model is similar in concept to one that has been previously reported (Klavans & Boyack, 2011), it differs in several significant aspects: it uses an improved current paper assignment process, it uses an improved map layout process, and it has been used to create article-level discipline-like structures to provide context for its detailed structure. In the balance of the article, we first review related work to provide context for the work reported here. We then detail the experiments that led to the improvements in our map creation process. This is followed by introduction and characterization of the model and map, along with a description of how the map was divided into discipline-like groupings. The paper concludes with a brief discussion of how the map will be used in the future for a variety of analyses. Background Science mapping, when reduced to its most basic components, is a combination of classification and visualization. We assume there is a structure to science, and then we seek to create a representation of that structure by partitioning sets of documents (or journals, authors, grants, etc.) into different groups. This act of partitioning is the classification part of science mapping, and typically takes the majority of the effort. The resulting classification system, along with some representation of the relationships between the partitions, can be thought of as a model of science inasmuch as it specifies structure. The visualization part of science mapping uses the classification and relationships as input, and creates a visual representation (or map) of that model as output. Mapping of scientific structure using data on scientific publications began not long after the introduction of ISI’s citation indexes in the 1950s. Since then, science mapping has been done at a variety of scales and with a variety of data types. Science mapping studies have been roughly evenly split between document, journal, and author-based maps. Both text-based and citationbased methods have been used. Many of these different types of studies have been reviewed at intervals in the past (Börner, Chen, & Boyack, 2003; Morris & Martens, 2008; White & McCain, 1997). Each type of study is aimed at answering certain types of questions. For example, authorbased maps are most often used to characterize the major topics within an area of science, and to show the key researchers in those topic areas. Journal-based models and maps are often used to characterize discipline-level structures in science. Overlay maps based on journals can be used to answer high level policy questions (Rafols, Porter, & Leydesdorff, 2010). However, more detailed questions, such as questions related to planning, require the use of document-level models and maps of science. The balance of this section thus focuses on document-level models and maps. When it comes to mapping of document sets, most studies have been done using local datasets. The term ‘local’ is used here to denote a small set of topics or a small subset of the whole of science. 
While these local studies have successfully been used to improve mapping techniques, and to provide detailed information about the areas they study, we prefer global mapping because of the increased context and accuracy that are enabled by mapping of all of science (Klavans & Boyack, 2011). The context for work presented here lies in the efforts undertaken since the 1970s to map all of science at the document level using citation-based techniques. The first map of worldwide science based on documents was created by Griffith, Small, Stonehill & Dey (1974). Their map, based on co-citation analysis, contained 1,310 highly cited references in 115 clusters, showing the most highly cited areas in biomedicine, physics, and chemistry. Henry Small continued generating document-level maps using co-citation analysis (Small, Sweeney, & Greenlee, 1985), typically using thresholds based on fractional citation counting that ended up keeping roughly (but not strictly) the top 1% of highly cited references by discipline. The mapping process and software created by Small at the Institute for Scientific Information (ISI) evolved to generate hierarchically nested maps with four levels. Small (1999) presents a four level map based on nearly 130,000 highly cited references from papers published in 1995, which contained nearly 19,000 clusters at its lowest level. At roughly the same time, the Center for Research Planning (CRP) was creating similar maps for the private sector using similar thresholds and methods (Franklin & Johnston, 1988). One major difference is that CRP’s maps only used one level of clustering rather than multiple levels. The next major step in mapping all of science at the document level took place in the mid-2000s when Klavans & Boyack (2006) created co-citation models of over 700,000 reference papers and bibliographic coupling models of over 700,000 current papers from the 2002 file-year of the combined Science and Social Science Citation Indexes. Later, Boyack (2009) used bibliographic coupling to create a model and map of nearly 1,000,000 documents in 117,000 clusters from the 2003 citation indexes. Through 2004, the citation indexes from ISI were the only comprehensive data source that could be used for such maps. The introduction of the Scopus database in late 2004 provided another data source that could be used for comprehensive models and maps of science. Klavans & Boyack (2010) used Scopus data from 2007 to create a co-citation model of science comprised of over 2,000,000 reference papers assigned to 84,000 clusters. Over 5,600,000 citing papers from 2003-2007 were assigned to these co-citation clusters based on reference patterns. All of the models and maps mentioned to this point have been static maps – that is they were all created using data from a single year, and were snapshot pictures of science at a single point in time. It is only recently that researchers have created maps of all of science that are longitudinal in nature. In the first of these, Klavans & Boyack (2011) extended their co-citation mapping approach by linking together nine annual models of science to generate a nine-year global map of science comprised of 10,360,000 papers from 2000-2008. The clusters in this model have been found
Automatic game tuning for strategic diversity
Finding ideal game parameters is a common problem that game designers usually solve by manually tweaking them. The aim is to ensure the desired gameplay outcomes for a specific game, a tedious process which could be alleviated through the use of Artificial Intelligence: automatic game tuning. This paper presents an example of this process and introduces the concept of simulation-based fitness evaluation focused on strategic diversity. A simple but effective Random Mutation Hill Climber algorithm is used to evolve a Zelda-inspired game, by ensuring that agents using distinct heuristics are capable of achieving similar degrees of fitness. Two versions of the same game are presented to human players and their gameplay data is analyzed to identify whether they indeed find slightly more varied paths to the goal in the game evolved to be the more strategically diverse. Although the evolutionary process yields promising results, the human trials do not show a statistically significant difference between the two variants.
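For concreteness, a minimal Random Mutation Hill Climber of the kind described might look as follows; the mutate and fitness callbacks, which in the paper would run game simulations with agents using distinct heuristics, are illustrative stand-ins.

```python
import random

def rmhc(initial_params, mutate, fitness, iterations=1000):
    """Random Mutation Hill Climber: mutate a single copy of the game
    parameters and keep the mutant only if its simulation-based fitness
    is at least as good as the current best."""
    best, best_fit = initial_params, fitness(initial_params)
    for _ in range(iterations):
        candidate = mutate(best)
        cand_fit = fitness(candidate)
        if cand_fit >= best_fit:
            best, best_fit = candidate, cand_fit
    return best, best_fit

def mutate(params):
    """Perturb one randomly chosen game parameter."""
    out = list(params)
    i = random.randrange(len(out))
    out[i] += random.uniform(-0.1, 0.1)
    return out

def fitness(params):
    """Stand-in for strategic diversity: a real fitness would play the game
    with several distinct heuristics and reward similar success rates."""
    return -abs(sum(params))

if __name__ == "__main__":
    print(rmhc([0.5, -0.2, 0.3], mutate, fitness, iterations=200))
```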
A comparison of microbial growth in alfaxalone, propofol and thiopental.
OBJECTIVES To compare the growth of Staphylococcus aureus and Escherichia coli in alfaxalone with that in propofol and thiopental and to evaluate contaminant microbial growth in these agents under two different conditions of storage and handling. METHODS Known quanta of S aureus and E coli were inoculated into separate 5 ml samples of propofol, thiopental and alfaxalone. Quantitative bacterial analysis was performed at intervals over a 14 day period. Commercial preparations of propofol, thiopental and alfaxalone were stored and handled using "dirty" or "clean" techniques. Microbial quantification and identification was performed over a 14 day period. RESULTS S aureus and E coli grew rapidly in propofol after six hours. Both bacteria were killed by thiopental. S aureus numbers slowly declined in alfaxalone; E coli growth was rapid after 24 hours. In "dirty" and "clean" groups of intravenous anaesthetics, 9.3 and 7.4 per cent of samples, respectively, were positive for microbial growth; none were considered to represent colonisation of bottles. CLINICAL SIGNIFICANCE Alfaxalone supports growth of some microorganisms but less readily than propofol. Bacterial colonisation of intravenous anaesthetic bottles is uncommon, but contamination as syringes are prepared for injection occurs regardless of storage and handling technique.
Visual cue training to improve walking and turning after stroke: a study protocol for a multi-centre, single blind randomised pilot trial
BACKGROUND Visual information comprises one of the most salient sources of information used to control walking and the dependence on vision to maintain dynamic stability increases following a stroke. We hypothesize, therefore, that rehabilitation efforts incorporating visual cues may be effective in triggering recovery and adaptability of gait following stroke. This feasibility trial aims to estimate probable recruitment rate, effect size, treatment adherence and response to gait training with visual cues in contrast to conventional overground walking practice following stroke. METHODS/DESIGN A 3-arm, parallel group, multi-centre, single blind, randomised control feasibility trial will compare overground visual cue training (O-VCT), treadmill visual cue training (T-VCT), and usual care (UC). Participants (n = 60) will be randomly assigned to one of three treatments by a central randomisation centre using computer generated tables to allocate treatment groups. The research assessor will remain blind to allocation. Treatment, delivered by physiotherapists, will be twice weekly for 8 weeks at participating outpatient hospital sites for the O-VCT or UC and in a University setting for T-VCT participants. Individuals with gait impairment due to stroke, with restricted community ambulation (gait speed <0.8m/s), residual lower limb paresis and who are able to take part in repetitive walking practice involving visual cues (i.e., no severe visual impairments, able to walk with minimal assistance and no comorbid medical contraindications for walking practice) will be included. The primary outcomes concerning participant enrolment, recruitment, retention, and health and social care resource use data will be recorded over a recruitment period of 18 months. Secondary outcome measures will be undertaken before randomisation (baseline), after the eight-week intervention (outcome), and at three months (follow-up). Outcome measures will include gait speed and step length symmetry; time and steps taken to complete a 180° turn; assessment of gait adaptability (success rate in target stepping); timed up and go; Fugl-Meyer lower limb motor assessment; Berg balance scale; falls efficacy scale; SF-12; and functional ambulation category. DISCUSSION Participation and compliance measured by treatment logs, accrual rate, attrition, and response variation will determine sample sizes for an early phase randomised controlled trial and indicate whether a definitive late phase efficacy trial is justified. TRIAL REGISTRATION Clinicaltrials.gov, NCT01600391.
Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with
This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying push-down automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack. In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degree of freedom in higher order networks improves generalization but not necessarily learning speed.
I Know What Your Packet Did Last Hop: Using Packet Histories to Troubleshoot Networks
The complexity of networks has outpaced our tools to debug them; today, administrators use manual tools to diagnose problems. In this paper, we show how packet histories—the full stories of every packet’s journey through the network—can simplify network diagnosis. To demonstrate the usefulness of packet histories and the practical feasibility of constructing them, we built NetSight, an extensible platform that captures packet histories and enables applications to concisely and flexibly retrieve packet histories of interest. Atop NetSight, we built four applications that illustrate its flexibility: an interactive network debugger, a live invariant monitor, a path-aware history logger, and a hierarchical network profiler. On a single modern multi-core server, NetSight can process packet histories for the traffic of multiple 10 Gb/s links. For larger networks, NetSight scales linearly with additional servers and scales even further with straightforward additions to hardwareand hypervisor-based switches.
Cybersecurity as an Application Domain for Multiagent Systems
The science of cybersecurity has recently been garnering much attention among researchers and practitioners dissatisfied with the ad hoc nature of much of the existing work on cybersecurity. Cybersecurity offers a great opportunity for multiagent systems research. We motivate cybersecurity as an application area for multiagent systems with an emphasis on normative multiagent systems. First, we describe ways in which multiagent systems could help advance our understanding of cybersecurity and provide a set of principles that could serve as a foundation for a new science of cybersecurity. Second, we argue how paying close attention to the challenges of cybersecurity could expose the limitations of current research in multiagent systems, especially with respect to dealing with considerations of autonomy and interdependence.
The broaden-and-build theory of positive emotions.
The broaden-and-build theory describes the form and function of a subset of positive emotions, including joy, interest, contentment and love. A key proposition is that these positive emotions broaden an individual's momentary thought-action repertoire: joy sparks the urge to play, interest sparks the urge to explore, contentment sparks the urge to savour and integrate, and love sparks a recurring cycle of each of these urges within safe, close relationships. The broadened mindsets arising from these positive emotions are contrasted to the narrowed mindsets sparked by many negative emotions (i.e. specific action tendencies, such as attack or flee). A second key proposition concerns the consequences of these broadened mindsets: by broadening an individual's momentary thought-action repertoire--whether through play, exploration or similar activities--positive emotions promote discovery of novel and creative actions, ideas and social bonds, which in turn build that individual's personal resources; ranging from physical and intellectual resources, to social and psychological resources. Importantly, these resources function as reserves that can be drawn on later to improve the odds of successful coping and survival. This chapter reviews the latest empirical evidence supporting the broaden-and-build theory and draws out implications the theory holds for optimizing health and well-being.
Credit Card Default Predictive Modeling
Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default on their payments in the future, so that the company can intervene earlier to manage risk and reduce loss. It is even better if a model can assist the company with credit card application approval to minimize risk upfront. However, credit card default prediction is never an easy task. It is dynamic: a customer who made payments on time in the last few months may suddenly default on the next payment. It is also unbalanced, given that defaults are rare compared to non-default payments. Most machine learning techniques will easily fail on an unbalanced dataset if it is not treated properly.
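As one hedged illustration of treating an unbalanced dataset, the sketch below applies class weighting to a synthetic default-prediction problem; the features, the roughly 8% default rate, and the choice of logistic regression are placeholders rather than the modeling described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for billing-history features and a rare default label.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))                     # e.g. recent payment ratios
y = (rng.random(n) < 0.08).astype(int)          # ~8% defaults: unbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight='balanced' reweights errors on the rare default class
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```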
A new force-feedback arm exoskeleton for haptic interaction in virtual environments
The paper presents the mechanical design of the L-EXOS, a new exoskeleton for the human arm. The exoskeleton is a tendon-driven wearable haptic interface with 5 dof (4 actuated), and is characterized by a workspace very close to that of the human arm. The design has been optimized to obtain a solution with reduced mass and high stiffness, by employing special mechanical components and carbon fiber structural parts. The devised exoskeleton is very effective for simulating the touch by hand of large objects or manipulation within the whole workspace of the arm. The main features of the first prototype developed at PERCRO are presented, together with an indication of the achieved and tested performance.
Weakly Supervised Slot Tagging with Partially Labeled Sequences from Web Search Click Logs
In this paper, we apply a weakly-supervised learning approach for slot tagging using conditional random fields by exploiting web search click logs. We extend the constrained lattice training of Täckström et al. (2013) to non-linear conditional random fields in which latent variables mediate between observations and labels. When combined with a novel initialization scheme that leverages unlabeled data, we show that our method gives significant improvement over strong supervised and weakly-supervised baselines.
Circuit analysis and optimization driven by worst-case distances
In this paper, a new methodology for integrated circuit design considering the inevitable manufacturing and operating tolerances is presented. It is based on a new concept for specification analysis that provides exact worst-case transistor model parameters and exact worst-case operating conditions. Corresponding worst-case distances provide a key measure for the performance, the yield, and the robustness of a circuit. A new deterministic method for parametric circuit design that is based on worst-case distances is presented: It comprises nominal design, worst-case analysis, yield optimization, and design centering. In contrast to current approaches, it uses standard circuit simulators and at the same time considers deterministic design parameters of integrated circuits at reasonable computational costs. The most serious disadvantage of geometric approaches to design centering is eliminated, as the method's complexity increases only linearly with the number of design variables.
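A small numerical sketch of the worst-case-distance idea, under simplifying assumptions: the "performance" below is a toy quadratic function standing in for a circuit simulator output, and the worst-case distance is taken as the distance from the nominal point to the nearest point on the specification boundary in normalized parameter space.

```python
import numpy as np
from scipy.optimize import minimize

def performance(p):
    # Toy stand-in for a simulated circuit performance, e.g. a gain figure.
    return p[0] * p[1]

p_nominal = np.array([3.0, 2.0])   # normalized (sigma-scaled) parameters
spec = 4.0                         # requirement: performance >= spec

# Worst-case distance: nearest point to the nominal on the spec boundary.
res = minimize(
    lambda p: np.sum((p - p_nominal) ** 2),                  # squared distance
    x0=p_nominal,
    constraints=[{"type": "eq", "fun": lambda p: performance(p) - spec}],
    method="SLSQP",
)
wcd = np.sqrt(res.fun)
print(res.x, wcd)   # worst-case parameter set and its distance in sigma units
```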
Concentration of Measure for the Analysis of Randomized Algorithms
We have observed that the cost of the search is equal to the number of tosses of a coin of bias p that are necessary until we obtain H successes. That is, we flip the coin repeatedly and stop as soon as we observe H successes. The difficulty here is that the random variable we are studying is the sum of geometrically distributed random variables. The distribution of this random variable is called negative binomial and some of its properties are explored in the problem section. Here, we take a different approach. To fix ideas, let p := 1/2. Suppose that we toss the coin L times where
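The reduction behind this approach can be written out as follows; this is the standard argument consistent with the setup above, not a verbatim quotation of the text.

```latex
% Needing more than L tosses to collect H heads is the same event as seeing
% fewer than H heads among the first L tosses:
\[
  \Pr[X > L] \;=\; \Pr\!\big[\mathrm{Bin}(L, \tfrac12) < H\big].
\]
% With H = (1-\delta) L/2, the Chernoff lower-tail bound for the binomial gives
\[
  \Pr\!\big[\mathrm{Bin}(L, \tfrac12) < (1-\delta)\tfrac{L}{2}\big]
  \;\le\; \exp\!\left(-\frac{\delta^{2} L}{4}\right),
\]
% so the negative-binomial tail decays exponentially once L exceeds 2H by a
% constant factor.
```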
Randomized trial of a GPIIb/IIIa platelet receptor blocker in refractory unstable angina. European Cooperative Study Group.
BACKGROUND Patients with unstable angina despite intensive medical therapy, ie, refractory angina, are at high risk for developing thrombotic complications: myocardial infarction or coronary occlusion during percutaneous transluminal coronary angioplasty (PTCA). Chimeric 7E3 (c7E3) Fab is an antibody fragment that blocks the platelet glycoprotein (GP) IIb/IIIa receptor and potently inhibits platelet aggregation. METHODS AND RESULTS To evaluate whether potent platelet inhibition could reduce these complications, 60 patients with dynamic ST-T changes and recurrent pain despite intensive medical therapy were randomized to c7E3 Fab or placebo. After initial angiography had demonstrated a culprit lesion suitable for PTCA, placebo or c7E3 Fab was administered as 0.25 mg/kg bolus injection followed by 10 micrograms/min for 18 to 24 hours until 1 hour after completion of second angiography and PTCA. During study drug infusion, ischemia occurred in 9 c7E3 Fab and 16 placebo patients (P = .06). During hospital stay, 12 major events occurred in 7 placebo patients (23%), including 1 death, 4 infarcts, and 7 urgent interventions. In the c7E3 Fab group, only 1 event (an infarct) occurred (3%, P = .03). Angiography showed improved TIMI flow in 4 placebo and 6 c7E3 Fab patients and worsening of flow in 3 placebo patients but in none of the c7E3 Fab patients. Quantitative analysis showed significant improvement of the lesion in the patients treated with c7E3 Fab, which was not observed in the placebo group, although the difference between the two treatment groups was not significant. Measurement of platelet function and bleeding time demonstrated > 90% blockade of GPIIb/IIIa receptors, > 90% reduction of ex vivo platelet aggregation to ADP, and a significantly prolonged bleeding time during c7E3 Fab infusion, without excess bleeding. CONCLUSIONS Combined therapy with c7E3 Fab, heparin, and aspirin appears safe. These pilot study results support the concept that effective blockade of the platelet GPIIb/IIIa receptors can reduce myocardial infarction and facilitate PTCA in patients with refractory unstable angina.
Obesity and reduction of the response rate to anti-tumor necrosis factor α in rheumatoid arthritis: an approach to a personalized medicine.
OBJECTIVE Obesity is a mild, long-lasting inflammatory disease and, as such, could increase the inflammatory burden of rheumatoid arthritis (RA). The study aim was to determine whether obesity represents a risk factor for a poor remission rate in RA patients requiring anti-tumor necrosis factor α (anti-TNFα) therapy for progressive and active disease despite treatment with methotrexate or other disease-modifying antirheumatic drugs. METHODS Patients were identified from 15 outpatient clinics of university hospitals and hospitals in Italy taking part in the Gruppo Italiano di Studio sulle Early Arthritis network. Disease Activity Score in 28 joints (DAS28), body mass index (BMI; categorized as <25, 25-30, and >30 kg/m²), acute-phase reactants, IgM rheumatoid factor, and anti-cyclic citrullinated peptide antibody values were collected. DAS28 remission was defined as a score of <2.6 lasting for at least 3 months. RESULTS Six hundred forty-one outpatients with longstanding RA receiving anti-TNFα blockers (adalimumab, n = 260; etanercept, n = 227; infliximab, n = 154), recruited from 2006-2009 and monitored for at least 12 months, were analyzed. The mean ± SD DAS28 at baseline was 5.6 ± 1.4. A BMI of >30 kg/m² was recorded in 66 (10.3%) of 641 RA patients. After 12 months of anti-TNFα treatment, a DAS28 of <2.6 was noted in 15.2% of the obese subjects, in 30.4% of the patients with a BMI of 25-30 kg/m², and in 32.9% of the patients with a BMI of <25 kg/m² (P = 0.01). The lowest percentage of remission, which was statistically significant versus adalimumab and etanercept (P = 0.003), was observed with infliximab. CONCLUSION Obesity represents a risk factor for a poor remission rate in patients with longstanding RA treated with anti-TNFα agents. A personalized treatment plan might be a possible solution.
Multi-task Gaussian Process Prediction
In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a “free-form” covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
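The covariance structure described above, a "free-form" task covariance multiplied by an input covariance under a block design, can be sketched in a few lines; the RBF input kernel, the random task factor, and all sizes below are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

def rbf(X, Xp, lengthscale=1.0):
    """Squared-exponential input kernel k^x."""
    d2 = ((X[:, None, :] - Xp[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Multi-task GP prior under a block design:
# Cov(f_i(x), f_j(x')) = Kf[i, j] * kx(x, x'), i.e. a Kronecker product.
rng = np.random.default_rng(0)
n, m = 20, 3                       # n inputs, m tasks
X = rng.normal(size=(n, 2))
L = rng.normal(size=(m, m))
Kf = L @ L.T                       # "free-form" PSD task covariance
Kx = rbf(X, X)

K = np.kron(Kf, Kx)                # full (m*n) x (m*n) prior covariance
noise = 1e-2
Y = rng.normal(size=(n, m))        # observations, one column per task
alpha = np.linalg.solve(K + noise * np.eye(m * n), Y.T.reshape(-1))
# `alpha` can then be used in the usual GP predictive mean k_*^T alpha.
```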
SNDlib 1.0 - Survivable Network Design Library
We provide information on the Survivable Network Design Library (SNDlib), a data library for fixed telecommunication network design that can be accessed at http://sndlib.zib.de. In version 1.0, the library contains data related to 22 networks which, combined with a set of selected planning parameters, leads to 830 network planning problem instances. In this paper, we provide a mathematical model for each planning problem considered in the library and describe the data concepts of the SNDlib. Furthermore, we provide statistical information and details about the origin of the data sets.
Vascular vertigo: epidemiology and clinical syndromes.
BACKGROUND Vertigo is a common complaint in medicine. The most common causes of vertigo are benign paroxysmal positional vertigo, vestibular neuritis, Meniere's syndrome, and vascular disorders. Vertigo of vascular origin is usually limited to migraine, transient ischemic attacks, and ischemic or hemorrhagic stroke. Vascular causes lead to various central or peripheral vestibular syndromes with vertigo. This study provides an overview of epidemiology and clinical syndromes of vascular vertigo. REVIEW SUMMARY Vertigo is an illusion of movement caused by asymmetrical involvement of the vestibular system by various causes. Migraine is the most frequent vascular disorder that causes vertigo in all age groups. Vertigo may occur in up to 25% of patients with migraine. The lifetime prevalence of migrainous vertigo is almost 1%. Cerebrovascular disorders are estimated to account for 3% to 7% of patients with vertigo. Vestibular paroxysmia has been diagnosed in 1.8% to 4% of cases in various dizziness units. Vasculitic disorders are rare in the general population, but vertigo may be seen in up to 50% of patients with different vasculitic syndromes. CONCLUSIONS Migraine, cerebrovascular disorders especially involving the vertebrobasilar territory, cardiocirculatory diseases, neurovascular compression of the eighth nerve, and vasculitis are vascular causes of vertigo syndromes.
Make it stand: balancing shapes for 3D fabrication
Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.
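A minimal feasibility check in the same spirit, under strong simplifications: the object is a voxel grid, the center of mass assumes uniform density, and the support region is approximated by an axis-aligned box around the ground-contact cells rather than the true support polygon. The actual method additionally deforms the interior volume to move the center of mass; only the stability test is sketched here.

```python
import numpy as np

def stands(voxels):
    """voxels: boolean array indexed as [x, y, z]; z = 0 is the ground layer."""
    filled = np.argwhere(voxels).astype(float)   # (k, 3) filled-cell indices
    com = filled.mean(axis=0) + 0.5              # center of mass at cell centers
    base = filled[filled[:, 2] == 0]             # cells resting on the ground
    if base.size == 0:
        return False
    lo, hi = base[:, :2].min(axis=0), base[:, :2].max(axis=0) + 1.0
    # Stable if the CoM projects inside the (simplified) footprint.
    return bool(np.all(com[:2] >= lo) and np.all(com[:2] <= hi))

# A 3-voxel-tall column with a small overhang at the top: still balanced.
shape = np.zeros((4, 1, 3), dtype=bool)
shape[0, 0, :] = True          # vertical column at x = 0
shape[1, 0, 2] = True          # overhanging cell at the top
print(stands(shape))           # True: CoM still projects over the base cell
```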
A comparative study on feature reduction approaches in Hindi and Bengali named entity recognition
Features used for named entity recognition (NER) are often high dimensional in nature. These cause overfitting when training data is not sufficient. Dimensionality reduction leads to performance enhancement in such situations. There are a number of approaches for dimensionality reduction based on feature selection and feature extraction. In this paper we perform a comprehensive and comparative study on different dimensionality reduction approaches applied to the NER task. To compare the performance of the various approaches we consider two Indian languages, namely Hindi and Bengali. NER accuracies achieved in these languages are comparatively poor as yet, primarily due to scarcity of annotated corpus. For both the languages dimensionality reduction is found to improve performance of the classifiers. A comparative study of the effectiveness of several dimensionality reduction techniques is presented in detail in this paper.
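For a rough picture of the two families of approaches compared above, the sketch below contrasts one feature-extraction method (PCA) with one feature-selection method (chi-squared ranking) on synthetic binary indicator features; the data and dimensionalities are placeholders, not the Hindi or Bengali NER features themselves.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2

# Synthetic high-dimensional, sparse-style indicator features and toy labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 2000)).astype(float)
y = rng.integers(0, 5, size=500)

X_pca = PCA(n_components=100).fit_transform(X)         # feature extraction
X_sel = SelectKBest(chi2, k=100).fit_transform(X, y)   # feature selection

print(X_pca.shape, X_sel.shape)   # both reduced to 100 dimensions
```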
Unleashing the Power of Mobile Cloud Computing using ThinkAir
Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method level computation offloading. Advancing on previous works, it focuses on the elasticity and scalability of the server side and enhances the power of mobile cloud computing by parallelizing method execution using multiple Virtual Machine (VM) images. We evaluate the system using a range of benchmarks starting from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease two orders of magnitude for the N-queens puzzle and one order of magnitude for a face detection and a virus scan application, using cloud offloading. We then show that if a task is parallelizable, the user can request more than one VM to execute it, and these VMs will be provided dynamically. In fact, by exploiting parallelization, we achieve a greater reduction on the execution time and energy consumption for the previous applications. Finally, we use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.
Private and Accurate Data Aggregation against Dishonest Nodes
Privacy-preserving data aggregation in ad hoc networks is a challenging problem, considering the distributed communication and control requirement, dynamic network topology, unreliable communication links, etc. The difficulty is exaggerated when there exist dishonest nodes, and how to ensure privacy, accuracy, and robustness against dishonest nodes remains an open issue. Different from the widely used cryptographic approaches, in this paper, we address this challenging problem by exploiting the distributed consensus technique. We first propose a secure consensus-based data aggregation (SCDA) algorithm that guarantees an accurate sum aggregation while preserving the privacy of sensitive data. Then, to mitigate the pollution from dishonest nodes, we propose an Enhanced SCDA (E-SCDA) algorithm that allows neighbors to detect dishonest nodes, and derive the error bound when there are undetectable dishonest nodes. We prove the convergence of both SCDA and E-SCDA. We also prove that the proposed algorithms are (ε, σ)-data-privacy, and obtain the mathematical relationship between ε and σ. Extensive simulations have shown that the proposed algorithms have high accuracy and low complexity, and they are robust against network dynamics and dishonest nodes.
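The consensus primitive underlying this line of work can be sketched as a plain average-consensus iteration with Metropolis weights, from which the sum follows by multiplying by the number of nodes. The privacy-preserving perturbation and the dishonest-node handling of SCDA/E-SCDA are not reproduced in this toy example; the graph and values below are arbitrary.

```python
import numpy as np

# Example connected graph (adjacency lists) and private node values.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
x = np.array([4.0, 8.0, 1.0, 7.0])
deg = {i: len(nb) for i, nb in adj.items()}

for _ in range(200):
    x_new = x.copy()
    for i, nbrs in adj.items():
        for j in nbrs:
            w = 1.0 / (1 + max(deg[i], deg[j]))   # Metropolis weight
            x_new[i] += w * (x[j] - x[i])
    x = x_new

print(x)                 # every entry converges to the average, 5.0
print(len(x) * x[0])     # estimated sum, close to 20.0
```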
RDF-4X: a scalable solution for RDF quads store in the cloud
Resource Description Framework (RDF) represents a flexible and concise model for representing the metadata of resources on the web. Over the past years, with the increasing amount of RDF data, efficient and scalable RDF data management has become a fundamental challenge to achieve the Semantic Web vision. However, multiple approaches for RDF storage have been suggested, ranging from simple triple stores to more advanced techniques like vertical partitioning on the predicates or centralized approaches. Unfortunately, it is still a challenge to store a huge quantity of RDF quads due, in part, to the query processing for RDF data. This paper proposes a scalable solution for RDF data management that uses Apache Accumulo. We focus on introducing storage methods and indexing techniques that scale to billions of quads across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases our approach works well against large RDF datasets.
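As a rough illustration of quad indexing by multiple key orderings, the toy in-memory store below keeps several permutations of (s, p, o, c) and picks an index whose leading field is bound by the query. The permutation set and the bucketing scheme are assumptions made for illustration, not RDF-4X's actual Accumulo row-key layout.

```python
from collections import defaultdict

INDEXES = ["spoc", "posc", "ospc", "cspo"]   # illustrative key orderings

class QuadStore:
    def __init__(self):
        self.idx = {name: defaultdict(list) for name in INDEXES}

    def add(self, s, p, o, c):
        quad = {"s": s, "p": p, "o": o, "c": c}
        for name in INDEXES:
            key = tuple(quad[f] for f in name)
            self.idx[name][key[:1]].append(key)     # bucket by the leading field

    def match(self, s=None, p=None, o=None, c=None):
        pattern = {"s": s, "p": p, "o": o, "c": c}
        # choose an index whose leading field is bound; fall back to a full scan
        name = next((n for n in INDEXES if pattern[n[0]] is not None), "spoc")
        bound = pattern[name[0]]
        buckets = [self.idx[name][(bound,)]] if bound is not None \
            else list(self.idx[name].values())
        for bucket in buckets:
            for key in bucket:
                quad = dict(zip(name, key))
                if all(v is None or quad[f] == v for f, v in pattern.items()):
                    yield (quad["s"], quad["p"], quad["o"], quad["c"])

store = QuadStore()
store.add("ex:alice", "foaf:knows", "ex:bob", "ex:graph1")
store.add("ex:bob", "foaf:name", '"Bob"', "ex:graph1")
print(list(store.match(p="foaf:knows")))   # answered via the "posc" index
```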
Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition
Recurrent Neural Networks (RNNs) and their variants, such as Long-Short Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling. The hidden layers in RNNs can be regarded as the memory units, which are helpful in storing information in sequential contexts. However, when dealing with high dimensional input data, such as video and text, the input-to-hidden linear transformation in RNNs brings high memory usage and huge computational cost. This makes the training of RNNs very difficult. To address this challenge, we propose a novel compact LSTM model, named TR-LSTM, by utilizing the low-rank tensor ring decomposition (TRD) to reformulate the input-to-hidden transformation. Compared with other tensor decomposition methods, TR-LSTM is more stable. In addition, TR-LSTM can complete an end-to-end training and also provide a fundamental building block for RNNs in handling large input data. Experiments on real-world action recognition datasets have demonstrated the promising performance of the proposed TR-LSTM compared with the tensor-train LSTM and other state-of-the-art competitors.
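The tensor-ring idea can be sketched with a small reconstruction routine: a weight tensor is represented by a ring of 3-way cores and rebuilt by contracting neighboring bond dimensions and closing the ring with a trace. The mode sizes and ranks below are illustrative choices, not the paper's configuration, and only the parameter-count comparison is shown.

```python
import numpy as np

def tr_reconstruct(cores):
    """cores[k] has shape (r_k, n_k, r_{k+1}), with the last rank wrapping to r_0."""
    out = cores[0]                                        # (r0, n0, r1)
    for core in cores[1:]:
        out = np.einsum('a...b,bnc->a...nc', out, core)   # absorb the next core
    return np.trace(out, axis1=0, axis2=-1)               # close the ring

rng = np.random.default_rng(0)
modes, rank = [4, 8, 8, 4], 3              # dense tensor has 4*8*8*4 = 1024 entries
ranks = [rank] * (len(modes) + 1)          # equal bond ranks; ring: r_d = r_0
cores = [rng.normal(size=(ranks[k], n, ranks[k + 1])) * 0.1
         for k, n in enumerate(modes)]

W = tr_reconstruct(cores)                  # dense tensor of shape (4, 8, 8, 4)
print(W.shape, W.size, sum(c.size for c in cores))   # (4, 8, 8, 4) 1024 216
# W can be reshaped to a (32, 32) input-to-hidden weight matrix; only the
# small cores need to be stored and trained.
```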
SynCoPation: Interactive Synthesis-Coupled Sound Propagation
Recent research in sound simulation has focused on either sound synthesis or sound propagation, and many standalone algorithms have been developed for each domain. We present a novel technique for coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments. Our approach can generate sounds from rigid-bodies based on the vibration modes and radiation coefficients represented by the single-point multipole expansion. We present a mode-adaptive propagation algorithm that uses a perceptual Hankel function approximation technique to achieve interactive runtime performance. The overall approach allows for high degrees of dynamism - it can support dynamic sources, dynamic listeners, and dynamic directivity simultaneously. We have integrated our system with the Unity game engine and demonstrate the effectiveness of this fully-automatic technique for audio content creation in complex indoor and outdoor scenes. We conducted a preliminary, online user-study to evaluate whether our Hankel function approximation causes any perceptible loss of audio quality. The results indicate that the subjects were unable to distinguish between the audio rendered using the approximate function and audio rendered using the full Hankel function in the Cathedral, Tuscany, and the Game benchmarks.
Prediction of students performance using Educational Data Mining
Data mining plays an important role in the business world, and it helps educational institutions predict and make decisions related to students' academic status. In higher education, student dropout has been increasing, which affects not only the students' careers but also the reputation of the institute. The existing system maintains student information in the form of numerical values and merely stores and retrieves the information it contains, so it has no intelligence to analyze the data. The proposed system is a web-based application which makes use of the Naive Bayesian mining technique for the extraction of useful information. The experiment is conducted on 700 students with 19 attributes at Amrita Vishwa Vidyapeetham, Mysuru. Results show that the Naive Bayesian algorithm provides higher accuracy than other methods such as regression, decision trees, and neural networks for comparison and prediction. The system aims at increasing students' success rates using the Naive Bayesian technique, and it maintains all student admission details, course details, subject details, student marks, attendance details, etc. It takes a student's academic history as input and predicts the student's upcoming performance on a per-semester basis.
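A minimal sketch of the Naive Bayesian prediction step, assuming categorical attributes encoded as integers; the three attributes, their encodings, and the labels below are placeholders rather than the study's 19 attributes.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# columns: [previous_grade_band, attendance_band, internal_marks_band]
X = np.array([
    [2, 2, 2], [2, 1, 2], [1, 2, 1], [0, 0, 1],
    [0, 1, 0], [1, 0, 1], [2, 2, 1], [0, 0, 0],
])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = likely to pass, 0 = at risk

model = CategoricalNB()
model.fit(X, y)

new_student = np.array([[1, 1, 1]])       # mid grades, attendance, and marks
print(model.predict(new_student))         # predicted outcome class
print(model.predict_proba(new_student))   # class probabilities
```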